Condition monitoring models based on archived measured data contain all the relevant information needed by O&M personnel. But up to now it has only been possible to apply them to steady-state conditions. This has now changed with the introduction of a state-of-the-art tool that can evaluate abnormal behaviour in real time under transient operating conditions, such as during the start-up or shutdown sequences of key power plant equipment.

Dr. Holger Hackstein, Siemens AG, Energy Sector, Germany

In order to earn, you have to produce, and if you are able to accurately predict the availability of your components and reliably avert outages, you have a distinct advantage. In the light of the increasing demand for power, this applies also, and especially, to power generation. For this reason, power plant operators around the world have one prime objective: they want to produce continuously and safely and, for economic reasons, to optimize operating modes and emissions.

A prime lever for improving efficiency and availability is knowing how well your components are working and when the current progress of wear and tear will bring them to the brink of failure, i.e. to the point at which they need to be overhauled. Or, the other way around, how long the plant can continue producing without risk.

Model-based monitoring of machinery and installations can provide the desired reliability of production and safeguard against failure. It works by comparing the modeled ‘normal behaviour’ of the components and processes with the values actually measured online. In practice, this makes it possible to analyze the condition of machinery and installations, and to fully ‘plan’ the servicing and maintenance effort required to keep them up and running.

However, it has not been possible up to now to monitor all the machines and processes in the power plant around the clock and in all operating modes because classical monitoring systems cover only limited areas, e.g. the turbine or individual processes, and work too slowly to keep track of transients. Furthermore, thermodynamic and physics-based models entail a major disadvantage in that it takes a lot of effort to define and programme them.

Versatile condition monitoring

A new system for condition-oriented maintenance has achieved a breakthrough in both of these aspects. It is the only system to date that monitors all components and processes even during load changes, start-ups and shutdowns, none of which could be covered previously. The comparison between predicted and actual behaviour takes just seconds, which is essential to be able to cope with transients. It also detects ‘creeping’ faults long before they reach a critical point and therefore gives maintenance teams the opportunity to take remedial action in good time.

And finally, the new system’s revolutionary ‘auto-engineering’ feature makes high-effort modeling and manual engineering superfluous. The system ‘learns’ the normal behaviour patterns of a machine or installation purely from its history, i.e. the archive data. These archive data contain all the relevant information about the plant’s history, the physical correlations and especially the inter-dependencies between the various processes and components.

This affords the great advantage that all correlations and peculiarities of a machine or installation are learned automatically and do not have to be ‘taught’ by high-effort engineering of the model parameters. Changes can be made at any time, ‘at the push of a button’. The output also provides valuable hints for optimizing the operating mode and reducing emissions.

Limit-oriented monitoring

In this type of monitoring, the measured values that represent the condition of any mechanical equipment are monitored on the basis of simple absolute limits, often not even differentiated according to operating modes or service conditions. One example of this classical form of monitoring, which serves primarily to avoid damage, is the protection function of the power plant control system, which issues an alarm if a limit is reached and trips the machine if the limit continues to be exceeded for a defined time. A special case of this limit-based monitoring is the set of machine protection functions prescribed by some codes and regulations, which classify the condition of a machine as either good or bad by relating the measured value to the defined limit.
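The alarm-and-trip logic of such limit-oriented monitoring can be summarized in a few lines. The following is a minimal sketch in Python; the limit value, sample interval and trip delay are illustrative assumptions, not values from any particular protection system.

```python
# Minimal sketch of limit-oriented monitoring: alarm as soon as an absolute
# limit is reached, trip only if the limit stays violated for a defined time.
# LIMIT, SAMPLE_INTERVAL_S and TRIP_DELAY_S are illustrative values only.
LIMIT = 11.0            # e.g. bearing vibration in mm/s
SAMPLE_INTERVAL_S = 1.0
TRIP_DELAY_S = 10.0

def monitor_limit(samples):
    """Yield (value, alarm, trip) for a stream of measured values."""
    exceeded_for = 0.0
    for value in samples:
        alarm = value >= LIMIT
        exceeded_for = exceeded_for + SAMPLE_INTERVAL_S if alarm else 0.0
        trip = exceeded_for >= TRIP_DELAY_S
        yield value, alarm, trip
```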

Model-based condition monitoring

Model-based condition monitoring simulates the components or processes to be monitored in a model essentially comprising a set of characteristic values that describe the process or the component as accurately as possible. During monitoring, these characteristics are compared with the values actually measured. The match or differences permit conclusions to be drawn as to the condition of the monitored object.

It goes without saying that the quality of the results correlates with the accuracy with which the condition was originally modeled; the more accurate the model, the more accurately the actual condition of the component or process can be identified. The degree of departure from a pre-defined ‘normal’ behaviour pattern is recognizable from the simple difference between the value in the model and the actual value measured.

Once the model has been constructed, the parameters have to be defined. The aim is to define the parameters of the model so as to achieve the best possible match between the system and its model. Precisely in the case of complex and in some cases very extensive systems such as steam generators, gas turbines or production processes, this parameter identification step frequently involves problems and considerable effort.
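To make this parameter identification step concrete, the sketch below fits the coefficients of a deliberately simple assumed relation to archived measurements by least squares. The two-input linear form and the variable names are purely illustrative; real steam generator or gas turbine models involve far more parameters, which is exactly the effort described above.

```python
# Illustrative parameter identification: fit the coefficients of an assumed
# relation y = a*x1 + b*x2 + c to archived measurements by least squares.
import numpy as np

def identify_parameters(x1, x2, y):
    """Return (a, b, c) minimizing the squared error against the measured y."""
    design = np.column_stack([x1, x2, np.ones_like(x1)])
    params, *_ = np.linalg.lstsq(design, y, rcond=None)
    return params
```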

In practice, therefore, modeling of complex components is usually dispensed with in favour of standard water/steam cycles and standard components that have already been adequately described in the past. Particularly in power plant operation, this approach often leads to inaccuracies in assessing the condition of a given machine, because only the process itself has been modeled, not the components or sub-processes involved. Such models are therefore suitable for obtaining general information on the status of the process, but not for planning future maintenance activities or for analyzing specific machines.

Data-driven modeling

Data-driven methods have a major advantage: they are much simpler, and thus less expensive to model. Even complex components and convoluted processes can be simulated with moderate effort. One of the most popular methods in this category is neural networks.

Neural networks dispense with time-consuming and expensive model engineering and especially with the need to identify all the necessary model parameters. Data-driven models afford the decisive advantage of time and cost savings: they derive their ‘knowledge’ about the component or process exclusively from measurement data that are already available because, with the exception of any spurious measurements possibly present, the data stored in the archives are the result of past interaction between components and processes. They arise from the physical, thermodynamic and mechanical laws that apply between these components and processes. The measured data therefore contain all the correlations and dependencies needed for diagnosis and analysis. So all it takes to model the functioning of all the components and machines is to ‘tap’ the power plant archive.
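As a rough illustration of what such a data-driven model can look like, the sketch below trains a small neural network to predict one archived signal from all the others, so that it can later supply an expected value for that signal. The use of scikit-learn's MLPRegressor, the array layout (samples by signals) and the network size are assumptions made for the sake of the example, not the modelling method of any particular product.

```python
# Sketch of a data-driven 'normal behaviour' model: learn to predict one
# measured signal from all the other signals, using only archived data.
# The archive is assumed to be a 2-D array with one row per sample and one
# column per signal; the network size is an arbitrary illustrative choice.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_normal_model(archive: np.ndarray, target_col: int) -> MLPRegressor:
    """Learn the expected value of one signal from all remaining signals."""
    inputs = np.delete(archive, target_col, axis=1)
    target = archive[:, target_col]
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    model.fit(inputs, target)
    return model
```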

The procedure is identical for all models, whether physics-based, thermodynamic or measurement data-based: as soon as the process or work sequence of a component has been sufficiently realistically replicated with the appropriate values (‘expected values’), monitoring can commence. In the simplest case, the model provides an expected value for every operating parameter and compares it with the value actually measured.

Figure 1 shows a real measurement signal from the component or process under consideration in red. Over time, this signal exhibits different values that may fluctuate greatly from operating mode to operating mode. If the model has been adequately trained with historical data, it will provide an expected value in real time, shown in blue in this case, which runs in parallel with the measured value from the plant. To do this, the model draws upon the knowledge it has acquired from the physical or thermodynamic equations and the archived measurement data. In well-trained models, the real measured value and the expected value will therefore be nearly identical.


Figure 1: Measured (red) and expected (blue) condition evaluation values over time

On the right-hand side of Figure 1 a discrepancy between the real measured value and the expected value can be seen. Based on the previous history, the model concludes that the value of the parameter shown here should be higher, given the values of all the other parameters in this model. These departures from normal behaviour, i.e. points where the real measured value and the expected value do not match, are most easily recognized by observing the residual, that is, the difference between the measured value and the expected value. The residual for this schematic example is shown in Figure 2.


Figure 2: The residual is the difference between the measured and the expected value – normal vs. abnormal behaviour

If the residual is zero or at least nearly zero (left-hand side of Figure 2), the following is true: the model and measured values are almost identical; and the model is behaving similarly or identically to the real component or process with respect to the available information. If the model has been generated for normal behaviour, or has knowledge about normal operating behaviour, the condition of the real component or process can be considered normal.
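In code, this comparison is nothing more than a subtraction carried out sample by sample. The sketch below computes the residual of one signal using the hypothetical train_normal_model helper sketched earlier; both that helper and the array layout are assumptions made for illustration.

```python
# The residual is the measured value minus the expected value from the model,
# evaluated sample by sample. Values near zero indicate normal behaviour.
import numpy as np

def residuals(model, archive: np.ndarray, target_col: int) -> np.ndarray:
    """Residual of one signal, given a model trained on the other signals."""
    inputs = np.delete(archive, target_col, axis=1)
    expected = model.predict(inputs)
    measured = archive[:, target_col]
    return measured - expected
```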

Even if the models work very precisely, they will not always output residuals that are constantly zero. Because of measurement and system errors, there will often be interference in the real measurement signal, for example due to noise. Slight discrepancies in the model response are therefore normal. In practice, this can be handled by applying a tolerance to the residual that permits slight fluctuations around zero. The magnitude of these permissible fluctuations depends on the signal under consideration, which usually exhibits a certain fluctuation range. It also depends on the precision of the instrumentation.

Modern systems not only automatically derive the range of fluctuation but also suggest ready-defined upper and lower limits for the residuals. In practice, to avoid alarms being caused by minute violations of the limits, a defined delay is applied that determines how long the limit value must be continuously violated before an alarm is generated.
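The sketch below shows one simple way such a tolerance band and alarm delay could be applied to a residual series. The band width and delay length are illustrative parameters; as noted above, real systems derive them from the signal's own fluctuation range and the precision of the instrumentation.

```python
# Residual evaluation with a tolerance band and an alarm delay: small
# fluctuations around zero are tolerated, and an alarm is raised only if the
# band is violated continuously for a minimum number of samples.
import numpy as np

def residual_alarms(res: np.ndarray, band: float, delay_samples: int) -> np.ndarray:
    """Return a boolean array that is True wherever an alarm would be raised."""
    violated = np.abs(res) > band
    alarms = np.zeros(len(res), dtype=bool)
    run = 0
    for i, v in enumerate(violated):
        run = run + 1 if v else 0
        alarms[i] = run >= delay_samples
    return alarms
```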

If a residual nevertheless violates its limits (right-hand side of Figure 2), this can mean the following: either (1) the model is not able to determine the correct value from the information available, e.g. because the plant has entered an operating mode that has not yet been incorporated into the model (unknown behaviour), or (2) the component or process under consideration is tending away from normal behaviour, which could be evidence of an incipient fault.

It is then up to the user to determine whether the limit violation is due to a condition that is unknown to the model but in principle tolerable, or indicates an incipient fault or even some damage present.

The residual is therefore the crucial factor in model-based monitoring. As long as the residual is close to zero, it is reasonable to conclude that the component or process is in a normal, and therefore fault-free condition. Since the residual of each measured parameter already factors in the correlations with all other parameters, the residual is often regarded as a sort of intelligent baseline. But, of course, how intelligent this baseline really is will depend on the model used.

Monitoring of transient operating states

Transient processes are by definition all non-steady-state conditions. Start-up and shutdown sequences are particularly interesting for the purpose of diagnosis, because they often give a good insight into the rotational dynamics of the components.

Unfortunately, hardly any model-based methods have been designed for monitoring transient processes. This calls for models based on values measured at intervals in the seconds range. Furthermore, the model must have an adequate knowledge base about transient behaviour patterns, which has to be built up during training.
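To hint at what this means in practice, the sketch below resamples an archive to one-second resolution and extracts coast-down windows that could be used to build up such a transient knowledge base. The pandas dependency, the column name shaft_speed, the rated speed and the coast-down criterion are all assumptions chosen for illustration.

```python
# Sketch of preparing transient training data: resample the archive to the
# seconds range and keep the windows in which the machine is coasting down,
# so that shutdown behaviour is explicitly represented in the training set.
# Assumes a DataFrame with a datetime index and a 'shaft_speed' column.
import pandas as pd

def extract_shutdown_windows(archive: pd.DataFrame,
                             speed_col: str = "shaft_speed",
                             rated_speed: float = 3000.0) -> pd.DataFrame:
    data = archive.resample("1s").mean().interpolate()
    coasting = (data[speed_col] < 0.98 * rated_speed) & (data[speed_col].diff() < 0)
    return data[coasting]
```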

Early fault detection during shutdown

This system has been benchmark-tested in the USA. One of the aims was to evaluate whether it is able to meet customer requirements that could not previously be fulfilled.

Solutions to date have enabled the plant operator to monitor and evaluate only steady-state operating behaviour. Any kind of transient behaviour, for example transitions to and from higher power outputs and the entire start-up and shutdown sequences of machines, has been beyond the capabilities of current systems. It is universally accepted that precisely these transient states are of outstanding importance for diagnosis. This was brought home particularly painfully in one case where irregularities went unnoticed during a shutdown sequence and subsequently led to unscheduled downtime.

For benchmarking, the plant operator was able to provide the archived data measured over the past few years. This period covered a large number of turbine shutdown sequences, and the model was trained on shutdown sequences defined as ‘normal’ by the plant operator. After training, all the remaining shutdown sequences were studied using the method described here by comparing the archived real behaviour with the defined normal behaviour.
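A rough outline of this benchmarking workflow could look like the sketch below: train on the shutdown sequences labelled normal by the operator, then score every remaining sequence by its residuals. The helpers train_normal_model and residuals are the hypothetical sketches introduced earlier, and the residual band used for the verdict is an illustrative assumption.

```python
# Sketch of the benchmark: train on 'normal' shutdowns, then classify the
# remaining shutdown sequences from the size of their residuals.
import numpy as np

def classify_shutdowns(normal_sequences, other_sequences, target_col, band):
    """normal_sequences: list of arrays; other_sequences: dict name -> array."""
    model = train_normal_model(np.vstack(normal_sequences), target_col)
    verdicts = {}
    for name, sequence in other_sequences.items():
        res = residuals(model, sequence, target_col)
        verdicts[name] = "abnormal" if np.max(np.abs(res)) > band else "normal"
    return verdicts
```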

The residuals for all the unknown shutdown sequences were determined using the trained model. Except for one shutdown sequence, all others were identified as normal and unexceptional. As can be seen in Figures 3 & 4, the real measured value (red) and the expected value from the model (blue) show an almost perfect match during the entire shutdown sequence.


Figure 3: Time plot of the vibration on a turbine bearing during a normal shutdown sequence


Figure 4: Time plot of the bearing temperature during a normal shutdown sequence

However, one shutdown sequence yielded significant residuals and was classed as abnormal. Figure 5 shows increased vibration on the right-hand side of the chart. Even though the vibration is not significantly greater here than at the start of the shutdown sequence, comparison with the values from the model identifies this behaviour as abnormal. However, as the defined limits were not exceeded, a classical monitoring system would not have reported this behaviour, since there are operating conditions where vibration rises above these levels and the values themselves are well within the tolerance range.


Figure 5: Shutdown sequence showing distinct anomalies for vibration on a turbine bearing

Although the vibration behaviour during this shutdown sequence is neither exceptional nor higher than permissible in absolute terms, comparison with the values from the model clearly showed that there was a significant departure from the normal. After shutdown, it would have been easy to use the SPPA-D3000 Plant Monitor to study this process further or to diagnose it in detail with the aid of software specially developed for turbomachinery diagnostics.

Effective maintenance strategies

Use of model-based condition monitoring permits description of the condition of a component or an entire process. Thanks to the latest methods, costly engineering of the models is no longer needed, as the example of the SPPA-D3000 Plant Monitor shows. Modeling and parameterization are performed solely on the basis of archived measured data.

Plant Monitor is not only able to monitor all components and processes in the power plant with minimal effort, it also makes it possible for the first time to monitor transient operating states such as turbine shutdown in real time on the basis of a defined model, i.e. by comparing actual with expected behaviour patterns. This enables the plant operator to judge whether an observed condition is still normal or whether potentially critical changes have occurred.

The outcome of this assessment is immediately available as key information for plant management and maintenance planning. Since conventional machine protection systems and even modern I&C systems do not feature the expected values required for this assessment, neither of these can provide such sensitive early warning of faults.

Sensitive early warning of faults and therefore avoidance of unscheduled downtime and damage are essential for implementing a fully condition-dependent maintenance strategy throughout the power plant. Across-the-board monitoring of the condition of components and processes, even under transient operating conditions, makes it possible actively to plan maintenance activities, prevent damage, predict availability, and optimize plant operation.