The technology behind automation & optimization

Control room of a 780 MW combined cycle cogeneration plant
Credit: ABB

As our power stations use ever-more sophisticated automation solutions to optimize how they operate, Paul Breeze looks at the software modeling and technology behind this growing trend.

When a Brazilian wood pulp mill, Cenibra in the state of Minas Gerais, wanted to improve the operational efficiency of its combined heat and power (CHP) system, it chose to install a plant optimization system to help control the complex steam supply network. Cenibra wished to reduce the losses caused by the venting of steam to the atmosphere due to excess production when the steam system became unbalanced. This was not a simple problem because the plant had a number of processes that required steam and the demand from each could change regularly and in unpredictable ways.

The plant’s steam demand was met by a mixture of biomass-fired boilers using waste generated at the plant, gas turbine cogeneration systems and auxiliary boilers burning oil. Under normal operation these auxiliary boilers would only be used when the other sources could not supply the required steam. Running them was expensive and often led to excess steam. However, keeping them offline could leave some mill processes lacking steam. In both cases plant efficiency and, ultimately, turnover were affected.

The solution, provided by Metso, was an advanced process control (APC) system utilizing a software suite that can operate as a supervisory layer on top of a plant’s existing distributed control system (DCS).

The APC uses a multi-variable predictive control (MPC) approach, also known as model predictive control, in which a model of the plant is constructed, including all the loops and variables that can affect steam production and consumption. An operating target is then set based on various parameters such as steam temperature and pressure or power output, and the control system takes charge of the whole plant using control loops to maintain the system at the target point while steam production and demand change.
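To make the idea concrete, the sketch below shows the bare bones of model predictive control in Python: a small linear plant model is used to predict behaviour over a short horizon, the future control moves that best track the operating target are computed, and only the first move is applied before the calculation repeats. The two-state 'steam header' model, its numbers and the variable names are illustrative assumptions for this article, not Metso's or Cenibra's actual system.

```python
import numpy as np

# Minimal model predictive control sketch (hypothetical two-state steam header model).
# All matrices, numbers and names are illustrative assumptions, not a vendor's system.

A = np.array([[0.95, 0.05],   # header pressure dynamics
              [0.00, 0.90]])  # steam flow dynamics
B = np.array([[0.10],
              [0.05]])        # effect of one manipulated variable (e.g. boiler firing)
C = np.array([[1.0, 0.0]])    # the controlled output is header pressure

H = 10                        # prediction horizon (number of future steps)
x = np.array([0.8, 0.5])      # current state estimate
target = 1.0                  # operating target for header pressure

# Build prediction matrices so that predicted outputs = Phi @ x + Theta @ u
Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(H)])
Theta = np.zeros((H, H))
for i in range(H):
    for j in range(i + 1):
        Theta[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

# Least-squares choice of future control moves that tracks the target
ref = np.full(H, target)
u, *_ = np.linalg.lstsq(Theta, ref - Phi @ x, rcond=None)
print("first control move to apply:", u[0])  # receding horizon: apply u[0], then re-solve
```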

The automation system installed at the wood pulp mill exemplifies the type of APC system that has the potential to improve operations of CHP and power plants across the globe. In the case of Cenibra, the new system reduced steam venting to the atmosphere by 90 per cent while still maintaining overall plant stability, something the operators could not do themselves.

Elsewhere an APC system might be used to control the operation of a fossil fuel-fired boiler to ensure both low nitrogen oxides (NOx) production and efficient fuel combustion or to maintain the stability of a combined-cycle power plant providing grid support services, such as renewable support and fast ramping, or even to control a multi-fuel biomass plant to maintain efficiency and emissions as the fuel mix changes.

This type of automation system has been in use in process industries for several years, but is less common in the power sector, which has its own traditional ways of working. In some cases this has led to reluctance to adopt the new technology. Overcoming such innate conservatism is vital if such advanced systems are to be used widely in the power sector.

“Operator acceptance of new ways of working is important,” stresses Juha-Pekka Jalkanen, Metso’s director of plant performance solutions. This was echoed by Julio Ribeiro, recovery line and utilities coordinator at Cenibra. “There was a change in the operating paradigm because the operators were used to doing it their way. At first they were a little sceptical about it but soon they saw it brought benefits.”

A new paradigm

It is easy to understand why these systems might be viewed with a little suspicion. The key to the new paradigm is that the control system, not the operator, controls the plant; the operator’s role becomes that of an executive overseeing the operations. In order for this to be possible, the operator’s knowledge of the plant and how it functions must be programmed into the control system. This knowledge forms the basis for a ‘model’ of the power station that is created in software and mimics all the processes in the plant. Such models can take different forms depending upon what is known or understood about the process.

One of the most important types of model is the mechanistic model, derived from the underlying principles of the process being modelled. In the case of a power plant, these will be the physical and thermodynamic principles that govern the operation of furnaces, steam generators and turbines. Such a model is only possible if the processes are well understood and reasonably stable. Building one requires experts who can provide the necessary understanding and it can be time consuming, but where such a model can be constructed it provides the best foundation for an automation system.
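As a flavour of what a first-principles fragment looks like, the snippet below integrates a lumped energy balance for a steam drum: heat added by the fuel minus heat carried away by the steam drawn off. The parameter values and the heavily simplified physics are purely illustrative assumptions.

```python
# Illustrative first-principles (mechanistic) fragment: a lumped energy balance
# for a steam drum. All parameter values are hypothetical placeholders.

m_water = 20_000.0      # kg of water/steam inventory in the drum (assumed)
cp = 4.5e3              # J/(kg K), approximate specific heat (assumed)
h_fg = 1.5e6            # J/kg, latent heat at drum pressure (assumed)

def drum_temperature_step(T, q_fuel, m_steam_out, dt=1.0):
    """One Euler step of dT/dt = (heat in - heat removed with steam) / (m * cp)."""
    q_out = m_steam_out * h_fg             # W removed as steam is drawn off
    return T + dt * (q_fuel - q_out) / (m_water * cp)

T = 540.0  # K, current drum temperature
for _ in range(60):                        # simulate one minute
    T = drum_temperature_step(T, q_fuel=80e6, m_steam_out=50.0)
print(f"drum temperature after 60 s: {T:.1f} K")
```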

“I would prefer first principles,” says Pascal Stijns, power and energy consultant at Honeywell Process Solutions. “If you do not understand first principles,” he adds, “something is wrong”.

A mechanistic model in which conditions are not necessarily stable can also be accommodated, in this case by using ‘fuzzy logic’, which allows for a degree of variability in each process loop rather than precise fixed points. This might be applied, for example, to a fluidized bed boiler burning a variety of different fuels where the precise operating conditions will vary with the fuel mix.
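A minimal illustration of the fuzzy approach is sketched below: triangular membership functions turn a measured fluidized-bed temperature into degrees of 'too cold', 'acceptable' and 'too hot', and the rule outputs are blended into a fuel feed trim. The temperature bands and trim values are hypothetical.

```python
# Minimal fuzzy-logic flavour: triangular membership functions let an operating
# target be a band rather than a fixed point. All values are illustrative only.

def tri(x, lo, mid, hi):
    """Triangular membership: 0 outside [lo, hi], rising to 1 at mid."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

bed_temp = 862.0  # degC, measured fluidized-bed temperature (hypothetical)
too_cold = tri(bed_temp, 820, 840, 860)
ok       = tri(bed_temp, 850, 865, 880)
too_hot  = tri(bed_temp, 870, 890, 910)

# Blend the rule outputs into one fuel feed trim (% change), centre-of-gravity style
fuel_trim = (too_cold * +2.0 + ok * 0.0 + too_hot * -2.0) / max(too_cold + ok + too_hot, 1e-9)
print(f"suggested fuel feed trim: {fuel_trim:+.2f} %")
```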

The operator’s knowledge of the plant must be programmed into the control system
Credit: EPM

There are other situations where it is impossible to build a mechanistic model, either because the process is too complicated or insufficiently understood, or simply because nobody is available who understands it. In this case, the only solution is some form of ‘black box’ model. A black box model considers the power plant as a box with inputs and outputs whose contents are unknown.

All that can be known about the black box must be extrapolated from the various sensors and measuring instruments within the plant. These provide the inputs and outputs of the black box. Special software techniques can then be used to generate relationships between the inputs and outputs by first collecting data to show how these vary during normal operation, and then calculating how they interact with one another. Once the model is generated, it will allow stable operation to be established without actually knowing what is going on inside the box.
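The sketch below gives the simplest possible version of this idea: synthetic historian data for two inputs and one output are regressed against each other, and the fitted coefficients become the black-box model used to predict the output at a proposed operating point. The signal names and numbers are invented for illustration.

```python
import numpy as np

# Black-box sketch: fit a simple input/output relationship from logged data
# without any physical model. Data and signal names are synthetic placeholders.

rng = np.random.default_rng(0)
fuel_flow = rng.uniform(0.5, 1.0, 500)          # logged input, e.g. once per minute
air_flow  = rng.uniform(0.8, 1.2, 500)          # second logged input
steam_out = 3.0 * fuel_flow + 0.5 * air_flow + rng.normal(0, 0.02, 500)  # the unknown "box"

# Regress the output on the inputs: the model is just the fitted coefficients
X = np.column_stack([fuel_flow, air_flow, np.ones_like(fuel_flow)])
coeffs, *_ = np.linalg.lstsq(X, steam_out, rcond=None)
print("identified gains (fuel, air, bias):", np.round(coeffs, 3))

# The fitted model can now predict steam output for a proposed operating point
print("predicted steam at fuel=0.9, air=1.0:", np.array([0.9, 1.0, 1.0]) @ coeffs)
```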

One of the key tools applied in this situation is the neural network. A neural network creates a model by ‘learning’ how the outputs of the black box should respond to particular inputs. Given sufficient training, a neural network can develop an operational model of the power plant. Neural networks can be tricky to use without any knowledge of the underlying processes, but if used intelligently they can provide consistent models for power plants where no other solution is available.
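A hedged sketch of the neural-network variant is shown below, using scikit-learn's small multi-layer perceptron regressor as a stand-in for whatever tool a vendor might actually use. The synthetic 'plant', the input names and the network size are assumptions made purely to show the train-then-validate pattern.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # any small NN library would do

# Neural-network flavour of the black-box approach: 'learn' how plant outputs
# respond to inputs from logged operating data. Everything here is synthetic.

rng = np.random.default_rng(1)
inputs = rng.uniform(0.0, 1.0, size=(2000, 2))            # e.g. fuel flow, air flow
outputs = np.sin(3 * inputs[:, 0]) + 0.5 * inputs[:, 1]   # stands in for the real plant

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(inputs[:1500], outputs[:1500])                     # train on historical data

score = net.score(inputs[1500:], outputs[1500:])           # check on unseen data
print(f"model fit (R^2) on held-out data: {score:.3f}")
```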

Specialists provide the fundamental understanding necessary to build models for automated power plant control systems
Credit: Honeywell

The need for a black box approach to model creation highlights one of the dilemmas facing the power industry today: the availability of specialists capable of providing the fundamental understanding necessary to build models for automated control systems.

As Honeywell’s Stijns points out, the power industry is ageing and many of the specialists who understand how these plants operate are retiring. One solution is to train new specialists and, ironically, it is the models at the heart of modern automation systems that may provide the key resource for training in the future.

If the model-based automation system accurately mimics the power plant in all its details, it can be used not only to control the plant, but also as a simulator. “This can be used as a training tool and as an engineering tool,” says Tomasz Kosik of Emerson Process Management. As an engineering tool it can be used to explore any problems which may arise or to experiment with a plant reconfiguration offline rather than with the actual plant. But the same simulator can be used to train new staff.

While this offers a way forward, specialists who have been trained using a simulator may lack intimate knowledge of the plant, so that when they come to take charge of the actual plant they see it as an IT system rather than a series of physical processes. They may then be tempted, as Kosik suggests, to ‘play’ the plant without realizing exactly what the real-world consequences may be.

Meanwhile, recognizing the problems associated with a lack of experts, many automation system providers are seeking ways of creating the systems needed to control a plant without access to the specialist knowledge that model building requires. As Alexander Frick, head of power plant optimisation at ABB, notes, “today the power industry is mostly still a people’s industry”. That places limitations on the spread of more conventional automated control systems. ABB, like other companies, is trying to distil the knowledge of the specialists that are available today and package it into software tools that can then be used by people with less specialist power plant knowledge.

Power plant optimization has always been high on the agenda of both operators and suppliers, but according to Frick, “a lot of people want it but only a very few people can do it”. Thus, creating tools that can replace the specialists will be vital for the future, particularly when trying to sell tools to countries that do not have the pools of experts available. ABB has had some success in India with this approach and is aiming to extend it further.

Five senses of a control system

If the APC system is the brain of an advanced power plant control and optimization system, then the sensors and measuring devices are its fingers, eyes and ears, i.e. the means by which it gauges the state of the power plant in order to be able to control it. Here some of the greatest advances have taken place.

According to Metso’s Jalkanen there are more and more new instruments being used in power plants today, and it is these that provide the foundation for plant control. It is only by knowing the state of a plant, in real time, that it becomes possible to control it. Further, the more data is available from different parts of the plant, the tighter and more accurate the control regime can be.

Take a power plant’s boiler, for example. In the past, only a limited amount of temperature data could be acquired from within the boiler itself; much had to be inferred from operations downstream. Today, however, it is possible to create a profile of the temperatures within the boiler using cameras, infra-red detectors, lasers and acoustic measurements. These provide an extraordinary level of detail that can support a much more sophisticated, and therefore more effective, control regime.

Modern sensors and measurement techniques make almost any variable within a plant accessible. For example, by measuring variations in the concentration of corrosive gases within a plant’s flue gas and combining these with other measurements such as boiler temperature gradients, it is possible to assess the corrosion rate of boiler components.
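One very rough way to picture such an inferential measurement is sketched below: flue-gas chemistry and a tube metal temperature gradient are folded into a single relative corrosion-risk index. The linear form and weights are invented for illustration and do not represent any published correlation.

```python
# Purely illustrative soft-sensor sketch: combine flue-gas chemistry with a tube
# metal temperature gradient into a relative corrosion-risk index. The weights
# and the linear form are assumptions, not a published correlation.

def corrosion_index(hcl_ppm, so2_ppm, tube_dT_K):
    """Higher index = faster expected wastage; the scale is arbitrary."""
    return 0.4 * hcl_ppm + 0.05 * so2_ppm + 1.5 * tube_dT_K

print(corrosion_index(hcl_ppm=30, so2_ppm=400, tube_dT_K=25))  # -> 69.5
```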

New techniques can also make older ones redundant. In the past it was important to take live steam measurements even though this was difficult and costly. With the thermodynamic measurements available today, it is possible to predict live steam conditions, so there is no longer any need to measure them directly.

Renewable optimization

The technologies that are being developed to enable power plant optimization are most often targeted at combustion plants of various sorts. This is partly because these are the most difficult to control, and partly because new grid demands (that gas- and coal-fired plants should be able to ramp quickly and to operate at variable outputs, for example) mean that optimization is essential if these plants are to be able to generate economically.

Some automation developers are now beginning to look at the application of optimization technology to renewable energy plants. There is less scope here but opportunities are beginning to be recognized.

In a large solar photovoltaic plant, there will be a large number of individual solar panels divided into groups, with each group supplying power to a single inverter that interfaces with the grid. Inverter efficiency varies with the amount of input power, which will change as the light intensity changes. By making ‘soft’ connections between solar panels and inverters it is possible to use an automation system to reconfigure the connections as the light level changes in order to run more or fewer inverters at their optimum efficiency. This helps raise overall efficiency and provide a higher quality feed to the grid.
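The sketch below illustrates the underlying calculation: given an assumed inverter efficiency curve that is poor at light load, it picks how many inverters to energize so that the available DC power is converted most efficiently. The efficiency curve, ratings and plant size are hypothetical.

```python
import math

# Sketch of 'soft' panel-group-to-inverter reconfiguration: at low irradiance,
# concentrate the available DC power on fewer inverters so each runs nearer its
# efficiency sweet spot. Efficiency curve and sizes are illustrative assumptions.

def inverter_efficiency(load_fraction):
    """Hypothetical efficiency curve: poor when lightly loaded, ~98% near full load."""
    if load_fraction <= 0:
        return 0.0
    return 0.98 * (1 - math.exp(-6 * load_fraction))

INVERTER_RATING_KW = 500.0
NUM_INVERTERS = 8

def best_inverter_count(total_dc_kw):
    """Choose how many inverters to energize to maximize AC power out."""
    best_n, best_ac = 1, 0.0
    for n in range(1, NUM_INVERTERS + 1):
        load = min(total_dc_kw / (n * INVERTER_RATING_KW), 1.0)
        ac = n * min(total_dc_kw / n, INVERTER_RATING_KW) * inverter_efficiency(load)
        if ac > best_ac:
            best_n, best_ac = n, ac
    return best_n, best_ac

for dc in (400, 1200, 3600):   # kW of DC power as the light level changes
    n, ac = best_inverter_count(dc)
    print(f"{dc:5.0f} kW DC -> run {n} inverters, ~{ac:.0f} kW AC")
```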

Other possibilities exist with wind power. Turbines tend to lose efficiency when the wind direction or speed changes. Maximum efficiency is only recovered once the turbine has yawed to meet the new wind direction or the blade pitch has been adjusted to optimize the rotor for the change in wind speed. By integrating weather forecasting and software optimization tools it is possible to predict wind changes in advance and anticipate them with blade pitch or yaw adjustments, so that a higher overall efficiency of energy capture is achieved.
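A simple way to picture the anticipation step is sketched below: a short forecast of wind direction is turned into a sequence of yaw setpoints that respect the yaw drive's slew rate, so the nacelle is already moving as the change arrives. The forecast values and the 0.5 degree-per-second yaw rate are assumptions.

```python
# Sketch of forecast-anticipated yaw control: begin yawing toward the predicted
# wind direction before the change arrives, limited by the yaw drive's slew rate.
# The forecast values and the 0.5 deg/s yaw rate are illustrative assumptions.

YAW_RATE_DEG_PER_S = 0.5

def yaw_setpoints(current_yaw, forecast_dirs, step_s=60):
    """Walk the nacelle toward each forecast direction, respecting the yaw rate."""
    yaw = current_yaw
    plan = []
    for target in forecast_dirs:
        max_move = YAW_RATE_DEG_PER_S * step_s
        error = (target - yaw + 180) % 360 - 180   # shortest angular difference
        yaw += max(-max_move, min(max_move, error))
        plan.append(round(yaw, 1))
    return plan

# Forecast: wind veers from 270 deg to 300 deg over the next few minutes
print(yaw_setpoints(current_yaw=270.0, forecast_dirs=[272, 280, 290, 300, 300]))
```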

Grid support is another area where optimization tools can be deployed. In the past, the main operational objective of a renewable energy-based plant was to produce “as much power as possible by deploying as many megawatts as possible, seeking quantity above quality”, as Massimo Danieli, global business unit manager for power generation at ABB, aptly describes it.

More and more, however, solar and wind plants are being asked to provide quality, as well as quantity. This can be provided at a local level by operating solar power plants and wind farms differently. It can also be achieved over a much wider area by using a pooling or fleet management approach.

Power plant pooling

The principle of power plant pooling is a simple extension of the concepts that underpin plant automation. What an automation and optimization system can achieve within a single plant can be extended to a group of power plants. These can then be managed from a single control room. Not only does this allow greater gains from plant optimization, it also allows systems to be extended to cover asset management. With this there is “potential for customer profitability and cost reduction”, said Dieter Fluck, vice president, product management, instrumentation, controls & electrical at Siemens Energy.

Fleet optimization may involve a fleet of fossil fuel power plants, but it could also involve a heterogeneous group of both renewable and conventional generation plants. Fluck cited a project for Origin Energy in Australia which involved the control of plants at 13 locations and included wind, coal and gas-based generation with different control systems and operating regimes. By optimizing these as a group, the strengths of each can be exploited; the renewable plants provide cheap power whereas the fossil fuel plants can react fast to changes in demand. “Pooling allows an increase in profitability,” said Fluck.
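In its simplest form, pooled dispatch can be pictured as a merit-order calculation like the sketch below: available wind output is taken first because it is cheapest, and flexible fossil units fill the remainder. The fleet, costs and limits are illustrative and bear no relation to the Origin Energy project.

```python
# Sketch of pooled dispatch: meet demand from a mixed fleet, taking cheap
# renewable output first and filling the remainder with flexible fossil units
# in merit order. Plants, costs and limits are illustrative assumptions.

fleet = [
    {"name": "wind", "max_mw": 300, "cost": 0.0},   # take whatever is available
    {"name": "coal", "max_mw": 600, "cost": 25.0},  # $/MWh, assumed
    {"name": "gas",  "max_mw": 400, "cost": 40.0},  # fast but dearer, assumed
]

def dispatch(demand_mw, wind_available_mw):
    """Allocate demand to units in ascending cost order."""
    plan, remaining = {}, demand_mw
    for unit in sorted(fleet, key=lambda u: u["cost"]):
        cap = wind_available_mw if unit["name"] == "wind" else unit["max_mw"]
        take = min(cap, remaining)
        plan[unit["name"]] = take
        remaining -= take
    return plan, remaining

plan, shortfall = dispatch(demand_mw=900, wind_available_mw=180)
print(plan, "unserved:", shortfall)
```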

Elsewhere this type of approach can be used to create ‘virtual power plants’ that aggregate a disparate group of generating units and operate them as a single power plant. The virtual power plant might bring together wind generators from across a wide region to increase reliability, or it might be an agglomeration of small diesel units providing auxiliary services to the grid, something the individual units would not be able to do.

Fleet management can solve problems too. When the South African utility Eskom was faced with the problem of controlling emissions from its 13 coal-fired power stations, it turned to fleet management. Emissions are now measured at each plant and relayed to a central control room, where performance is monitored and correlated with each plant’s operating status. The emissions monitoring system was built using software from Invensys.

The system allows the company’s environmental management team to anticipate when emissions are likely to exceed limits for each plant, such as during startup or shutdown, so that the appropriate exemptions can be obtained. It also allows the load to be spread during peak demand periods so that the emissions from an individual plant do not exceed its absolute cap. Otherwise a plant might have to be shut down, affecting both supply security and profitability.
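A toy version of that load-spreading logic is sketched below: small blocks of load are shifted away from a unit approaching its hourly emission cap onto units that still have headroom. The emission factors and caps are invented for illustration and are not Eskom figures.

```python
# Sketch of emissions-aware load spreading: shift megawatts away from a unit
# that is approaching its emission cap onto units with headroom. Emission
# factors and caps are illustrative, not Eskom figures.

plants = [
    {"name": "A", "load_mw": 500, "max_mw": 600, "kg_per_mwh": 950, "cap_kg_h": 500_000},
    {"name": "B", "load_mw": 550, "max_mw": 600, "kg_per_mwh": 990, "cap_kg_h": 530_000},
]

def rebalance(plants, step_mw=10):
    """Move small blocks of load from over-cap units to units with emission headroom."""
    for p in plants:
        while p["load_mw"] * p["kg_per_mwh"] > p["cap_kg_h"] and p["load_mw"] > 0:
            others = [q for q in plants if q is not p
                      and q["load_mw"] < q["max_mw"]
                      and (q["load_mw"] + step_mw) * q["kg_per_mwh"] <= q["cap_kg_h"]]
            if not others:
                break                      # no headroom left: plant may have to come off
            p["load_mw"] -= step_mw
            others[0]["load_mw"] += step_mw
    return plants

for p in rebalance(plants):
    print(p["name"], p["load_mw"], "MW,", p["load_mw"] * p["kg_per_mwh"], "kg/h")
```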

Centralizing the control and monitoring of power plants allows teams of experts to be assembled at a single point, instead of needing such experts at each plant. With expertise and experience set to dwindle, this approach is likely to become much more commonplace in the future.

In one way or another, advanced automation and optimization systems are clearly changing the face of power generation.
