Access to offshore wind turbines can be problematic for technicians
Credit: Dong Energy
Information technology and operational technology have traditionally run on disparate systems, but increasingly they are converging, providing the opportunity for power plants to collect information from within, writes Penny Hitchin
Technological advances, political imperatives and economic drivers all play their role in the evolution of energy generation portfolios.
Continuous developments in technology and communications make a significant contribution to this. Rapid advances in connectivity are enabling self-powered, intelligent wireless sensors to provide valuable information about the condition of components operating remotely.
Condition monitoring (CM) is an important addition to the suite of available techniques for improving performance and avoiding downtime in power generation. Analysing data collected from sensors situated within the plant enables technicians to get a better understanding of performance and predict mechanical wear and failure.
In modern plant, wireless networks and sensors are used to collect ever-growing volumes of temperature, pressure, vibration and flow data from the heart of the power plant.
Intelligent devices can be coupled to intelligent networks and the information can be analyzed remotely, locally or a combination of both. Sophisticated software is used to provide health information about the equipment, allowing early fault detection and enabling planned maintenance. This also contributes to historical baseline data for predictive maintenance or other performance evaluation.
Condition monitoring techniques are normally used on rotating equipment, while stationary plant equipment such as steam boilers, piping and heat exchangers is periodically inspected using non-destructive testing techniques.
Increasingly the role of conventional thermal plant is changing from providing constant baseload to delivering power at times of peak demand. Frequent switching on and off of plant imposes additional stresses on the equipment. In this situation, continuing to follow traditional interval-based maintenance can be problematic.
Opting instead for condition monitoring should enable timely preventative maintenance. Vibration, noise and temperature are used as key indicators of the state of the health of components while monitoring lubrication oil for the number and size of particles in a sample can provide significant information about asset wear.
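The threshold idea behind vibration-based health monitoring can be sketched in a few lines. This is a minimal illustration, not any vendor's algorithm: the warning and alarm levels below are hypothetical values, and real systems classify severity against standardized vibration zones for each machine class.

```python
import math

# Illustrative warning/alarm thresholds for overall vibration in mm/s RMS.
# These numbers are hypothetical, not taken from any real standard.
WARNING_RMS = 4.5
ALARM_RMS = 7.1

def rms(samples):
    """Root-mean-square level of a vibration velocity signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def health_state(samples):
    """Classify a component's condition from its vibration signal."""
    level = rms(samples)
    if level >= ALARM_RMS:
        return "alarm"
    if level >= WARNING_RMS:
        return "warning"
    return "ok"

print(health_state([0.5, -0.4, 0.6, -0.5]))  # low vibration -> "ok"
print(health_state([8.0, -7.5, 8.2, -7.9]))  # high vibration -> "alarm"
```

In practice the classification would be trended over time, so that a slow drift from "ok" towards "warning" itself becomes a predictive signal.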
As power plants have become increasingly automated, operators have downsized in-house maintenance departments and increasingly outsource some of this function to external specialists. Condition monitoring is often contracted out to diagnostic specialists.
In the burgeoning wind industry, wind farms made up of multiple turbines are usually located in remote areas with no staff on-site. Access to offshore wind turbines can be particularly problematic in bad weather conditions and it may be weeks or months before technicians can get on-site to attend to faults. Condition monitoring is proving a boon in this environment as diagnostic companies develop a range of specialist monitoring services.
Advances in IT
Increases in computing speed and capacity have enabled collection, analysis and storage of increasing volumes of information. Big data, the Internet of Things, wireless mesh networks and cloud computing all contribute to this.
Condition monitoring uses data from sensors or sensor units situated in rotating machinery. In the past this data was passed back to a computer (usually on site) for analysis, but this is changing, as there is a move to shift some of the PC software into firmware on the equipment itself.
This change is being driven by developments in hardware, software and connectivity, but the programming is the key. Phil George, system architect at Rockwell Automation, says: “The algorithms are the clever bit – they analyze vibration or lubricating oil, and use these results to spot impending problems and issues”.
He explains: “Putting the analysis closer to the machine and doing the pre-analysis there is a more efficient way of putting a system together.”
This provides a more reliable connection and enables the system to make local decisions and raise alarms.
“An unexpected vibration will trigger a local alarm to do something about it, and can also collect that information back at the database and spread the alarms out further afield”. Relevant information is then passed back to the software which processes the information in a more generic way for further analysis.
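The split George describes, where pre-analysis happens at the machine and only summaries travel upstream, can be sketched as follows. All names and the threshold value are illustrative assumptions, not a real product's interface.

```python
# Minimal sketch of edge pre-analysis: a firmware-side check raises a
# local alarm immediately, while only a compact summary record is passed
# up to the central database for further, more generic analysis.

VIBRATION_LIMIT = 7.1  # mm/s RMS; hypothetical alarm threshold

def preanalyse(reading_rms, local_alarms, central_queue):
    """Decide locally, then forward a summary up the system."""
    alarmed = reading_rms >= VIBRATION_LIMIT
    if alarmed:
        # Act locally without waiting for the central system.
        local_alarms.append(("vibration", reading_rms))
    # Forward only the summary, not the raw time signal.
    central_queue.append({"metric": "vibration_rms",
                          "value": reading_rms,
                          "alarmed": alarmed})

alarms, queue = [], []
preanalyse(8.3, alarms, queue)   # triggers a local alarm
preanalyse(2.1, alarms, queue)   # normal reading, summary only
print(len(alarms), len(queue))   # 1 local alarm, 2 summary records
```

The design choice is the one the article describes: local decisions stay fast and reliable even if the upstream connection drops, while the central database still accumulates the history needed for fleet-wide analysis.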
The ease with which wireless instrumentation can now be deployed compared to hard-wired devices means that many new data sets for rotating machinery can be collected.
Going forward, it is likely that the Internet of Things will see the capability to embed measurement at a much lower level, whether in the sensor or in the bearing itself. An example of such innovation is a bearing with a self-powered, intelligent wireless sensor built in to provide instant condition monitoring data via the network.
The bearing monitors dynamic parameters such as vibration, temperature, lubrication condition and load, and informs the user when conditions are abnormal and threaten to cause bearing damage.
The algorithms applied to the dynamic data can spot the abnormal conditions that cause damage and can take actions to prevent it. The technology is currently undergoing trials in the wind turbine and railway industries.
Advances have enabled processing to be moved closer to the application, so that data is pre-analyzed locally based on the signature it is seeing, and action can be taken locally.
The data is also passed up the system for further use. As increased computing power provides the ability to handle more information, evolving standards in databases make it easier to extract information and share it via a cloud platform.
A monitoring workstation
Credit: Brüel & Kjær Vibro
George explains: “In the past there was a separate system for CM and the alarms and information were stored on a PC somewhere in the plant. What I am seeing now is the ability to shift that information out of the proprietary system into a database or a historian format so it can be used in other ways.
“If you make that information into SQL data [a standard language for accessing databases], then you can use all sorts of standard business tools to take elements of that information and use that on a dashboard.”
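The point about standard SQL opening the data up to ordinary business tools can be shown with a short example. The table and column names here are hypothetical, and SQLite stands in for whatever database or historian a plant actually uses.

```python
import sqlite3

# Hypothetical alarm table; schema and names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE alarms (turbine TEXT, severity TEXT, raised TEXT)")
db.executemany("INSERT INTO alarms VALUES (?, ?, ?)", [
    ("WT-01", "warning", "2015-06-01"),
    ("WT-01", "alarm",   "2015-06-02"),
    ("WT-02", "warning", "2015-06-02"),
])

# A plain SQL query that any standard dashboard tool could run
# once the condition monitoring data is out of the proprietary system.
for row in db.execute(
        "SELECT turbine, COUNT(*) FROM alarms "
        "GROUP BY turbine ORDER BY turbine"):
    print(row)
```

Once the data sits behind a standard interface like this, the dashboard layer needs no knowledge of the condition monitoring system that produced it.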
The analogy he uses is that of a Formula One car, which has evolved from a relatively straightforward, high-powered machine into a high-tech device covered in thousands of sensors which provide information locally but also feed back to central control. The additional data enables modelling of the processes within the complex machine. In the case of power plant this enables a designer of pumps or turbines to collect information on the performance of all its units installed globally.
Many monitoring techniques are traditionally used in hydroelectric power. Vibration monitoring is the primary one, but others include magnetic flux and partial discharge analysis, used for measuring the condition of the stator insulation.
Mike Hastings, senior application engineer at Brüel & Kjær Vibro, points out that, as in the case of hydropower, there are gains to be made from software development that focuses on improving integration and correlation of data rather than developing new systems.
As technology advances it is possible to collect a mass of data, including actual time signals at regular intervals. When a machine breaks down, the raw data from both before and after the breakdown can be automatically downloaded and post-processed to analyze the root cause of the problem.
Hastings says: “In this case it is important to work with time signals and process raw data, not simply the diagnostic measurements that were initially set up. With post-processing analysis, diagnostic measurements can be set up on the fly to analyze the data. This allows us to do a thorough analysis to find the cause of the fault.”
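Working from raw time signals typically means moving into the frequency domain, where a fault shows up as energy at a characteristic frequency. The sketch below uses a plain discrete Fourier transform on a synthetic signal; real systems use optimized FFTs, and the sample rate and fault frequency here are invented for illustration.

```python
import math
import cmath

SAMPLE_RATE = 100.0  # Hz; hypothetical
N = 200

# Synthetic raw time signal: a 12.5 Hz tone standing in for a
# characteristic bearing defect frequency.
signal = [math.sin(2 * math.pi * 12.5 * n / SAMPLE_RATE) for n in range(N)]

def dominant_frequency(x, fs):
    """Return the frequency (Hz) of the largest DFT magnitude."""
    n = len(x)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC, keep positive frequencies
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs / n

print(dominant_frequency(signal, SAMPLE_RATE))  # 12.5
```

Because the raw signal is retained, a new diagnostic measurement like this can be set up after the event and run over data recorded both before and after a breakdown, which is exactly the on-the-fly post-processing Hastings describes.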
With increasing amounts of data being collected, ensuring that CMS data can interface with different systems so that the information is available in a central database is a significant step. Data mining current and historic records can be used to build better understanding of how faults develop over time.
As the body of knowledge increases, the certainty and accuracy of diagnosis and prognostic service calls will increase. The body of information will be invaluable to equipment designers.
However intelligent or sophisticated the software and remote control systems are, human operators are essential. The user interface must be designed to present relevant, accessible and comprehensible information. Increasingly, operators are presented with a visual interface which offers layers of menu options.
Boerries Homann of Brüel & Kjær Vibro says that offering an appropriate user interface is important. “In the past, operators used the software every day, but we now have to focus on usability to make it as simple as you can, but not too simple. We try to support the user and different profiles of users. Customers should work with the system so that they can take value out of the system.”
CM of wind turbines
The wind generation industry has grown rapidly over the last couple of decades, with a pronounced trend for offshore developments to increase in size and distance from shore.
Operating in remote locations in highly variable conditions makes wind turbine operation very different from that of conventional generating equipment. It can be difficult to replicate the conditions in the laboratory, and in a new industry it will take time to establish whether machine designs are sufficiently robust for years of operation in harsh conditions.
Condition monitoring of turbines is invaluable in determining maintenance schedules. Specialist companies offer a combination of service and software – for example, condition monitoring, detecting faults, diagnostic work and prognostics work. The data will also provide a historical baseline over time.
Brüel & Kjær Vibro monitors over 7000 operational wind turbines from its data centres in Denmark, the US and China. Around 120 measurements are collected from a typical offshore 6 MW wind turbine. Each reading has a time signal which gives a substantial amount of data, which can be measured remotely on demand by the user or the diagnostics centre. In addition to serving the wind farm operators, collating and analyzing this data enables the company to refine its own monitoring techniques and also provide feedback to equipment designers.
When one sensor picks up a fault, other sensors will often pick it up too, so there could be several alarms for the same fault. Alarms are categorized by severity class, which means the operator can be told whether an alarm needs immediate action or can be deferred. An alarm management system can determine whether multiple alarms signal one fault or several different faults.
Homann says: “We have developed alarm management systems so we can detect false alarms and ensure they are not put through the system. We have this up front so that it comes before the user interface and avoids false alarms. We try and make sure that, if there is one fault, there is only one alarm. We aim to avoid alarm flooding which creates problems for the operator.”
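The consolidation Homann describes, collapsing many sensor alarms into one alarm per fault at the highest severity seen, can be sketched simply. The severity classes and the fault-identification step here are toy stand-ins: real systems use diagnostic rules to decide which alarms share an underlying cause.

```python
# Sketch of alarm management: several sensors may flag the same
# underlying fault, so raw alarms are grouped by fault and only one
# alarm, at the highest severity seen, reaches the operator.

SEVERITY_RANK = {"advisory": 0, "warning": 1, "critical": 2}

def consolidate(raw_alarms):
    """Collapse (fault_id, severity) pairs into one alarm per fault."""
    by_fault = {}
    for fault_id, severity in raw_alarms:
        current = by_fault.get(fault_id)
        if current is None or SEVERITY_RANK[severity] > SEVERITY_RANK[current]:
            by_fault[fault_id] = severity
    return by_fault

raw = [("main-bearing", "warning"),
       ("main-bearing", "critical"),   # second sensor, same fault
       ("gearbox", "advisory")]
print(consolidate(raw))  # {'main-bearing': 'critical', 'gearbox': 'advisory'}
```

This is the anti-flooding idea in miniature: the operator sees one actionable alarm per suspected fault instead of a cascade of duplicates.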
Drive train main bearing failures have been a major problem in wind turbines. They are costly to repair as at least one large crane is required to get access to the fault. However, as most main bearing faults develop gradually, they can be detected in advance, enabling the operator to formulate a predictive maintenance strategy before the problem becomes critical.
Romax Technology, specialist in wind turbine gear trains, says that O&M cost analysis for a wind farm with multi-megawatt class turbines showed that cost savings of more than 40 per cent could be gained through predictive maintenance and monitoring of main bearings. Most savings resulted from better monitoring and detection, enabling more effective scheduling of main bearing maintenance and replacement.
Condition monitoring is assuming increasing significance in the operation and maintenance of diverse power generating plant, as progress in hardware, software and connectivity steadily improves the ability to collect information from within.
Analyzing this data provides information for predictive maintenance. Collating data historically also provides invaluable insights for equipment designers, helping them understand the operational stresses that components undergo.
Information technology and operational technology have traditionally run on disparate systems, but increasingly – across engineering and industry – they are converging. Sharing this data offers massive advantages, but ensuring that this is done securely requires the development of standards and will continue to be an ongoing challenge for software developers.
Penny Hitchin is a freelance journalist specializing in energy matters