Power plant condition monitoring in the age of big data

Condition monitoring is changing as computing technology evolves. We spoke with Justin Eggart of GE Power & Water about the future of the technology for power plant operators

PEi: What is the difference between condition monitoring technology in larger power generation plants vs smaller or on-site generation applications?

JE: The technology is very similar from a condition monitoring perspective. The difference is primarily one of scale. A large combined-cycle plant will have hundreds of sensors and thousands of data tags: we’re trying to do the same thing as in a small on-site generation facility, which is to optimize your maintenance and improve the reliability, availability and performance of your asset. Much of the technology employed is the same, but there tends to be more of it in larger plants.

For example, if I compare a reciprocating engine that is part of an on-site power generation plant with a large combined-cycle gas turbine, some sensors will be different on those two pieces of equipment because the size and scale are different. You tend to have more sensors on a larger machine. Whereas in a large combined-cycle power plant you may be integrating 500+ sensors, in a small facility there may be 10 sensors.

A small facility may have traditionally employed a walkaround data collection programme, but the larger you get, the more automated you get. Primarily the technology varies in scale, and in the key performance indicators.

In a district heating plant you will have a boiler, which is consistent with a pure power generation plant. Downstream of the boiler, to optimize for a district heating application, you’ll have different sensors.

PEi: What has been the biggest technology change in condition monitoring systems during your career?

JE: Automation and tremendous improvement in computing technologies. That’s what’s providing most of the opportunities today. Even in the last five years, the cost of computing technology has come down tremendously and capability has gone up tremendously, which allows us to deploy much more automation.

Things that used to be manual calculations done by an engineer in a plant are now entirely automated. This has allowed much more capability to advance in condition monitoring: what might have been primarily focused on anomaly or failure detection in the past has now moved beyond that to maintenance optimization and to outcomes such as reliability or availability.
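
As an illustration of the kind of calculation that has moved from an engineer's spreadsheet to automated software, consider a routine plant heat-rate check. The sketch below is purely illustrative; the tag names, units and tolerance are assumptions, not anything from GE's software:

```python
# Hypothetical sketch of an automated plant-performance check of the kind
# an engineer once ran by hand. Tag names and limits are illustrative only.

def heat_rate_kj_per_kwh(fuel_energy_kj: float, net_output_kwh: float) -> float:
    """Net plant heat rate: fuel energy consumed per kWh generated."""
    if net_output_kwh <= 0:
        raise ValueError("net output must be positive")
    return fuel_energy_kj / net_output_kwh

def check_heat_rate(readings: dict, design_heat_rate: float,
                    tolerance: float = 0.05) -> str:
    """Flag degradation when measured heat rate drifts above design by more
    than the given tolerance (5% by default, an arbitrary choice)."""
    hr = heat_rate_kj_per_kwh(readings["fuel_energy_kj"],
                              readings["net_output_kwh"])
    if hr > design_heat_rate * (1 + tolerance):
        return f"ALERT: heat rate {hr:.0f} kJ/kWh exceeds design {design_heat_rate:.0f} kJ/kWh"
    return f"OK: heat rate {hr:.0f} kJ/kWh"

# Example: 7.2e9 kJ of fuel for 1.0e6 kWh gives 7200 kJ/kWh vs a 6800 design
print(check_heat_rate({"fuel_energy_kj": 7.2e9, "net_output_kwh": 1.0e6},
                      design_heat_rate=6800.0))
```

Run continuously against live sensor data, a check like this replaces a periodic manual calculation with an always-on one.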

Failure detection is the standard at this point, with much work now focused on expert software: how you go from detection to insights and actions.

GE has been making significant investments in software and analytics across our whole portfolio. We’ve invested over $1 billion in software development in the last few years.

The condition monitoring space is focused on three areas. The first layer is an asset performance management software set that includes failure detection plus integration into maintenance automation, so you can link the two, from failure to maintenance actions.
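
The "link failure to maintenance actions" idea can be sketched as a simple rule table mapping a detected condition to a recommended work order. This is an illustrative assumption of how such a link might work, not a description of GE's asset performance management software; all names and rules here are hypothetical:

```python
# Hypothetical sketch: map a detected condition to a maintenance work order.
# The rule table and field names are illustrative, not GE's APM software.

from dataclasses import dataclass

@dataclass
class Detection:
    asset: str
    condition: str      # e.g. "high_vibration"
    severity: str       # "warning" or "alarm"

# Simple rule table: detected condition -> recommended maintenance action
ACTION_RULES = {
    "high_vibration": "Schedule bearing inspection",
    "high_exhaust_temp": "Inspect combustion section",
    "low_lube_oil_pressure": "Check lube oil pump and filters",
}

def to_work_order(d: Detection) -> dict:
    """Turn a detection event into a maintenance work-order request."""
    action = ACTION_RULES.get(d.condition, "Engineering review required")
    priority = "urgent" if d.severity == "alarm" else "planned"
    return {"asset": d.asset, "action": action, "priority": priority}

print(to_work_order(Detection("GT-1", "high_vibration", "alarm")))
# -> {'asset': 'GT-1', 'action': 'Schedule bearing inspection', 'priority': 'urgent'}
```

In a real system the rule table would be far richer, but the principle is the same: detection output feeds maintenance automation directly rather than waiting for a human to translate one into the other.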

The second layer is operational optimization software. Once you have optimized around the asset and its maintenance, now you can start to optimize around how the plant operates.

How does the customer run that plant, and what are its key performance indicators: reliability, performance, steam generation, whatever is unique to that plant?

The third layer is business optimization: how do you optimize the profitability of a power plant? When you get to that level it requires heavy involvement from the customer side, so what GE has done is develop a software framework that allows customers to input key variables from their side and get feedback.

One of the significant developments in the industry is the move to the cloud, which allows us to provide capabilities through a much lower-cost, much more analytics-capable model: cloud-based software as a service, built on the industrial platform Predix.

What inhibited growth in the past, especially with smaller applications, was getting the customer to a return on the investment in condition monitoring technology. With the move to the cloud, we can offer much more capability at a lower cost.

As customers don’t have to invest in data acquisition and computers and software for each location or site, we can leverage that across many customers so the cost model comes down for each. And there’s been a change in the model, so now the user, instead of walking up to a piece of condition monitoring equipment and looking at it, is logging on to a web page.

Today we offer a cloud model (software as a service) around monitoring and diagnostics, improving capability and adding more features and functionality as well as analytics.

We’re releasing two to three analytics per week around anomaly detection and optimization of maintenance practices. It’s easier to add capabilities and updates with the cloud model. Diagnostics are typically focused on detecting an issue; analytics also offer, in an automated fashion, insight into what’s causing that issue and what you should do about it.
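
The detection half of that distinction can be sketched very simply: a basic statistical anomaly detector flags readings that drift far from a sensor's recent behaviour. This is an illustrative sketch only, not GE's method; the window size and threshold are arbitrary assumptions:

```python
from collections import deque
from statistics import mean, stdev

# Illustrative sketch of anomaly detection on a sensor stream: flag a reading
# whose z-score against a rolling window exceeds a threshold. The window size
# and threshold are arbitrary choices, not GE's algorithm.

def detect_anomalies(readings, window=10, threshold=3.0):
    """Return (index, value) pairs for readings far outside the rolling window."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                anomalies.append((i, x))
        history.append(x)
    return anomalies

# A bearing-temperature trace with one sudden excursion
trace = [70.1, 70.3, 69.9, 70.0, 70.2, 70.1, 69.8, 70.0, 70.2, 70.1, 95.0, 70.0]
print(detect_anomalies(trace))  # the 95.0 reading at index 10 is flagged
```

Analytics in the sense described above would go a step further, attaching a likely cause and a recommended action to each flagged reading rather than stopping at the flag.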

From a customer perspective, traditional OEMs have offered some level of condition monitoring capability with their equipment, or through third parties. Many customers are asking today for an integrated condition monitoring framework across the entire power plant or backup generation system.

They want a full view, not just an individual view per component. This is an area that GE is working in now: how do we provide that level of technology across all equipment, whether GE or non-GE, so that the customer has a view of the maintenance requirements and the condition of the equipment across the whole enterprise?

Our customers, the operators of power plants, have been asking not only GE but everybody for this capability. We’re seeing some other OEMs looking to offer those kinds of multi-vendor services, as are control system vendors.

Two things distinguish GE: experience across a wide range of power generation equipment, not only GE equipment but interfacing with other vendors’ equipment; and the tremendous amount of data collected from this equipment, which helps us when combined with our investment in software, analytics and automation. Some smaller traditional condition monitoring companies may not have that level of expertise and investment.

PEi: What changes do you foresee in the condition monitoring space over the next five to 10 years?

JE: In the next five years there will continue to be more sensor technology embedded into equipment. As sensors become smaller and lower-cost, more OEMs will embed more sensor capability, which will generate more data to help do condition monitoring. The challenge will be not drowning in that data. Smaller users without large engineering staffs on-site may struggle to use all the data available.

The next generation of condition monitoring will use cloud-based capability. It will stream data into the cloud and provide software and analysis capability to those users at a much more comprehensive level than in the past.

The technology is changing tremendously; there is so much capability and investment in the whole space of software analytics and big data today.

It’s likely that in the next 10 years we will have extraordinary automation available. A piece of power generation equipment on-site will be tweeting messages to the world about its operation. Reliability and availability will continue to move up, and performance will be able to be optimized to the particular requirements of an individual site.

PEi: Aren’t there increased cybersecurity risks involved as more equipment comes onto the Internet of Things?

JE: There have to continue to be significant investments in cybersecurity – all customers are concerned about the risks.

The good news is that, just as technology is advancing on the software side, it is also advancing quickly on the security side, driven by a consumer market that is often ahead of the industrial market in trying new things.

We will also see tremendous improvements in cybersecurity capability that will have to be embedded in all of these devices.

We do offer on-premise capability but, while we offer that, we encourage customers to take advantage of the cloud.

On-site condition monitoring involves similar capabilities but tends to be more expensive and slower to take advantage of updates. We offer it for two reasons. First, many users have traditionally used on-site condition monitoring technology, and moving to the cloud is a paradigm shift, so they’re more comfortable with on-site. The other reason is the risk around cybersecurity: many customers are not yet comfortable with how that risk is managed. However, there are advances in connectivity solutions: for example, we deploy both internet-based connectivity and cellular technology as backup. There will continue to be a need for some level of capability at site, but anything critical for the site will have backup connectivity capabilities.


Justin Eggart is General Manager, Fleet Management, Power Generation Services at GE Power & Water. www.ge.com
