Modern connectivity provides better views and more data to enhance furnace control.
Until recently, the conventional furnace controller has been an independent entity, responsible on its own for performance, optimization and adaptation to process and production conditions. As part of the control system, controllers receive sensor data, parameters and commands relevant to production. With this information, the controller's control circuits switch between different settings to adapt to setpoint deviations or to execute commands issued manually or by a parent SCADA or manufacturing execution system (MES).
Most controllers use the PID mechanism to drive a process value toward the demanded setpoint. Alongside this control loop, subroutines may further adjust the setpoints based on actual process conditions as indicated by sensor readings. This adaptation is mainly based on analytical formulas and strict rule sets.
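To make the PID mechanism concrete, here is a minimal sketch of a discrete PID loop driving a toy first-order plant toward a setpoint. All gains and the plant model are illustrative values chosen for this example, not tuning from any real furnace controller.

```python
class PID:
    """Minimal discrete PID controller: drives a process value toward a setpoint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, process_value):
        error = setpoint - process_value
        self.integral += error * self.dt               # accumulated error (I term)
        derivative = (error - self.prev_error) / self.dt  # rate of change (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy example: drive a simple first-order "furnace" model toward 850 degrees C
pid = PID(kp=0.5, ki=0.05, kd=0.1, dt=1.0)
temp = 20.0
for _ in range(200):
    heat = pid.update(850.0, temp)
    temp += 0.1 * heat - 0.01 * (temp - 20.0)  # toy plant: heating minus losses
```

In a real controller the loop would run on a fixed scan cycle and include output clamping and anti-windup, but the structure is the same: the subroutines mentioned above would adjust the setpoint passed into `update` rather than the loop itself.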
Modern Connectivity Yields Better Performance
How can these valuable and proven functionalities be enhanced by removing the limited view of information and extending the data horizon? The answer is the bi-directional connection and integration of the controller into an Industrial Internet of Things (IIoT) platform, which offers a holistic view of data and enables machine-learning and digital-twin concepts.
Imagine having an IIoT platform that breaks up the common data silos found at almost every production site. With its underlying data warehouse and data model, an IIoT platform can gather not only all process and production values but also all the related data, such as quality results, part specifications and steel analyses. The main purpose of data consolidation into a single source is to increase data reliability, transparency and accessibility and, in general, to provide a holistic view of data. Apart from all the different applications available on such platforms, the data model is the most valuable yet most underrated component. At the same time, it is also the component that requires the most effort and labor to make it reliable and complete.
Based on a well-designed and complete data landscape, the power of machine learning can be employed to enhance performance. By incorporating quality and audit data, supervised-learning approaches can yield sophisticated models that predict aspects which are usually inaccessible and hard to capture in the analytical rules on which common controllers rely.
Such models, created and trained on the IIoT platform, can then be executed on the controllers to enhance the given rule set with predictions. The prediction results can also feed new rules or increase the accuracy of internal operations. Another area of use is the continuous prediction of process endpoints. For example, if the predicted endpoints deviate from target specifications, the controller can change the course of the process accordingly to avoid out-of-spec part treatment.
Whereas model application should be performed on the controller itself, there are solid reasons why model building and training should run on dedicated cloud or server infrastructure. First, cloud environments (provided by a hyperscaler or on site) offer the necessary computing power and virtually unlimited storage capacity. Second, the data needed for supervised learning is usually not transferred to the device but is available on the IIoT data platform. Finally, the hardware requirements of a controller can remain moderate because model application is a cheap operation, whereas model training is usually costly.
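The train-on-platform, predict-on-controller split can be sketched with the simplest possible model. Here ordinary least squares stands in for the costly platform-side training, and inference collapses to a single multiply-add that even a PLC-class device can afford. The soak-time and hardness numbers are invented for illustration, not real process data.

```python
def fit_linear(xs, ys):
    """Platform side: ordinary least squares for y = a*x + b (one feature).
    In practice this step is the expensive one and runs on cloud/server hardware."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict(model, x):
    """Controller side: evaluating the trained model is a cheap multiply-add."""
    a, b = model
    return a * x + b

# Platform: fit endpoint hardness (hypothetical HRC values) against soak time (min)
history_soak = [60, 90, 120, 150, 180]
history_hardness = [52.1, 54.8, 57.2, 59.9, 62.3]
model = fit_linear(history_soak, history_hardness)

# Controller: predict the endpoint for the current run and act on the deviation
predicted = predict(model, 135)
```

A production model would of course have many features and a richer form, but the asymmetry is the same: only the fitted coefficients need to be pushed down to the controller.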
Since controllers are a crucial (but sensitive) part of production, it is beneficial to avoid tasks that demand long runtimes and high computing power, in order to ensure controller stability and performance. The credo should always be to "use the device for what it is made for," which in this case is process control.
Top-Level Anomaly Detection
Another option is to run anomaly detection directly in the platform environment. With the full context of the production data and the whole history of process data available, common patterns can be identified and defined as normal behavior. Comparing live processes against these patterns can reveal abnormal behavior. With failure mode and effects analysis (FMEA) and root-cause analysis running in the background, the detected behavior can be traced to specific causes. Knowing the origin of the problem then allows advice or automatic commands to be issued to the controller, enabling adaptation to the detected anomaly.
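A minimal form of such pattern-based detection is a trailing z-score check: the platform learns "normal" from the recent history of a signal and flags points that deviate by several standard deviations. The window size, threshold and the synthetic temperature trace below are illustrative assumptions, not values from any real deployment.

```python
import statistics

def detect_anomalies(values, window=20, threshold=3.0):
    """Flag points deviating more than `threshold` sigmas from the trailing window."""
    flags = []
    for i, v in enumerate(values):
        hist = values[max(0, i - window):i]
        if len(hist) < 5:           # not enough history to define "normal" yet
            flags.append(False)
            continue
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist)
        flags.append(sigma > 0 and abs(v - mu) > threshold * sigma)
    return flags

# Synthetic steady furnace signal (small alternating noise) with one injected spike
signal = [850.0 + 0.5 * ((-1) ** i) for i in range(40)]
signal[25] = 880.0  # abnormal excursion
flags = detect_anomalies(signal)
```

Real platforms use far richer models (multivariate, seasonality-aware), but the division of labor matches the text: this kind of full-history analysis belongs on the platform, while only the resulting advice or command reaches the controller.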
Digital Twin and Virtual Prototyping
A third advantage of the real-time data exchange between controller and cloud environment comes with the use of digital twins. The digital twin of a controller consists of an accurate dataset that mirrors the state of its real-life counterpart. However, data alone does not form a digital twin. Since a digital twin should behave like the real object or device, it must be enriched with methods and tools that mimic real behavior. This can be done by means of simulation and the application of machine-learning models to predict a virtual process. Admittedly, some aspects are hard to simulate because of uncertainties and gaps in knowledge or experience. Likewise, some topics are difficult to handle via machine learning for a variety of reasons (e.g., the number of examples is too small, or a certain behavior is not represented among the examples). However, combining both worlds in hybrid models often helps to cover the most important factors relevant to the process.
After validation of the underlying models and simulations, the digital twin can be used to compare the actual process data gathered by the controller with simulated data created by the digital twin. This makes prototyping faster and less risky, without interfering with the real process. Once the experimental settings have been validated, they can be pushed out to the controller to be available and active during future production runs.
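The prototyping workflow can be sketched as follows: a candidate setpoint recipe is run against the twin's plant model, and only if the predicted trace stays within specification is it pushed to the real controller. The toy first-order thermal model, the proportional heater and all tolerances here are hypothetical stand-ins for a validated twin.

```python
def twin_run(setpoints, dt=1.0, gain=0.05, loss=0.01, ambient=20.0, t0=20.0):
    """Digital-twin side: replay a candidate setpoint profile against a toy
    first-order plant model driven by a simple proportional heater (kp is a
    hypothetical tuning value; proportional-only control leaves an offset)."""
    kp = 2.0
    temp, history = t0, []
    for sp in setpoints:
        power = max(0.0, kp * (sp - temp))            # heater cannot cool
        temp += dt * (gain * power - loss * (temp - ambient))
        history.append(temp)
    return history

# Candidate recipe: ramp toward 600 degrees C, hold, then toward 850 degrees C
recipe = [600.0] * 300 + [850.0] * 300
trace = twin_run(recipe)

# Validate against a (hypothetical) spec limit before pushing to the controller
overshoot_ok = max(trace) <= 860.0
```

The experiment costs nothing but compute time: a recipe that fails the check is revised and re-run on the twin, and only a passing recipe is deployed to production.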
The concept of connecting a controller to an IIoT platform creates a multitude of possibilities. While the controller hardware does not need to meet high requirements, the controller firmware must be updated to be capable of receiving and executing machine-learning models. This does not require a big investment on the hardware side, unlike edge computing, which is based on the principle of performing all modeling and all calculations on the device itself.
With all information already available in a central data center, which IIoT platforms should provide, the next step is to make profitable and extensive use of it, not only for the sake of business intelligence but also to optimize and adapt processes in real time.
For more information: Mike Loepke is head of software and digitalization at Nitrex Metal Inc. in Saint-Laurent, Quebec, Canada. He can be reached at 514-335-7191, email@example.com or www.nitrex.com.
All graphics courtesy of the author.