
High-temperature processes – such as those required to manufacture metal products – are very difficult to monitor, model and control. Monitoring a high-temperature, highly reactive material (Fig. 1) requires extremely robust, externally cooled instruments that can be short-lived and extremely expensive.

The development of computer modeling and simulation tools has led to great advances in understanding how materials move and behave within hot, corrosive environments. Controlling relevant features of the flow, for example, has led to product-quality improvements that could not be achieved by previous experiment-only methods. These computer models can be used to develop advanced control algorithms and inform capital-investment decisions regarding instrumentation and equipment modification. Combined with topological optimization methods, computer simulations will drive next-generation furnace design.

Unfortunately, high-fidelity computational simulations can take significant time to run and require large computers. Process optimization, which requires many simulations at different conditions, can be expensive. To mitigate these obstacles to widespread use of sophisticated computer models, we have developed artificial-intelligence (AI) methods based on deep neural networks. These methods use high-fidelity simulation results as training data so the neural network learns the critical process components or the system as a whole.

The resulting trained neural network accurately emulates the simulations run on large computer platforms but can run on a game-class desktop computer – one with a good graphics-processing unit – in a fraction of the time required for the equivalent simulation to run on a high-performance computing (HPC) system. The use of the trained neural network combined with process sensor data brings the power of HPC simulation to the plant floor and empowers the plant engineer to make better control decisions in time to maintain the high product-quality standards increasingly demanded by downstream manufacturers.

These fast-running models can also be used as the computational engine for advanced control algorithms, process optimization and topological optimization, as well as to train early-career production staff. These techniques can greatly reduce the time and computational cost of both product and process optimization while also improving process control. Computational tools in general help lower the risk of design changes because outcomes are predicted with greater accuracy.


Training the Machine-Learning Tool

As with most data projects, much of the effort goes into generating and curating the training data. This starts with selecting the major control variables that define the control space to be explored by computer simulation.

Care must be taken to establish the appropriate variable ranges and to understand variable interactions. Corner points – the part of the control space with extreme variable interaction – may be nonsensical and yield spurious results. Typically, we try to establish a control space that is a complete hypercube when normalized, but this is not strictly necessary.

Next, we establish a sampling strategy for the control space. There are many methods, from grid patterns and response-surface methods to Bayesian approaches. In general, we strive to collect as much information as possible (minimizing uncertainty) while staying within our simulation budget. Computer simulations can be time-consuming and therefore expensive.
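
To make the sampling step concrete, the sketch below draws a Latin hypercube design over a normalized control space and scales it back to physical units. The three control variables, their ranges and the sample count are hypothetical placeholders, not values from any of the projects described here.

```python
# Sketch: Latin hypercube sampling of a normalized control space.
# The control variables and ranges below are hypothetical placeholders.
import numpy as np
from scipy.stats import qmc

# Hypothetical control variables and ranges
lower = np.array([1400.0, 5.0, 0.1])   # e.g., crown temp (C), pull rate (t/h), batch moisture (%)
upper = np.array([1600.0, 12.0, 0.5])

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=30)             # points in the normalized [0, 1]^3 hypercube
design = qmc.scale(unit_samples, lower, upper)  # map back to physical units for the simulator

for row in design:
    print(f"crown_temp={row[0]:.1f} C, pull_rate={row[1]:.2f} t/h, moisture={row[2]:.2f} %")
```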

A glass-furnace simulation and the associated tracer post-processing can take several days to yield results. Simulation precision must also be considered when choosing a control-space sampling strategy. After generating the data set, it is usually a good idea to double-check the data history to ensure the appropriate variables were varied while the others were held constant. Best practices include documenting the data set's provenance for future reference.

After the data is prepared, it is split into training, test and validation sets so that various model architectures can be explored. This process can be computationally expensive and is best done on an HPC system or with cloud resources. We often use autoencoder or graph neural network architectures. Access to the low-dimensional latent space gives additional capabilities, such as the ability to cluster and categorize distinct process modes that have various quality implications.
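
A minimal sketch of this stage, assuming flattened simulation snapshots as inputs, is shown below: a small fully connected autoencoder in PyTorch with the data split into training, validation and test sets. The dimensions, split fractions and training settings are illustrative assumptions; production models are considerably larger and may use graph-neural-network architectures instead.

```python
# Sketch: a small autoencoder trained on flattened simulation snapshots.
# All sizes and settings are illustrative, not the values used in practice.
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split

n_snapshots, field_dim, latent_dim = 512, 4096, 8
snapshots = torch.randn(n_snapshots, field_dim)   # stand-in for simulation field data

dataset = TensorDataset(snapshots)
n_train = int(0.7 * n_snapshots)
n_val = int(0.15 * n_snapshots)
n_test = n_snapshots - n_train - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(field_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, field_dim))

    def forward(self, x):
        z = self.encoder(x)          # low-dimensional latent state, usable for clustering process modes
        return self.decoder(z), z

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for (batch,) in train_loader:
        recon, _ = model(batch)
        loss = loss_fn(recon, batch)   # reconstruction error drives the training
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```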



Fig. 2. Graphical user interface for fast-running AI model of glass-melt furnace operation. Actual parameters are intentionally obfuscated to protect trade information.


Application to High-Temperature Manufacturing Processes

Once a neural network is trained, it can predict the simulation results for conditions within the assigned control space in near real time. It should be validated by processes that consider the risk tolerance of the use case, and additional software can be added to prevent use outside the safe regions. This lightweight inference model can usually be run on a game-class desktop or laptop computer, with a graphics-processing unit (also known as a game card) used to accelerate the computations. The output can be some combination of predicted process fiducials, such as temperatures that are actually measured; critical flow features, such as major vortices; quality indices; and the entire process state, such as velocity, temperature and concentration fields (Fig. 2).

Production data from the physical plant can also be used in combination with the simulation data. This can be tricky because the actual plant data is often from specific points in the process and may be very noisy. Simulators are often tuned to match production data by using extra, unphysical parameters, often known as fudge factors. With data-driven methods, the machine-learning routine attempts to replicate the physics of the system with some regressive method. Extracting the physics of the process from these machine-learning methods is an ongoing research topic.
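
As an illustration of the inference step, the sketch below loads a previously exported surrogate and evaluates it on one set of normalized control inputs using whatever GPU is available. The model file name, input layout and output interpretation are hypothetical stand-ins for whatever the trained network actually expects.

```python
# Sketch: running a trained surrogate on a desktop GPU.
# The file name, input layout and outputs are hypothetical placeholders.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A previously trained and exported surrogate (e.g., TorchScript); path is illustrative.
surrogate = torch.jit.load("furnace_surrogate.pt", map_location=device)
surrogate.eval()

# One set of normalized control inputs, e.g., burner set points and pull rate (hypothetical).
controls = torch.tensor([[0.42, 0.77, 0.31, 0.55]], device=device)

with torch.no_grad():
    prediction = surrogate(controls)   # e.g., flattened temperature/velocity fields or quality indices

print(prediction.shape)
```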



Fig. 3. Diagram of the flows in a glass-melt furnace. LLNL and Vitro Glass scientists developed an AI model of the gas release and glass flows based on gas burner configurations and other process conditions.


Example from the Float-Glass Industry

Used in the production of plate glass – a type commonly used in windows and doors – a glass-melt furnace performs multiple functions, such as melting and homogenizing raw materials, as well as stripping gases and impurities from the melt. Small variances in the process can significantly affect the quality of the final product (Fig. 3).

The first step in modern process control for glass-melt furnaces consists of setting the furnace temperatures based on complex computational fluid dynamics (CFD) models that simulate the process (Fig. 4). When the measured furnace temperatures deviate from desired temperatures, the CFD code is run again to tell operators how to adjust the furnace set points. This process currently takes up to two weeks.

Engineers at Vitro Glass (formerly known as PPG Glass) are interested in reducing the time between measuring the deviation in the process control points and making the corrections to the parameters for operating furnaces. They posed this problem to researchers at Lawrence Livermore National Laboratory (LLNL) and asked for help in solving it.



Fig. 4. Tracer flows from a simulation of a glass-melt furnace.


To address the challenge, researchers at LLNL used machine-learning methods to develop a reduced-order glass-furnace model so plant engineers can make informed process adjustments in real time. The machine-learning algorithm is much less computationally intensive than the original CFD model and can be used as a real-time furnace-control system running on a desktop computer instead of a larger supercomputer. The new control system based on machine learning can instantly inform the furnace operator what new set points to use when temperatures deviate from normal.
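
A minimal sketch of how such a fast surrogate can sit inside a set-point search is shown below. The surrogate here is a simple stand-in function, and the target temperatures and bounds are invented; in practice, the trained neural network would replace the stand-in and the targets would come from plant instrumentation.

```python
# Sketch: using a fast surrogate inside a set-point search.
# The surrogate, targets and bounds are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

def surrogate_sensor_temps(set_points):
    """Stand-in for the trained model: map normalized set points to predicted sensor temps (C)."""
    A = np.array([[120.0, 40.0], [30.0, 150.0]])
    return 1300.0 + A @ set_points

target_temps = np.array([1452.0, 1468.0])   # desired sensor readings (hypothetical)

def objective(set_points):
    # Penalize deviation of predicted sensor temperatures from the targets.
    return np.sum((surrogate_sensor_temps(set_points) - target_temps) ** 2)

result = minimize(objective, x0=np.array([0.5, 0.5]), bounds=[(0.0, 1.0), (0.0, 1.0)])
print("suggested set points:", result.x)
```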

Through a project funded by the Department of Energy High Performance Computing for Manufacturing (HPC4Mfg) program, which aims to provide supercomputing resources to industry partners, LLNL is working with Vitro Glass to demonstrate this new process-control approach in what has traditionally been an energy-intensive manufacturing sector. This fast-running prediction tool can save roughly two weeks of production per year (per furnace) and increase productivity by 2%. Extrapolating those improvements to the entire U.S. glass manufacturing industry suggests that 130,000 metric tons of carbon dioxide emissions could be saved.



Fig. 5. Cast-aluminum ingot develops a crack after cooling. LLNL and Arconic scientists developed an AI model of the residual stresses based on process conditions.


Example from the Metal-Casting Industry

Coils of aluminum sheet metal get their start as immense ingots that are drawn from molten aluminum in direct-chill (DC) casting facilities. Some aluminum ingots must be reprocessed to meet high quality standards set by customers in the aerospace and automotive industries. Ingots may be cropped to remove end cracks or individually machined to remove defects on the rolling face (Fig. 5). Occasionally, casting rounds are completely abandoned. While aluminum can be readily melted for another casting round, the energy expended to remove cracks and melt recycled aluminum is wasted.


Arconic, a metal engineering and manufacturing company, and other domestic producers cast billions of pounds of aluminum annually. An estimated $60 million per year in energy savings could be achieved if the entire U.S. aluminum industry cut its ingot-scrapping rate by 50%. Those savings do not include the additional energy expended during downstream processing of scrapped ingots.

However, predicting the probability of defects across all possible process conditions, such as casting speed and cooling rate, for each alloy can be difficult. Pilot experiments are expensive, hazardous and difficult to control, preventing manufacturers from moving beyond commonly accepted practices and finding unexpected innovations. Computer simulations provide an alternative to the experimental approach by predicting the likelihood of defects for different manufacturing conditions.

By developing a casting model on high-performance-computing hardware, the industry/laboratory team used the off-the-shelf modeling software ProCAST to quickly identify ingot-processing inputs that minimize cracks. The model was validated through multiple casts under a range of manufacturing conditions, with each simulation requiring several days. Researchers determined that the model successfully predicted the interaction of heat, solidification, microstructure and strength development to support improved aluminum DC casting.

Next, the team coupled the casting simulation data to numerical optimization and sampling codes. The resulting machine-learning solution quickly determined the success or failure of casting with any set of parameters. As a result, predictions that used to take days to complete in the initial model now take Arconic minutes in its own offices. In addition, the researchers found casting solutions unlikely to be revealed by trial-and-error experiments.
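
The sketch below illustrates the general idea of such a coupling: a classifier trained on simulation outcomes that flags whether a proposed combination of casting speed and cooling rate is likely to crack. The data, feature ranges and model choice are invented for illustration and do not represent the LLNL/Arconic optimization and sampling codes.

```python
# Sketch: a surrogate classifier for casting success/failure trained on simulation outcomes.
# Features, ranges and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([rng.uniform(40, 80, n),     # casting speed (mm/min), hypothetical range
                     rng.uniform(0.5, 3.0, n)])  # cooling-rate index, hypothetical
# Stand-in label: pretend fast casting combined with aggressive cooling tends to crack.
y = ((X[:, 0] > 65) & (X[:, 1] > 2.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("crack risk for speed=70, cooling=2.5:", clf.predict_proba([[70.0, 2.5]])[0, 1])
```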

The HPC4Mfg project potentially opens new markets to Arconic by enabling the company to cost-effectively meet standards from aerospace and automotive customers. The company will save time and energy by casting ingots with fewer defects and save the volume of cooling water required when recasting scrapped ingots. With modifications, the model could apply to all materials manufacturing, including steel, titanium, nickel-based alloys and different aluminum alloys.

Based on Arconic’s estimates, eliminating half of scrapped materials across all U.S. structural-material casting industries could save $365 million per year in energy costs. By increasing production and material quality at lower cost and with reduced energy consumption, U.S. materials companies could gain a greater advantage in an industry with many foreign competitors.


Current and Future Projects

Currently, the LLNL team is applying these methods to a variety of industrial systems, including gas-turbine optimization; mixed-material additive-manufacturing topological optimization; modeling of hot-steel rolling; and modeling and control for the robotic manufacturing of sheet-steel products. We are looking for other manufacturing partners, especially those with high-energy-intensity applications.

The High Performance Computing for Energy Innovation (HPC4EI) initiative is the umbrella for the HPC4Mfg program and the High Performance Computing for Materials (HPC4Mtls) program. This work is supported by the Department of Energy Advanced Manufacturing Office through the High Performance Computing for Manufacturing program. HPC4Mfg aims to provide expertise and supercomputing resources to industry partners to improve industry competitiveness and reduce energy consumption.


For more information: Visit https://hpc4energyinnovation.llnl.gov or contact Aaron Fisher, Head of Numerical Analysis and Simulations, Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. He can be reached at fisher47@llnl.gov.

Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security LLC for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.