The fourth industrial revolution – the revolution that ushered in the Internet of Things (IoT) and the Internet of Services (IoS) – has come to be known as Industry 4.0. At the Hannover Messe in 2011, Germany launched a project called “Industrie 4.0” designed to fully digitize manufacturing. The larger vision of Industry 4.0 is the digital transformation of manufacturing: leveraging third-platform technologies and innovation accelerators at the convergence of IT (information technology) and OT (operational technology) to integrate connected factories and industry, smart decentralized and self-optimizing systems, and the digital supply chain in the information-driven cyber-physical environment of the fourth industrial revolution [1] (Fig. 1).



Fig. 1. Visual depiction of the four industrial revolutions


The initial goals of Industry 4.0 have typically been automation, manufacturing-process improvement and productivity/production optimization. The more advanced goals are innovation and the transition to new business models and revenue sources, with information technologies and services as cornerstones.

The nine technologies reshaping production are specified in Figure 2. As can be noted, the new era of manufacturing is very much cross-disciplinary and requires skillsets that are not currently part of the traditional engineering curricula. The nation’s economic competitiveness and the expansion of U.S. innovation will be in peril if we cannot attract more diverse, versatile and open minds into manufacturing engineering careers, with a multiplicity of cultural and personal perspectives and interdisciplinary skills.

It is imperative that we bring more women and under-represented minorities into manufacturing engineering. The need for skilled workers has never been higher, and a new paradigm is needed to achieve this goal. A multidisciplinary team of material-processing and manufacturing experts and data scientists at the University of California, along with colleagues at the University of Wisconsin’s Grainger Institute and the University of Massachusetts Lowell, is working toward establishing an advanced manufacturing hub to address the needs of the fourth industrial revolution.



Fig. 2. Nine technologies are reshaping production[3]


Industry Transformation

The intent of this article is not to discuss the changes that are needed in our educational system but rather to focus on the nexus of data science and the metal-processing and manufacturing industries. Both industries are going through a major transformation, and we are currently in its nascent stages. The transformation pivots on the advent of data science. The volume of data being generated, captured, stored, mined and utilized is turning data into process information, which in turn is transformed into knowledge and, more importantly, into process controls that supersede human capabilities.

There is a joke going around that the factory of the future will have two beings in it: a person and a dog. The person’s job is to feed the dog, and the dog’s job is to make sure the human does not touch anything. Though it is an exaggeration, there is some truth in this analogy. The future of work and, more importantly, the future of the worker will significantly change in the 21st century.



Fig. 3. Skills shortage in U.S. manufacturing 2018-2028 (BLS Data)[4]


Advances in Computational Capabilities

During the last four decades or so, we have seen major developments in mathematical modeling of material and metal processes, much of it due to advances in computational capabilities. These advances enabled us to solve the complex sets of mathematical equations that quantitatively depict the process being modeled. When a physical model has been verified – meaning the physics is correct and the model is robust – we are in a good position: the model can generate data we can rely on. The ideal situation is when the physical model is correct and accurately describes the process, all the critical variables in the model are known, and sufficiently large sets of data complement the model. In such situations, one does not need to implement artificial intelligence (AI) or machine-learning methodologies.

In Figure 4, the quadrant at the upper right depicts the region where the physical model is good and abundant data are available. In much of our metal-processing scenarios, however, the models are inadequate: many interaction coefficients cannot easily be measured, or the equations describe an equilibrium situation when the process is in fact in a non-equilibrium state. When the models are inadequate but abundant data exist – obtained from the plant or experimentally – we can use machine learning to learn from the data (upper-left quadrant of Figure 4). When both the models and the data are poor – data that are incomplete, missing, unbalanced, biased or small in size – deep learning or self-supervised learning can be implemented (lower-left quadrant).

Metal-processing operations generally reside toward the left-hand side of Figure 4. Indeed, there are competent physics-based models to simulate casting of molten metal and heat treating of metal components. However, the prevailing industry sentiment is that while simulation software is an indispensable tool of the trade, it only gets producers so far.

Metal processors are good at what they do. If an operation is making 95% good parts, the 5% scrap is due to inherent process variation and special-cause variation. Special-cause variation can be thought of as a defective component produced by a procedural breakdown, such as an incorrect tool or machine setup, or excessive tool wear that goes unnoticed during a run. Even when the procedural issues are all cleaned up, the inherent variation remains. Manufacturing operations are targeting incremental improvements of half a percent. To accomplish this, data and tools are needed to analyze and determine which, if any, of the data being collected are driving the variation.




Fig. 4. Quadrant indicating when machine learning and deep learning are applicable. Note that when a good physical model exists and data are available (right side of visual), artificial intelligence (AI) methodologies are not needed.


Data Collection

Collecting data effectively begins with a data management plan. For many years, data collection in the foundry industry, for example, was driven by the standards of quality systems such as ISO-9000, QS-9000 or TS-16949. In short, the basic requirements are: collect the data that drive quality, have a reaction plan and demonstrate that both are in place.

However, these quality systems have no stipulation that the data-collection and documentation methods be standardized from one department to another. Therefore, it is possible that casting-plant data may be uploaded to a cloud server while heat-treat furnace data are kept as circular-chart hardcopies within the same facility. The manufacturing facility thus organizes itself into departmental silos, where one department is unaware of the data being collected by another or how those data could be useful to it (Fig. 5).

Overcoming silos and organizing the data already being collected is a good place to start the data management plan. Building a culture of data-driven decision-making begins with being open about data and increasing access and visibility, such as broadcasting control charts and production statistics in real time on monitors throughout the facility. With a good understanding of which data are currently collected, align the collection frequency with the needs of all departments so that the most value is gained. Attaching timestamps and serial numbers to data allows one to aggregate the data relevant to the product of interest and organize them for the types of analysis depicted in Figure 4.
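As a sketch of that aggregation step – with hypothetical serial numbers, timestamps, field names and values – records from two departmental silos can be merged onto the part of interest by serial number:

```python
from datetime import datetime

# Hypothetical records from two departmental silos, each stamped with a
# serial number and a timestamp at collection time.
casting_data = [
    {"serial": "C-1001", "cast_ts": datetime(2022, 3, 1, 8, 12), "pour_temp_C": 705.0},
    {"serial": "C-1002", "cast_ts": datetime(2022, 3, 1, 8, 19), "pour_temp_C": 698.5},
]
heat_treat_data = [
    {"serial": "C-1001", "ht_ts": datetime(2022, 3, 1, 13, 2), "soak_time_min": 90},
    {"serial": "C-1002", "ht_ts": datetime(2022, 3, 1, 13, 40), "soak_time_min": 88},
]

# Aggregate by serial number so each part carries its full process
# history across departments -- casting and heat-treat data side by side.
history = {}
for row in casting_data + heat_treat_data:
    history.setdefault(row["serial"], {}).update(
        {k: v for k, v in row.items() if k != "serial"}
    )

print(history["C-1001"])
```

The same join-on-serial-number pattern is what a database query or a dataframe merge performs at production scale; the point is that without the shared key and timestamps, the silo data cannot be stitched together at all.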

It is likely that the existing data are insufficient to identify the inherent variation that affects quality metrics. This may be because the dataset is small or because a key variable is simply not being captured. With machine learning, it is important to think beyond typical tabular data such as pressures and temperatures of metal and mold materials. Data come in many formats. For example, images can be processed through clustering algorithms and assigned a classification or rating. Image data could be pictures of parts, process profile plots or X-ray inspection images. Time-series data provide important information by monitoring sub-processes that are lost in a simple cycle-time measurement. Of course, additional tabular data – such as ambient environmental data (air temperature, humidity, etc.) – are also pertinent.
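A minimal sketch of the clustering idea follows, assuming hypothetical two-number features extracted from X-ray inspection images (mean gray level and fraction of dark “defect” pixels). The k-means routine here is a bare-bones illustration, not a production implementation:

```python
import math

# Hypothetical features extracted from X-ray inspection images:
# (mean gray level, fraction of dark "defect" pixels).
features = [
    (0.82, 0.010), (0.80, 0.020), (0.79, 0.015),  # likely sound castings
    (0.55, 0.120), (0.58, 0.100), (0.53, 0.140),  # likely porous castings
]

def kmeans(points, k=2, iters=20):
    """Bare-bones k-means: assign points to nearest centroid, then
    recompute centroids as cluster means, repeated for a fixed number
    of iterations. Seeding from the first k points is naive and for
    illustration only."""
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
            if cluster else centroids[j]
            for j, cluster in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(features)
```

Each image ends up assigned to a cluster, which an engineer can then label (e.g., “sound” vs. “porous”) – turning unlabeled inspection images into a quality rating without hand-classifying every part.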

The larger question becomes: What are the data available, and what are the data that are not captured but that should be collected, stored and monitored?



Fig. 5. Departmental data silos are a challenge to implementing machine learning in many materials manufacturing operations.[5]



Remarkably, one can already see a path to a future state of production in which machines autonomously adjust their process settings based on upstream data to account for natural process variation. The future of metals manufacturing is far from written. What is certain, however, is that the fourth industrial revolution is transforming manufacturing, and establishing which data need to be captured, stored and analyzed is the foundation for data-driven decision-making.


1.    Industry 4.0: the essence explained in a nutshell


3.    Thoben et al., Int. J. of Automation Technology, 2017

4.    Boston Consulting Group Analysis


6.    BLS Data, OEM (Oxford Economics Model), Deloitte and Manufacturing Institute skills research initiative

7.    Kopper, Adam E., and Diran Apelian, “Predicting Quality of Casting via Supervised Learning Method,” International Journal of Metalcasting (2021): 1-13