The world is evolving. What was adequate and useful yesterday will not be enough to succeed tomorrow. With the technical advances of Industry 4.0, the future of the quality profession is also changing. LNS Research (2017, p. 4) defines Quality 4.0 as “closely aligning quality management with Industry 4.0 to enable enterprise efficiencies, performance, innovation and business models.”
Merging the human element with Quality 4.0 is more critical now than ever. Machines can accomplish much of what humans used to do. Now is the time to leverage technology while capitalizing on the unique qualities humans bring that machines cannot replicate.
As quality professionals, we can add tremendous value by combining “soft” people skills with quality tools, drawing on uniquely human characteristics such as creativity, imagination, questioning, intuition, perception, feeling, and curiosity. At its best, this combination results in innovation.
Today’s global, high-speed, complex, and disruptive business environment demands that companies make their organizations more adaptive and agile. Upskilling personnel and setting culture intentionally are imperative. The companies that get this right will have an incredible competitive advantage.
Paulise (2022) emphasizes the importance of employee engagement in developing a company culture that prioritizes quality and innovation. The soft side of quality must be merged with quality tools through corporate culture to lay the foundation for innovation. While Quality 4.0 tools will help organizations drive innovation, conscious teamwork is essential to create an organization with engaged and motivated employees, resulting in a virtuous cycle of improving outcomes. Engaged employees are better equipped to create high-quality products and services for their customers. Customers, in turn, have a more fulfilling buying experience, leading to increased sales, positive recommendations, and even new ideas for future products. Ultimately, focusing on culture and incorporating Quality 4.0 tools drives innovation and success. Organizations cannot focus solely on technology; they must also address the human element through corporate culture.
ASQ (2024) explicitly calls out the Quality 4.0 tools that should be used to alleviate challenges and support digital transformation, including artificial intelligence (AI), big data, blockchain, deep learning, enabling technologies, machine learning, and data science. Enabling technologies are the foundational tools that make advanced technologies and applications such as these possible, and data science is the extraction of insights using statistical methods, machine learning, and other techniques. In this article, we will look at how quality professionals can partner unique human capabilities with a subset of these tools: AI, big data, machine learning, and deep learning.
Artificial intelligence
AI encompasses many technologies that enable machines and computers to perform tasks often associated with human intelligence, such as problem-solving. Traditional AI focuses on classification, prediction, or recognition, while generative AI aims to produce new data based on the properties of its training data. Other areas within AI include machine learning, natural language processing, computer vision, and expert systems. AI improves quality by enhancing customer experiences, improving service delivery, and ensuring consistent quality. Examples of using AI to improve quality include:
- AI-driven drug quality control in the pharmaceutical industry
- AI-enhanced food safety monitoring in the food and beverage industry
- AI-powered diagnostics and treatment planning in healthcare
- Automated visual inspection in consumer electronics
- Fraud detection in financial services
- Inventory management in retail
- Network optimization in telecommunications
- Personalized customer service in hospitality
- Predictive maintenance in manufacturing
It has been said that AI will not replace your job; instead, you will be replaced by someone who knows how to use AI. AI tools can generate great drafts and ideas. However, the results and conclusions must be reviewed thoroughly to ensure they are accurate and ethical.
Let’s look at one example. AI is being used widely in manufacturing for predictive maintenance to enhance quality. Predictive maintenance systems use machine learning algorithms to analyze data from sensors embedded in the equipment. By continuously monitoring parameters such as vibration, temperature, and sound, AI can identify patterns and anomalies that indicate potential equipment failures or diminished performance.
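To make this concrete, here is a minimal sketch of how such a system might flag unusual sensor readings. It uses scikit-learn’s IsolationForest anomaly detector; the sensor values, thresholds, and variable names are illustrative assumptions, not a production design.

```python
# Minimal sketch: flagging anomalous equipment sensor readings.
# The sensor values below are simulated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated healthy operating history: vibration (mm/s) and temperature (deg C).
normal_history = np.column_stack([
    rng.normal(2.0, 0.3, 500),   # vibration
    rng.normal(65.0, 2.0, 500),  # temperature
])

# Train on data assumed to represent normal operation.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_history)

# New readings: the last one simulates a bearing running hot and rough.
new_readings = np.array([[2.1, 66.0], [1.9, 64.5], [4.8, 82.0]])
flags = model.predict(new_readings)  # -1 = anomaly, 1 = normal

for reading, flag in zip(new_readings, flags):
    if flag == -1:
        print(f"Anomaly flagged for expert review: {reading}")
```

Note that the model only flags the reading; deciding whether the flag reflects a real fault remains a human judgment.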
However, errors can occur due to data quality issues, model limitations, changing conditions, or complex failure modes. The data can be incomplete, noisy, or contain outliers, which can lead to erroneous predictions. The model can also be overfit to historical data, or it could be too simple and underfit. Changing operating conditions and equipment wear and tear can also degrade the model. Finally, there may be rare failure modes not well represented in the training data, or a combination of factors that is difficult for the AI model to predict accurately. Therefore, human intervention is still needed.
Human intervention is necessary to interpret AI predictions within the broader context of experience and knowledge. Experts must make decisions based on the predictions by considering factors that AI might not be capable of quantifying, such as operational priorities and business implications. In addition, human intervention is required when an AI system encounters an unexpected or new situation; these models are only as good as the data they were trained on. AI should flag anomalies so experts can determine whether each one is a true positive or a false positive, preventing unnecessary action. Finally, and most importantly, human oversight is critical to ensure AI recommendations do not compromise the safety of employees and customers or violate ethical standards.
Big data
Big data refers to extremely large and complex data sets. As organizations collect more and more data, it becomes increasingly challenging to process and analyze it using traditional methods. Big data tools enable organizations to collect, process, and analyze vast amounts of data to gain actionable insights. Examples of using big data to enhance quality include:
- Predicting and preventing crime to improve public safety
- Monitoring and optimizing energy production and consumption
- Tracking patient outcomes and treatment efficacy in healthcare
- Personalizing learning in education based on student performance data, learning behaviors, and feedback
- Optimizing processes and reducing defects in manufacturing
- Improving supply chain visibility to identify delays, predict demand, and streamline operations
Just because a data set is bigger does not mean it is better. Do not simply trust the data. It is essential to understand where the data comes from and how it was collected. Analyzing data and arriving at a graph or a number can be easy. However, if inapplicable or invalid data is included in the data set, it can lead to false conclusions.
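Before any analysis, it helps to interrogate the data set itself. Below is a minimal sketch of such basic checks using pandas; the column names and plausibility limits are hypothetical and would come from domain experts in practice.

```python
# Minimal sketch: basic data quality checks before trusting an analysis.
# Column names and plausibility limits are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "order_id": [101, 102, 103, 104],
    "lead_time_days": [5.0, 7.0, None, 420.0],  # None = missing; 420 is suspect
    "region": ["NA", "EU", "EU", "NA"],
})

# 1. Where are values missing?
print(df.isna().sum())

# 2. Which rows fall outside limits agreed on with domain experts?
suspect = df[(df["lead_time_days"] < 0) | (df["lead_time_days"] > 90)]
print(suspect)

# 3. Are any records duplicated?
print(df.duplicated(subset="order_id").sum())
```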
Let’s look at an example from supply chain management. While big data analytics can enhance supply chain visibility, several challenges and limitations can lead to inaccuracies; therefore, human involvement is still necessary. There can be data quality issues, such as incomplete or inaccurate data, and data silos in which data is fragmented across different systems or departments. Supply chains are dynamic and influenced by numerous variables, such as demand fluctuations, supplier reliability, and transportation delays. However vast the data, it may not capture the dynamic nature of these factors. In addition, unpredictable events such as natural disasters and geopolitical changes can invalidate the assumptions of big data models.
Human intervention is critical to contextualize the results generated by big data analytics. It is essential to have experts validate the findings, identify anomalies, and make sense of the complex patterns. Supply chain professionals provide the expertise necessary to complement big data analytics as they understand the intricacies of supply chain operations and can refine data-driven insights.
Machine learning
Machine learning is a subset of AI focusing on developing algorithms and statistical models that enable computers to perform tasks without explicit instructions. Machine learning improves quality by analyzing data to identify patterns, make predictions, and optimize processes. Examples of using machine learning to enhance quality include:
- Analyzing historical traffic data, real-time traffic conditions, and user input to optimize route planning for logistics and transportation services
- Analyzing medical images to detect anomalies and assist in diagnosis
- Analyzing satellite imagery, soil data, and weather patterns to predict crop health in agriculture
- Analyzing sensor data from machinery to predict maintenance needs
- Forecasting demand to optimize inventory levels in retail
- Inspecting parts and vehicles for defects in automotive manufacturing
- Monitoring and optimizing the production process in food and beverage manufacturing
Machine learning can provide useful algorithms and statistical models. However, a model is only as strong as the data set used to train it, and machine learning can produce incorrect results. What methods are used to cross-check the output and ensure it is accurate? Human intervention is still imperative, as are systematic cross-checks like the one sketched below.
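One widely used cross-check is k-fold cross-validation, which estimates how a model performs on data it has never seen. The sketch below uses scikit-learn with a synthetic dataset standing in for real inspection data; the model and parameters are illustrative assumptions.

```python
# Minimal sketch: cross-checking a model with five-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for real data (features -> pass/fail label).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

model = RandomForestClassifier(random_state=0)

# Train on four-fifths of the data and test on the remaining fifth,
# rotating so every record is held out exactly once.
scores = cross_val_score(model, X, y, cv=5)
print(f"Accuracy per fold: {scores.round(3)}")
print(f"Mean accuracy: {scores.mean():.3f}")
```

A large gap between training accuracy and these held-out scores suggests overfitting; uniformly low scores suggest underfitting or uninformative data. Either way, the numbers are a prompt for expert investigation, not a verdict.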
Let’s look at an example from medical diagnosis. Inaccurate results can stem from limited training data, data quality issues, rare or unusual cases, and human error in annotation. Machine learning models rely heavily on the quality and quantity of data; small or non-representative datasets can result in a model that does not generalize to new patients. Medical imaging datasets can contain noise or errors that degrade the performance of machine learning algorithms, and poor-quality images or incorrect annotations can lead to inaccurate predictions. Therefore, human expertise is still required.
Human intervention is necessary to provide ground-truth labels for training datasets; this verification ensures the accuracy of the data used to train the machine learning models. The results of the machine learning algorithms must be interpreted by experts, such as radiologists and clinicians, to verify their accuracy and provide additional context. Expert judgment is imperative for accurate diagnosis and treatment decisions, particularly in complex cases, and it is essential for responsible patient care and decision-making.
Deep learning
Deep learning is a subset of machine learning that uses neural networks with many layers to learn from big data. Neural networks process information in a way loosely inspired by how the human brain works. Deep learning improves quality by leveraging complex neural networks to analyze large datasets, identify patterns, and make predictions. Examples of using deep learning to enhance quality include:
- Analyzing customer behavior and preferences to develop personalized product recommendations on e-commerce platforms
- Analyzing historical sales data, market trends, and external factors to forecast demand
- Analyzing images of manufactured products to detect defects automatically
- Analyzing X-rays and MRI scans to assist radiologists in detecting and diagnosing diseases
- Analyzing transactional data to detect fraudulent activities
- Processing data from sensors to enable autonomous vehicles to perceive and understand surroundings
Technologies such as image classification, complex pattern recognition, and heuristic image adjustment enable many advances. However, it is vital to bring in the human element to ensure these tools apply to everyone, such as ensuring that different body types and skin tones are represented when product performance is developed and evaluated.
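One practical way to act on that concern is to evaluate model performance separately for each subgroup rather than relying on a single overall score. A minimal sketch follows; the group labels, predictions, and column names are entirely hypothetical.

```python
# Minimal sketch: checking model accuracy per subgroup, not just overall.
# Groups, labels, and predictions below are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "skin_tone_group": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "true_label":      [1, 0, 1, 1, 0, 1, 0, 0],
    "predicted":       [1, 0, 1, 0, 0, 0, 1, 0],
})

results["correct"] = results["true_label"] == results["predicted"]

# A strong overall score can hide poor performance on underrepresented groups.
print(f"Overall accuracy: {results['correct'].mean():.2f}")
print(results.groupby("skin_tone_group")["correct"].mean())
```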
Let’s look at an example of analyzing transactional data to detect fraudulent activity. Deep learning can produce inaccurate results due to imbalanced data, concept drift, and data quality issues. First, fraudulent transactions are rare compared to legitimate transactions, so the data is imbalanced. Training a deep learning model on an imbalanced dataset can produce a model that has difficulty distinguishing between legitimate and fraudulent transactions, leading to false positives or false negatives. Second, fraudulent activities evolve (concept drift), so a model’s assumptions become outdated and its performance degrades. Finally, transactional data may contain errors, inconsistencies, or missing values, impacting the model’s performance. While deep learning models are useful, human expertise is still crucial.
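One common mitigation for the imbalance problem is to weight the rare fraud class more heavily during training. The same idea applies in deep learning frameworks; for brevity, the sketch below demonstrates it with scikit-learn’s logistic regression on synthetic transactions, so all data and parameters are illustrative.

```python
# Minimal sketch: handling imbalanced fraud data with class weighting.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic transactions: only about 2% are fraudulent (class 1).
X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# "balanced" weighting penalizes mistakes on the rare fraud class more heavily.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

# Precision and recall on the fraud class matter more than overall accuracy.
print(classification_report(y_test, model.predict(X_test), digits=3))
```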
Fraud analysts have domain expertise and knowledge about fraud patterns, trends, and emerging threats, so their insight is an essential complement to the output of deep learning models. These analysts also bring an understanding of broader business operations, regulatory requirements, and customer behavior, which enables them to identify false positives, investigate suspicious activities, and make informed decisions. In addition, analysts provide feedback that improves the performance of deep learning models over time. Finally, trained fraud analysts or investigators must weigh the ethical and legal implications of acting on potentially fraudulent activities.
Conclusion
The world has evolved, and people expect information to be immediately available. Electronic communication tools and social media allow communication to occur faster than ever before. There is no room for error.
Sensors and actuators are becoming more affordable and available. However, that does not mean they are right for every job. Just because something gives you data does not mean the data applies to what you are trying to study; being led to a faulty conclusion is worse than knowing there is still a question to be answered. Virtual reality (VR) is becoming an important training tool, but as humans, we must remember that reality is not always an exact replica of the VR simulation. Quick judgment and tools like checklists will still need to be second nature in some jobs. Similarly, textbooks often list a correct answer for each problem at the back of the book; in reality, the answer to a problem is far more variable.
The availability of data and tools makes it easier than ever to analyze data. However, just because a tool generates a pretty (or not-so-pretty) picture does not mean the picture is accurate or applicable to the situation. Human intervention and expert review are still necessary. To truly understand the data, analysts must still go and see what is happening and where and how the data is collected. It is essential not to trust or use the data or results as is. One must know the data and understand what is contained within the data set, and knowing what is missing from the data set is just as important.
While these tools hold great promise, they are not infallible. They can produce incorrect results. Human expertise and oversight are essential to ensuring accurate, reliable, and ethical use.
References
ASQ (2024). Quality 4.0. https://asq.org/quality-resources/quality-4-0
LNS Research (2017). Quality 4.0 impact and strategy handbook. https://blog.lnsresearch.com/quality40ebook
Paulise, L. (2022). We culture: 12 skills for growing teams in the future of work. Milwaukee, WI: Quality Press.