In a manufacturing setting where consistent quality matters, variability in how individual technicians and operators perform their jobs can be frustrating for managers. Companies need a way to achieve consistent quality without reducing the capacity for innovation and improvement.
Individual operator differences can lead to issues with product quality. One way to determine which practices are affecting quality is to use multivariate data analytics.
Companies that take specific steps to reduce individual performance variations often realize large financial gains as a result. For example, Ford Motor Co. saved $886 million over four years after implementing a program for sharing best practices across its manufacturing sites in the early 2000s.¹
Achieving this type of best practice optimization, however, relies on having reliable, actionable data. According to Stan Kwiecien, a Best-Practice Replication deployment manager at Ford Motor Company:
“Measurement must be an integral part of the knowledge management process. When establishing a community of practice, determining the measures and metrics is critical. These must be relevant to the work performed by the community of practice and be a trusted means of measurement readily recognized by members of the community of practice.”²
Processes vs. practices in product manufacturing
It’s important to make a distinction between processes and practices. Many efforts to reduce variability focus primarily on refining processes; for example, the enormous success of Six Sigma stems from using established statistical process controls to eliminate deviations in quality. Yet even after changing processes, companies may find that variability persists due to differences in individual operator practices.
While a process outlines the specific steps and perhaps order in which tasks are completed, a practice refers to how the tasks are actually performed. Inevitably, individual differences creep into the performance of a process put into practice. These differences can lead to innovation and improve process implementation, or they can have the opposite effect. How can you tell which?
The answer, of course, is to analyze the performance data. This means looking at the specific details of how a process is implemented in practice and understanding which elements contribute to quality deviations and consistency problems, and which help improve the outcome.
Is that possible? Yes. Using multivariate data analytics, we can explore a number of variables, such as production or quality data for different operators, and find correlations that affect the outcomes. This might include material properties data, performance metrics for the product or its components, and test results from various stages of the manufacturing process.
Measuring operator variation
In looking for variables that affect the quality of a production process, a company might compare data from individual final assembly operators across a number of quality or evaluation metrics. For example, a manufacturer of analytical instruments reviewed the data from individual final assembly and quality assessment technicians in order to understand quality problems before releasing a new component (used in an instrument) to customers. This involved measuring a set of data including:
- metrology data (for the component)
- material properties data (for the component)
- performance metrics (for the component)
- spectra of standard samples (using the whole instrument)
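The dot-per-instrument "map" described below can be sketched with a simple principal component analysis (PCA) over such multivariate measurements. The following is a minimal illustration, not the manufacturer's actual method: the four columns stand in for the metric categories above, and all values and group sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-instrument quality measurements: rows = instruments,
# columns = metrics (metrology, material properties, performance, spectra).
# Each technician's instruments scatter around a common baseline, with a
# technician-specific amount of variability (Adam tight, Mike loose).
baseline = np.array([10.0, 5.0, 2.0, 0.8])
adam = baseline + rng.normal(0, 0.1, size=(125, 4))
roger = baseline + rng.normal(0, 0.3, size=(60, 4))
mike = baseline + rng.normal(0, 1.0, size=(125, 4))
X = np.vstack([adam, roger, mike])

# PCA via SVD: centre the data and project onto the top two components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # 2-D coordinates, one dot per instrument

print(scores.shape)  # (310, 2)
```

Each row of `scores` becomes one dot in a scatter plot; dots that land close together correspond to instruments with similar multivariate quality profiles.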
Let’s look at the results comparing the data from three technicians (given fictional names): Adam, Mike and Roger.
In this first graph (below), each dot represents a summarized view of the quality and performance data for one component/instrument. Dots that are close together represent instruments with similar quality profiles. Dots that are far apart correspond to instruments that perform very differently in quality and performance assessment. The instrument points are color-coded by technician: Adam in green, Mike in blue and Roger in red.
The data represents 300+ instruments, colored according to technician, with about 125 each for Adam and Mike, and the rest for Roger.
Comparing individual technicians
In the following three graphs, we can compare the individual differences in final assembly and performance data measurements for each technician.
This graph (above) shows that Adam achieves consistent final assembly of the component in the instrument. This is inferred from the closeness of the green points, which indicates that the measured quality and performance data is extremely stable in each case. Adam delivers a final product that is verifiably similar over time.
The next plot (above) shows Roger’s performance. In about two-thirds of the cases in which he is involved, Roger performs just like Adam; that is, the result of his work is a product with consistent quality. Roger does show some deviations in quality, but overall his quality profile aligns relatively well with Adam’s.
Finally, looking at the same criteria in a third plot (above) for Mike’s performance, we can see a huge variation in results. Mike’s performance is sometimes similar to Adam’s (a product with stable quality and performance) but often far from it (a product with highly variable quality and performance).
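One simple way to quantify the consistency that these plots show visually is to measure each technician's scatter around his own centroid in the score-plot coordinates. The sketch below uses invented 2-D coordinates (not the manufacturer's data) shaped to mimic the three patterns described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic score-plot coordinates per technician: Adam clusters tightly,
# Roger is mostly tight with some outliers, Mike scatters widely.
adam = rng.normal([0, 0], 0.1, size=(125, 2))
roger = np.vstack([rng.normal([0, 0], 0.1, size=(40, 2)),
                   rng.normal([0, 0], 0.8, size=(20, 2))])
mike = rng.normal([0, 0], 1.0, size=(125, 2))

def spread(points):
    """Mean Euclidean distance from each point to the group centroid."""
    centroid = points.mean(axis=0)
    return float(np.linalg.norm(points - centroid, axis=1).mean())

# Smaller spread means a more consistent practice.
for name, pts in [("Adam", adam), ("Roger", roger), ("Mike", mike)]:
    print(name, round(spread(pts), 3))
```

A single number like this makes it easy to track each technician's consistency over time, rather than judging the scatter plots by eye.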
Interpreting the results
After seeing results like this, a manager may wonder, what now? It may make sense to ask a few additional questions:
- Does the data point to specific areas of the process in which Mike’s performance could be adjusted?
- Should Mike be assigned other tasks?
- What changes are necessary to make Mike perform more like Adam?
- Should Adam's practices become a standard for the company (creating a set of "best practices")?
After exposing differences at the individual level, the data analytics model can be interrogated by examining contribution plots and similar graphs to understand which measurements are driving the performance differences between the technicians.
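As a rough illustration of the contribution-plot idea, one can compare two technicians' group means for each measurement, scaled by the pooled standard deviation, so the metric that drives the difference stands out. The metric names and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
metrics = ["metrology", "material", "performance", "spectra"]

# Synthetic per-instrument measurements for two technicians; here Mike
# deviates from Adam mainly in the "performance" metric (column 2).
adam = rng.normal([10.0, 5.0, 2.0, 0.8], 0.1, size=(125, 4))
mike = rng.normal([10.0, 5.0, 3.0, 0.8], 0.1, size=(125, 4))

# Contribution-style view: difference in group means for each metric,
# expressed in units of the pooled standard deviation of that metric.
pooled = np.vstack([adam, mike])
contrib = (mike.mean(axis=0) - adam.mean(axis=0)) / pooled.std(axis=0)

for name, c in zip(metrics, contrib):
    print(f"{name:12s} {c:+.2f}")
```

The metric with the largest bar in such a plot is the natural first place to look when deciding what, concretely, Mike should change about his practice.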
In assessing manufacturing processes, the evaluation is not always about identifying the points of highest or lowest quality, but about ensuring stability: quality that stays consistent over time. It may not be necessary to achieve the maximum every time, but rather to remain stable at a sustainable level of quality. That's what this dataset is all about. The goal is to find the points at which quality and practice intersect in a way that creates a sustainable (and economical) production process.
This kind of reasoning isn’t limited to technical products. It applies equally well to everyday consumer goods, ranging from binoculars to vacuum cleaners to shoes.
In general, a good understanding of the specific data points and metrics that affect quality, and of how individual deviations in practice can affect product consistency, is important for maintaining an efficient process and a quality product.
Want to know more?
Watch a recent webinar on using Six Sigma for quality control. (Registration required to watch the recorded webinar.)
Read more about using data analytics to optimize production processes.
Have a question?
Ask it below and we'll answer. Or leave a comment.
- S. Kwiecien and D. Wolford, “Gaining Real Value Through Best-Practice Replication: How Ford Motor Company Counts the Returns on Knowledge Efforts,” Knowledge Management Review 4 (March–April 2001): 12–15.
- APQC, “Measuring the Impact of Knowledge Management,” 2003. Accessed 3 Oct. 2018, ftp://public.dhe.ibm.com/services/us/gbs/bus/hcm/rbtt/ford.pdf
- E. Matson and L. Prusak, “The Performance Variability Dilemma,” MIT Sloan Management Review, Oct. 15, 2003, https://sloanreview.mit.edu/article/the-performance-variability-dilemma/