Casting is a process of pouring a liquid, typically metal or plastic, into a mold to produce a solid part. That's the theory, but in practice "solid" parts often contain voids or inclusions, which may or may not affect the performance of the finished part. Porosity can be bad, meaning the part will not perform as required due to reduced strength or open pores left after final machining. It can be acceptable, meaning either that the void will be removed by secondary processes such as machining or that it is internal and of a size and shape that will not affect performance. Or porosity can be desirable when it is created intentionally to reduce weight, though even then pore size can be critical. The challenge is differentiating bad porosity from acceptable or desirable porosity, and diverting bad parts before either spending additional production time on them or sending a sub-standard part to market.
The Ideal Inspection Process
If a void is on the outside of the part, it can be easily identified and evaluated visually. Dealing with internal voids is the challenge, and the technologies for identifying them involve either 2-D x-ray or 3-D computed tomography (3-D CT). The ideal inspection process is one that can:
1. inspect 100% of your cast parts and do so quickly enough to keep the production process moving
2. be repeatable, reproducible, and accurate enough to reject all unacceptable parts without falsely rejecting parts with non-problematic voids
3. achieve the above goals without requiring frequent recalibration or mastering
4. provide a 3-D representation of the scanned results to enable meaningful analysis
5. be capable of analyzing the part and classifying it as good or bad
Two-dimensional inspection is very limited in meeting all of these goals. Unless a part is flat, 2-D x-ray provides minimal information and can lead to rejection of perfectly acceptable parts. Today’s increasingly complex parts require 3-D CT in order to meet these goals. However, until now the first goal has been a problem even for 3-D systems. The good news is that the best 3-D CT systems can now fully examine a part as complex as an automotive engine cylinder head in as little as 90 seconds, a piston in just 30 seconds, and plastic medical components in just a couple of seconds. But among 3-D systems there is a great deal of variation in the accuracy they offer, their ease of use, and their real throughput when taking into account re-calibration and mastering needs.
Traditional CT
CT technology was originally developed to produce images of internal features, not to measure them with great accuracy. In medical applications, CT is used to diagnose disease, trauma, or abnormality, not to precisely measure what is seen; in most cases, the physician is either simply looking for the presence of a problem or wanting imagery on hand for an operation. In industrial applications, CT has traditionally been used for nondestructive testing (NDT) to determine the presence or absence of internal flaws. Parts with identifiable internal flaws were generally considered faulty, and the extent of the flaw wasn't an issue. For this reason, CT systems with a substantial margin of error in their measurements and significant "drift" between measurements were perfectly acceptable. Accurate and repeatable measurement requires a system designed specifically for it: a handheld scanner, for example, can produce realistic images, but accurate dimensional data requires a purpose-built coordinate measuring machine (CMM).
Crossing Over to High-Precision Measurement
One way of compensating for a CT system's margin of error is the addition of a 'fudge factor' to the numbers provided by the system. In other words, the system provides a reasonable approximation of the dimensions being measured, and system software adds a plus/minus range to compensate for the uncertainty of that measurement. Voids that fall within this expanded range are cause for rejection of the part. This is a conservative approach designed to eliminate the acceptance of possibly faulty parts. The problem with this "better-safe-than-sorry" approach is that it leads to the scrapping of castings that may actually be perfectly acceptable, and reducing process yield is an expensive way to prevent mistakes.
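The guard-band logic described above can be sketched in a few lines. This is a hypothetical illustration, not the logic of any real CT system; the function name and the millimeter values are assumptions chosen for clarity.

```python
# Hypothetical sketch of the "fudge factor" (guard-band) rejection logic:
# reject when a void *could* exceed the limit once the system's plus/minus
# uncertainty is added to the raw measurement. All values are illustrative.

def reject_part(measured_void_mm: float,
                max_allowed_void_mm: float,
                measurement_uncertainty_mm: float) -> bool:
    """Conservative accept/reject: assume the worst case within the
    measurement uncertainty."""
    worst_case = measured_void_mm + measurement_uncertainty_mm
    return worst_case > max_allowed_void_mm

# A 0.9 mm void against a 1.0 mm limit passes on the raw number,
# but a +/-0.2 mm guard band pushes it into rejection:
print(reject_part(0.9, 1.0, 0.2))  # True -> part scrapped, possibly needlessly
```

Note how the cost of uncertainty falls entirely on yield: the wider the guard band, the more genuinely good parts are scrapped.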
Drift, the lack of precise repeatability, is typically handled by frequently recalibrating the system to reduce variation in measurements or by continuous comparison to some standard of known dimensions. Frequent calibration can boost accuracy, but only by adding costly steps and slowing the entire measurement process. And while accuracy is improved immediately following recalibration, that accuracy is progressively lost over time. Another way to correct for drift or excessive margin of error is “scaling.” This is the continuous comparison of the part being analyzed to an item of known dimensions, using software to compensate for error found in the measurement of the known quantity. This works in theory; the problem is that the error rate in CT measurement at different scales and densities is non-linear. This is because x-rays are not monochromatic, resulting in scatter as different wavelengths encounter an edge.
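The "scaling" idea can also be sketched briefly. The snippet below is a simplified assumption of how such a correction works, using made-up numbers: it applies one linear correction factor derived from a reference artifact, which is exactly why it fails when, as the text notes, CT error is non-linear across scales and densities.

```python
# Illustrative sketch of "scaling": correct measurements by the error
# observed on a reference artifact of known size. Numbers are made up.

def scale_factor(measured_ref_mm: float, true_ref_mm: float) -> float:
    # Ratio by which the system over- or under-reads the reference.
    return true_ref_mm / measured_ref_mm

def corrected(measured_mm: float, factor: float) -> float:
    # Apply the same linear correction to every measurement.
    return measured_mm * factor

# If a 50 mm reference reads 50.5 mm, every reading is shrunk by that ratio:
f = scale_factor(50.5, 50.0)
print(round(corrected(25.25, f), 3))  # 25.0 -- correct only if error is linear
```

Because a single factor is applied everywhere, the correction is only as good as the linearity assumption behind it.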
In short, the only way to really produce fast, repeatable, accurate measurement results is to use a system that physically ensures accurate measurements of parts or pores without the need for compensation. The best way to determine the capability of a system is by direct observation while evaluating systems, preferably under the most challenging conditions a system will face in actual use. Measure parts of significantly different sizes, and repeat the process measuring smaller, larger, and again smaller parts. Observe the required setup and evaluate the repeatability and the accuracy of resulting measurements by comparing them to results obtained with known systems such as a CMM.
Why NDT Systems Don’t “Cross Over”
Measurement systems originally designed for NDT, while perfectly adequate for their original purpose, fall short in several areas when it comes to tight-tolerance measurement. Most were designed to allow a wide range of positional adjustment in x-ray source, test piece, and x-ray detector to accommodate test pieces of varying sizes. This flexibility comes at a substantial cost: the added degrees of freedom introduce significant sources of error and reduce system accuracy, and they are often unnecessary. In many cases the real benefit of this "flexibility" is letting manufacturers repurpose existing systems and achieve economies of scale in building them. The truth is that metrological systems claiming to "do it all" pay the price in significantly reduced accuracy.
Another potential source of error is the rotary platform that turns the part during testing. This is a component that has to move during testing, but the steadiness and predictability of that movement depends on the bearing technology on which the platform turns. High accuracy rotary axes cost more, but provide more accurate positioning during measurement. The amount of movement caused by individual, adjustable components and lower-grade rotary axes may be small, but collectively they add significant error in the measurement of parts and defects. It doesn’t take much error to send a perfectly good part to the scrap heap, especially when the system is overstating the measured error “just to be safe.”
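To see how "small" errors become significant collectively, consider a simple error budget. Combining independent error sources by root-sum-square is a common metrology convention; the micron values below are illustrative assumptions, not figures for any particular system.

```python
# Hedged sketch of error stack-up: several small, independent error
# sources (adjustable axes, a lower-grade rotary bearing) combined by
# root-sum-square. All values are illustrative assumptions.
import math

def combined_error_um(sources_um) -> float:
    """Root-sum-square combination of independent error sources (microns)."""
    return math.sqrt(sum(e * e for e in sources_um))

# Four "small" 5-micron contributors already yield a 10-micron budget:
print(combined_error_um([5.0, 5.0, 5.0, 5.0]))  # 10.0
```

Ten microns of stacked error, doubled again by a "just to be safe" guard band, is easily enough to condemn a good casting.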
Temperature variation can also impact accuracy. A change of even a few degrees can significantly throw off results as components of the CT system expand or contract. When the goal was simply to flag internal flaws in an NDT application, without quantifying them, the error introduced by a temperature change did not affect results. Porosity, on the other hand, is measured in much smaller increments. And while systems can be recalibrated periodically to account for temperature change, that recalibration takes the system out of operation and reduces throughput. Systems designed to minimize or eliminate the impact of temperature change cost more, but they are much more cost effective in operation and produce reliable, repeatable results.
Final Decisions
While system software cannot adequately compensate for decreased physical accuracy of a system, it can evaluate results and assist in the sometimes-complicated determination of part quality. The ideal standard by which a part should be measured is its original CAD design. Fortunately, computers can easily understand the 3-D CAD models from which molds are made and can compare the digitized results of a CT scan to those models, pointing out all deviations from the original model and highlighting those that exceed the acceptable range. This can be a critical feature of any CT system, but it only works if the comparison is based on accurate data from the CT system.
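The scan-to-CAD comparison amounts to flagging surface points whose deviation from the nominal model exceeds tolerance. The sketch below makes the simplifying assumption that nominal and measured points are already paired up (a real system first aligns full meshes); all names and coordinates are illustrative.

```python
# Minimal sketch of scan-to-CAD comparison: flag points whose deviation
# from the nominal model exceeds tolerance. Assumes nominal and measured
# points are pre-matched pairs (an illustrative simplification).
import math

def deviations(nominal_pts, measured_pts):
    # Euclidean distance between each nominal/measured point pair.
    return [math.dist(n, m) for n, m in zip(nominal_pts, measured_pts)]

def out_of_tolerance(nominal_pts, measured_pts, tol_mm: float):
    # Indices of points deviating beyond the acceptable range.
    return [i for i, d in enumerate(deviations(nominal_pts, measured_pts))
            if d > tol_mm]

nominal  = [(0, 0, 0), (10, 0, 0), (10, 10, 0)]
measured = [(0, 0, 0.02), (10, 0, 0.3), (10, 10, 0.01)]
print(out_of_tolerance(nominal, measured, 0.1))  # [1]
```

The comparison is only as trustworthy as the measured coordinates fed into it, which is the article's point: accurate input data comes first.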
Evaluating Systems
Obviously, not all applications require the highest degree of accuracy and throughput. For those that do, there are several steps in determining the performance of a system. The first is to define your needs in terms of:
- the size range of the parts you will evaluate
- the acceptable degree of porosity
- whether you need 100% testing of castings
- how fast your line operates
- how continuous your testing process must be
- the cost of false rejections
- the cost of slippage
- your budget for test equipment
- the cost of testing a single part, including preparation time, calibration, and scan time
- your operational costs, including the cost of false rejects (overkill) and of defective parts that pass inspection (escapes).
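The factors above can be tied together in a rough annual cost model. Every rate and dollar figure below is a hypothetical assumption for illustration, not industry data.

```python
# Rough cost model combining the factors listed above: per-test cost,
# overkill (false rejects), and escapes (bad parts that pass).
# All rates and costs are hypothetical assumptions.

def annual_inspection_cost(parts_per_year: int,
                           cost_per_test: float,
                           overkill_rate: float,
                           escape_rate: float,
                           cost_per_scrapped_good_part: float,
                           cost_per_escape: float) -> float:
    testing  = parts_per_year * cost_per_test
    overkill = parts_per_year * overkill_rate * cost_per_scrapped_good_part
    escapes  = parts_per_year * escape_rate * cost_per_escape
    return testing + overkill + escapes

# 500k parts/yr, $0.50 per scan, 1% overkill at $40 per scrapped part,
# 0.05% escapes at $2,000 each -- roughly $950,000 per year:
print(annual_inspection_cost(500_000, 0.50, 0.01, 0.0005, 40.0, 2000.0))
```

A model like this makes it easy to see that a more accurate (and more expensive) system often pays for itself through lower overkill and escape rates alone.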
Armed with that information, you can begin evaluating systems. Keep in mind that performance in a laboratory or other controlled setting is not necessarily what you will find on the production line. Use your own parts for testing, and unless you are willing to frequently recalibrate an inline system, require that your parts be measured without prior calibration. Monitor the results of continuous testing to ensure repeatability and reproducibility, and make sure you know how wide a range is being used to define acceptable parts and how that range is determined.
Check both data and demonstrated performance, and find out what degree of repeatability the vendor is willing to guarantee in operation. Check how the system adapts to temperature change and what calibration is required to keep it accurate over time. Find out whether software is used to compensate for variations in measurement. And finally, base your financial decisions on the "big picture," incorporating labor costs, productivity, and the avoidable costs of scrap and escapes along with equipment cost.
The good news is that today’s advanced 3-D CT technology can let you inspect 100% of the cast parts coming off of your line without slowing down your operation. It can do that consistently with little or no adjustment or calibration. And it can do all that accurately, ensuring that the parts you pass are free of problematic voids and that acceptable parts don’t get scrapped in the name of caution by a system that can’t tell the difference.