Materials Characterization & Testing: The Challenge of Holding Tight Rockwell Test Result Tolerances
As manufacturers strive to improve the quality of their products, there is a corresponding need to improve the accuracy of the measurements used to control or monitor that quality. Rockwell hardness testing is one of those measurements. In an effort to reduce variations in the performance of their products, some users are attempting to hold the Rockwell results on critical parts to +/-1.0 Rockwell point. This article discusses the realities of trying to hold such a tight tolerance and provides some insight on how to do it.
Our discussion will focus on the popular HRC scale for the following reasons:
- We know what the "true" HRC hardness levels are since NIST holds the HRC standard and transfers those standards to industry by the sale of Standard Reference Material (SRM) test blocks. All of the manufacturers of test blocks trace to NIST by using the SRMs. Focusing on HRC eliminates the variations inherent in all of the other Rockwell scales due to the fact that the different test-block manufacturers create and maintain their own standards for those scales.
- The materials at the high end of the HRC range tend to be more predictable, which helps us greatly when we want to maximize the performance of the Rockwell test.
Understanding What You Are Trying to Measure
The Rockwell test is a depth-of-penetration measurement in which one regular-scale Rockwell point equals 0.002 mm (0.000080 in.) of penetration depth of the indenter into the surface of the material being tested. Digital testers display to a resolution of 0.1 point, which represents 8 millionths of an inch. These are very small displacements, and it is a tribute to the robust nature of the test principle, developed by Stanley Rockwell over 85 years ago, that we can expect better than 80-millionths-of-an-inch measurement repeatability on a daily basis.
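The depth-to-hardness relationship above can be sketched in a few lines. This is a minimal illustration, not taken from the article; the "100 minus depth" convention applies to the regular diamond-indenter scales such as HRC:

```python
def hrc_from_depth(depth_mm):
    """Convert permanent penetration depth (mm) to an HRC value.

    For regular diamond-indenter Rockwell scales, the hardness number is
    100 minus the depth expressed in 0.002-mm (one Rockwell point) units.
    """
    return 100.0 - depth_mm / 0.002

# One Rockwell point corresponds to 0.002 mm of additional penetration:
print(round(hrc_from_depth(0.074), 1))  # 63.0 HRC
print(round(hrc_from_depth(0.076), 1))  # 62.0 HRC
```

Note how small the displacements are: the difference between 63 and 62 HRC is only 0.002 mm of indenter travel.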
Items to be Considered
There are three areas that must be understood when trying to hold tight Rockwell hardness levels.
- How close is your result to the true value?
- What is the magnitude of the variations you should expect within your lab and between your lab and other labs?
- How repeatable and stable are your testers, and how reliable is your testing process?
Fortunately there are several tools available to help the user understand the answers to these questions.
Uncertainty Calculations - How Confident Are You in Your Answer?
Every measurement you make has an uncertainty associated with the result, and Rockwell testing is no exception. Uncertainty calculations can be confusing. Fortunately Appendix X2 of ASTM Test Method E18-05 for Rockwell testing defines an excellent method to calculate the uncertainty of the many different steps in the Rockwell testing process, including the final result. The process and formulas defined in the standard are valid for all of the Rockwell scales.
The formula used to determine the uncertainty of a typical end user's result takes into consideration several key items:
- The uncertainty of the NIST SRMs that are used to transfer the hardness level to industry
- The uncertainty and bias of calibrations necessary to verify the performance of the hardness tester
- The performance of the tester short and long term
- The resolution of the tester's display
The example given in ASTM E18 for a 30 HRC hardness range material using an analog tester resulted in a calculated uncertainty of +/-1.42 HRC. Therefore, in this example the user could say with a 95% confidence level that the true hardness of a material reading 30 HRC is somewhere between 28.58 and 31.42. This would make it impossible to accurately monitor a tolerance of +/-1.0 HRC in the 30 HRC range.
Exhibit 1 shows an example of what the result might be in the 63 HRC range under good conditions using a digital tester. In this example the results indicate an uncertainty of +/-0.63. This is a much better result, but there is still a significant amount of uncertainty if your tolerance is 1 Rockwell point.
The uncertainty statement allows us to understand how close (or far) our test result may be from the "true" Rockwell hardness of the material. These statistically valid numbers are fairly large, but how well do testers perform in the real world? Again, ASTM E18 helps us out. The Precision and Bias study results (Ref. ASTM RR: E28-1021) shown in the latest versions of E18 are a good indication of the precision you can expect both within your lab and between your lab and another lab. An ASTM Precision and Bias study is essentially a round-robin study in which several labs test the same materials the same way, using a variety of testers representative of the real world. As expected, the best results were obtained at the high end of the HRC scale. The study showed that the variations within labs were +/-0.43 and the variations between labs were +/-1.09. With real-world variations of this magnitude, you can see that a 1-point tolerance cannot be held without extra steps to keep the results tighter.
To have a chance to hold tight tolerances, your tester must be performing at a very high level. Gage repeatability and reproducibility (GR&R) studies are now a popular way for users to understand the performance of their measuring instruments. A GR&R study normally includes three operators, 10 test samples and three tests on each sample by each operator. See Exhibit 2 for GR&R results obtained on a modern closed-loop tester at the 63 HRC level using a typical tolerance of +/-3.0. The result of 4.49% is considered very good and well within the desired 10% for GR&R testing. It means that the tester's variations consumed 4.49% of the 6-point tolerance, or 0.27 point. If you reduce the tolerance to +/-1.0, the GR&R result changes to 13.46%; the 0.27-point spread, however, stays the same. It should be noted that this number includes some material variation inherent in the test blocks used in the study. If your material has more variation, the number will increase. This result is reasonable, and it was obtained using a modern, top-performing, closed-loop instrument (Fig. 1) under ideal conditions. Older, dead-weight units will typically produce results three to five (or more) times higher, making the task of holding a tolerance of +/-1 point increasingly difficult if not impossible.
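The tolerance-scaling arithmetic in the paragraph above can be checked directly. In this sketch, the 0.2692-point gage spread is back-computed from the article's 4.49% figure; the key point is that the spread is a property of the tester and does not shrink when the tolerance does:

```python
def grr_percent(gage_spread, tolerance_width):
    """Percent of the total tolerance band consumed by gage variation."""
    return 100.0 * gage_spread / tolerance_width

# Gage spread in HRC points (back-computed from the 4.49% result in the text)
spread = 0.2692

print(round(grr_percent(spread, 6.0), 2))  # +/-3.0 tolerance -> 4.49
print(round(grr_percent(spread, 2.0), 2))  # +/-1.0 tolerance -> 13.46
```

The same ~0.27-point spread that looks excellent against a +/-3.0 tolerance consumes well over the desired 10% when the tolerance is tightened to +/-1.0.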
How's Your Testing Program?
If all of the above items have been considered and meet expectations, the next concern is the actual performance of the test by your operators. The best Rockwell tester in the world cannot give correct results if it is not well-maintained and properly used. Normal maintenance must be done according to the manufacturer's recommendations. The verification frequency defined in ASTM E18 is intended for instruments in normal use. To hold ultra-tight tolerances, indirect verifications must be done more often, at least every three to six months. You may also want to have direct verifications done annually. Direct verifications, which involve verifying the force and displacement performance of the tester, are not normally done in the field, but they can help tremendously to maintain the performance of an instrument.
The technique used by the operator must be correct and consistent from day to day. They must make sure that the anvil is clean, properly seated and capable of supporting the samples so that there is no lost motion that could result in bad readings (Fig. 2). The indenter must be in good condition and properly seated in its holder. The test surface of the sample must be clean and free from any scale or decarb that may affect the results. The better the surface finish, the better the results. The bottom surface must also be clean and capable of supporting the test force.
How Can it be Done?
The older Rockwell units are very durable but have enough variation to make holding a 1-point tolerance difficult if not impossible. Fortunately, with today's technology the instruments can perform much better and increase your chance of achieving tight tolerance limits.
Here are a few pointers:
- Minimize uncertainty by using only a digital tester.
- Eliminate the calibration bias by applying a correction factor to get closer to the true value. You will probably need a separate correction factor for each indenter you use.
- If possible use only one tester that has known performance and a GR&R result less than 10% of the tolerance you want to hold. Protect that tester from damage by other users.
- If you have to use more than one tester or lab, transfer blocks between the labs to prove that they are getting the same results. You may also want to transfer the indenter(s).
- Plot the performance of the tester daily so you can detect any changes as soon as they happen.
- Perform regular maintenance and ASTM-required verifications.
- Teach your operators the proper testing technique and make sure that they do it right every day.
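The daily-plotting pointer above amounts to keeping a simple control chart on a check block. A minimal sketch follows; the baseline readings and +/-3-sigma limits are illustrative assumptions, not from the article:

```python
def control_limits(readings):
    """Mean and +/-3-sigma control limits from baseline check-block readings."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / (n - 1)  # sample variance
    sigma = var ** 0.5
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Baseline: daily verification readings on a 63 HRC check block (illustrative)
baseline = [63.1, 63.0, 63.2, 63.1, 63.2, 63.0, 63.1]
lower, mean, upper = control_limits(baseline)

# Flag any new daily reading outside the limits as a possible tester change
todays_reading = 62.6
if not (lower <= todays_reading <= upper):
    print("Tester drift suspected: investigate before testing product")
```

Plotting each day's check-block reading against fixed limits like these makes a shift in tester performance visible the day it happens rather than at the next scheduled verification.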
The bottom line is that holding a +/-1.0 Rockwell hardness tolerance is a challenging task when all of the possible variations are taken into consideration. To accomplish it, you must have the best equipment along with properly defined and executed testing and operator-training programs.
What Does it All Mean?
The remaining question is whether holding a +/-1.0 Rockwell point tolerance has any real benefit or meaning. Is it possible to determine the effect of a one-Rockwell-point variation on the performance of a product? Tensile tests typically have uncertainties well beyond the range of one Rockwell point. Can abrasion/wear resistance be measured closely enough to tell a difference? These answers are left to the user. Hopefully this article has helped you understand what it takes to get meaningful Rockwell hardness results with the highest precision possible.
For more information: Ed Tobolski, Hardness Product Manager, Wilson/Shore Instruments, An Instron Company; ph.: 781-575-5840; fax: 781-575-5770; email: email@example.com
Additional related information may be found by searching for these (and other) key words/terms via BNP Media LINX at www.industrialheating.com: Rockwell hardness, tensile, depth of penetration
Exhibit 1: The uncertainty of a typical series of HRC 63 range Rockwell tests
The items that contribute to the uncertainty of a Rockwell test are as follows:
- The Rockwell tester's lack of repeatability (short term)
- The Rockwell tester's lack of reproducibility (long term)
- The resolution of the hardness display
- The error of the tester's calibration during indirect verification
- The uncertainty of the error determination
Assume the following to be a typical example of a series of tests at the 63 HRC level (Reference ASTM E18 Appendix X2):
Five tests are performed on a localized area of one sample with the following results: 63.1, 63.0, 63.2, 63.1 and 63.2. The average is 63.12.
- From the test results the tester's uncertainty due to short-term repeatability is 0.038.
- The tester resolution uncertainty is 0.029 for a digital tester with 0.1 display resolution.
For the purpose of the uncertainty calculation:
- Assume that the long-term reproducibility uncertainty is 0.125
- Assume that the calibration error is 0.15
- Assume that the uncertainty of the calibration error is 0.2
The results of the uncertainty calculation would therefore be:
u_meas = √(0.038² + 0.125² + 0.029² + 0.2²) = 0.24
The expanded uncertainty (U) for one test is the standard uncertainty (u) times the coverage factor (2) plus the calibration error of 0.15:
U_meas = 2 × 0.24 + 0.15 = +/-0.63 HRC
As a result of this calculation, you could say with a confidence level of 95% that the average hardness of the sample, in the area of the tests, is between 62.49 and 63.75 (63.12 +/- 0.63).
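The worked example above can be reproduced end to end. This is a sketch following the ASTM E18 Appendix X2 procedure as summarized in the text; the 0.125, 0.15 and 0.2 inputs are the assumed values listed above:

```python
import math

readings = [63.1, 63.0, 63.2, 63.1, 63.2]
n = len(readings)
mean = sum(readings) / n                           # 63.12

# Short-term repeatability: standard deviation of the mean, s / sqrt(n)
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
u_repeat = s / math.sqrt(n)                        # ~0.037 (quoted as 0.038)

# Display resolution of 0.1 point, modeled as a uniform distribution
u_resolution = 0.1 / (2 * math.sqrt(3))            # ~0.029

u_reproducibility = 0.125   # assumed long-term reproducibility
bias = 0.15                 # assumed calibration error
u_bias = 0.2                # assumed uncertainty of the calibration error

# Combined standard uncertainty (root-sum-square of the components)
u_meas = math.sqrt(u_repeat**2 + u_reproducibility**2
                   + u_resolution**2 + u_bias**2)  # ~0.24

# Expanded uncertainty: coverage factor of 2, plus the calibration bias
U = 2 * u_meas + bias                              # ~0.63
print(round(mean - U, 2), round(mean + U, 2))      # 62.49 63.75
```

Note that the 0.2 uncertainty of the calibration error dominates the combined value, which is why tight calibration is the first place to look when the expanded uncertainty must shrink.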