Developing a Supplier Audit for Quality
Laura B. Forker, Ph.D., C.P.M., Assistant Professor, Boston University, Boston, MA 02215, 617-353-4164.
James C. Hershauer, Ph.D., Professor, Arizona State University, Tempe, AZ 85287
81st Annual International Conference Proceedings - 1996 - Chicago, IL
During the past 10-20 years, customer expectations of product and service quality have escalated. Industry has responded by focusing efforts and resources on controlling, assuring, and managing quality in order to meet the challenge of global competition and to satisfy customer requirements. Quality management, the synthesis of quality control, quality assurance, and the other planning and control activities that help formulate and implement a firm's quality policy, has received particular attention. Quality management has long-term potential for enhancing a firm's quality performance and, ultimately, its profitability and market share. Quality performance is seen as a cornerstone of overall firm performance.
Measuring the Critical Factors of Quality Management.
To effectively manage operations for the improvement of quality performance, a reliable and valid instrument for measuring quality management practices is needed. Besides the Malcolm Baldrige National Quality Award assessment form, only two known studies have constructed measuring instruments for assessing quality management implementation. The items for these survey instruments were collected from the prescriptions of quality management scholars and gurus and from other surveys, and were judgmentally grouped into "critical factors of quality management." Because the Saraph, Benson, and Schroeder instrument was the first comprehensive survey instrument to be published, it was chosen as the foundation instrument for the present study's surveys. Other researchers have also used the Saraph, Benson, and Schroeder instrument in their efforts to improve measures of quality management. In fact, many of Flynn et al.'s constructs correspond directly to constructs in the Saraph, Benson, and Schroeder instrument. There are parallels with the Malcolm Baldrige award measures as well. These parallels are illustrated in Exhibit 1. (See next page.)
While comprehensive in terms of the quality management practices examined, the Saraph, Benson, and Schroeder study was limited by the small number of participant firms (20), all of which were rather large (at least 1,000 employees per firm), and by the survey's implementation in one metropolitan area (Minneapolis-St. Paul). The intent of the study - to construct an instrument that measures implementation of the critical factors of quality management - is important for researchers and practitioners alike. However, the universality of their results needs further exploration.
Our study replicated the Saraph, Benson, and Schroeder survey of quality management practices in a large sample of firms of all sizes across North America to see if the same reliability and validity results could be obtained for the instrument. As Nunnally [17, p. 3] has stated: "One of the primary aspects of standardization requires that different people who employ the measuring instrument, or supposedly alternative measures of the same attribute, should obtain similar results." Two industries - electronic components and aerospace components - were surveyed. These industries were chosen because of the critical importance of component quality to the performance (and safety) of the finished goods into which the components are incorporated. Results from the two industry surveys are reported separately to isolate the potentially confounding influence of industry type.
Saraph, Benson, and Schroeder identified eight critical factors of quality management: Role of management leadership and quality policy, Role of the quality department, Training, Product/service design, Supplier quality management, Process management, Quality data and reporting, and Employee relations. They examined these factors for reliability, item-to-scale correlations, content validity, criterion-related validity, and construct validity. Construct validity was evaluated by factor analyzing the measurement items assigned by the researchers to each individual scale. For seven of the eight scales, the items assigned formed a single factor. Items for the Process management factor, however, split into two factors: the first explained 56% of the variance, and the second explained 28%.
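The single-factor check described above can be sketched as a principal components analysis of each scale's item correlation matrix: if a scale's items form one factor, the first component should absorb most of the variance. A minimal illustration in Python, using simulated respondent data rather than the study's own:

```python
import numpy as np

def variance_explained(items):
    """Proportion of variance captured by each principal component
    of a (respondents x items) matrix of scale responses."""
    corr = np.corrcoef(items, rowvar=False)            # item correlation matrix
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # largest eigenvalue first
    return eigvals / eigvals.sum()

# Simulated scale: 200 respondents, 4 items driven by one latent factor,
# so a single component should dominate (a unidimensional scale).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 4))
props = variance_explained(items)
print(props.round(2))  # first proportion is by far the largest
```

A scale whose items split into two factors, as Process management did, would instead show two sizable proportions before the drop-off.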
Exhibit 1: Parallels Among Quality Management Measures
(/) designates the breaks between the three columns of measurements: Malcolm Baldrige / SBS / Flynn et al.
Leadership / Role of Top Management & Quality Policy / Top Management Support
Strategic Quality Planning / /
Information & Analysis / Quality Data & Reporting / Quality Information
Human Resource Development & Management / Employee Relations / Work Force Management
/ Training /
/ Product/Service Design /
Management of Process Quality / Process Management-Operating Procedures / Process Management
Customer Focus & Satisfaction / Supplier Quality Management / Supplier Involvement
Quality & Operational Results / Role of the Quality Department / Customer Involvement
After examining the items that loaded onto the second Process management factor, we decided not to use those questions on our survey instrument. The items either repeated other questions that loaded onto the first factor or asked about practices that were rather outdated. Including the items dropped by Saraph, Benson, and Schroeder to improve the reliability of their instrument, a total of 16 items from the original research instrument were deleted, leaving 62 quality management practices in the current study's survey instrument. (See the Appendix for a listing of these quality management practices and the abbreviations used for the corresponding variables.)
A second modification to the original instrument involved the scale for measuring the implementation of each practice. Saraph, Benson, and Schroeder utilized a scale that evaluated the level of use (from very low to very high) of current and ideal practice of each item. We used a scale measuring "current extent of use" (from "No Extent" to "Great Extent") of each practice to clarify the way in which quality management should be interpreted and measured. As Tustin [19, p. 46] points out:
Degree and level of quality are quite different conceptually and represent dual use of the same word. In the case of the degree of quality, the word quality primarily means conformance to requirements. Because deviations from those requirements are generally defects, a low degree of quality results in increased costs from rework or rejection. In the case of level of quality, the word quality refers mainly to the nature of the design, such as number of features included; therefore, the higher the level of quality (e.g., more features) the more likely the acceptable price will be higher.
Because component product quality is typically measured by manufacturers in terms of conformance to requirements (specifications), we desired a scale for the perceptual measures of quality management practice that would also reflect "degree" of quality (i.e., conformance). Therefore, "current extent of use" was used. Five-point scales were chosen over seven-point scales based on the findings of Lissitz and Green, who showed through a Monte Carlo simulation that improvement in reliability (as evidenced by coefficient alpha) levels off after five scale points.
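Coefficient alpha, the reliability measure referred to here (Cronbach, 1951), can be computed directly from a matrix of responses. A brief sketch with made-up five-point responses, not data from the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a (respondents x items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents rating four items on a 1-5 scale; the items largely
# agree across respondents, so internal consistency should be high.
data = np.array([
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(data), 2))  # → 0.96
```

Values near 1 indicate that the items measure a common underlying construct; the Lissitz and Green simulation examined how this statistic behaves as the number of scale points grows.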
The firms that answered this survey are suppliers to two large fabricators of finished goods and subassemblies. The customer firms come from different high-tech industries (electronics and aerospace) with virtually no overlap in their supply bases. Both firms are divisions of large multinational organizations and engage in transactions with other divisions of their corporations as well as with outside suppliers. To minimize translation and interpretation difficulties with the survey instrument, only North American suppliers of direct materials to these firms were surveyed. Both customer firms operate in highly competitive environments.
Survey instruments were mailed in the summer of 1992 to 421 suppliers of the electronics customer firm and to 348 suppliers of the aerospace customer firm. Two hundred ninety-two usable responses were received from the electronics suppliers, a usable response rate of 69%, and 264 usable responses were received from the aerospace suppliers, a usable response rate of 76%. There was no overlap between responses from the two supplier populations. Electronics firms ranged in size from 3 to 25,000 employees, with an average firm size of about 551 workers; aerospace firms ranged in size from 3 to 3,700 employees, with an average firm size of about 256 workers. Respondents were located across North America and represented a variety of component product lines.
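The reported response rates follow directly from the counts above:

```python
# Usable responses divided by surveys mailed, as reported in the text.
electronics_rate = 292 / 421
aerospace_rate = 264 / 348
print(f"{electronics_rate:.0%} {aerospace_rate:.0%}")  # → 69% 76%
```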
Results obtained from administering the Saraph, Benson, and Schroeder research instrument to two different industry samples - samples that included firms from across North America and firms of a smaller average size than those surveyed in the original study - provide mixed support for the reliability and validity of the instrument. Factor analyses of the individual constructs, as performed in the original study, showed all constructs to be reliable, but only six scales demonstrated construct validity across both industry samples. Principal components analyses of all 62 items revealed only one of the constructs (Role of top management and quality policy) to have discriminant validity across both industries. Some other constructs had discriminant validity in one sample but not in the other. Reliability analyses of the new factors yielded equivalent or higher coefficient alphas for all but two (electronics sample) and three (aerospace sample) constructs, compared with the alphas for the original constructs. Correlation analysis of the original constructs with an independent measure of quality performance revealed no significant relationship between the quality management practices and actual performance in either industry, casting doubt on the criterion-related validity of the constructs. However, because the practices were selected from the quality management literature, the Saraph, Benson, and Schroeder constructs do exhibit classical content validity.
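The criterion-related validity check amounts to correlating each construct's score with the independent performance measure and testing whether the coefficient differs significantly from zero. A hedged sketch with simulated data (the study's actual performance data are not reproduced here, and the variable names are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 250  # roughly the size of each industry sample

# Simulated construct score (mean of a scale's items per firm) and an
# independent quality-performance measure with no built-in relationship,
# mimicking the null result reported in the text.
practice_score = rng.normal(size=n)
performance = rng.normal(size=n)

r, p = pearsonr(practice_score, performance)
print(f"r = {r:.2f}, p = {p:.3f}")
```

A p-value above the conventional 0.05 threshold, as found here for every construct, means the correlation cannot be distinguished from zero - the pattern that led us to question criterion-related validity.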
Table 1: Suggested Revised Instrument for Measuring Quality Management
Factor / Number of Items / Items
Role of the Quality Department / 5 / QD1, QD2, QD3, QD4, QD5
Role of Top Management in Quality / 9 / TM1, TM4, TM5, TM6, TM7, TM8, TM9,
Product/Service Design / 5 / PD1, PD2, PD3, PD4, PD5
Quality Data and Reporting / 6 / QR1, QR2, QR3, QR4, QR5, QR6
Supplier Relationship Management / 4 / SM4, SM5, SM6, SM7
Automation for Quality / 4 / PM2, PM4, PM5, PM7
Emphasis on Quality to Employees / 8 / T1, T2, T3, T4, T7, T8, ER1, ER2
Analysis and Monitoring for Quality / 6 / A1, A2, A3, A4, A5, A6
Conclusion: A Suggested Revised Instrument
Continuous improvement of quality will be effective only if an accurate means of measuring quality management practices can be formulated and validated in different industry settings. The Saraph, Benson, and Schroeder research instrument is one of the few comprehensive instruments that addresses this purpose. The present study sought to validate that instrument in two high-tech industry settings. The results are mixed. Alternative measures of quality management not included on the instrument may need to be added to future surveys to capture those practices that influence quality performance. However, the Saraph, Benson, and Schroeder instrument is a good foundation from which future instruments can be refined. A suggested revised survey instrument is offered as our conclusion.
Based on our results and an additional review of the eight constructs in the original Saraph, Benson, and Schroeder instrument, we propose a revised research instrument for measuring quality management practice. Additional research will be needed to further validate the revised instrument. The revised instrument contains eight constructs: one is unchanged from the SBS instrument, three involve minor modifications, two involve significant revisions and renaming, one combines items from two of the prior constructs, and one is a new construct called "Analysis and Monitoring for Quality."
Table 1 summarizes the suggested revised instrument for use in industry and for further analysis by researchers. With 47 items, the revised instrument is more parsimonious. It is also more clearly focused on organizational factors that directly affect quality, as the new list of factor names demonstrates.
The revised quality management instrument is particularly useful to firms that are evaluating and rationalizing their supply bases. As part of a supplier certification and quality improvement program, the instrument can be sent to existing and potential suppliers to assess the firms' scope and depth of quality management practice. Since quality has become an order qualifier in today's competitive environment, identification of practices that can lead to improved performance is an important consideration for industry.
- Aronson, E., Ellsworth, P.C., Carlsmith, J.M., & Gonzales, M.H.; (1990). Methods of research in social psychology. New York: McGraw-Hill.
- Cattell, R.B. (1966). The meaning and strategic use of factor analysis. In Handbook of multivariate experimental psychology, ed. R.B. Cattell. Chicago: Rand-McNally.
- Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
- Crosby, P. (1979). Quality is free. New York: Hodder & Stoughton.
- Deming, W.E. (1982). Quality, productivity, and competitive position. Cambridge, Mass.: MIT Center for Advanced Engineering.
- Dillman, D.A. (1978). Mail and telephone surveys: The total design method. New York: John Wiley & Sons.
- Feigenbaum, A.V. (1983). Total quality control: Engineering and management. New York: McGraw-Hill.
- Flynn, B.B., Schroeder, R.G., and Sakakibara, S. (1993). Relationship between quality management practices and performance: A path analytic approach. 1993 Proceedings: Decision Sciences Institute, 1858-1860.
- Flynn, B.B., Schroeder, R.G., and Sakakibara, S. (1994). A framework for quality management research and an associated measurement instrument. Journal of Operations Management, 11, 339-366.
- Garvin, D.A. (1983, September-October) Quality on the line. Harvard Business Review, 65-75.
- Garvin, D.A. (1988). Managing quality. New York: The Free Press.
- Ishikawa, K. (1985). What is total quality control? The Japanese way. New York: Prentice-Hall.
- Juran, J.M., & Gryna, F.M. (Eds.). (1988). Juran's quality control handbook (4th ed.). New York: McGraw-Hill.
- Kim, J.O., & Mueller, C.W. (1978). Factor analysis: Statistical methods and practical issues. Newbury Park: Sage Publications.
- Lissitz, R.W., & Green, S.B. (1975). Effect of the number of scale points on reliability: A Monte Carlo approach. Journal of Applied Psychology, 60(1), 10-13.
- Nunnally, J.C. (1967). Psychometric theory. New York: McGraw-Hill.
- Rose, F. (1995, February 24). Japanese companies are angling for niche in aerospace industry. The Wall Street Journal, p. B7A.
- Saraph, J., Benson, P.G., & Schroeder, R. (1989). An instrument for measuring the critical factors of quality management. Decision Sciences, 20, 810-829.
- Tustin, C.O. (1992). The operationalization of service quality using quality dimensions and expectation/perception gap analysis. Unpublished doctoral dissertation, Arizona State University, Tempe.
- United States Department of Commerce. (1993). The Malcolm Baldrige National Quality Award: 1993 examination. Technology Administration, National Institute of Standards and Technology.
APPENDIX: LIST OF VARIABLES USED IN ANALYSIS
|NAME||DESCRIPTION||TYPE OF DATA|
|TM1||Assumption of responsibility for quality performance by the most senior executive responsible for profit and loss||Interval|
|TM2||Acceptance of responsibility for quality by major department heads||Interval|
|TM3||Importance of quality results in top management performance appraisals||Interval|
|TM4||Support by top management of long-term quality improvement process||Interval|
|TM5||Participation by department heads in the quality improvement process||Interval|
|TM6||Degree to which objectives are set by top management for quality performance||Interval|
|TM7||Specific quality goals set within the firm||Interval|
|TM8||Extent of functional involvement (# of functions included) in the firm's quality goal-setting process||Interval|
|TM9||Clear understanding by top management of quality goals and policy set within the firm||Interval|
|TM10||Importance attached to quality by top management in relation to costs and schedules||Interval|
|TM11||Review of quality issues in top management meetings||Interval|
|TM12||Belief by top management that quality improvement is a way to increase profits||Interval|
|TM13||Existence of a comprehensive quality plan in the firm||Interval|
|PD1||Thorough new product/service design reviews before the product/service is produced and marketed||Interval|
|PD2||Coordination among affected departments in the product/service development process||Interval|
|PD3||Emphasis on quality of new products/services instead of cost or schedule objectives||Interval|
|PD4||Unambiguous product/service specifications and procedures||Interval|
|PD5||Consideration of implementation/producibility in the product/service design process||Interval|
|PD6||Emphasis on quality by sales, customer service, and PR personnel||Interval|
|SM1||Extent to which you select suppliers based on quality rather than price or schedule||Interval|
|SM2||Thoroughness of your supplier rating system||Interval|
|SM3||Reliance by you on a few dependable suppliers||Interval|
|SM4||Amount of education you provide to your suppliers||Interval|
|SM5||Extent of technical assistance you provide your suppliers||Interval|
|SM6||Involvement of suppliers in your product development process||Interval|
|SM7||Extension of long-term contracts to your suppliers||Interval|
|SM8||Clarity of specifications provided to your suppliers||Interval|
|QD1||Visibility of the quality department in the firm||Interval|
|QD2||Degree of access the quality department has to top management||Interval|
|QD3||Degree of decision-making influence of the quality department||Interval|
|QD4||Coordination between the quality department and other departments||Interval|
|QD5||Effectiveness of the quality department at influencing quality improvement||Interval|
|PM1||Use of preventive equipment maintenance||Interval|
|PM2||Automation of inspection, review, checking of work||Interval|
|PM3||Evenness of production schedules/work distribution||Interval|
|PM4||Automation of processes||Interval|
|PM5||"Fool-proofing" of process design to minimize chances of employee errors||Interval|
|PM6||Unambiguous work or process instructions given to employees||Interval|
|QR1||Data available on quality costs in your firm||Interval|
|QR2||Data available on quality shortfalls (error rates, defect rates, scrap, number of defects)||Interval|
|QR3||Timeliness of the quality data||Interval|
|QR4||Use of quality data as tools to manage quality||Interval|
|QR5||Relevance of the quality data made available to hourly employees||Interval|
|QR6||Relevance of the quality data made available to managers and supervisors||Interval|
|QR7||Use of quality data to evaluate supervisor and managerial performance||Interval|
|QR8||Display of quality data, control charts, etc. at employee work stations||Interval|
|ER1||Implementation of quality-related employee involvement program||Interval|
|ER2||Effectiveness of implemented quality-related employee involvement programs||Interval|
|ER3||Responsibility assigned to employees for error-free output||Interval|
|ER4||Feedback provided to employees on their quality performance||Interval|
|ER5||Participation in quality decisions by nonsupervisory employees||Interval|
|ER6||Continual enhancement of quality awareness among employees||Interval|
|ER7||Recognition of employees for superior quality performance||Interval|
|ER8||Extent of supervisors' effectiveness in solving quality problems||Interval|
|T1||Specific work-skills training (technical and vocational) given to hourly employees throughout the firm||Interval|
|T2||Quality-related training given to hourly employees throughout the firm||Interval|
|T3||Quality-related training given to managers and supervisors throughout the firm||Interval|
|T4||Firm-wide training in the total quality concept (philosophy of company-wide responsibility for quality)||Interval|
|T5||Firm-wide training in basic statistical techniques (e.g., histograms and control charts)||Interval|
|T6||Firm-wide training in advanced statistical techniques (e.g., design of experiments and regression analysis)||Interval|
|T7||Commitment of the firm's top management to employee training||Interval|
|T8||Available resources for employee training in the firm||Interval|
|TM||Role of top management and quality policy||Construct|
|PD||Product/service design||Construct|
|SM||Supplier quality management||Construct|
|QD||Role of the quality department||Construct|
|PM||Process management/operating procedures||Construct|
|QR||Quality data and reporting||Construct|
|ER||Employee relations||Construct|
|T||Training||Construct|