
Bo Bergman · Bengt Klefsjö

Quality from Customer Needs to Customer Satisfaction

Translator: Karin Ashing

Studentlitteratur

Copying prohibited. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. The papers and inks used in this product are environment-friendly.

Art. No 4633
ISBN 91-44-04166-7
© Bo Bergman, Bengt Klefsjö and Studentlitteratur 2003
Translator: Karin Ashing
Cover design by Lars Tempte
Cover illustration: © Digital Vision
Second edition
Printed in Sweden by Studentlitteratur, Lund
Web address: www.studentlitteratur.se

Contents

Preface to the second edition
General outline of the book

Part I — Quality for Success

1 Quality and Quality Improvements
  1.1 Quality
    1.1.1 Some definitions
    1.1.2 The customer concept
    1.1.3 Services
    1.1.4 Quality dimensions
  1.2 The Cornerstones of TQM
    1.2.1 Focus on customers
    1.2.2 Base decisions on facts
    1.2.3 Focus on processes
    1.2.4 Improve continuously
    1.2.5 Let everybody be committed
  1.3 The Importance of Systems Thinking
  1.4 Notes and References

2 Quality and Success
  2.1 Quality and Profitability
    2.1.1 Relations between quality and profitability
    2.1.2 Studies of quality and profitability
  2.2 Quality and Product Development
  2.3 Quality and Productivity
  2.4 Costs of Poor Quality
  2.5 Quality, Work Environment and Work Development
  2.6 Notes and References

3 The History of the Quality Movement
  3.1 Prehistory
  3.2 Taylorism and Inspection
  3.3 Walter A. Shewhart
  3.4 W. Edwards Deming and Joseph M. Juran
  3.5 The Japanese Miracle
  3.6 The Awakening in the Western World
  3.7 Service Quality
  3.8 Some Perspectives of the Quality Movement
  3.9 Notes and References

Part II — Design for Quality

4 Customer-focused Product Development
  4.1 Product Development Methodology
    4.1.1 Requirements
    4.1.2 Concepts
    4.1.3 Improvement
    4.1.4 More about the product development process
  4.2 Service Development
    4.2.1 Models for service development
    4.2.2 Flow charts
  4.3 Software Development
    4.3.1 The Cleanroom Methodology
  4.4 Notes and References

5 Quality Function Deployment
  5.1 Background
  5.2 The Four Steps
  5.3 The Quality House
  5.4 Function Deployment
    5.4.1 Examples of applications
    5.4.2 Benefits and difficulties
  5.5 Notes and References

6 Reliability
  6.1 The Aim of Reliability Engineering
  6.2 Dependability
  6.3 Basic Concepts
    6.3.1 Reliability function
    6.3.2 Failure rate
  6.4 System Reliability
    6.4.1 Series systems
    6.4.2 Parallel systems
    6.4.3 Other systems
    6.4.4 Redundancy
  6.5 Repairable Systems
  6.6 Feedback
    6.6.1 Probability plotting
    6.6.2 TTT-plotting
  6.7 Some Qualitative Analysis Methodologies
    6.7.1 Failure Mode and Effect Analysis — FMEA
    6.7.2 Fault Tree Analysis — FTA
    6.7.3 Top-down or bottom-up?
  6.8 The Development of Reliability Engineering
  6.9 Notes and References

7 Design of Experiments
  7.1 One-Factor-at-a-Time Experiments
  7.2 A Weighing Experiment
  7.3 A Factorial Design
    7.3.1 A plan of experiments
    7.3.2 Estimation of effects
    7.3.3 Analysis of the result from the experiment
  7.4 Fractional Factorial Designs
  7.5 Studied Factors and Irrelevant Disturbing Factors
  7.6 Conjoint Analysis
  7.7 Notes and References

8 Robust Design
  8.1 Taguchi's Philosophy
  8.2 The Design Process
    8.2.1 System design
    8.2.2 Parameter design
    8.2.3 Tolerance design
  8.3 Robust Design
  8.4 The Loss Function
    8.4.1 Taguchi's loss function
    8.4.2 Sony's television sets
    8.4.3 An illustration from Saab
  8.5 Design of Experiments
    8.5.1 Experiment plans with design and disturbance parameters
    8.5.2 Signal-to-noise ratio
    8.5.3 A summary of Taguchi's philosophy
  8.6 Notes and References

Part III — Production for Quality

9 Statistical Process Control
  9.1 Variation
  9.2 Process Improvement
  9.3 Notes and References

10 The Seven Improvement Tools
  10.1 Data Collection
  10.2 Histograms
  10.3 Pareto Charts
  10.4 Cause-and-effect Diagrams
  10.5 Stratification
  10.6 Scatter Plots
  10.7 Control Charts
  10.8 Notes and References

11 Control Charts
  11.1 Requirements on Control Charts
  11.2 Principles for Control Charts
  11.3 Choice of Control Limits
  11.4 Control Charts for the Process Mean
    11.4.1 An x̄-chart when µ and σ are unknown
  11.5 Control Charts for Dispersion
    11.5.1 R-charts
    11.5.2 s-charts
  11.6 Combining x̄-chart and R-chart
  11.7 Sensitivity
    11.7.1 The OC-curve
    11.7.2 The ARL-curve
    11.7.3 Rational subgroups
  11.8 Some Other Control Charts
    11.8.1 The Western Electric alarm signals
    11.8.2 Cusum charts
    11.8.3 EWMA charts
  11.9 Control Charts for Attribute Data
    11.9.1 p-charts
    11.9.2 c-charts
    11.9.3 A note on positive lower control limits
  11.10 Notes and References

12 Capability
  12.1 Capability Measures
    12.1.1 Some capability indices
    12.1.2 Sensitivity and robustness in capability indices
  12.2 Process and Machine Capability
  12.3 Capability Studies
    12.3.1 Definition
    12.3.2 The steps of a capability study
    12.3.3 Data analysis and measurement precision
    12.3.4 Implementing capability studies
  12.4 Notes and References

13 Quality in the Supply Process
  13.1 The Purchasing Process
  13.2 Supplier Partnership
  13.3 Acceptance Sampling
    13.3.1 Deming's All-or-Nothing Rule
    13.3.2 Acceptance sampling by attributes
    13.3.3 Standard systems for acceptance sampling
  13.4 Notes and References

Part IV — Quality as Customer Satisfaction

14 External Customer Satisfaction
  14.1 Customer Needs and Customer Satisfaction
    14.1.1 The Kano model
    14.1.2 Customizing
    14.1.3 Customer satisfaction
  14.2 Measuring External Customer Satisfaction
    14.2.1 Dissatisfaction and complaints
    14.2.2 Measuring satisfaction
    14.2.3 Satisfaction, repurchase rates and financial results
  14.3 Some Explanatory Models
    14.3.1 Grönroos' model
    14.3.2 The Gap Model
  14.4 Closing the Loop
  14.5 Loyalty Based Management
  14.6 Value Creating Networks
  14.7 Notes and References

15 Internal Customer Satisfaction
  15.1 Motivation
  15.2 Participation and Co-workership
  15.3 Measuring Internal Customer Satisfaction
  15.4 The Connection between Internal and External Customer Satisfaction
  15.5 Notes and References

16 Customer Satisfaction Index
  16.1 The Swedish Quality Index
  16.2 American Customer Satisfaction Index, ACSI
  16.3 The European Customer Satisfaction Index, ECSI
  16.4 Some Industry Specific Indices
  16.5 Notes and References

Part V — Leadership for Quality

17 Leadership
  17.1 Leaders and leadership
  17.2 Deming's 14 Points
  17.3 Profound knowledge
  17.4 Learning organisations
  17.5 Corporate culture
  17.6 TQM as a Management System
  17.7 Notes and References

18 Mission, Vision, Goals and Strategies
  18.1 Mission and Vision
  18.2 Goals, means and policy
  18.3 Policy Deployment
  18.4 Balanced Scorecards
  18.5 Notes and References

19 Processes and Process Management
  19.1 Processes
  19.2 Process Management
    19.2.1 The Process Management methodology
    19.2.2 Different roles in Process Management
    19.2.3 An illustration
  19.3 An Example of Process Improvement
  19.4 Benchmarking
  19.5 Process Innovation
  19.6 The Capability Maturity Model
  19.7 Notes and References

20 Quality Systems
  20.1 The ISO 9000 Series
    20.1.1 Background
    20.1.2 The ISO 9000:2000 series
    20.1.3 The ISO 9000:2000 standard
    20.1.4 The requirement standard ISO 9001:2000
    20.1.5 The ISO 9004:2000 standard
    20.1.6 Some comments on the ISO 9000:2000 series
    20.1.7 Third party certification
  20.2 Some Similar Standards
    20.2.1 QS-9000 — the system of the American automotive industry
    20.2.2 ISO/TS 16949 — the common quality standard in the automotive industry
    20.2.3 TL 9000 — the telecom variant
    20.2.4 AS 9000 — the standard of the aerospace industry
  20.3 Implementing Quality Systems
  20.4 Notes and References

21 Company Assessments
  21.1 The Deming Prize
    21.1.1 Background
    21.1.2 Effects of the Deming Prize
  21.2 Malcolm Baldrige National Quality Award
    21.2.1 Model and criteria
    21.2.2 The award process
    21.2.3 Award winners
  21.3 The European Quality Award
    21.3.1 The EFQM Excellence Model
    21.3.2 The award process
    21.3.3 Award winners
  21.4 The Swedish Quality Award
    21.4.1 The SIQ Model for Performance Excellence
    21.4.2 The award and the award process
    21.4.3 The Swedish Institute for Quality, SIQ
  21.5 Self-assessment
  21.6 Notes and References

22 The Seven Management Tools
  22.1 Affinity Diagram
  22.2 Tree Diagram
  22.3 Matrix Diagram
  22.4 Interrelationship Digraph
  22.5 Matrix Data Analysis
  22.6 Process Decision Program Chart
  22.7 Activity Network Diagram
  22.8 Relations Between the Different Tools
  22.9 Notes and References

23 Improvement Programmes
  23.1 QC-circles and Improvement Teams
  23.2 Suggestion Systems
  23.3 T50 — ABB Sweden's Customer Focus Programme
  23.4 Six Sigma
    23.4.1 Six Sigma as an improvement programme
    23.4.2 Practical work with Six Sigma
  23.5 A Quality-conscious and Sustainable Society
  23.6 Notes and References

Part VI — Tables, References, Index

Tables
  Table 1 P(X ≤ x) when X is Bin(n, p)
  Table 2 P(X ≤ x) when X is Po(m)
  Table 3 P(X ≤ x) when X is N(0, 1)
  Table 4 Constants for control charts
References
Index

Preface to the second edition

Quality as a competitive factor is at least as important today as it was ten years ago, when the first edition of this book was written. Global competition and more aware customers with tougher requirements have prompted greater efforts at achieving Total Quality Management in all sectors of our society.

Total Quality Management involves continuous improvement and development work, which is also true for the quality field as such. Over the past ten years, Total Quality Management strategies have changed in various ways. Ideas have been altered, or have acquired wider applications, of which the "quality" and "customer" concepts are good examples; and new methodologies, such as Six Sigma and self-assessment, have gained a foothold. The quality area has also achieved a more distinct position in the academic field of Management, with courses offered at several of our universities and university colleges. The work to integrate quality with environment and working environment issues is underway, with a sustainable society as the overall objective.

It is our hope that this edition will be even better suited for courses on quality and quality improvements in universities and university colleges, as well as in the business community and public administration, highlighting how quality work can be conducted on all levels, i.e. from needs to satisfaction. We have made a conscious attempt to balance an academic, scientific attitude with a more narrative and illustrative approach, to reach as many readers as possible.

In this edition we have tried to absorb new trends, and to create a book that provides a useful basis for the requirements of tomorrow as well. This means that some parts of the book are completely or partly new, while other sections have just been updated. We have integrated a discussion on service, goods and software quality;

placed a greater emphasis on product development; and attached greater weight to strategies for achieving and measuring customer satisfaction. Moreover, we have expanded the sections about leadership, business analysis and self-assessment, using criteria from different quality awards, and describing the ISO 9000:2000 series. On the other hand, we have reduced the material on reliability, not because this is less important than before (on the contrary), but because we intend to publish a book specifically on this important subarea of quality methodology.

Over the years, we have had many valuable ideas and comments from colleagues and friends in industry and public administration. We are very grateful to all who helped us with previous editions. For this edition we owe special thanks to Anders Mellberg at Agria Djurförsäkringar; Arne Wittlöv at Volvo AB; Bertil Nilsson at Posten Brev; Richard Eichler at Saab Automobile; Mats Gunnerhed at FOA; Anette Kinde at IVF; Ingvar Johansson at SIQ; and Karin Hugosson from Apoteket AB. We are also very grateful to Professor Yoshio Kondo's wife, Noriko Kondo, who was kind enough to help us with the calligraphy in some illustrations.

Many thanks are due to Christer Jönsson at Studentlitteratur, Lund, Sweden, for his patience and skill in helping us to design layouts and pictures, and to Karin Ashing for her efficient, skilful and helpful support with the translation to English. Many thanks also to Ronnie Andersson, Inger Jänchen and Bonnie Ohlsson at Studentlitteratur, Lund, Sweden, for their encouragement and support in the production of this and many other books.

Our most heartfelt thanks go without a doubt to our wives, Elisabeth and Gunilla, who have once again endured seeing us absorbed in a laborious book project, and have had to live with piles of books and papers spread over tables and floors everywhere.

Göteborg and Luleå, January 2003


General outline of the book

Aim

The aim of this book, Quality from Customer Needs to Customer Satisfaction, is to provide a general view of the Total Quality Management field. As suggested by the title, this means to identify and understand those for whom our activities are intended, the customers, and their needs and desires. Having accomplished this, the next step involves translating the needs and desires into properties of products, i.e. articles and services, which are then produced and delivered according to agreements and plans. After the article or service has been delivered, the customer should be provided with continued support. Quality also involves finding out how the customers experience the article or service, and all other contacts between the organisation and the customer. Finally, it concerns feeding these experiences back into the company processes, to bring about improvements. To sum up, Total Quality Management involves creating increased customer satisfaction at lower resource consumption.

When we give courses, we sometimes receive comments to the effect that our basic courses focus too heavily on quantitative ("hard") material, in the form of statistical decision-making tools, or the opposite view, that they contain too much qualitative ("soft") material, dealing with leadership and employee participation. Our reply is that Total Quality Management is not a matter of either decision-making tools or human characteristics and relations: it includes both. This is what we are trying to reflect in this book, by covering a large area, comprising both "hard" and "soft" methodologies and tools. The implication of this is that some readers may find parts of this survey insufficiently penetrating, depending on their pre-knowledge and interests. To help anyone wishing to read more and acquire a deeper understanding, there is a section at the end of every chapter called "Notes and References". Here we have, largely based on our own subjective views, selected books, articles and reports that we feel may be enriching to readers interested in further and in-depth studies.

[Figure: The general outline of this book. A flow chart, "Quality from customer need to customer satisfaction — an overview and history (Part I)", showing where the various parts of the book belong in the product life cycle:

• Background, introduction: Quality and quality improvements — Chap 1; Quality and success — Chap 2; The history of the quality movement — Chap 3.
• Customer needs (understand and identify customer needs; competitors' weaknesses): External customer satisfaction — Chap 14; Customer-focused product development — Chap 4; Quality Function Deployment — Chap 5.
• Translate customer needs to product properties: Customer-focused product development — Chap 4; Quality Function Deployment — Chap 5.
• Develop goods and services, and develop processes which can produce them: Customer-focused product development — Chap 4; Quality Function Deployment — Chap 5; Reliability — Chap 6; Design of Experiments — Chap 7; Robust Design — Chap 8.
• Production of goods and services: Variation — Chap 9; The seven improvement tools — Chap 10; Capability — Chap 12; Supplier co-operation — Chap 13.
• Customer satisfaction (understand and measure customer satisfaction; create customer loyalty): External customer satisfaction — Chap 14; Customer satisfaction index — Chap 16.
• Understanding of variation; control, feedback and improvement of processes (the Plan-Do-Study-Act cycle): Variation — Chap 9; The seven improvement tools — Chap 10; Process control — Chap 11; Capability — Chap 12; Internal customer satisfaction — Chap 15; Process improvement — Chap 19; Company assessments — Chap 21; The seven management tools — Chap 22; Improvement programmes — Chap 23.
• Business development: Leadership — Chap 17; Mission, vision, goals and strategies — Chap 18; Processes and Process Management — Chap 19; Quality systems — Chap 20.
• Leadership — Part V (underpinning the whole cycle).]

Disposition

The book consists of five fairly independent parts. Some comments on the previous edition were that it was difficult to grasp the overall view, and to see where the different methodologies or tools belong. In an effort to make it easier for the reader to get a general view, we have tried to create an easily accessible graphic illustration, shown on the preceding pages, of where the various parts of the book belong in the product life cycle. We would like to point out, however, that any such simplified illustration does, obviously, have its limitations, and should be seen purely as a way to create a comprehensive view.

This book is not finished

Anyone who has been involved in writing textbooks knows that there are often parts that need changing or improving in a new edition. This may be due to typographical errors; parts may appear in the wrong context; and the progress of the subject matter may mean that new items should be included and others excluded.

Just as the general purpose of Total Quality Management is to generate interest and delight, we of course hope that the reader will find this book interesting, and that it will provide new knowledge. Even though this is the third edition, we are well aware that this version too needs continuous improvements, if the book is to have a useful purpose in the future. Therefore, we are most interested in the views and reflections of our readers, and should be grateful for feedback in all forms. You are welcome to contact us by letter, through our publisher Studentlitteratur, Box 141, SE-221 00 Lund, Sweden, or directly: Bo Bergman at [email protected] and Bengt Klefsjö at [email protected]. Thank you for your help to improve this book!


Part I Quality for Success

The quality of a product is its ability to satisfy, and preferably exceed, the needs and expectations of the customers. In this first part of the book, we will discuss the quality concept; the development of the quality area; and the close relations between Total Quality Management and the success and profitability of a company or an organisation. We will also take a retrospective look at some quality issues.

General Electric reported cost savings of nearly USD 2 billion for 1999, thanks to the Six Sigma improvement programme. The Annual Report for 1999 from General Electric Capital Services states: "Wherever we are in the world, GE Capital is applying Six Sigma ... it delivered over USD 400 million in net income in 1999". AlliedSignal (now Honeywell) reported similar cost savings, totalling more than half a billion USD in 1999. They have accomplished almost incredible savings by focusing heavily on quality, or, more specifically, by reducing the number of defects to a level that is unbelievably low. Following these and other successes, the Six Sigma programme is now spreading across the world.

The electronics company Solectron increased its work force from 2,200 in 1991 to almost 18,200 employees in 1997. They have taken over much of the production that other companies failed to make profitable. Solectron has also received the American Malcolm Baldrige National Quality Award twice.

Today, it is extremely important for companies and organisations to continuously work with quality improvements, and the demand for quick changes has probably never been greater. The competition is global, and we are moving towards a world with fewer and fewer boundaries. For small companies, for example subcontractors to large, often international, groups, the competition will certainly harden. Following the globalisation we are witnessing today, with mergers and other forms of co-operation resulting in Asea Brown Boveri, Ford-Volvo, GM-Saab and

Astra-Zeneca, for example, the demands for change and improvement will grow gradually stronger. Concurrently, information technology is progressing fast, which creates many new opportunities. Information flows, Internet contacts and e-commerce are expanding at an ever-faster pace, and new types of companies that utilise these opportunities emerge. New demands and methodologies are created through new alliances and networks between companies, whose interoperations benefit from information technology.

The values, methodologies and tools of Total Quality Management, which were originally developed for the private sector, are now gaining ground in public administration and state-owned companies. Illustrations of this are that in Sweden the lung medical clinic of the University of Linköping, the Mail service department of the Swedish Post Office, and Älta School outside Stockholm received the Swedish Quality Award in 1996, 2000, and 2001, respectively. In 1999, the Swedish Government established The National Council for Quality and Development to support quality improvements in public service organisations. Also the Malcolm Baldrige National Quality Award in the US and The European Quality Award have recently been given to schools at different levels.

If a country is to maintain, or preferably advance, its position on the world market, companies and organisations have to work more generally and systematically to adopt a Total Quality Management approach that will achieve continuous quality improvements.

1 Quality and Quality Improvements

In recent decades, we have seen a tremendous interest in quality as a strategic issue in the Western world. An important reason for this is the success of Japanese industry, particularly in the 1970s and 1980s. These successes were largely due to the strategic role that quality played for Japanese managers. They realized early that the quality concept should emanate from the requirements and expectations of the customers. They also realized that the costs of quality defects, related to changes, scrapping, revisions and delays, were significant, as was the cost of keeping large safety stocks. By systematically deploying simple statistical methods to identify and eliminate the sources of variation, they were successful in reducing these costs drastically. In the last few years, companies and organisations in the West have accomplished similar improvements by focusing on quality issues, and so the gap to Japanese quality has probably narrowed.

An effect of the recent accelerating interest in quality and quality improvements is that the word "quality" has become a frequently used — and misused — word. The inner meaning of this word, and how this meaning has changed over time, will be discussed in this chapter. We will also discuss the principles for what is today often called TQM, Total Quality Management.

1.1 Quality

1.1.1 Some definitions

The word "quality" is derived from the Latin "qualitas", meaning "of what". Its use goes back to antiquity. Cicero, the Roman orator and politician (106-43 B.C.), is thought to be the person who first used the word; see Bergman (1974).[1] The word quality is still partly used in the sense of grade or property. One example of this is steel quality, which signifies different types of steel with various strength properties. However, the word has acquired a significantly broader meaning over the last few decades.

[1] Cicero is said to have formed the word "qualitas" by adding the substantivizing morpheme "-tas", to create a word for "nature" or "character".

Figure 1.1 The Japanese signs for the concept of quality. The first sign is pronounced "hin" and roughly means "product". The second sign is pronounced "shitsu" and roughly means "quality". Originally this second sign illustrated two axes on top of a mussel and could be interpreted as "a promise of money or the value of money". Nowadays, the combination of the signs denotes the concept of "quality" and not only "product quality".

There are numerous definitions of the quality concept. Some of these are illustrated in Figure 1.2. A definition that is often too narrow is "conformance to requirements" (see Crosby, 1979). This definition has a producer perspective. A more customer-oriented definition, "fitness for use", has been credited to the American Joseph Juran[2] (Juran, 1951). Edwards Deming went even a step further towards customer focus when he emphasized that "quality should be aimed at the needs of the customer, present and future" (Deming, 1986, p. 5), thus also pointing to the importance of considering, already today, the customers of tomorrow.

[2] Deming's and Juran's views of quality are discussed more in depth in Chapter 3.


The Japanese engineer Genichi Taguchi (see Taguchi & Wu, 1979) defines quality, or rather non-quality, as "the losses a product imparts to the society from the time the product is shipped". Even if Taguchi uses his definition for goods, the interpretation might just as well be used for quality of services. Taguchi's definition clearly highlights the connection to the consequences of our products, even to those not primarily using the product. Thus it becomes closely related to corporate responsibility and environmental problems, and to thoughts on a sustainable development and a sustainable society.

The international standard for quality systems[3] ISO 9000:2000 defines quality as "the degree to which a set of inherent characteristics fulfils the requirements, i.e. needs or expectations that are stated, generally implied or obligatory".

Some definitions of the concept of quality:

"The degree to which a set of inherent characteristics fulfils the requirements, i.e. needs or expectations that are stated, generally implied or obligatory." (ISO 9000:2000)

"Conformance to requirements." (Philip Crosby)

"The lack of quality is the losses a product imparts to the society from the time the product is shipped." (Genichi Taguchi)

"Quality is a state in which value entitlement is realized for the customer and provider in every aspect of the business relationship." (Mikel Harry, Six Sigma Academy)

"Quality should be aimed at the needs of the customer, present and future." (Edwards Deming)

"Fitness for use." (Joseph Juran)

"... there are two common aspects of quality. One of these has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality." (Walter Shewhart)

Figure 1.2 Some definitions of the quality concept.

[3] ISO 9000 is addressed in Chapter 20.


For some time, the prevailing position, albeit in different wordings, has been that "the quality of a product is its ability to satisfy the needs and expectations of the customers". Product here signifies an article or a service, or a combination of the two. We feel that this definition should be expanded, and suggest the following definition: "the quality of a product is its ability to satisfy, and preferably exceed, the needs and expectations of the customers". Here it is vital to remember that needs and expectations are two different things. Our expectations sometimes include elements that we do not really need. On the other hand, as customers, we have needs that we do not expect to be fulfilled, sometimes because we do not realize our own needs. Some authors prefer to talk about "requirements, needs and expectations", but we do not feel that the word "requirement" needs to be emphasized, as, to us, requirements are integral parts of the "needs and expectations" concept.

Quality: The quality of a product is its ability to satisfy, or preferably exceed, the needs and expectations of the customers.

Figure 1.3 Our definition of the "quality" concept for products.

The definitions in Figures 1.2 and 1.3 imply that it is not always sufficient to fulfil customer expectations. Our aim must be to exceed the expectations. The customer should be surprised, delighted and enthusiastic. This is how loyal customers are often created, who will keep coming back, and who will also speak in positive terms about their experiences.[4]

Some organisations have created their own interpretations of the quality concept, often with a clearly customer-oriented focus. One of these is "Everything we do must be done in a way that pleases the customers, so that they want to come back to us" (Gunnar Wivardsson, Stena Line); another is "Quality is when the customers come back — not the product" (CEO of a small Swedish company); a third is "Quality is the presence of value defined by customers" (Federal Express). These definitions make it very clear that as customers we do not only judge the individual product; we also make an overall judgement of our experiences of the organisation that sells or makes the product. When buying a car, for example, we do not only consider the performance of the car, but also weigh in aspects such as spares availability, service and how we are treated. The product quality is then only part of our total quality experience. According to this way of thinking, quality is seen more as a relation between a product, with its underlying organisation, and the customer, than as a pure product property. A consequence of this widened outlook is the great interest associated with trademarks and their impact today.[5] Figure 1.4 shows another progressive definition of the quality concept, clearly highlighting holistic and relational aspects.

"Quality is what makes it possible for a customer to have a love affair with your product or service. Telling lies, decreasing the price or adding features can create a temporary infatuation. It takes quality to sustain a love affair. Love is always fickle. Therefore, it is necessary to remain close to the person whose loyalty you wish to retain. You must be ever on the alert to understand what pleases the customer, for only customers define what constitutes quality. The wooing of the customer is never done."

Figure 1.4 A noteworthy definition of the term quality, introduced by Myron Tribus[6] in the ASQC Statistics Division Newsletter, 1990, No. 3, page 2.

An interesting discussion on the quality concept is given by Garvin (1984); see also Garvin (1988). He identified five approaches to the quality concept: the transcendent, product-based, user-based, manufacturing-based and value-based perspectives; see Figure 1.5.

[4] Customer satisfaction and loyalty will be addressed further in Chapter 14.
[5] This is sometimes referred to as "Brand Relationship Management".
[6] Myron Tribus (born in 1921) has served as a professor at the University of California, Los Angeles; worked at the Centre for Advanced Engineering Studies at MIT, for the American Department of Trade, and at Xerox in Rochester.


[Figure: Garvin's perspectives on the quality concept — transcendent, product-based, user-based, manufacturing-based and value-based.]

Figure 1.5 The five approaches to the quality concept introduced by Garvin. (From Garvin, 1984.)

The transcendent perspective rests on Plato's view of beauty. Advocates of this view argue that, as quality is identified when experienced, it cannot be defined exactly; "quality lies in the eye of the beholder". The product-based view, on the other hand, holds that quality is exactly measurable, determined by objectively and precisely measuring the extent to which a product possesses certain desirable characteristics. A consequence of this approach is, according to Garvin, that higher quality costs more, and that quality is an objective inherent attribute of the product, not something that the buyer or user can judge. Proponents of the user-based approach are of the opinion that quality is judged by the customer. The manufacturing-based approach relates to the fulfilment of tolerances and requirements in production. Quality here is mainly concerned with technology, and improved quality means less scrapping. According to the value-based outlook, quality is defined in relation to cost and price: a high quality product possesses the desired attributes at an acceptable price, or performance at an acceptable cost.

Garvin's own conclusion is that an organisation cannot have just one approach to the quality concept; different approaches are needed in different parts of the organisation. We find that of the definitions in Figure 1.2, Crosby's is manufacturing-based, while Deming's, Juran's and our own definition (Figure 1.3) have a user-based perspective. Our definition is also inspired by theories on service quality, addressing the gap between

expectation and experience.[7] Shewhart's view of quality (see Figure 1.2 and Shewhart, 1931) includes an interpretation that may be perceived as a combination of a manufacturing-based (objective measurement) definition and a user-based (subjective assessment) definition. Here we should add that Juran often clarifies his definition of "fitness for use" by emphasizing that the quality notion consists of two separate elements: first, that the product should be free from defects, and second, that it should possess properties that will fulfil customer needs.

Building on Shewhart's (1931) distinction between objective and subjective aspects of quality, the Japanese professor Noriaki Kano[8] has suggested an interesting model for the relations between these two aspects. In Chapter 14 we will discuss this important model in more detail.

[7] We will come back to the Gap Model in Chapter 14.
[8] A photo of Kano can be found in Figure 14.2.

1.1.2 The customer concept

From the definition of "quality" it follows that the customer concept is vital. In this book, the customer concept implies "the people or organisations that are the reason for our activities", i.e. "those for whom we want to create value" by our activities and products.

Figure 1.6 The customers of an organisation are those for whom you want to create value.

In some contexts the "customer" concept feels unfamiliar and difficult to accept, not least because the word customer is often associated with a financial relation. This is particularly true in the public sector. The person paying for a service may be a totally different person from the one for whom the service is intended to create value. For most goods, and indeed also for many services, such as hairdressing, restaurant meals and car washes, the person paying for the product is normally also the one enjoying its value. But who is the customer of the education imparted in a school? Is it the pupil, who, hopefully, receives a good education; is it the local authority paying for school premises and teacher salaries; is it future employers; parents; or the community at large? Focusing on established customer groups such as pupils, patients or subscribers would be inappropriate. In most cases, the business activities are for the good of many groups of people, which implies several customer categories. Therefore a general, overall and reasonably neutral word is needed, so as not to prejudice concepts or discussions.

When there are several customer categories, the various needs and expectations do not always coincide. At such times it is important to bring any conflicts concerning customer requests to the surface and to make conscious prioritisations. For this reason, every organisation should try to answer the question "Who are our customers?" or "For whom are we trying to create value?". Sometimes the answer is easy; in other situations the effort to arrive at a shared vision of who the customers are may be considerably more complicated. Who are, for instance, the customers of the police, and who are the customers of the tax or defence authorities?

Thus, improving quality involves systematic and resource-efficient efforts to

• find out who the customers of the organisation[9] are
• find out their needs and expectations
• be sure to fulfil, and preferably exceed, these needs and expectations.

Any one of these items may be difficult to solve. The object of this book is to provide some ideas for this endeavour.

Finally, we want to mention that Normann (2001) discusses the shifting view of the customer. In the days of typical mass fabrication, the customer was seen only as an anonymous part of the market.

[9] In this book, unless clearly stated otherwise, we alternate between the terms "company" and "organisation" to denominate the operation that produces services or products, in private or public business.


Today, the personalised customer and the relation between the provider and the customer are emphasised. For the future, Normann (2001) forecasts that we will see the customer more as a co-producer in a value creating network. In fact, this is already taking place, and this changing view is strongly related to the service dimension of most products. Services will be discussed in the next sub-section.

1.1.3 Services

In this book we interpret the concept "products" as goods, services or a mixture thereof. Due to the somewhat special character of services as products, we discuss them briefly here.

Lately, the general interest in service quality has increased considerably. In many countries, for instance, a growing share of the GDP (gross domestic product) is made up of services. Services produced in the public sector include health care, schools, universities, the police, the judicial system and tax authorities. In the private sector we find transports, restaurants, banks, hotels, hairdresser's, video shops, and so on. Service characteristics are changing, however, as a growing proportion involves managing information. This new type of services has evolved as a side-effect of the rapid IT growth. Obviously, the design of services that rely heavily on information technology will be strongly affected by future developments. In fact, it is estimated that more than 80% of the information technology products are sold to the service sector.

Services are very important in the traditional manufacturing industries as well. Much of what goes on internally in companies can be regarded purely as service production. Also, many services go with the sold articles; some examples are sales, warranties, complaint processing, and maintenance. Once-traditional manufacturing companies are increasingly turning into service producers, a trend that is very likely to grow. ABB Robotics, formerly manufacturing industrial robots, are now "delivering enhanced productivity and rationalization systems". According to Edvardsson et al. (2000a), no less than 96% of the working hours in the company are related to service. We are moving towards purchasing functions rather than goods: we don't buy refrigerators but cold storage, and instead of cars, we are now buying transportation. Volvo Trucks, who got the Swedish Quality Award in 1998, have changed their core activity from manufacturing trucks to "creating trouble-free transport". Texas Instruments, winners of the Malcolm Baldrige National Quality Award in 1992, said "we offer solutions, we don't sell products". Xerox has changed from a copy machine company to "a document company, which provides solutions to help you manage documents". All in all, this means that it is extremely important to focus on handling and improving service quality.

Although goods and services differ in many ways, it is crucial to point out that TQM methodologies and tools are largely independent of whether the product is purely an article, a service, or a combination of the two. Often highlighted distinctions between goods and services are:

• Services are not as tangible as articles, and therefore it may be difficult to explain, specify and measure the contents of a service.
• The customer often plays an active role in creating the service.
• The service is often "consumed" while being created, which also means that the service cannot be stored or transported.
• The customer does not become the owner of anything after the service has been delivered.
• Services consist of activities and processes and cannot therefore be tested or tried by the customer prior to purchase.
• Services are often made up of a system of sub-services, but the customer assesses the entire package, not the separate sub-services.

The differences between products and services are diffuse. This applies especially to new types of information services, such as storage (unimportant where) of large amounts of information, which can very quickly be transported to a customer, who then comes into possession of data that can be used instantly, or at some future point of time. Nevertheless, it is of course important to consider the differences connected with the design and execution of

services. These two concepts correspond to construction and production, respectively, in the realm of goods.

It is particularly important to establish that in many cases, the quality of a service is essentially determined at the moment when the person performing the service, the service supplier, meets the customer. This is often called the moment of truth, and refers to the crucial final meeting between the matador and the bull, when one of them is going to die; see Normann (1984). However, the moment of truth[10] is, unlike the situation for a bull about to meet his matador, a moment full of possibilities, because of the ample opportunities that the supplier has to convince the customer of the excellence of his service. On the other hand, if a fault has occurred, it is generally too late to do something about it when the customer is gone or has "left the arena". Even the most perfect system of service design and execution is not worth much if it does not work at the very moment of truth. Therefore, service quality should, at least as much as in the case of goods, be regarded as a relation between the service and its supplier, and the customer on the receiving end.

[10] The notion "the moment of truth" gained popularity and international circulation above all thanks to Jan Carlzon, the former CEO at SAS (see photo in Figure 1.18); see Gummesson (1999).

1.1.4 Quality dimensions

Even though many offerings contain both a hardware component and a service component, the quality dimensions of goods and services have developed separately. In this section we follow this tradition and discuss the two sets of dimensions separately.

Quality dimensions of goods

The product quality concept has many dimensions. For goods, some of them are (see Figure 1.7):

• reliability, which is a measure of how often problems occur and how serious they are.


• performance of significance to the customers on the intended market segment, such as speed, capacity, useful life or size.
• maintainability, which summarizes how easy or hard it is to detect, localize and take care of a problem.
• environmental impact, a measure of how the product affects the environment, e.g. in the form of emissions or recyclability, and of how environmental aspects are treated in the production.
• appearance, an aesthetic parameter created by design, choice of colour, etc.
• flawlessness, i.e. that the goods are not marred by errors or deficiencies at the time of purchase.
• safety, meaning that the article does not cause damage to person or property, or, in some cases, offers adequate protection against damage.
• durability, signifying that the product can be used, stored and transported without deteriorating or being damaged.

Figure 1.7 Some quality dimensions of an article: reliability, performance, maintainability, environmental impact, appearance, flawlessness, safety, durability.

Quality dimensions of services

The quality of a service, like the quality of goods, has several dimensions. Some of these are (see Zeithaml et al., 1990, and Figure 1.8):

• reliability, referring to the consistency of performance, including punctuality and precision in terms of information and invoicing procedures, and doing what you have promised to do.

• credibility, referring to being able to trust the supplier.
• access, i.e. how easy it is to come into contact with the supplier. This is where location, opening hours, supplier availability, and other technical facilities belong.
• communication, the ability to communicate in an understandable way that is natural to the customer.
• responsiveness, i.e. willingness to help the customer.
• courtesy, which refers to the supplier's behaviour, e.g. politeness and kindness.
• empathy, the ability to understand the customer's situation.[11]
• tangibles, the physical environment in which the service is executed, i.e. the appearance of equipment and premises.

To sum up, many of these dimensions are related to the customer's confidence in those providing the service. In addition to these dimensions, which are for the most part related to how the service is delivered, it is, obviously, also important to look to the actual service content: how this is made up, and whether the service meets or exceeds the customers' needs and expectations.

Figure 1.8 Some quality dimensions of a service: reliability, credibility, access, communication, responsiveness, courtesy, empathy, tangibles.

[11] The Greek origin of this word, "empatheia", stands for insight, feeling (the National Encyclopedia).

A note on quality dimensions

It should be noted that a generic list of quality dimensions can only give a first set of ideas needed for product planning. Each product, article or service has to meet its own special set of customer requirements. These needs and expectations have to be thoroughly investigated and should have a major impact on the planning of the work to be performed. Another important aspect is to observe that the importance of the different dimensions may vary with the product. For example, with an aeroplane, operational reliability is considerably more important than appearance, while for a watch, the opposite might be the case. At a surgical operation, the reliability factor is no doubt vital, whereas communication skills may be less important. In teaching, communication skills are paramount, while it is no disaster if the teacher should not immediately be able to answer every question.

1.2 The Cornerstones of TQM

Nowadays, quality issues are regarded as an integral part of the activities of an increasing number of private as well as public companies and organisations. This is the basis for what is often referred to as Total Quality Management, TQM, which, in our interpretation, means "a constant endeavour to fulfil, and preferably exceed, customer needs and expectations at the lowest cost, by continuous improvement work, to which all involved are committed, focusing on the processes in the organisation". We see this as a matter of active prevention, change and improvement rather than control and repair. This is sometimes expressed as "prepare and prevent instead of repair and repent". The quality work is a continuous process, not a one-off project; it furthermore develops products and processes, but also supports the personal development of those involved in these processes.

There are many descriptions of the Total Quality Management concept in the literature, but few definitions. We see TQM as a whole concept, where values, methodologies and tools combine to attain

higher customer satisfaction with less resource consumption. This whole concept can be interpreted as a management system, a discussion to which we will revert in Chapter 17.

A quality strategy in a company must be built on the top management's continuous and consistent commitment to quality issues. Joseph Juran[12], one of the greatest authorities in the quality field, expresses it thus: "To my knowledge, no company has attained world-class quality without upper management leadership". The top management have to include quality aspects in the company vision, and support activities regarding quality financially, morally and with management resources. The top management must also actively take part in the improvement process. If the management do not show by their actions that quality is at least as important as, say, direct costs and delivery times, the employees are not likely to embrace that view.

One example of the importance of top management commitment with respect to quality is described in Karatsu (1988). These days, General Motors and Toyota are co-operating at a factory in Fremont, California. Before this co-operation existed, the GM factory had serious business problems. The new management, appointed after the co-operation agreement, concentrated their efforts on providing quality improvement training for all their employees. Today, the productivity and product quality at the Fremont factory are comparable to the results achieved at Toyota's factories in Japan. Previously the workers were blamed for the inferior quality of American cars. In reality, a lack of commitment and inadequate knowledge within the top management were the main reasons for the problems.

Based on top management commitment, successful work with quality improvements can be built. This shall rest on a culture based on the following values:[13]

• focus on customers
• base decisions on facts
• focus on processes
• improve continuously
• let everybody be committed.

[12] For more information about Juran, see Section 3.4.
[13] According to Schein (1992), the culture within an organisation is built on "artefacts" (i.e. cultural expressions that you see and hear, such as clothing and language); the common values we refer to here, which explain how to work, act and solve problems; and the fundamental assumptions that are so deeply rooted that no one reflects on them (Bruzelius & Skärvad, 2000).

In addition, it is important that all these values interrelate, and that a comprehensive picture is created. To separate the values mentioned above from other similar sets of values, we prefer to talk of cornerstones.

[Figure: the five cornerstones — focus on customers, base decisions on facts, focus on processes, improve continuously, let everybody be committed — resting on committed leadership.]

Figure 1.9 The values, cornerstones, which are the basis of Total Quality Management.

The cornerstones should then be supported by suitable methodologies and tools, forming a whole. Below we will look further into the meaning of the different cornerstones.

1.2.1 Focus on customers

A central quality aspect today is to focus on the customers. Quality has to be valued by the customers, and put in relation to their needs and expectations. This means that quality is a relative term, which to a large extent is defined by the competition on the market. The quality of a product can be experienced as having deteriorated significantly if a competitive alternative with better properties turns up on the market. The crisis in the American car industry at the beginning of the 1980s is a good example of this. In addition, the connection between the needs of the customer and the

function and price of the products means a lot for the customer's perception of the products quality. Focusing on customers implies finding out what they want and need, and then to systematically try to fulfil these needs and expectations in the development and manufacture of the product. It is not always easy to ascertain what the customers want, for example by market surveys. At times the customers themselves are not able to state their needs, and it takes considerable empathy to understand what they need. For example, when Toyota were about to launch Lexus, their new luxury model on the American market, they let some of their engineers live in American families, to be able to understand American customer needs properly. The importance of customisation can also be illustrated by the problems of Disney Europe outside Paris. Following the success of Disneyland in Florida, a similar plant was established in Japan, and was also very successful. From this they concluded that the business idea was universal, like a credit card. But the Japanese longed to escape into a different way of life for a day, they welcomed the American style. This was not the case in France and Europe, as here people had no wish to buy a day in America. What they wanted was a day at an ordinary amusement park, see Godfrey (1995). A great many companies and organisations produce goods and services and try to market these without adapting them to the customers' needs and requirements. There is a tendency, however, that companies and organisations tailor-make products to their customers. One example of this is Levis, which began manufacturing custom-made jeans a couple of years ago. The measurements were sent to the factory, where the jeans were produced according to the individual customer's figure and requirements. The price for customer-made jeans was approximately 8 US dollars higher, but sales rose sharply by 30%, see Godfrey (1995). An important component in this methodology is that the company acquires a very good understanding of their customers' needs. Information technology and e-commerce provide great opportunities to obtain knowledge about customer requirements, and to aim offers at suitable customer categories. By following up previous purchases and analysing the corresponding customers other purchases, it is possible to target offers of new relevant products. Studentlitteratur


Focusing on the customers does not only apply to the external customer. Every employee has internal customers within the company. Their needs also have to be satisfied in order for them to do a good job. It is important in Total Quality Management, with its strong focus on external customers and their satisfaction, not to forget the internal customers, the employees. Quality improvement is essentially a matter of providing the employees with better opportunities to do a good job and to feel happy with their performance. In the long run, this creates a breeding ground for satisfied external customers.


Figure 1.10 The company or organisation, with its internal customers, and the external customers, who buy, use or are otherwise affected by the produced goods or services.

1.2.2 Base decisions on facts

An important element of modern quality philosophy is to base decisions on well-founded facts and not to let random factors be of decisive importance. Doing so requires knowledge about variation and the ability to separate natural variation from variation due to identifiable causes. Also needed are factual data of both numerical and verbal character. We have to obtain, structure and analyse different types of information, and base our decisions on the results.

Focusing on customers must not be just a slogan; it requires systematic information about the needs, requirements, reactions and opinions of the customers. Only about 25% of all new products are successful on the market (Kotler, 1997). A possible explanation is that the launch was not preceded by a thorough examination of what the customers actually want and how much they are prepared to pay for it. In other words, decisions on product development have not been based on well-founded facts. Another cause of failure might be that the company had insufficient knowledge about the product before releasing it on the market; crucial facts had been missed. Frequent recalls exemplify this problem.

It is also important to have a strategy for making decisions based on facts in connection with manufacturing. Earlier, even though a great many facts were collected and a lot of measurements were taken, these facts were rarely used to draw the most important conclusions about the manufacturing process. Measurements were taken to evaluate single units, not to evaluate and improve the manufacturing process in which the units had been produced. Collected data were stored in files, later on tapes or discs, without ever being used. Simple statistical methods were not used to process and analyse data, an analysis that could have provided an excellent basis for reducing the variation of the manufacturing process, and thus for achieving improved quality.

Basing decisions on facts implies actively looking for relevant information, which is then compiled and analysed, and from which conclusions are drawn. In order to work efficiently with improvements, we also need to structure and analyse verbal information, for instance opinions and feelings.

Figure 1.11 The Seven QC Tools, or the Seven Improvement Tools (among them the control chart, the Pareto diagram, the histogram and stratification), mainly intended for structuring and analysing numerical information.
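To show how one of these tools supports the "base decisions on facts" idea above, here is a minimal sketch of the control-chart principle: estimate the natural variation of a process from data judged to be in control, and flag only observations outside the limits as candidates for identifiable causes. The example is ours, not the book's; the data are invented, and real control charts are normally built on subgroup means and ranges rather than raw values.

```python
# Minimal control-chart sketch (invented data, illustration only).
# Limits are estimated from historical data judged to be in control.

history = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9, 10.0]
new_samples = [10.1, 9.8, 11.4, 10.0]

n = len(history)
mean = sum(history) / n
# Sample standard deviation as a simple estimate of the natural variation.
std = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5

ucl = mean + 3 * std  # upper control limit
lcl = mean - 3 * std  # lower control limit

for i, x in enumerate(new_samples, start=1):
    if lcl <= x <= ucl:
        verdict = "natural variation - leave the process alone"
    else:
        verdict = "outside the limits - look for an identifiable cause"
    print(f"sample {i}: {x:4.1f}  ({verdict})")
```

With these invented numbers, the third sample (11.4) falls above the upper limit of roughly 10.5 and is flagged, while the other observations are treated as natural variation instead of being acted upon one by one.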


Simple statistical tools, such as the Seven QC Tools, or the Seven Improvement Tools (see Figure 1.11), and the Seven Management Tools (see Figure 1.12), are efficient aids in the work of gathering, structuring and analysing numerical and verbal information. (The Seven QC Tools are addressed in Chapter 10 and the Seven Management Tools in Chapter 22.)

Figure 1.12 The Seven Management Tools (among them the affinity diagram, the matrix diagram, the process decision chart and the activity network diagram), whose prime function is to help structure and analyse verbal information.

1.2.3 Focus on processes

Most organised activities can be regarded as processes, where a process is "a set of interrelated activities that are repeated over time". The process transforms certain inputs, such as information and material, into certain outputs in the form of various types of goods or services. The purpose of the process is to satisfy its customers with what it produces, while using as little resources as possible. The process is supported by an organisation consisting of people and their relationships, resources and tools. Another important task is to identify the suppliers of the process and to give them clear signals about what the process needs, in order to minimize resource consumption and satisfy the customers of the process.

Figure 1.13 A process is a set of interrelated activities that are repeated over time. It transforms resources (equipment, information, manpower and material) from suppliers into results (goods and services) that should satisfy the customers of the process with the smallest possible resource consumption.

The process is the part of the activity that links the past with the future. The process generates data that indicate how well it satisfies the needs of its customers. With statistical tools and models, it is possible to draw conclusions from the process history about its future results, and to obtain the information necessary to improve the process. The process view means that a single piece of data, such as a measurement result or a customer complaint, is not looked upon as a unique phenomenon only. Instead, it is regarded as part of the statistics that can provide information about how well the process is working and how it can be improved. (The process view is discussed further in Chapter 19.)

Processes are often differentiated into the following three types (see Egnell 1994, and Figure 1.14):

• Main processes, sometimes referred to as "operative processes" or "core processes", whose task is to fulfil the needs of the external customers and to improve the products provided by the organisation. These processes have external customers. Examples of this type are processes for product development, production and distribution.
• Support processes, whose task is to provide resources for the main processes. These processes have internal customers. Examples are recruitment, maintenance and information processes.


• Management processes, whose task is to decide on the targets and strategies of the organisation, and to implement improvements in the other organisational processes. These processes, too, have internal customers. Processes for strategic planning, goal setting and auditing belong to this category.

Figure 1.14 An illustration of the processes in an organisation, based on their respective tasks; the main processes serve the external customers. (From Egnell, 1994.)

1.2.4 Improve continuously

External customer demands for quality are continuously growing, new technological solutions appear and new types of business activities are created. Therefore, continuous quality improvement of the goods and services produced by the company is vital. Improving continuously is an important element in a successful quality strategy. Anyone who stops improving soon stops being good. The symbol of continuous improvement is the improvement cycle "Plan - Do - Study - Act", which will be addressed frequently in this book, especially in Chapter 9.

Even without any external pressure, continuous quality improvement work is well justified from a cost point of view. The measurable costs due to defects and other quality deficiencies are large today; it is not unusual for them to amount to between 10% and 30% of sales (see Section 2.4). In most cases, defects incur other costs as well. If a high rate of disturbances has been accepted, this has to be compensated for by processing many products and by keeping large buffer stocks. The corresponding capital costs are not usually registered as costs due to poor quality. Their contribution can, however, represent a considerable part of the total costs. The methodology Six Sigma, which has led to considerable savings in many companies, is based on statistical methods utilized systematically, using facts to bring about continuous improvements. (Six Sigma will be addressed in Chapter 23.)

This happens if we accept that 99% correct is sufficient:
• Nine words are misspelled on each page of your newspaper.
• Almost four times per year you will not get your daily newspaper.
• You will be without electricity, water or heating for about 15 minutes each day.
• At least 8 500 prescriptions will be incorrect each year.
• About 23 700 transfers will be made to the wrong account each day.
• Drinking water in the water supply system will be unusable for about 1 hour per month.

Figure 1.15 An example of the consequences in Sweden a couple of years ago when accepting that 99% correct is sufficient. (From Hedman & Lindvall, 1993.)
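The arithmetic behind the figure is straightforward: a 1% error rate applied to large volumes. The sketch below reconstructs it; the volumes are round figures inferred from the items in Figure 1.15, not data taken from the source.

```python
# "99% correct" applied to large volumes (volumes inferred, not sourced).

error_rate = 0.01  # 99% correct means 1% wrong

transfers_per_day = 2_370_000      # implied by "about 23 700 wrong transfers"
prescriptions_per_year = 850_000   # implied by "at least 8 500 incorrect prescriptions"

print(f"wrong transfers per day: {transfers_per_day * error_rate:,.0f}")
print(f"incorrect prescriptions per year: {prescriptions_per_year * error_rate:,.0f}")

# 1% of a 24-hour day, in minutes (about a quarter of an hour):
print(f"minutes without electricity, water or heating per day: {24 * 60 * error_rate:.1f}")
```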

The basic rule of continuous improvement says that it is always possible to improve products, processes and methodologies while using less resources, i.e. to achieve higher quality at lower cost. In many cases very simple steps can bring about dramatic effects in terms of improved quality and reduced total costs. The real challenge is to find these steps.

Figure 1.16 The basic rule of continuous improvement: there is always a way to achieve higher quality using less resources.


The cornerstone of continuous improvement rests on the mental picture that everything can be done better than we are doing it today: better in the sense of providing greater customer benefit, and better in the sense of using less resources. Thus, what this amounts to is win-win work, to the benefit of employees, customers and company alike. Let us here also comment on the popular slogan "Do it right the first time". This slogan must be interpreted carefully. In order to delight our customers we must be prepared to change, in order to be able to improve. Therefore, we must dare to make some mistakes in the improvement process. We must realize and accept that mistakes will happen, and try to learn from them. When a mistake has occurred, it is essential to turn it into an asset by using the information it provides about the process to increase our knowledge about opportunities for improvement. What must never happen is a hunt for "scapegoats" when mistakes are made. According to Imai (1986, 1997), the Japanese term "kaizen", see Figure 1.17, denotes a conscious, systematic effort to bring about continuous improvements. Kaizen is perceived as focusing on small improvements. This should not be understood to mean that methodologies leading to improvements by leaps could not be included in a successful quality strategy. We want to point out that, in our view, continuous improvements can very well include both small and big improvements; see also Juran (1964).

Figure 1.17 The Japanese word "kaizen" consists of the symbols "kai", meaning "to change", and "zen", which means "good". Together they represent "changing for the better". "Kaizen" is a very common word in Japanese. The term "kaizen" in the quality context was first used by Masaaki Imai (born 1930); see Imai (1986, 1997).


1.2.5 Let everybody be committed

For the quality work to be successful, it is essential to create conditions for participation in the work towards customer satisfaction and continuous quality improvement. An important means for quality improvement is therefore to facilitate the opportunities for all employees to be committed and to participate actively in decision-making and improvement work. Key words here are communication, delegation and training, see Figure 1.18. In his book "Moments of Truth", Carlzon (1987) tells the story of two stonemasons who were making granite blocks square. Asked what they were doing, one of them answered tiredly that he was making granite blocks square, while the other one answered enthusiastically that he was helping to build a cathedral. The employee must have a chance to feel commitment, professional pride and responsibility to be able to do a good job.

• The most important thing to a person is to know and feel that she is needed.
• When a person in freedom is allowed to take responsibility, resources are released which otherwise are not available.
• A person who does not have information cannot take responsibility. A person who has information cannot escape taking responsibility.
(Jan Carlzon, Moments of Truth)

Figure 1.18 Important elements in stimulating participation in an organisation are information and delegation. (From Carlzon, 1987.)

The wording of this cornerstone aims to communicate a positive outlook on people. Those who are given the chance to do a good job and to feel professional pride, and who are recognized when they have performed well, will also be committed to their job. This leads to improved product and process quality. Creating opportunities for participation for all involved means working actively to remove the obstacles to commitment which often exist in our organisations today. But it also means that we as individuals have to take responsibility. In this effort to create collaboration, we need to individually develop our self-reliance, conversational abilities, purposefulness, co-creativity and ability to learn from experience, see Eklund & Lund (1999). As a consequence, the management should support and stimulate their employees to develop these and other qualities.

Participation and commitment are achieved through the delegation of responsibility and authority. It is important to change vicious circles into good ones, see Figure 1.19. Nowadays, the discussion is not only about creating many job opportunities, but also about making these jobs provide meaningful, stimulating tasks with real responsibility. Job satisfaction is both an important target and a vital means to achieve high quality.

Figure 1.19 A vicious circle and a good circle, linked to the effect of delegating responsibility and authority. In the vicious circle, top management lack confidence, inspect and control details, the employees lose motivation, and the results are inferior. In the good circle, top management has confidence, delegates responsibility and authority, the employees are motivated, and the results improve.

Not only everyone within the company, but also all suppliers of material or components, must be involved in the quality work. An obvious tendency in large companies today is to drastically reduce the number of suppliers. Instead of choosing the supplier who offers the lowest price, they choose to establish connections with a small number of suppliers, to achieve increased commitment, responsibility and quality awareness. At the same time, the suppliers are given an increased responsibility for the development and manufacture of various subsystems, and regard this as an interesting business opportunity. Tetra Pak, for example, have cut down the number of suppliers in the last few years, from around 3000 to 255. IKEA (a Swedish retailer of furniture and home appliances operating on a global market; in August 2001 the IKEA Group had 143 stores in 22 countries, with some 20 further stores owned and operated by franchisees outside the Group) has reduced the number of suppliers in Europe during the period 1995-2000 by approximately 35%, while the suppliers' responsibilities have increased and the purchasing volume has virtually doubled. At SKF, suppliers are responsible for a maintenance system with status monitoring. The employee who makes a screw for a car seat probably does not feel that he is helping to build a car, but the supplier who is responsible for the whole seat probably does.

1.3 The Importance of Systems Thinking

Even if we have structured the operations in our efforts to make the processes clear, we must never forget that the various processes are interdependent and affect one another. The processes make up a system, and therefore long-term systems thinking is necessary to achieve success. Systems thinking, or the ability to see the overall view and how the various integral parts affect one another, is an important element of successful quality improvement. Senge (1990) introduced a graphic method to describe system effects such as delays and negative or positive feedback and reinforcements, and calls typical system structures system archetypes (the word "archetype" comes from the Greek "archetypos", meaning prototype or origin). Senge regards systems thinking as an essential ingredient in a learning organisation; his views on learning organisations are discussed in more detail in Chapter 17.

An important theme in TQM, which has not yet been emphasized sufficiently, is the realization that all involved can become winners. In every buyer-seller relation both parties should feel content when the purchase is concluded. Both parties have benefited from the deal.


This is an obvious example of the attitude that Deming (1993) calls win-win. All too often in the business world a win-lose relation is created, i.e. what one party wins is lost by the other. In the long run this makes losers of everyone. The cultures that can create win-win relations are the ones that will be most successful in the long run. To create a win-win relation we have to trust one another. The Japanese professor Kaoru Ishikawa pointed out that we make people untrustworthy by not showing them enough trust; in doing so we create a self-fulfilling prophecy. If we base co-operation on trust, it is possible to do business on terms that are mutually beneficial.

Deming (1993) stresses that it always pays to expand the system in view. Where the focus used to be on one's own organisation, or earlier still on one's own division or function, we can now see a clear tendency to expand the view, so that customers and suppliers are also regarded as important parts of the system. By joining forces to improve the whole system and make it more efficient, all concerned can be winners. This development is partly due to a heightened insight into the win-win perspective, but also to a clear tendency to transfer responsibility for the value-adding process to suppliers through outsourcing. The heightened focus on the whole supplier chain is often referred to as Supply Chain Management, see Christopher (1998). Today, this point of view is taken even further, as indicated by the concept of value-creating networks; see e.g. Normann (2001).

We can see this systems view spreading, for example in the form of relations between the private and public sectors, mainly in health care and education. An interesting deployment of the systems view is the Swedish project "Progressive Åseda" (Åseda is a community in the south of Sweden with about 2600 inhabitants), where values, methodologies and tools from TQM are used to strengthen the entire community and turn depopulation into in-migration; see for example Helling et al. (1998) and Fredriksson (2002). Networks, where companies or organisations cooperate, are another result of increased systems thinking. The forming of networks is facilitated by the development of information technology.


1.4 Notes and References

In the closing section of each chapter we will suggest references for further study of the subjects discussed in the chapter. Obviously, this is a subjective and partly incomplete selection, but our aim is to make it easier for anyone interested to find literature for further study. Having discussed the concept of quality in this part of the book, as well as its development and significance in general terms, we will in later sections discuss different methodologies and tools that are put into practice in modern quality philosophy.

Books with a focus on service quality are, for example, Grönroos (1990), Edvardsson & Thomasson (1994), Gustafsson & Johnson (2003) and Zeithaml et al. (1990). Edvardsson & Gustafsson (1999) provide an up-to-date description of research in the quality area (both goods and services) in the Nordic countries. An early book on service quality is Normann (1984).

In Garvin's classic book "Managing Quality" from 1988, different aspects of the terms quality and quality control are presented. Garvin discusses observations of the quality of air conditioning equipment. He found significant differences between manufacturers, both in terms of internal and external quality. A comparison between Japanese and American producers provides a striking picture of the differences between Japanese and American quality. Incoming goods to American companies had a defective percentage of between 0.8% and 16%, while the corresponding figure for Japanese companies was between 0% and 0.3%. In the assembly department, American companies showed between 8 and 165 defects per 100 produced units, whereas the Japanese companies had between 0.15 and 3 defects per 100 units. Furthermore, the American companies had more than 20 times as many warranty claims as the Japanese companies. Similar results have emerged in the car industry, see for example Womack et al. (1990) and Dertouzos et al. (1989).

Other books well worth reading about the quality concept in general, and the importance of management commitment in particular, some of which have indeed become classics, are Deming (1986, 1993), Juran (1951, 1989, 1992), Feigenbaum (1951), Crosby (1979) and Oakland (1999). An amusing book providing a philosophical perspective on quality is "Zen and the Art of Motorcycle Maintenance" by Pirsig (1984). Hoyer & Hoyer (2001) introduce and discuss definitions of the quality concept made by various quality experts. For discussions about what a successful quality strategy is and how different persons view the concept, see for example Boaden (1997), Dean & Bowen (1994), Grant et al. (1994), Spencer (1994) and Hellsten & Klefsjö (2000).

To conclude, we would like to point out that it is not known exactly when the TQM (Total Quality Management) concept was created, or by whom. Some say that it was created in 1984, when the NALC (Naval Aviation Logistics Command) was about to implement quality improvement according to Ishikawa's ideas in the book "Total Quality Control" but did not like the word "control". One of the employees, Nancy Warren, is said to have suggested "management" instead: "Deming is talking about management, why don't we call it Total Quality Management?" (This version was given to one of the authors by William Latzko in 1998.) Others maintain that the origin of the concept is a mistranslation from Japanese, see Xu (1994); in Japanese there is no difference in meaning between the terms for "control" and "management". Park Dahlgaard et al. (2001) contend that the concept may have been created by Armand Feigenbaum, but point out that there is no actual proof of this. William Golomski (1924-2002), a famous American university teacher and consultant in quality management, maintained, however, in personal communication, that Koji Kobayashi, former executive at NEC (Nippon Electric Company), was the first to use the term TQM, in his speech of thanks when he received the Deming Prize (see Section 21.1) as early as 1974.


2 Quality and Success

Improved quality affects the success and prosperity of an organisation in many ways. Some of these are:

• more satisfied and loyal customers
• lower employee turnover and sick leave rates
• a stronger market position
• shorter lead times
• opportunities for capital release
• reduced costs due to waste and rework
• higher productivity

Figure 2.1 illustrates how Edwards Deming viewed the connection between improved quality and company prosperity.

Improve quality → Costs decrease because of less rework, fewer mistakes, fewer delays and snags, and better use of machine-time and materials → Productivity improves → Capture the market with better quality and lower price → Stay in business → Provide jobs and more jobs

Figure 2.1 The importance of quality, as expressed by Deming already in his seminars in Japan in 1950. (From Deming, 1986.)

Total Quality Management aims at creating "increased customer satisfaction with a reduced amount of resources". It concerns having satisfied customers who keep coming back, which leads to increased profitability. Another objective is to reduce the costs of the necessary resources, for instance by reducing faults and defects with their associated costs. What it all comes down to is bringing about "top line" improvements by achieving increased customer satisfaction, while running improvement projects that create cost reductions, which will show in the "bottom line".


2.1 Quality and Profitability

2.1.1 Relations between quality and profitability

Figure 2.2 illustrates some relations between factors that affect quality and profitability, some of which will be addressed in this chapter. The many ways in which improved quality can result in improved profitability can lead to remarkable leverage effects. Improved profitability can be used to make the quality gap even wider in relation to competitors.

Figure 2.2 Relations between improved quality and increased profitability. Higher quality gives improved internal quality (less re-work, fewer adjustments, disturbances and scrappings, smaller buffer stocks and other reserves, less fixed capital and lower costs) and improved external quality (fewer complaints, allowing either a higher price or a lower price leading to larger market shares). Together this gives larger margins of profit and improved profitability.

Poor internal quality leads to various problems in production. For manufacturing companies this requires large buffer stocks and reserves to eliminate the risk that problems in one production section will have serious consequences for other sections in the production chain. By enhancing the internal quality, it is possible to drastically reduce the need for intermediate stocks and other reserves. High internal quality is a prerequisite for the Just-In-Time (JIT) approach. A company that works hard with its logistics must therefore work hard with its internal quality, and with the quality of its suppliers too. Conversely, it is possible to use a reduction of buffer stocks and other reserves to bring quality problems into the light. Once these have been eliminated, further stock reductions can be implemented, which will in turn lead to further quality improvements.

The connection between "lower costs" and "larger market share" by way of "lower price", indicated by a dashed line in Figure 2.2, also deserves a comment. It is, of course, possible to achieve short-term profitability improvements by offering lower prices, which will probably lead to larger market shares. Generally, however, this is not a long-term solution. Instead, resources that are set free should be invested in production capacity and improved quality, which will provide an advantage over the competition. A method used in Japanese industry, according to Ahlmann (1989), is to invest increased margins of profit in the further development of new products and in quality improvements. In this way they have gained a lead that will no doubt be very difficult and costly for the competitors to reduce.

2.1.2 Studies of quality and profitability

The investigation by Hendricks & Singhal

Of the many research findings that support the connection between Total Quality Management and profitability, an investigation by the American researchers Kevin Hendricks and Vinod Singhal is particularly worth mentioning; see Hendricks & Singhal (1996, 1997, 1998, 1999). They examined some 600 recipients of national or regional American quality awards, based on similar values and criteria. About 75% of the investigated companies are manufacturing companies. The investigated factors included turnover, total assets, number of employees, profit margins and return on capital.


Figure 2.3 Results from the investigation by Hendricks & Singhal (1999) for the post-implementation period, i.e. from one year before receiving the award until four years after. The bars show the average percentage change for award winners and control firms in operating income, sales, total assets, number of employees, return on sales and return on assets. (From Hendricks & Singhal, 1999.)

Each company in the investigation was studied during two periods. One, the implementation period, started approximately six years before the company received an award and finished the year before it received the award. The other period, the post-implementation period, started a year before the company received its award and finished four years after the reception. During the implementation period, no difference in financial terms could be discerned between the award-winning group and a control group of companies that had not received any award but were similar in other respects. The investigation indicates that the savings made during the implementation phase were of the same size as the investments made in education and change work. In the post-implementation period, the difference between the two groups, in financial terms, was clearly discernible. The average increase in operating results was 91% for the award winners, but only 43% in the control group. Total assets of the award winners increased on average by 79%, compared with 37% in the control group; see Figure 2.3. In line with the financial and efficiency measurements, the share index (measured by the S&P 500 Index; S&P = Standard & Poor) also increased by 115% in the award winner group, and by 80% in the control group.

The Swedish study by Eriksson & Hansson

Results similar to those of Hendricks & Singhal were recently obtained in a study by Eriksson & Hansson (2002). They studied Swedish organisations which had received a national, regional or in-company quality award and compared the results to a branch index of similar organisations. In this study the implementation period consists of the three years preceding the year the award application was submitted. The post-implementation period consists of the three years immediately following the implementation period. For the post-implementation period the median difference between award recipients and the branch index was positive for all studied key figures at the 5% significance level; see Figure 2.4 and Eriksson & Hansson (2002).

Figure 2.4 The median value of the difference between the award recipients and the competitors, and between the award recipients and the branch average, for the indicators studied (sales, total assets, number of employees, return on sales and return on assets) during the post-implementation period, here defined as the three years starting the year before receiving the award. A positive percentage means that the award recipients outperform the competitors and the branch average, respectively. (From Eriksson & Hansson, 2002.)


The Baldrige Index

Another investigation that supports the connection between quality and profitability is carried out every year by NIST (the National Institute of Standards and Technology) and is called "The Baldrige Index". It is a follow-up of the market price of those organisations that have received the Malcolm Baldrige National Quality Award. Every year, NIST invests a fictitious 1,000 USD on the first day of the month after the announcement that an organisation has received the award. For the three whole-company award recipients during 1991-2000 that are quoted on the stock exchange (Eastman Chemical, awarded in 1993, and Solectron Corporation, awarded in 1991 and in 1997), the index has increased 4.5 times more than the S&P 500 index, yielding a 512% return on investment. For these companies together with the parent companies of the 17 subsidiaries that had received the award, the rate was 2.9 times better for the award winners, with a 322% return on investment. (The survey behind "The Baldrige Index" can be studied on the Internet at "www.nist.gov/public_affairs/stockstudy.htm".) Studying the group of 61 companies, or parent companies of those, which have received site visits, it turns out that this group grew 15% more than the S&P 500 index.

Q-100 in Quality Progress

In Quality Progress a similar measure, known as the Q-100, is regularly studied. (Further details on Q-100 can be found in George (2000); the companies are listed in the Quality Progress, September 2000 issue, pp. 24-25.) This is an index based on 100 of the 500 companies on the S&P 500 list, selected because they have a good quality control system and a successful quality strategy. Public information is used to assess the various company systems. As a benchmark, Q-100 was given the nominal value of 100 on January 1, 2000. The index was 101.26 on June 30, 2000, i.e. an increase of 1.26% in the first six months. This can be compared with the S&P 500, which decreased by 0.5% in the same period. During the one-year period from September 30, 1999, the Q-100 increased by 14.24%, while the S&P 500 increased by 13.28% (Quality Progress, December 2000, p. 24). During the first quarter of 2002 the Q-100 was up 1.34%, compared to a gain of 0.27% for the S&P 500.

The Wrolstad & Kreuger investigation

Wrolstad & Kreuger (2001) reported on a study of 25 companies that had received a quality award in one of the states of the US during the period 1988-1996. The majority of states have awards that are based on the Malcolm Baldrige Award. The prerequisites vary slightly between different awards. For example, in several cases there are different levels of distinction, such as bronze, silver and gold. As shown in Figure 2.5, this study supports the view that companies receiving quality awards are successful. In particular, the award winners succeeded far better than the companies in the control group in terms of return on equity and operating margin.

Measure              Award winners   Match companies
Operating margin     46.77%           2.69%
Return on assets     10.28%          -5.50%
Return on equity     18.73%          -5.91%

Figure 2.5 Average changes over a four-year period, from two years before to two years after the company received a state quality award in the US, compared with comparable companies (by size and SIC code) that have not received any award. (From Wrolstad & Kreuger, 2001.)

2.2 Quality and Product Development

It is becoming increasingly important to create conditions for high quality already in the design of products. A change made at an early stage of product development is much less costly than changing a product that is already in production or, still worse, already out on the market; see Figure 2.6. Figure 2.7 illustrates change activities in Japanese and American companies, respectively, in the mid 1980s. Even if the situation has improved since then, there is still a gap to bridge.

Figure 2.6 The costs due to design changes grow rapidly depending on where in the product life cycle the change occurs. (The original figure shows the relative cost of a design change on a logarithmic scale, from 1 to 1000, plotted against time.)
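The logarithmic 1-10-100-1000 scale in Figure 2.6 corresponds to the common rule of thumb that each later phase multiplies the cost of a design change by roughly ten. The phase names in the little sketch below are our own illustrative assumption; they are not taken from the figure.

```python
# Rule-of-ten sketch: relative cost of a design change per phase.
# Phase names are illustrative assumptions, not taken from Figure 2.6.

phases = ["design", "production preparation", "production", "on the market"]

for i, phase in enumerate(phases):
    print(f"change during {phase}: relative cost {10 ** i}")
```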

Figure 2.7 Illustration of change activities (the number of design changes over time) in a Japanese and an American company, from 20-24 months before the start of mass production until 3 months after it. The Japanese company completes most of its changes well before the production start, while the American company is still making many changes around, and even after, the start of mass production. (From Sullivan, 1986.)

Another strong argument in favour of early quality improvements is that product life cycles are becoming shorter and shorter. Short life cycles reduce the opportunities to make successive improvements and to "feel one's way" with a product on the market. The product has to be in full working order already at the market introduction, so that it yields profit already during the sales increase; see Figure 2.8.

Conclusions regarding traditional, technologically orientated companies:
• Let marketing lead development and design.
• Vary less important production operations, for instance by using sub-suppliers.
• Collect profit during growth, not afterwards.

Figure 2.8 A competitive strategy at the Japanese company Casio, whose products include watches and mini calculators. Casio utilizes its flexibility to accelerate and shorten the product life cycles. By keeping a keen eye on the market while analysing the customer requirements, these can quickly be converted into technical solutions. Then, by reaching large volumes rapidly, it is possible to secure a large share of the market. When the competitors have reached large volumes, Casio can cut its prices, still at a profit. The competitors, on the other hand, will not be able to catch up in time to cover their costs. The original figure sketches profit development over time for Casio and its competitors. (From Ohmae, 1982.)

Focusing on quality improvements often brings other positive effects of importance for productivity, for example decreased staff turnover and fewer staff on sick leave. As just one example, the small Swedish company Fresh AB, producing ventilation products (see www.fresh.se), through a new focus on quality and leadership changed a sick leave rate of 8% into a "healthy presence" of 98%, shortened delivery times by 75% and increased its solidity by 460%, during the worst depression the Swedish building trade has ever experienced.

2.3 Quality and Productivity


Every kind of adjustment, rework or scrapping leads to a reduction of productivity. Earlier, productivity and quality were often felt to be in opposition to one another. It was believed that higher quality could only be achieved at the expense of productivity. This would be true if the only possible quality-improving activity were to increase the amount of inspection. In a modern quality approach, such reactive methods are avoided. Improvements have to be made already in the development of the product and the production process. There are an infinite number of ways to improve quality; see Figures 2.9, 2.10 and 2.11.

Figure 2.9 An illustration from the car industry, showing the connection between high quality and high productivity. The scatter plot shows car assembly plants in Japan, North America (Japanese- and US-owned), Australia, Europe and among new entrants, with productivity (hours/vehicle) plotted against quality (defects/100 cars), and marks regions of high productivity and high quality, low productivity and low quality, and high productivity but low quality. (From Andersson, 1991, after an idea by Krafcik & MacDuffie, 1989.)


Figure 2.10 The fact that increased productivity is closely related to quality is illustrated by these results from Saab Automobile in Sweden, showing productivity (hours per car) and quality (fault points per car) for the years 1989-1996. The quality of the Saab 9000 series was improved considerably over a number of years, and productivity followed the same trend. The fall in productivity in 1996 is said to be due to training activities in the manufacturing process of the Saab 9-5. (From the Swedish magazine Ny Teknik, 1997:23.)

The connection between quality and productivity is clear from the productivity definition used in Japan; see Figure 2.11.

What is productivity? Above all else, productivity is an attitude of mind.
• It is the mentality of progress, of the constant improvement of that which exists.
• It is the certainty of being able to do better today than yesterday, and less well than tomorrow.
• It is the will to improve on the present situation, no matter how good it may seem, no matter how good it may really be.
• It is the constant adaptation of economic and social life to changing conditions.
• It is the continual effort to apply new techniques and new methods.
It is the faith in human progress.

Figure 2.11 For almost 50 years, the Japan Productivity Center has used the above definition of productivity, a definition that appears very "Japanese". However, it was in fact formulated by "the European Productivity Agency" at a congress in Rome in 1958. The definition clearly indicates the close and positive connection between the approaches to productivity and quality. (From Helling, 1991.)


2.4 Costs of Poor Quality

The old view

As recently as the 1980s, product quality was often measured using the percentage of defective (or non-conforming) units, i.e. the share of produced units that do not meet stipulated requirements. (Compare Crosby's definition in Figure 1.2 of quality as "conformance to requirements".) This is often an inadequate quality measure, as sometimes not even zero percent defective is sufficient. There may still be a great potential for improvement, by reducing variation or by finding completely new solutions that better satisfy the customers' needs.

Previously the notion "optimal quality" was often used. The prevailing belief was that there was an upper limit for product quality, and that improvements above this limit would not be profitable. This view was based on a static system, disregarding all the opportunities at hand to improve goods and services without increasing costs. This could be achieved by making use of acquired knowledge and remembering the basic rule of quality improvement: it is always possible to achieve higher quality at a lower cost. By making the most of all available knowledge and experience, we can achieve a better result at a lower cost. As mentioned earlier, quality was often thought to be in opposition to productivity. The idea of "quality costs" prevailed, and efforts were made to find the optimal balance between costs due to defects and "preventive quality costs"; see Figure 2.12.

Figure 2.12 The old and wrong view of "quality costs": failure costs were assumed to fall, and preventive costs to rise, with a decreasing percentage defective, suggesting an optimal balance in between. The possibility to make use of acquired knowledge to improve quality was forgotten, and instead a static view of production was adopted.

Joseph Juran was one of the first to consider costs in connection with quality issues. (Juran frequently mentioned "gold in the mine", by which he meant resources that already exist in the organisation, waiting to be dug out.) Already in the first edition of the "Quality Control Handbook" from 1951 he spoke of the "cost of quality", which consisted of four parts (the notions were originally created by Armand Feigenbaum in 1943, when he worked at General Electric, and published in 1945; see also Feigenbaum, 1951):

• internal failure costs, i.e. costs arising when it is detected, within the organisation and before delivery to the customer, that products or material deviate from set requirements. Examples of such costs are costs for scrapping and reworking as well as stand-still costs. Furthermore, different indirect costs should be included, such as costs for cancelled but not rebooked meetings and for waiting times.
• external failure costs, i.e. costs due to defective products, where the failure is detected after delivery to the customer. Examples of such costs are complaints, warranty costs and goodwill losses.
• appraisal costs, i.e. costs for inspection of products and material in order to check whether they fulfil the requirements at different stages of production. Examples of such costs are those for acceptance sampling of material from suppliers, inspection in the production processes and different forms of inspection of finished products.
• prevention costs, i.e. costs for different quality-stimulating activities throughout the development and production process. This includes costs for implementing quality systems, education in quality, and audits of suppliers.

Costs of poor quality

The word "quality cost" is a very unsuitable term, which fortunately is falling out of use. It signals that quality costs.


1994 was a good quality year for SKF. We educated 35,000 employees in the company's quality programme. In this education we invested about 10 million USD. All of the 42,000 employees will pass through our quality programme. The effects have been gratifying. The number of complaints has decreased by 30% and we are approaching our goal of zero defects.

Figure 2.13 Few companies as yet discuss their strategic quality work in their annual reports. One exception is SKF in 1994, where the then CEO Mauritz Sahlin commented on the financial year and the focus on quality education. (Translated from the Swedish annual report.)

Certainly, investing in precautionary measures causes expenditure, but what really costs is the lack of quality. Costs arise when defective units are manufactured, or services are performed, in such a way that rework or compensation in different forms is necessary. Instead of "quality costs", the term costs of poor quality should be used. Nowadays Juran uses this term, which he defines as "the costs incurred by defective units, imperfect processes or lost sales revenue" (see e.g. Juran, 1999, p. 8.4). A model for costs of poor quality should, in our view, only include internal and external failure costs (there is at present no standard terminology in this area; sometimes an intermediate term, "quality-related costs", is used); see Figure 2.14.

Figure 2.14 A system for costs of poor quality should only comprise internal and external failure costs, i.e. direct and indirect costs for quality defects or errors.
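As a small numerical illustration of this model, the sketch below sums internal and external failure costs and expresses them as a share of sales. All figures are invented; only the two-way classification follows Figure 2.14.

```python
# Costs of poor quality = internal + external failure costs (invented figures).

internal_failure = {"scrapping": 1.2e6, "rework": 0.8e6, "stand-still": 0.5e6}
external_failure = {"warranty": 0.9e6, "complaint handling": 0.3e6}

sales = 30e6
poor_quality_costs = sum(internal_failure.values()) + sum(external_failure.values())

print(f"costs of poor quality: {poor_quality_costs / 1e6:.1f} million "
      f"({poor_quality_costs / sales:.1%} of sales)")
```

With these invented numbers the result, about 12% of sales, happens to fall within the 10-30% range for industry cited below.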


The costs of poor quality in industry are often estimated at 10-30% of sales. Several investigations in different organisations confirm this (see e.g. Sörqvist, 1998). In service organisations the costs of poor quality are even higher (Suminski, 1994); up to 35% has, for instance, been estimated by Crosby (1988). As an example, Arnrup & Edvardsson (1992) mention that SAS (Scandinavian Airlines System) had costs for lost luggage of about 100 million SEK. (Although the situation has improved since then, about 80 suitcases per day are still delayed or lost in connection with SAS flights at Stockholm Arlanda; the Swedish newspaper Svenska Dagbladet, May 16, 2001.)

Another thing to remember is that following up costs of poor quality does not automatically solve quality problems. As a rule, the follow-up system does not suggest specific actions, but merely indicates where to start looking for problems in an organisation. Furthermore, the time between the detection of a problem and its registration in the system that monitors poor quality is often long. This makes it more difficult to trace what is wrong, and possibly also to identify the measures that would lead to cost reductions. When a company decides what to measure, it has already identified a problem area. This in turn means that it is probably sufficient to adopt fairly rough approximations after improvements have been made, and the cost of building a special computer system to monitor problem areas can then be avoided. Costs for quality defects can act as quality indicators. Intermediate goals for the improvement work can, for example, be set up in terms of these costs.

Here we should also like to mention ABC, Activity Based Costing, a principle by which companies apportion their indirect costs between the activities where they arise, using "cost drivers" such as method development, storage management and production control. This method provides a more accurate distribution of the indirect costs than a standardised overhead calculation. The ABC analysis does not affect the direct costs, which are treated as usual. Costs of poor quality and ABC have a natural connection: both notions seek to trace the sources of various costs, the cost drivers. Costs of poor quality concerns finding the causes of faults and defects that may arise, while ABC apportions the costs between the places where they arise. The ABC analysis shows what it actually costs to scrap a detail or to modify a component.
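A minimal sketch of the ABC calculation might look as follows. The cost pools, driver names and figures are all invented for illustration; the point is only the mechanism of apportioning indirect costs via cost drivers instead of a flat overhead percentage.

```python
# Activity Based Costing sketch: apportion indirect costs via cost drivers.

# Indirect cost pools with yearly cost and total driver volume (all invented).
pools = {
    "storage management": {"cost": 400_000, "driver_volume": 2_000},  # pallet moves
    "production control": {"cost": 300_000, "driver_volume": 600},    # work orders
}

# Driver consumption attributed to one product line (invented).
usage = {"storage management": 50, "production control": 30}

apportioned = sum(
    pool["cost"] / pool["driver_volume"] * usage[name]
    for name, pool in pools.items()
)
print(f"indirect cost apportioned to the product line: {apportioned:,.0f}")
# 400000/2000 * 50 + 300000/600 * 30 = 10 000 + 15 000 = 25 000
```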

2.5 Quality, Work environment and Work development

There is a growing understanding of the fact that the chances of achieving high quality are linked to the work environment and the opportunities for work development that the company can offer. As an instance of this, the American psychologist Likert, see Likert (1961), concludes that managers who focus only on getting the work done are less successful than those who also emphasize social relations at work. Hackman & Oldham (1976) maintain that jobs characterized by a wide deployment of personal resources, as well as comprehensive and meaningful tasks with feedback of results, create the necessary prerequisites for commitment and motivation; see Figure 2.15. In 1987 the American Society for Quality, ASQ, carried out an extensive investigation which showed that American companies saw the greatest potential for quality improvement on the "soft" side. A Swedish survey almost ten years later (Axelsson & Forsberg, 1998) shows the same thing.

Figure 2.15 A model characterizing good work development: work content and independence give perceived meaningfulness of, and responsibility for, the work, while feedback gives knowledge about working results. Together these create motivation and commitment, leading to higher quality and productivity and to lower absence and staff turnover. (After Hackman & Oldham, 1976.)


Many people feel that participating in various improvement activities, such as improvement groups and quality circles, provides an important contribution towards a favourable work development. By learning and practising structured problem solving and using different methodologies and tools for improvement, we create meaning and responsibility at work. Pascale & Athos (1982) maintain that the Japanese success was essentially due to their ability to generate active participation and commitment from all employees, thereby creating possibilities for self-determination and self-responsibility. One of many indications that this area is very important is an American study (Harper, 1992), showing that the greatest challenge for management is still to create opportunities for the employees to be committed to the daily improvement work. (Compare the element "Create conditions for participation" in Figure 1.9.)

Sometimes the importance of turning the organisation pyramid upside-down is brought to attention, but a more fruitful approach is perhaps to view the activities from the perspective of the various parties and stakeholders; see Figure 2.16.

Figure 2.16 Organisational stakeholders (management, the customer, the staff member and the world around) with their interests (efficiency, quality, working-life quality and the working environment), and an attempt to relate some notions from organisation development, such as Taylorism, process innovation, Total Quality Management, Human Resource Management and socio-technology, to these stakeholders and interests. (From Eklund, 1998.)

One explanation why the interest in Total Quality Management has increased in recent years is probably that this concept is an overall strategic management system, which is generic and harmonious in the way it balances the interests within the organisation. There is also strong empirical support for the belief that a holistic view in terms of work environment, efficiency and quality will increase the chances of success (Axelsson, 1995, 1997; Eklund, 1998). Figure 2.17 illustrates employees' views on quality improvements and their role in the improvement work. It shows a list of employee answers to the question "what does quality improvement mean to me?", compiled by Hans Fries, retired president of Unilever Company, Sweden. Motivation, participation and commitment are three notions that are closely related, both to one another and to quality and to customer and employee satisfaction.

• More time for real work
• We can do a better job
• We think we can work more effectively
• More satisfaction: easier to have an influence, results are visible, new challenges, greater responsibility
• Greater openness in the company
• Better understanding of the whole
• Invites own initiatives
• More cross-functional contacts
• Reduced number of errors and failures
• Takes away stress
• Less overtime
• Simpler routines
• Better sense of our own success and development
• Pride in work

Figure 2.17 Spontaneous comments from employees to Hans Fries, former president of Unilever Company, Sweden, on the question "what does quality improvement mean to me?".


We are convinced that the trend is moving towards a holistic approach, in which quality and work development are integrated into "successful quality and work development"; thoughts along these lines are introduced in, for instance, Axelsson & Bergman (1999). The quality work of tomorrow is not only a matter of developing products and processes but, equally important, also of creating opportunities for those involved in these processes to develop in harmony with the progress of the organisation. At the end of the day, quality is about people.

2.6 Notes and References

The PIMS ("Profit Impact of Market Strategy") database was created in the 1970s, when staff at General Electric wanted to find key factors for profitability. In the course of time the database has grown, and having been managed by Harvard Business School for a period of time, it is now run by an independent company. The database supports the claim that customers are prepared to pay more for a product of higher quality than what it costs to attain this higher quality. The PIMS database and conclusions regarding the connection between quality and profitability are discussed in Buzzell & Gale (1987). In his book "Managing Quality", Garvin (1988) also discusses the connection between quality and profitability.

The book by Womack et al. (1990), "The Machine that Changed the World", is a description of "the International Motor Vehicle Program" (IMVP), a thorough study of the development of the car industry. The study was performed over five years at a cost of five million dollars, at the Massachusetts Institute of Technology, MIT. In this book, which is well worth reading, Henry Ford's mass production is compared to the Japanese lean production. A sequel to that book, "Lean Thinking", generalizes the previous discussions; see Womack & Jones (1996). Berggren (1992) reports on Swedish experiences from new production forms at Volvo, mostly from the Kalmar and Uddevalla plants.


The term "cost of quality" was created by Feigenbaum in 1943 and later used by Juran. Crosby was another early advocate, and one of the first to give voice to thoughts along the same lines as Feigenbaum and Juran. Crosby talked about the "price of non-conformance". Later he broke up the costs into reworking costs, rejection costs, warranty costs and inspection costs. This break-up was presented in his book Crosby (1965). He also referred to this notion in his famous book "Quality is Free" from 1979. The terms "quality costs", qualityrelated costs and costs for poor-quality are also discussed in books such as Campanella (1999) and Dale & Plunkett (1991). Another line of development starts with the observation that present financial and accounting systems are insufficient, see Johnson & Kaplan (1987). The Activity Based Costing principle has evolved from this awareness. A further development of these ideas, which is close to the TQM philosophy, is Activity Based Management, see for instance Brimson (1991a, b). Similar ideas, focused on shortening lead times, have resulted in Time Based Management, see for example Stalk & Hout (1990). In addition to the research findings mentioned above, demonstrating a positive connection between quality development and financial success, Easton & Jarrell (1998) provide a readable account of this. Similar questions are also discussed in Anderson et al. (2000) and Edvardsson et al. (2000b). The connection between quality strategy and work development is highlighted in, for instance, Axelsson (1994, 1995), Axelsson & Bergman (1999), Eklund (1997, 1998) and LjungstrOm & Klefsjo (2002).


3 The History of the Quality Movement

The interest and will to have satisfied customers have existed for centuries. Before the age of industrialism, craftsmen had direct contact with their buyers, and could thereby, already during manufacture, make sure that the product turned out as intended. With the entry of industrialism this direct contact was lost, finished units had to be inspected instead, and statistical methods were eventually needed to make these inspections more efficient. In the course of time, inspections have been replaced by contributions at gradually earlier stages in the product life cycle. This chapter aims to give a summary of the history of quality work and improvements, and to introduce some persons who have played a particularly important role in this evolution.

3.1 Prehistory

Humans have worried about mistakes and their consequences, and about the quality of products they have bought or work they have performed, from time immemorial. Some prehistoric examples of this quality awareness are Codex Hammurabi, the pyramids in Egypt and the Roman aqueducts.

Codex Hammurabi and the pyramids

The Babylonian king Hammurabi (1792-1750 BC) heralded present-day labour legislation and product liability.1 This is illustrated in Figure 3.1.

1 Codex Hammurabi laid down minimum wages, control possibilities of agreements and liabilities for financial transactions.


"If a building falls into pieces and the owner because of this gets killed, the builder shall also be killed. If one of the owner's children is killed, one of the builder's children shall also..."

Figure 3.1 Free interpretation of Codex Hammurabi (about 1750 BC), which contains 282 sections and an appendix with instructions on how to apply the law.

Building the pyramids in Egypt about four thousand years ago called for extremely exact dimensional accuracy and surface smoothness. Mural paintings found in the pyramids show how the smoothness of the surface was monitored using rods made of bone and a rope strung across the boulders. This is illustrated in Figure 3.2. This picture has become the logotype of the Juran Institute, one of the major consulting companies dealing with quality issues in the US.

Figure 3.2 An Egyptian inspector supervising the making of building blocks intended for a pyramid. The men to the left and on top of the block are working on the stone, while the man to the right is measuring the block with a rope. The illustration was found in a king's grave in Thebes, dating back to around 1450 BC. (From Juran & Gryna, 1980.)


Roman architecture

A proof of the Roman capacity for designing and building robust structures is the Pont du Gard aqueduct in the Gard Valley in France, which has lasted more than 2000 years. Pont du Gard (see Figure 3.3), which reaches across the river Gardon, is 275 metres long and 50 metres high, and is probably the first bridge built by the Romans. Pont du Gard was built on the basis of more than 200 years of experience of Roman arches, and the design is probably the result of assiduous improvement work, which led to progressively more reliable solutions. The wind force would have to reach 215 km/h before the columns on the second tier would lift. The strongest gales in the area reach about 100 km/h.

Figure 3.3 Pont du Gard, in Southern France, is 275 metres long and 50 metres high. It was built by the Romans more than 2000 years ago, and will withstand wind forces of about 215 km/h, which is roughly twice as much as the top wind forces measured in the area. (From the Swedish journal "Illustrerad Vetenskap" 1990:1, p. 20.)


3.2 Taylorism and Inspection

Taylorism

Productivity was low in the US at the beginning of the twentieth century. Important reasons for this were a poorly educated labour force, low morale at work and distrust of industrial management, as a result of the large unemployment rate in the 1870s. At that time, Frederick Winslow Taylor (1856-1917) used time studies to break down every working operation into components, which would lead to increased productivity. All mental work should, according to Taylor, be moved from the shop floor to the offices. The management should conceive the best way to perform different tasks and then make sure that instructions and regulations were observed in practice. A large number of inspectors were employed to find defective units.2 In this way Taylor's organisation principle "Scientific Management" (Taylor, 1911) was created. The assembly line in the Ford factory was an important application of Taylor's ideas. Taylorism has been severely criticised for its mechanical and authoritarian concept of man, but Taylor's theories are based on a scientific approach, and he must be regarded as a progressive thinker. One of Taylor's four main principles was that "work assignments should be studied and designed according to scientific methods". In a way, therefore, it is fair to say that the element "base decisions on facts" adheres in part to Taylor's thoughts.

To achieve problem-free assembly of parts coming from different parts of the production chain, or from different suppliers, it is very important to reduce the variation in different dimensions. Setting suitable tolerances thus became an important task, as did the assurance that product dimensions were within the set tolerances. In the spirit of Taylorism, these inspections of units were performed by special inspectors, and huge inspection departments often emerged. To reduce the amount of inspection, a statistical methodology was established, based on the idea of taking a random sample from a lot and thereby judging the quality of the complete lot.

2 The inspections were often carried out using templates or gauges. Here we can mention that the Swede C.E. Johansson (1864-1954) invented a simple measuring system with gauge blocks, which helped Ford's ideas to work in practice.

This methodology came to be called acceptance sampling.

The birth of acceptance sampling

Acceptance sampling has probably taken place at the Royal Mint in Great Britain since the middle of the 12th century, in the form of a ceremony called "the trial of the Pyx". One out of every 15 gold coins minted was put in a box named "the Pyx" (after the Greek word "pyx", meaning box). This box was kept in Westminster Abbey for later inspection, which was performed at intervals of one to four years by an independent jury. The object of the procedure was to make sure that the Royal Mint, which was independent of the Crown, had not been cheating when manufacturing the coins. If, for instance, the total amount of gold was below the set standard by a certain amount, the Master of the Mint was charged a fee as punishment. Probably the best known Master of the Mint was Isaac Newton (1642-1727). His knowledge of the variation of mean values would have allowed him to exploit the procedure for his own purposes, but there is nothing to indicate that he did so. On the contrary, he seems to have been very anxious about the reputation of the Mint, and is believed to have taken pains to keep down the variation in coin manufacture. For more information about "the trial of the Pyx", see Stigler (1977).
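The statistical point can be illustrated with a small simulation; all numbers below (coin weight, spread and tolerance) are our own illustrative assumptions, not historical values. Because the tolerance of the trial grew in proportion to the number of coins n, while the random variation of the total weight grows only as the square root of n, a large batch leaves considerable slack for a Master who understands the variation of mean values:

```python
import random
import statistics

# Illustrative, non-historical numbers: each coin's gold content varies
# randomly around the standard weight (in grains). The allowed deviation
# of the total grew linearly with the number of coins n, whereas the
# standard deviation of the total only grows as sqrt(n).
random.seed(1)
standard, sigma, tolerance_per_coin = 128.0, 0.8, 0.4

for n in (100, 10_000):
    totals = [sum(random.gauss(standard, sigma) for _ in range(n))
              for _ in range(100)]
    print(f"n={n:6d}  allowed deviation={tolerance_per_coin * n:8.0f}"
          f"  std dev of total={statistics.stdev(totals):7.1f}")
```

With these assumed numbers, for n = 10,000 coins the allowed deviation is about fifty times the standard deviation of the total, so a systematic shortfall well within the tolerance would be practically undetectable by the trial.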

Statistical acceptance sampling

However, it was not until after the breakthrough of industrialism, and in conjunction with mass production, that systematic methods related to quality issues came into use. In the 1920s some Germans (see Daeves, 1924, and Becker et al., 1927) realized that variations in a production process could be described using statistical methods. Quality issues came to the forefront during the Second World War, especially in the US.

Methods for statistical acceptance sampling3 were developed. This development was initiated already in the 1920s by Harold F. Dodge (1893-1976) and later joined by Harry G. Romig (1900-1989); see Dodge & Romig (1941). Both worked at Bell Laboratories.4 Statistical acceptance sampling concerns how to draw conclusions about the properties of a whole batch,5 based on the results from one or more samples of units from the batch. It is always desirable to keep the number of tested units down because of the expense, and in destructive testing it is in fact necessary. Epoch-making contributions to the "Sequential Probability Ratio Test" were made by Abraham Wald (1902-1950). In principle, the idea is to compare, after inspecting each unit, the accumulated number of faulty units to the accumulated number of tested units, and on that basis decide whether the batch should be accepted or rejected, or whether yet another unit should be tested. This technique, which keeps down the number of units that have to be tested before a decision is reached, is important, for instance, in destructive testing. Wald's findings were considered so important that they were classified as secret until the end of World War II. In many countries, inspection was a dominant part of quality work far into the 1980s. Acceptance sampling is still used to a certain extent in many companies, for example when checking incoming goods from suppliers. Much of the material that is today part of various inspection standards for acceptance sampling was produced during World War II or shortly after.

3 In 1946 the American Society for Quality Control (ASQC), which now goes under the name of the American Society for Quality (ASQ), was created, and the journal Industrial Quality Control, a predecessor of today's Quality Progress, was first published.
4 Joseph Juran was involved in this development too. An interesting article on how acceptance sampling was developed at Western Electric, where Juran worked, is included in Juran (1997).
5 A short description of this can be found in Chapter 13.
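As a rough sketch of Wald's idea - not his original formulation, and with quality levels and risks chosen purely for illustration - the following implements a sequential probability ratio test for the fraction defective of a lot. After each inspected unit the log-likelihood ratio is updated, and inspection stops as soon as an accept or reject boundary is crossed:

```python
import math
import random

def sprt_lot(units, p0=0.01, p1=0.05, alpha=0.05, beta=0.10):
    """Wald's sequential probability ratio test for a lot.

    H0: fraction defective p0 (acceptable quality level) versus
    H1: fraction defective p1 (rejectable quality level);
    alpha and beta are the tolerated error risks.
    Returns ('accept' | 'reject' | 'undecided', units inspected).
    """
    upper = math.log((1 - beta) / alpha)   # crossing above -> reject lot
    lower = math.log(beta / (1 - alpha))   # crossing below -> accept lot
    llr = 0.0
    for n, defective in enumerate(units, start=1):
        if defective:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject", n
        if llr <= lower:
            return "accept", n
    return "undecided", len(units)

# Simulated lot with a true fraction defective of 2 %
random.seed(1)
lot = [random.random() < 0.02 for _ in range(2000)]
print(sprt_lot(lot))
```

The attraction, especially in destructive testing, is that the sample size is not fixed in advance: on average far fewer units need to be destroyed than with a fixed-size sampling plan of comparable risks.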

3.3 Walter A. Shewhart

Walter Andrew Shewhart (see Figure 3.4) was employed at the Western Electric Company in 1918, just after having taken his doctor's degree in physics at the University of California, Berkeley. Later he worked at Bell Laboratories (nowadays AT&T, American Telephone and Telegraph). At that time, statistical models had begun to be used in science.


For Shewhart, who was well acquainted with the mathematical statistics of the time, it was natural to apply a statistical view to the production process, too. As early as 1924 he suggested, in an internal memorandum, what has later come to be called a "control chart"; see Figure 11.4. Shewhart's view of quality issues is best illustrated by the following quotation from Shewhart (1931), page 54:

Looked at broadly there are at a given time certain human wants to be fulfilled through the fabrication of raw materials into finished products of different kinds. These wants are statistical in nature in that the quality of a finished product in terms of the physical characteristics wanted by one individual is not the same for all individuals. The first step of the engineer in trying to satisfy these wants is therefore that of translating as nearly as possible these wants into the physical characteristics of the thing manufactured to satisfy these wants. In taking this step intuition and judgement play an important role as well as the broad knowledge of the human element involved in the wants of individuals. The second step of the engineer is to set up ways and means of obtaining a product which will differ from the arbitrarily set standards for these quality characteristics by no more than may be left to chance.

Although the above quotation demonstrates a holistic view of a fairly modern cut, the central theme in Shewhart's publications is how to take care of data and draw conclusions from them, in order to reduce and supervise the variation in the production process, i.e. what is called "the second step" above. However, focusing on the customer was very important already to Shewhart. Another illustration of this is from the preface to Shewhart (1931): "Broadly speaking, the object of industry is to set up economic ways and means of satisfying human wants." This is in keeping with many other interpreters of the quality view on entrepreneurship, one of whom is Grant et al. (1994),6 another Drucker.7 Shewhart has had a profound influence on the modern view of quality control.

6 This comment is from Axelsson & Bergman (1999).
7 See Watson (2002).


If any one single individual could be called the father of modern quality philosophy, that person would probably be Walter A. Shewhart. Shewhart himself was strongly influenced by the philosopher Clarence Irving Lewis (1883-1964) and his theory of knowledge. This is reflected, for instance, in the learning cycle that Shewhart conceived, later adapted into the PDSA cycle,8 which is today a symbol of continuous improvement.

8 We will revert to the PDSA cycle, or the improvement cycle, in Chapter 9.
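Control charts are treated in Chapter 11; as a foretaste, the sketch below shows one common modern variant, an individuals chart, with limits estimated from the average moving range (the constant 2.66 is 3/d2, with d2 = 1.128 for ranges of two observations). The data are invented for the example:

```python
import statistics

def individuals_chart_limits(x):
    """Center line and control limits for a Shewhart individuals chart.

    Short-term variation is estimated from the average moving range of
    consecutive observations; 2.66 = 3 / d2, where d2 = 1.128 for n = 2.
    """
    center = statistics.fmean(x)
    moving_ranges = [abs(a - b) for a, b in zip(x[1:], x)]
    width = 2.66 * statistics.fmean(moving_ranges)
    return center - width, center, center + width

# Invented measurements of a product dimension (mm)
data = [10.02, 9.98, 10.01, 10.05, 9.97, 10.00, 10.03, 9.99]
lcl, cl, ucl = individuals_chart_limits(data)
print(f"LCL = {lcl:.3f}, CL = {cl:.3f}, UCL = {ucl:.3f}")
alarms = [v for v in data if not lcl <= v <= ucl]  # points outside the limits
print("out-of-control points:", alarms)
```

Points outside the limits signal variation beyond what "may be left to chance", in Shewhart's words, and thus call for investigation.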

Figure 3.4 Walter A. Shewhart (1891-1967), left, and W. Edwards Deming (1900-1993), right, both very important for the development of quality philosophy.

3.4 W. Edwards Deming and Joseph M. Juran

William Edwards Deming (see Figure 3.4) and Joseph Moses Juran (see Figure 3.6) share Shewhart's statistical view of the production process. Deming in particular stresses the statistical point heavily. Deming worked together with Shewhart at Western Electric and was influenced by his statistical philosophy. In addition, both Deming and Juran emphasize the role of top management very strongly. Only if top management commits itself wholeheartedly to quality issues is it possible to achieve continuous quality improvement. Deming's philosophy is available in condensed form in his famous 14-point management list, see Deming (1982, 1986). The list, which has been slightly revised over the years, is presented in Figure 3.5 and commented on in Chapter 17.



In his book "Managerial Breakthrough", Juran (1964) stresses the importance of working continuously on quality improvements. This was a breakthrough as regards the view both of the managerial task and of achieving improvement. With his definition of quality, "fitness for use", Juran also adopts an attitude, which is close to the customer. In the Western World we often get the impression that W. Edwards Deming and Joseph M Juran should be credited for Japan's successes on the quality arena. This view is probably somewhat exaggerated, even if they did make an important contribution. Moreover, Deming's role in the Japanese quality work has often been made out to be considerably greater than Juran's, another matter that has been contested. Perhaps this notion has in part been kindled by Deming himself, by the way in which he usually accentuated his own contribution. At a dinner party that followed the famous NBC film "If Japan can - why can't we?" that was broadcast on June 24 19809, Deming was asked what made the great difference in Japan - what the actual underlying causes of the "Japanese Miracle" were. He replied: "One man with profound knowledge" and was, of course, referring to himself. Dr Shigeru Mizuno10, one of the participants in Deming's seminars, remembers that (Juran, 1995): The JUSE basic course instructors were, up to that time, merely selftaught concerning the fundamental ideas of quality control but, through Dr Deming's lectures, they were able to approach for the first time the true essence of quality control. The lectures had a great historical significance for the history of quality control in Japan. Juran has a more tempered view of his own contribution in Japan. During his regular visits to Sweden in the period between 19721989, he often said: "My role is not very significant, and nor is Deming's. Those who did have a crucial impact were the Japanese managers who listened to us and then started to work according to our guidelines. Managers in other countries did nothing." 9 10

9 This documentary is held by many to be one of the most successful business documentaries ever made, with an estimated 30,000 copies sold.
10 Mizuno was one of the original contributors to Quality Function Deployment, QFD, which is discussed further in Chapter 5.


To commemorate Deming's services in Japan, JUSE11 established the Deming Prize in 1951, a prestigious prize awarded to companies which have succeeded particularly well in the application of their quality philosophy, and to individuals who have made distinguished contributions in the field of quality management.12

Deming's 14 points

1. Create constancy of purpose for improvement of product and service.
2. Adopt the new philosophy.
3. Cease dependence on inspection to achieve quality.
4. End the practice of awarding business on the basis of price tag alone. Instead, minimize total cost by working with a single supplier.
5. Improve constantly and forever every process for planning, production and service.
6. Institute training on the job.
7. Adopt and institute leadership.
8. Drive out fear.
9. Break down barriers between staff areas.
10. Eliminate slogans, exhortations and targets for the work force.
11. Eliminate numerical quotas for the work force and numerical goals for the management.
12. Remove barriers that rob people of pride of workmanship. Eliminate the annual rating or merit system.
13. Institute a vigorous program of education and self-improvement for everyone.
14. Put everybody in the company to work to accomplish the transformation.

Figure 3.5 Deming's list of 14 points. (From Deming, 1986.)

In his book "What is total quality control? The Japanese Way", Kaoru Ishikawa (1985) describes what happened when Juran visited Japan in 1954: ...The Juran visit created an atmosphere in which quality control was to be regarded as a tool of management, thus creating an opening for the establishment of total quality control as we know it today.

11 JUSE = Union of Japanese Scientists and Engineers.
12 The Deming Prize and some other quality awards are presented in Chapter 21.


Dr Joseph Juran retired in 1994, following a tour across the American continent where he gave his final speech on November 18, 1994 in Orlando, Florida, having worked with quality issues for 70 years. As this is written, he has just finished his memoirs at the age of more than 97, and still takes an active part in the activities at the Juran Institute. Edwards Deming died of cancer in his home on December 20, 1993, at the age of 93. He, too, remained active to the very end of his life, see for example Deming (1993). The Japanese successes in the field of quality have made people and companies take more interest than ever in Deming's and Juran's philosophies and methodologies. Their ideas will certainly influence quality work for many years, perhaps forever.

Figure 3.6 Joseph M. Juran (born 1904), left, and Kaoru Ishikawa (1915-1989), right, have both had a decisive influence on the development of quality philosophy.

3.5 The Japanese Miracle

Background

Japan has a long tradition in the field of quality. Examples of this are products such as paper, silk and swords. But what first comes to mind is probably the "Japanese Miracle" that took place around 1955-1985, when Japan re-built its industry after World War II.


For Japan, it was not merely a matter of developing new technology and new methodologies to get Japanese industry back on its feet. Another big problem was that many skilled technicians and managers had died in the war. Furthermore, after the war several managers were forced to resign from their positions in the companies, as people did not trust them because of their behaviour during the war; the term "zaibatsu" was used, meaning roughly "the privileged few". Instead, many young people from the shop floor, or newly graduated engineers, were appointed managers in Japanese companies, without possessing much in the way of know-how or experience. These problems combined to give Japanese quality a firmly rooted bad reputation at the beginning of the 1950s. A deliberate and concentrated effort was made to remedy this.

The establishment of JUSE

A small group of Japanese technicians and engineers had met informally during the war, primarily to give each other moral support. In the course of time, this group took the name JUSE, Union of Japanese Scientists and Engineers. The official starting date of JUSE was 1 May, 1946. Ichiro Ishikawa (the father of Kaoru Ishikawa) was persuaded to accept the chairmanship. He was a prosperous industrialist and a professor of good repute, with great influence in Japan. Gradually, JUSE adopted a more scientific direction. As an instance of this, in 1948 six persons from JUSE formed a group to study the technology development in the US and other western countries. One of the members of this group was Shigeru Mizuno, known as one of the people who developed QFD (Quality Function Deployment). The group collected different types of material and came across Shewhart's book "Economic Control of Quality of Manufactured Product", which was published in 1931. At this time, they first learned of Deming's existence. Following Juran's visit in 1954 and the courses he gave, JUSE followed up with different courses for managers at intermediate and top level. In order to reach as many people as possible, training courses were started on the radio, and later on TV. The first radio course was presented by Kaoru Ishikawa and Shigeru Mizuno together, and lasted a quarter of an hour each day, over a period of three months (Juran, 1995).


Today, different types of courses constitute a major part of JUSE's activities.

Deming and Juran

In 1950 Ichiro Ishikawa, the president of JUSE, invited Deming to Japan, where he had the opportunity to talk to several groups of Japanese management executives. He held two eight-day courses in English with an interpreter. Deming himself wrote that 600 people had registered for the first course in Tokyo on 10-17 July, but there was only room for 230 people in the overcrowded room. A large part of the course was devoted to statistical methods, in particular to what we today call control charts.13 Deming introduced what is now often referred to as the Deming cycle, the PDSA cycle or the improvement cycle.14 He also managed to persuade Ishikawa to gather a group of the most prominent Japanese top managers for a two-day course before he went home. In this course he stressed the importance of using statistical methods to build quality into the products. In 1954 Juran, too, was invited to Japan by JUSE. He was then already a celebrity, having written the Quality Control Handbook, first published in 1951. In its fifth edition from 1999, this book comprises more than 1,500 pages; it has been renamed Juran's Quality Handbook and is still the standard reference work for many quality managers. Juran began his stay in Japan by giving two-day seminars in Tokyo to 70 people at a time. The participants were in the main CEOs or executives of similar status. In an interview in 1993, Juran described the seminars thus:

Never before my trip in 1954, and never since, has the industrial leadership of a major power given me so much of its attention. Once in the US, just a few months ago, I faced an audience of 70 CEOs but that was for one hour.

Juran's message was well structured, well-reasoned and clear.

13 We will come back to control charts in Chapter 11.
14 The Improvement cycle is addressed in Chapter 9.


Much of the material was based on his Quality Control Handbook, and the core message was that quality largely concerns leadership, as much as statistics and control charts. This is what Juran wrote in the preface to Total Quality Control (the Japanese translation of the Quality Control Handbook):

...there has been some over-emphasis of the importance of the statistical tools, as though they alone are sufficient to solve our quality problems. Such over-emphasis is a mistake. The statistical tools are sometimes necessary, and often useful. But they are never sufficient.

Juran returned to Japan in 1960 and repeated his courses, with 700 participants. He then saw the great changes that had taken place in Japan since his first visit: "The country was undergoing an immense construction and modernization boom. ... Productivity and salaries were rising sharply and the atmosphere of industrial progress was all pervasive."

Juran and Deming did without doubt have a great influence on the development in Japan after the Second World War, through seminars and company visits. But the Japanese did not only listen to these, and other, American experts. Many Japanese top managers travelled, often to the US, and visited factories and spoke with American managers, to see and understand what was happening there. For example, Eiji Toyoda, one of the leaders behind the Toyota Motor Company, studied for three months in the spring of 1950 how the work was carried out at Ford's River Rouge plant. He and Taiichi Ohno later created concepts that are well known today, such as Just-in-Time and kanban. Japan sent delegations to other countries as well, to study manufacturing and leadership. Incidentally, Deming and Juran were not the first Americans to have an impact on the Japanese development. Taylor's book "Principles of Scientific Management", for example, was translated into Japanese fairly soon after it was published in 1911; an estimated 1.5 million copies of the Japanese version were sold. American experts had also visited Japan before Deming and Juran, some of them on the initiative of a representative of the occupying powers, General MacArthur, who arrived in Japan in August 1945. One of the experts who achieved great influence was Homer Sarasohn, who came to Japan in September 1945. Charles Protzman, who had worked at Western Electric already before Juran arrived there, came to Japan in 1948 to help Sarasohn. They soon realized that leadership training was essential to get Japanese industry back on its feet again, and created extensive material under the name of Industrial Management.


Quality control circles

Juran visited Japan again in April 1966, and was then very impressed by the progress made since his last visit. He was so impressed by what he had seen that in June 1966, when he returned home to participate in the EOQC15 conference in Stockholm, he strongly advocated that a completely new session about the Japanese development should be included on the agenda. The extra session was met with enormous interest and has made this conference historic. Juran gave an emotional account of the groups of operators who were engaged in solving quality problems. This was the beginning of what we today call QC circles.16 Even if thoughts and ideas for the quality circles to a certain extent originated from Juran's and Deming's message, there was nothing like this in the US - this was a Japanese invention. As an example, Juran very enthusiastically described his meeting with a group of girls in a radio factory, who had encountered a problem with loose knobs, and how they solved their problem. Juran's message to the Stockholm conference was: "The way the Japanese work at present, they will become world leaders in quality within 20 years, if we in the West don't do something." There is no doubt that Juran's prediction was right.

Kaoru Ishikawa and Taiichi Ohno

Part of Deming's and Juran's message was that massive training and practising of simple statistical methods was necessary, to be able to identify problems and carry out improvement projects. In support of this, Kaoru Ishikawa (1915-1989) started a campaign to teach all supervisors simple statistical methods that are useful in the improvement work. This was how "the seven QC tools"17 came into existence.

15 EOQC is now called the European Organization for Quality, EOQ.
16 QC = Quality Control.
17 See Figure 1.11 and Chapter 10.


With the intention of ensuring that all the employees in the companies were committed to the improvement process, Ishikawa suggested that QC circles should be established. In these circles the seven QC tools were used efficiently. It is interesting to observe how Ishikawa already in the early days advocated "participation for all" and "management commitment". In the book Introduction to Quality Control from 1964, Ishikawa wrote (Park Dahlgaard et al., 2001):

Quality control can only be successful when top management takes responsibility for product quality. To be able to do that, top management must take quality issues as a company's policy (hoshin kanri) and they must involve not only engineers and management teams, but also people in administration and all other employees. All employees must work on quality as one whole, and that's the precondition for success of quality control. If just some few engineers study statistical quality control methods, there will be no success. Understanding, enthusiasm, and the following actions of top management and all other management teams are the most important factor. To be able to act as one whole toward quality the precondition is to establish a company wide cooperative culture, where the importance of human relations is crucial.

Besides Ishikawa, Taiichi Ohno (1912-1990) was one of the Japanese who had the greatest impact on the development of production and quality philosophies in Japan. Ohno, who worked at Toyota, emphasized the importance of reducing waste and unnecessary work. He developed the famous Toyota production system and created the concepts kanban and Just-in-Time. His production techniques are the basis of what is now called "lean production". After a visit in 1950 to Ford's mass-production plant in Detroit, he grouped workers at Toyota in teams with responsibility for quality inspection. He realized the importance of setting aside time for the workers to meet with the industrial engineers, to discuss how to improve the manufacturing process - an embryo of the QC circles. Furthermore, he was the one who installed a cord above every work-station and instructed all workers to stop the line immediately if they spotted a problem (Bowles & Hammond, 1991). Ohno is also one of the people behind Toyota's Five Whys: when a fault occurs, you should ask "why" five times to arrive at the root cause of the problem.


TQM accepted in Japan

In Japan the concept of CWQC (Company Wide Quality Control) was used, or TQC (Total Quality Control), a notion coined by Feigenbaum (1951) but interpreted slightly differently in Japan. According to Ishikawa, TQC implies "a system for integrating quality technologies in various functional departments", while CWQC implies "provision of good and low cost products dividing the benefits among customers, employees and stockholders while improving the quality of people's lives". The term TQM, Total Quality Management, has since 1997 been an accepted concept in Japan, too. In the 1970s new simple tools, in the form of charts and matrices, were put together in order to systematize and facilitate management. These are often named "the seven management tools", or sometimes "the seven new QC tools", and will be discussed in Chapter 22. One of these tools has been developed into what is now often referred to as the Quality House, an important tool in Quality Function Deployment, QFD, which is described further in Chapter 5. Another set of tools is called the seven product development tools;18 see Kanada (1995).

Is the halo askew?

There are those who say that the interest in quality improvements cooled down in Japan by the end of the 1990s, following the collapse of the Japanese economy; see, for example, Park Dahlgaard et al. (2001).

An article in the Financial Times19 claims that a certain erosion of quality awareness may be discernible in Japan. An indication of this may be a number of sensational product recalls. Snow Brand, a Japanese food manufacturer, recalled a large number of desserts on July 18, 2001, because of contamination; the previous year the company had suffered a big scandal when contaminated milk made 13,000 people ill. In the middle of July 2001, Matsushita Communication Industries recalled their cell phones for the second time in five months.

18 There are seven tools in these tool boxes, put together in Japan, since seven is almost a holy number in Japan: the samurai should carry seven "things" with him to be successful in battle.
19 July 19, 2001, written by Bayan Rahman.


Other companies, too, among them Sony, have had similar recall experiences. Whether these and other incidents of a similar nature should really be seen as a change of attitude towards quality is debatable, but the conditions have changed because of the prolonged financial stagnation in Japan, which has meant tighter budgets and higher sales targets.

3.6 The Awakening in the Western World

In the Western World, perhaps particularly in the US, people initially regarded the threat from Japan more as a price threat than a quality threat. They tried import restrictions and appealed to the population to "buy American". By the middle of the 1970s, however, the Japanese product quality was definitely superior to its western counterpart, and the crisis was a fact. As an example, the video market was completely taken over by Japanese companies, and the car industry faced serious difficulties. A study carried out by MIT (Massachusetts Institute of Technology) in 1989 concerning competitiveness in the motor industry showed that the Japanese car makers were twice as good as their Western counterparts, in terms of design, process development and production alike. To counteract import quotas, the Japanese began setting up car factories in the US. First out was Honda, with a plant in Marysville, Ohio in 1982. Two years later came the start of NUMMI, a joint venture between Toyota and General Motors with a Japanese management and American workers, which showed clearly that the Japanese successes were not related to culture; NUMMI quickly achieved quality successes similar to those of the corresponding Japanese plants. The West now started to imitate some methodologies from Japan, such as QC circles and process control. People also began estimating the costs of poor quality, and implementing automated supervision and robots. A massive training effort in Statistical Process Control (SPC) was started, not least after the famous TV programme "If Japan can - why can't we?". Eastman Chemical Company, for example, is said to have trained 10,000 employees in SPC.


The result was meagre, however, as the overall perspective was often lacking, and the QC activities soon fell into disrepute. Not until top managements realized that they had to commit themselves to the improvement work did any noticeable changes occur. A forerunner here was Donald Petersen, CEO at Ford, who after having seen the documentary "If Japan can - why can't we?" engaged Deming in the company and met with Deming in person once a month. The difficulty for American leaders in understanding the Japanese attitude to quality improvement is illustrated by the following quotation, in which a manager at Fuji Xerox Co describes a meeting with John Smale of the P&G board in the mid-1980s (Cole, 1999):

I think it is difficult for Americans to understand why cost is not our first concern. Mr Smale spoke of lowering cost and improving quality simultaneously. However, Japanese companies which have firmly pursued TQC focus on building quality in first, with the cost reduction following from that. We had a long discussion about this and he clearly had trouble accepting our position. Of course, we went through this same process. Originally we also thought we should pursue these goals simultaneously.

A holistic view and a focus on processes were developed chiefly in connection with the establishment of the Malcolm Baldrige National Quality Award in 1987. This award has since had several national and international successors.20 At roughly the same time, the ideas of quality assurance were established, primarily in Europe, and ISO 9000 was adopted as the international standard for quality systems in 1987.

20 We will revert to quality awards in Chapter 21.

3.7 Service Quality

Interest in services, the design of services and, later, service quality originates in particular from the field of marketing, within the business administration area. An important part of the development in this field took place in the 1970s and 1980s and has, essentially, been administered from the Nordic countries and the Nordic School of Service Management.21


This group counts among its members Christian Grönroos, Helsinki University, Evert Gummesson, Stockholm University, and Bo Edvardsson, Karlstad University and the Service Research Center (CTF) in Karlstad. In 1982 Christian Grönroos created one of the earliest models of service quality, focusing on functional and technical quality.22 One of Evert Gummesson's many accomplishments is his international book on customer relationship marketing; see Gummesson (1999). Many articles and research findings on service quality have been published at the Service Research Center, headed by Bo Edvardsson. In 1988 the first QUIS (Quality in Service) conference was held in Karlstad. Since then, QUIS conferences have been arranged every other year in different places around the world, attracting a growing number of participants. Another centre for the development of service quality is the University of Arizona. We have seen a growing interest in service quality issues in recent years, not least due to the fact that services of various kinds are continuously increasing. There is a growing tendency for customers to want functions rather than products - cold storage instead of refrigerators, transport rather than cars. The development of information technology generates new services, on the Internet for example, and at the same time there is a growing interest in quality issues in the public sector. Since 1970 the number of employees in the service sector has increased by about 60% in the US, and by approximately 40% in Japan. Today some 70% of the GNP is derived from the service sector in the US, and the situation is similar in most European countries. The global value of trade in services is estimated at approximately 960 billion US dollars, which is more or less twice as much as the trade in agricultural products. Earlier, services were normally felt to be so different from goods that special definitions, methodologies and tools for service quality were created.


This was mainly due to the fact that the service quality field grew out of marketing and business economics, while quality issues in industry were developed essentially from a technical and statistical perspective. We feel that a boundary-crossing tendency is now clearly visible, which implies that the field of quality management is becoming integrated, something that will no doubt be beneficial to its further enhancement.

21 This is verified by Berry & Parasuraman (1993). The authors of this article belong to the pioneers in the field of service quality and have, together with Zeithaml, helped to create the Gap Model, described in Section 14.2.
22 The model is briefly discussed in Section 14.3.

3.8 Some Perspectives of the Quality Movement

In spite of the long history of the quality movement, it is not really until the last few years that this subject has found a hold in the academic world. There is, for example, no well-established basis for quality management in management theory. Instead, much of the development in this field has been driven by consultants, although single individuals, such as Shewhart and Deming, did have a solid academic background.

The quality movement as four phases

Generally speaking, it is true to say that the progress over the last few decades has led to a larger portion of the quality work being carried out earlier in the production cycle. In the years after the Second World War, quality work in most Western countries was dominated by inspection activities. Finished products were checked, and defective units23 were scrapped or re-worked. This defensive way of working, called quality inspection, was later abandoned in favour of controlling the production process. The underlying idea in quality control is that it is cheaper to look, already during the process, for early signs that units will be defective, so that the necessary adjustments can be made before defective units are produced.

23 For example, when Juran started to work at Western Electric's Hawthorne plant in Chicago in 1924, there were roughly 5,000 inspectors, i.e. nearly one eighth of the total number of employees worked with inspection. Their main task was to separate "good products from bad ones".


The next step was the realization that quality efforts should be increased already before the start of the production process, to avoid later problems. The work was focused on formulating and gathering routines for how to administer incoming material, claims and measuring instruments, and for how to divide responsibility. These elements together make up what is called a quality system or a quality management system, and these activities together are called quality assurance - unfortunately not an ideal terminology, as it suggests that these steps guarantee that the customers will be satisfied. Since then, the development has moved towards increasing the efforts even before the manufacturing process is started. By systematically determining the wishes and demands of the customer, by performing well-planned experiments and by making robust designs, it is possible to prevent the release of bad and unprofitable products on the market. Quality management, according to this view, comprises quality inspection, quality control and quality assurance alike, and is an integrated part of the activities of the organisation, involving continuous improvement work. Figure 3.7 illustrates this view of the development of the quality movement.

Quality management    ... continuous improvements before, during and after production
Quality assurance     ... before production
Quality control       ... during production
Quality inspection    ... after production

Figure 3.7 An illustration of the terms quality inspection, quality control, quality assurance and quality management. The figure also provides an often used description of the development of the quality movement.

The quality movement developed from two schools of thought

Often the development of the quality movement is described as in Figure 3.7, comprising four phases. There is reason to consider, however, whether this really provides a satisfactory description of the history.


For one thing, this point of view presupposes a precise terminology by which each period can be separated, and here the literature lacks consistency and consensus. Instead it is rife with a number of concepts, for example Quality Control, Statistical Process Control, Statistical Quality Control, Modern Quality Control and Strategic Quality Management. Juran observed how the wide range of concepts was gathered under an umbrella concept by the name of Total Quality Management, but "there was no agreed standard definition...". When Deming was asked about TQM in the future he replied (Deming, 1994): "... there is no such thing. It is just a buzzword. I have never used the term, as it carries no meaning." Furthermore, the model with four phases cannot explain the different development patterns in different parts and cultures of the world. In Japan the concept Company-Wide Quality Control evolved directly from Quality Control in the 1960s. ISO 9000 and many of the ideas associated with quality assurance were accepted in Japan only in the last few years, and then with some hesitation. Moreover, according to Ishikawa, inspection was abandoned in Japan when the methodologies and tools of Company-Wide Quality Control were developed there. Indeed, inspection is hardly a part of the thoughts that Deming, Juran and Ishikawa represent.

Another theory postulates that quality management has developed from different schools of thought. Kroslid (1999) calls these the Deterministic School of Thought and the Continuous Improvement School of Thought, respectively. The Deterministic School of Thought has its roots in Taylorism. Taylor said that "the inspector is responsible for the quality of the work, and both the workmen and the speed bosses must see that work is finished to suit him" (Taylor, 1911, p. 101). Another important step in this line of development was the American military standards for contractual agreements with suppliers concerning their quality work, which later came to be the basis of the international standard ISO 9000 for quality systems. A third evolutionary stage is Crosby's Zero Defects approach, which he introduced in the 1960s at the Martin Corporation, where he worked as quality manager on the Pershing missile programme.


The Continuous Improvement School of Thought started with Shewhart and his emphasis on improving the process rather than inspecting each single produced unit. Understanding variation, and how variation could be monitored and reduced using what are now called control charts, was important. Another milestone is Feigenbaum's theory of Total Quality Control, which implied that the quality work should be spread to all divisions, not just the production department. This outlook was strengthened by Juran during his visits to Japan and resulted in the term Company-Wide Quality Control, established in Japan in 1968. Deming's work, too, belongs to this school. Some illuminating quotations from the evolution of the two schools of thought appear in Figure 3.8. In addition, Figure 3.9 contains a brief description of their characteristic features.

The Deterministic School:

"Zero Defects is a standard for management, a standard that management can convey to the employees to help them decide to 'do the job right the first time' ... People receive their standards from their leaders. They must be told that your personal standard is: Zero Defects." (Crosby, 1979, pp. 170-172)

"The theory was that if an organisation had a good system of well documented procedures based on a clearly defined quality policy signed by the chief executive, then provided that the system followed certain prescribed principles, there would be no reason for producing defects." (Hutchins, 1995, p. 466)

The School of Continuous Improvement:

"The needs of the consumers are in continual change. So are the methods of manufacture, and products. Quality of a product does not necessarily mean high quality. It means continual improvement of the process so that the consumer may depend on the uniformity of a product and purchase it at a low cost." (Deming, 1980, pp. 1-2)

"The most decisive factor in the competition for quality leadership is the rate of quality improvements." (Juran, 1989, p. 78)

"Standards and regulations are imperfect. They must be reviewed and revised constantly. If newly established standards and regulations are not revised in six months, it is proof that no one is seriously using them." (Ishikawa, 1985, p. 65)

Figure 3.8 Excerpts from the two schools of thought that could represent the historical development of the quality movement. (From Kroslid, 1999.)


The Deterministic School (features from very low to very high level of development, with the focus moving from product via process to culture):
- Inspection of final goods and services according to specification.
- The installation of a prevention system through the specification of procedures.
- Conformance to process and system specifications through corrective and preventive action.
- Conformance to standards, specifications and zero defects by all employees.

The Continuous Improvement School (at the corresponding levels):
- Systematic application of quality control tools.
- Process improvements based on statistical methodology.
- Continuous improvements of processes, products and services by all employees.
- Quality values and strategies are shared in the organisation, with high customer satisfaction.

Figure 3.9 Some characteristic features of the two schools of the quality movement. (From Kroslid, 1999.)

The quality movement today and in the future

In the last decade, the two schools described above have approached each other in their views on quality work. Today, the quality philosophy is firmly built on a holistic view, a systems perspective and basic values that should create a culture. An important change in viewpoint concerns the role of management, and the importance of clear management strategies. Today, there is also a clear focus on improvement work. The introduction of quality awards in order to stimulate the quality improvement process is also part of this development. An important example is the Malcolm Baldrige National Quality Award, which is awarded annually by the US President to companies that have successfully implemented Total Quality Management. In 1992 the European Foundation for Quality Management, EFQM, instituted the European Quality Award. Several countries have also established their own national quality awards; for example, in 1992 the Swedish Institute for Quality, SIQ, set up the Swedish Quality Award.24

24 A discussion of quality awards can be found in Chapter 21.


The changed overall view of quality issues, which is becoming increasingly widespread among company and organisation managements all over the world, will no doubt have far-reaching consequences for the future development of quality work within all sectors of society. We hope there will be an accelerating development, where many different organisations in society, in the private and public sectors alike, will be involved in proactive cultural changes towards customer focus and continuously improved living conditions. In addition, systems thinking and an outlook on people based on common values will contribute to countries and groups of countries accepting a joint responsibility for the creation of a sustainable society. It is our hope that the values, methodologies and tools that are presented and discussed in the following chapters will be of benefit in this development.

3.9 Notes and References

Shewhart's importance for the development of the quality movement cannot be emphasized strongly enough, and his book from 1931 is warmly recommended. A special issue of Industrial Quality Control (predecessor of the current American journal Quality Progress) from August 1967, in honour of Walter Shewhart, also provides very interesting reading. Shewhart's book from 1939 is more penetrating than many modern counterparts, but unfortunately a bit difficult to read. Some interesting articles about the development of quality philosophies can be found in the journal Quality Progress, which is published by ASQ, the American Society for Quality. For instance, in 1986 a series of short descriptions of ASQ's honorary members was introduced; among these are Shewhart, Deming, Juran, Feigenbaum and Ishikawa. Bowles & Hammond (1991) give an interesting account of the influence of Juran and Deming on the quality development in Japan. Deming's seminars during his trip to Japan in 1950 are described in Kolesar (1994). Butman (1997) also describes Juran and his life. In Mann (1985), and particularly in Kilian (1993), there is a corresponding portrait of Deming as a person and a human being; Kilian was Deming's secretary for 40 years.


Tsutsui (1996) describes the role that Deming, and in part Juran, played in the Japanese quality development. We have chosen not to make this summary too detailed, which is why we have not discussed the philosophies of such people as Armand Vallin Feigenbaum and Philip Bayard Crosby, although they too have played an important part. Feigenbaum (born in 1919), who worked at General Electric for many years, introduced the concept Total Quality Control in his book by the same name, first published in 1951 (a fourth edition was published in 1990), in which he clearly stated that "quality is everybody's responsibility". Crosby (1926-2001) introduced the concept "Quality is Free" in his book by the same name, first published in 1979. This book was a great success because of the way it addresses top managers, and it has sold about 2.5 million copies in 15 languages. His publications include Crosby (1984, 1986, 1988, 1996). Crosby also created the concept "zero defects" in the 1960s, when working at the Martin Company, where Pershing missiles were built for the US Army. Further information about Crosby is found in his autobiography, Crosby (1999). An article in the magazine Quality by Lowe & Mazzeo (1986) deals with similarities and differences between the quality strategies of Deming, Juran and Crosby. Other articles comparing different quality philosophies are Ghobadian & Speller (1994) and Kruger (2001).

Figure 3.10 Philip Crosby (1926-2001), to the left, and Armand Feigenbaum (born 1919), to the right, have both contributed to the development of the quality area.


Many books attempt to explain the Japanese miracle; those written by Ishikawa (1982) and Karatsu (1988) give good descriptions. Other accounts of the Japanese development can be found in Juran (1995) and in Park Dahlgaard et al. (2001). Ishikawa (1989) compares Japanese and Western quality strategies, and in Kondo (1994) Ishikawa's own ideas and values are portrayed. Imai (1986, 1991, 1997) and Hannam (1993) address the Kaizen philosophy. For a link to the JIT ("Just-In-Time") concept, Schonberger (1983) can be recommended. How the development in the US was affected by what happened in Japan, and the partly failed attempts to imitate methodologies and tools, is explained by Cole (1999). An interesting projection into the future of quality is made by Normann (2001). In fact, Richard Normann was very early in his understanding of the important role of services; see Normann (1977). Some texts describing the development of the quality movement are Wadsworth et al. (2002), Conti (1993) and Dale (1999). We would also like to mention that nowadays there are quite a number of periodicals dealing with quality. Some of these are Business Process Management Journal, European Quality, International Journal of Quality and Reliability Management, International Journal of Reliability, Quality and Safety Engineering, Journal of Quality Technology, Journal of Quality in Maintenance Engineering, Measuring Business Excellence, Quality, Quality Progress, Quality Forum, Quality Engineering, Quality Management Journal, Quality and Reliability Engineering International, Qualität und Zuverlässigkeit, Six Sigma Forum Magazine, Total Quality Management and the TQM Magazine.


Part II - Design for Quality

Emphasis on quality should permeate product development work, for goods as well as services, right from the start. In that way, the most customer value can be achieved at the lowest cost, thereby securing a good profit for the organisation. This has been known for a long time, but very few methodologies have been available. Now the situation is changing, and this domain is growing tremendously. In this part of the book we will discuss some methodologies and tools that can be used in the planning, development and design of products, goods as well as services.

An organisation wanting to achieve long-term success needs not only to satisfy and delight its present customers, but also to create the necessary prerequisites to satisfy its future customers. Deming's way to express this was that "quality should be aimed at the needs of the customers, present and future". The customers of tomorrow will have needs and expectations different from those of our present customers. For this reason, it is important to keep up with changing needs and expectations, and to learn how to meet them by using new technological innovations.

Product development and related processes, methodologies and tools are extremely important to the success of an organisation. It is essentially in the product planning and development phase that properties, costs and consequences are laid down. At the same time, product development is very expensive. For example, according to the Financial Times (June 2001), Volvo Car Corporation will invest about USD 8.5 billion in the development of new car models over the next five years. The development expenditure for the large platform for Volvo model variants such as the S80, V70, S60 and the new "Adventure Concept" is estimated at approximately USD 3 billion. Such large operations naturally need to be well organised.

Product development and production are traditionally associated with goods. Today, most large companies have a well-documented process for developing new goods. Services, which make up a growing proportion of many product concepts, differ from goods in that the production of a service generally takes place at the same time as the service is consumed, or creates value for the customer. Hence, the actual service must often be developed concurrently with the production process. Parallel design of products and processes is not unique to services, however. The basic idea of integrated product development is that the same methodologies should be used for goods as well, saving both time and costs. Thus, it would not be unreasonable to say that the similarities between goods and services are greater than the differences, at least on the conceptual level.

Methodologies and tools that support a customer-focused approach in product development came into increasingly frequent use in industry during the 1990s. Reliability engineering tools, common in the aviation and nuclear power industries as early as the 1970s, have now become widespread. Quality Function Deployment, primarily a methodology to transform customer desires into product characteristics, is used increasingly in product development. The same applies to the use of design of experiments to identify important parameters and their target values. A systematic approach to developing products that are insensitive to variation due to the environment and handling still appears to be waiting for a breakthrough in large parts of industry, however.

Many of the methodologies and tools that we will discuss in this part of the book were designed for manufacturing companies, but are applicable in service companies as well. In some cases, viewpoints from the service sector have proved very useful in manufacturing companies. However, in many service companies, the future-oriented product development process is managed inadequately. Unfortunately, quality issues are, despite obvious shortcomings, given a rather obscure position in many software development companies at present.

4 Customer-focused Product Development

An organisation aiming for long-term success needs to satisfy and delight its present customers, but also needs to create the necessary prerequisites to satisfy and delight its future customers. The future customers will, in many cases, have different needs and expectations than the customers of today. Processes to keep up with changing needs and expectations, and with how these may be met by developing and utilizing new technological advances, are therefore vital in every organisation. This also applies to the processes that aim at creating the future products of the company, based on insights into future customer requirements and the technological development. Many of the methodologies and tools for product development that we will discuss in this part of the book were developed for manufacturing companies, but are applicable in service companies as well. In some cases, viewpoints from the service sector have proved very useful in manufacturing companies, particularly those relating to customer relations. However, in many service companies, the future-oriented product development process is managed inadequately. The same applies to software companies, whose total business activity could be regarded as product development. To this day, quality issues are given a rather obscure position in many IT companies, despite obvious shortcomings. Even if we focus on the development of products in the following, much of our reasoning is applicable to two other important processes as well, namely the development of technology, i.e. of specific solutions that can be incorporated into future products, and the development of the manufacturing process. In the context of service production, as in the development of software, it is often difficult to distinguish between product and production development.


4.1 Product Development Methodology

In recent years, product development has increasingly been brought into focus, not least from a quality perspective. The future costs of a product, for manufacturing as well as use, are to a large extent determined in the development stage; see Figure 2.6. This applies to both goods and services. The quality of the product is also established here. This is true for traditional quality dimensions, such as reliability and safety, but also for quality dimensions that surprise and delight the customer, perhaps in the form of new solutions to unspoken and unconscious customer needs¹. With creativity, thoroughness and a systematic approach in the development processes, it is possible to achieve high quality at low cost.

Product development has been discussed in a great many books and articles. This section is based on a comprehensive description of product development suggested by Don Clausing². It is built on an iteration in three steps at different product levels, from product families, to products and subsystems, down to, but not including, detailed design solutions. The key words in the three steps are requirements, concepts and improvement³; see Figure 4.1. The requirement stage implies that needs and expectations from customers, aspects from the manufacturing stage, and requirements from higher system levels should be gathered, understood, evaluated and translated into product requirements. The next step, the concept stage, means that a relatively large number of different concepts that satisfy the customer should be generated. One of these concepts should then, possibly after further improvements, be selected. This is a creative step. Finally, the selected concept should, in the improvement stage, be improved using systematic methodologies, such as reliability analyses, design of experiments and robust design methodologies⁴.

1 Different types of customer needs are discussed further in Section 14.1.
2 Clausing was principal engineer at Xerox, where he led the company's product development process and improvement efforts. Later he was appointed professor at MIT (Massachusetts Institute of Technology) in the USA.
3 We use the terminology from Clausing & Cohen (2000).
4 These methodologies will be discussed later in this part of the book.


[Figure 4.1 shows the levels product family, individual products, and subsystems/modules, each with the phases Requirements, Concept and Improvement, iterating between customers on the market and manufacturing/delivery, with a Build-Test-Fix loop and a separate, proactive technology development process feeding in.]

Figure 4.1 A model for product development illustrating the flow of iterations down to progressively lower system levels and a separate development of technical solutions. At each level three phases are distinguished: Requirements, Concept, and Improvement. Note that the use of a separate process for technology development, where uncertain technical solutions are tried out, provides greater predictability and shorter cycle times. (An expansion of a figure in Clausing & Cohen, 2000.)

It has become more common in the development context to speak not only of single products, but rather of product families. This has to do with the variation that exists between different customer needs. To satisfy the different needs, it is often necessary to split the market into segments⁵, each with its own product variant which can satisfy the particular needs of the customers in that segment. For this strategy not to become too costly, different platforms with a common basis can be created for several product variants, where different module variants can be combined into unique products. Using a comparatively small number of modules it is possible to achieve products that satisfy totally different customer needs. A company that has succeeded very well with this strategy is the Swedish truck company Scania, with its module system for building heavy vehicles. For instance, the 4-series of trucks and buses, introduced in 1995, is based on a component box of no more than 12,000 articles. Out of these, Scania offers 360 different truck models⁶.

Education in the university world may illustrate the corresponding thinking in the service sector. Every study programme can be regarded as a platform with a number of compulsory basic courses. Through different choices, the education can be made more or less unique for the individual student, depending on what the study programme allows and on the student's initiative, interests and needs.

Figure 4.1 illustrates the different levels: product families, products, subsystems and modules. With the two-way arrows we want to indicate that development involves an iterative process, and not only within the company or organisation. It is also important to let the customers take part in different ways, and to consider their views on the concepts that have been designed.

In Figure 4.1 the manufacturing phase has been given a small place. We will focus more on this in Part III of the book. Please note the small box saying "Build - Test - Fix". Traditionally, this has been considered the most important activity for maintaining a reasonable product quality. It is, however, a costly method, resulting in late, expensive and often unsuccessful changes, particularly for systems with high complexity. Unfortunately, even today this is often how quality work is conducted in far too many organisations, especially in the IT sector.

5 For instance, segments according to countries, ages, sexes or other differentiation of requirements.
6 See more on www.Scania.com/au/module.htm.

4.1.1 Requirements

Understanding the customers' needs and expectations is vital. Therefore, it is important to use several different methodologies and tools to create this understanding. One example is focus groups, where customers can describe and discuss situations related to the contemplated product; another is different types of customer interviews. Sometimes designers even visit the customers and take part in their work. Occasionally employees in the company are invited to use the product, or to acquire the service, to gain an understanding of the customers' problems, so that they can, with their insight into the technical possibilities, create products that are able to fulfil the customers' needs. Furthermore, it is sometimes appropriate to involve customer representatives directly in the development work, to ensure that the views of the customers are not forgotten. Different types of surveys can be taken to assess how important various product properties are to the customers. Conjoint analysis can be used for this; here the customers are asked to rank different product concepts that have been developed using the principles from design of experiments; see Chapter 7. With systematic management of the needs that have been found, whether they are expressed by the customer or not, a workable basis for the continued development work can be obtained. Quality Function Deployment, QFD, can be a useful methodology in this systematic translation of customer needs into requirements on design parameters. We will look further into this in the next chapter.

4.1.2 Concepts

Traditionally, one solution satisfying the requirements was looked for; when it was found, this solution was chosen. However, the first solution found is seldom the best one. It might not even be a good one. For this reason, it is important at the concept stage not to stop at one single possible solution, but to try to create several concepts that may be more or less good at fulfilling the requirements. Concept generation is a creative process, where brainstorming and other innovative methods can produce new solutions to both new and old problems; see King & Schlicksupp (1998). A system based on large numbers of patents has been developed by Genrich Altshuller to solve technical problems. He was an engineer at the patents department of the Soviet navy who, because of his outspokenness, fell into disgrace with the Soviet authorities. For this reason he ended up in the Gulag⁷, where he developed some 40 principles for creative technical problem solving.

7 Gulag is the name of the administrative management of a widespread system of prisons and forced labour camps in the former Soviet Union. Often Gulag is used, as here, as a name for the system of forced labour camps itself.

[Figure 4.2 shows a Pugh matrix: concepts are rated against criteria such as "Safe at high positions", "Easy to handle", "Easy to control" and "Good for heavy loads" with +, S and - relative to a reference concept.]

Figure 4.2 A reference concept (marked "reference" above) is selected among a number of different concepts and compared with the alternative solutions. The alternatives are marked as worse (-), equally good (S for same) or better (+) compared to the reference concept. The concept with the fewest +-signs is removed, after which a new round of selection is made, after having considered once more whether any solution marked with a +-sign might provide a smart solution to a sub-product that could be incorporated to improve any of the other concepts. In this way an improved concept is produced. This procedure is repeated until only one alternative remains. (From King & Schlicksupp, 1998.)

This system, TIPS⁸ (Theory of Inventive Problem Solving), was introduced in the Western world in the 1990s and has attracted a great deal of attention; see King et al. (1997). With many creative and good solutions to a problem, it may be difficult to choose the best alternative. The Scottish design engineer and professor Stuart Pugh developed a methodology that has come to be called Pugh Concept Selection; see Pugh (1990). This methodology is illustrated in Figure 4.2. An important aspect of this way of working is that you not only select the best concept, but also gradually introduce improvements by incorporating good ideas from concepts that have been eliminated during the selection process.
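The selection logic lends itself to a small computational sketch. The following is our illustration, not from the book; the criteria, concepts and marks are hypothetical:

```python
# Marks relative to the reference concept: '+' better, 'S' same, '-' worse.
scores = {"+": 1, "S": 0, "-": -1}

concepts = {
    "A": ["+", "S", "-", "+"],   # one mark per criterion
    "B": ["S", "S", "+", "-"],
    "C": ["-", "+", "S", "S"],
}

# Net score per concept; the weakest concept is removed and the matrix is
# rerun, after checking whether any of its '+' ideas can improve survivors.
totals = {name: sum(scores[m] for m in marks) for name, marks in concepts.items()}
weakest = min(totals, key=totals.get)
print(totals, "-> remove", weakest)
```

Repeating the round after each removal, and borrowing strong sub-solutions from eliminated concepts, mirrors the iterative procedure described for Figure 4.2.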

4.1.3 Improvement

Once a concept has been selected at a certain system level, it should be polished and made even better at a lower cost. In particular, safety, faultlessness and reliability should be considered. Design of experiments may be an appropriate methodology⁹ to clarify the connection between design parameters and properties of importance to the customer. It is particularly important to understand, already at the design phase, all the sources of variation that the product is exposed to in manufacture and later during use. Various sources of variation can create problems if the components do not fit together, whereby the product properties will differ from the planned properties. Robust design methodology (see Chapter 8) and a systematic handling of tolerances are important features here. It is important to note that some of the methodologies associated with improvement should preferably be introduced early in the process, and be considered both when generating and selecting concepts. For products with high demands on safety or reliability, it is important that these demands are taken into account already when the interesting concepts are brought up for selection.

Design reviews, and the slightly modified form used in software development called inspections, are important means for the designer to get feedback on, and evaluations of, the solutions from colleagues, and sometimes from customers as well. A basis for these reviews could be different analyses of user friendliness, or reliability analyses. Fagan (1986) has introduced a way to apply inspections during software development, which has had a great impact.

8 The Russian abbreviation, which is sometimes used, is TRIZ.
9 Reliability and design of experiments are discussed in Chapters 6 and 7.

4.1.4 More about the product development process

The process of turning out new products is both abstract and complex. Therefore, it can be regarded from different perspectives and be shown in different graphic forms. Below we will reflect briefly on some alternatives.

Integrated Product Development

As early as the mid-1970s, Fredy Olsson, later professor of mechanical engineering at Lund Institute of Technology, developed a model for the product development process which has later come to be called Integrated Product Development; see Figure 4.3 below. Here a cross-functional approach is accentuated, where market aspects, the actual product development and the development of the manufacturing process should occur concurrently.

[Figure 4.3 outlines the phases 0 Recognition of need, 1 Investigation of need, 2 Product principle phase, 3 Product design phase, 4 Production preparation phase and 5 Execution phase, with parallel tracks for the market (user investigation, market investigation, preparation for sales), the product (determining the type of product, product principle design, preliminary product design, modification for manufacture) and production (consideration of process type, determining type of production, determining production principles, preparation for production), ending in production.]

Figure 4.3 A model of integrated product development. (Based on Andreasen & Hein, 1987, but originating from Olsson, 1976.)

The model has many similarities with Simultaneous Engineering or Concurrent Engineering. Similar approaches are also introduced by Andreasen & Hein (1987) and Morup (1993).

Technology development

In many models of the product development process, the importance of having a separate technology development process is stressed. This is important if you are to be able to speed up the actual product development process without risking that the uncertain technology development will interfere with the product development projects. Ulrich & Eppinger (1995) stress the importance of regarding product development as an iterative process. In this context it is impossible, and inappropriate, to live by the worn-out slogan "do it right the first time". In a product development project it is not possible to know from the start what is "right"; some iterations are always necessary. By raising the awareness of the iterative nature of a product development project, this can be taken into account, and by structuring the project tasks, the resource consumption can be kept to a minimum.

Technology push or market pull

Another recurring theme is whether to be influenced by "technology push" or "market pull". There is, of course, no unequivocal answer. Rather, creative combinations of technology and market management are the key to success. Sometimes it has turned out that the development of technology has brought products that the customers had not been able to imagine, but which were very successful and thus fulfilled unconscious needs. Sony's Walkman and mobile telephones are good examples of successful technology push. In other situations, it may be important to identify a demand from the customers, to associate oneself closely with the customer, and perhaps even let them take part in the actual development process. A model for this was developed by Kaulio (1997), who also provided applications of the model.

Kansei engineering

A methodology for product development that has attracted attention lately is Kansei Engineering; see Nagamachi (1995). In Japan, Kansei Engineering¹⁰ is used on a wide spectrum of products, from cars and electronics to hygiene products. The methodology was, for instance, used in the development of the successful car model Mazda Miata (called MX5 in Europe), intended as a sporty low-price car for younger male drivers. Kansei Engineering is based on statistical processing of customer statements about how they experience various products. To obtain relations between a customer's kansei and design elements, a procedure based on four steps is used:

• First the product of concern has to be defined. A variety of products are collected. Their critical design elements are classified into items and categories.

10 The expression "Kansei" is difficult to translate, but it means roughly "total emotions".

• The second step is to assess the customer's perception. The customer's feelings and desires have to be identified and expressed with "kansei words".
• Those kansei words are arranged on a semantic differential scale to make the customer's perception measurable. Subjective evaluations of each of the collected products are conducted to gain data for the next step.
• Finally, these data are analysed using statistical methodology to quantify the contribution of each design element to the user perception; a small regression sketch is given below.
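As a rough illustration of this last step (our sketch, not from Nagamachi; the products, design elements and ratings are hypothetical), the contribution of coded design elements to a kansei rating can be estimated with ordinary least squares:

```python
import numpy as np

# Ratings of one "kansei word" (say "sporty", on a semantic differential
# scale 1-5) for eight product variants, plus two coded design elements.
ratings     = np.array([4.5, 2.0, 3.8, 1.8, 4.1, 2.2, 3.5, 1.5])
round_shape = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = rounded body shape
red_colour  = np.array([1, 1, 0, 0, 1, 1, 0, 0])   # 1 = red colour

# Least-squares fit: rating ~ b0 + b1*round_shape + b2*red_colour.
X = np.column_stack([np.ones(len(ratings)), round_shape, red_colour])
b, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(b)   # b[1] and b[2] quantify each design element's contribution
```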

The improvement perspective

As far back as the 1930s, when Shewhart reflected on product development, he emphasized the improvement perspective, for example in the picture that was to become the origin of Deming's improvement cycle; see Figure 4.4. The importance of learning is not limited to the product level, however. The learning perspective should also include the product development process itself. But continuously improving the product

[Figure 4.4 shows the three steps Specification (Step I), Production (Step II) and Inspection (Step III), first as a straight sequence (old view) and then as a circle (new view).]

Figure 4.4 Cycle for product development improvement. This cycle gave Deming the inspiration for what is now often called the improvement cycle or the Deming cycle; see Figure 9.3. (From Shewhart, 1939.)


development process is not altogether easy. This is illustrated clearly in an interesting case that was studied by a research team from MIT, who coined the term the improvement paradox. One of the companies in their study, Analog Devices, was very successful in its improvement work from the end of the 1980s up to the beginning of the 1990s. And yet the company very nearly went bankrupt. When the group studied what had happened, it turned out that the company had not managed to be as effective in improving the product development process as in improving the manufacturing process. Furthermore, the resources for product development were allocated in relation to the costs of manufacture. When these were reduced, the resource allotment to product development followed suit. Thereby, the product development department was not able to deliver new products to the manufacturing department at the required pace. In the course of time, this led to redundancy in the manufacturing department. The owners stepped in and demanded that the consequences be taken, meaning that employees in the manufacturing department were laid off. Then the improvement work in the manufacturing department came to a standstill. The company was caught in a downward spiral. The system effect of the improvement work was unexpected. From this we can learn that it is vital that improvement processes in a company are not limited to the manufacturing processes, but that other key processes are also put in focus, such as the product development process. This is described more comprehensively by Sterman et al. (1997) and Keating et al. (1999).

4.2 Service Development

Services differ from goods in that the execution of a service, which corresponds to the manufacture of a product, generally takes place at the same time as the service is consumed, or creates value for the customer. For this reason, the actual service must often be developed concurrently with the production process. An illustration of the influences on the production of services is given in Figure 4.5. Parallel design of products and processes is not unique to services, however.


[Figure 4.5 sketches service production: management and staff, organisation and control, service culture and physical/technical resources make up the invisible part; the customer meets the visible part, separated by the line of visibility.]

Figure 4.5 An illustration of what characterizes service production and consumption. (From Edvardsson et al., 2000.)

According to the ideas on integrated product development, this also applies to the development of goods. Thus, it would not be unreasonable to say that the similarities are greater than the differences, at least on the conceptual level. A number of models have been proposed for the development of services, however. We will discuss some of these below. In the main, we will follow Edvardsson et al. (2000), but we also recommend Edvardsson (1996).

4.2.1 Models for service development

In the 1980s, a number of models were created for the development of services. Three of these are depicted in Figure 4.6. On the whole, this corresponds to what was also described as goods development at the time. Figure 4.7 shows the model on which Edvardsson et al. (2000) have based their description of the service development process. Thus, according to Edvardsson et al. (2000), the actual development work is carried out in four stages. The first of these includes features such as brainstorming, evaluation and choice of product. In the second stage, it is decided whether the idea matches the culture and strategy of the company. This is an important milestone, which all product development projects should pass, regardless of the

[Figure 4.6 lists the stages of the three models, covering strategy guidelines and formulation, exploration and idea generation, screening and analysis, concept development and evaluation, business analysis, service design and process development, development and testing, market testing, and introduction/commercialization.]

Figure 4.6 Three models for the service development process. The figure is based on Edvardsson et al. (2000, p. 60), who in turn obtained the models from Scheuing & Johnsson (1989).

Figure 4.7 A model of the development process for services, with four stages: the Service Strategy & Culture Gate, Service Idea Generation, Service Design, and Service Policy Deployment & Implementation. The stages rest on the service logic, the company culture and strategy, prerequisites for world-class new service development, and supporting methods. (From Edvardsson et al., 2000.)

service element in the product. If an idea passes this milestone, resources should be reserved to start a development project. Then the actual development takes place, in stage three. The fourth stage is in Figure 4.7 called Service Policy Deployment & Implementation. This implies that the service process is realized in the existing service system, and that marketing, education, training and other means of preparation are taken care of before and during the market introduction. In Edvardsson et al. (2000) there is also a discussion of a number of supporting methodologies for the service development work; see Figure 4.8.

[Figure 4.8 links the four stages to supporting activities and methodologies: communicating internally, understanding customer behaviour and establishing market potential at the Service Strategy & Culture Gate; communicating with and studying customers to identify potential use and needs, using reactive and proactive market research techniques and creativity tools, in Service Idea Generation; process mapping, surveys, scenarios and conjoint analysis in Service Design; and developing the service, quality assurance of the implementation, testing concepts on customers and Quality Function Deployment in Service Policy Deployment & Implementation.]

Figure 4.8 Supporting methodologies for service development. (From Edvardsson et al., 2000.)

We can observe that many of the methodologies in Figure 4.8 are the same as those featuring in general theories on product development¹¹.

4.2.2 Flow charts

In the design of goods, Computer Aided Design (CAD) is used to describe the spatial dimensions. For services, time, that is, the order in which various tasks are performed, is of vital importance. As in many other contexts, a flow chart is an excellent instrument to

11 However, we recommend that Quality Function Deployment (QFD), which will be discussed in Chapter 5, be used earlier in the development process.


[Flow chart of a hotel visit, from Arrive at Hotel, Greet and Take Bags, Give Bags to Bellperson, Check in and Process Registration, through Go to Room, Deliver and Receive Bags, Sleep and Shower, Call Room Service, Take Food Order, Prepare and Deliver Food, and Eat, to Process Check out and Check out and Leave, with the physical evidence met by the customer (hotel exterior, parking, desk, registration papers, lobby, key, elevators, hallways, room, amenities, bath, menu, food appearance, bill) noted at each step.]


[Part of the Quality House for the Volvo 850 gear box. The customer requirements include a clear shift pattern on the knob, easy to find reverse, easy to find 1st gear, right shifting forces (N→1, 1→2, 2→1, 4→5), a firm reverse inhibitor, shift comfort and the position of 5th gear vs. the driver seat. The product characteristics include measurements of shifting forces (N→1, 2→1, 4→5, N→R) and a judgement of the knob pattern, with target values.]
drivers were asked to mark the cars on a scale of ten degrees, considering a number of aspects of the manual gear box. The results of the test appear in part in the Quality House shown in Figure 5.9. The project led to a number of changes to the manual gear box of the Volvo 850, introduced on the 1996 model, and the effect in terms of customer satisfaction was not late in coming. In the VOICE study, the number of complaints was soon halved.

[Relationship matrix symbols: strong (weight 9), medium (weight 3), weak.]
Figure 5.9 Part of a Quality House, that was produced during a project at Volvo Car, when developing the gear box for the Volvo 850. The customer requirements were produced from VOICE (Volvo Index of the Car Experience), based on a questionnaire to all buyers of Volvo cars. In addition, tests were carried out with customers who represented different market segments, with male and female drivers of all ages. (After Gustafsson, 1998.)

132

Studentlitteratur

5 Quality Function Deployment

5.4.2 Benefits and difficulties Experience shows that it takes on an average three to six months with half day meetings once a week to accomplish a project with QFD (Gustafsson, 1998). The work is normally done in cross-functional groups of 5-8 persons. Another clear trend is that only the first step is used, of the four steps described above in Figure 5.3. Thereby, the structural breakdown of the information that is gained when all four steps are used, is often lost. But utilizing all steps would necessitate considerable changes in the company routines. Among the advantages brought by the use of QFD Swedish companies mentioned improved communication; better knowledge transfer; unity within the group and improved designs. These, it may be noted, are primarily "soft advantages". Features such as better products, higher customer satisfaction and shorter production times appear later, and have perhaps not been felt to be a consequence of the work with QFD. Even though the methodology is simple conceptually, there are difficulties in implementing QFD. The three most frequent problems encountered by Swedish companies are the lack of management support; lacking commitment in the project group and too little resources (Gustafsson, 1998). A natural question is how many requirements you can handle when working with QFD. AT&T, who is regarded as very experienced regarding QFD, usually limit the number at about 25 requirements, which provides a manageable amount (see Gustafsson et al., 2000). Furthermore, gathering customer requirements is a difficult task. The gathered requirements are often vague, unstructured and placed on different levels. In such situations a affinity diagram may be useful 6.

5.5 Notes and References In the West, the automotive industry was the first to use this methodology. This was in the beginning of the 1980s and among the fore6

An affinity diagram is one of the seven management tools addressed in Chapter 22. Studentlitteratur

133

Part II — Design for Quality

runners were Bob King (see King, 1987) and Larry Sullivan (see Sullivan, 1986). Sullivan then promoted Ford's investment in QFD. Between 1987 and 1991 more than 5,000 Ford employees were given QFD training, and over 400 projects were run (Gustafson et al., 2000). A comprehensive book on Quality Function Deployment is Cohen (1995). This is a well-written book based on the author's broad experience of using the methodology in industry. Other books on QFD are King (1987), Akao (1990), Bossert (1991) and Day (1993). The book Mizuno & Akao (1994) in English is a classic in this field. In its Japanese version from 1978, this was the first book on QFD. Research and applications within the QFD field have been reported by Akao and others in a series of twelve articles in a magazine published by the Japanese Standardization Board. These articles have been translated into English and published by G.O.A.L. In Japan, QFD was also successfully used in connection with the design of software, see Yoshizawa et al. (1993). Other recent applications of the QFD technique are presented in Akao & Ono (1993), concerning deployment of costs, and in Yoshizawa (1993) concerning deployment of company strategies. A recently published article by Tan & Shen (2000) describes how the Kano model (discussed in Chapter 14) can be integrated with the Quality House. The interest in QFD in Swedish industry started in 1988 with a project at Volvo Car Corporation, with Kurt Falk as the driving force. At the same time a research project started at LinkOping Institute of Technology, with Roland Andersson and Bo Bergman at the helm. Output from this project is, for instance, Gustafsson (1993) and Gustafsson (1995a), which include extensive reference lists. Gustafsson et al. (2000) describe another case study from Volvo. Customer requirements may also be gathered in focus groups, which are a form of group interviews, where a group of people engage in interactive discussions. Some customer requirements can also be obtained using questionnaires. Formulating and analysing questionnaires to achieve a good result is not easy. Books dealing with these issues are, for instance, Malhoha & Birks (1999) and Tull & Hawkins (1999).

134

0 Studentlitteratur

6 Reliability

We are becoming increasingly dependent on the technological systems that we surround ourselves with, and on their reliability. For example, much of our energy supply is based on nuclear power; we need aircraft and cars for transport; and computers is a vital means for the storage and transfer of information. The consequences of interruptions or accidents are often serious, sometimes indeed disastrous. Even in our daily lives, we rely heavily on faultless systems. We only have to think of the power cuts occurring in winter, to be reminded of this. In our homes we use electric cookers, TVs and microwave ovens, our cars are becoming more complex; and various electronic devices and increasingly sophisticated navigation systems are used. Consequently, reliability is an extremely important quality dimension, and reliability engineering, comprising methodologies and tools for increased reliability, is a vital part in Total Quality Management. In this chapter we will discuss some basic concepts and methodologies in reliability engineering, primarily for non-repairable units. It is essential to have an understanding of these issues, to be able to create reliable products.

6.1 The Aim of Reliability Engineering The aim of reliability engineering is (cf. Figure 6.1) • to find the causes of failures and try to eliminate these, i.e. to increase the failure resistance of the product • to find the consequences of failures and, if possible, reduce or eliminate their effects, i.e. increase the tolerance of the product to failures. This is sometimes called increased fault tolerance.

Studentlitteratur

135

Part II — Design for Quality

• Find • Assess • Reduce • Eliminate

Bring back experiences

• Find • Assess • Relieve • Eliminate

Figure 6.1 The aim of reliability engineering.

Reliability engineering has many fields of application. One of these is Safety and Risk Analysis. This concerns judging risks of damage to person, property or the environment, caused by the system being analysed. It is becoming increasingly common to use statistics as a basis, to judge various types of risks in safety analyses, as the systems we use can cause great damage. Dams, bridges, nuclear power plants, telephone exchanges and aircrafts are good examples of this. The importance of focusing on reliability and all related costs is underscored by the generally growing tendency to consider the Life Cycle Cost (LCC) of a system. In a production system it is common to sum up all proceeds of the system and subtract the life cycle costs. The result of this operation is generally called the Life Cycle Profit, (LCP), see Ahlmann (1989). In the Life Cycle Analysis concept, environmental aspects are also considered; see, for instance, Ciambrone (1997). Results from LCC analyses are used to evaluate design solutions and the organisation of various maintenance arrangements. 136

© Studentlitteratur

6 Reliability

Purchase price

Training Running/Working expenses

Documentation Reserve materials Cassations

Scrapping

Costs of stoppage/ shutdown

Maintenance costs

Figure 6.2 The so called "iceberg", illustrating the Life Cycle Cost of a system, comprises all costs during the life cycle of a system, not only the purchasing price, which is the immediate costs visible above the water surface.

6.2 Dependability An important property of a product is its dependability, which can be defined as (free interpretation of the vocabulary in the IEC60050:191): "the capability of a unit to perform a required function under given conditions at a given point in time, or time interval, provided that the necessary maintenance resources are provided".

The dependability of a unit is determined by its • reliability, which is the ability to perform the required function under given conditions. • maintainability, which is a measure of how easy it is to detect, localize and remedy failures • maintenance support, which is the ability of the maintenance organisation to mobilise maintenance resources when needed. The first two properties are thus tied to the product, while maintenance support is a measure of the efficiency of the maintenance organisation. The concept reliability should only be used in a general sense, where matters involving handling by people, software and the environment are also considered. © Studentlitteratur

137

Part II — Design for Quality

Dependability

Reliability

Maintainability

Maintenance support

Figure 6.3 Factors affecting system dependability.

Units or component are often discarded when faults occur. They are referred to as non-repairable. Subsystems or more complex system are repaired and used again, and are thus called repairable units. In this chapter we will address some basic concepts, ideas and methodologies, for non-repairable units, as well as for units that are repaired when errors occur. However, our main focus will be reliability and measures of reliability. The concepts "maintainability" and "maintenance support" are beyond the aim of this book, although they are very important.

6.3 Basic Concepts To be able to speak of the reliability of a product, we need a good definition of the failure concept and the failure consequences. We also have to be able to describe how frequently failures occur and what the risk is of a failure occurring. Generally speaking, a failure is a deviation from the demands made on a product. As a rule, it is impossible to predict when a failure will occur. Therefore, we experience the time to failure as a random variable and have to use the probability concept to describe reliability. In this paragraph we will study some measures of reliability. 138

Studenffitteratur

6 Reliability

6.3.1 Reliability function Let us study a unit, randomly chosen from a manufactured batch. The probability that this unit still works after the operating time t is called the survival probability at time t. This probability regarded as a function of t is called the reliability function (survival function) and is written R(t). If R(1000) = 0.90 this implies that about 90% of the units of a big batch will survive 1000 operating hours. This also means that the failure probability at the time t = 1000 hours is 0.10, i.e. about 10% of the units have failed after an operating time of 1000 hours. The probability of a failure before t is usually denoted F(t). Thus it follows that F(t) = 1— R(t). The function F(t) is denoted life distribution (or distribution function) for the time to failure. This area corresponds to F(a), i.e. the probability that a unit will not survive a units of time.

This area corresponds to R(a), i.e. the probability that a unit will survive a units of time.

probability density functionf(t)

Time Figure 6.4 The connection between the probability density function f(t), the life distribution F(t) and the survival function R(t).

6.3.2 Failure Rate The risk that a unit, which has survived the time t, has failed at the time t+h can be expressed as the failure rate A.(t) multiplied by h, when h is small. The failure rate is, in other words, a measure of how likely it is that a unit that has survived the time t will fail in a "very near future". Suppose, for instance, that k(t) = 0.01 at t = 1000 (i.e. the failure rate after 1000 operating hours is 0.01). This can be interpreted as the probability for a unit, which works after 1000 hours operating time, to fail during the next following Studentlitteratur

139

Part II — Design for Quality

hour is roughly 0.01. Thus about 1% of the units that have survived 1000 hours will fail during the next hour. Formally the failure rate is defined as the ratio between the probability density function f(t) for the time to failure and the corresponding survival function R(t), i.e. X(t) = f(t) R(t) the probability that a unit which has survived this far ...

... is about A.(t) • h

t+ h

Time

Figure 6.5 The probability that a unit which has survived until time t will fail before the time t+h is roughly .1..(t)h.

Figure 6.6 illustrates how the failure rate often varies with time. Owing to its characteristic appearance it is often called the bath-tub curve. Variations of manufacturing and material give certain units bad reliability. These units will then break down first and the population will improve as time passes. This first period is called the early failure period. It is followed by the constant failure rate period, where the failure rate is nearly constant. This period is sometimes called the best period. The failures that occur can arise from temporary overloading. Finally, increased wear and ageing make the failure rate grow during the wear-out period. For non-repairable units it is sometimes assumed that the failure rate is constant, i.e. independent of the age of the unit. It is said that the units do not age. Then the probability of failure in a certain time interval, given that the unit is functioning at the start of the interval, depends only on the interval length and not on the age of the unit. In this case the age of the unit does not give us any information about its remaining length of life. This can be interpreted as implying that a unit fails due to factors that are independent of the earlier operating time of the unit, for instance a temporary overload. 140

© Studentlitteratur

6 Reliability

A failure rate

early failure period

constant failure rate period

wear-out period

1/0 Time

Figure 6.6 Typical shape of the failure rate of a non-repairable unit, the so-called bath-tub curve.

1.6 Weibull distribution At). 03-1 exp(-0 1.2

0.8

0.4

0.5

1.0

1.5

2.0

Figure 6.7 Probability densities of Weibull distributions with some different values of the shape parameter p. When p = 1 the exponential distribution is achieved. The applicability of Weibull distributions is due to the fact that the distribution can be achieved by varying the form parameter p.

If the failure rate is constant, say X., the reliability function can be written R(t) = exp(-Xt), t > O. Then the time to failure is said to be exponentially distributed. Another common life distribution is the © Studentlitteratur

141

Part II — Design for Quality

Weibull distributions, whose reliability function can be written R(t) = exp(—(t/c)11), t ?.. O. Here p. > 1 corresponds to increasing failure rate and p < 1 corresponds to decreasing failure rate. When 13 = 1 the failure rate is constant, which means that the time to failure is exponentially distributed, see Figure 6.7.

6.4 System Reliability When a system built of components or sub-systems is studied from a reliability point of view, the structure is often illustrated with a reliability block diagram as illustrated in Figure 6.8. clutch

bevel gear cardan shaft

engine

engine

clutch

R.

R.,

gear box Rgb

cardan shaft Fics

bevel gear Rbg

Figure 6.8 A simple system and its reliability block diagram.

6.4.1 Series systems Let us assume that we have n components with reliability functions 111(t), R2(t), ..., Rn(t) and failure rates 7 1(t), 77

1.5

r60.15

0.3 0.2

0 05 3 0.4

Q 1.0

O'

ar 0

5.0 4.0 3.0 2.0

2' a z. 10.0 uz,

;34 e- - 15.0

-C3 rt,

Inging

-2.0

1

-1.0

eIl

-

II

0.2

03

-

0.0

0.4 05 06 08 1

The va ue o t for the pont on the disribution line which corresponds to F(t) = 10% is an estimator for L10

'IN Num

III I

mgaltrujin um IR L

134 811111111



MWiffintal I 111"11102111 i 111 11

IMME'III

5111111111111114111

11 1d

=nm n

99.0 ERIE!

:«7›: 95.0 90.0 2"' 80.0 'nj 70.0 er' 60.0 50.0 "0 13 40.0 Q 0 -ei ne 30.0 ri3 -"r 25.0 CL z --- 20.0

0%

99.9

II

2.0

6 7

11 9

10

The va ue oft for the po'nt on the disribution line which corresponds to F(t) = 50% is an estimator for L50

2 3 4 5

I

11,

4

.0

1

II 1

I 1 1111111111 I 1 I 11

20

111111111111 -7.0 40 50 60 70 80 90 100

121111113111 30

111 -11111

0

2.0

I.0

I _1.0

111 1 11111I I11

The parameter ß is estimated to 1.4

1

1

4.0

S'nce F a) = 0 63 we can estimate a to the value oft for the point on the distribution line which corresponds to F(t) = 0.63. Here the estimator of a is 6.2.

3.0

Äly nnö JOI u5,1saa— II .7.10d

6 Reliability

Then we plot t1 = 1.36 against the mean rank 0.10, t2 = 2.00 against the mean rank 0.20, ..., t9 = 11.20 against 0.90. The result is illustrated in Figure 6.14. As the plotted points fit well to a line there is no indication that the assumption that the observations are from a Weibull distribution is incorrect. With the help of the probability paper it is possible to estimate the parameters a and /3 in the Weibull distribution function F(t) = 1 exp(-(t/a)P), t 0. It is also possible to estimate different measures of time to failure such as the median life length L50, the nominal life length L10 and the Mean Time To Failure MTTF3. How this is done is illustrated in Figures 6.13 and 6.14. If the failure data are incomplete, for instance if the test has been interrupted before all the units have failed, the plotting has to be done according to a slightly different technique (see e.g. Nelson, 1982).

6.6.2 TTT-plotting The TTT plot

TTT-plotting, where TIT is short for "Total Time on Test", is a completely different technique from plotting on a probability paper. The so called TTT-plot gives a picture of the failure data which is independent of the scale and is situated completely within the unit square with corners in (0,0), (0,1), (1,0) and (1,1). The deviation of the plot from the diagonal provides information about the deviation of the life distribution from the exponential distribution, and thus about the failure rate of the unit. tn of times to The TTT-plot of an ordered sample 0 = t0 < t1 < t2 n, the points failure is obtained by plotting, for j = 0, 1, (fin,SJIS„), where S0 = 0 and Si = nti + (n-1)(t2-41) + + (n-j+1)(tr-to) for j = 1, 2, ..., n 3

The term "median life length" stands for the time that a unit survives with the probability of 50%, "nominal life length" is the time that a unit survives with the probability of 0.90 and MTTF (Mean Time To Failure) is the expectation of the life distribution F(t). Studentlitteratur

151

Part II - Design for Quality

1.0

0.8

0.6 Si /S

0.4

0.2

0.0

0.0

0.2

0.4

0.6

0.8

1.0

j/n

Figure 6.15 TTT-plot showing the material in Example 6.3.

and then connecting these points with line segments. Si is the total time the units have been tested at time As 0 j/n 51 and 0 Si/S, 1 the TTT-plot starts at (0,0) and ends at (1,1) and is situated completely within the unit square; see Figure 6.15. Example 6.3

Assume that we have tested n = 9 units and have obtained the following times to failure (see Example 6.1): 1.36 2.00 3.20 3.86 5.00 5.90 7.00 9.00 11.20 tl t2 t3 t5 t4 t6 t8 t9 This gives the following Si-values: S1 = 9 • 1.36 = 12.24 S2 = S1 + 8 • (2.00-1.36) = 17.36 S3 = S2 + 7 • (3.20-2.00) = 25.76 S4 = S3 + 6 • (3.86-3.20) = 29.72 S5 = S4 + 5 • (5.00-3.86) = 35.42 S6 = S5 + 4 • (5.90-5.00) = 39.02 S7 = S6 + 3 • (7.00-5.90) = 42.32 S8 = S7 + 2 • (9.00-7.00) = 46.32 S9 = S8 + 1 • (11.20-9.00) = 48.52 152

Studentlitteratur

6 Reliability

In order to draw the TTT-plot of the material we calculate So/S9 = 0 S3/S9 = 0.53 S6/S9 = 0.80 S9/S9 = 1.00

S1/S9 = 0.25 S4/S9 = 0.61 S7 /S9 = 0.87

S2/S9 = 0.36 S8/S9 = 0.73 S8/S9 = 0.95

If, in a unit square, we plot the ten points (0,0), (1/9,0.25), (2/9,0.36), (3/9,0.53), (4/9,0.61), (5/9,0.73), (6/9,0.80), (7/9,0.87), (8/9,0.95) and (9/9,1.00) and connect them with line segments we get the ITI-plot in Figure 6.15. Model identification

When the number of failure times n increases, the 111-plot approaches a curve which is characteristic of the underlying life distribution F(t). This curve is called the scaled 11 1-transform. Accordingly, the 1'1"1-plot can be interpreted as an estimator of the scaled TTT-transform of F(t). How the 1T1-plot approaches the 111transform is illustrated in Figure 6.16. 1.0

Si /S„

0.0 j/n

1.0

Figure 6.16 TTT-plots based on simulated data from a Weibull distribution with = 2.0, n = 10 i (1) and 13= 2.0, n = 100 in (2) and the TTT-transform of a Weibull distribution with shape parameter /3 = 2.0 in (3). (From Bergman & Klefsjo, 1984a.) © Studentlitteratur

153

Part 11 — Design for Quality

Hence we may find a suitable life distribution model of our failure data by comparing the TIT-plot to scaled TTT-transforms of different life distributions and then choosing the life distribution whose TTT-transform best corresponds to the TTT-plot. This was the first application of the TTT-plotting technique that was introduced by Barlow & Campo (1975). Failure rate information The TTT-plot also gives information about the failure rate of the current kind of unit. It is possible to show that increasing failure rate of a life distribution corresponds to the fact that the TIT-transform is concave (i.e. the slope of the tangency is declining). A T1Tplot that shows a concave pattern is therefore probably based on times to failure from a unit with increasing failure rate. For example, it seems reasonable that the distribution underlying the TTT-plot in Figure 6.15 has an increasing failure rate. The same conclusion can be drawn from the TIT-plot in Figure 6.17. 1.0

1 1 1 I I 1111111111111

0.8

0.6 Si /S, 0.4

0.2

0.0

1111111111111111111 1.0 0.8 0.4 0.6 0.2 0.0 j/ n

Figure 6.17 TIT-plot showing times to failure from engines of dumpers that have been used in a Swedish mine. As the TTT-plot has a concave tendency, the engines are likely to show an increasing failure rate, which in turn means that it is probably profitable to perform preventive maintenance at suitable intervals. (From Kumar et al., 1989.)

154

Studentlitteratur

6 Reliability

6.7 Some Qualitative Analysis Methodologies There are a number of different ways to analyse systems from a qualitative, comparative, perspective. Here we will look briefly at two of these, Failure Mode and Effect Analysis and Fault Tree Analysis.

6.7.1 Failure Mode and Effect Analysis - FMEA Failure Mode and Effect Analysis, FMEA, is a very useful methodology for reliability analysis. It involves a systematic check-up of a product or a process, its function, failure modes, failure causes and failure consequences. FMEA can be performed as a qualitative analysis of the connections between failure modes and the corresponding failure consequences at the system level and how it is possible to take measures to prevent failures or reduce the consequences of failures. FMEA can be used in many different ways. Qualitative and very rough analyses can suitably be initiated already during the stages of planning and definition of a project. The aim here can be to investigate whether it is possible to fulfil the reliability demands of the market. During the design and development stages a more detailed, quantitative FMEA may be used for various reliability activities. Such an FMEA can serve as a very good basis for a design review, which is a systematic analysis of the design by a group of persons with different knowledge and experience. These kinds of FMEA are called design-FMEA. In connection with pre-production engineering, process-FMEA is a way of evaluating the manufacturing process. Generally, malfunctions of the product are studied and an analysis is made of how these malfunctions can be caused by disturbances in the manufacturing process. Process-FMEA can serve as a basis both for improving the process before and after the start of manufacturing and as a basis for the design of the process control. © Studentlitteratur

155

inIriallilluapms

rn

Figure 6 .18Exampl eof a form intended for a design-FMEA.

1

Item

Part No name issue

...

... 36

6

6

High loading

12

Current status Cause of Current controls OCC SEV DET RPN failure

6

Effect of failure

2

Failure mode

Control Crack Reduced Material of fuel funcfault flow tionality

Function or process

1

Mary

Strengthen

Strengthened

Revised status

6

6

6

12

OCC SEV DET RPN

2

Action taken

...

Action by

No action

Recommended corrective action

Failure Mode & Effect Analysis — Design/Process

A-Ill on0 icy u6! saa — 11 P od

6 Reliability

The result of an FMEA is entered on an FMEA form. This form can be designed differently depending on the purpose of the failure effect analysis. Example of a form is shown in Figures 6.18. Sometimes a quantitative analysis also is done when using designFMEA. This is often called Failure Mode, Effects and Criticality Analysis, FMECA. The main idea is to consider each failure mode of every component in the system and quantify certain values in order to rank the different failure modes. There are several procedures for this numerical analysis, but these will not be discussed here. One way to make an assessment is to weigh together the probability for failure F, the degree of seriousness A, and the probability for detection U to a risk number, often noted Risk Priority Number, RPN. Often the RPN is calculated as the product between the numbers F, A and U; see Figure 6.18. We refrain here from further discussions on risk analysis.

6.7.2 Fault Tree Analysis — FTA

When performing a quantitative as well as a qualitative analysis of complex systems, Fault Tree Analysis, FTA, can be an excellent methodology. The fault tree is a logical chart of occurrences that illustrates the connections between a non-desired occurrence at the system level and its causes at lower system levels. The design of a fault tree begins by specifying the non-desired top event. The immediate causes of this non-desired top event have to

Figure 6.19 Some common symbols used in Fault Tree Analysis: main event; basic event; incompletely analysed event; or-gate (≥1); and-gate (&); transfer to or from another part of the fault tree; restriction.


Figure 6.20 Illustration of a simple fault tree for the system whose reliability block diagram is shown to the left.

be identified, and they are connected to the top event with a suitable logic gate. This procedure is repeated until, by gradual stages, the basic events are reached, at apparatus, component or detail level. If the system in question is made up of many components, this might lead to trees that are big and difficult to grasp. It is then possible to choose a lower level of decomposition, provided that the basic occurrences are independent of each other. Some common symbols used in Fault Tree Analysis are shown in Figure 6.19. A simple illustration of a comparison between a reliability block diagram and a fault tree is shown in Figure 6.20. Today, there are a number of computer programs for designing fault trees. Many of these also provide the possibility to make quantitative analyses based on the fault tree derived. One example of this can be found in Figure 6.21. It is wise to remember that quantitative analyses are often based on the assumption that the basic events occur independently of each other, an assumption that gives a simple but sometimes doubtful model.
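As a small numerical sketch of such a quantitative analysis (our own illustration, with invented probabilities), the following Python snippet evaluates the top-event probability of a simple hypothetical tree under the independence assumption mentioned above.

```python
# Sketch of a quantitative fault tree evaluation under the (simple but
# sometimes doubtful) assumption that all basic events are independent.
# The tree structure and the probabilities are invented for illustration.

def or_gate(*probs):
    # P(at least one event) = 1 - product of (1 - p_i) for independent events.
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def and_gate(*probs):
    # P(all events occur) = product of p_i for independent events.
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical system: the top event occurs if component 1 fails,
# OR if both of the redundant components 2 and 3 fail.
p1, p2, p3 = 0.01, 0.05, 0.05
top = or_gate(p1, and_gate(p2, p3))
print(f"P(top event) = {top:.6f}")   # 0.012475
```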

6.7.3 Top-down or bottom-up?

In many systems analyses the question arises whether to use a top-down or a bottom-up approach. Here the choice is between Fault Tree Analysis and Failure Mode and Effect Analysis. Our answer is: both.


Figure 6.21 A fault tree describing the top event "cruise control not deactivated by braking". Events in the tree include blocked tubes (1-3), a quick air inlet that is stuck closed, a regulator valve that is stuck closed, a faulty electronic unit, external grounding of the regulator valve, no opening command to the electronic unit, and a quick valve switch that does not open. The fault tree was included in a safety analysis of a certain type of cruise control, which besides electronic components also has mechanical and pneumatic components. The analysis was made at the Swedish Defence Research Establishment. (From Gunnerhed, 1991.)


It is natural to do an initial FMEA not at a parts level but at an intermediate level, to investigate what the effects on the system level are of component or subsystem failures. This provides a basis for selecting the catastrophic failures that might occur at the system level, and FTA can then be used for these. In turn, the fault trees direct attention to areas of the system where more thorough FMEA analyses should be performed. In this way the reliability analysis becomes an iterative process, alternating between bottom-up and top-down analyses. This should be compared to the iterative product development process suggested in Figure 4.1.

6.8 The Development of Reliability Engineering

In connection with the development of aviation, more systematic work on reliability engineering issues started. Failure data from various components were collected, particularly from aircraft engines. In the 1930s, the statistical material was extended to include accidents as well. The planning of future changes was then made with these data as a basis. This is how probability assessments began to be used to achieve increased safety and dependability in aircraft. The first reliability predictions were probably made in Germany during the Second World War, where Wernher von Braun worked with a missile project, called V1. However, the first series of about ten missiles was not particularly dependable — all the missiles crashed already at the launch. Then Robert Lusser, a German mathematician, was sent for, and he was the first person to initiate reliability analyses. In the 1940s, people worked hard with dependability problems in railway transport. General Motors Corporation, for instance, managed to extend the life of locomotive engines by a factor of four. In this period, a great deal of effort was focused on resistance to fatigue. Waloddi Weibull (1887-1979), a Swedish professor at the Royal Institute of Technology in Stockholm between 1924 and 1953, see Figure 6.22, made vital contributions towards an increased


Figure 6.22 Waloddi Weibull (1887-1979), left, and Benjamin Epstein (born 1918), right, made innovative contributions to the development of reliability engineering. The photo of Weibull was taken by Sam C. Saunders, who has himself played an important role in the advancement of the subject. The photo of Epstein was taken by Bengt Klefsjö.

understanding of the fatigue phenomena⁴. He also suggested the life distribution that would later be called the Weibull distribution; see Weibull (1951). The exponential distribution, another central distribution in reliability engineering, was studied in detail by Benjamin Epstein (born 1918) in the beginning of the 1950s. You can say, with good reason, that the real birth of reliability engineering was in the 1940s. The electronic development in the 1950s brought new and greater demands for systematic reliability work. The growing complexity of electronic systems made reliability analyses necessary. Highly specialised maintenance groups were needed to keep the new systems in good condition. In the American navy, for example, the number of electronic valves in a destroyer had increased from 60 in 1937 to 3,200 in 1952, but the equipment could only be used about 30% of the time due to poor dependability. In the Korean war (1950-1953), the US Department of Defence found that the annual costs of maintenance of unreliable equipment were more than twice the

⁴ In Akersten & Klefsjö (2002) a picture is given of Waloddi Weibull — the man and scientist.


cost of the actual investment. This led to an increased awareness that it was necessary to implement fault prevention in the design stage, rather than waiting for faults and repairing them as they occurred. The aerospace and nuclear industries started to grow in the 1960s, and this brought about an increased awareness of safety issues. Preventive failure analyses began to play an increasingly important role, and the analyses became more extensive. Fault Tree Analyses were initiated in 1961, in connection with safety analyses of the launching pads for Minuteman missiles. Failure Mode and Effects Analysis was also developed in the beginning of the 1960s, when McDonnell Douglas started using this technique. After that, the American as well as the Swedish aerospace industry used the technique regularly from the end of the 1960s. Probability estimates for risk assessments were used to a growing extent. As an instance of this, there was a requirement for probability assessment when the Concorde⁵ aircraft was developed in a cooperation between British Aircraft Corporation and Aerospatiale in France. Faults were graded into less serious, serious, critical and disastrous faults, and risk measures were set for the various fault types. For example, a disastrous fault, leading to an aeroplane crash, was allowed to occur with a probability of at most 10⁻⁷ per flight hour. In the nuclear industry, safety thinking had a breakthrough with the building of nuclear power plants. By the end of the 1960s it was, nevertheless, only a few industrial branches that dealt systematically with reliability assessments and analyses, but the awareness increased all the while. The first data banks, consisting of failure data to facilitate failure estimation, were started in the 1960s. The first extensive risk assessment of an industrial plant was performed on the nuclear power plant at Peach Bottom (a BWR⁶ plant) in

⁵ Concorde flew for the first time on March 2, 1969 and travels at a speed of about 2,300 km/h. It is not known to the authors how many flight hours the 13 Concorde planes had accumulated at the devastating crash minutes after take-off from Charles de Gaulle Airport in Paris on July 25, 2000, in which 113 people were killed.
⁶ BWR = Boiling Water Reactor.


the US, and was published in 1975. This study, led by Professor Norman C. Rasmussen at the Massachusetts Institute of Technology between 1972 and 1975, demanded a work contribution corresponding to some 50 engineer-years. The result was a 20 cm thick book of some 2,000 A4 pages. Different conceivable accidents were analysed, ranging from component failures in safety systems to operator mistakes. The report, also called WASH 1400, must be regarded as a milestone in the development, even though the risk assessments have been strongly questioned. In particular, what were felt to be underestimates of safety margins, human handling, dependent faults and the consequences of faults were criticized. According to WASH 1400, the risk of dying from radioactive emission is no greater for the inhabitants living in the vicinity than the risk of being hit by a falling meteorite. After the Three Mile Island accident⁷ in 1979, the attitudes changed again. Even if the accident did not cause any deaths, the inhabitants in the region were, naturally, shocked. The Rasmussen report had identified a conceivable sequence of events similar to the one that happened at Three Mile Island. A commission appointed by the President recommended that probability assessments for safety should be implemented again. Also, the propensity to consider the human factor in risk assessments increased.

6.9 Notes and References

Barlow & Proschan (1965, 1981) introduce a relatively advanced reliability theory. Furthermore, Mann, Schafer & Singpurwalla (1974), Lawless (1982), Bain (1991), Henley & Kumamoto (1981), Kapur & Lamberson (1977), O'Connor (2002), Crowder et al. (1991), Høyland & Rausand (1994) and Aven (1992) are all well-known books on reliability. Of these, the books by Henley & Kumamoto, Kapur & Lamberson and O'Connor have the most practical approach. O'Connor's book is particularly well worth reading. Ascher & Feingold (1984) provide an interesting discussion about the analysis of data from repairable systems.

⁷ A nuclear accident, in which a fault in the feed water system caused a fault in the cooling system.


Nelson (1982) deals with different kinds of statistical conclusions from complete as well as censored failure data (from non-repairable systems). An overview article about reliability is provided by Bergman (1985). The reliability area is also discussed by Barlow (1984). In Akersten & Klefsjö (2003) a portrait of Professor Waloddi Weibull as a man and scientist is presented. FMEA is addressed in MIL-STD 1629, "Procedures for Performing a Failure Mode, Effects and Criticality Analysis", and in IEC Standard 60812. Books discussing FMEA are Stamatis (1994) and McAndrew & O'Sullivan (1994). FTA is described in Vesely et al. (1981), IEC Standard 1025 and Henley & Kumamoto (1981). A review paper on Fault Tree Analysis is Lee et al. (1985). Some articles highlighting the TTT-plotting technique, which is useful in many contexts, are Bergman & Klefsjö (1984a, b, 1998), Westberg & Klefsjö (1994) and Akersten et al. (2001). There are further references in these articles. The LCC concept is addressed in Blanchard (1978), Isacson (1990) and Blanchard & Fabrycky (1991). Dependability and quality are interlinked for at least two reasons. Firstly, dependability is a very important quality dimension of a system. Secondly, the dependability of the production process affects the process quality, not only in terms of efficiency, but also of capacity. Today Total Productive Maintenance, TPM, is often mentioned; see Nakajima (1988), for example. This philosophy is based on activities to optimise equipment efficiency by way of maintenance, often carried out by the operators themselves, and on improvement work based on group activities. The ideas in Total Productive Maintenance are close to the values and methodologies of TQM. TPM is described in Nakajima (1988), Suzuki (1992) and Tajiri & Gotoh (1992). The experiences of implementing TPM in Swedish industry are highlighted in Lycke (2002). The important area of Safety and Risk Assessment is not included in this book, but for further reading on this, we refer to Reason


(1990, 1997), Cox & Tait (1998), and Lees (1996). HAZOP (Hazard and Operability studies) is described in Redmill et al. (1999) and Kletz (1999). In this book, we have not addressed maintenance and issues concerning maintenance engineering. Some references here are Blanchard (1998), Campbell & Jardine (2001) and Moubray (1997). Finally, we would also like to mention that there are a number of journals in the reliability area. Some of these are Quality and Reliability Engineering International; IEEE Transactions on Reliability; International Journal of Reliability, Quality and Safety Engineering; Lifetime Data Analysis; Microelectronics and Reliability; and Reliability Engineering and System Safety.


7 Design of Experiments

To be able to base decisions on facts and to perform quality improvements it is necessary to collect and treat data systematically. However, the facts naturally accumulated during product and process operation are not enough. Knowledge accumulation has to begin earlier and it has to be accelerated. For that purpose, experiments also need to be planned and performed early in product and process development. Well-planned experiments rapidly provide knowledge of which values to choose for design and process parameters in order to achieve the best possible products or processes at the lowest cost. In this chapter we will describe some basic principles in the design of experiments¹. We will focus the discussion on how to identify the factors that have an impact, and will largely refrain from a discussion about the levels of these parameters. This is highlighted further in Chapter 8, where we also deal with a more advanced usage of design of experiments as a means of obtaining robust products and processes.

7.1 One-Factor-at-a-Time Experiments

Suppose that we want to maximize the yield of a chemical process. Influencing factors² that have been identified are the time t in the reactor tank and the temperature T in the tank. Assume further that a one-factor-at-a-time experiment is used. At first the temperature is fixed at 225°C and then five tests are performed at different reaction times; see Figure 7.1.

¹ The design of experiments is, therefore, very important within the Six Sigma programme; see Chapter 23.
² The discussion in this section is based on an example from Box, Hunter & Hunter (1978).


Figure 7.1 The first test series, which for the set temperature T = 225°C gives the yield as a function of the reaction time. The maximum appears at about t = 130 minutes. (From Box, Hunter & Hunter, 1978.)

From the test results in Figure 7.1 it is concluded that the reaction time that gives the best result is about 130 minutes. As it now "is known" that the reaction time 130 minutes gives the best result, another series of tests is performed. The time is set to 130 minutes, but the temperature is varied. In this series it is established that the "best" reaction temperature is T = 225°C; see Figure 7.2.

Figure 7.2 The second test series, which for the set reaction time t = 130 minutes gives the yield as a function of the reaction temperature. The maximum appears at about T = 225°C. (From Box, Hunter & Hunter, 1978.)



Figure 7.3 The yield as a function of both time and temperature can look as above and still give the results shown in Figures 7.1 and 7.2 when tested according to a one-factor-at-a-time experiment. The figure shows response curves of the yield as a function of time and temperature. The figure indicates that t = 70 minutes and T = 255°C is the best alternative. (From Box, Hunter & Hunter, 1978.)

In both cases about the same maximum yield has been achieved, i.e. 75 grammes. This can easily be interpreted as if the best combination of time and temperature actually has been found. However, the situation might be far from optimal, which is illustrated in Figure 7.3, where the response curves for the yield instead show that the best alternative is t = 70 minutes, T = 255°C. The moral of this example is that it is not at all certain that the best yield is achieved with a one-factor-at-a-time experiment. If two factors are interacting, i.e. the level of one factor influences the effect of changing the other, the "optimum" obtained by a one-factor-at-a-time experiment may be far from optimal. In the next section we shall illustrate that even if there are no interactions, a one-factor-at-a-time experiment is unnecessarily costly.
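The point can also be illustrated numerically. In the following Python sketch the yield function is invented so that it behaves qualitatively like the response surface in Figure 7.3 (it is not the function behind the book's figures); a one-factor-at-a-time search then stops well below the true maximum.

```python
# Illustration of how a one-factor-at-a-time search can miss the optimum
# when two factors interact. The response function is invented; its true
# maximum lies near t = 70 minutes, T = 255 deg C.

def yield_g(t, T):
    # A ridge-shaped response with a time-temperature interaction term.
    return (90 - 0.001 * (t - 70) ** 2 - 0.02 * (T - 255) ** 2
            - 0.005 * (t - 70) * (T - 255))

# Step 1: fix T = 225 deg C and vary the time t.
best_t = max(range(60, 181, 5), key=lambda t: yield_g(t, 225))

# Step 2: fix t at that "best" value and vary the temperature T.
best_T = max(range(210, 261, 5), key=lambda T: yield_g(best_t, T))

print("one-factor-at-a-time:", best_t, best_T,
      round(yield_g(best_t, best_T), 1))          # clearly below 90
print("true optimum:        ", 70, 255, yield_g(70, 255))
```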


7.2 A Weighing Experiment

Assume that two objects, A and B, with the unknown weights vA and vB, are to be weighed. At our disposal we have a balance scale giving a measurement error, denoted e. Assume that this measurement error is completely random and has the known standard deviation σ = 0.01 in a suitable unit. Also assume that measurement errors arising in different weighings are independent of each other. The task is to weigh the two objects with a measurement error whose standard deviation is at most 0.0075. How many weighings have to be performed? Assume that each of the objects is weighed twice and that the values obtained are yA1 and yA2 for A, and yB1 and yB2 for B. Each of these four values then has a measurement error with the standard deviation σ = 0.01. If then the mean values

$$\bar{y}_A = \tfrac{1}{2}(y_{A1} + y_{A2}) \qquad \text{and} \qquad \bar{y}_B = \tfrac{1}{2}(y_{B1} + y_{B2})$$

are formed, the standard deviation of these is equal to

$$\sqrt{\frac{0.01^2 + 0.01^2}{4}} = \frac{0.01}{\sqrt{2}} \approx 0.0071$$

Accordingly, these mean values actually fulfil the set precision demands. With this technique every object is weighed twice, i.e. all in all, four weighings are necessary. In industry, experiments are often very expensive to perform. The number of experiments therefore has to be kept down. A natural question here is whether it is possible to estimate the weights of the objects with the same precision using fewer than four weighings. It is actually possible to solve the problem with two weighings. It is only necessary to determine the sum of the weights and the difference between them. Observe that the difference is easy to measure, as there is a balance scale available. Assume that the obtained sum of the weights of the objects is z₁ = vA + vB + e₁, where e₁ is the measuring error from the first


weighing, and that the difference between the weights of the objects is z₂ = vA − vB + e₂, where e₂ is the measuring error from the second weighing. This gives

$$\tfrac{1}{2}(z_1 + z_2) = v_A + \tfrac{1}{2}(e_1 + e_2)$$

i.e.

$$v_A = \tfrac{1}{2}(z_1 + z_2) - \tfrac{1}{2}(e_1 + e_2)$$

In the same manner we get

$$v_B = \tfrac{1}{2}(z_1 - z_2) - \tfrac{1}{2}(e_1 - e_2)$$

As (e₁ + e₂)/2 and (e₁ − e₂)/2 both have the standard deviation

$$\sqrt{\frac{0.01^2 + 0.01^2}{4}} = \frac{0.01}{\sqrt{2}} \approx 0.0071$$

(z₁ + z₂)/2 and (z₁ − z₂)/2 give the weights of the objects A and B, respectively, with sufficient accuracy. By planning an experiment in a suitable way, it is possible to reduce the costs of the experiment drastically. In order to achieve this effect, a plan has to be drawn up before the experiment is started. Furthermore, we have to accept that the individual measurements (in this case the weighings) no longer give immediate information about the quantities that interest us (in this case the individual weights). The interesting conclusion cannot be drawn until both the weighings have been performed. But this disadvantage is of limited importance compared to the fact that the experimental costs in this case have been halved.
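A minimal Monte Carlo check of this calculation can be written as follows (a sketch added here; the true weights are invented, and σ = 0.01 as in the text).

```python
# Monte Carlo check of the two-weighing design described above.
import random, statistics

random.seed(1)
vA, vB, sigma = 2.50, 1.30, 0.01   # hypothetical true weights

estimates_A = []
for _ in range(100_000):
    z1 = vA + vB + random.gauss(0, sigma)   # weigh the sum
    z2 = vA - vB + random.gauss(0, sigma)   # weigh the difference
    estimates_A.append((z1 + z2) / 2)       # estimate of vA

print(round(statistics.mean(estimates_A), 4))   # close to 2.50
print(round(statistics.stdev(estimates_A), 4))  # close to 0.01/sqrt(2) = 0.0071
```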

7.3 A Factorial Design

The conclusion of the example in Section 7.1 is that wrong results might be achieved if a one-factor-at-a-time experiment is used. In


the previous section it was illustrated that even if correct results are achieved, one-factor-at-a-time experiments might be very costly. This section shows what a simple alternative to one-factor-at-a-time experiments can look like³.

7.3.1 A plan of experiments

It has been established, from operating experience, that the existence of cracks is a problem when using a certain kind of spring. Therefore, we need to examine what factors affect the crack initiation during usage. It is decided that three factors are to be studied, namely

• the steel temperature before hardening (S)
• the oil temperature when hardening (O)
• the steel carbon content (C).

Two values, called levels, are set for each factor. For each factor a low and a high value are chosen; see Table 7.1. For the low level we use the symbol "−" and for the high level "+".

Table 7.1 The levels of the factors.

Level   Symbol   Steel temperature before hardening (°C)   Oil temperature (°C)   Carbon content (%)
Low     −        830                                        70                     0.50
High    +        910                                        120                    0.70

If all possible combinations of the levels of the three factors are studied, the result is eight different test situations; see Table 7.2 and Figure 7.4. Such an experiment is called a full factorial design.

³ An example from Box & Bisgaard (1987) serves as a starting point.


Table 7.2 The eight different runs in a full factorial design with three factors tested on two levels.

              Factor
Run no.   S   O   C
1         −   −   −
2         +   −   −
3         −   +   −
4         +   +   −
5         −   −   +
6         +   −   +
7         −   +   +
8         +   +   +

For each of these eight test conditions, called runs, a batch of springs is manufactured. These springs are then exposed to a life test and the number of springs without cracks is observed. In order to avoid misleading influence from disturbing factors, such as parameter drifts in the manufacturing or measurement process, the runs are performed in a random order. The result can be illustrated using a cube, in which each corner represents a run, see Figure 7.5.

Figure 7.4 The different runs of the experiment illustrated as corners in a cube. For example, the corner marked "4" corresponds to C at low level but O and S at high level.



Figure 7.5 Illustration of the test results obtained in the different runs. For example, "79" means that 79% of the springs that were produced at a high steel temperature, low carbon content and a low oil temperature did not have any cracks.

7.3.2 Estimation of effects

Estimation of the main effects

It is now possible to estimate the effect of, say, raising the steel temperature S. For every combination of oil temperature (O) and carbon content (C) there are two observations, one for a low and one for a high steel temperature. Each of these differences gives an estimate of the effect of raising the steel temperature from 830°C to 910°C. The differences are 79 − 67 = 12, 90 − 59 = 31, 75 − 61 = 14 and 87 − 52 = 35, respectively. The arithmetic average (12 + 31 + 14 + 35)/4 = 23 of these differences, which may be written

$$\tfrac{1}{4}\bigl((79-67) + (90-59) + (75-61) + (87-52)\bigr)$$

therefore results in an estimate of the average effect of raising the steel temperature from the low value to the high. Note that this value can also be obtained as the difference between the means (79+90+75+87)/4 and (67+59+61+52)/4 of the results achieved at high and low steel temperature, respectively.


In the same way, it is found that the average effect of raising the carbon content is −5 and the average effect of raising the oil temperature is +1.5. In order to get the same number of observations when estimating each of the three effects using a one-factor-at-a-time experiment, sixteen experiments would have been necessary. Such a plan of experiments still would not have given a fair result, because in this case the steel and oil temperatures seem to interact. When the oil temperature is low, the effect of raising the steel temperature is not as great (79 − 67 = 12 and 75 − 61 = 14, respectively) as when the oil temperature is high (90 − 59 = 31 and 87 − 52 = 35, respectively). This indicates an interaction between the factors.

Estimation of the interaction effects

The interaction effect between the steel temperature S and the oil temperature O, which is designated S×O, can be estimated as

(the effect of raising S at high O-level) − (the effect of raising S at low O-level)

If no interaction exists, this difference is expected to be about zero. Since we have two estimates of the effect of raising S at each O-level, this means that we get the estimate

$$\frac{31 + 35}{2} - \frac{12 + 14}{2} = 20$$

To get the same type of expression for the estimates of the interaction effects as for the main effects, the estimate of the interaction effect is defined as half the difference above. The halving of the difference here, and for other estimates of interactions, can be considered a pure convention, which just changes the scale. This means that the interaction S×O between S and O is estimated by

$$\frac{1}{2}\left(\frac{31 + 35}{2} - \frac{12 + 14}{2}\right) = 10$$

Note that it is not possible to estimate the effect of interaction with a simple one-factor-at-a-time experiment. In the same way as


above, it is possible to estimate the effects of interaction between steel temperature and carbon content (S×C gives 1.5) and between oil temperature and carbon content (O×C gives 0.0).

Estimates using the design matrix

When estimating the various main effects S, C and O, it is possible to use the signs in the table describing the experimental design; see Table 7.2. The different effects can be estimated as a quarter of the sum of the run results, where each run result has been taken with the sign that describes the level of the corresponding factor. If the test results are designated y₁, y₂, ..., y₈, the effect of an increased steel temperature can be estimated as

$$\tfrac{1}{4}(-y_1 + y_2 - y_3 + y_4 - y_5 + y_6 - y_7 + y_8)$$

It is also possible to estimate the effects of interaction in a corresponding way, using what is called a design matrix, illustrated in Table 7.3. The signs in the columns regarding interaction are obtained as products of the signs in the columns for the corresponding factors, where the product of two signs has to be interpreted as the sign of the product of two numbers with these signs. The first sign in the column showing the interaction S×O between S and O is a plus, since "minus multiplied by minus equals plus". In the same way, the next sign is minus, since "plus multiplied by minus equals minus". The bottom line in Table 7.3, which does not belong to the design matrix, gives the estimated effects. Accordingly, the interaction effect S×O can be estimated as

$$\tfrac{1}{4}(y_1 - y_2 - y_3 + y_4 + y_5 - y_6 - y_7 + y_8)$$

Note that there might also be an interaction effect between all the three factors studied. In this case the interaction effect between steel temperature and oil temperature could depend on the level of the carbon content. This interaction effect can be estimated with the help of the signs in the last column in Table 7.3, which are obtained by multiplying the signs in the three columns involved. In this case we get an estimate of S×O×C equal to 0.5.


Table 7.3 The design matrix for the described experiment, combined with the results from the runs in the far right column, and the estimated effects on the bottom line.

                      Main factors and interactions
Run no.    S     O     C     S×O   S×C   O×C   S×O×C     y
1          −     −     −      +     +     +      −       67
2          +     −     −      −     −     +      +       79
3          −     +     −      −     +     −      +       59
4          +     +     −      +     −     −      −       90
5          −     −     +      +     −     −      +       61
6          +     −     +      −     +     −      −       75
7          −     +     +      −     −     +      −       52
8          +     +     +      +     +     +      +       87

Estimated
effects    23    1.5   −5.0   10    1.5   0.0    0.5

The dominating factors seem to be the steel temperature and the interaction between the steel and oil temperatures. When interaction effects are present there is no point in describing the main effect of each factor separately. The result should instead be illustrated as in Figure 7.6.


Figure 7.6 The results of the experiment described in Table 7.3. Each result above is the arithmetic mean of the two runs with different carbon contents, but with the same steel and oil temperatures. Note that when the steel temperature is low, a negative effect is obtained when the oil temperature is raised, but the effect is positive when the steel temperature is high.
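The whole estimation procedure of Section 7.3.2 can be condensed into a few lines of code. The following Python sketch (our own illustration, not from the original text) reproduces the estimated effects in Table 7.3 from the eight run results.

```python
# Effect estimation for the 2^3 spring experiment, using the standard-order
# sign columns of Table 7.2 and the run results y of Table 7.3.
from itertools import product

# Standard run order: S varies fastest, then O, then C (-1 = low, +1 = high).
runs = [(s, o, c) for c, o, s in product([-1, 1], repeat=3)]
y = [67, 79, 59, 90, 61, 75, 52, 87]

def effect(signs):
    """Estimate an effect as the signed sum of the run results divided by 4."""
    return sum(sg * yi for sg, yi in zip(signs, y)) / 4

print("S     =", effect([s for s, o, c in runs]))            # 23.0
print("O     =", effect([o for s, o, c in runs]))            # 1.5
print("C     =", effect([c for s, o, c in runs]))            # -5.0
print("SxO   =", effect([s * o for s, o, c in runs]))        # 10.0
print("SxOxC =", effect([s * o * c for s, o, c in runs]))    # 0.5
```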


7.3.3 Analysis of the result from the experiment

When we are to decide whether an estimated effect actually is the result of an actively influencing factor, we have to consider the random variation in the experimental results. If a main effect or an interaction effect does not influence the crack initiation, our estimate is expected to be close to 0, i.e. the estimate should be an observation from a distribution with expectation 0. How far from 0 the estimate must be in order for us to dare to say that the corresponding effect has influence depends on the uncertainty in our estimate. This, in turn, depends on the uncertainty of the results in the different runs, i.e. the values in the cube corners. If every batch examined contains 200 springs, we can estimate the standard deviation σ for the percentage of crackless springs as σ ≤ 3.6. This can be seen as follows. The number of springs without cracks is roughly binomially distributed with n = 200 and p unknown, i.e. Bin(200, p)-distributed. The standard deviation for the percentage of crackless springs is then

$$\sqrt{\frac{p(1-p)}{200}}$$

which at most equals 3.6%. This occurs⁴ when p = 0.5. Each effect, main effect or interaction effect, is estimated as a sum of eight observations (with a plus or minus sign) divided by four⁵. The random error of the estimator of an effect can thus be written as (e₁ + e₂ + ... + e₈)/4, where all the eᵢ are independent and have a standard deviation which is at most 3.6. The standard deviation of the estimator then becomes at most

$$\frac{\sqrt{3.6^2 + 3.6^2 + \dots + 3.6^2}}{4} = \frac{3.6\sqrt{8}}{4} \approx 2.6$$

As each effect is estimated as a sum of a number of independent random variables, it follows, according to the Central Limit Theorem⁶, that the estimate is roughly normally distributed.

⁴ Can be shown by differentiation of g(p) = √(p(1−p)/200), 0 ≤ p ≤ 1.
⁵ This is why the natural estimate of the interaction effect was multiplied by 1/2 on p. 178.
⁶ The Central Limit Theorem says that a linear combination of random variables, which are not too dependent on each other, has a distribution which comes closer and closer to a normal distribution the more terms the linear combination has.


If none of the factors is active, i.e. none of them has any influence, the estimated effects are observations from a reference distribution, having a more or less normal distribution with expectation zero and a standard deviation which, admittedly, is not known, but which is, at any rate, not greater than 2.6. Studying the estimates of the effects, we can establish that it is not probable that the estimated effect of the steel temperature is a result of random variation. The same applies to the estimated interaction effect S×O between steel and oil temperature. It is, as a matter of fact, not probable that an almost normally distributed random variable with expectation zero and standard deviation 2.6 is as large as 23, or even as large as 10, which corresponds to the estimated effects. The probability is roughly 0.1% that such a N(0, 2.6)-distributed random variable should attain a value that deviates by more than 3.3 times the standard deviation from the expectation; see also Figure 7.7. On the other hand, the measured effect of raising the carbon content might possibly be due to random variation. It is not unrealistic that a normally distributed random variable with expectation 0 and a standard deviation of 2.6 may attain values around −5. The chance of obtaining a value that deviates at least 1.9 times the standard deviation from the expectation is about 6%.


Figure 7.7 Illustration of the estimated effects related to a normal distribution with expectation 0 and standard deviation 2.6, i.e. the distribution the estimated effects would have if no factor is active. This kind of distribution is called a reference distribution.


Using a full factorial design, it has proved possible to estimate the main effects of the factors with maximum precision. It was also possible to identify interaction effects. With a one-factor-at-a-time experiment this would not have been possible. More tests would have been necessary in order to reach the same accuracy when estimating the main effects, and it would not have been possible to establish any interaction effects.

Analysis using probability plotting

To analyse the uncertainty in the example with cracks on springs we could use properties of the binomial distribution to estimate the uncertainty of the measurements in the cube corners and thereby the uncertainty of our estimates of the effects. This is a very special situation. In most cases we lack knowledge about the uncertainty of the measurements in the particular runs and accordingly also of the estimated effects. We can then use the fact that our estimates of the effects come from a distribution which is roughly a normal distribution⁷, and use a probability paper for the normal distribution in the way illustrated in Figure 7.8 to estimate the uncertainty. We order the estimates of the effects according to size, i.e. −5.0 ≤ 0.0 ≤ ... ≤ 23, and plot the j:th estimate in size against the mean rank⁸ j/(n+1), i.e. −5.0 against 1/8, 0.0 against 2/8, ..., 23.0 against 7/8. We know that the estimates of those effects that do not have any influence on the number of cracks are observations from an approximately normal distribution with expectation 0. Therefore, we can estimate the corresponding distribution by drawing a line on the probability paper which passes through the point (0, 50%) and corresponds as well as possible to the middle part of the plotted points⁹. From that line the standard deviation of the reference distribution can be estimated. Then the effects which might have influence on the number of cracks can be identified. For more details, see Box, Hunter & Hunter (1978).

⁷ This is due to the Central Limit Theorem.
⁸ This plotting is similar to that on a Weibull paper discussed in Section 6.6.1.
⁹ The reason why the points in the middle are given more importance is that these come from the reference distribution with higher probability than the "edge points".



Figure 7.8 Analysis of the result from the example with crack initiation, on a probability paper for the normal distribution. We get the estimate of the standard deviation of the reference distribution as [4 − (−2)]/3 = 2.0. This can be compared with the largest possible value of 2.6, which we used in the analysis based on the binomial distribution. The figure illustrates that it can sometimes be difficult to fit a line to the plotted points. This difficulty usually decreases when the number of plotted points increases, i.e. when more factors are studied than in this example.
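The plotting positions used above are easy to compute. The following Python sketch (an illustration added here, not part of the original text) pairs the ordered effect estimates with their mean ranks j/(n+1) and the corresponding normal quantiles; plotting the estimates against the quantiles gives the straight-line picture of Figure 7.8.

```python
# Mean-rank plotting positions for the seven effect estimates of Table 7.3.
from statistics import NormalDist

effects = {"S": 23.0, "O": 1.5, "C": -5.0, "SxO": 10.0,
           "SxC": 1.5, "OxC": 0.0, "SxOxC": 0.5}

ordered = sorted(effects.items(), key=lambda kv: kv[1])
n = len(ordered)
for j, (name, est) in enumerate(ordered, start=1):
    p = j / (n + 1)                   # mean-rank plotting position
    q = NormalDist().inv_cdf(p)       # corresponding standard normal quantile
    print(f"{name:6s} estimate {est:6.1f}  plotted against {p:.3f} "
          f"(quantile {q:+.2f})")
```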


7.4 Fractional Factorial Designs

If it is considered that interaction effects can be disregarded, it is possible to use fewer runs than in a full factorial design. A full factorial design with two factors on two levels consists of four runs; see Table 7.4.

Table 7.4 Design matrix of a full factorial design with two factors on two levels.

        Factor
Run   A   B   A×B
1     −   −    +
2     +   −    −
3     −   +    −
4     +   +    +

If we are fairly certain that no interaction exists between the factors A and B, we can let a third factor, C, which we want to examine, vary according to the levels in the interaction column A×B, thus examining three factors in four runs. The design matrix for this fractional factorial design is illustrated in Table 7.5. See also the illustration in Figure 7.9.

Table 7.5 Design matrix of a fractional factorial design with three factors on two levels.

        Factor
Run   A   B   C
1     −   −   +
2     +   −   −
3     −   +   −
4     +   +   +

Note that if there is still an interaction between A and B, it would, according to the earlier discussion, be estimated using the same sequence of "+" and "−" as is now used for the factor C. This means that it is not possible to distinguish between the effect of factor C and the interaction between A and B.


The factor C and the interaction A×B are said to be aliased. Other effects will also be impossible to separate. For instance, the effect of A is mixed with the interaction effect between B and C. If, for instance, an estimate is large, it is not possible to know whether it is the main effect C that is active, or the interaction between the two other factors A and B. In order to determine this, the number of runs has to be extended, resulting in a full factorial design.

Figure 7.9 Illustration of a fractional factorial design with three factors. Using four runs, three main effects can be estimated. Note that if one of the factors is disregarded, the experiment will in this case be complete in the other two. In the figure to the right, factor C has been disregarded.

Often a large number of factors could affect the result. A fractional factorial design can then be especially effective at the beginning of an investigation, when some of the factors have to be screened out. Using the test results and estimates of the effects of the different factors, possibly mixed with interaction effects, it is possible to eliminate factors that are less "interesting". An example of an experimental design for seven factors in eight runs is illustrated in Table 7.6. Here we get aliases between, for instance, A×B and D, A×C and E, B×C and F, and between A×B×C and G. If we believe that there are interaction effects between two of the factors, but not between all of them, we can instead investigate only a fourth factor by using the column A×B×C, and thus study four factors in eight runs.


Table 7.6 A design matrix intended for a fractional factorial design with seven factors studied in eight runs.

                  Factor
Run   A   B   C   D   E   F   G
1     −   −   −   +   +   +   −
2     +   −   −   −   −   +   +
3     −   +   −   −   +   −   +
4     +   +   −   +   −   −   −
5     −   −   +   +   −   −   +
6     +   −   +   −   +   −   −
7     −   +   +   −   −   +   −
8     +   +   +   +   +   +   +

The reason why the fractional factorial design has proved to be so useful is that often only a few single factors and interactions turn out to be of particular interest. This is an example of what is called the "80-20 rule", or, as stated by Joseph Juran: "the vital few and the trivial many". After the first fractional factorial design, it is possible to design a more complete experiment with only the most important factors involved. Here we have illustrated experimental designs with 2² = 4 and 2³ = 8 runs. In many cases, experiments are carried out in more runs, perhaps 2⁴ = 16, 2⁵ = 32 or 2⁶ = 64 runs. It is also possible to use three levels instead of two¹⁰. This is, of course, at least partly a matter of costs. Observe that factors that do not give any visible effects are not necessarily of no interest. On the contrary, from an economic point of view, they can be very interesting. By making a suitable choice it may be possible to choose the corresponding parameters in a way that is economically advantageous without affecting the function of the product or the process.

¹⁰ See, for instance, the discussion about robust design in Section 8.5.
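The design in Table 7.6 can be generated mechanically from a full 2³ design. The following Python sketch is our own illustration, assuming the generators D = A×B, E = A×C, F = B×C and G = A×B×C used in Table 7.6; it also shows how an alias appears as two identical sign columns.

```python
# Generate the 2^(7-4) fractional factorial design of Table 7.6 from a
# full 2^3 design in A, B, C plus the four generators.
from itertools import product

design = []
for c, b, a in product([-1, 1], repeat=3):      # A varies fastest
    design.append({"A": a, "B": b, "C": c,
                   "D": a * b, "E": a * c, "F": b * c, "G": a * b * c})

for row in design:
    print({k: ("+" if v > 0 else "-") for k, v in row.items()})

# The alias D = AxB means the two sign columns are identical, so their
# effects cannot be separated with these eight runs:
assert all(row["D"] == row["A"] * row["B"] for row in design)
```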


7.5 Studied Factors and Irrelevant Disturbing Factors

Before starting an experiment it is important to clarify what factors or parameters are interesting and what factors might disturb the outcome of the experiment. The factors that are to be studied depend on the situation. It is often advantageous to use guidance from previous experiments and other experience. In connection with quality improvement, such experience may have been obtained with the help of the seven improvement tools mentioned in Figure 1.7 and further described in Chapter 11. In particular, a cause-and-effect diagram might be very useful. By performing the runs in a random order it is possible to avoid systematic influence from irrelevant disturbing factors. However, it is often advantageous to try to keep such factors constant, or to make blocks of runs within which these disturbing factors are kept in fixed positions. The blocks are formed in such a way that the effect of irrelevant disturbing factors is eliminated. Hereby the estimates of the effects of the studied factors are not affected. If, for instance, only four batches a day can be manufactured and it has been decided to study three factors in eight runs¹¹, each in one batch, there is a risk that an irrelevant disturbing factor results in variations in the manufacturing process from one day to another. In this case blocks can be made with four runs per block, i.e. per day. The runs are then distributed in such a way that the block effects are levelled out. It is, for instance, possible to distribute the blocks so that the block effect is mixed with the interaction effect among all the three factors, since this interaction is the least probable one. The experiment can then be illustrated as in Figure 7.10, where round symbols describe the runs performed during the first day while the square ones symbolize the runs during the second day.

¹¹ As in the crack experiment with springs earlier.



Figure 7.10 Illustration of a block design of a full factorial design with three factors on two levels. Round symbols illustrate runs on day 1, squares stand for runs on day 2.
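The block assignment described above is easy to automate. The following Python sketch (our own illustration) splits the eight runs of a 2³ design into two day-blocks according to the sign of the three-factor interaction column, so that the block effect is confounded with the interaction considered least probable.

```python
# Split the eight runs of a 2^3 design into two day-blocks using the
# sign of the AxBxC interaction column.
from itertools import product

day1, day2 = [], []
for c, b, a in product([-1, 1], repeat=3):
    block = a * b * c                  # sign in the AxBxC column
    (day1 if block < 0 else day2).append((a, b, c))

print("Day 1 runs:", day1)
print("Day 2 runs:", day2)
```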

7.6 Conjoint Analysis

A different use of the ideas in design of experiments is Conjoint Analysis, whose aim is to better understand which product properties are most important to a customer. If a customer is requested only to grade the importance of different product properties, getting a fair picture may be difficult. Everything tends to be regarded as "very important". If, on the other hand, the customer is faced with a choice between various products with different characteristics, it will be easier to form a true picture of what the customer actually values most highly. In a Conjoint Analysis (the word is a contraction of Consider Jointly), the customer is asked to evaluate products whose properties have been chosen according to an experimental design. For each interesting property, a low and a high attribute level is chosen. Using the plan, a number of different product concepts are created, which prospective customers are asked to consider and value by ranking or marking on a graded scale. The preferences of each customer can then be estimated as effects. If a large number of customers are asked to perform the same evaluations, an estimate of the customer preferences can be made with reasonable accuracy. It is also possible to judge whether different customer categories have the same preferences. If that is the case, appropriate segmentations of the potential market can be performed.
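As a minimal sketch of the estimation idea (the attributes and scores are invented, and a real conjoint study would use many more concepts and respondents), the following Python snippet estimates one respondent's preferences as effects in a 2² design.

```python
# Minimal conjoint-analysis sketch: one respondent scores four product
# concepts arranged as a 2^2 factorial in two hypothetical attributes,
# and the preference effects are estimated exactly as in Section 7.3.
concepts = [
    # (price level, delivery-time level, respondent's score 1-10)
    (-1, -1, 7),   # low price, slow delivery
    (+1, -1, 3),   # high price, slow delivery
    (-1, +1, 9),   # low price, fast delivery
    (+1, +1, 5),   # high price, fast delivery
]

price_effect = sum(p * s for p, d, s in concepts) / 2
delivery_effect = sum(d * s for p, d, s in concepts) / 2

print("price effect:   ", price_effect)      # -4.0: raising the price hurts
print("delivery effect:", delivery_effect)   # +2.0: faster delivery helps
```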


Here it is worth noting that Conjoint Analysis is equally useful for services and goods. Applications in the field of education, for example, are illustrated in Gustafsson (1996), Ekdahl (1999) and Wiklund & Sandvik Wiklund (2000). Conjoint Analysis can also be a suitable tool for performing customer weightings of product properties in Quality Function Deployment (QFD).

7.7 Notes and References

As early as the 1920s, the British scientist Sir Ronald A. Fisher (1890-1962) developed the statistical design of experiments to handle agricultural experiments. There is a presentation of Fisher in Barnard (1990). Fisher's theories are described in his classic book Fisher (1935). Leonard H.C. Tippett (1902-1985) was one of the first to see the potential of industrial applications of design of experiments. Another early book on factorial designs and analyses is Brownlee (1946). Moreover, Cochran & Cox (1957) has become a classic in this field. In Duncan's book "Quality Control and Industrial Statistics", whose first edition was published in 1951, the design of experiments was an important issue. The well-written book by Box, Hunter & Hunter (1978) is an exposition of classic design of experiments for industrial use. Other readable books are Daniel (1976), Montgomery (2001) and Wu & Hamada (2000). Attempts at describing, by simple first- and second-degree functions, how a result variable depends on a number of factors are usually called Response Surface Methodology. This term was coined by Box & Wilson in the 1950s, and the idea has made great progress in recent years. Readable books are Box & Draper (1987) and Myers & Montgomery (1995). A technique that is, unfortunately, practically forgotten, EVOP (Evolutionary Operation), uses response surface methods to systematically vary process parameters in small steps, enabling continuous improvement of the process results; see Box & Draper (1967). These books are, however, somewhat less accessible to the uninitiated.


Figure 7.11 Sir Ronald Aylmer Fisher (1890-1962), left, and George E.P. Box (born 1919), right, both have played an important part in developing the design of experiments, which is an important element in the modern view of quality improvements.

A weakness in the classic design of experiments is that, primarily, it describes expected results and not their dispersion. In fact, the dispersion is supposed to be (or, by some transformation, become) equal under all experimental conditions. Genichi Taguchi (born 1924), from Japan, realized this discrepancy, and in the 1950s developed a variant of statistical experiment design where dispersion and variation are placed in focus. This is addressed in more detail in Chapter 8. Design of experiments has for several years been part of a research programme at Linköping University. Some dissertations produced in relation to that programme are Sandvik Wiklund (1997), Gustafsson (1996), Hynén (1997) and Ekdahl (1999). This research programme is now continued at Chalmers University of Technology. For a comprehensive work on Conjoint Analysis, see Gustafsson et al. (2001). How Conjoint Analysis can be used in market analyses is described in several works, of which Luce & Tukey (1964), Green & Rao (1971) and Green & Srinivasan (1978) are some examples. Gustafsson (1993) discusses how to apply Conjoint Analysis to identify customer requirements, for itemization in Quality Function Deployment. Wiklund & Sandvik Wiklund (2000) describe how Conjoint Analysis is used in developing courses in quality management.


8 Robust Design

Over the last few decades, Genichi Taguchi's view of quality and quality improvement has attracted great interest in Western industry. Sometimes he has been mentioned as one of the men behind the Japanese success in the quality field. Dr Genichi Taguchi developed his thoughts largely in his work to develop telecommunication products at Nippon Telephone and Telegraph Company during the 1950s and 1960s. Bell Laboratories (nowadays AT&T) was one of the first companies outside Japan to take an interest in Taguchi's ideas. At about the same time, in 1979, Taguchi's book "Off-line Quality Control" was published in English. During 1984, thanks to the Ford car company and others, a number of conferences and seminars were arranged where Taguchi's methods were discussed. The first European conference on Taguchi's ideas was held in London in 1988¹. Since then, discussions and interest have grown and the ideas have been presented in a number of articles in various journals. A large number of industrial applications have also been published.

8.1 Taguchi's Philosophy

The central idea in Taguchi's view of the quality concept is that he takes the use of the product as his starting point. According to Taguchi, product quality, or rather lack of quality, is the loss to society caused by the product after delivery². Taguchi restricts himself to the costs after delivery, but he takes a practically global view. Only

¹ The conference papers are presented in Bendell (1989).
² See Figure 1.2.


companies that take into consideration not only the consumer costs, but also the costs to society, will in the long run be competitive. The way Taguchi concentrates on the costs during product use brings LCC (Life Cycle Cost) to mind³. In LCC all the costs related to the life cycle of a product are considered, already in the design phase. Taguchi's thoughts, which also embrace environmental aspects, are therefore, in a way, in line with current discussions on a sustainable society.

Figure 8.1

Genichi Taguchi (born 1924) has had a great impact on the development of quality engineering over the last decades, not least with his ideas on robust design, and the significance this has brought to the design of experiments. The photo was taken by Bengt Klefsjö at the EOQ conference in Budapest in 2000.

Taguchi makes a clear distinction between product characteristics and quality characteristics. Product characteristics have to be chosen in order to compete within a certain market segment. Quality characteristics, on the other hand, are set by the deviations of the product characteristics from the ideal in the chosen market segment. Poor quality and variation of the product characteristics are intimately connected. Every unit of a kind of product is exposed to disturbances of all kinds. These disturbances can affect product quality and thus contribute to its variation. If the product characteristics vary appreciably, poor quality is indicated⁴.

³ See the comments in Section 6.1.


The disturbing factors, also called noise factors, can be of different kinds, from variations in the manufacturing process to environmental stress and wear. As it is impossible to eliminate the disturbances completely, Taguchi instead comes to the conclusion that the design has to be robust, that is, insensitive to the disturbances to which the product might be exposed. How this aim is to be reached is a central element in Taguchi's quality philosophy. Taguchi provides methodologies for quality control already during the product and process development phases. This is probably the reason why Taguchi's quality philosophy has received so much attention. Many people have been talking about the necessity of preventing quality problems right from the product development stage. Not very many methodologies, however, have been presented for how this should be done, reliability and design review activities excluded. Taguchi's quality philosophy, combined with the ideas of Quality Function Deployment presented in Chapter 5, is thus an essential means to achieve efficient quality development.

8.2 The Design Process

Taguchi breaks down the quality activities connected with product design and manufacture into two stages, which he calls on-line quality control and off-line quality control. Taguchi's off-line quality control is related to product and process development, while the on-line quality control involves production and customer relations. Taguchi identifies three phases during the design process. These are:

• system design
• parameter design
• tolerance design.

⁴ This is exemplified by Sony's manufacture of television sets, in Section 8.4.


8.2.1 System Design

In the process of system design, the actual frame of the product is set, taking into consideration the needs of the customers and the manufacturability of the product. Deep knowledge about the needs of the customers and the manufacturing possibilities is thus a basic requirement for system design. The final result of the system design is a prototype design which can satisfy the customer needs concerning the function, provided it is not exposed to disturbances. However, it is not certain that the final product will fulfil the requirements of the customers, since variation, disturbances and other weaknesses during manufacturing may influence customer satisfaction. Variations in use and different types of wear can also lead to customer dissatisfaction with a product.

8.2.2 Parameter Design

A design is said to be robust if it is insensitive to various disturbances, both in the manufacturing process and during use. If, for instance, it is necessary to have a hole in a heavily loaded part of a structure, the aim is to keep as large a distance as possible between the edges. This should be done in order to reduce the sensitivity to inaccuracy in the manufacturing process and to load variations experienced in use. The aim is to make the design robust when deciding the parameter values. The characteristics of a product often depend non-linearly on the design parameters. The output voltage of a transistor depends, for instance, non-linearly on the transistor amplification; see Figure 8.2. By choosing the amplification x₁, with the nominal output voltage y₁, instead of x₀ and y₀, respectively, see Figure 8.2, the sensitivity to variations in the amplification is reduced. By adding a resistor, the nominal output voltage y₀ can be achieved without affecting the sensitivity to variation in the amplification. In this way a robust design has been achieved.


Figure 8.2 It is often possible to choose parameter values in order to make the design more or less sensitive to disturbances. In this figure x can illustrate the amplification of a transistor and y its output voltage. By choosing a nominal amplification x₁ instead of x₀, the output voltage is much less affected by the spread of the actual amplification. The level can then be controlled towards the target value y₀ with the help of a resistor.

Generally, the situation is not as simple as in the above example with the transistor. Usually, the relation between product characteristics and design parameters is not known. Instead, information about non-linearities and influencing disturbances has to be obtained by using design of experiments⁵. Especially, interactions between design factors and disturbing factors can be utilized to decrease the product's sensitivity to noise variation.
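The mechanism in Figure 8.2 can be imitated in a few lines of code. In the following Python sketch the response curve and all numbers are invented; the point is only that the same spread in the parameter x produces a much smaller spread in the output y in a flat region of the curve.

```python
# Illustration of robustness through non-linearity, in the spirit of the
# transistor example: the curve and the numbers are invented.
import math, random, statistics

random.seed(2)

def output(x):
    # A saturating (flattening) response curve.
    return 100 * (1 - math.exp(-x / 20))

def spread_at(x_nominal, sd_x=2.0, n=10_000):
    """Simulated standard deviation of y for a given nominal x."""
    return statistics.stdev(output(random.gauss(x_nominal, sd_x))
                            for _ in range(n))

print("sd of y at x0 = 10:", round(spread_at(10), 2))  # steep part: large spread
print("sd of y at x1 = 60:", round(spread_at(60), 2))  # flat part: small spread
```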

8.2.3 Tolerance Design

In parameter design the target values for the design parameters are set. During manufacture, the aim is to come as close as possible to these target values. As there is always a variation in the manufacturing process, a tolerance interval has to be given.

⁵ More is needed for that than what was discussed in Chapter 7.


As far as Taguchi's quality philosophy is concerned, the tolerance interval must not, in connection with manufacturing, be interpreted as a permission to be anywhere within the tolerance interval. Even if the variation of the process is reduced, the aim must still be to reach the target value. It is not permissible to move closer to one of the tolerance limits even if the manufacturer would profit from this through lower material costs. The total sum of the customer costs and the manufacturing costs should be considered when selecting the parameters. The tolerance limits should be selected by balancing the loss to society due to deviations from the target value against the cost to the producer of making an adjustment when an out-of-tolerance situation arises.

8.3 Robust Design

Every individual unit of a certain product is exposed to a number of disturbing factors during its entire life. These disturbances are deviations from what can be considered normal. They can be anything from the material of which the product is manufactured to a badly performed repair. Every factor that can make a product characteristic deviate from its target value should be considered a disturbing factor. Taguchi divides the disturbing factors into the following groups:

• outer disturbances, like variations of temperature, voltage fluctuation and other environmental factors during usage
• inner disturbances, like wear, tear and deterioration within the individual unit, due to its operation
• manufacturing variations, i.e. deviations of the individual unit from the set target values due to manufacturing. This gives a variation between units manufactured under the same specification.

Typical of a robust design is that even if an individual unit is exposed to disturbances of the above mentioned kinds, its important characteristics still do not vary. Design of experiments is, as mentioned earlier, an important means for finding the information necessary to achieve this.


Not only the product design, but also the design of the manufacturing process is of major importance as far as the final product quality is concerned. The same methods as for the product design have to be applied when designing the manufacturing process. However, some new elements have to be added. An important one deals with the possibility of controlling the manufacturing process. Dealing further with these issues would, however, lead us too far.

8.4 The Loss Function

8.4.1 Taguchi's Loss Function

The loss to society is important to Taguchi. He cannot accept the traditional view, that as long as the parameter lies within the tolerance limits the loss to society is zero, and as soon as the parameter value has exceeded one of the tolerance limits the financial loss is large. For Taguchi, every deviation from the target value means a loss which grows as the deviation increases; see Figure 8.3.

Figure 8.3 (a) Traditionally, a loss is considered to arise only when the parameter value is outside one of the tolerance limits. (b) Taguchi is of the opinion that every deviation from the target value causes a loss that grows with the deviation from the target value.


Taguchi also wants to quantify the financial loss when a parameter value deviates from its target value. Like Carl Friedrich Gauss (1777-1855) and several statisticians after him, he uses a squared loss function as an acceptable approximation6. How this may be used is illustrated in Figure 8.4.

Figure 8.4 Illustration of Taguchi's loss function. Taguchi uses a squared loss function L(y) = A(y − τ)²/Δ², where A equals the customer's cost if the deviation from the target value τ equals Δ. The value Δ is chosen so that about half of the customers choose to repair the product at this deviation. Every deviation from the target value means, according to Taguchi, that there is a cost. The larger this deviation is, the larger the average cost.

In connection with tolerance design, Taguchi wants to take into consideration the customer's costs as well as the manufacturer's costs. Every deviation from a parameter target value causes a customer loss, whereas there may be costs for the manufacturer for reducing the variation of the manufacturing process. For Taguchi it is obvious that it is the sum of the customer costs and the manufacturer costs that has to be minimized. An example in Figure 8.5 illustrates Taguchi's point of view.
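As a small numerical illustration (not from the book), the quadratic loss function is easy to evaluate once A and Δ are known. The following is a minimal sketch in Python, with hypothetical values of the target value, A and Δ:

```python
def taguchi_loss(y, target, A, delta):
    """Quadratic loss L(y) = A * (y - target)**2 / delta**2, where A is the
    customer's cost when the deviation from the target equals delta."""
    return A * (y - target) ** 2 / delta ** 2

# Hypothetical numbers: target 10.0 mm, customer cost A = 100 at deviation delta = 0.5 mm
for y in (10.0, 10.1, 10.25, 10.5):
    print(f"y = {y}: loss = {taguchi_loss(y, target=10.0, A=100, delta=0.5):.0f}")
# The loss grows quadratically: 0, 4, 25 and 100, respectively.
```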

8.4.2 Sony's television sets

The traditional view, that the most important thing is to keep a parameter value within tolerance limits, is often inadequate. This is aptly illustrated by the following example from Phadke (1989).

6 This is a reasonable approximation, as the first non-vanishing term in a Taylor expansion around y0 of a symmetrical function L(y), with L(y0) = 0, equals k(y − y0)².


Figure 8.5 This illustration shows how Taguchi suggests the tolerance limits should be chosen. First the loss function is chosen as in Figure 8.4. Then the tolerance limits are set in such a way that the producer's cost B is balanced against the customer's loss. We get the condition B = Aδ²/Δ², from which we obtain δ = Δ√(B/A) and the tolerance limits τ ± δ.

By the end of the 1970s, American consumers preferred television sets produced at the Sony plant in Japan to those manufactured at Sony's plant in the US. Both plants used the same design and tolerance limits in production, however, so how could the customers experience quality differences?

Of the sets transported to the United States from Sony's plant in Japan, some 0.3% had a colour intensity outside the tolerance limits, while virtually none of those produced in the US were outside the limits. Thus, the difference could not be explained by the number of defective units. An investigation showed that the colour intensity in the sets produced at the two plants varied as shown in Figure 8.6, where m is the target value and the tolerance limits were set to m ± 5. The distribution of colour intensity at the Sony plant in Japan was essentially normally distributed around the target value, with a reasonably small standard deviation. At the US plant, on the other hand, the distribution of colour intensity was considerably more widely spread within the tolerance area. The difference was thus that a parameter value near the target value gave a colour intensity experienced as considerably better, and the Japanese plant had a much larger percentage of sets near the target value. In short, Sony USA focused on fulfilling the tolerance limits, while Sony Japan endeavoured to reach the target value.


Figure 8.6 The distribution of colour intensity in television sets produced at the Sony plants in Japan and the US in the mid-70s. (From Phadke, 1989, but originally from the Japanese paper The Asahi, April 17, 1979. See also Sullivan, 1984.)

8.4.3 An illustration from Saab

A simple way to determine a loss function, practised at General Motors, was introduced by Richard Eichler, Saab Automobil7. If we are interested in the loss function for a certain parameter, we can ask a number of customers, perhaps at a demonstration, what constitutes an unsatisfactorily low value and an unsatisfactorily high value for the parameter in question. Then, by adding the number of unsatisfied customers for each parameter value, it is possible to form a picture of the loss function; see Figure 8.7.

Figure 8.7 Areas where customers feel unsatisfied add up to something that in many cases can be approximated by a squared loss function.

7 In a seminar at the Industrial Statistics Section of the Swedish Society of Statisticians, in the spring of 2001.


Assuming that a parameter is measured and the measurements are stored for all individual units produced, it is easy to note the number of claims at each given parameter value. Such a situation is illustrated in Figure 8.8, where the steering system of a sports car was examined. The investigation dealt with how much the car pulled to the right or to the left. The distribution of the measured parameter after production, and the drawn-up specification limits, are shown to the left. The right-hand figure illustrates the distribution of customer complaints. A quadratic approximation corresponds surprisingly well with the customer reactions. The relative percentage of customer complaints is not smallest where the designer thought it would be, but at a considerably larger angle. An adjustment of the target value and the specification limits solved the problem.

Figure 8.8 An illustration of a squared loss function from empirical material when studying how much a certain sports car pulled to the right or to the left on a smooth road.

8.5 Design of Experiments

8.5.1 Experiment plans with design and disturbance parameters

Perhaps the most important message in Taguchi's quality philosophy is to be found in design of experiments. In Japan, millions of well planned experiments are performed each year, to gain knowledge about the way the design parameters affect the product characteristics. In too many Western industries, however, only simple

tests of a verifying kind are still used. When test series are actually performed, they are often only one-factor-at-a-time experiments. As shown in Chapter 7, this is a tremendous waste of money and information. The verifying tests do not generate new information, and one-factor-at-a-time experiments are often not particularly efficient, but an inadequate way of collecting information. By using statistical design of experiments, it is possible to obtain much more information at a lower cost.

Those experiments that have been carried out in industry have focused on how the design parameters affect the level of the test results. Often, it is assumed that the variation of the experimental results is constant for different combinations of design parameters. However, Taguchi's prime interest is variation. In order to create a robust design, a combination of design parameter values has to be found, to provide the desired product characteristics with little variation, even when disturbing factors are allowed to affect the result.

Taguchi is in favour of experiments where every factor studied, i.e. design parameters and disturbing factors, is allowed to vary on two or three levels. Combinations of design parameter values are chosen according to a fractional factorial design; see Figure 8.9.

          Parameter no.
Run no.    1    2    3    4
   1      -1   -1   -1   -1
   2      -1    0    0    0
   3      -1    1    1    1
   4       0   -1    0    1
   5       0    0    1   -1
   6       0    1   -1    0
   7       1   -1    1    0
   8       1    0   -1    1
   9       1    1    0   -1

Figure 8.9 Example of an experimental design. The effects of four design parameters are examined. Every design parameter can be chosen at one of three levels (-1 = "low", 0 = "normal", 1 = "high"). In run number 1, for example, all the four design parameters are chosen at their low levels.


For each of these combinations, the disturbing factors are chosen in the same way. The same combinations of the disturbing factor levels are represented in all the examined combinations of the design parameter levels; see Figure 8.10.

          Noise factor
Run no.    1    2    3
   1      -1   -1   -1
   2      -1    1    1
   3       1   -1    1
   4       1    1   -1

Figure 8.10 For every run according to Figure 8.9 the three disturbing factors are varied according to this fractional factorial design. The matrix illustrates how the effect of three disturbing factors can be studied using four runs in each series. Every disturbing factor is chosen at a low level (marked "-1") or at a high level (marked "1"). In Chapter 7 we used the notations "-" and "+" for low and high level, respectively.

Using the results from the experiment, it is possible to estimate what average level different parameter combinations result in, taking into account the variation of the disturbing factors. It is also possible to get information about how sensitive different choices of design parameter combinations are to variations of the disturbing factors. It is then possible to choose levels of the design parameters for which the sensitivity to disturbances is minimized and where, on average, the target value is reached. Design parameters that affect neither sensitivity nor level can be chosen at their most economical level; see Figure 8.11.

8.5.2 Signal-to-noise ratio

Taguchi suggests that signal-to-noise ratios, or S/N-ratios, should be used to find a robust design. The form of such an S/N-ratio varies, depending on whether the target value is to be as high as possible, as close to zero as possible, or as close to a given numerical value as possible. The examples illustrated in Figures 8.9 and 8.10 can have the S/N-ratio

S/N = lg(ȳ²/s²)

i.e. the values8

lg(ȳ1²/s1²), lg(ȳ2²/s2²), ..., lg(ȳ9²/s9²)

are compared for the various runs. The use of these ratios is, however, questionable. The argument in their favour is that the spread is often proportional to the level. This is the case when multiplicative rather than additive models apply. But then the logarithm should instead be applied to the response variable, whereby an additive model is achieved; see for example Box, Hunter & Hunter (1978) and Wu & Hamada (2000).

[Figure 8.11 combines the design matrix of Figure 8.9 with the noise-factor matrix of Figure 8.10: for each of the nine runs, the selected levels of the three noise factors give four observed individual outcomes yi1, ..., yi4, which are summarized as (ȳi, si).]

Figure 8.11 From every test series, i.e. a certain combination of the design parameters combined with a test over the three noise factors, an average level ȳi and a variation measure (or sensitivity measure) si, i = 1, 2, ..., 9, can be evaluated. These measures of variation si indicate how large an influence the different disturbing factors have for the studied combination of levels of the design parameters. The result can then be used to choose the parameter combination which gives the most robust design.
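To make the calculations behind Figure 8.11 concrete, the following is a minimal sketch of a crossed-array analysis in Python. The inner array is that of Figure 8.9; the outcome values are made up purely for illustration, and the S/N-ratio is the lg(ȳ²/s²) form given above.

```python
import math

# Inner array (Figure 8.9): nine runs over four design parameters
inner = [(-1, -1, -1, -1), (-1, 0, 0, 0), (-1, 1, 1, 1),
         (0, -1, 0, 1), (0, 0, 1, -1), (0, 1, -1, 0),
         (1, -1, 1, 0), (1, 0, -1, 1), (1, 1, 0, -1)]

# Hypothetical outcomes y_i1..y_i4, one per noise combination of Figure 8.10
outcomes = [[3.1, 3.4, 3.0, 3.3], [3.5, 3.6, 3.4, 3.5], [3.9, 3.2, 3.6, 3.5],
            [3.3, 3.3, 3.4, 3.2], [3.6, 3.5, 3.6, 3.5], [3.8, 3.1, 3.4, 3.3],
            [3.5, 3.5, 3.6, 3.4], [3.2, 3.9, 3.5, 3.4], [3.6, 3.6, 3.5, 3.7]]

for i, (params, ys) in enumerate(zip(inner, outcomes), start=1):
    n = len(ys)
    ybar = sum(ys) / n
    s = math.sqrt(sum((y - ybar) ** 2 for y in ys) / (n - 1))
    sn = math.log10(ybar ** 2 / s ** 2)   # S/N = lg(ybar^2 / s^2)
    print(f"run {i}: {params}  ybar = {ybar:.2f}  s = {s:.3f}  S/N = {sn:.2f}")

# A robust choice is a run (parameter combination) with small s, i.e. large S/N,
# whose level ybar can afterwards be adjusted towards the target value.
```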

8 lg stands for the logarithm with base ten.


8.5.3 A summary of Taguchi's philosophy

Taguchi's views on robust design and how to achieve it are illustrated in the digest shown in Figure 8.12.

Taguchi's Quality Philosophy in Seven Points

1. Lack of quality is the total cost the product causes society after delivery to the customer.
2. Continuous quality improvements and cost reductions are necessary for a company to stay in business.
3. A quality improvement program has to aim continuously at reducing the deviation of the product performance characteristics from their target values.
4. The customer loss due to the product performance variations can often be regarded as increasing as the square of the deviation from the target values.
5. The product quality is to a large extent determined by the design and manufacturing processes.
6. Often a non-linear relation exists between the design parameters and the product characteristics. This can be used to reduce the sensitivity of the product characteristics to disturbances. The same applies to production processes.
7. Design of experiments can be used to identify the parameter combinations that reduce the variation of the product characteristics.

Figure 8.12 Taguchi's quality philosophy summarized in seven points. (After Sullivan, 1984. © American Society for Quality Control. Reprinted with Permission.)

8.6 Notes and References

There is no doubt that Taguchi's point of view regarding design of experiments and robust design methodology has succeeded in attracting industrial attention to very important areas. His way of looking at robust design has also created many new ideas. However, his methods are not always well phrased. Sometimes they are even unsuitable.

Taguchi has written a number of books where he develops his ideas; see for instance Taguchi & Wu (1979), Taguchi (1986) and Taguchi, Elsayed & Hsiang (1989). A complete issue of the journal Quality and Reliability Engineering (1988:2) also deals with Taguchi's ideas and methods. Taguchi et al. (2001) describes a new methodology, using multivariate information and Mahalanobis distances, to adapt Taguchi's way of measuring, via S/N-ratios, the certainty of different types of forecasts (concerning anything from health care to mechanical and electrical industrial applications). This is an interesting concept, but there are other ways of achieving the same result more efficiently; see Engelhardt (2001).

One of the persons who has best managed to explain Taguchi's ideas is R. N. Kackar from Bell Laboratories; see Kackar (1985) and Kackar & Shoemaker (1986). Madhav Phadke, who was Taguchi's host when he visited AT&T Bell Laboratories, and who used his methods successfully to solve a problem that no one had until then managed to solve, describes the ideas behind Taguchi's approach very well in his book Phadke (1989). Industrial design of experiments and robust design are also dealt with thoroughly in Ross (1988) and Lochner & Matar (1990). A number of industrial applications are presented in Bendell et al. (1989). A book discussing design of experiments using the Taguchi approach is Roy (2001). Some critical judgements of Taguchi's ideas are to be found in, for instance, Box, Bisgaard & Fung (1988). A review of Taguchi's books has been published by Bisgaard (1989) in Technometrics.

An international research project within the IMS programme (Intelligent Manufacturing Systems) is devoted to robust design engineering. A European part of this project was launched during the fall of 2002.


Part III Production for Quality

The pivotal quality principles in production are to prevent, to improve and to control. In this part of the book we will discuss some methodologies and tools for this work. We will look at variation and its causes, and how to improve and control a process using Statistical Process Control (SPC). We will discuss the seven improvement tools, and control charts in particular, as these are important tools in the improvement work. The capability concept, which deals with the ability of a process — manufacturing or other — to produce results that fulfil the set requirements, will be discussed. Other topics addressed in this section are supplier partnerships and some notions related to statistical acceptance sampling, a field of diminishing importance.

In every process there is variation, which often creates large losses. This holds for production, administration or any other type of process. This variation can be due to vague routines, human dissimilarities or insufficient information. Other causes of variation could be changes or disruptions in the process, such as tool wear, environmental influence or human errors. Alternatively, it could be caused by varying raw material, components and subsystems from suppliers.

To diminish the variation in material deliveries it is important to maintain a close co-operation with the suppliers of the process. A clear tendency today is that organisations share the responsibility for developing and producing subsystems with their suppliers. Their aim is to elucidate the whole chain, from "ear to loaf", and to create a win-win relation between supplier and customer through a wider outlook on systems. In many cases, suppliers have located their production near their customer's production facilities.

To achieve an effective production it is necessary to determine, before the production starts, whether a process has the necessary ability to produce what is required. The term we use for this is that the processes must be capable. To obtain a good capability the variation in the process outcome must not be too great in relation to the agreed tolerance limits, while the long term mean value must be fairly constant for the variable of current interest. Demands for capability are becoming more common, and are stipulated in various types of international standards, for example ISO 9000, QS 9000 and ISO/TS 16949. It is also important to make sure in the production phase that no factors are added that may increase the variation, or

move the mean value away from the established target value. It is necessary to know how to differentiate random variation from actual changes, to be able to determine whether a process is stable or not, and this requires knowledge and understanding of variation. Is the reduced sales result of a particular month due to a drop in customer potential, or just a random variation, as sales are not normally the same month by month? Is the increased number of defects in the turning caused by misalignment or faulty material, or merely a case of minor random changes? Is the change in mean temperature for a given month a result of environmental changes or just evidence of normal yearly temperature variation? It is essential to be able to see if there are discernible reasons for changes, as an adjustment based on random variation will increase the total variation.

When the variation in a process has increased, or needs to be improved for other reasons, the cause of the problem must first be identified and then eliminated. This requires relevant tools, to help identify the causes and then to structure and analyse the gathered material, which could be both numerical and verbal information. A couple of toolboxes containing efficient tools have been created for this purpose. One of these, compiled by Kaoru Ishikawa, is the seven QC-tools, or the seven improvement tools, which have been used systematically in the improvement work of Japanese companies for almost fifty years.

In this part of the book we will focus primarily on variation, causes of variation and how to determine when a process is stable, or when new variation factors have been added to the process. We will also discuss the efficient improvement tools compiled by Kaoru Ishikawa.

9 Statistical Process Control

In all situations in life we experience variations, whose causes we are often unable to identify. Kaoru Ishikawa, the Japanese quality expert, said that we live in "a world of dispersions". In some cases, this variation is a good thing - imagine, for example, a world of identical humans. Variation is, however, often a source of inconvenience and a driver of costs when discussing quality issues.

Examples of causes of variation in a manufacturing process can be the play in bearings or spindles, vibrations, varying lighting conditions, inhomogeneous materials, and varying temperature or humidity. In a service process, information uncertainties and individual differences are important sources of variation. Since in each situation there are often various causes of the variation, it can be hard to identify the contribution of an individual cause. If, on the other hand, we have a maladjusted machine, tool wear or defects in raw material lots, this cause may contribute so much to the variation that it becomes an assignable cause, i.e. it can be identified and eliminated. The other causes contributing to the variation are in general called common causes. Each one of these causes contributes just a little to the variation, but together the contribution may be significant.

The purpose of Statistical Process Control1 (SPC) is to find as many assignable causes of variation as possible and then eliminate them. When a stable process with small variation is achieved, the target is to maintain or, if possible, improve the process even further. In these cases it is often not possible to make improvements by eliminating sources of variation. Instead, a creative change in the process structure is needed.

1 The general view of Statistical Process Control has broadened in recent years, and the notion now also embraces improvement work, so a better term would perhaps be "Statistical Process Improvement". We have chosen to keep the established term, however, even if it is somewhat misleading.

The ideas behind Statistical Process Control were first developed with manufacturing processes in mind, but are today used increasingly for other purposes as well, including administrative and financial processes. Controlling and improving the processes of organisations is an essential element in the Six Sigma programme, as will be discussed in Section 23.4. Table 9.1 illustrates different epochs of process control, as interpreted by Beretta, the Italian arms manufacturer.

Table 9.1 The development of process control can be divided into seven epochs. The table illustrates the ratio between rework and the total work contribution during the first six epochs at the Italian arms manufacturer Beretta, founded in 1492. (After Jaikumar, 1988.) Here CIM stands for Computer Integrated Manufacturing. The seventh epoch is represented by the current striving for Six Sigma processes.

Epoch                                      Ratio between rework and
                                           the total work contribution

The British system (c. 1800)               0.8
The American system (c. 1850)              0.5
Taylorism (c. 1900)                        0.25
Statistical Process Control (c. 1930)      0.08
Numerical control (c. 1970)                0.02
CIM (c. 1980)                              0.005
Six Sigma (c. 2000)                        0.0000034

9.1 Variation

Assignable and random variation

The variation occurring from assignable causes is often called assignable variation2, whereas variation caused by common causes is called random variation3. There is, however, no clear distinction between these two types of causes. What is an assignable cause and what is a common cause depends on the information acquired from the process.

2 Another common term is "systematic variation", but this may wrongfully be associated with system-dependency.
3 Sometimes the terms "natural variation" or "noise" are used.

[Figure 9.1 shows three time plots: a process with assignable causes, a stable process, and a stable, more capable process.]

Figure 9.1 By eliminating assignable causes we get a process in statistical control, a stable process. By process changes we can make the process less sensitive to variation, thus increasing its capability.

When we have eliminated, or at least compensated for, the effect of the assignable causes, only the random variation remains in the process. As long as only this variation contributes to the dispersion, and no systematic variation occurs, we say that the process is in statistical control or that we have a stable process; see Figure 9.1. When the process is stable we can predict its future results. Shewhart (1931, p. 6) states this in the following way: "A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future". The limits are set by the natural, random variation which the common causes bring about.

The purpose of Statistical Process Control is, on the basis of data from the process, to

• identify assignable causes in order to eliminate them and create a stable, predictable process
• supervise the process when it is in statistical control so that no further assignable causes are introduced without the knowledge of the operator
• continuously give information from the process, so that new causes of variation can be identified as assignable and eliminated.


Statistical Process Control is a vital part of the continuous improvement work. Using information from the process, new causes of variation can be identified as assignable and eliminated, or at least compensated for. Thus, the variation of the process will decrease, the costs of quality defects will decrease and quality will be improved.

Often people do not adopt a statistical approach to the process. This leads to misunderstandings, as those who observe a random variation are misled into thinking that it is assignable. They then try to compensate for the variation in different ways. Instead, this results in increasing variation in the process4. Decisions are not based on facts, but merely on misguided ambition. By using principles from statistical process control this kind of overcontrol can be avoided. Deming uses the name tampering5 for overcontrol.

4 A very good illustration of this is Deming's famous experiment "The Red Beads", described in Deming (1986).
5 See Deming (1993, Chapter 9).

Addition of variation

As emphasized earlier, variation is an important source of poor quality. The causes of variation often interact additively. The standard deviations of the different causes should therefore be added quadratically to get the variance of the total variation. Suppose, for example, that we have five independent sources of variation and that their contributions are added. If the various contributions have the standard deviations 4, 2, 1, 1, 1, respectively, the resulting standard deviation will be

√(4² + 2² + 1² + 1² + 1²) ≈ 4.8

Now, if we are successful in identifying the second largest source of variation, and if we also manage to eliminate it completely, the resulting standard deviation will be

√(4² + 1² + 1² + 1²) ≈ 4.4

The standard deviation has then been reduced by less than 10%. If, instead, we manage to identify and eliminate the largest source of variation, this implies a much bigger reduction in the resulting standard deviation, which now equals

√(2² + 1² + 1² + 1²) ≈ 2.6

Figure 9.2 The total variation often depends on several more or less independent contributions. The variance of the total variation is then the sum of the variances of the different contributions.

The variation measured as the standard deviation has now been approximately halved. Consequently, it is vital that we devote ourselves to the right problem in our improvement work. Wasting a lot of energy on the second largest contributor to variation is not only less economical but may also be demoralizing: it will be difficult to see the result of the improvement efforts, and seeing the results of one's improvement work is crucial for the motivation to move on.
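The quadratic addition above is easy to verify numerically. A minimal sketch in Python, using the standard deviations of the example:

```python
from math import sqrt

def combined_sigma(sigmas):
    """Standard deviation of a sum of independent sources:
    variances add, so the standard deviations add quadratically."""
    return sqrt(sum(s ** 2 for s in sigmas))

print(combined_sigma([4, 2, 1, 1, 1]))   # all five sources: about 4.8
print(combined_sigma([4, 1, 1, 1]))      # second largest removed: about 4.4
print(combined_sigma([2, 1, 1, 1]))      # largest removed: about 2.6
```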

9.2 Process Improvement

When we are looking for assignable causes, it is important to tackle the problems systematically and accurately. There are often several problems or causes present. It is a matter of first tackling the problem that is the most serious. When that problem is solved we move on to the next. Figure 9.3 illustrates the improvement cycle: Plan — Do — Study — Act. The stages in that cycle are commented upon below. Another list with roughly the same contents can be found in Figure 9.4.

Figure 9.3 A cycle for solving problems in the continuous improvement work presented by Deming (see Deming, 1986, 1993). Deming speaks of the PDSA-cycle, short for "Plan-Do-Study-Act", but he often refers to this cycle as the Shewhart cycle after Walter A. Shewhart. In his early lectures in Japan Deming used "Check" instead of "Study", and the cycle was then called the PDCA-cycle; see Deming (1986).

Plan. When problems are detected the first thing we have to do is to establish the principal causes of the problem. Large problems have to be broken down into smaller, manageable ones. The decision concerning changes must be based on facts. That means that we have to look systematically for different plausible causes of the problem using, for example, the seven improvement tools (see Chapter 10). A cause-and-effect diagram can often give a hint as to the possible causes. Getting a group of people together, preferably with different backgrounds and skills, for a brainstorming session, where fantasy and ideas can flow freely without being criticized, is often productive. Other useful tools are FMEA (Failure Mode and Effects Analysis) and design of experiments6. After that we have to compile data so that we can detect causes of error and variation. In such cases, a histogram and other simple ways of illustrating statistical data, Pareto diagrams, stratification, and scatter plots, will be of great help. It is vital not to "overreact" in such a way that the solution of a problem becomes a costly exercise based on trial and error.

6 See Chapters 6 and 7, respectively.

Do. When an important cause of a problem is found, an improvement team is given the task of carrying through the appropriate steps. It is of great importance to make everyone involved fully aware of the problem and of the agreed improvement steps.

[Figure 9.4 shows an improvement cycle as a flow chart: identify project, appoint improvement team; (Plan) problem analysis, look for causes of the problem; (Do) take steps; (Study) measure and evaluate the results; (Act) make the improved quality level permanent, then return to identifying the next project.]

Figure 9.4 An improvement cycle with contents similar to that in Figure 9.3.


Study. When appropriate steps have been taken, we shall investigate the result to see if the implementation of the improvement program was actually successful. Again, several of the seven improvement tools, such as histograms, Pareto diagrams and stratification, are important and useful. When we are convinced that the steps taken have had a positive effect and that the quality level has been raised, we have to make sure that the new improved level is retained. This can sometimes be done by utilizing a control chart; see Chapter 11.

Act. All the time it is a matter of learning and gaining experience from the improvement process in order to avoid the same type of problem the next time. If the steps taken were successful, the new and better quality level should be made permanent. If we were not successful, we have to go through the cycle once more. It is also very important to analyse the entire cycle of problem solving once again, in order to learn and also improve the improvement process itself. Then we go on by moving to the next problem in the same process, or proceed to the next process, and repeat the improvement cycle once again.

[Figure 9.5 shows the cost of poor quality over time, divided into quality planning, quality control (during operations) and quality improvement: a sporadic spike rises above the original zone of quality control, chronic waste is reduced through quality improvement, and a new, lower zone of quality control is established.]

Figure 9.5 Illustration of "The Juran Trilogy". (From Juran, 1986. A similar figure is also presented in Shewhart, 1939.)


One of those who most systematically and persistently has advocated the use of statistical techniques for problem solving is Ishikawa; see Ishikawa (1982). Juran, see for instance Juran (1986), has also done a lot to spread the message of continuous quality improvement. "The Juran Trilogy": Planning — Control — Improvement, see Figure 9.5, plays an important part in his message. A quality improvement programme focusing on processes and reduced variation is Six Sigma, which will be addressed in Chapter 23 (see also Figure 13.12).


10 The Seven Improvement Tools

As a basis for improvement work, data are required, together with an analysis of these data. In Japan it was early realized that everyone in a company had to participate in the improvement work. This meant that the statistical tools which were to be used had to be fairly simple and yet efficient. For that reason seven tools were put together by Dr Kaoru Ishikawa, among others. This toolbox is called the seven improvement tools or the seven QC-tools, where QC means Quality Control. Since the beginning of the 1960s these tools have been taught to workers and foremen in Japanese industry1, who have used them systematically for problem solving.

In this chapter we will give a short description of these seven tools for quality improvement, namely

• data collection
• histograms
• Pareto charts
• cause-and-effect diagrams
• stratification
• scatter plots
• control charts

see also Figure 10.1. For a more complete presentation of the tools we refer to Ishikawa (1982), Magnusson et al. (2002) and Brassard & Ritter (1994). The set of tools included in the toolbox varies slightly between different descriptions. For example, sometimes flow charts, which in this book are discussed in Chapter 19, are also included; sometimes they are added as an eighth tool2.

1 The use of these in Japanese quality control circles is illustrated in Chapter 23.
2 See for example Magnusson et al. (2002).


Figure 10.1 The Seven Improvement Tools.

10.1 Data Collection

The collection of data is one of the most important steps in a programme for quality improvement. Having a substantial basis for decision-making is vital. It is, of course, also essential that the basis elucidates the topic in question. If incorrect or misleading data are collected, not even the most sophisticated methods will help in the analysis. From the very start we have to be aware of the purpose of the data collection.

• What is the quality problem?
• What facts are required to elucidate the problem?

Not until these questions have been answered is it possible to move on to collecting data.

[Figure 10.2 shows a check sheet where each measured dimension is marked with a cross, building up a frequency table over the classes 1.6 to 2.7 together with the tolerance limits.]

Figure 10.2 A check sheet for continuous follow-up of a manufacturing process. This type of check sheet is also called a frequency table. In the case illustrated it can be observed that the manufacturing process results in a large number of units with dimensions outside the upper tolerance limit.

When we are collecting data, we can design a table in which each observation is represented as it appears. Every new fact can be marked by a stroke or a cross. Sometimes it may be advisable to tick them off as in a check sheet. A few examples of check sheets are given in Figures 10.2 and 10.3. In Figure 10.2 we can easily see that the variation between the measured units' diameters is too large compared to the set tolerance limits. We can also see that the average value is too close to the upper tolerance limit.
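A frequency-table check sheet of the kind in Figure 10.2 is straightforward to mimic in software. The following is a minimal sketch; the measurements and the class width are made up for illustration:

```python
from collections import Counter

# Hypothetical measured dimensions (mm); class width 0.1 mm
measurements = [2.31, 2.44, 2.52, 2.38, 2.47, 2.55, 2.61, 2.42, 2.49, 2.58]

# Tally each observation into its class, as strokes on a check sheet
tally = Counter(int(m * 10) for m in measurements)  # class index = tenths of a mm

for k in sorted(tally):
    print(f"{k / 10:.1f}-{(k + 1) / 10:.1f} mm: {'x' * tally[k]}")
```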


[Figure 10.3 shows a check sheet where each observed defect is tallied with a stroke:]

Type          Number
scratches       32
cracks          23
incomplete      48
wrong shape      4
other            8
Total          115

Figure 10.3 An example of a check sheet intended for investigating what types of defects occur on a certain type of part.

Often the origin of the observations varies in some way. The measured units can, for example, be produced at different machines. It is then suitable to use different symbols or colours for observations with different origins; see also Figure 10.4. This is in fact a form of stratification which will be discussed in Section 10.5.

Figure 10.4 Often the origin of the observations varies in some way. The measured units can, for example, be produced at different machines. It is then suitable to use different symbols or colours for observations with different origins. See also Figure 10.14.


10.2 Histograms

Often we have large amounts of data. Then we cannot represent each observation in the figure. Instead we have to divide the measurement axis into different parts, classes, and let the number of values in each class be represented by a rectangle. The area of this rectangle is made proportional to the fraction of observations in the class; see Figure 10.5. In that way the sum of the areas of all these rectangles equals unity. This type of figure is called a frequency histogram.
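Since the classes may have unequal widths, as in Figure 10.5 (b), the rectangle heights should be chosen as densities, i.e. the fraction of observations divided by the class width, so that the areas sum to one. A minimal sketch with made-up failure times:

```python
# Hypothetical times to failure (hours) and unequal class limits
data = [820, 870, 910, 940, 980, 1010, 1050, 1120, 1260, 1430]
edges = [800, 900, 1000, 1100, 1500]  # the last class is wider than the others

n = len(data)
for lo, hi in zip(edges, edges[1:]):
    frac = sum(lo <= x < hi for x in data) / n
    density = frac / (hi - lo)   # rectangle height: area = fraction of observations
    print(f"{lo}-{hi}: fraction {frac:.2f}, height {density:.5f}")
# The areas (fractions) sum to 1, as required for a frequency histogram.
```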

Figure 10.5 (a) The histogram illustrates the outcome from measurements of the diameter of 30 shaft pivots. Each class has the same class range, i.e. the same distance between the upper and lower limits of the class. (b) The histogram illustrates times to failure for 50 electric bulbs. Note that the classes do not have the same range. (c) The histogram illustrates the number of errors per day within a certain manufacturing section over a total of 132 days. In this case each integer corresponds to one class.


Using the histogram we can illustrate in an excellent way how a product or process characteristic varies. Note that a histogram can very easily be obtained using a frequency table as a basis. The big difference is that the histogram generally describes relative frequencies and not numbers of observations. Tools similar in spirit to the histogram, and very useful for illustrating large amounts of data, are referred to as Exploratory Data Analysis, EDA. These tools were originally developed by John Tukey; see Tukey (1977).

Table 10.1 A data collection with 50 observations.

 7  18   8  19  33
10   6  35  19  18
32  12  16   5   7
17  14  31  21  11
21  25  11   8  13
29  27  15  13  25
13  33  22  17  15
 3  29   7  30  16
18  17  20  14  23
28  12   9  15  28

[Figure 10.6 shows the stem-and-leaf diagram, with split stems 0, 0, 1, 1, 2, 2, 3, 3 and row counts (1), (8), (10), (13), (6), (7), (4), (1), in total (50).]

Figure 10.6 Stem-and-leaf diagram of the material in Table 10.1. To the left of the stroke, i.e. the stem, in this case are the tens and to the right are the units. The top value is thus 3 and the bottom value 35. Last but one from the bottom are the values 30, 31, 32 and 33.


One example of a tool from EDA is the stem-and-leaf diagram, which can be described as a histogram where the numerical values are still to be found; see Figure 10.6. Another EDA tool for describing data and their dispersion is the box-plot; see Figure 10.7. In recent years the ideas from EDA have had a breakthrough in the work of process control. More about EDA can be found in Tukey (1977), for example.

Figure 10.7 A box-plot of the material in Table 10.1. Here Md stands for the median, Q1 has 25% of the observations to its left and Q3 has 75% of the observations to its left. The notations Xmax and Xmin stand for the largest and smallest numerical value, respectively.

It should be noted that the notations in a box-plot vary depending on the description. This should also be remembered when using computer programmes.
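The quantities shown in a box-plot are easy to compute. A minimal sketch using the data of Table 10.1 (as remarked above, quartile conventions differ between computer programmes, so other software may give slightly different Q1 and Q3):

```python
import statistics

data = [7, 18, 8, 19, 33, 10, 6, 35, 19, 18, 32, 12, 16, 5, 7,
        17, 14, 31, 21, 11, 21, 25, 11, 8, 13, 29, 27, 15, 13, 25,
        13, 33, 22, 17, 15, 3, 29, 7, 30, 16, 18, 17, 20, 14, 23,
        28, 12, 9, 15, 28]

md = statistics.median(data)
q1, _, q3 = statistics.quantiles(data, n=4)   # three cut points: Q1, Md, Q3
print(f"Xmin = {min(data)}, Q1 = {q1}, Md = {md}, Q3 = {q3}, Xmax = {max(data)}")
```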

10.3 Pareto Charts

There are usually several problems present in connection with a program for quality improvement. In general only one problem can be solved at a time. The Pareto chart (named by Juran after the Italian economist and statistician Vilfredo Pareto, 1848-1923) is then of great help when deciding in which order the problems should be addressed3. The data which were collected using, for instance, the check sheet in Figure 10.3 can be illustrated as in Table 10.2.

3 The Pareto chart was introduced by Juran in the first issue of the Quality Control Handbook in 1951, in a figure showing the distribution of quality losses.

Table 10.2 Another way of presenting the data in Figure 10.3.

Cause         Number of defectives   Percentage of defectives
scratches              32                      28
cracks                 23                      20
incomplete             48                      42
wrong shape             4                       3
other                   8                       7
sum                   115                     100

In this table there is certainly all the information available to us at present, but it is not presented in a particularly lucid way. Therefore we will illustrate the data graphically. In the Pareto chart in Figure 10.8 we get a very clear picture of the frequency of the various error types.

Figure 10.8 A Pareto chart based on the material in Figure 10.3.

Note that in a Pareto chart:

• Each type of defect is illustrated by a rectangle whose height equals the number of defectives on the left-hand scale. Sometimes the accumulated percentage of defectives is also shown on the right-hand scale.
• The order between the different types of defects is such that the one with the largest frequency is placed furthest to the left. After that the number of defectives decreases to the right. The smallest columns furthest to the right can possibly be put together in one group, "others", if each one of them contributes too little.
• A line illustrating the cumulative number of defectives or the fraction of defectives is often drawn. However, this line does not appear in all Pareto charts.
• It is important always to state where and when the data have been collected.

On the basis of the Pareto chart the most serious problem is made clearly visible. When that problem is solved we can move on to the next. In this way each problem is focused on, one at a time. Often the Pareto chart shows that very few problems account for a large share of the errors or the poor-quality costs. Juran, therefore, speaks of "the vital few and the useful many"4. The so-called 80-20 rule, which is often found in the field of business economics, states the same thing5. It is important to emphasize that it is not only the total number of errors or complaints that determines what step to take. It is also possible to draw a Pareto chart based on the experienced consequence costs of the different types of defects.

4 Juran first spoke of "the vital few and the trivial many", but later he considered this expression unpedagogical. In 1905 M. O. Lorenz published a graphic method of analysing the distribution of welfare in JASA, the Journal of the American Statistical Association. This chart, nowadays called the Lorenz chart, was actually the basis of Pareto's discussions. Juran later admitted that he should have given Lorenz credit for the principle, rather than Pareto; see Juran (1960).
5 For example, one interpretation is that 80% of the revenues come from 20% of the customers.
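The ordering in a Pareto chart can be produced directly from a defect tally. A minimal sketch using the counts of Table 10.2:

```python
defects = {"scratches": 32, "cracks": 23, "incomplete": 48, "wrong shape": 4, "other": 8}

total = sum(defects.values())
cumulative = 0
# Sort with the most frequent defect type first, as in a Pareto chart
for cause, n in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    print(f"{cause:12s} {n:3d}  {100 * n / total:5.1f}%  "
          f"cumulative {100 * cumulative / total:5.1f}%")
```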

10.4 Cause-and-effect Diagrams

Once we have selected a quality problem, its root causes have to be found. Here a systematic analysis can be made using a cause-and-

effect diagram, which is also called a fishbone diagram or an Ishikawa diagram. This type of diagram was introduced for the first time by Dr Kaoru Ishikawa in 1943, in connection with a quality program at the Kawasaki Steel Works in Japan. Its construction resembles a simplified fault tree6. In the diagram we first roughly describe those types of causes that can possibly produce the observed quality problem. Then we concentrate on one of these roughly described causes and try to investigate it in more detail.

6 See Section 6.7.2.

Figure 10.9 A cause-and-effect diagram is designed by first sorting out the main causes of the problem. Then we refine the diagram as shown in Figure 10.10. A cause-and-effect diagram may not, in its final version, look as in this figure. If it does, the causes of the main causes are not evident to us. A finished diagram should, as a rule, be very "bony"; see Figure 10.12.

When this is done we take one of the causes described in more detail and refine the classification further; see Figures 10.9 and 10.10. Only when we have investigated all the detailed cause descriptions for a main cause do we move on to the next one. Note that even here it is important to concentrate our efforts and analyse one problem at a time.

It is, however, essential to point out that a finished cause-and-effect diagram must never look like the one in Figure 10.9. If it does, we have a poor grasp of causes and effects. A cause-and-effect diagram must, in order to be useful, have a lot of "bones" on its skeleton; see Figures 10.12 and 10.13.

Figure 10.10 For each main cause of a quality problem we must try to find the causes of the main cause. A cause-and-effect diagram should thus be "bony" when it is finished.

The causes of a quality problem can often be referred to one of the following seven M's:

• Management. Does the management provide sufficient information, support and means for the improvement activities?
• Man. Does the operator have adequate training, motivation and experience?
• Method. Are the proper tools available? Are the process parameters properly specified and are they possible to control?
• Measurement. Are the testing devices properly calibrated? Are there any disturbing environmental factors?
• Machine. Is preventive maintenance adequately executed? Does the machine have the capability to produce units with sufficiently small variation?
• Material. What about the material used in the process? Are the supplier's quality activities sufficient?
• Milieu. Does the environment affect the production outcome?

The beginning of a cause-and-effect diagram can thus look as shown in Figure 10.11. We then sometimes speak of a 7M-diagram.

Figure 10.11 The principle of a cause-and-effect diagram in the shape of a 7M-diagram.


[Figure 10.12 shows a fishbone diagram for the quality problem "Pull-out force", with main branches such as cable clip design, control clamp profile, pull test machine, cable and operator, each refined into detailed causes such as dimensional accuracy, gauge, dimension and material.]

Figure 10.12 A cause-and-effect diagram illustrating a problem with inadequate pull-out force after the clamping of a connector to a clamp-shoe.

Figure 10.13 illustrates some of the elements of Total Quality Management, using a cause-and-effect diagram. Cause-and-effect diagrams provide an excellent basis for problem solving. A cause-and-effect diagram and data collected earlier often point to a plausible cause of the observed problem. On other occasions the cause-and-effect diagram can give clear indications as to whether a larger amount of data is required and how it is to be collected, or how a well planned statistical experiment is to be conducted. Cause-and-effect diagrams are usually developed by a team using post-it notes during a brainstorming session. Often, the causes are then grouped under "major" causes which are structured one by one to get a result like that in Figure 10.12. Sometimes, however, the diagram is developed "top-down", i.e. the work starts with identification of the "major" causes, and causes of those are then identified.


[Figure 10.13 shows a cause-and-effect diagram with TQM as the effect and main causes such as Committed Company Management, Competence Development for Everyone, Good Initial Implementation, Well Defined Objectives, Continuous Improvement and Committed Employees, each refined into more detailed causes.]

Figure 10.13 Elements of Total Quality Management, illustrated using a cause-and-effect diagram. (From Gunnarsson & Holmberg, 1993.)

10.5 Stratification

One way of deducing causes of variation is through stratification. If, as in the basis for Figure 10.4, we have data collected from different sources, then we should classify these data into subgroups and illustrate each group separately, for instance by using a histogram.

If these histograms differ substantially, as in Figure 10.14, we may have found a cause of the problem. Then it is a matter of going further to rectify the problem. Maybe an additional refinement of the cause-and-effect diagram is required? A basic rule closely related to the stratification principle is that we must avoid mixing data of different origins. Through stratification we can obtain important information for the improvement work.

[Figure 10.14: two histograms over the same mm scale, one for Machine A and one for Machine B.]

Figure 10.14 Stratification of data from Figure 10.4.

Stratification can be performed with respect to a number of classifications. Some of these may be:

• Material: supplier, store, time of purchase
• Machine: type, age, factory
• Operator: experience, shift, individual
• Time: time of day, season
• Environment: temperature, humidity

A small sketch of how such a stratified summary can be computed follows below.
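As an illustration (the data are made up), stratification amounts to grouping the observations by their origin before summarizing them:

```python
from collections import defaultdict
import statistics

# Hypothetical measurements (mm) tagged with their origin (machine)
observations = [("A", 3.1), ("A", 3.2), ("A", 3.0), ("A", 3.15),
                ("B", 3.7), ("B", 3.9), ("B", 3.6), ("B", 3.8)]

groups = defaultdict(list)
for machine, value in observations:
    groups[machine].append(value)

# Summarize each stratum separately instead of mixing data of different origins
for machine, values in sorted(groups.items()):
    print(f"Machine {machine}: mean {statistics.mean(values):.2f}, "
          f"st.dev. {statistics.stdev(values):.2f}")
```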


10.6 Scatter Plots

In cases where the original conditions vary continuously it may be unsuitable, or in some cases impossible, to stratify. Instead a scatter plot can be used to illustrate how a process or product characteristic varies with an explanatory variable; see Figure 10.15. Maybe the variation of the explanatory variable explains a great deal of the observed variation of the product characteristic. In that case, we have a good basis for quality improvement.

Figure 10.15 A scatter plot illustrating how a result, hardness, depends on a process parameter, temperature.

There are often many parameters influencing the product characteristic of interest. In such cases we should draw a series of scatter plots, one plot for each combination of the parameters, and of the product characteristic in combination with the parameters. The kind of covariation which can be interpreted from a scatter plot can also be used for controlling and supervising the process. Instead of measuring a product characteristic it may be better to measure an explanatory parameter directly in the process. Figure 10.15 illustrates, for instance, that it may be more appropriate to supervise the temperature during hardening than the hardness itself in the finished product. By studying a process parameter instead of measuring on the produced product, we can more rapidly prevent problems caused by process variation.
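The covariation seen in a scatter plot can be quantified with a correlation coefficient. A minimal sketch with made-up temperature and hardness data (statistics.correlation requires Python 3.10 or later):

```python
import statistics

# Hypothetical process data: hardening temperature (deg C) and resulting hardness
temperature = [810, 820, 830, 840, 850, 860, 870, 880]
hardness = [52.1, 53.0, 53.8, 54.2, 55.1, 55.8, 56.3, 57.0]

# Pearson correlation quantifies the linear covariation seen in a scatter plot
r = statistics.correlation(temperature, hardness)
print(f"correlation between temperature and hardness: r = {r:.3f}")
# A strong correlation suggests the process parameter can be supervised
# instead of the product characteristic - but beware of nonsense correlations.
```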


Figure 10.16 The diagram illustrates the number of children born (millions) and the number of breeding stork couples in an area of former West Germany during 1965-1980. Although the curves are parallel we should not conclude that the nativity is due to the number of storks or vice versa. (The idea for the picture is from Nature, 1988, p. 495.)

Note, however, that we have to be on guard against nonsense correlations. Both the product characteristic and an "explanatory" parameter may in fact depend on a third parameter; see Figure 10.16.

10.7 Control Charts

The control chart was the prime tool introduced by Shewhart to find out whether assignable causes of variation exist, in order to make a manufacturing process predictable. It is also an excellent tool for showing graphically the output of the process in time order. The basic idea behind the control chart is to obtain information from the process at regular time intervals, create one or more suitable process quality indicators and, based on these, check whether the process characteristics behave in a suitable and predictable way.


Not only is the process variation illustrated in the control chart as a function of time, but process changes are indicated as well. In a manufacturing process the information is often taken on the produced units, for instance as a measurement, as illustrated in Figure 10.17. The information is then weighed together in a suitable manner, for instance into an arithmetic mean or a standard deviation, and plotted in the chart.

Figure 10.17 The idea behind control charts. At regular time intervals information about the process is gathered, e.g. a number of units are taken from the process and measured in some way. The information is weighed together into a process quality indicator, e.g. the average value of the obtained measurements, and plotted in the control chart. As long as the plotted indicator lies between the set control limits, the process is considered stable.


However, other measurements of the process might be even more informative. There are two main purposes in using control charts. The first is that control charts support the creation of processes with stable variation. The second is to quickly detect when a change has occurred in a stable process, resulting in an alteration of the mean value or of the dispersion. We will discuss control charts more thoroughly in Chapter 11.
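As a minimal sketch of the idea in Figure 10.17, the following Python code simulates samples from a process and plots their means against control limits. The target level, standard deviation and sample size are assumed known and the values are invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

mu, sigma, n = 3.45, 0.2, 5            # assumed process mean, std dev, sample size
ucl = mu + 3 * sigma / np.sqrt(n)      # upper control limit
lcl = mu - 3 * sigma / np.sqrt(n)      # lower control limit

rng = np.random.default_rng(7)
samples = rng.normal(mu, sigma, (20, n))   # 20 samples of n units each
xbar = samples.mean(axis=1)                # process quality indicator

print("Samples outside the limits:", np.where((xbar > ucl) | (xbar < lcl))[0])
plt.plot(xbar, marker="o")
for level in (ucl, mu, lcl):               # control limits and central line
    plt.axhline(level)
plt.show()
```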

10.8 Notes and References

More about the seven improvement tools (the seven QC-tools) can be found in the book "Guide to Quality Control" by Kaoru Ishikawa (1982). The book was written at the end of the 1960s to be used in a very ambitious training program for members of QC-circles in Japan. The idea of QC-circles (see Section 23.1) also originates from Kaoru Ishikawa. Other books describing the seven improvement tools are Asaka & Ozeki (1990), Andersen & Fagerhaug (2000) and Brassard & Ritter (1994). In a series of articles in Quality Progress in 1990, starting in the June issue, the seven QC-tools are described together with how they can be used for solving quality problems. In that series the tool "stratification" is exchanged for "flow chart", an important tool for process mapping, which will be discussed in Chapter 18. For more about Exploratory Data Analysis we refer to Tukey (1977), Chambers et al. (1983) and Hoaglin et al. (1983). A beautiful and illustrative book on how to present data in an informative way has been written by Tufte (2001).


11 Control Charts

An important tool in statistical process control for finding assignable causes and for supervising a process is the control chart. The idea is that we take a number of observations from the process at certain time intervals. Using these we calculate some form of process quality indicator, which is plotted in a diagram. A process quality indicator is an observable quantity, based on the observations, indicating the status of the process characteristic of interest. It can, for example, be the arithmetic mean, the standard deviation or the total number of non-conforming units in a sample. A manufacturing process is sometimes supervised using several process quality indicators simultaneously.

As a process quality indicator we can use any quantity that in some way indicates the ability of the process to produce desirable results or how the process can be changed. Consequently, it is not necessarily based on measurements of the results produced in the process. It is in fact an advantage if the indicator is based on measurements made directly in the process, since an indication of a change in the process will then be caught earlier.

As long as the plotted quality indicator remains within the prescribed limits, we say that the process is in statistical control or that we have a stable process. These prescribed limits are called control limits. Very often a central line indicating an ideal level, a target level, is drawn between the control limits; see Figure 11.1.

First, we should like to make clear that there is a significant difference between control limits and tolerance limits. Control limits are used in a control chart to indicate whether a process is predictable, i.e. whether the process is stable. Tolerance limits are set to determine whether a single unit fulfils stipulated production requirements.


Thus control limits are associated with the process, whereas tolerance limits are associated with the single unit. It is also important to point out that there is variation in all processes, not only in manufacturing processes. Therefore, ideas from process control and improvement are important also when the issue is, for example, the number of invoice errors, the time to administer a loan, or the number of programming errors in software development. The distance between the upper and lower control limits is often set at six times the standard deviation of the quality indicator.

Figure 11.1 The principles of a control chart. A quality indicator is plotted at regular intervals in a chart where the distance from the central line (CL) to each control limit (UCL, LCL) is often three times the standard deviation of the indicator. See also Figure 10.17.

11.1 Requirements on Control Charts

A control chart should meet the following requirements:

• it should quickly indicate when a systematic change has occurred in the process and thereby contribute to the identification of assignable causes of variation
• it should indicate if the process is predictable, i.e. in statistical control
• it must be easy to handle
• false alarms should be rare, i.e. the risk must be small that a plotted point falls outside the control limits when no systematic change has occurred
• it should make it possible to estimate the time and type of a change, in order to facilitate the identification of causes


• it has to function as a receipt, both for the producer and the customer, proving that the process has been stable during production
• it has to serve as a basis for evaluation of the process dispersion, i.e. its capability to deliver units within set tolerances¹
• it has to function as a receipt proving that the improvement work has been successful
• it should strengthen motivation and continuously bring attention back to variation in the process and to quality issues
• it should provide information for improvement of the future operation of the process.

Items one and four above are in conflict. If the sensitivity of the chart is increased by narrowing the distance from the central line to the control limits, the risk of false alarms increases. As a rule, this problem is solved in the following way. The random dispersion of the observed quality indicator is reduced by basing each plotted point on several observations instead of on a single one. Furthermore, the control limits are usually set so that the difference between the upper and lower control limit is six times the standard deviation of the plotted process quality indicator when the process is in statistical control. As we will see later, this principle implies that in general the time between false alarms becomes sufficiently long, even if it varies considerably between different types of control charts.

11.2 Principles for Control Charts

In large parts of this chapter we will exemplify ideas and concepts concerning control charts using a control chart for supervision of the average level of a certain characteristic. We can think of the diameter of a bolt, the time to delivery or the time to failure of a certain type of unit. In this case it is suitable to use an x̄-chart, i.e. to plot the mean x̄ of the observations in the chart.

¹ This is called process capability, and is discussed further in Chapter 12.


At certain regular time intervals we take a number of observations, a sample, from the process. The size of the sample is denoted n. Each observation is here presumed to be a numerical value. Then we calculate the arithmetic average x̄ = (x₁ + x₂ + ... + xₙ)/n of the n observations and use this average as a quality indicator of the process. Suppose that the quantity we want to control has the mean (expectation) µ and the standard deviation σ, both of which are assumed to be known when the process is in statistical control. Then the average x̄ is an observation from a distribution with the same mean (expectation) µ as the individual observations x₁, x₂, ..., xₙ, but with the standard deviation σ/√n. In other words, we have a larger chance of detecting a deviation from the mean µ by observing the average value x̄ instead of an individual observation. How large then should n be?
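The σ/√n result is easy to verify numerically. The following small simulation sketch uses arbitrary illustration values of µ, σ and n:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 2.0, 5    # arbitrary illustration values

# Draw many samples of size n and compare the empirical standard
# deviation of the sample means with the theoretical value.
means = rng.normal(mu, sigma, (100_000, n)).mean(axis=1)
print(means.std())             # empirical, close to 0.894
print(sigma / np.sqrt(n))      # theoretical: sigma / sqrt(n) = 0.894...
```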

Figure 11.2 If the observations are from a distribution with expectation µ and standard deviation σ, the arithmetic average x̄ has the same expectation µ as the individual observations, but a standard deviation that is σ/√n. The figure shows the distribution of the mean together with the distribution of the individual units.

Guidance as to the choice of n is given in Figure 11.3, where the standard deviation of the distribution of x̄ is illustrated as a function of the number of observations n. We can clearly see that the reduction of the standard deviation of x̄ obtained by adding another observation is considerable for values of n up to about 4, but then gradually decreases. That is why 4, 5 or 6 is commonly chosen. For historical reasons n = 5 is frequently² chosen.

² With 5 observations it was easy to calculate the average x̄ without a computer or a calculator, simply by adding the observations, multiplying the sum by two and finally moving the decimal point one step to the left.



Figure 11.3 The standard deviation of the distribution of x̄ as a function of n, the number of observations in the sample.

11.3 Choice of Control Limits

The choice of control limits must be made in such a way that false alarms become rare. Let us assume for the moment that the distribution of x̄ is approximately normal. Then the probability that an x̄-value deviates more than 3σ/√n from the process mean µ, when the process is in statistical control, is merely 0.0027. If we stop the process when an x̄-value deviates more than 3σ/√n from µ, we therefore stop the process unnecessarily in about 0.3% of the cases. This means that, when the process is stable, on average about 370 samples are taken between false alarms; see Section 11.7.2. This is generally considered to be a reasonable risk. Often, in situations like this, we therefore use the control limits µ ± 3σ/√n and the central line µ in a control chart in which x̄ is plotted. Since the distance between each control limit and the central line is three times the standard deviation of our quality indicator x̄, the control chart is said to have 3-sigma limits. Control charts designed according to this principle are sometimes called Shewhart diagrams or Shewhart charts after their originator Walter A. Shewhart.
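These numbers can be checked with a few lines of Python, assuming x̄ is normally distributed (scipy's norm.sf gives the upper-tail probability of the standard normal distribution):

```python
from scipy.stats import norm

p = 2 * norm.sf(3)      # probability of falling outside 3-sigma limits by chance
print(round(p, 4))      # 0.0027
print(round(1 / p))     # about 370 samples, on average, between false alarms
```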


[Facsimile of Walter A. Shewhart's memorandum of May 16, 1924, to R. L. Jones, proposing an inspection report form that indicates whether observed variations in the percentage of defective apparatus are significant, and containing the first sketch of a control chart.]