Design Navi for Product Development
ISBN 3031452402, 9783031452406
English, 176 pages, 2023

This book delves into Feasibility Prediction Theory (FPT) and its real-world application in product development.

Table of contents:
Preface
Acknowledgments
Contents
About the Author
Part I Basic Theory-Feasibility Prediction Theory
1 Evolution of Feasibility Prediction Theory
1.1 Traditional Prediction Methods
1.2 How the Feasibility Prediction Theory was Invented
References
2 Feasibility Prediction Theory
2.1 Scale of Measuring Feasibility is Required
2.2 Probability of Feasibility
2.3 Axiomatic Evaluation of the Feasibility of the System
2.4 Functional Requirements and Design Range
2.5 How to Calculate a System Range
2.6 Common Range Coefficient
2.7 The Feasibility Prediction Theory
2.8 Parameter Types and System Range
2.9 The Feasibility Prediction Theory Combining Kansei Evaluation
2.9.1 Kansei Evaluation of Heating Equipment
2.9.2 Let’s Run the Meeting Democratically
2.10 System Feasibility Probability
2.11 Appendix: Normal Distribution Table
Reference
Part II Feasibility Study of Product Development Project-New Feasibility Study
3 Project Feasibility Study
3.1 What is a Feasibility Study
3.2 New Feasibility Study Process
3.2.1 Project Description and SWOT Analysis
3.2.2 Establish Functional Requirements and Design Ranges for the Project
3.2.3 Design a System to Realize the Project
3.2.4 Predict the Feasibility of All Areas in Which the System Will Be Involved
3.2.5 Summarize the Conclusions of the Feasibility Study
3.3 Areas to Examine the Feasibility of Projects
3.4 Future Prediction
3.4.1 Predicting the Future with Patterns
3.4.2 Unpredictable and Uncertain Event Problems
3.4.3 Logical Thinking
3.4.4 Timing of Project Launch
References
4 Feasibility Study of Product Development Project
4.1 Product Development Project from Seeds and Needs
4.1.1 SWOT Analysis to Establish Areas of Need
4.1.2 Gather Information Through Interviews/Surveys
4.1.3 Discovering Subjects Using the Meta-Concept Thinking
4.2 Product Development Projects from Current Products
4.2.1 Find the Current Product Value Curve
4.2.2 What is the Meta-Concept of This Product?
4.2.3 New Strategies in the Action Matrix
4.2.4 Drawing a New Value Curve
4.3 Product Development Project from Vision
4.3.1 First, Articulate the Vision
4.3.2 SWOT Analysis
4.3.3 Discover Subjects
4.3.4 Determination of Evaluation Criteria and Selection of Subjects
4.3.5 Selection of Functional Requirements and Conceptual Design of the System
4.3.6 Prototype Design
4.4 Feasibility Study of Product Development Project
References
Part III Realization of Product-Design Navi
5 Design Navi
5.1 Significance of Design Navi
5.2 Process of Design Navi
5.2.1 Determination of Functional Requirements and Design Ranges
5.2.2 Design Prototypes
5.2.3 Selection of Critical Parameters and Determination of Levels
5.2.4 Collection of Data by Experiment or Simulation Based on Orthogonal Table
5.2.5 Determine the Optimal Values of the Parameters by Determining the System Feasibility Probability
5.2.6 Build a Product with the Optimal Values Obtained and Check Its Performance
5.3 Performance Prediction
5.4 Case When the Design Range Is a Positive Single Value
5.5 Feasibility of Functional I/O Relationship
5.6 How to Choose Parameters
5.7 Features of Design Navi
Appendix: Orthogonal Table L18
References
6 Development Examples with Design Navi
6.1 Improvement of Injection Molding Machine
6.2 Development of Dental Air Grinder
6.3 Productivity Improvement of Grinding of Gas Turbine Parts
6.4 Development of Coating Tools
6.5 Hand Washing (Work Optimization)
6.6 Design of Stabilizing Power Circuit
References
Part IV Correct Way of Thinking-Meta-Concept Thinking
7 Way of Thinking
7.1 Beware of Cognitive Bias and Symptomatic Thinking
7.1.1 Cognitive Bias Leads You into Unexpected Pitfalls
7.1.2 Symptomatic Thinking is Dangerous
7.2 Information Collection and Correct Value Criteria
7.2.1 Information Gathering—Have You Done Your Research?
7.2.2 Let’s Have the Correct Value Criteria—The Wrong Ones Will Lead Us Astray
7.3 Correct Way of Thinking 1: Meta-Concept Thinking
7.4 Correct Way of Thinking 2: Total Design Thinking
References
Index

Hiromu Nakazawa

Design Navi for Product Development

Professor Emeritus, Waseda University, Maebashi, Japan

ISBN 978-3-031-45240-6    ISBN 978-3-031-45241-3 (eBook)
https://doi.org/10.1007/978-3-031-45241-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface

Many new products come onto the market every day. How many of them become hit products? How many are well-balanced products with sufficient performance to satisfy customers at a reasonable cost? Some of the products that have disappeared from the market appear to have been conceived or planned incorrectly in the first place. There are also products that were well conceived but did not perform as expected. This book introduces a powerful method for developing products that avoid these mistakes and are instead cost-effective and satisfying to customers.

Failure is not an option in product development, and the same is true when purchasing products and equipment. Even when the requirements for the product or equipment to be purchased are well defined, it is often difficult to choose which one to select. It is a matter of trade-offs: one product may have excellent performance on some items but insufficient performance on others, another product may show the opposite characteristics, and so on, and we are left wondering which to choose. This book also introduces techniques, based on its underlying theory, that can be applied in such cases.

This book is unique in that it explains product development, from the project feasibility study to the actual realization of the product, in a unified and demonstrative manner that can be used immediately. This product development theory is based on probabilistic concepts and is therefore highly reliable. There are two basic steps in product development. In the first step, when a product development project is launched, a feasibility study is conducted to determine its adequacy and feasibility. The next step, once the product development project is a go, is the process of determining the functional requirements of the product, including its cost, and their target values, and completing the product in a concrete form that satisfies these requirements in the most balanced manner. Roughly speaking, one method is commonly used in each of these two steps: the feasibility study of the first step has relied mainly on Cost-Benefit Analysis, while the Taguchi Method is well known for the second step.

In Cost-Benefit Analysis, however, the losses and benefits of all the different types of evaluation items to be considered, that is, items of different dimensions, are converted into the single measure of money, and the option with the greatest net benefit is selected. It is unreasonable to assume that everything can be converted into money; for example, it would be difficult to convert the losses and benefits of technical performance into money. And even if the losses and benefits can be calculated, they do not tell us the probability that they will actually be realized. This book introduces a new feasibility study that remedies these shortcomings of conventional feasibility studies.

The Taguchi Method, often used in the second step, is also difficult to use in reality because of two drawbacks. One is that it can be applied to only a single functional requirement: to satisfy multiple functional requirements, one has to come up with a single evaluation formula that integrates them, which can be very difficult, and such a formula cannot always be created. The other is that it is not possible to set target values for the functional requirements. All of this is discussed in detail in this book, but because of these functional limitations, none of these methods is easy to use. In other words, until now there have been no good methods usable throughout product development, and developers have reluctantly relied on repeated trial and error, which wastes money and time. The theory introduced in this book is a powerful solution to these problems.

To solve them, this book proposes the new feasibility study for the first step and Design Navi for the second step. These two methods require a theory to serve as their foundation: the Feasibility Prediction Theory (FPT). The new feasibility study uses FPT to determine the feasibility probability of each relevant area of the project; these are then combined into the system feasibility probability. With this single number, the feasibility of the project can be judged comprehensively. This is an excellent approach because it integrates items of different dimensions with a common dimensionless measure of feasibility, namely probability; there is no need to force a conversion into money. Design Navi, which combines FPT with the orthogonal tables used in the design of experiments, is used in the second step of product development. Design Navi produces excellent results in a short development time compared with the inefficient and costly trial-and-error methods of the past. In fact, Design Navi has been adopted by many companies in Japan since it was first announced in 2001; by our count, more than 60 companies had adopted the method by 2014 (we have not checked after that date). Those companies have achieved excellent results, and some of their cases are introduced in this book.

The above theories are based on an idea the author discovered in 1980 as a visiting researcher at the Massachusetts Institute of Technology, where I invented the Information Integration Method and later developed it further. The Information Integration Method is already taught in some graduate schools and MBA programs in Japan; the Feasibility Prediction Theory presented in this book is an evolution of the Information Integration Method that allows easier calculation of feasibility probability.


In addition, an important and indispensable part of product development is the conception of new ideas. Various conventional methods of thinking have been proposed, but I have never been able to shake the feeling that they are not quite right. The reason may be that none of them teaches a correct “procedure” for thinking. So I introduce Meta-Concept Thinking, which uses an unconventional thinking procedure that I developed.

This book consists of four parts. Part I explains the evolutionary history of the Feasibility Prediction Theory and the theory itself in an easy-to-understand manner. Part II explains the new feasibility study used to verify the feasibility and appropriateness of a project, and shows more concretely how to use it, taking a product development project as an example. Part III introduces the theory of Design Navi, the main theme of this book, together with examples of actual applications of the method that have produced excellent results. Part IV first clarifies the causes of mistakes in thinking and then introduces the use of Meta-Concept Thinking and of the Total Design Thinking derived from the Feasibility Prediction Theory.

The theories proposed in this book can be used in all aspects of politics, economics, management, science and technology, and production technology. I believe this knowledge is essential for specialists in every field. In other words, it is a new field of study that should be incorporated as a required subject in higher education, regardless of whether the curriculum is in the humanities or in science and technology. If this book becomes the pioneer of this field, the hard work of writing it will be rewarded.

Finally, I would like to thank Kindai Kagaku sha Co., Ltd.: this book is based on my Japanese book published by that publisher.

Maebashi, Japan

Hiromu Nakazawa

Acknowledgments

The Feasibility Prediction Theory (FPT) and the Design Navi described in this book have been a powerful theory and method for improving product development and productivity in the Japanese manufacturing industry. They not only improve product development but can also lead a wide variety of projects to success. The knowledge a book conveys is important, but if you do not experience it, you will never truly understand it: no matter how good the knowledge or information is, it is meaningless if it just sits in your head. Only when you use it and experience it are knowledge and information given life. I hope that this book will be a straw to clutch at for at least a few people, or even a few companies.

This research began in 1980, when I visited Prof. Nam P. Suh’s laboratory at MIT for a year as a visiting researcher. At the time, Prof. Suh was working on a design theory based on the information axiom. The axiom states that a design with a small amount of information is a good design, but how to calculate this amount of information in actual design was still unresolved, and the axiom was applied in a qualitative, intuitive way. Later, I proposed a theory called the Information Integration Method to quantitatively calculate the amount of information of this axiom in design, and it eventually evolved into FPT. Combining the Information Integration Method with the orthogonal tables used in the design of experiments, I developed the Nakazawa Method, which is the basis of Design Navi. In 2020, the information function was simplified so that only the probability within it is calculated, and Design Navi was created; a detailed explanation is given in the text. In this sense, I would like first to thank Prof. Nam P. Suh for providing the inspiration for this research.

This research would not have been possible without the cooperation of the former undergraduate and graduate students of my former laboratory, and I thank them for this important contribution. In addition, all the calculations for the cantilever in Chap. 5 were done by Prof. Toshitake Tateno of the School of Science and Technology, Meiji University. Thank you very much.


I am truly happy to have finally finished writing this English book based on the Japanese edition. This English edition was revised and reworked from the Japanese edition published by Kindai Kagaku-sha Digital. On the occasion of the birth of the Japanese version, the author was greatly helped by Ms. Sachi Ishii, the editor-in-chief of Kindai Kagaku-sha Digital, and Mr. Masahide Ito, an editor at that company; I would like to express my sincere gratitude to both of them. I am also grateful to Dr. Mayra Castor, senior editor at Springer Nature, for giving me the chance and the support to publish this book. Lastly, I would like to thank my late wife Michiko, who always supported me, and my son Satoru and my daughter Megumi, who still support me today.

Maebashi, Japan 2023

Hiromu Nakazawa


About the Author

Hiromu Nakazawa, born in 1938, Doctor of Engineering, is a Professor Emeritus of Waseda University, specializing in design theory and precision engineering. He graduated from Waseda University in 1961 and worked as an engineer at Mitsubishi Heavy Industries for seven years. After serving as an assistant, he became a full-time lecturer and then an assistant professor at the School of Science and Engineering, Waseda University, spent time as a visiting researcher at the Massachusetts Institute of Technology, and then became a professor at the School of Science and Engineering, Waseda University. His major publications include Principles of Precision Engineering (Oxford University Press, 1994, ISBN 0-19-856266-7), Information Integration Method (Corona Publishing Co., Ltd., 1987, in Japanese), Precision Engineering (Tokyo Denki University Press, 2011, in Japanese), and about twenty other books, including university textbooks. He is an honorary member of the Japan Society for Precision Engineering. His awards include the Hasunuma Memorial Award from the Japan Society for Precision Engineering and the Achievement Award from the Japan Society of Mechanical Engineers, Design Engineering and Systems Division.


Part I

Basic Theory-Feasibility Prediction Theory

Chapter 1

Evolution of Feasibility Prediction Theory

This chapter explains the background and evolution of the Feasibility Prediction Theory (FPT), which plays the leading role in this book. If you are interested in the history, please start reading from this chapter. If you want to use the Feasibility Prediction Theory immediately, you may skip this chapter and start from Chap. 2 without affecting your use of the FPT at all.

1.1 Traditional Prediction Methods

Feasibility Prediction Theory (FPT) is a theory that predicts the feasibility of a system axiomatically and probabilistically, considering all of its functional requirements comprehensively. Before going into the explanation, let us define the basic terms (these definitions are given again in the next chapter). When a person tries to realize a goal, the thing designed to realize that goal is called a system. The term has a wide range of meanings: it includes not only hardware but also software, and covers everything from appliances, automobiles, and aircraft to national projects such as the construction of dams and highways. If we look closely at the flow of trying to achieve a goal, the goal is specified first, followed by the requirements and target values needed to achieve it. Based on these, a conceptual design is made to work out the system in more concrete detail. In this book, design includes all of these design processes. When a specific system is designed, you can predict how well each required item can be achieved. Each such required item is called a functional requirement.

With these preparations in place, we next explain the historical evolution that led to the invention of the FPT. When we look at our daily behavior, we notice something in common: needless to say, we act for some purpose. We never doubt whether we can realize the trivial goals of daily life, but for a somewhat larger purpose, such as traveling abroad or building a house, we may be anxious about whether it will be realized or not. In other words, we want to know its feasibility. How, then, do we predict it? Let us look at some of the main prediction methods that have been used (not necessarily limited to distant future events).

Intuition

First of all, intuition is used when trying to carry out work quickly. Intuition is a great means for humans to predict feasibility. The great figures of all ages and countries who possessed a keen sense of intuition have made excellent decisions and taken actions that impressed people. Enhancing this intuition should be one of the most important objectives of education, though such education is not conducted consciously in today’s schools. Unfortunately, good intuition is not given to everyone, and intuition does not always give the right answer, because real problems are often too complex for it to handle. In particular, when a matter has many functional requirements, the degree to which each is demanded varies, and trade-offs exist between them, it is almost impossible to derive the correct conclusion by intuition.

Expected Utility

There is a prediction method called expected utility. This method combines a feasibility calculation with the benefit obtained when the goal is realized. For example, if a student has to choose between taking the entrance examination of university A or university B, whose examinations are held on the same date, how should the student choose? In this case two uncertain quantities must be estimated. One is the probability of passing the test; the other is the benefit (utility) of each university for the student’s future career after graduation. The probability of passing can be estimated to some extent by comparing one’s recent deviation test score with that of the university. Utility is quite difficult to estimate objectively and correctly, but once the items are determined and some objective data represented by a single number are obtained, the expected utility can be calculated by multiplying the probability by the utility. (Traditionally there has been no way to combine the expected utilities of multiple evaluation items; some single comprehensive value is adopted anyway.) For instance, suppose university A is rated 8 and university B is rated 6; these values are multiplied by the probabilities of passing each test, and the university with the larger expected value is selected. As this shows, the determination of utility is fairly vague and relies on intuition. By combining it with the FPT of this book, the feasibility probability of the utility under consideration can be predicted accurately, as explained later, so expected utility becomes a more powerful method that can be used rationally. A small worked sketch follows this paragraph.
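To make the expected-utility calculation concrete, here is a minimal sketch of the university example just described. The utilities 8 and 6 come from the text above; the passing probabilities (0.5 and 0.8) are hypothetical values assumed purely for illustration.

```python
# Expected utility = (probability of passing) x (utility after graduation).
# Utilities 8 (university A) and 6 (university B) are from the text;
# the passing probabilities are hypothetical illustration values.
candidates = {
    "University A": {"p_pass": 0.5, "utility": 8},
    "University B": {"p_pass": 0.8, "utility": 6},
}

for name, c in candidates.items():
    expected = c["p_pass"] * c["utility"]
    print(f"{name}: expected utility = {c['p_pass']} * {c['utility']} = {expected:.2f}")

best = max(candidates, key=lambda n: candidates[n]["p_pass"] * candidates[n]["utility"])
print("Choose:", best)  # 4.00 vs 4.80 -> University B under these assumptions
```

Note how the conclusion can flip with the assumed probabilities; this is exactly the fragility the text attributes to intuitive utility estimates.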

Cost Benefit Analysis

To select one of several plans (systems) for realizing a certain purpose, there is a method called cost benefit analysis, in which alternatives are evaluated from the viewpoints of cost and benefit. Note, however, that even if a good plan is chosen this way, its feasibility cannot be predicted. The cost is the total amount of resources, in monetary terms, necessary to realize the plan, while the benefit is the effect of the plan, in monetary terms, calculated from the proposal under consideration. For instance, suppose there is a project to build a highway from city G to city H and that three routes A, B, and C are under consideration. For each plan, direct benefits, such as savings in travel time and a decrease in total traffic accidents, and indirect benefits, such as an increase in land values and the promotion of local industries, are estimated in monetary terms and summed up. On the other hand, costs, including the construction cost, the cost of purchasing land, and negative environmental costs such as noise and air pollution, are estimated in monetary terms and summed up.

Fig. 1.1 Cost benefit analysis

If we take benefit for the abscissa and cost for the ordinate, suppose that three curves are obtained as shown in Fig. 1.1. Based on these data, we can evaluate the plans and choose the best one. For example, if the plan that maximizes benefit is to be selected for a fixed cost a2, then plan C is the best. On the other hand, if the plan that minimizes cost is to be selected for a fixed benefit b1, then plan B is the best. This method is seemingly simple and clear because plans are evaluated by only the two items of cost and benefit. The difficulty is that everything must somehow be converted reasonably into monetary value. Moreover, if an additional requirement, such as a construction deadline, must be considered, selection by this method is no longer possible. A toy example of the selection rule follows.
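As a toy illustration of the two selection rules above, the sketch below encodes the three cost-benefit curves as sampled (cost, benefit) points. All numbers are invented for illustration; Fig. 1.1 is qualitative and provides no numerical data.

```python
# Hypothetical sampled (cost, benefit) points for the three highway routes.
plans = {
    "A": [(1.0, 2.0), (2.0, 3.5), (3.0, 4.5)],
    "B": [(1.0, 2.5), (2.0, 3.8), (3.0, 4.6)],
    "C": [(1.0, 2.2), (2.0, 4.0), (3.0, 5.0)],
}

def benefit_at(points, cost):
    """Benefit of the sampled point whose cost is closest to the given cost."""
    return min(points, key=lambda p: abs(p[0] - cost))[1]

fixed_cost = 2.0  # plays the role of a2 in the text
best = max(plans, key=lambda name: benefit_at(plans[name], fixed_cost))
print(f"At fixed cost {fixed_cost}, choose plan {best}")  # plan C under these numbers
```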

Point Evaluation Method

This is a method in which several proposals are compared by assigning points to each proposal, and the proposal with the highest total is selected as the best one. Unlike cost benefit analysis, if the points are estimated with feasibility in mind, it is theoretically possible for this method to reflect feasibility. To explain with a simple example of purchasing a system from one of three firms, suppose that the functional requirements to be considered are the system’s performances 1, 2, and 3, safety, cost, and the firm’s after-sales service. Points are assigned to these functional requirements as shown, for instance, in Table 1.1. Several point systems exist; a five-point system is used in this case. In general, the items differ in importance, so weighting is necessary to differentiate them; a four-level weighting coefficient is adopted here. Each item’s points are then multiplied by the weighting coefficient and summed up. In this case proposal A has the highest total and so is chosen.

Table 1.1 Point evaluation method

| Item | Weighting coefficient | Proposal A: evaluation | Proposal A: weighted | Proposal B: evaluation | Proposal B: weighted | Proposal C: evaluation | Proposal C: weighted |
|---|---|---|---|---|---|---|---|
| Price | 3 | 4 | 12 | 3 | 9 | 4 | 12 |
| Performance 1 | 3 | 4 | 12 | 4 | 12 | 2 | 6 |
| Performance 2 | 3 | 5 | 15 | 4 | 12 | 2 | 6 |
| Performance 3 | 4 | 4 | 16 | 4 | 16 | 3 | 12 |
| Safety | 2 | 4 | 8 | 3 | 6 | 3 | 6 |
| Service | 1 | 2 | 2 | 3 | 3 | 3 | 3 |
| Total | | | 65 | | 58 | | 45 |

(The weighted evaluation of each item is its evaluation multiplied by the weighting coefficient.)
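The totals in Table 1.1 are plain weighted sums. Here is a minimal sketch that reproduces them, with the evaluations taken directly from the table:

```python
# total score = sum over items of (weighting coefficient * evaluation)
weights = {"Price": 3, "Performance 1": 3, "Performance 2": 3,
           "Performance 3": 4, "Safety": 2, "Service": 1}

evaluations = {
    "Proposal A": {"Price": 4, "Performance 1": 4, "Performance 2": 5,
                   "Performance 3": 4, "Safety": 4, "Service": 2},
    "Proposal B": {"Price": 3, "Performance 1": 4, "Performance 2": 4,
                   "Performance 3": 4, "Safety": 3, "Service": 3},
    "Proposal C": {"Price": 4, "Performance 1": 2, "Performance 2": 2,
                   "Performance 3": 3, "Safety": 3, "Service": 3},
}

for name, scores in evaluations.items():
    total = sum(weights[item] * score for item, score in scores.items())
    print(name, total)  # Proposal A 65, Proposal B 58, Proposal C 45
```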

There are several shortcomings to this point evaluation method. The first serious drawback is that a proposal containing an evaluation item with an extremely low score will still be selected if it carries the highest total score. But a proposal selected this way will cause serious trouble later because of the low-rated item. For instance, if the scores of the items from cost through safety are very high but the score for after-sales service is extremely low, it will be very hard to get the system repaired when it breaks down. In other words, this method evaluates systems in a partially optimal way. Furthermore, the result varies with the values of the weighting coefficients, so these must be determined rationally, yet one faces the difficulty that there is no rational way to determine them.

Taguchi Method

The Taguchi Method [1] is a parameter design method for a system developed by Genichi Taguchi, based on the design of experiments developed by the British geneticist and statistician R. A. Fisher for the efficiency of farm experiments. It has been used in many areas, including industry. To explain its use briefly, taking an example from the industrial field: to improve productivity, the parameters inferred to affect it and their values at several levels are assigned to an orthogonal table [2]. Experiments or simulations are carried out according to this orthogonal table, and the optimal parameter values are determined from the results. In that sense the Taguchi Method looks like Design Navi, but they are essentially quite different. The Taguchi Method can handle only one functional requirement, it cannot set a target value for that functional requirement, and its calculation procedure is quite complicated. Design Navi, which was invented based on the FPT, has resolved all of these shortcomings. Design Navi is explained in detail in Chaps. 5 and 6, where you will see that it is easier to use and more widely applicable.

Feasibility Study

A feasibility study (FS) is a study of the feasibility of a proposed project. In this book it is defined as the process of examining the feasibility of a proposed project in all relevant areas and items and predicting whether the project can be realized optimally and holistically in its entirety, or modifying the project so that it can be realized. Projects vary widely in size and type, ranging from national projects to corporate product development. For example, a project may be the construction of a high-speed rail line, the construction of theaters and hotels, the purchase of research vessels, wastewater treatment for a fuel company, a national lifelong education platform, or even the introduction of gray wolves to a certain region. The list is endless. Therefore, the way a feasibility study is conducted varies on an ad hoc basis depending on the subject.

There are many items and related areas that must be considered in a feasibility study, regardless of the type of project. Therefore, when evaluating whether a project is good or bad, a scale is needed that unifies all the different types of items and disciplines. It is meaningless to simply present a list of analysis results for each field, for example, that the system has a low rating in one field because of such and such a problem but high ratings in the other areas. From such a report it is not at all clear whether the project can be carried out, or what modifications are necessary to implement it. However, current feasibility studies are still at this level, and hence are hard to use. Since the conventional feasibility study has had no unified, single evaluation scale for these issues, cost-benefit analysis has been used instead. However, as mentioned above, this is quite unreasonable and hard to use. Another way is to rate each of the several proposed options on a ◎, 〇, Δ scale. But here too there is no single objective scale that evaluates all items in total; in the end the decision must be made by a person’s fragile intuition, and we cannot know how feasible the selected option is.

Therefore, this book proposes a solution: the feasibility probability of each item is proposed as a measure of feasibility that can be applied to any item in any area. By integrating these into the system feasibility probability of the project, it is possible to evaluate the project comprehensively with a single numerical value. Moreover, it becomes clear to what degree the project can be realized, which items were problematic, and how much the feasibility of the project as a whole will improve when improvements are made to increase the feasibility probability of those items. In this way the current feasibility study is transformed into a highly reliable one. This is the new feasibility study, explained in detail in Chaps. 3 and 4.

There are many other feasibility evaluation methods, such as the elimination method and radar charts, but we will not go into their details, because none of them is a holistic and reliable way to evaluate feasibility. A holistic evaluation of the feasibility of a system requires that all functional requirements have explicit, quantitative target values and that the range of variation of each item’s properties be clear. From these data the probability of the feasibility of the system must be determined rationally. Furthermore, a feasibility evaluation method can only be used safely if it guarantees that a system will never be selected when even one of its items is infeasible; in other words, the method must be able to select the overall optimal proposal. Otherwise, a flawed system will be selected and we will have a big problem later on.

The FPT proposed in this book can compensate for all of the above shortcomings of conventional FS and can reasonably evaluate the feasibility of a system in terms of multiple functional requirements holistically.
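As a minimal sketch of how such a holistic evaluation can work, assume each relevant area of a hypothetical project has been given a feasibility probability; combining them multiplicatively (the compound probability derived in Sect. 1.2) yields a single system-level number and points to the weakest area. The area names and values below are invented for illustration.

```python
# Hypothetical per-area feasibility probabilities for one project.
areas = {"technology": 0.95, "cost": 0.80, "market": 0.90, "regulation": 0.60}

system_probability = 1.0
for p in areas.values():
    system_probability *= p

weakest = min(areas, key=areas.get)
print(f"System feasibility probability: {system_probability:.3f}")  # 0.410
print(f"Improve first: {weakest}")  # regulation
```

Raising the regulation item from 0.60 to 0.90 would lift the whole project to 0.95 * 0.80 * 0.90 * 0.90 ≈ 0.616, which is exactly the kind of what-if reasoning the new feasibility study supports.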

1.2 How the Feasibility Prediction Theory was Invented

The research that led to the FPT proposed in this book was conducted by the author as a visiting researcher under Prof. N. P. Suh at the Massachusetts Institute of Technology (MIT) in 1980. At that time, Professor Suh was advocating the axiomatic design theory [2]. I proposed a method to quantitatively calculate the amount of information used in the axiomatic design theory [2], which led to the development of the Information Integration Method, which in turn developed into Design Navi. The Information Integration Method was first published in 1983 as “Process Design Method Using the Concept of Quantity of Information” [3]. In 1984 it was also published in the literature [4, 5], and in 1987 it was published in a book entitled “Information Integration Method” [6]. However, the term information was liable to be confused with the term information in information theory, so in 2006 it was changed to recsat in the book [7], and that name has been used ever since. Since the FPT was constructed using the original concepts of the Information Integration Method, this book uses the same terminology.

The following explains how the FPT evolved from the Information Integration Method. The study of the Information Integration Method started from the minimum information design axiom proposed by Professor Suh, and therefore from the concept of information, or more precisely, from the same information equation as used in information theory. However, the interpretation of the information equation was subsequently changed. If the feasibility probability of a certain state is P, then the smaller the probability, the harder the state is to realize. This means that realizing an objective state of low probability necessarily requires a great deal of effort: a large amount of material, energy, and information, elaborate mechanisms, and so on. The amount of information is interpreted as a state quantity that expresses the amount of these efforts. This is the background of the basic concept of the information equation used in the Information Integration Method. The function E that represents this state is defined by the following equation (the same equation as in information theory), where P denotes the probability of realizing the state:

E = ln(1/P)    (1.1)

where ln is the natural logarithm. Now we show how this state quantity can be used to predict the feasibility of a system’s functional requirements. To realize a functional requirement of a system, it is necessary to change the system from the initial state of the functional requirement to the target state. If we can measure the difficulty of this state change, we can predict the feasibility of the system. Therefore, for a system with a certain functional requirement, we denote the quantity of state in the initial state 1 by E_1 and the quantity of state in the target state 2 by E_2, as shown in Fig. 1.2.

Fig. 1.2 Difference between two state quantities I (states 1 and 2 with quantities E_1 and E_2)

The difference I between these state quantities (which is also an information quantity) can be used to estimate the difficulty of realizing the system. Combined with Eq. (1.1), we get the following equation:

I = E_2 − E_1 = ln(1/P_2) − ln(1/P_1)

Here the probability P_1 is 1, because state 1 already exists, that is, it has already been realized. The second term then becomes zero, because the natural logarithm of 1 is zero. Hence the equation becomes simply

I = ln(1/P_2)    (1.2)

which means that the feasibility of this functional requirement of this system can be calculated from ln(1/P_2); that is, only P_2 needs to be calculated. Now, in the Information Integration Method, the most feasible system was determined according to the axiom that the best system is the one with the smallest total (summed) value of the information quantities I_i of all the functional requirements. For example, suppose a system has three functional requirements whose feasibility probabilities are P_1, P_2, and P_3; then the total information content I of the system is

I = ln(1/P_1) + ln(1/P_2) + ln(1/P_3) = ln(1/(P_1 P_2 P_3))    (1.3)

As can be seen, the argument of ln is the inverse of the compound of the three feasibility probabilities of the functional requirements. It is clear that the feasibility of the total system can be obtained simply by calculating the compound probability from the feasibility probabilities of the functional requirements, without calculating the more complex function (information): the minimum amount of information corresponds to the maximum compound probability. This realization led to the establishment of the FPT introduced in this book. In Chap. 2, we finally get into the details of the FPT.
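A minimal numeric sketch of Eq. (1.3): computing the total information I and the equivalent compound probability for two hypothetical candidate systems shows that minimizing I is the same as maximizing the product of the feasibility probabilities.

```python
import math

def total_information(probabilities):
    """I = sum(ln(1/Pi)) = ln(1/(P1*P2*...*Pn)), as in Eq. (1.3)."""
    return sum(math.log(1.0 / p) for p in probabilities)

def compound_probability(probabilities):
    """Product of the individual feasibility probabilities."""
    return math.prod(probabilities)

# Hypothetical candidates: X is balanced, Y hides one nearly infeasible requirement.
for name, probs in [("X", [0.9, 0.8, 0.7]), ("Y", [1.0, 1.0, 0.2])]:
    print(f"System {name}: I = {total_information(probs):.3f}, "
          f"compound P = {compound_probability(probs):.3f}")
    # exp(-I) equals the compound P, so the smallest I always marks the largest P.
```

Here system X wins (compound P = 0.504 versus 0.200) even though Y contains two perfect items, which is precisely the defect-punishing behavior a holistic measure needs.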


References

1. Taguchi, G. (2000). Experimental design method (Vols. 1, 2). Maruzen (in Japanese).
2. Suh, N. P., Bell, A. C., & Gossard. (1987). On an axiomatic approach to manufacturing system. Journal of Engineering for Industry, Transactions ASME, 100, 127–130.
3. Nakazawa, H. (1983). Process design method using the concept of information. Precision Machine, 49(9), 1246–1250 (in Japanese).
4. Nakazawa, H. (1984). Information integration method. In Proceedings of the International Symposia on Design and Synthesis (pp. 171–176) (in Japanese).
5. Nakazawa, H., & Suh, N. P. (1984). Process planning based on information concept. Robotics and Computer-Integrated Manufacturing, 1(1), 115–123.
6. Nakazawa, H. (1987). Information integration method. Corona (in Japanese).
7. Nakazawa, H. (2006). Design Navi for products development. Kougyouchousakai (in Japanese).

Chapter 2

Feasibility Prediction Theory

This chapter explains the Feasibility Prediction Theory (FPT) and how to use it. The theory determines the overall optimal feasibility of a system by calculating the feasibility probability of every functional requirement of the system and integrating these probabilities based on an axiom. Calculating the probabilities requires a collection of objective data to determine the range of parameter variation, which is not always available. Even in that case, it is shown that data can be collected and the probabilities calculated by using a sensory (Kansei) evaluation.

2.1 Scale of Measuring Feasibility is Required

The definition of the Feasibility Prediction Theory (FPT) was given in Chap. 1 and is reproduced here: Feasibility Prediction Theory is a theory that predicts the feasibility of a system axiomatically and stochastically, considering all of its functional requirements comprehensively.

By the way, the book “Built to Last” [1] was once a bestseller in Japan. The authors defined visionary companies as the best of the best in their industries, companies that have been that way for decades. They also asserted that in order to become a visionary company, a big, hairy project (business plan) must be set and realized. However, no matter how hairy the project is, there must be a guarantee that it can be realized. In other words, it is important to predict the feasibility of the project. The FPT in this book can predict the feasibility probability of the system as a whole from the individual feasibility probabilities of its functional requirements. When a project with a hairy goal, on which a company’s future success hangs, is about to be executed, management may think they have fully researched and considered all aspects of its feasibility. But when it comes time to make the decision and put it into action, it is difficult to rely on one’s own intuition without an objective measure of feasibility; such a decision requires a great deal of courage. In such cases, if we have a rational and reliable measure that can predict the overall feasibility of the system as a whole, even when there are trade-offs between its individual functional requirements, management can predict the feasibility of the system with a great deal of confidence. This measure is the system feasibility probability proposed in the FPT.

If we generalize further, we can notice a pattern in the way we work daily. When we have a request, we first conceptualize it. This conceptualization is the initial system. The system here includes the concrete object to be realized, such as a proposal, design, plan, management plan, or policy. A more detailed expression of the contents of the system is the so-called specifications, or functional requirements; in this book we use the term functional requirement (FR). The term does not refer only to performance; it also covers cost, delivery time, safety, social contribution, and so on, and it includes specific target values. Once the conceptualization step is complete, the next step is to design a specific system based on these functional requirements. The creation of such a concrete system is called design in this book. Generally, more than one system proposal is created; we also call these options. The plans are then submitted to, say, the board of directors, which selects the one with the greatest potential to realize the objectives. This process of selecting the best system is important. The best system is the one that meets the target values of all functional requirements. In reality, however, the functional requirements have different dimensions and cannot be evaluated on a common basis. In addition, there are trade-offs: an option that manages to satisfy one requirement may fail to satisfy another, which makes the choice very difficult. A simple scale is required for selecting the proposal that is most likely to realize all the functional requirements. This is exactly the scale that the FPT of this book proposes.

Let me explain the general flow with a concrete example of automobiles; see Table 2.1. (Apologies for the old data: this is part of the data for liter cars, cars with 1000 cc engines, in 1985.) These data were presented in a consumer magazine at the time, but the magazine did not mention which car was the better choice. Table 2.2 shows the target values, from fuel consumption to price (hereafter also referred to as the design range), for the six models, which the author tentatively set. Even given such specific data, when the physical quantities of the functional requirements differ and there are trade-offs between them, it is difficult to choose the car most likely to achieve the required performance in terms of the overall optimum. Furthermore, we do not want a car in which most items are quite feasible but some items are not feasible at all: for example, a car with excellent, highly feasible acceleration whose fuel consumption target is almost unattainable. What we need is a way to find the car that is free from such defects and that can realize the target values in a holistically optimal manner. From another point of view, even though the performance varies within the ranges shown in Table 2.1 (in general, the performance of a car always varies depending on driving conditions), we want a method for choosing the car that can achieve the required performance of Table 2.2 in a holistically optimal manner, not in a partially optimal one.

Table 2.1 Performances of liter cars (data from 1985)

| Functional requirement | Car A | Car B | Car C | Car D | Car E (Car C with automatic transmission) | Car F (Car B with diesel engine) |
|---|---|---|---|---|---|---|
| Fuel consumption (km/l) | 12.4 ~ 24.6 | 11.1 ~ 21.9 | 11.0 ~ 21.8 | 11.4 ~ 22.1 | 8.7 ~ 16.7 | 15.7 ~ 25.6 |
| Acceleration performance* (s) | 4.7 ~ 15.9 | 4.3 ~ 14.4 | 4.8 ~ 16.6 | 5.0 ~ 16.9 | 5.0 ~ 15.2 | 6.1 ~ 23.1 |
| Room noise (phon) | 45 ~ 68 | 49 ~ 71 | 45 ~ 68 | 48 ~ 69 | 45 ~ 68 | 58 ~ 74 |
| Trunk capacity (l) | 102 | 148 | 178 | 169 | 178 | 148 |
| Price (ten thousand ¥) | 82.8 | 84.4 | 82.4 | 77 | 86 | 91.9 |

*Acceleration time from 40 km/h to 80 km/h

Table 2.2 Target values of liter cars

| Functional requirement | Design range |
|---|---|
| Fuel consumption (km/l) | More than 15.0 |
| Acceleration performance (s) | Less than 12.0 |
| Room noise (phon) | Less than 55 |
| Trunk capacity (l) | Good at 120 or more; unacceptable at 50 or less |
| Price (ten thousand ¥) | Good at 80 or less; unacceptable at 120 or more |

We often come across similar cases in daily work, where we necessarily have to choose a system that is feasible in a holistically optimal manner. In other words, whenever an act of selection is necessary, we must always select the system that is holistically optimal. A closer look at realizing functional requirements shows that each functional requirement has a target value. Therefore, to realize a functional requirement is to achieve its target value, and predicting feasibility means measuring probabilistically to what extent the target value of each functional requirement can be realized. Suppose we have a scale that can measure the feasibility of each item; how do we then evaluate the system as a whole? The intuitive approach is to use a comprehensive scale that integrates the quantitative feasibility of each functional requirement. If this measure can predict the overall optimal feasibility even when there are trade-offs among the items, it will be a great success. Again, even if a system receives a good overall value, the measure is worthless if the system has hidden defects, i.e., items with abnormally low feasibility (feasibility probability). The overall measure must therefore have the property that if any item is defective, the overall probability of feasibility becomes low. With this strategy the Information Integration Method was established in 1983 using the concept of information. The method has since evolved, replacing information with probability to make it easier to use, as introduced hereafter.
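To see why a multiplicative measure has the defect-punishing property demanded here, while a simple average does not, consider this small sketch (the item probabilities are hypothetical):

```python
# Two hypothetical systems: P is uniformly good, Q hides one nearly infeasible item.
systems = {"P": [0.9, 0.9, 0.9, 0.9], "Q": [1.0, 1.0, 1.0, 0.05]}

for name, probs in systems.items():
    average = sum(probs) / len(probs)
    product = 1.0
    for p in probs:
        product *= p
    print(f"System {name}: average = {average:.3f}, product = {product:.3f}")

# Averages: P = 0.900, Q = 0.762 -- the defect in Q is nearly invisible.
# Products: P = 0.656, Q = 0.050 -- the hidden defect collapses Q's overall feasibility.
```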

2.2 Probability of Feasibility

Now let's begin explaining the feasibility probability. I will not go into a rigorous mathematical explanation, so as not to confuse readers unfamiliar with probability theory; please understand it intuitively. Probability in this book is understood as the degree to which a specific state is realized, expressed as a value from 0 (the state cannot occur) to 1 (it is perfectly realized). The probability of realizing a set target value is here called the feasibility probability. Using the fuel consumption of car A in Table 2.1 above as an example, let's see how achievement of the fuel consumption target can be expressed with probability. The table shows that the fuel consumption of car A ranges from 12.4 to 24.6 km/l, depending on how the car is driven. The probability density function shows the proportion of fuel consumption falling in a specific range; a graphical representation is given in Fig. 2.1. The variable on the abscissa is usually called the probability variable, but it is referred to as a parameter in this book. The characteristics of fuel consumption can be expressed as a probability density curve as in Fig. 2.1, although many patterns are possible depending on driving conditions and the habits of the driver (the actual curve is more complicated than this). The meaning of the curve is that, for any interval on the abscissa, the area bounded by that interval, the curve above it, and the abscissa represents the probability of fuel consumption occurring within that interval. The total area bounded by the curve and the abscissa is therefore 1, because the fuel consumption of this car necessarily falls between the end points where the curve meets the abscissa. On the other hand, the fuel consumption target is that it should be greater than a specified value (in this case, 15 km/l; apologies for the old data, but that was a common-sense value at the time).

Fig. 2.1 Probability density curve for fuel consumption (km/l) of car A; the curve spans 12.4 to 24.6, with the design range above 15.0

Fig. 2.2 Uniform probability density for fuel consumption (km/l) of car A, showing the system range ls (12.4 to 24.6), the design range above 15.0, the common range lc, and the density height h

This target is the range we want to achieve as a goal, and it is called the design range. In this case, the design range is as shown in Fig. 2.1. In fact, as mentioned above, the fuel consumption curve varies depending on how the car is used. Moreover, since the car will be sold to an unspecified large number of people, if a car is developed to suit one particular probability density curve and is then used in a different way, the results will not be good. In other words, evaluating with one particular probability density curve leads to incorrect evaluations in practice. Therefore, in such a case the probability distribution shown in Fig. 2.2 is used. This distribution is called the uniform probability density distribution; it means that, over the specified range, fuel consumption has the same probability of occurring at any position of the parameter. Since it is unrealistic for a particular fuel consumption to occur with unusually low or high probability, this uniform probability density distribution has some rationality. The range of variation in performance (in this case, fuel consumption) is called the system range (ls). The common part of the design range and the system range is called the common range (lc). Developing (or selecting) a good car means that the system range, the range of variation in performance, lies entirely within the design range. Once such a vehicle is developed (or selected), fuel consumption will be within the required range no matter what driving style is used. Note that if the entire system range is inside the design range, the common range has the same length as the system range.

Now, let's evaluate the degree to which the fuel consumption target for car A can be achieved based on Fig. 2.2. If the car is driven within the common range (lc), the target is achieved, so it is sufficient to find the probability of the common range. Let P be that probability; its area (the gray region in Fig. 2.2) is the probability of fuel consumption occurring in that range. If the height of the probability density is h, the probability of the common range is h·lc. Since the total area h·ls is 1, h is given by

h = 1 / ls

and we obtain the probability of the common range P as

P = lc / ls    (2.1)

In other words, assuming a uniform probability density distribution, the feasibility probability of fuel consumption is obtained as the ratio of the common range length to the system range length. It is clear from Fig. 2.2 that the further the system range moves into the design range, i.e., the more likely the goal is to be realized, the closer the system range length and the common range length become, and as a result the feasibility probability approaches 1. This means that the car can achieve the target fuel consumption regardless of how it is driven. Conversely, the further the system range moves outside the design range, the smaller the common range becomes. If the system range is completely outside the design range, the common range is zero and the feasibility probability becomes zero. Now let's actually calculate the feasibility probability of the fuel consumption of car A based on the values in Fig. 2.2.

P = (24.6 − 15) / (24.6 − 12.4) = 0.787

That is, the feasibility probability of the fuel consumption of car A is 0.787. There are many possible patterns of probability density, but here we have approximated them with a uniform probability density. The question arises whether this causes a fatal error. Of course, there is a difference between the uniform pattern and the actual pattern, so errors do occur. However, we use this feasibility probability to select the best proposal and/or to develop the best product with the Design Navi described in Chaps. 5 and 6; moreover, we use it to evaluate the project that is most likely to be realized. We therefore deal with cases in which the system range should be included in the design range as much as possible. This means that, whatever the actual pattern, the system range will be included, or almost included, in the design range, so the error between the actual pattern and the uniform probability density pattern becomes very small or zero for practical use. If the system range is completely included in the design range, the feasibility probability is 1, as described above. In the case where two alternatives exist and both system ranges are within the design range (as shown in Fig. 2.3), the feasibility probability is 1 for both, and theoretically the two must be treated equally. Although B may look better than A in practical use, the feasibility probability has a purely mathematical meaning, and there is no difference in practical significance: the value is the same whenever the feasibility probability is the same. However, if you really want to distinguish between A and B, you can make the design range a little stricter, whereupon the feasibility probability for A becomes smaller than 1.
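Since Eq. (2.1) is just an interval overlap, it is easy to compute mechanically. The following is a minimal sketch, assuming Python and helper names of my own choosing (not from the book), that reproduces the car A calculation.

```python
def feasibility_probability(system_lo, system_hi, design_lo=None, design_hi=None):
    """Feasibility probability P = lc / ls under a uniform density (Eq. 2.1).

    The design range may be one-sided: pass design_lo for "at least x"
    targets, design_hi for "at most x" targets, or both for an interval.
    """
    if design_lo is None:
        design_lo = float("-inf")
    if design_hi is None:
        design_hi = float("inf")
    ls = system_hi - system_lo                       # system range length
    lc = max(0.0, min(system_hi, design_hi) - max(system_lo, design_lo))
    return lc / ls                                   # common range / system range

# Fuel consumption of car A: system range 12.4-24.6 km/l, target >= 15 km/l
print(feasibility_probability(12.4, 24.6, design_lo=15))   # ~0.787
```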


Fig. 2.3 Feasibility probability is 1 for both proposals

2.3 Axiomatic Evaluation of the Feasibility of the System

From the above explanation, it is clear that feasibility can be measured by probability for single items, but how should we assess the overall optimum probability of a system with multiple functional requirements? Here it is useful to have a single number that assesses the feasibility of the system in a holistically optimal way. A system that is an aggregate of multiple functional requirements is meaningless unless all the functional requirements function in a holistically optimal manner. In other words, the system should be evaluated based on the feasibility of the functionality of the system as a whole. No matter how great the performance of one item may be, if there is even one item that is not feasible, the evaluation method must lead to the conclusion that the system as a whole is not good. In other words, the FPT must lead to the conclusion that a system that is not totally optimized is not good. Accordingly, the highest integrated value of the feasibility probabilities of the individual functional requirements corresponds to the highest feasibility of the system as a whole, realizing all goals. Suppose now that there are two functional requirements, FR1 and FR2, and let P1 and P2 be their respective feasibility probabilities. Since the functionality of the system requires that the functional requirements be realized at the same time, these two probabilities are obtained as values when the two functional requirements occur simultaneously. The probability that two states are realized at the same time is called the compound probability in probability theory, and its value is the product of the two, i.e., P1·P2. Therefore, the compound probability of all functional requirements occurring at the same time is the product of the feasibility probabilities of all functional requirements. That is, if the number of functional requirements is n, the compound probability Ptotal is given by the following equation.

Ptotal = ∏(i=1 to n) Pi    (2.2)

The symbol ∏ in Eq. (2.2) means that the n feasibility probabilities P1 to Pn are multiplied together. Now that it appears this measure may be used to evaluate the feasibility of a system, we call this compound probability the system feasibility probability. However, just saying "may be used" is not enough; we need grounds for confidence that we can use it. We could try to prove this theoretically, but that would require first proving the evidence needed for the proof, and going backwards in this way would continue forever, a waste of time. Instead of such a troublesome process, we adopt the idea of starting from "we may use it" as a self-evident truth. Such a self-evident truth is called an axiom. So we propose the following axiom and proceed with the discussion from there.

Axiom: The system feasibility probability is the probability that all functional requirements of the system can be realized in a holistically optimal manner.

This axiom guarantees that the system feasibility probability indicates the probability that the system can be realized in a holistically optimal manner, considering all of its functional requirements. As the previous explanations show, the functional requirements need not be independent of each other, because they are required to be realized simultaneously. Furthermore, the axiom states that the system feasibility probability not only represents the feasibility of the overall system functionality but, even where there are trade-offs between items, or where individual feasibility probabilities are large or small, captures them comprehensively and guarantees the feasibility of the system as a whole. It also states that when selecting the best system from several with the same functional requirements, the system with the highest system feasibility probability is the best one. The natures and dimensions of the individual functional requirements are completely different, and yet through probability they can be evaluated on an equal footing: even if the dimensions differ across all requirements, they can all be evaluated together on a single common scale, the system feasibility probability.

To say a little more about axioms: axioms are generally accepted truths. "An axiom is a truth so self-evident that it does not need to be proven true..." (a Japanese dictionary), and various theoretical systems are constructed on such foundations. In general, once something is theoretically proven to make sense, we accept it as true; but thinking carefully, we realize that every theory starts from an axiom or principle that has not itself been proven. The origin is kept hidden (or rather, omitted from the explanation), and logical conclusions are drawn theoretically from the subsequent stages. We generally do not go all the way back to the source, so we accept it unconsciously. In this sense, all orthodox theories of modern science and technology rest on axioms: the theorems of mathematics and geometry, Newtonian mechanics, and Ohm's law for electric circuits are all axiomatic systems. The axiom proposed here plays the same role. If a field is found in which this axiom is contradicted, then this axiomatic system cannot be used in that field; Newtonian mechanics, likewise, does not apply to the laws of physics (quantum mechanics) governing microscopic systems such as molecules, atoms, and elementary particles. Fortunately, the Information Integration Method (see Chap. 1), on which the FPT is based, has not encountered any contradiction in the areas we have dealt with so far. The soundness of the axiom is also supported by the fact that many companies have developed products with excellent performance in a short time using Design Navi. As explained in Chap. 1, although the functional forms of the information measure there and of the feasibility probability here differ, their essence is the same, which also speaks for the correctness of the axiom.

Returning to the feasibility axiom: as explained above, the system with the highest system feasibility probability is guaranteed to be the best system balancing all functional requirements. However, if even one item has a very low probability of realization (low feasibility), then no matter how good the other functional requirements are, the system is not feasible from the viewpoint of total optimization. For example, if one feasibility probability is zero, the system feasibility probability is zero no matter how large the others are, and the system is the worst and will never be selected. We have been using the term FPT for this theory; formally, the FPT is the combination of Eq. (2.2) and the axiom. Since it rests on a product of probabilities, it can also be called the ∏-theory as an umbrella term; use whichever name is easier for you. The name FPT is used throughout this book. The main steps in actually using the FPT are as follows (a code sketch of the calculation follows the list).

(1) Determine the functional requirements and design ranges.
(2) Measure or estimate the system range for each functional requirement.
(3) Determine the common range coefficients (explained later), if necessary.
(4) Calculate the system feasibility probability.
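As a small illustration of steps (2) through (4), the following sketch, again in Python with assumed names, combines per-item feasibility probabilities into the compound probability of Eq. (2.2). Note how a single weak item pulls the product down, exactly as the axiom requires.

```python
import math

def system_feasibility_probability(probabilities):
    """Compound probability of Eq. (2.2): the product of all item probabilities.

    A single defective item (probability near 0) drags the whole product
    toward 0, which is the behavior the axiom demands.
    """
    return math.prod(probabilities)

# Example with three functional requirements; one weak item dominates:
print(system_feasibility_probability([0.9, 0.8, 0.7]))   # 0.504
print(system_feasibility_probability([0.9, 0.8, 0.05]))  # 0.036
```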

The FPT is the basic theory for the New Feasibility Study (Chaps. 3 and 4) and the Design Navi. However, this theory can also be used for evaluation of various systems. In the next section, we will explain how to use the FPT for evaluation of systems according to this process.

2.4 Functional Requirements and Design Range

The basic components of the FPT are functional requirements, design ranges, and system ranges. The common range is not a basic element because it is obtained by combining the design range and the system range. Functional requirements and design ranges are explained below.

When designing or evaluating a system, the first question is what functional requirements should be fulfilled. We often see the mistake of starting a job without clearly defining these functional requirements. Do not proceed with work based only on vague ideas. First, the project must explicitly define its goals and functional requirements. If instead you proceed based on tacit understanding, assuming that everyone is on the same page, the project will later fail through various conflicts and defects among the organization and team members. To define functional requirements explicitly is also to define them quantitatively. Especially in product development, a quantitative target value must be determined for each functional requirement; this target value is also the final goal of the development, which is complete only when the goal is achieved. This specific target value is the design range we have discussed so far. When you buy something inexpensive for everyday use, there is no need to decide on functional requirements, but when you buy something expensive, it is necessary. When buying a house, for example, you need to clearly define the functional requirements and set specific targets, or you will be swayed by trivial features and flashy advertising and end up making a terrible purchase!

There are usually multiple functional requirements, and they are rarely completely independent of each other: this reflects the organic nature of a system, and it is only natural that they influence one another beneath the surface. Nevertheless, you should choose items that are as independent as possible on the surface, because evaluating several similar items at the same time would count their shared characteristics two or three times over and lead to a strongly biased evaluation. Even items that appear independent still influence each other beneath the surface, but that is a correct reflection of the real system as it is; in practice the system range data are obtained under exactly these circumstances, so realistic feasibility probabilities are used. In this sense, the FPT is a reasonable method.

Design ranges, on the other hand, must be determined rationally and correctly: neither too strict nor too loose. For example, when evaluating the feasibility of a project, the design ranges are the minimum that the project must achieve. The FPT mechanically calculates the feasibility for a given design range; if the design range is wrong or inappropriate, a wrong conclusion is mechanically derived, out of line with the original objective of the project. I hope that methods for determining design ranges will be developed as a new discipline, apart from this prediction method. The design range must be set reasonably and correctly, but a very difficult problem is inherent in this in reality. For example, in the case of the automobile fuel consumption mentioned above, marketing research may indicate that fuel consumption must be higher than some value to sell to customers in the target segment, or that emissions per vehicle mileage must be lower than some value to meet CO2 regulations, and so on. At the same time, in product development projects we must not become so conscious of competition among companies that we create systems of excessive quality with an oversupply of functions. Consumers do not want that much, and in some cases the result is a system that is difficult to use. More functions increase costs and do not necessarily satisfy customers; products with an oversupply of functions are generally hard to handle, and customers end up paying a lot of money for something difficult to use, which should be avoided.


There are many possible patterns for the design range; Fig. 2.4 shows a general set of patterns. Let's calculate the feasibility probability for cases (1) through (6), where the squares in the figure represent system ranges with uniform probability density.

P1 = 5/5 = 1
P2 = 3/5 = 0.6
P3 = 3/5 = 0.6
P4 = 0/5 = 0
P5 = 1/5 = 0.2
P6 = 5/5 = 1

Fig. 2.4 Various design ranges

The design ranges described above were all given as above a certain value, below a certain value, or between two values. In some cases, however, you may want to target a single number: for example, the resistance of an electronic component, or a transportation system that must arrive at a specific time. In such a case the target is one specific number, and the design range becomes a single line rather than a range, while the system range generally has a certain width; the width of the common range is then zero, and the feasibility probability is always zero. Since this makes evaluation impossible, we proceed as follows. Even if a system is manufactured aiming at a single target value, the actual characteristic values will always vary around that value. We therefore define a design range by requiring that the difference between the target value and the actual value fall within a certain range. For example, if a film is to be manufactured with a thickness of 0.1 mm and the tolerance is 0.1 ± 0.01 mm, the design range is set at this tolerance. If the manufacturing target is d ± δ, the design range is set within ±δ centered on d; this value of δ is also the tolerance range for quality control. A further extension of the problem is the case where we want to correctly output a particular value y that stands in a functional relationship to the input x, i.e., to realize a system in which input and output are in a functional relationship (Fig. 2.5):

y = f(x)    (2.3)

Fig. 2.5 Functional requirements are in a functional relationship

In this case, determining the design range becomes the task of setting, for the target value at each input xi, the width of the design range for the corresponding functional requirement value yi.

2.5 How to Calculate a System Range

System ranges, such as the fuel consumption in Table 2.1 above, are continuous-type parameters, but the data actually taken at measurement time are discrete. The system range is calculated from those data; that is how the range 12.4 to 24.6 was determined. Let's see how the system range can be obtained from such discrete data. First, from the given data, determine the mean value m and the standard deviation s (the equations for obtaining these are given below), and then determine the center position of the system range and the width from the center using the following expression.

m ± ks    (2.4)

The mean value m determines the center of the system range, and ±ks is the extent over which the data disperse from that center. We refer to k as the system range coefficient; generally, k = 3. Put simply, the standard deviation is a number indicating how the data are dispersed around the mean, and ±3s means that 99.7% of the data will be scattered within that range around the mean. Since the data we collect can be thought of as a finite sample drawn from a population with an infinite number of samples (data), the standard deviation s is obtained using the following equation, where n is the number of data points.

s = √[ (1/(n − 1)) Σi=1..n (xi − m)² ]    (2.5)

Such calculations can easily be performed with Excel or with the statistical functions built into scientific calculators, simply by entering the data. Note that there are two types of equations for the standard deviation: the other has 1/n inside the root instead of 1/(n − 1), and it applies when the n data points are the entire population. The equation above is always used here, since in general a finite number of data are obtained (measured) from an effectively infinite population, and we are estimating the population standard deviation. Let's try a concrete calculation. For example, given the three data points 10, 12, and 14, m and s are calculated as follows.

m = 12

s = √[ (1/2)((10 − 12)² + (12 − 12)² + (14 − 12)²) ] = 2

Then, taking k = 3, the system range ls is

ls = 12 ± 6 = [6, 18]

This means that the system range is from 6 to 18. When quantitative and objective data cannot be obtained, such as in the case of usability or design preference, multiple evaluators are asked to give a sensory evaluation. For example, you could use a 10-point scale with a design range of 6 points or more and ask them to give scores. The number of evaluators should be at least 30 to keep the distribution of the data close to a normal distribution. Let's do a specific calculation. Suppose that the design preference of a certain product is evaluated by five evaluators (only five, because this is merely a calculation example), and suppose the results are 7, 8, 9, 6, and 5. As can be seen from the preceding discussion, the system range ls is not simply [5, 9]: with a somewhat larger number of evaluators, the range over which the data disperse would become larger. The mean and standard deviation are obtained with this possibility in mind, and the system range is taken to be three standard deviations on each side, as shown above. We then obtain the following system range.

ls = 7 ± 3 × 1.58 = [2.26, 11.74]

Here, however, the maximum value can never exceed 10, since the evaluation is based on a 10-point scale. Therefore, the system range is taken to be between 2.26 and 10. Assuming a design range of 6 points or more, the feasibility probability P of this functional requirement is

P = (10 − 6) / (10 − (7 − 3 × 1.58)) = 0.517

In cases such as this, where objective quantitative data cannot be measured, the feasibility probability can be predicted using kansei (sensitivity) assessment. We will return to kansei evaluation later.
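The calculation just shown can be scripted; a minimal sketch assuming Python's statistics module, with the system range capped at the 10-point maximum as in the text.

```python
import statistics

def score_feasibility(scores, design_min=6, k=3, scale_max=10):
    """System range m +/- ks from evaluator scores, upper end capped at the
    scale maximum, then P = common range / system range (design range:
    design_min points or more)."""
    m = statistics.mean(scores)
    s = statistics.stdev(scores)                 # sample form, 1/(n-1), as in Eq. (2.5)
    lower = m - k * s
    upper = min(m + k * s, scale_max)            # a 10-point scale cannot exceed 10
    common = max(0.0, upper - design_min)
    return common / (upper - lower)

print(score_feasibility([7, 8, 9, 6, 5]))   # ~0.517, matching the text
```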

2.6 Common Range Coefficient

The section on design ranges above explained the case where a single numerical value is used as the target; this section explains the case where the system range itself can only be expressed as a single numerical value rather than a range. For example, in Table 2.1, the trunk capacity or the price of a car cannot be treated as a system range with a certain width. If such a value is within the design range, the feasibility probability is 1; however, if the value is even slightly outside the design range, the feasibility probability immediately becomes 0. And if even one item has a feasibility probability of 0, then no matter how good all the other items are, the system feasibility probability is 0 and the proposal will not be selected. It would be somewhat unreasonable for the feasibility probability to jump abruptly from 1 to 0 when the value is only slightly outside the design range, so we give this change a gradation by introducing a function called the common range coefficient. In Fig. 2.6, take a system parameter such as trunk capacity on the horizontal axis and the common range coefficient kc on the vertical axis. The common range coefficient takes a value between 0 and 1. Assuming that the design range is everything greater than a, kc is 1.0 at a; at c, a value below which is absolutely unacceptable, kc is 0. Connect the point (a, 1.0) with (c, 0) by a straight line. If the system parameter happens to lie at a position b between a and c, we go up from b and read off the value of kc where we cross this line. The common range for this position is then considered as follows.


Fig. 2.6 Common range coefficient

Assuming that there is a system range w of some infinitesimally small width centered at b, we take kc times w as the common range at that position. From these values we obtain the feasibility probability P as follows.

P = kc·w / w = kc    (2.6)

In other words, kc itself is the feasibility probability in this case. kc can also be obtained analytically, without drawing a diagram, from the following equation, where cb and ca denote the lengths of the segments from c to b and from c to a.

kc = cb / ca    (2.7)

Now let us actually calculate the feasibility probability for the common range coefficient line shown in Fig. 2.7. Suppose the mass of this system is 105 kg. If the mass were less than 100 kg, it would be within the design range, the feasibility probability would be 1, and there would be no problem. Given that 110 kg is definitely unacceptable, a reasonable feasibility probability is obtained from the common range coefficient as follows.

P = kc = 0.5    (2.8)
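Reading kc off the straight line of Fig. 2.6 or 2.7 amounts to linear interpolation; here is a sketch under that assumption (Python, with a function name of my own), using the mass example of Eq. (2.8) and the trunk capacity case that appears later.

```python
def common_range_coefficient(x, full, unacceptable):
    """kc as in Eq. (2.7): 1 at the design range boundary `full` (or better),
    0 at the absolutely unacceptable value, linear in between.
    Works whether smaller or larger values are better."""
    kc = (unacceptable - x) / (unacceptable - full)
    return min(max(kc, 0.0), 1.0)               # clamp to [0, 1]

# Mass: design range 100 kg or less, 110 kg unacceptable (Fig. 2.7)
print(common_range_coefficient(105, full=100, unacceptable=110))  # 0.5
# Trunk capacity: 120 l or more sufficient, 50 l unacceptable (Fig. 2.8)
print(common_range_coefficient(102, full=120, unacceptable=50))   # ~0.74
```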

Fig. 2.7 Common range coefficient for mass (kg): the design range is 100 kg or less, and kc = 1.0 at 100, 0.5 at 105, and 0 at 110

2.7 The Feasibility Prediction Theory

Now that the basics necessary to calculate the feasibility probability have been covered, the system feasibility probability of the cars illustrated in Tables 2.1 and 2.2 can be calculated. Let us assume that the car developed in Company A's project is car A, and let us predict whether the target values of the functional requirements, i.e., the design ranges shown in Table 2.2, can be realized in a holistically optimal manner, comparing car A with the other companies' cars.

Assume first that you have developed car A. How well do you expect it to achieve the requirements of Table 2.2 as a whole, and does it compare favorably with the other companies' cars? Such a decision is probably difficult for human intuition, and it is impossible to reliably evaluate by intuition the feasibility probability that the car will achieve the required performance as a whole. The score evaluation method could be used here, but as mentioned before, it is a partially optimal evaluation method: there is a risk that a proposal scoring high on a particular item will be selected even when other items are deficient, and it has further drawbacks such as the need for ambiguous weighting. The FPT we have learned so far is thus a powerful tool for selecting the car that best realizes Table 2.2. The FPT first calculates the feasibility probability for each functional requirement, then the overall system feasibility probability. Since the system feasibility probability is, according to the axiom, a measure that allows each car to be evaluated in an overall optimal manner, the car with the largest value is the car with the highest likelihood (probability) of achieving the target values of Table 2.2 as a whole. In other words, the optimal car can be selected considering all the functional requirements together. First, since system ranges are given for fuel consumption, acceleration performance, and interior noise, their feasibility probabilities can be calculated directly by finding the common ranges. The system ranges of trunk capacity and price are each given only as a single number, so the common range coefficient is used. The calculation results are shown below for car A only, starting with fuel consumption, acceleration performance, and interior noise.

Fuel consumption: P1 = (24.6 − 15) / (24.6 − 12.4) = 0.79    (2.9)

Acceleration performance: P2 = (12 − 4.7) / (15.9 − 4.7) = 0.65    (2.10)

Interior noise: P3 = (55 − 45) / (68 − 45) = 0.43    (2.11)


Fig. 2.8 Common range coefficient of trunk capacity

For trunk capacity, the common range coefficient must first be determined. Since a trunk capacity of 50 l or less is completely unacceptable and a trunk capacity of 120 l or more is specified as sufficient, the common range coefficient line is as shown in Fig. 2.8, and kc can be read from the figure. Using this, the feasibility probability for a trunk capacity of 102 l is obtained as follows.

Trunk capacity: P4 = (102 − 50) / (120 − 50) = 0.74

The price is calculated similarly, based on Fig. 2.9.

Price: P5 = (120 − 82.4) / (120 − 80) = 0.94

Fig. 2.9 Common range coefficient of price (10 thousand ¥): kc = 1.0 at 80 and 0 at 120

The same calculation can be done for all the cars from the other manufacturers; the results are shown in Table 2.3. From these results, all the feasibility probabilities are multiplied together to obtain the system feasibility probability. The car with the largest value, by a small margin, is car C from another company, and we can conclude that this car has a higher feasibility probability of fulfilling all five functional requirements than the other cars.

Table 2.3 Calculation results of feasibility probability

| Functional requirement | Car A | Car B | Car C | Car D | Car E (Car C with automatic transmission) | Car F (Car B with diesel engine) |
|---|---|---|---|---|---|---|
| Fuel consumption (km/l) | 0.79 | 0.64 | 0.63 | 0.66 | 0.21 | 1 |
| Acceleration performance | 0.65 | 0.76 | 0.67 | 0.59 | 0.69 | 0.35 |
| Room noise (phon) | 0.43 | 0.27 | 0.43 | 0.33 | 0.43 | 0 |
| Trunk capacity (l) | 0.74 | 1 | 1 | 1 | 1 | 1 |
| Price (10 thousand ¥) | 0.94 | 0.89 | 0.94 | 1 | 0.85 | 0.7 |
| System feasibility probability | 0.15 | 0.12 | 0.17 | 0.13 | 0.05 | 0 |

Then what about car A? If you look at the calculation results, only its trunk capacity is inferior to those of the other companies. For example, if the design could be modified to fit into the design range with a little more trunk capacity, that capacity would be 120 l or more, its feasibility probability would become 1, and the system feasibility probability would become 0.21, transforming car A into the best car. In this way, the FPT highlights the differences between our products and those of other companies, identifies areas for improvement, and tells us at a glance where and by how much we can improve the total evaluation. In this case, the design range value of 120 l was important; if it is set incorrectly, the development becomes meaningless. As mentioned before, setting the design range is important, and it must be set reasonably and correctly. In product development, it must adequately address the needs of the market. The same holds for the design ranges of the other items. For car F, the functional requirements other than interior noise do not seem particularly problematic, but the interior noise is defective: its system range falls entirely outside the design range, so the system feasibility probability is 0. Thus, if any item is deficient, the system feasibility probability will be a very small value, and the proposal will never be selected. In this sense, the method can be used with confidence. Furthermore, the FPT is characterized by the fact that even parameters with completely different characteristics and dimensions, from fuel consumption to price, can be evaluated consistently on the same footing, because they are all converted into dimensionless quantities called probabilities. In general, items with different characteristics must be evaluated separately, which makes it difficult to compare systems as a whole, but the system feasibility probability is a very easy concept to use.

Another thing to note about the value of the system feasibility probability: when comparing car A and car C, car C indeed comes out slightly larger, but that does not mean we can simply conclude that car C is superior. The system range naturally includes errors, so it is unreasonable to assume that a difference of this magnitude is significant. Determining theoretically how much difference in values is significant is a subject for future research, but one way to address it is to add other functional requirements so that the difference can be widened further. For example, test-driving and evaluating ride quality and driving performance with kansei are also important evaluation items. When the difference is small, it may be possible to increase the number of functional requirement items considered; originally, however, the necessary and sufficient functional requirement items should be selected and evaluated from the beginning. Adding or deleting functional requirements later is undesirable, as it indicates that they were not fully considered in the first place.

Finally, I would like to make one more important point. The results in Table 2.3 all show very low values for the system feasibility probability. How should these figures be interpreted? One way to think of it is that the probability 1/2 (the borderline between success and failure) is the limit for whether each functionality can be realized, so if the number of functional requirements is n, the system feasibility probability should be at least 0.5^n. In this case 0.5^5 = 0.03125, which would mean that all cars A through E in this table pass. However, with system feasibility probabilities as low as those in the table, we are somewhat concerned about whether the system will function sufficiently as a system. Rather, it is reasonable to want at least 0.5 as the system feasibility probability: to realize the system as a whole, the system feasibility probability should be at least 1/2. The calculation then runs in the opposite direction from the above: in this case, each individual feasibility probability must be 0.5^(1/5) = 0.871 or more. In general, it is reasonable to require that the feasibility probability of an individual functional requirement be greater than 0.5^(1/n), where n is the number of functional requirements. By that criterion, all the cars in the table are rejected. This is the approach used in this book.
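To reproduce Table 2.3 mechanically and apply the 1/2 and n-th-root-of-0.5 criteria just described, a sketch assuming Python, with the table's figures typed in by hand, might look like this.

```python
import math

table_2_3 = {  # per-item feasibility probabilities from Table 2.3
    "A": [0.79, 0.65, 0.43, 0.74, 0.94],
    "B": [0.64, 0.76, 0.27, 1.00, 0.89],
    "C": [0.63, 0.67, 0.43, 1.00, 0.94],
    "D": [0.66, 0.59, 0.33, 1.00, 1.00],
    "E": [0.21, 0.69, 0.43, 1.00, 0.85],
    "F": [1.00, 0.35, 0.00, 1.00, 0.70],
}
n = 5
item_threshold = 0.5 ** (1 / n)        # ~0.871 required per functional requirement
for car, probs in table_2_3.items():
    total = math.prod(probs)           # system feasibility probability
    ok = all(p >= item_threshold for p in probs)
    print(f"car {car}: total {total:.2f}, meets the 0.5^(1/n) rule: {ok}")
```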

2.8 Parameter Types and System Range

There are two types of parameters (random variables) that represent functional characteristics: discrete and continuous random variables. The way the system range is handled differs between them, so we discuss this point here; the section is a little technical, however, and skipping it poses no problem for using the theory. Let us first explain discrete random variables with the following example. Suppose one of a person's personnel evaluation items is scored on a 10-point scale by a panel of five reviewers, whose scores are 7, 9, 6, 7, and 8. Figure 2.10 illustrates the frequencies with this score as the parameter; such a parameter is discrete. The method for obtaining the system range in this case was explained above for kansei evaluation, so it is omitted here.

Fig. 2.10 Example of discrete random variables

Next, let us consider continuous random variables a little further, using the fuel consumption of car A mentioned earlier. We explained that it is reasonable to treat the fuel consumption probability density with a uniform distribution because many patterns are possible; let us now examine what happens if a normal distribution is assumed instead. Consider Fig. 2.11, where the probability density is almost zero at fuel consumptions of 12.4 and 24.6. If we can calculate the area to the right of the design range boundary, we can calculate the feasibility probability of fuel consumption for car A. Since the calculation is difficult as it stands, the distribution is converted to the standard normal distribution; once that is done, the probability is easily found from the table of areas for the standard normal distribution (attached at the end of this chapter). For this transformation, the fuel consumption random variable X is converted into the following random variable Z.

Z = (X − m) / s

Fig. 2.11 Normal distribution case

For this conversion, the mean m and standard deviation s must be known. The mean value is obtained as follows.

m = (24.6 + 12.4) / 2 = 18.5

The standard deviation is determined as follows. In the probability table for the standard normal distribution, the random variable Z has almost zero probability below −3.0 and above 3.0, so the rightmost point Z = 3.0 is aligned with the fuel consumption 24.6. Then, using the variable transformation above, we obtain s:

3.0 s = 24.6 − 18.5

s = (24.6 − 18.5) / 3.0 = 2.03

Using these m and s, the value of Z at the design range position 15 is obtained as follows.

Z = (15 − 18.5) / 2.03 = −1.72

The probability density curve for the standard normal distribution is now as shown in Fig. 2.12. The normal distribution table gives the probability (area) of the random variable from 0 to a specified position on the right, and the probability between 0 and 3 can be taken as approximately 0.5. The probability from −1.72 to 0 on the left side is obtained by looking up the probability from 0 to 1.72 in the table, which is 0.457. Therefore, the total probability to the right of −1.72 is the sum of these two probabilities:

P = 0.5 + 0.457 = 0.957

Fig. 2.12 Normal distribution curve

In other words, the feasibility probability is greater than the 0.79 obtained assuming a uniform probability distribution. Conversely, the uniform probability distribution produces a smaller probability than the normal distribution, so its value is on the safe side. We do not know whether it is really possible to assume a normal distribution for this fuel consumption (it would be rather unrealistic), but even if we dare to assume such a distribution, the calculation is quite involved, as shown here. Therefore, it is recommended to use the uniform probability distribution in such cases, since it is easier to calculate and its result is on the safe side. It is advisable to judge carefully in other cases whether it is really necessary to consider a normal distribution instead of a uniform probability distribution.
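The normal-distribution calculation can also be done without the table by using the standard normal cumulative distribution function; a sketch assuming Python's math.erf (the helper names are mine). The result differs from the table-based 0.957 only by rounding.

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

m, s = 18.5, 2.03            # fitted so that m + 3s matches 24.6 km/l
z = (15 - m) / s             # design range boundary in standard units
p_normal = 1.0 - normal_cdf(z)           # area to the right of z = -1.72
p_uniform = (24.6 - 15) / (24.6 - 12.4)  # Eq. (2.1) for comparison
print(f"normal: {p_normal:.3f}, uniform: {p_uniform:.3f}")  # ~0.958 vs 0.787
```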

2.9 The Feasibility Prediction Theory Combining Kansei Evaluation

In the evaluation of automobile performance above, the feasibility probability could be calculated because the system range could be measured objectively and quantitatively, but in reality there are many cases where data cannot be measured that way. In such cases, as mentioned before, we evaluate with human kansei instead of objective data. Let's examine this approach in a little more detail. Kansei (sensibility) can be interpreted in various ways, but according to the Kojien (a Japanese dictionary) it is the sensitivity of a sense organ that produces sensations and perceptions in response to stimuli from the outside world, so it has a strong tinge of subjective preference. Kansei evaluation is therefore well suited to the task of selecting the option that best matches human preferences. Kansei evaluation works by converting what is perceived through human kansei into a quantitative score. However, individual differences inevitably have an influence, and because we are human there is also room for misjudgment, so kansei evaluation needs to incorporate the judgments of as many monitors (evaluators) as possible to reduce such risks. Another drawback of sensory evaluation is that the monitors may not really understand the importance of the subject, especially in the case of innovative subjects, so it may be necessary to educate the monitors beforehand. The term score evaluation here has a different meaning from the traditional score evaluation method, in which the system with the larger total score is considered better: here the scores given by kansei simply substitute for quantitative data, and the final evaluation is derived using the FPT. There are various scoring scales, depending on what the perfect score is, but in general the 10-point system is used, and in that case the design range is often set at 6 points or more. For a more detailed evaluation a 100-point method can be used, but here we illustrate the application using the 10-point method.

2.9.1 Kansei Evaluation of Heating Equipment

Let's evaluate heating equipment. There are various types, but here we assume that three monitors evaluated the four types shown in Table 2.4: a movable (i.e., not stationary) gas heater, a common electric heater, a far-infrared heater whose element does not glow red but warms the room by emitting far-infrared radiation, and a slightly larger hot carpet. A larger number of monitors would be better, but we have chosen a small number because we are only explaining the FPT here. Also, the results shown are intended purely as an exercise in using the theory and do not represent the true nature of heating equipment. The four functional requirements are safety, warmth (the feet must also feel warm), mildness of heating, and running cost. Although the purchase cost may in fact also be an issue, I have deliberately focused the evaluation on performance alone. Running cost can be measured objectively and quantitatively and could of course be mixed in as quantitative data, but since this is an explanation of kansei evaluation, we have chosen to evaluate it by kansei as well.

Table 2.4 Heating equipment kansei evaluation using feasibility probability

Gas stove (movable, not FF)

| Functional requirement | Data | Mean value m | Standard deviation s | m + 3s | CR | Feasibility probability |
|---|---|---|---|---|---|---|
| Safety | 5, 7, 6 | 6.00 | 1.00 | 9.00 | 3.00 | 0.50 |
| Warmth (also underfoot) | 7, 8, 7 | 7.33 | 0.58 | 9.07 | 3.07 | 0.88 |
| Mildness | 5, 7, 6 | 6.00 | 1.00 | 9.00 | 3.00 | 0.50 |
| Running cost | 7, 8, 8 | 7.67 | 0.58 | 9.40 | 3.40 | 0.98 |

System feasibility probability: 0.22

Electric heater

| Functional requirement | Data | Mean value m | Standard deviation s | m + 3s | CR | Feasibility probability |
|---|---|---|---|---|---|---|
| Safety | 6, 7, 7 | 6.67 | 0.58 | 8.40 | 2.40 | 0.69 |
| Warmth (also underfoot) | 6, 7, 7 | 6.67 | 0.58 | 8.40 | 2.40 | 0.69 |
| Mildness | 5, 6, 7 | 6.00 | 1.00 | 9.00 | 3.00 | 0.50 |
| Running cost | 6, 7, 7 | 6.67 | 0.58 | 8.40 | 2.40 | 0.69 |

System feasibility probability: 0.17

Larger hot carpet

| Functional requirement | Data | Mean value m | Standard deviation s | m + 3s | CR | Feasibility probability |
|---|---|---|---|---|---|---|
| Safety | 8, 9, 8 | 8.33 | 0.58 | 10.07 | 4.07 | 1.00 |
| Warmth (also underfoot) | 8, 8, 7 | 7.67 | 0.58 | 9.40 | 3.40 | 0.98 |
| Mildness | 8, 9, 9 | 8.67 | 0.58 | 10.40 | 4.40 | 1.00 |
| Running cost | 8, 9, 8 | 8.33 | 0.58 | 10.07 | 4.07 | 1.00 |

System feasibility probability: 0.98

Far-infrared heater

| Functional requirement | Data | Mean value m | Standard deviation s | m + 3s | CR | Feasibility probability |
|---|---|---|---|---|---|---|
| Safety | 8, 9, 8 | 8.33 | 0.58 | 10.07 | 4.07 | 1.00 |
| Warmth (also underfoot) | 6, 7, 8 | 7.00 | 1.00 | 10.00 | 4.00 | 0.67 |
| Mildness | 6, 7, 8 | 7.00 | 1.00 | 10.00 | 4.00 | 0.67 |
| Running cost | 5, 7, 6 | 6.00 | 1.00 | 9.00 | 3.00 | 0.50 |

System feasibility probability: 0.22

One of the cautions in conducting a rational kansei evaluation is that each item should be evaluated across all objects before moving on to the next: if the item is safety, evaluate only safety for all objects first. If we take one proposal, say the gas stove, as a standard and score the other proposals relative to it, the risk of abnormal scores can be avoided to some extent. Let us calculate the feasibility probability for the safety of the gas stove. The three monitors' scores on a 10-point scale are 5, 7, and 6. The column m in the table is the mean value of these data (all table calculations are performed with spreadsheet software).

m = (5 + 7 + 6)/3 = 6

The question is how to determine the system range from these data; as mentioned previously, the standard deviation, a measure of data variability, is used. The column following the mean value therefore shows the standard deviation s. Using spreadsheet software, we obtain

s = 1

The upper end of the system range is taken as the range within which 99.7% of the data will fit:

m + 3s = 9

So, if the design range is 6 or higher, the common range is

CR = (m + 3s) − 6 = 3

Thus, the feasibility probability for the safety of the gas stove is obtained as

Feasibility probability = CR / 6s = 0.5

As explained before, if the value at the upper end of the system range exceeds 10 points, the feasibility probability is calculated using 10 as that value (corresponding to the shaded cells in the table), since the 10-point method cannot exceed 10 points. The conventional way of organizing survey results is to evaluate them by the size of the average, but the average alone is incomplete because it does not take the variability of the scores into account: with the average alone, it is impossible to determine whether a system with a good average but high variability is better than a system with a slightly lower average but low variability. Thus, in kansei evaluation, the correct way to evaluate both the mean and the variability is to use the above concepts of system range and common range in the calculation.

Another point to note here is weighting. In the score evaluation above, the design range was the same for all evaluation items, a score of 6 or higher, so differences in importance among functional requirements are not reflected in the design range. Explicit weighting, however, would destroy the theoretical system, so we cannot do that either. What we do instead is have the monitors implicitly perform the equivalent of weighting through their own kansei: a monitor is asked to score strictly on the items that the monitor considers important. In fact, monitors seem to make such evaluations unconsciously, without any particular instruction. Based on the above considerations, returning to the calculation results, we conclude that the hot carpet has the best system feasibility probability, 0.98, in terms of safety, warmth (even underfoot), mildness of heating, and running cost, followed by the far-infrared heater and the gas heater with the same probability of 0.22; the electric heater is the worst. As already stated, this is just an exercise and does not take into account price or conditions of use such as the room's insulation. The scores will vary depending on the monitors, so this is not a universal answer. However, we hope it is now clear that combining score evaluation with the FPT allows a reasonable judgment to be made. Although kansei evaluation remains ambiguous to the extent that it relies on human kansei, evaluation by a large number of people removes ambiguity and increases reliability by taking into account both the average and the variability of the scores. In doing so, however, a collection of monitors with unbalanced kansei will not be reliable; a well-balanced combination of monitors with various kansei is necessary, and we must recruit monitors who most adequately cover the segments referred to in marketing. If this is done properly, kansei evaluation by monitors can be a fairly powerful tool in a product development project. If you have only a small number of monitors, you may consider compensating for the imperfection by repeating the evaluation over time. In this example the number of monitors is three, which is too small to confirm this, but when data from a large number of monitors are collected, a distribution with two peaks may appear. This clearly indicates that two types of monitors with different properties are mixed together. In such cases, the data should be divided into two categories according to the attributes of the monitors and recalculated, adopting the data of the more important group.
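Table 2.4's calculations can be reproduced in a few lines; a sketch under the stated assumptions (Python, sample standard deviation as in Eq. (2.5), upper end capped at 10 points, each probability capped at 1).

```python
import math
import statistics

DESIGN_MIN, SCALE_MAX = 6, 10

def item_probability(scores):
    """One table row: m, s, Max = min(m + 3s, 10), CR = Max - 6, P = CR/6s (<= 1)."""
    m = statistics.mean(scores)
    s = statistics.stdev(scores)                  # 1/(n-1) form, as in Eq. (2.5)
    cr = min(m + 3 * s, SCALE_MAX) - DESIGN_MIN   # common range with 10-point cap
    return min(max(cr, 0.0) / (6 * s), 1.0)

heaters = {  # scores for safety, warmth, mildness, running cost (Table 2.4)
    "gas stove":           [[5, 7, 6], [7, 8, 7], [5, 7, 6], [7, 8, 8]],
    "electric heater":     [[6, 7, 7], [6, 7, 7], [5, 6, 7], [6, 7, 7]],
    "larger hot carpet":   [[8, 9, 8], [8, 8, 7], [8, 9, 9], [8, 9, 8]],
    "far-infrared heater": [[8, 9, 8], [6, 7, 8], [6, 7, 8], [5, 7, 6]],
}
for name, rows in heaters.items():
    p = math.prod(item_probability(r) for r in rows)
    print(f"{name}: {p:.2f}")   # 0.22, 0.17, 0.98, 0.22
```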

2.9.2 Let’s Run the Meeting Democratically People have different opinions. In addition, different departments within the same company will act only for the convenience of their own department. Even if this is not the case, different experiences and values naturally lead to different opinions. Personal different experiences, values, positions and belonged departments naturally

36

2 Feasibility Prediction Theory

have different evaluation criteria. At meetings, such people gather to discuss issues, so naturally opinions diverge and become confused. In such meetings, it is difficult to reach a conclusion that everyone can agree on, since each person tries to stick to his or her own value standard and make his or her point. Forcing a conclusion will leave a personal emotional lump that is far removed from the content of the discussion. This is the worst way to run a meeting. Contrary to this, there is the following example. When physicist and Nobel laureate Richard Feynman attended a meeting of the Manhattan Project Committee, there was one fact that the prodigious scholar Feynman was astounded by. At this committee, which later awarded five Nobel Prizes, including Feynman’s, the discussion was quickly settled once each member had expressed his or her opinion. Perhaps each person made an instantaneous judgment, synthesizing even the value critera of others, and came to a conclusion that was acceptable to everyone. In other words, even in the most difficult meeting getting smart conclusion depends on the quality of the participants. Apart from meetings of such a group of geniuses, things do not go smoothly in meetings of ordinary people. For example, let’s say that during a discussion about choosing a development subject for the next year, it was difficult to reach a consensus. The fundamental reason why this happens is because, as mentioned above, everyone’s sense of values and evaluation criteria are different in the discussion. When choosing a development subject, some will place great importance on the fact that the development will be completed in two years. Others will wonder how much it will contribute to their company’s sales when it is commercialized. Others may wonder if they can make better use of the technology (core competence) in which they have an advantage. It is only natural that if each of us discusses the issues based on different evaluation criteria and values, we will not be able to come to a consensus. So, we should first put this sense of values and evaluation criteria out in the open, and then everyone should discuss and decide what criteria should be used to select development subjects. Once the evaluation criteria, or evaluation items (functional requirements), have been determined, everyone discusses the contents of each development subject in accordance with these evaluation items. Finishing this discussion, ask all participants to give a score on a 10-point scale for all the items of each subject. In other words, kansei evaluation is done by all. For example, suppose a company holds a meeting of five people to evaluate two development subjects. The following seven evaluation items were selected. (1) (2) (3) (4) (5) (6) (7)

Does it fit with the company’s vision? Can the company’s strengths (core competence) be utilized? Will the market be large when the product is commercialized? Will demand grow in the future? Can the development be completed in two years? Is there a possibility of competition with other companies? Can we obtain a sufficient budget for development? On the other hand, assume the following two development subjects.

(a) Development of new electronic components
(b) Development of high-performance, low-cost solar cell materials

This is summarized in Table 2.5, which shows how the system feasibility probability can be obtained by asking persons A through E each to score every item on a 10-point scale, with the design range set at 6 or higher. Comparing the two system feasibility probabilities, it is clear that the development subject with the higher value is the one most likely to realize all seven evaluation items (functional requirements) above in a totally optimal manner. Here it was a case of choosing between two subjects, but it would be a mistake to think that the small number of subjects makes the choice easy or obviously reasonable, because there are as many as seven items to evaluate. Furthermore, as the number of development candidates increases to five or six, more and more difficult decisions must be made, making the management of the meeting even harder. As seen in this example, everyone participates in the selection, and everyone can calmly bring his or her own sense of values to bear on all the evaluation items without being drowned out by the atmosphere or by the loud voice of a supervisor, so that everyone is satisfied with the conclusion reached. The FPT can thus be a very powerful tool for running meetings democratically.
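To see how Table 2.5 below would be filled in, here is a sketch with entirely hypothetical scores, since the book leaves the table blank (Python; design range of 6 or more on the 10-point method, as in the note to the table; the numbers are invented for illustration only).

```python
import math
import statistics

def item_probability(scores, design_min=6, scale_max=10):
    """Mean, sample std, Max = min(m + 3s, 10), CR = Max - 6, P = CR/6s (<= 1)."""
    m, s = statistics.mean(scores), statistics.stdev(scores)
    cr = min(m + 3 * s, scale_max) - design_min
    return min(max(cr, 0.0) / (6 * s), 1.0)

# Hypothetical scores by persons A-E for the seven items of each subject:
subjects = {
    "(a) new electronic components": [[7, 8, 6, 7, 8], [8, 8, 7, 9, 8],
                                      [6, 7, 7, 6, 8], [7, 6, 8, 7, 7],
                                      [8, 7, 8, 8, 9], [6, 6, 7, 5, 7],
                                      [7, 8, 7, 8, 7]],
    "(b) solar cell materials":      [[8, 7, 9, 8, 8], [6, 7, 6, 7, 6],
                                      [8, 9, 8, 8, 9], [9, 8, 9, 8, 8],
                                      [6, 5, 7, 6, 6], [7, 6, 6, 7, 8],
                                      [6, 7, 6, 6, 7]],
}
for name, rows in subjects.items():
    p = math.prod(item_probability(r) for r in rows)
    print(f"{name}: system feasibility probability {p:.2f}")
```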

2.10 System Feasibility Probability The above is all the explanation of the FPT, but there are a few things to note here. First is the meaning of the absolute value of the system feasibility probability. As mentioned before, this feasibility probability value should essentially indicate the feasibility probability of this project (system) as a whole. Some of the events of all functional requirements are independent of each other, but many are not. Therefore, one might think that the probability calculated is not correct because functions are overlapped. However, in reality, various functions interact with each other to make things happen. Therefore, it should be safe to assume that this FPT evaluates reality as it is (as they affect each other). So, we think that function independence is not a concern. Further development of this theory leads to Design Navi as a tool for developing superior products and new technologies. This is a terrific technique that I will explain in Chaps. 5 and 6. Although it requires some time and effort to collect data through experiments and/or simulations, the best products can be developed in about a quarter of the time required by conventional methods. Many companies are actually benefiting from this. The examples used above were mainly evaluations of systems that exist in reality. However, it is rather the prediction of future feasibility where the FPT comes into play. It is a powerful tool, for example, for evaluating the feasibility of a project that is about to be implemented. Projects must be evaluated in a unified manner for various items in different dimensions, but in the past, there was no scale such as


2.10 System Feasibility Probability

The above completes the explanation of the FPT, but there are a few things to note. First is the meaning of the absolute value of the system feasibility probability. As mentioned before, this value should essentially indicate the feasibility probability of the project (system) as a whole. Some of the events behind the functional requirements are independent of each other, but many are not. One might therefore think that the calculated probability is not correct because the functions overlap. In reality, however, the various functions interact with each other to make things happen, so it should be safe to assume that the FPT evaluates reality as it is (with the functions affecting each other). We therefore consider that functional independence is not a concern. Further development of this theory leads to Design Navi, a tool for developing superior products and new technologies. This is a terrific technique that I will explain in Chaps. 5 and 6. Although it takes some time and effort to collect data through experiments and/or simulations, the best products can be developed in about a quarter of the time required by conventional methods, and many companies are actually benefiting from this. The examples used above were mainly evaluations of systems that exist in reality. However, it is rather in the prediction of future feasibility that the FPT comes into its own. It is a powerful tool, for example, for evaluating the feasibility of a project that is about to be implemented. Projects must be evaluated in a unified manner across various items in different dimensions, but in the past there was no scale such as the

system feasibility probability, so the final decision ultimately relied on vague human judgment. With the advent of the system feasibility probability, it is now possible to determine the feasibility of a project in a rational and holistically optimal manner. This is explained as an application to feasibility studies in Chaps. 3 and 4. The FPT also has other potential applications: if the right data can be collected, it can be used to predict social phenomena, for example the end of a pandemic or the effectiveness and feasibility of a government policy. You can apply it to a variety of cases. Finally, and importantly, the FPT could also be a method to ameliorate the shortcomings of the modern, fragmented disciplines. This is because in the twentieth


century, not only academia but all areas have been subdivided, and although we think we understand a field by analyzing it in minute detail, we have actually lost sight of the whole. We are seeing the trees and not the forest. Now is the time for a theory that integrates fragmented areas. The most significant problem is the integration of the arts and sciences; most universities have separate arts and science departments. As you can see from the discussion so far, this theory uses the concept of probability to enable subdivided areas to be evaluated in an integrated manner. In other words, we recognize that we are on the threshold of a culture that integrates the arts and sciences, completely different disciplines. For example, the G20 Trade and Digital Economy Ministers meeting in Tsukuba, Japan, on June 8, 2019, agreed on the Principles for Human-Centered AI Development. The following ten items are proposed there.

1. Inclusive growth, sustainable development and well-being
2. Human-centered values and equity
3. Transparency and explainability
4. Robustness, security and safety
5. Accountability
6. Investment in R&D
7. Digital ecosystem development
8. Maintenance of policy environment
9. Human resource development and labor market readiness
10. International cooperation.

These suggestions are essential for future AI development, but then, when an AI project is proposed, is there a way to evaluate these 10 items in a holistic and optimal manner? Even if each item is evaluated separately, there is no way to proceed unless the feasibility of the project as a whole is known. That’s when the FPT learned here becomes an important tool!

2.11 Appendix: Normal Distribution Table

[Table: Standard normal distribution. Each entry is the area under the standard normal probability density between 0 and z, for z from 0.00 to 5.00 (rows in steps of 0.1, columns adding 0.00 through 0.09). For example, the area is 0.3413 for z = 1.00, 0.4332 for z = 1.50, 0.4772 for z = 2.00, 0.4987 for z = 3.00, and 0.499997 for z = 5.00.]
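If a table is not at hand, the same entries can be computed directly: the area under the standard normal density between 0 and z equals erf(z/√2)/2. A minimal sketch:

```python
# Area under the standard normal curve between 0 and z:
# Phi(z) - 0.5 = erf(z / sqrt(2)) / 2.
from math import erf, sqrt

def area_0_to_z(z: float) -> float:
    return 0.5 * erf(z / sqrt(2.0))

print(round(area_0_to_z(1.0), 4))  # 0.3413
print(round(area_0_to_z(2.0), 4))  # 0.4772
print(round(area_0_to_z(3.0), 4))  # 0.4987
```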


Reference

1. Collins, J., & Porras, I. J. (1994). Built to last. Harper Collins.


Part II

Feasibility Study of Product Development Project-New Feasibility Study

Chapter 3

Project Feasibility Study

The method of predicting the feasibility of a project is generally referred to as a feasibility study. This chapter explains how combining the FPT with a general feasibility study remedies the traditional shortcomings and makes the study more reasonable and easier to use. First, we introduce what a general feasibility study is and the actual process by which it is performed. Feasibility study methodology is ad hoc and varies from project to project. Still, there is a common basic flow and there are common areas in which to consider feasibility, which are introduced below. When the FPT is then introduced, you will see that feasibility can be comprehensively and smartly determined on the scale of system feasibility probability, whereas in the past it was determined by human intuition at the final stage of decision-making. We call this a New Feasibility Study. Furthermore, instead of treating the proposed project as fixed and unchangeable, if it is judged difficult to realize, we suggest modifying the project to raise its probability of realization as much as possible. As a consequence, you will see that what has been ambiguous until now can be resolved in a very clear manner.

3.1 What is a Feasibility Study

The definition of a feasibility study was given in Chap. 1 and is reproduced here. A feasibility study is the process of examining the feasibility of a proposed project in all relevant areas/items and predicting whether the project can be optimally and holistically realized in its entirety, or modifying the project to realize it. Feasibility studies determine whether a project, whether national or corporate, is feasible and whether it will be implemented. The role of the feasibility study is important because it determines the success or failure of the project. Therefore, some kind of rational and reliable theoretical tool is essential. Until now, however, there has been no such reliable tool.


It is unclear when the term feasibility study was first used, but it was already in use in the Apollo program in the 1960s. NASA conducted a feasibility study called the "Apollo spacecraft feasibility study" [1]. The study was conducted by having companies such as General Dynamics/Convair, General Electric, and Glenn L. Martin propose "a specific system to land on the Moon and return the crew safely to the Earth". Generally, as in the Apollo program, multiple proposed systems are designed and studied. Feasibility studies are too broad a subject to have been systematized; they have not yet been established as a discipline, and there is no academic society dedicated to them. Nevertheless, some common elements can be found in the above definition, so it should be possible to systematize them and construct a theory to some extent. This book attempts to theorize feasibility studies using the FPT. As discussed in Chap. 1, the tool currently used in feasibility studies is cost–benefit analysis, which has the drawbacks described there. The rational, easy-to-use, and powerful tool proposed in this book is instead the FPT. Using this method, feasibility studies can be reborn, become more accessible, and be theorized. With this attempt, a New Feasibility Study is proposed here. Below is a description of the New Feasibility Study process. The outline given here is rough, but rest assured that we will elaborate with specific examples in the next chapter. First, please get a bird's-eye view of the process.

3.2 New Feasibility Study Process

This section assumes that you are performing a feasibility study of a project in a company. The basic process of the New Feasibility Study, including start-up, is as follows.

(1) Project description and SWOT analysis
(2) Determination of the functional requirements and design range of the project
(3) Design a system to realize the project
(4) Predict the feasibility of all areas in which the system will be involved
(5) Summarize the conclusions of the feasibility study.

Let's discuss each of these steps in turn.

3.2.1 Project Description and SWOT Analysis

While most feasibility studies begin with "(3) design a system to realize the project", we begin with this critical first step: project description and SWOT analysis. A SWOT analysis comprehensively examines Opportunities and Threats in the external environment and the Strengths and Weaknesses of the company's business as the

3.2 New Feasibility Study Process

49

internal environment. SWOT is an acronym for these four factors. This will be explained in detail later. The first step is to state the project description clearly and concisely, but before that, the historical background of the project, if any, should be stated first. For example, if you are working on a product development project and similar projects have failed or succeeded before, be sure to include that history. It is unwise to proceed with a project without knowing its history of failure, as you may repeat the same mistakes. A history of success should also be noted, but do not be pulled in by the success experience. Success depends strongly on the environmental conditions of the time, especially the SWOT environment described below, and the same thing will not necessarily succeed again under a different SWOT environment than in the past. On the other hand, it may be helpful to analyze the SWOT of that time. Next, the project description should be articulated in writing in a way that is clear, persuasive, and mobilizing for all parties involved. The content should be explainable in five minutes or less, be persuasive, and elicit understanding and interest from others. The newer and more innovative your project is, the more likely there will be opposition. However, if the content excites everyone, the opposition may give way. If a good project is proposed and realized, the effect will be immeasurable, but a poor project is a waste of money and time. The same is true for national policies. The leader of a country sometimes has the responsibility to present to the people bold projects that will innovate and develop the country, give the people hope, and lead to a prosperous future. Consider this first step with reference to the successful Boeing project from Collins and Porras's "Built to Last" [2]. In 1952, Boeing made the decision to develop a large jet airliner for commercial airlines. At the time, Boeing's experience was dominated by military bombers such as the B-17, B-29, and B-52, and 80% of its sales came from a single customer: the Air Force. Boeing had not yet recovered from the painful layoffs that followed the end of World War II, when the special war demand disappeared and the company's 51,000 employees were reduced to 7,500. On top of that, developing a prototype jet airliner would require three years' worth of after-tax average profits over the past five years. This was a risky and daring decision that could not afford to fail. Under these circumstances, Boeing management took a risky gamble. They set out to develop a jet airliner with the audacious goal of becoming a major player in the commercial aircraft market. The Boeing 707 marked the dawn of the jetliner era. Although the content of the feasibility study that led to the decision to undertake such a risky and daring project is unknown, we can infer Boeing's approach from the little information available in the book mentioned above. To determine the content and rationale for a project, we begin with a SWOT analysis, discussed in more detail in Chap. 4, to see whether the project successfully captures opportunities and responds to threats in the external environment. Next, from the perspective of the internal environment, we look at whether the project can take advantage of internal strengths that make it superior

50

3 Project Feasibility Study

to its competitors, and whether weaknesses relative to the competition will be an obstacle. One thing to note here is that you should not run away from a project just because your company lacks a core competence (e.g., a technology overwhelmingly superior to the competition) needed to implement it. It is important to develop the new technologies that are necessary. By doing so, the company will become stronger and develop further than ever. A SWOT analysis in the case of Boeing, based on the information in the aforementioned book, would be as shown in Table 3.1. From this analysis, Boeing could have cleared the first step of deciding whether it was safe to proceed with the project of "developing a jet airliner and becoming a major player in the commercial aircraft market". There are generally four gateways for determining the subject of a product development project: "vision," "seeds," "needs," and "improvement of the current system" (this is also explained in detail in Chap. 4). In the case of Boeing, the jet engine was already in production, so it was not a seed. As the analysis above shows, general airlines did not recognize the need for jetliners, so there was no need in the market. Boeing was not manufacturing jet airliners, so it was not an improvement of the current system. So you can see that this project came out of management's vision. It is very important to start a project with a vision. Unfortunately, we do not know how Boeing went about the rest of its feasibility study. To summarize the first step: once the subject of the project is given, the content of the project should be clearly stated in writing. The title of the project should be simple, clear, and fine-sounding, so that people can visualize some of the contents just from the title. A SWOT analysis is conducted to determine whether the project can be taken to the next step as a practical matter, and a conclusion is drawn. If the analysis shows that the project can go ahead, we proceed to the next step.

Table 3.1 SWOT analysis of Boeing (speculation)

External environment
  Opportunities: Only our company is considering a jet airliner
  Threats: None
Internal environment
  Strengths: Experienced in manufacturing large bombers; our engineers have an idea for designing a large jet plane for commercial airlines
  Weaknesses: 4/5 of sales were to the Air Force; layoffs reduced human resources to 7,500; significant development funds will be required


3.2.2 Establish Functional Requirements and Design Ranges for the Project

Now that the subject of the project has been established, we must determine the functional requirements of the project and the design ranges needed to design a specific system to move the project forward. This step is also explained with reference to another Boeing case from the literature above. It was early 1960, after the successful development of the Boeing 707, when the company launched a project to develop another new jet airliner. This airliner was the Boeing 727, and management determined the functional requirements and design ranges from the requirements of a potential customer, Eastern Airlines. The functional requirements and design ranges were clear, specific, and seemingly impossible to achieve. The aircraft would take off from and land on runway 4-22 at New York's LaGuardia Airport (at 4,860 feet, much too short for any existing passenger jet of the day), fly nonstop from New York to Miami, have a fuselage wide enough to accommodate six-abreast seating and 131 passengers, and meet Boeing's high safety standards. Given these functional requirements and design ranges, the next step was to design a specific new jet airliner (i.e., the system that would realize the project).

3.2.3 Design a System to Realize the Project

We design several specific systems that fit the project. The actual design is divided into several stages, from conceptual design to detailed component design, but the feasibility study phase ends with a conceptual framework design. The feasibility of this design is then examined. What is important here is that the design should not be a symptomatic response fixated on the background (including the SWOT analysis) from which the project emerged in step (1). Do not settle for conventional designs; fresh ideas are required. The meta-concept thinking introduced in Chap. 7 may be useful. In meta-concept thinking, the first question to ask is, "What will be the problem if this problem is not solved?" Solving that problem becomes the starting point and subject of the idea. From this higher-level subject, specific ideas are generated. For example, if we consider what the problem would be if six-abreast seating were not realized, we arrive at the problem of not being able to accommodate 131 passengers. Then, if we rethink from the higher-level concept of "a structure that can comfortably accommodate 131 passengers," we may be able to design a better jet airliner. Based on this subject, we find many ideas from a broad perspective, consolidate them into a few, and design several specific systems. In other words, do not be obsessed with six-abreast seating; we only need to accommodate 131 passengers.


3.2.4 Predict the Feasibility of All Areas in Which the System Will Be Involved

Next, the FPT is used to determine whether there are any operational problems in all relevant areas when the designed system is put into operation, and whether the functional requirements are feasible. In other words, the feasibility probability is used to determine whether the functional requirements are feasible in the relevant areas. To find the related areas, all project members brainstorm and summarize the results using the KJ method [3]. This kind of exhaustive search is necessary. Once the relevant fields are determined, a system range must be obtained to calculate the feasibility probability for each field. For this purpose, the necessary data is collected. Once all of these preparations have been completed, the feasibility probability of the system can be determined for all relevant areas. This process is different from a conventional feasibility study. The New Feasibility Study proposed in this book determines the feasibility of a project by determining the system feasibility probability instead of the more difficult-to-use cost–benefit analysis. If more detailed data is needed to determine the system range, a prototype of this system should be built, unless it is a large project, and if budget and time permit. The existence of a prototype completely changes the accuracy of the feasibility prediction. Even if it is impossible to build a prototype, it is necessary to at least build a mock-up. "Haste makes waste" is also true in feasibility studies, but it is no good if creating a prototype takes too much time. This is where the Design Navi described in Chap. 5 comes in handy, because with Design Navi the best prototypes can be created in a short time.

3.2.5 Summarize the Conclusions of the Feasibility Study

Once the feasibility probabilities of all related areas are obtained, they are combined to obtain the system feasibility probability, which determines the feasibility of the entire system. System feasibility is a single probability value that indicates whether all the functional requirements of the project can be satisfied in all the related areas in a comprehensive manner. This is one of the advantages of the New Feasibility Study described in this book. Unlike the case in which a person makes a table with ◎, ◯, and Δ and judges by intuition, or cost–benefit analysis, in which people struggle to forcibly convert everything into money, the New Feasibility Study is easy to use and powerful in its ability to determine overall feasibility from a single, clear, and concise probability value. The case studies in the next chapter will help you understand its effectiveness. Again, the FPT can compare and integrate the feasibility of evaluation items (functional requirements) in different areas and different dimensions on an equal footing, using the dimensionless quantity of probability. Moreover, the axiom allows us to comprehensively compare multiple system proposals with a single number, the


system feasibility probability, across all evaluation items. Therefore, this scale can be used to evaluate the feasibility of a system as an absolute assessment, or to select the best system based on a relative comparison of multiple systems. The fact that the system feasibility probability can evaluate the actual feasibility of a system in absolute terms means that it is possible to objectively determine how likely it is that a project will actually be realized. Let us consider the meaning of a system feasibility probability of 0.5 a little more. A probability of 0.5 is the same as the toss of a coin, where either side turns up with equal probability. In other words, when the coin of success or failure is tossed, the project will either succeed or fail with even odds. If this is the case, then if the toss is set up to produce the successful side, the project will surely succeed. What, then, is the key to such a successful setup? The best way is to have the right leadership and an execution organization composed of sincere members who are passionate about making sure that the project succeeds. Moreover, it is the leadership of the leader that makes the members passionate about the project, so in the end "leadership theory and organizational theory" are important. This is like crafting the coin so that the side that says the project will succeed always comes up. Then, even if the project has a feasibility probability of 0.5, there will always be a bias toward success (put in the extreme, the feasibility probability becomes 1). We will explain the process with specific examples in the next chapter to make it more understandable.

3.3 Areas to Examine the Feasibility of Projects

Once a specific system has been designed to realize the project, identify the multiple areas that will be involved when the system is operational. For example, if the project is for product development, these include the feasibility probability of the product gaining market share in the relevant market (let this probability be P1), the important performance of the system (P2), whether the environmental issues can be solved (P3), and so on. In reality, there are many items to consider, but we simplify them to the extreme here for ease of understanding. Once these data are gathered, the system feasibility probability PS can be obtained to determine whether the project is feasible in total. The system feasibility probability PS of the project is obtained as a single number, P1 × P2 × P3. The value of PS determines whether the project should be implemented, modified, or abandoned. You may have noticed that the New Feasibility Study method is surprisingly simple. Now, let's consider the areas related to the operation of the system as we begin the feasibility study. Since the areas to be considered vary from project to project and are numerous, it is impossible to cover them all. Only those areas that are generally common are discussed here, and their feasibility concepts are explained below.
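As a minimal sketch of this combination step (the three probability values below are hypothetical, for illustration only):

```python
# Combining hypothetical area feasibility probabilities into PS = P1 * P2 * P3.
from math import prod

p1 = 0.8  # market share (hypothetical)
p2 = 0.9  # system performance (hypothetical)
p3 = 0.7  # environmental issues (hypothetical)

ps = prod([p1, p2, p3])
print(f"PS = {ps:.3f}")  # PS = 0.504
```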


Technology Feasibility

The technical performance of the system is the area that most affects the feasibility of other areas. The system designed here includes a wide range of software, hardware, and organization, so the number of functional requirements is quite large. The feasibility probability of the technology is obtained by finding the feasibility probability of each functional requirement and multiplying them all together. (For example, the performance table of an automobile shown in Chap. 2 corresponds to this.) Here is a brief explanation of how to determine a feasibility probability. This is a very compressed explanation, but I dare to give it because it is effective for understanding the feasibility study as a whole. Suppose we have a functional requirement on a certain performance of the system, say we want to keep the weight under 20 kg. You design the system, but after considering cost and other functional requirements and changing the parameters, you end up with a weight of 17, 18, or 19 kg. The estimate at the initial design stage is often based on a lack of detail, so the final weight may be considerably higher. Assuming a safety factor of 1.1, if the 19 kg design is adopted it may end up weighing 20.9 kg, so the maximum range of weight variation is [17, 20.9]. Figure 3.1 illustrates this relationship. Again, a uniform probability density distribution is assumed. From this figure, the feasibility probability P of the weight can be obtained as follows:

P = (20 − 17)/(20.9 − 17) = 0.77

The feasibility probability of this one item is 0.77. The other items are obtained in the same way, and all of them are multiplied to obtain the feasibility probability of the technology.

[Fig. 3.1 System range and design range of weight]
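The same arithmetic can be written as a short function; a minimal sketch, assuming a uniform probability density over the system range as in Fig. 3.1:

```python
# Feasibility probability = (overlap of system range and design range)
#                           / (width of system range),
# assuming a uniform probability density over the system range.
def feasibility_probability(sys_lo, sys_hi, dr_lo, dr_hi):
    common = max(0.0, min(sys_hi, dr_hi) - max(sys_lo, dr_lo))
    return common / (sys_hi - sys_lo)

# Weight example: system range [17, 20.9] kg, design range "20 kg or less".
print(round(feasibility_probability(17.0, 20.9, 0.0, 20.0), 2))  # 0.77
```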

Public Economic Feasibility

The public costs and benefits of the project are considered. Generally, a cost–benefit analysis is used, but here feasibility probabilities are determined independently for each. Costs include development costs and operation costs, and the feasibility probability is determined by whether the costs can be kept below the target value. For the benefits, the feasibility probability is determined by considering how soon profits will be generated after the investment is made, and determining the probability that the profits will exceed the target value. The feasibility probability of public economic feasibility is obtained by integrating these factors.

Legal Feasibility

This is feasibility in the literal sense: we consider whether or not the relevant laws and regulations can be satisfied. However, if there is a possibility that the law may change due to political circumstances, etc., the probability of such a change must be factored in. Note that regulation may apply not only to the product to be manufactured, but also to the waste disposal associated with manufacturing, the location of the factory, the warehouse, the office, and so on.

Operational Feasibility

The extent to which the designed system can solve the operational issues that motivated the project is considered. Note that this area is easily confused with technology feasibility. For example, suppose you have commercialized a cooker that is easy to use. The technological feasibility concerns the ease of use and the ability to cook as desired, while the operational feasibility concerns how much more enjoyable cooking life becomes when the cooker is used (operated), how much it contributes to health improvement, how much it costs to dispose of, and so on. The subject of consideration is the extent to which the range of variation in the characteristics of the system operation, i.e., the system range, can meet the goals of the system operation, i.e., the design range. In addition to performance, the issues here are ease of operation, system reliability, maintainability, durability, and ease of disposal.

Resource Feasibility

Once the functional requirements and design ranges of the system have been determined, the feasibility of the resources is determined by considering whether the necessary resources such as software, equipment, human resources, experts, etc. are ready or can be made available to realize them. In addition to the procurement of the required real estate, equipment, and human resources, the probability of interference with the business currently being conducted must also be considered.

Time Feasibility

The feasibility predictions above may not be so difficult, but time feasibility (delivery dates, etc.) and the items described below are much more difficult to predict because they include an element of future prediction. Realization time includes both development time and the time until the system is actually operational. The feasibility probability of the schedule is considered to determine how likely the project is to be realized on schedule. As with the feasibility study of other items, one must consider not only the best external conditions but also the worst. From both of these conditions, a system range of time is determined. If the initial project completion schedule turns out not to be reasonable as a result of these considerations, it may be necessary to change the schedule.


Marketing Feasibility

If the project is to develop and market a product, marketing feasibility must be examined. There are many issues to consider, including the size of the marketing segment, the best time to market, market sustainability, sales price and volume, service structure and promotion, market size expansion, the distribution system, and the intended sales territories. It is therefore recommended to think MECE-like when considering uncertain issues. MECE stands for Mutually Exclusive, Collectively Exhaustive; in other words, to think without omission and without duplication. At the strategy consulting firm McKinsey & Company, Inc. [4], consultants use this basic technique to gain a structured understanding of the subject under consideration. Thinking without omissions or duplications not only prevents the mistakes that come from acting on random thoughts, but also allows us to construct our thinking from a broad perspective, prevents ideas from being left out, and, when explaining to others, gives the listener a sense of security and persuasiveness. From another point of view, MECE can be seen as thinking about an object by dividing it into cases. For example, as MECE-like case division, there is the 3C concept, which divides marketing problems into three Cs: Customers, Competitors, and Company. First, we consider the type of customers we are dealing with. Then you set a goal of what percentage of those customers will buy your product, and estimate the feasibility of that goal. Next, we consider the competition with competitors and estimate the feasibility of our goal of gaining market share against them. Finally, think about your company's strategy: what sales methods will you use, what promotions will you plan, how will you improve your sales force, and so on. Predict how well these plans can be realized. If you are considering a long-term outlook, you will also have to anticipate fluctuations in customer needs, preferences, buying power, etc. Another method of dividing into cases is the 4P method proposed by the American marketing scientist Jerome McCarthy in 1961, which consists of Product, Price, Promotion, and Place. The feasibility of these items is predicted in the same manner as above. The feasibility of marketing is evaluated only when these factors are comprehensively considered. However, Philip Kotler argues that in today's world traditional marketing alone is not sufficient and must include digital marketing. For more information on this point, please refer to the literature [5].

Financial Feasibility

Financial feasibility is also very important. We consider how much money the project will make. If it is a product development project, this is of course related to the marketing discussed above. For example, one way to evaluate the project is to estimate how much cash flow it will generate in the future and how long that cash flow will continue.


Other factors include IRR, certainty of repayment, payback period, equity growth rate, and so on. The details are best left to technical papers, but the project team will need to estimate the extent to which these targets can be met, how far they may deviate, what the economic fluctuations are likely to be, and, if the projection is too aggressive, the range of possible swings, including sudden conflicts, political upheavals, and epidemic pandemics.

Social Feasibility

This is the feasibility of contributing to society. The question is not only whether the project will contribute to society, but also whether it will avoid harming the environment. The degree of contribution to local employment is also an item to be considered.

SDGs Feasibility

The Sustainable Development Goals (SDGs) were unanimously adopted at the UN Summit in September 2015 with the participation of over 150 heads of state and government. They consist of 17 goals and 169 targets that address social issues in all countries, such as poverty, hunger, environmental problems, and economic growth, emphasizing that no one should be left behind and aiming for achievement by 2030. They also include sustainable consumption and production, global environmental protection such as sustainable management of natural resources, and cooperation among all people, including maintaining peace. This information may well be of interest to investors in the project, because the United Nations recommends that institutional investors (investors such as corporations and financial institutions that make large investments) have a responsibility to focus on ESG (Environmental, Social, and Governance) issues when making investments. The future is not just about the environment, but also about social and governance issues. From now on, investors will no longer look only at a company's financial information when investing in a project, but will also focus on whether the company is fulfilling its environmental and social responsibilities. In 2020, the Japanese government set a goal of achieving carbon neutrality by 2050, so it may also be necessary to consider the feasibility of that goal. As you can see, you may be daunted by the wide range of areas to be considered. Even after all this study, unforeseen problems may arise during operation. But that does not mean you should skip the study. The only way to succeed is to proceed through these steps steadily.


3.4 Future Prediction

As is clear from the above explanation, future predictions are naturally relevant in some areas. Some people think that the future cannot be predicted, but this will not do in feasibility studies. The following are therefore some ideas to improve the accuracy of future predictions. For each of the above areas, a system range, or range of variation of characteristic values, must be determined for each item. Moreover, the range of variation must reflect not the current values, but the values at the future time when the system will be operated or used. In other words, since future values are required, feasibility requires knowledge in the specialized field of future forecasting. Today the future is very difficult to predict because of the rapid pace of change and progress. And if we must assume that completely unexpected events like a new corona pandemic will occur, predicting the future is nothing short of difficult. Agricultural and pastoral societies existed for tens of thousands of years, feudal societies for several thousand years, and modern societies for more than 300 years, so change was slow. In the modern age, by contrast, the vacuum-tube computer ENIAC appeared in 1946, and the iPhone, with its built-in IC chip, went on sale worldwide in 2007, only 61 years later. In just 30 years since the mid-1990s, when the Internet began to have a major impact on culture and commerce, AI has been put into practice, the world's fastest supercomputer, Fugaku, was built in 2020, and Japan's Digital Agency was established in 2021. At this speed of change, it may be impossible to predict what will happen in the next few years. Nevertheless, the following ideas should help you to somehow predict the future.

3.4.1 Predicting the Future with Patterns

I propose here a forecasting method based on one idea, which I have summarized in my own way from Koyo Sato's "How to Think Ahead of the Future" [6]. Future forecasting begins with recognizing the following four patterns by which the future manifests itself. If we can recognize these patterns, we can predict the future with a fairly high probability.

(1) "Necessity" drives technological innovation
(2) Evolutionary flow is a "line"
(3) From concentration to "dispersion"
(4) Highly necessary technology "diffuses" according to the law of increasing entropy

(1) "Necessity" drives technological innovation

The need to expand man's physical capabilities led to the invention of the steam engine by James Watt, which freed man from hard physical labor. On the other hand,


after Faraday's discovery of the principle of electromagnetic induction, generators and motors were invented, again extending man's physical capabilities. Electricity developed and spread to meet the need for energy that was easier to handle than steam. The automobile was invented as a faster means of transportation, the railroad for mass transportation, the airplane for faster transportation where there are no roads, and the bullet train because of the need to travel faster still. In this way, technological innovation originates from "necessity". And not only in the field of technology: money was invented out of the need to eliminate the inconvenience of bartering, medicine was developed to protect people from disease, and laws were created to protect the lives and property of individuals in society. In other words, the key to predicting the future is to find the necessity.

(2) Evolutionary flow is a "line"

Let's look at the history of technological development from another perspective. After the ENIAC (1946) was developed, the Apple I, built in a garage by Steve Jobs and Steve Wozniak, was launched in 1976. Then came the personal computer, and in the 1990s the Internet began to have a major impact on culture and commerce. After cell phones evolved and were replaced by smartphones with built-in computers, everyone was able to easily obtain information from the Internet. At the same time, microchips with sensor functions were embedded in objects, connecting them to the Internet as the IoT, from which vast amounts of data were collected in the cloud. As AI evolves and becomes capable of pattern recognition in the 2020s, a new industry appears to be emerging. In other words, taken broadly, we see an "evolutionary flow" along the line of computers, PCs, the Internet, smartphones, IoT, and cloud computing. This stream of progress has yielded an explosion of data, and AI has emerged to extract meaningful wisdom from it. Society will probably change drastically in the future depending on how AI transforms itself. From another point of view, AI will improve the accuracy of future predictions. The same is true in other areas. For example, in the field of biotechnology, or in the field of environmental issues, we should be able to see a flow of evolution riding on key concepts. If we can catch this flow, we will be able to predict the future in that field with a certain degree of probability.

(3) From concentration to "dispersion"

In both technological development and social change, the shift from centralized large-scale systems to decentralized systems occurs frequently. For example, the evolution of computers can be viewed as a line, as mentioned above, but if we look at its essence, we see that it follows the principle of dispersion from concentration. The IBM System/360, launched in 1964 and a great success, was a typical centralized computer. At that time, the person doing the calculations would bring the punched program cards, set in a cardboard box in the order of the calculations, to the computer center of his or her company or university and receive the results the next day or so. It would be very difficult to restore the correct order if the box was dropped and the cards scattered.


This led to the development of desktop minicomputers, still expensive and difficult to use but smaller and able to perform calculations on their own, and one was set up in each section of a company or university. Subsequently, calculation speed improved and storage capacity increased; the invention of the microprocessor made the desktop type smaller and less expensive, turning it into the microcomputer, which became widely used and enabled many people to handle their own computers. As microprocessors evolved further, so-called personal computers (PCs) and laptop computers became commercially available, allowing individuals to own computers cheaply and easily, and computing became decentralized. Today's further forms of decentralization are smartphones and wearable smartwatches. In this way, the trend from centralized to decentralized technology is truly remarkable. As an example of a social phenomenon, consider the merchandise sales system. In the past, when people wanted to shop according to their own tastes, they would go to department stores, which had a large selection of goods; a wide variety of goods was concentrated in the department stores. Next, supermarkets appeared, offering a wide variety of goods at reasonable prices, and after that convenience stores became popular as dispersed stores convenient for purchasing daily necessities, making our lives much more convenient. Even daily necessities can now be purchased online, completely changing our lifestyles. The decentralized product sales system on the Internet, which offers a much wider variety of products than physical stores without having to go out, cheaper prices, availability as early as the next day, and the ability to purchase at home without shipping costs, combined with the widespread use of computers and smartphones, has made life dramatically more convenient. Perhaps this change will occur from an energy perspective as well. Currently, electricity is supplied by power companies, but in the future, when more efficient and low-cost solar power generation is realized and combined with high-performance, low-cost advanced batteries, the use of electricity will probably shift to a decentralized system.

(4) Highly necessary technology "diffuses" according to the law of increasing entropy

Technologies that are not highly constrained diffuse rapidly when the need is high. Even if the need is high, a technology with constraints such as large capital investment will not spread rapidly; water supply or electricity grids in Africa, for example. However, if the technology is highly necessary and its constraints are weak, it will diffuse like atoms in random motion according to the law of increasing entropy. It is as if, when milk is added to coffee, the milk initially forms an orderly pattern, but as time passes the pattern disappears and the mixture diffuses into a state where it cannot easily be separated. The spread of smartphones and the Internet to all corners of the world is a good example. If you can find any of the above patterns, you can follow the pattern and predict the system range with some accuracy.


[Fig. 3.2 Market size in 3 years]

3.4.2 Unpredictable and Uncertain Event Problems

So what do we do when unpredictable and uncertain events occur that do not fit the above patterns? These include natural disasters such as earthquakes, floods, and volcanic eruptions, and pandemics of new viruses. In such cases, the author's approach is as follows. For example, let's forecast the size of the market in three years. If things go well, sales are expected to reach 10,000 sets per year, and the design range is to sell at least 6,000 sets. Suppose there are several possible contingencies, each of which could cause a sharp decline in sales, and that the most impactful of these events would cause a 60% drop. If there is a 10% probability that such an event will occur within three years, how should we incorporate this probability? Here we consider it simply. Suppose that even in the absence of unpredictable uncertain events, the worst-case scenario is a drop to 5,500 sets due to a worsening economy. Suppose also that if the uncertain event unfortunately occurs, with its 10% probability, the number is expected to fall to 4,000 sets. Then, as shown in Fig. 3.2, if we take the area of the uniform probability density distribution from 4,000 to 5,500 (the hatched area) to be 10% of the total, the feasibility probability P is given by the area to the right of the lower end of the design range:

P = 0.9 × (10,000 − 6,000)/(10,000 − 5,500) = 0.8

I know it's a bit rough, but it's a future prediction, so you'll have to forgive me.
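The same rough calculation in code, following the figure's assumption that the event scenario carries 10% of the probability mass:

```python
# Fig. 3.2 calculation: the normal scenario (uniform over [5500, 10000]) carries
# probability mass 0.9; the uncertain-event scenario (4000-5500 sets) carries 0.1
# and lies entirely below the design range of at least 6000 sets.
best, worst_normal, design_lo = 10_000, 5_500, 6_000
p_event = 0.10  # probability the disruptive event occurs within 3 years

p = (1 - p_event) * (best - design_lo) / (best - worst_normal)
print(round(p, 3))  # 0.8
```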

3.4.3 Logical Thinking

Kees van der Heijden [7], on the other hand, classified future prediction problems as shown in Fig. 3.3: one is the predictable case, and the other is the unpredictable case.

[Fig. 3.3 Relationship between predictability and uncertainty]


Theoretically, no matter how near-term the problem, there is always risk in prediction because the probability of being wrong is non-zero. In areas where some degree of predictability exists, we can predict accordingly, but the further out in time we go, the more uncertainty we face and the less predictable things become. Beyond the predictable region, Heijden suggests dealing with the situation by writing several scenarios [7]. To develop scenarios, we use logical thinking. While this is a powerful tool for persuading others within the company, it is surprisingly ineffective in predicting the future. According to business books, logical thinking is "the art of taking a systematic view of things, grasping the big picture, and then logically summarizing and accurately communicating the content". However, it is impossible for a human being to grasp the big picture (i.e., the actions of global competitors in real time) from a systematic viewpoint. The term Laplace's Demon describes this: if we assume the existence of an intelligence that could simultaneously know the position and momentum of all matter in the world, it would be able to calculate the temporal development of these atoms using classical physics, and therefore would know perfectly well what the world ahead would be like. Put another way, if we could have all the information, we could predict the future with certainty. Unfortunately, quantum mechanics, as it developed in the twentieth century, proved that it is impossible in principle to know the position and momentum of an atom simultaneously. In other words, the future cannot be predicted by logical thinking. Another barrier is the issue of literacy, that is, whether the decision makers who engage in logical thinking can legitimately understand the conclusions. At the time, both the Shinkansen and the Kuroyon Dam (an arch dam for hydroelectric power generation whose 186 m embankment is the highest in Japan, completed in 1963 after difficult construction) were considered impossible and useless projects. In reality, however, they were realized, and they have generated tremendous profits. In other words, if decision makers have the same low literacy level as most people, they will make mistakes in predicting the future. What "seems impossible" at the moment is not "really impossible." The literacy of decision makers plays a very important role in predicting the feasibility of a project.


3.4.4 Timing of Project Launch

Assuming you have found a pattern, there is one more thing to keep in mind: the timing of launching the project. If it is too early, it will not be accepted by society; too late, and you will be overtaken by the competition. You have to find the moment when people's sense of values will switch, when the convenience of the project will shatter their stereotypes. AI may solve this problem in the future. Conversely, once a project is set in motion, people may recognize its importance and change their attitudes, because people themselves evolve with technology. Also, once a project is underway, new information becomes available, and the people involved need to constantly update their perceptions. As a result, small modifications to the project may be needed. However, the feasibility study has determined that the project is feasible, so there should be no need to cancel it. And a feasibility study that is cancelled because the perception of the times changes would be meaningless! State and public projects are tough because you have to think about future scenarios and forecast 10 or 20 years ahead. But for corporate projects, you don't have to think that far ahead. One way to think of it is to launch the next project when the use-by date approaches and ride the wave of a new society. With the above in mind, in the predictable region the system range is obtained from the best-case and worst-case predicted values. If a worst-case scenario is possible, the overall probability should be obtained by factoring in the probability of occurrence of uncertain events, as explained above. Please do not make predictions based on best-case conditions alone. You will get hurt.

References

1. https://en.wikipedia.org/wiki/Apollo_spacecraft_feasibility_study#GE_D-2
2. Collins, J., & Porras, I. J. (1994). Built to last. Harper Collins.
3. Kawakita, J. (1970). Further idea of creation—Development and application of KJ method. Chuokoronsha. (in Japanese).
4. Teruya, H., & Okada, K. (2001). Logical thinking. Toyo Keizai Inc. (in Japanese).
5. Kotler, P. (2020). Kotler's marketing 4.0. The Asahi Shinbun Company. (in Japanese).
6. Sato, K. (2015). Thinking ahead to the future. Discover 21, Inc. (in Japanese).
7. van der Heijden, K. (1996). Scenarios. Wiley Limited.

Chapter 4

Feasibility Study of Product Development Project

Now that we are ready for the New Feasibility Study to calculate the feasibility of a project, let’s take a product development project as an example to calculate its feasibility in this chapter. Please understand the power of the FPT. This section will start with the planning of a specific product development project. The feasibility of the project up to the sale of the planned products will be explained in the New Feasibility Study. There are generally four entry points to discovering a subject for product development. They are seeds, needs, current products, and vision. In the New Feasibility Study example presented in this chapter, we will use a product development project that starts with a vision. Before that, let us briefly explain how to find a subject for product development that starts from seeds, needs, and current products.

4.1 Product Development Project from Seeds and Needs A seed is an act discovered without purpose of use. The fact discovered can be either inside or outside the company. For example, water that gushed out during the excavation of the Daishimizu Tunnel, which passes under Tanigawa-Peak, became a seed and was sold as mineral water. Another example of starting from a seed is when Spencer Silver, while developing an adhesive with high adhesive strength at 3M, accidentally created an adhesive that “sticks well but peels off easily,” which led to Post-it. In most cases, product development projects that start from the seeds usually skip over SWOT analysis, to start search for a need, and the commercialization process. In general, product development projects almost always start with needs. Since competing companies conduct similar needs assessments, they often come up with ideas based on similar information, making it difficult to differentiate their products from others. And since the average consumer does not have innovative needs, it is even more difficult to find innovative products based on those needs. In order to © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 H. Nakazawa, Design Navi for Product Development, https://doi.org/10.1007/978-3-031-45241-3_4


differentiate yourself from your competitors, you need to make a conscious effort to look for "masked needs," that is, hidden needs. There are many reference books on product development starting from needs, so I will not go into detail here, but I will explain some of the key points.

4.1.1 SWOT Analysis to Establish Areas of Need

First, we must determine what areas of product planning are appropriate from a broad, forward-looking perspective. If you plan a product on a whim, even if there is a need (i.e., a customer preference at the time), the need may disappear in a short period if the trend changes. The main principle when starting from needs is to attack from a broad and, to some extent, long-term perspective. To do this, conduct a SWOT analysis and determine the areas of need to be examined from the outside, i.e., from the perspective of opportunities and threats.

Once you have decided on a field, the next step is to figure out where to focus your market. The important thing to remember is that the market segment should not be too broad. In other words, narrow down your target users rather than broadening them. For example, do you want to target men or women, young or old, or people of a particular income level? If we narrow the target down to young men, we can then narrow it even further: are they salarymen in their teens, 20s, or 30s, are they interested in automobiles, do they like outdoor activities, and so on. If you give in to the temptation to expand the market segment to sell to as broad an audience as possible, or to make the product attractive to everyone, you will fail. The "all-purpose" product that "can be used by anyone" does not exist in reality. Even if you did find one, the average person would wonder whether it is really necessary or effective for them, and would end up feeling that it is not relevant to them. What is good for everyone will be only halfway good for everyone. This is where the idea of limiting market segments comes in. We call it the market segment limitation principle.

4.1.2 Gather Information Through Interviews/Surveys

Once the segments of the product to be developed have been determined, information about their needs is collected through interviews and questionnaires. Questionnaires are a convenient and economical way to collect information from a large number of people, even in remote locations, but they should be designed carefully, as the wording of the questions can sway the conclusions of the survey in any direction. When interviewing, it is necessary to draw out the interviewee's deepest thoughts, the masked needs, rather than simply listening. One recommended technique is active listening, a method used in coaching theory [1].


In the past, coaching of athletes involved coaches trying to fit athletes into a mold that they themselves had experienced. In recent years, however, coaching theory has shifted to a method in which athletes are made aware of the mold most appropriate for themselves and are helped to develop their abilities on their own. This has been applied not only in sports but also in business, where it is used to help employees with problems, and their supervisors, find solutions through communication. I believe that if we apply this to marketing interviews, we can extract masked needs. In active listening, the coach never tells the answer, but always makes the person aware of the answer. Therefore, in an interview survey, the questioner (interviewer) should never give the answer. The questioner is empathetic to the other person (client or monitor) and treats them with compassion and acceptance (acknowledging that they are right). In dialogue, questions and backtracking are combined to help the other person find answers (masked needs). Backtracking means that the questioner echoes the other person's words back in the same way the other person said them. The other person is then stimulated by his or her own words to come up with a solution. For example, let's look at the following exchange between a questioner and a customer.

Questioner: "Is that fountain pen easy to use?"
Customer: "Well, it is a little difficult to use."
Questioner: "Oh, it is difficult to use? Why is that?"
Customer: "The ink doesn't come out well at the beginning of writing."
Questioner: "Does the ink always run out at the beginning of writing so that the characters are blurred?"
Customer: "Not always, but when I haven't used it for a long time."
Questioner: "Is it difficult to write when it has been left unused for a long time?"
Customer: "Yes, it is. For example, you can always write quickly with something like a marker; that's why I always use a marker instead of a fountain pen."
Questioner: "You use markers because you can start writing immediately without blurring."
Customer: "Markers are softer to the touch than fountain pens, and there is no blurring at the beginning and no need to refill the ink. However, I feel fountain pens are more formal…"

And so the conversation continues, and this alone gives us a hint of an important masked need. The customer's masked need, I imagine, is "I want a new formal writing instrument that writes beautifully like a fountain pen and allows me to start writing at any time." This was a conversation that the author composed alone, playing the customer as well, but there was no need to craft any statements on the part of the questioner. In other words, active listening lets the other person think of the answer, so the questioner does not have to go to any trouble at all.


4.1.3 Discovering Subjects Using the Meta-Concept Thinking

Next, to avoid falling into what Christensen [2] calls function oversupply, a drawback of sustaining technology, and to avoid merely adding new functions to conventional products, we need to use the Meta-Concept Thinking to go up to higher-level meta-concepts (higher-level objectives) and look for valid subjects. For example, in the fountain pen example above, if we specifically name the conventional product, such as "a fountain pen that allows you to start writing beautiful letters immediately at any time," we would be oversupplying the functions of a sustaining technology. As Christensen says, large companies fail because they are so caught up in improving the functionality of sustaining technologies that they miss disruptive technologies. It is important to express the theme in such a way that disruptive technologies can be found. If you step outside the fountain pen and start from a higher-level concept such as "a new writing instrument with a formal feel," you will not fail. In this sense, there is no particular problem in finding specific subjects from masked needs, since masked needs are often themselves meta-concepts. However, whenever a need emerges as a "concrete product" or a "problem to be solved," a meta-concept must be found.

4.2 Product Development Projects from Current Products

Product development projects that start with the current product literally start with the product or products that the company is currently manufacturing and selling. In this sense, it is an easy-to-imagine and easy-to-use method. This process, the Strategy Canvas Method starting from the current product, is completely different from the other approaches described above. The Strategy Canvas Method was proposed by W. Chan Kim and Renée Mauborgne in "Blue Ocean Strategy" [3]. Here we introduce a version that reinforces the Blue Ocean Strategy with the Meta-Concept Thinking. The steps are as follows.

4.2.1 Find the Current Product Value Curve

First, select a target product that your company currently manufactures and sells. Once the target product has been selected, a list of keywords is created to capture the characteristics of this product, and its value curve is drawn and analyzed. Let's take an MD player (first released by Sony in Japan in 1992) as an example. We use a value curve to visualize the characteristics of this product. We place the characteristics of the MD player on the horizontal axis. Here, we consider seven items: smallness, lightness, largeness of the number of recorded songs, price lowness, playback media, long continuous playback time, and good sound quality. On the vertical axis, the quality/quantity of each characteristic is indicated on a rough scale of high, medium, and low: the higher the value, the better the quality or quantity. Estimate and plot the quality and quantity of the above characteristics. What is important here is to phrase each characteristic item so that the better the item is, the higher it plots. For example, a smaller MD player is better, so the item is expressed as "smallness" rather than "size"; this avoids the misunderstanding that a smaller size should be plotted as "low." However, it is difficult to attach "high" and "low" to the playback media, so we have simply indicated "playback media." Figure 4.1 shows the evaluation of an MD player. The items are arranged so that items at the same level are next to each other for better appearance. The reason we chose "low" for the number of recorded songs is that a single MD medium holds only a small number of songs.

Fig. 4.1 MD player value curve

4.2.2 What is the Meta-Concept of This Product?

Next, think of a meta-concept for this product: the higher-level concept, the real purpose, expressed in keywords. This step does not exist in the original Strategy Canvas Method; by incorporating it, the method becomes an even more powerful product development tool. In the Meta-Concept Thinking, we ask, "What would be the problem without an MD player?" A problem is: "I can't freely listen to my stored music anywhere." So the meta-concept is: "I want to listen to my stored music freely anywhere." This is where we start our thinking.

Table 4.1 Action matrix
Removing function (meta-concept): Playback medium
Adding function (meta-concept): Ease of song input
Reducing function: Length of playback time
Increasing function: Number of songs, price lowness, smallness, lightness, sound quality

4.2.3 New Strategies in the Action Matrix

Based on this value curve, an action matrix (Table 4.1) is developed. The action matrix is divided into four categories, "removing function," "adding function," "reducing function," and "increasing function," and actions are considered for how to improve the characteristics of the current product. The upper cells are for adding or removing functions based on the meta-concept, and the lower cells are for functions not based on the meta-concept. The right side is for adding new functions or enhancing existing ones, and the left side is for removing or reducing existing functions. Since the meta-concept is to store music in any way and listen to it freely anywhere, the "playback medium" (MD in this case) is not strictly necessary, so it should be removed. At the same time, from the meta-conceptual viewpoint, since a playback medium is not necessary to "store" music, "ease of song input" to the device is added. This would be a masked need in marketing terms. In terms of "length of playback time," we accept a somewhat shorter time in exchange for reducing mass and size a little more, on the condition that one can still listen for a Shinkansen round trip between Tokyo and Osaka, with time to spare, without recharging the battery. As technology advances it will become possible to reduce weight and size while lengthening playback time, but at this point we make a tentative decision. For "number of songs," we want to record many times more music than a single MD holds. The "sound quality" should be even better. As for "price lowness," "smallness," and "lightness," the player should be cheaper, smaller, and lighter. Summarizing the above, the action matrix for MD players is shown in Table 4.1.

4.2.4 Drawing a New Value Curve

The results of the above action matrix are shown in Fig. 4.2 as a value curve (●). Compared to the company's conventional product (▲), it is clear that the product has a different appeal. If we develop a product that fits this value curve, we can be sure that it will sell much better than our current product, although the outcome will depend on how much we charge for it. This value curve becomes the functional requirement of the product development project entered from the current product. This would be what Christensen calls disruptive technology [2]. Don't you recognize a concrete product that already exists in this final functional requirement? The example in this section is taken from my book published in 2011 [4], so I apologize if it is a bit old-fashioned, but yes, it is the iPod, which was a bestseller at the time. It would not be possible to develop an iPod-like product today, but at the time, if this had been implemented in a development project before Apple, it would have generated tremendous profits. As mentioned earlier, this method is enhanced by the Meta-Concept Thinking and made more practical than the original Strategy Canvas Method.

Fig. 4.2 Value curves for new portable music players

4.3 Product Development Project from Vision

Let me explain the feasibility study of a product development project that begins with a "vision" (or perhaps it could be described as an aspiration), which is the approach I recommend most. Product development projects that start with a vision do not require complicated market research in advance; rather, they create a new market. Examples include Sony's Akio Morita's Walkman [5] and the flash memory invented by Fujio Masuoka [6], an engineer at Toshiba Corporation. As for the Shinkansen, it was realized through the combined vision of Shinji Sogo, the president of Japan National Railways (JNR) at the time of its construction and known as the "father of the Shinkansen"; Hideo Shima, a JNR chief engineer; and Tadanao Miki, Tadashi Matsudaira, and Hajime Kawabe, engineers at the Railway Technology Research Institute.

What is important here are the four future manifestation patterns explained in the section on future prediction in the previous chapter. To reiterate, they are: (1) "necessity" drives technological innovation, (2) the evolutionary flow is a "line," (3) from concentration to "dispersion," and (4) highly necessary technology "diffuses" according to the law of increasing entropy. In particular, it is very important to think of the vision in terms of necessity, or in terms of a line, the evolutionary flow. This process corresponds to the first three steps of the New Feasibility Study process described in the previous chapter, namely: (1) project description and SWOT analysis, (2) determine the functional requirements and design ranges for the project, and (3) design a system to realize the project. Let us now describe the first six steps of the product development process starting from the vision.

(1) First, articulate the vision
(2) SWOT analysis
(3) Discovery of subjects
(4) Determination of evaluation items and subject selection
(5) Selection of functional requirements and system design
(6) Prototype design.

Although we usually think of a vision as something held by management, there are cases in which it is held by ordinary employees. In such cases, whether or not a good system is in place to incorporate the visions of ordinary employees will greatly affect the prosperity of the company. The above example of Fujio Masuoka's flash memory is a case in point: Toshiba suffered a huge loss in the flash memory field because it ignored the vision of this individual. If this vision had been captured, Toshiba would have achieved great growth by now. Therefore, the visions of individual employees must be valued. Let me explain the process in turn.

4.3.1 First, Articulate the Vision

The first step is to express explicitly what kind of vision you have. Let us define a vision as "the image of the future that you hold in your mind with a fervent desire to realize it." You should know that a vision has the magical power to make itself come true. In other words, by having a dream or vision and believing that you can realize it, your brain starts working in that direction, and, strangely enough, the environment around you also changes accordingly. Gen Matsumoto, a research fellow at the Institute of Physical and Chemical Research (RIKEN) who was involved in the development of the brain computer, states, "When people have dreams, their brains are activated, and their brain circuits form autonomously toward solving problems." Necessary information, resources, and human capital will also gather. This makes it easier to realize dreams and visions.

Let us assume that the management of the hypothetical company implementing this project has a vision to create indoor health equipment that will help people live healthier lives. As of 2020, the new coronavirus (COVID-19) pandemic has completely changed the way we live. People who used to be members of fitness clubs and made an effort to stay fit by attending weekly now need to think about staying fit at home for fear of infection. It is safe to say that visions of indoor health equipment have emerged along these lines. The steps from here on are to be carried out with multiple members.

4.3.2 SWOT Analysis

The second step is a SWOT analysis, which was explained in the previous chapter but will be outlined again here. SWOT analysis is normally used to analyze the external and internal environments surrounding a company in order to formulate management strategies; in this section, it is conducted to place the vision in the flow of time and make it easier to realize. Table 4.2 shows the results of the SWOT analysis conducted for our hypothetical company. The "internal environment" is what the company can change through its own efforts, while the "external environment" is what it cannot control. It is considered best to begin the analysis from the broad perspective of the external environment. In other words, start with the macro factors (political, economic, and social conditions, technological trends, legal regulations, etc.) of the external environment that may affect the company's ability to achieve its objectives, then list the micro factors (market size, growth potential, customers' values, price trends, competitors' trends, etc.), and from these derive the opportunity factors and threat factors. It is of course necessary to pay daily attention to information from newspapers, magazines, TV, the Web, and human networks. After analyzing the external environment, the next step is to analyze the internal environment, i.e., the company's strengths and weaknesses. These are analyzed by examining the company's tangible and intangible management resources, such as production systems, product capabilities, cost structure, sales capabilities, technological capabilities, reputation and brand, finances, and human resources, and classifying them according to whether they are superior or inferior to those of competitors.

Table 4.2 SWOT analysis of our company (as of 2020)
Opportunities:
• Growing health consciousness
• Financial strain on the national health insurance system
• No hit product yet
• Growing elderly population
• COVID-19 pandemic
Threats:
• Increasing fitness clubs
• Increasing joggers
Strengths:
• Manufacturing small LCD monitors
• Active bicycle club
• Manufacturing technology
Weaknesses:
• No health and medical professionals


In our example in Table 4.2, the external environment shows the public's growing health consciousness, which may be related to the growing elderly population. In addition, the financial strain on the national health insurance system must be resolved at all costs; to achieve this, we must improve people's health and lower the per capita cost of medical care. Furthermore, although many companies manufacture and sell health equipment, none has yet produced a hit product. And as of 2020, the COVID-19 pandemic has discouraged the use of fitness clubs, and a trend toward more in-home exercise is emerging. This creates the potential for increased demand for indoor health and fitness equipment. All of this presents excellent opportunities. The threat, on the other hand, is the increasing number of fitness clubs, which may reverse the trend toward exercising at home. Given the pandemic, this may not actually be a threat, but I included it just in case. The increase in the number of joggers also means that the number of people who do not use health equipment is increasing. However, considering that jogging on rainy days is difficult, and that indoor exercise spares people from being seen, there will still be demand for indoor training, especially during the rainy season and in snowy areas. There are several reasons why indoor health equipment has not caught on: it is expensive, its effects are unclear, and Japanese houses are too small to accommodate it. If these issues can be resolved, demand for indoor health equipment can be expected to increase. On the internal side, one of the company's strengths is its active bicycle club in the area of sports. Another is that we manufacture small LCD monitors under subcontract from our parent company, with excellent human resources and facilities in this manufacturing technology, which gives us an advantage over competitors. We also have superior manufacturing technology and facilities for mechanical parts.

Since a SWOT analysis is conducted by multiple members, much more information is actually generated. Once all of this information is gathered, it is organized and integrated using the KJ method [8], focusing on opportunities and threats. Opportunities and threats are grouped because grouped information is easier to process afterwards than fragments. This process is omitted here.

The classic approach taught in business schools is to gather information in this manner, then select an external environment that is compatible with the company's internal capabilities, i.e., its strengths, and formulate a strategy (plan). However, this method of strategy formulation should be avoided, because the company's strengths (core competence) become a system constraint (see Chap. 7) and an ideal product development project cannot be conceived. Rather, we should create product development projects that focus on external opportunities and create new core competencies when our own are inadequate. In other words, we create new core competencies, if necessary, in order to integrate opportunities and visions. The accumulation of these new core competencies enhances the company's technological and product development capabilities, and thus the company's development.

In summary, what is important in discovering subjects from a SWOT analysis is the "outside-in" approach: we must identify the subjects that fit our vision from the "outside," i.e., the "opportunities" in the external environment. This is equivalent to capturing the evolutionary flow of future prediction, because opportunities in most cases indicate that flow. We can also discover subjects from threats by reverse thinking. After that, we consider the "in," the internal environment, from the perspective of our strengths and weaknesses and decide what new core competencies we need to develop. Based on this analysis, an indoor health and fitness product that meets demand has the potential to become a hit product.

4.3.3 Discover Subjects

The next step is to create a specific subject that will become the core of the product development project. We look for a subject that fits the vision, fits the opportunities of the time, and is sensitive to the threats. The key here is the Meta-Concept Thinking. Instead of thinking symptomatically from a list of opportunities, think about what problem would arise if a given opportunity were not addressed, and then think about what should be done to solve it from a higher-level concept (meta-concept); from this, many subjects that fit the vision can be developed. It is recommended that this process be conducted in groups. After several hours of meetings to discover subjects, the number of subjects can grow considerably; with many participants, there can be as many as 100 or so. Some of these subjects are similar to each other, and some become more unique when combined. We then organize and integrate these subjects to narrow down their number. After the consolidation process, there will naturally be many unimportant subjects, which also need to be sifted out. If the list is narrowed down too far, good subjects will be omitted; if too many remain, the next stage of finalizing the subjects becomes difficult. In general, it is desirable to limit the number of subjects to about 5 or 6 at most. In the narrowing-down process, first consult with everyone to determine the number of votes each person can cast. If there are many subjects to vote on, too few votes per person may be unsatisfying, and if there are many participants with many votes each, it may be impossible to narrow the list down; experience has shown that about 3–5 votes per person works well. Voting in this way usually narrows the list to about 3–6 subjects, which is a good outcome.


4.3.4 Determination of Evaluation Criteria and Selection of Subjects

Since the purpose of the aforementioned preliminary selection is to distinguish the good from the not-so-good, it can do so without problems, although it may seem a rather rough approach. However, when it comes to selecting the final proposal at a stage where almost only the main proposals remain, a more precise and rational selection method is required. This is where the FPT comes in handy. The FPT is the most suitable selection method in this case because, as mentioned above, it is also an evaluation method for overall optimization. To do this, we first discuss and decide on the evaluation criteria. Needless to say, it is important to choose the right criteria, since different evaluation criteria will naturally lead to different results. Therefore, the evaluation criteria for the project must be discussed carefully and thoroughly. Here is a typical example.

(1) Does it fit the vision?
(2) Is it profitable?
(3) Is there a need in the market segment?
(4) Is it unique?
(5) Can it be commercialized within one year?
(6) Does it have market potential?

This step is the first in the process of developing a project with the goal of selecting a subject that fits the vision; the feasibility study itself will be conducted after the system has been designed. Unlike engineering problems, objective and quantitative data often cannot be obtained at this stage, so kansei evaluation on a 10-point scale is used. In general, the design range should be the same for all evaluation items, and the meaning of each point may be explained; for example, a score of 10 is "totally top rated" and a score of 6 is "standard rated." The FPT does not allow any weighting between items, so differences in importance are left to the kansei evaluation of each evaluator (monitor). The system range needed when calculating the feasibility probability is the range over which the scores vary, as has been explained many times before; if the mean is m and the standard deviation is s, it is obtained as

SR = m ± 3s

If the upper end of this system range exceeds 10, then 10 is assumed. With a design range of scores of 6 or more, the FPT narrows the candidates down to one final subject. Let's say that the hypothetical subject selected is "an indoor bicycle for fitness and exercise, as if you were riding in the suburbs" (hereafter named "virtual bike for fitness," or "virtual bike" for short).
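To make this selection step concrete, here is a minimal Python sketch (my own illustration, not the book's code; the candidate subjects and scores are hypothetical) that computes the feasibility probability of each candidate from its kansei scores and keeps the highest:

import statistics

def feasibility(scores, dr_low=6.0, top=10.0):
    # System range SR = m +/- 3s, capped at the top of the 10-point scale;
    # design range DR = [dr_low, top]; P = common range / system range.
    m, s = statistics.mean(scores), statistics.stdev(scores)
    lo, hi = m - 3.0 * s, min(m + 3.0 * s, top)
    return max(0.0, hi - max(lo, dr_low)) / (hi - lo)

# Hypothetical scores from eight evaluators, one criterion per subject; with
# several criteria, multiply the per-criterion probabilities (Sect. 2.10).
candidates = {
    "virtual bike for fitness": [8, 9, 7, 8, 9, 8, 7, 9],
    "smart yoga mat":           [6, 7, 8, 6, 7, 7, 6, 8],
    "home climbing wall":       [5, 8, 9, 4, 7, 6, 5, 7],
}
for name, sc in candidates.items():
    print(f"{name}: P = {feasibility(sc):.2f}")
print("selected:", max(candidates, key=lambda k: feasibility(candidates[k])))

Run as written, this prints probabilities of about 0.91, 0.67, and 0.46 and selects the virtual bike. The sample standard deviation is used here, which matches the worked numbers in Sect. 4.4.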


In fact, this subject was already proposed in my book [4] published on January 15, 2011, but to my surprise, on the NHK morning news of July 11, 2020, a video was shown of a virtual road race held over the Internet with equipment that had been partially put into practical use. It was an idea the author had had nine and a half years earlier; since real road races were not possible because of the COVID-19 pandemic, the race was held by inviting participants over the Internet. Although the virtual bike presented in this book has more features than the one introduced in the NHK news, this proved that the subject of this project was not wrong.

4.3.5 Selection of Functional Requirements and Conceptual Design of the System

In this step, the functional requirements of the subject selected in the above process, i.e., those of the virtual bike, are determined, and the conceptual design of the system is based on them. The first task is to create functions that can impress customers. The focus-object method may be effective here. This method was proposed by Whiting [9] in 1955. Some people may find it difficult to use until they get used to it, but it is quite effective and should be tried. In the focus-object method, you first arbitrarily select any object or piece of software that you like as the object to focus on. Next, you list the attributes of this object and generate new ideas (in this case, functional requirements) by forcibly connecting these attributes to the aforementioned subject. For example, Table 4.3 shows an example of creating functions for a new virtual bike, focusing on the author's favorite game of golf. This kind of thinking method is used to create specific functional requirements. In today's world, where the Internet has developed and the virtual world has become commonplace, this virtual bike has the potential to become a promising product (as of 2022). The functional requirements of the virtual bike can be summarized as follows.

(1) Virtual bike for indoor health and fitness.
(2) Virtual bike that can be enjoyed indoors like an outdoor sport.
(3) Images of a Tour de France-like ride are projected on the monitor of the mounted headset.
(4) The rider turns the handlebars in accordance with the road on the monitor and changes the treading force according to the gradient of the road to move forward.
(5) The overall load can be changed according to the level of physical fitness.
(6) Not only the quantity but also the quality of exercise is evaluated and displayed.
(7) Easy to move anywhere indoors.
(8) Guidance by an instructor is available for an appropriate fee on a case-by-case basis.
(9) Exercise results can be scored (e.g., time to complete a given distance, etc.).

Table 4.3 Functional requirements for virtual bikes
(Focus-object — Attributes: attractive functional requirements)

Golf club — The distance to be flown determines the club:
• Variable load according to fitness level
• Achievement depends on the choice of load

Golf ball — Round, travels far, runs:
• Indicators can be better or worse depending on the quality of the practice
• Inertia moves us forward

Golf cart — Easy to move:
• Can move it anywhere in a house
• The function of running is required

Caddie — Supports play:
• Just an annual fee, and a professional checks on us from time to time

Score — Work hard to achieve target scores; competition and difficulty at the same time:
• Score, and experience a sense of accomplishment
• Handicap allows for equal competition even if there is a difference in physical strength

Golf Navi — Be able to see the situation ahead:
• Monitor provides a view of the road

Clubhouse — Lunch and a pleasant talk:
• Have a place to chat with friends, exchange information, enjoy a meal and a cup of tea

(10) Compete with other people who have the same equipment over the Internet, including handicaps.
(11) Even amateurs can participate in virtual road races on the Internet.
(12) Opportunities for members to get to know each other through partnerships with restaurants in the city.
(13) Famous road applications from around the world can be downloaded from the Internet.

In the author's image, the pedal load changes according to the road on the monitor as if one were riding on a real road; the rider steers, brakes when danger appears, and the image stops if the rider goes off the road. The time it takes to reach the destination is scored. A virtual road race with many participants can be held over the Internet. If you use it in your garden or on your balcony, you can exercise while getting sunlight. There are two possible types of monitors: screen type and goggle type. The goggle type provides a more realistic experience, but may be harder to use if sweating cannot be managed. The screen should display not only road images but also information such as exercise time, distance traveled, average speed, calories burned, number of days exercised, and so on. Weight, visceral fat, and body fat can also be measured on the virtual bike at any time, providing constant feedback on health status and motivating the user: "OK, I'll keep going further." The problem will be the price. In order to commercialize this basic functionality, it is necessary to determine which of these functional requirements are "functions that can impress customers"


and balance them with cost. Therefore, we ask monitors to perform a kansei evaluation of each functional requirement. Again, the FPT is used.

4.3.6 Prototype Design

Once the key functional requirements are determined, we design a prototype of the product that incorporates them. This prototype is only a starting point for conducting the final feasibility study, so a rough level of design is acceptable. Next, based on this design, the most satisfactory system for the target segment of users (the target user group) is developed. At this stage a mockup may be created, or a product close to the actual one may be manufactured. The more concrete the product, the more accurate the subsequent feasibility study will be. A powerful tool here is the Design Navi described in Chaps. 5 and 6; using it, the best product can be realized in a short time. A feasibility study using such a prototype enables highly accurate prediction of feasibility. Although this may seem an elaborate process at first glance, it bears repeating that the first half of the project, from 4.3.1 to 4.3.5, is the most important. No amount of time or effort is too much for the first half of the project. It will pay off later.

4.4 Feasibility Study of Product Development Project

The second half of the feasibility study for the product development project is conducted on the virtual bike design, mock-up, or actual prototype obtained above. The flow of the second half is as follows.

(1) Predict the feasibility of all areas in which the system will be involved
(2) Summarize the conclusions of the feasibility study.

First, we list the relevant areas that must be considered with respect to the system (the virtual bike) obtained above. In this case study, the list is as follows.

• Technical feasibility
• Legal feasibility
• Operational feasibility
• Schedule feasibility
• Market feasibility
• Resource feasibility
• Financial feasibility
• Social feasibility


We will examine the feasibility of the best prototype made using Design Navi in these areas. Let's look at how we predict feasibility in each area in turn. The accuracy of the predicted feasibility probabilities has a significant impact on the accuracy of the conclusions of the feasibility study. A theory of accurate prediction must await future research, but Bayesian estimation with a Kalman filter [10], assuming time-series changes in the situation, may be one powerful method. It is beyond the level of this book, so we omit it.

Technical Feasibility P1
Among the functional requirements listed in 4.3.5 in the previous section, one technical requirement is item (4), "The rider turns the handlebars in accordance with the road on the monitor and changes the treading force according to the gradient of the road to move forward." There is no particular parameter or system range here; it is simply a question of whether or not this functional requirement can be realized. Moreover, this is an item that must be realized and, although it may be difficult, it is judged achievable. Therefore, the probability P1 is as follows.

P1 = 1

Legal Feasibility P2
This virtual bike will not violate the Road Traffic Law because it will not run on public roads. If any laws such as the Electrical Appliance and Material Safety Law or the Household Goods Quality Labeling Law apply, they must naturally be cleared.

P2 = 1

Operational Feasibility Probability P3
The vision for this project was to create an indoor health device that helps people live healthier lives, so we predict the degree to which the proposed virtual bike will do so. Since we have a prototype manufactured using Design Navi, we could rent it out to monitors for about a month and test how much their metabolic syndrome improves, how much their blood tests show decreases in neutral fat (triglyceride) and bad cholesterol levels, and other health indicators. For now, let us assume that the monitors give a kansei evaluation based on their impressions of a test ride. The monitors should be people other than members of the feasibility study team. Suppose, for example, that 10 people gave ratings of 7, 8, 9, 9, 7, 6, 9, 8, 8, 8 on a 10-point scale. The author's personal kansei evaluation would be higher, but since this is an exercise, I have assumed these values. Using the mean m and the standard deviation s calculated from the above data, and assuming that the system range is m ± 3s, P3 is obtained as follows.


m = 7.9, s = 0.99
SR = 7.9 ± 3 × 0.99 = [4.93, 10.87] → [4.93, 10] (capped at 10)

Assuming a design range of 6 or greater, the common range is

CR = 10 − 6 = 4

Therefore, the feasibility probability P3 is

P3 = 4/(10 − 4.93) = 0.79

Schedule Feasibility P4
Schedule feasibility probability is considered in terms of the time it takes for the product to reach the distributors. The system range runs from the shortest achievable time, when conditions are ideal for everything from system design to completion of the production vehicle, installation of production equipment, arrangement of subcontractors, adjustment of production conditions (again using Design Navi), and product delivery, to the longest process time if unexpected delays occur. Assuming that the goal is to deliver the product to the distributors within two years of the actual start of the project, and that the system range of the schedule is 1.8–2.2 years, the feasibility probability of the process, P4, is

P4 = (2.0 − 1.8)/(2.2 − 1.8) = 0.5

This alone would cap the overall system feasibility probability at 0.5 or less, meaning the project would be judged infeasible. Therefore, management made a bold decision to set the completion deadline at 2.5 years. During a feasibility study, it often happens that the schedule is changed for the sake of feasibility. This naturally affects the financial and market feasibility probabilities, so if an intractable problem arises in those areas, the change must be withdrawn. As a result, as shown in Fig. 4.3, the system range is now completely contained within the design range, so the system range and the common range have the same length.

P4 = 1
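As a quick check of these figures, here is a minimal Python sketch (my own illustration, not the book's code) that reproduces P3 from the ten monitor scores, P4 for both deadlines, and the overall product of the eight area probabilities used at the end of this section:

import statistics

def uniform_p(sr_low, sr_high, dr_low, dr_high):
    # Feasibility probability = common range / system range
    common = max(0.0, min(sr_high, dr_high) - max(sr_low, dr_low))
    return common / (sr_high - sr_low)

# P3: kansei scores of the ten monitors; SR = m +/- 3s capped at 10, DR = [6, 10]
scores = [7, 8, 9, 9, 7, 6, 9, 8, 8, 8]
m, s = statistics.mean(scores), statistics.stdev(scores)
P3 = uniform_p(m - 3 * s, min(m + 3 * s, 10.0), 6.0, 10.0)   # ~0.79

# P4: schedule SR = [1.8, 2.2] years against deadlines of 2.0 and 2.5 years
P4_before = uniform_p(1.8, 2.2, 0.0, 2.0)   # 0.5 -- project would fail
P4 = uniform_p(1.8, 2.2, 0.0, 2.5)          # 1.0 after the deadline change

# Overall system feasibility probability (Sect. 2.10): product over all areas,
# using the values P5 = 0.7, P6 = 0.95, P7 = 0.98 assumed later in this section
P = 1.0 * 1.0 * P3 * P4 * 0.7 * 0.95 * 0.98 * 1.0
print(f"P3 = {P3:.2f}, P4 = {P4_before:.1f} -> {P4:.1f}, P = {P:.2f}")
# prints: P3 = 0.79, P4 = 0.5 -> 1.0, P = 0.51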

Fig. 4.3 Probability density of schedules

Market Feasibility P5
There are many market-related considerations. As explained in the previous chapter, they include at least eight items: market size, optimal time to market, market sustainability, sales price and volume, service structure and promotion, market size scalability, distribution system, and geographic area of interest. Since we delayed the product launch in the previous section, the analysis of these relationships is very

important. It is not possible to cover all of this here, but considering that a product of this kind, though with fewer functions, was already on sale according to NHK's morning news of July 11, 2020, allow me to cut corners here and assume the market's system feasibility probability as follows.

P5 = 0.7

Resource Feasibility Probability P6
This is the probability of securing the required funds, real estate, and human resources (especially sports medicine specialists). This system feasibility probability is also assumed. Although it could be assumed that securing the optimal human resources is almost 100% possible, we have taken a safe position and predicted the following.

P6 = 0.95

Financial Feasibility Probability P7
As mentioned in the previous chapter, we consider how much money this project will make. There are various measures, but for example we estimate how much cash flow the project will generate in the future: the extent to which we can meet the set targets, how large the possible deviation is, what the economic fluctuations will be, and, if we make a strict estimate, even unexpected disputes and other factors. Since this is an exercise, let us hypothetically set the system feasibility probability as follows.

P7 = 0.98

Unfortunately, the year 2020 saw the pandemic of a new coronavirus, which caused tremendous damage to many industries, including air transportation and tourism. In 2022, moreover, Russia's irrational war broke out and inflation began. The product discussed here may not need a particularly low feasibility probability, because


more people will instead exercise at home under such circumstances. However, the feasibility probabilities of the affected industries may have to be kept quite low to account for such unforeseen circumstances. This would mean that many projects must assume a low feasibility probability for contingencies, and most projects would then be infeasible. So, what if we think about it this way: if, when an unforeseen situation is anticipated, recovery measures can be taken in the relevant area of study, or measures that limit the damage can be incorporated into the project in advance, then the project will return to its normal course when the unforeseen situation passes, and it can still be realized. In this way, we may be able to adopt a high feasibility probability.

Feasibility Probability of Social Contribution P8
This item was partly covered in the operational feasibility section above, but there are also other points to consider, such as whether there will be any negative impact on the environment and whether the company can contribute through employment. For now, let us assume that there are no problems.

P8 = 1

We are now ready to determine the system feasibility probability of the product development project. Although we have assumed the individual feasibility probabilities in a simple way, in reality obtaining them is an arduous and time-consuming task. However, the larger the project, the more the time and effort are worth, since failure is not an option. Also, if the system feasibility probability is low, we have to devise countermeasures and modify the project so that it becomes as high as possible. If you put that much effort and money into the project, you can be confident that it will succeed. Now, let us find the system feasibility probability P of this product development project using the above feasibility probabilities.

P = 1 × 1 × 0.79 × 1 × 0.7 × 0.95 × 0.98 × 1 = 0.51

Since 0.51 is slightly above 0.5, albeit within the error margin, this project is a GO. However, as mentioned in the previous section, the fact that the system feasibility probability exceeds 0.5 should not make us complacent. For this project to succeed, it is essential to have an implementation organization consisting of a leader with passionate leadership and sincere, passionate members.

Again, the evaluation can be assessed accurately by the monitors if a concrete system, i.e., a prototype, is presented. In feasibility studies, it is important to build actual prototypes. If several prototypes are built and used by a large number of monitors for a number of weeks, objective data on the degree of improvement in health can be obtained and evaluated correctly. For example, if we measure how much neutral fat and bad cholesterol have decreased, how much lung capacity has increased, how much metabolic syndrome has improved in terms of changes in waist measurements, and so on,


objective and quantitative feasibility probabilities can be obtained. Since these data can serve as powerful promotional material during marketing, they should definitely be evaluated on actual prototypes. Finally, unpredictable uncertain events are not discussed here in order to simplify the explanation, but they are obviously matters that must be taken into account; in an actual feasibility study, please refer to the method described in the previous chapter to include uncertain events. Now that you have a better understanding of the New Feasibility Study method using the FPT, we hope that you will use this new method to conduct your own feasibility studies. If a project is proposed at your company, please apply this New Feasibility Study method to ensure that it produces results.

References
1. Gordon, T. (1977). Leader effectiveness training. A Perigee Book.
2. Christensen, C. (1997). The innovator's dilemma. Harvard Business School Press.
3. Kim, W. C., & Mauborgne, R. (2005). Blue ocean strategy. Harvard Business School Press.
4. Nakazawa, H. (2011). Trump card for manufacturing. Nikkagiren. (In Japanese).
5. Morita, A. (1986). Made in Japan. Dutton.
6. Bloomberg Businessweek, April 3, 2006.
7. Matsumoto, G. (1998). Love activates the brain. Iwanami Shoten Publishers. (In Japanese).
8. Kawakita, J. (1970). Further idea of creation—Development and application of KJ method. Chuokoronsha. (In Japanese).
9. Whiting, C. S. (1955). Operational techniques of creative thinking. Advanced Management.
10. Fujita, K. (2010). Searching for the invisible—That's Bayes. Ohmsha, Ltd. (In Japanese).

Part III

Realization of Product-Design Navi

Chapter 5

Design Navi

The main subject here is Design Navi, an optimal design method invented by combining the FPT and orthogonal tables. This method was granted a US patent in 1999 and a Japanese patent in 2002; the patents have now expired, more than 20 years after the filing dates, so the method can be used freely. In other words, Design Navi is reliable enough to have been granted patents. Until now we have called it the Nakazawa Method, but since this name says little about its contents, we will call it Design Navi in this book. Now that the know-how for using Design Navi properly has almost solidified, this book presents a compilation of it. Although this chapter and the following one mainly use examples of mechanical systems, there are no restrictions on the areas of application. No matter what field you are in, be it agriculture, aquaculture, cooking, medicine, electricity, chemistry, or materials, you can use this method to efficiently develop the best products and technologies. Please use it and experience its benefits.

5.1 Significance of Design Navi

If the basic structure (idea) is correct, Design Navi can realize a surprisingly good technology or product, and quickly. This fact is borne out by the results of companies that have used Design Navi. The traditional method of working by trial and error may seem easier, but it cannot achieve what you want, and you end up redoing the work many times, wasting money and time. In comparison, you will be surprised at how easily the best can be achieved in a short period if you proceed systematically according to Design Navi, even though it requires some time for experimentation and calculation. If we can produce good products, sales will naturally follow. With this in mind, we want to ensure the work pays off, in the spirit of the saying "more haste, less speed." Here, the use of Design Navi is explained using a


mechanical example of designing a cantilever, but the explanation is designed to be fully understandable to readers in other areas.

Design Navi is a method developed on the basis of the Information Integration Method introduced in Chap. 1 [1]. In a nutshell, Design Navi is a method of determining the optimal values of the critical parameters of the system to be developed. Compared to the conventional style of development, with its repeated trial and error and whack-a-mole processes, in which numerous requirements are left only partly satisfied and the product is shipped at a lowered level of performance because of the sales launch or delivery deadline, this method enables the development of the best product, with all requirements optimized, in a revolutionarily short period of time. Even so, some information is needed to determine the values of the individual parameters, so data must be collected through simple experiments or simulations. Although this takes some time and effort, development can probably be completed in a quarter of the conventional time, experimentation included. It also delivers the highest performance achievable within the basic conception.

For example, given the subject of developing the following paper airplane, how long would it take you to complete it to your satisfaction? [2] The development subject is as follows: "Draw and cut out the shape shown in Fig. 5.1 from a thick card of B6 size, a little larger than a postcard, to make a paper airplane that meets the following functional requirements."

Functional requirement 1: Fly as far as possible.
Functional requirement 2: Reduce the wing area as much as possible (meaning material reduction).

The weight is a single large clip, and the wings and fuselage are shaped as shown in Fig. 5.1.

Fig. 5.1 Drawing of paper airplane


Fig. 5.2 Nine types of airplanes

When we asked a number of engineers (more than 100 in total), "Given a development subject like this, how long would it take you?", more than 90% answered "more than two days." How about you, reader?

When developing with Design Navi, the first step is to create nine types of airplanes using combinations of the dimensions A–D (the upper nine airplanes in Fig. 5.2). The optimum range of dimensions depends on the mass of the weight, so preliminary experiments may be necessary. Next, the flying distances of these planes were measured, the wing areas were calculated, and these two types of data were analyzed using Design Navi (at that time, information content, not probability, was used for the calculation) to determine the best dimensions for a plane satisfying the above two functional requirements. In this case, the design range (target value) for each functional requirement was defined as the average of all data or better. Although it is a prerequisite that the airplanes be carefully made, it is recommended that an experimental catapult be prepared for better reproducibility of the flight distance measurements. From these data, Design Navi determined the optimal dimensions, and the completed airplane is shown at the bottom of Fig. 5.2. The figure clearly shows that the wing area became considerably smaller. Moreover, this airplane flew very far; the performance was so good that it flew from one end of a large gymnasium to the other. This paper airplane was completed in about three hours, including all of the above work! This should give you an outline of Design Navi.

There are too many cases in the world where a good idea or basic structure is found, but the product is commercialized and sold without realizing even half of its original performance. Companies are busy competing with rivals every day, so they cannot spend much time on development. The reality is that they think the product could be better if they spent more time on it, but they run out of time and end up selling it with unsatisfactory results. Design Navi eliminates this kind of waste. In other words, it allows us to develop products that achieve 100% of the inherent performance of the idea.
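The book does not spell out here which orthogonal table generated the nine airplanes, but with the four dimensions A–D at three levels each, the standard L9(3^4) orthogonal array is the natural fit. The following Python sketch is my own illustration; the level values in millimetres are hypothetical placeholders, since the real values would come from preliminary experiments with the chosen clip mass.

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each;
# every pair of columns contains each of the 9 level combinations exactly once.
L9 = [
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
]

# Hypothetical level tables for dimensions A-D (mm)
levels = {"A": [40, 50, 60], "B": [20, 30, 40], "C": [60, 80, 100], "D": [10, 15, 20]}

for i, row in enumerate(L9, start=1):
    dims = {name: levels[name][lv - 1] for name, lv in zip("ABCD", row)}
    print(f"airplane {i}: {dims}")

Each of the nine airplanes is then built and flown, its flight distance and wing area are recorded, and the FPT analysis over these data picks the level combination with the highest system feasibility probability.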


The above is not the only benefit of using Design Navi; the real purpose lies elsewhere. When we look at the development processes of companies, we see many cases where not enough time is spent creating the idea or the basic concept, and the teams later regret that the idea was not good enough. Instead, the product development process should be streamlined through Design Navi, and more time and money should be spent on creating great ideas and basic structures. This is the process that achieves real productivity gains. I hope that Design Navi will change the old, cart-before-the-horse style of development.

The application of Design Navi is not limited to the development of products and systems. It can be used for the development of new technology itself, the development of new materials, and product development in areas such as medicine/chemistry and agriculture/fishing, for the optimization of manufacturing conditions, and even in the early stages of product planning. For example, the optimal conditions for a plant factory, the feed ingredients and optimal conditions that accelerate the growth of farmed fish, or the ratio of ingredients in a tasty dressing in the culinary field: the applications are limitless. It can also be used to improve products currently on sale. If you follow the steps of Design Navi, you may not complete the project in one pass, but if you repeat it several times, you will certainly feel yourself making progress toward completion. And if the idea is wrong, Design Navi will tell you so at an early stage.

Furthermore, as I have mentioned, a more important feature of this method is that it allows us to set target values for all requirement items and guides us to realize all of them in a holistically optimal manner. In industry, development is often done by partial optimization, but Design Navi is a method of total optimization, setting target values for every required item and achieving a good balance among all of them. One might ask how Design Navi can find optimal values; the mechanism is contained in the axioms discussed in Chap. 2. The validity of an axiom is demonstrated by the fact that products and technologies developed on its basis actually achieve the target functionality. Indeed, there is a great deal of evidence achieved by using Design Navi, some of which is presented in Chap. 6.

5.2 Process of Design Navi

Let's start with the overall process of Design Navi. The Design Navi process consists of the following six steps.

(1) Determination of functional requirements and design ranges
(2) Design prototypes
(3) Selection of critical parameters and determination of levels
(4) Collection of data by experiment or simulation based on orthogonal table
(5) Determine the optimal values of the parameters by determining the system feasibility probability
(6) Build a product with the optimal values obtained and check its performance.

Let me explain this process using the example of cantilever development.

5.2.1 Determination of Functional Requirements and Design Ranges

Again, we will use a mechanical example here, but the method can be used in any case. First, the goal of what is to be developed must be explicitly expressed in the title: an expression that is short, easy to say, and clearly understandable. This time we will use the following subject: "Development of a lightweight, deflection-resistant cantilever." A cantilever is a beam with one end fixed.

Next, the functional requirements of the object to be developed must be determined; decide what functions you want the object to have. Generally speaking, even if the title of a project is clearly defined, it is often the case that development work is started immediately upon seeing the title, without clearly and explicitly defining the functional requirements. For example, if the subject is "Let's create a new bicycle," the first thing to be decided is what kind of functionality a new bicycle should have. However, the members may not discuss the functions to be realized sufficiently and may instead develop the bicycle on the spur of the moment, resulting in an inadequate product that fails.

It is very important to determine the functional requirements at the beginning of the development process (not only for development, but for work in general), because the functional requirements determine the vector of development. If we start development without deciding on them, we may end up going in different directions before we know it, or we may fail to achieve unity and consistency in the final stage of development because each team member has a completely different image of the subject. Furthermore, the success of a development depends solely on the quality of the functional requirements. If good functional requirements can be determined, the realized product will sell in large volumes, leading to increased market share and higher corporate profits. To put it in an extreme way, the fate of the company depends on them, so they must be decided carefully and over a sufficient period of time. See Part II for how to determine functional requirements.

As described above, first determine the functional requirements, and then determine the specific target value for each functional requirement, i.e., the design range used in the calculation of the feasibility probability. This is the target point to be reached and the end point of development. If we do not determine these acceptance criteria, we will not know when to finish the development.


A design range may be given as a specification from a specific user if that user is predetermined. When the design range is given from the outside in this way, it is easy; but when we have to decide it on our own, it is difficult, because we have to settle on a reasonable design range that is neither excessive nor inadequate. One approach is the benchmarking method. If there is a similar product (including products from other companies) currently being manufactured and sold, this approach uses the performance of that product as a starting point; the intention is to create products that are better than those currently on the market. This is a method of decision making based not only on the performance of the company's own products, but also on the best performance in the industry. This may be superfluous advice, but instead of competing with rival manufacturers on the same functional requirements, we should set and develop innovative functional requirements that other companies' products do not have.

Determine the functional requirements (FR) and design ranges (DR) for this example as follows. The subject cantilever is subjected to a load of 100 N at 500 mm from the fixed end, and the material used is ordinary steel plate (Fig. 5.3).

FR1: Cantilever with low deflection; DR: deflection less than 0.06 mm.
FR2: Cantilever with light mass; DR: mass less than 2 kg.

To be technical, since the material is steel plate, Young's modulus was taken as 206 GPa and density as 7.86 × 10³ kg/m³. These numbers will not appear in the following explanation; the reader need only follow the flow of the explanation to fully understand how to use Design Navi.

Fig. 5.3 Cantilever (prototype design)

(The figure shows the 100 N load applied 500 mm from the fixed end, the tip deflection δ, and the box cross section with outer dimensions b1 × h1 and inner dimensions b2 × h2.)
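The text keeps the material constants in the background, but for readers who want to reproduce the data that appear later in Table 5.2, here is a minimal sketch (Python). It assumes the standard Euler–Bernoulli formulas for a tip-loaded cantilever with a hollow box cross section; these formulas are this illustration's assumption, not stated in the text, though they match the tabulated values.

    # Sketch of the beam formulas assumed above (all lengths in mm).
    F = 100.0        # tip load [N]
    L = 500.0        # beam length [mm]
    E = 206_000.0    # Young's modulus of steel [N/mm^2] (= 206 GPa)
    RHO = 7.86e-6    # density of steel [kg/mm^3] (= 7.86e3 kg/m^3)

    def deflection_mm(b1, h1, b2, h2):
        """Tip deflection delta = F*L^3 / (3*E*I) of a box-section cantilever."""
        I = (b1 * h1**3 - b2 * h2**3) / 12.0   # second moment of area [mm^4]
        return F * L**3 / (3.0 * E * I)

    def mass_kg(b1, h1, b2, h2):
        """Mass = density x wall cross-sectional area x length."""
        return RHO * (b1 * h1 - b2 * h2) * L

    print(deflection_mm(30, 60, 27, 52))   # ~0.0904 mm (row 1 of Table 5.2)
    print(mass_kg(30, 60, 27, 52))         # ~1.556 kg  (row 1 of Table 5.2)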


There is another important aspect of functional requirements: do not include unimportant requirements. For example, suppose a product under development has several basic performance requirements, and suppose you also want to reduce the noise level a little more if possible (even though the current level is not particularly problematic). Including noise among the basic functional requirements may sacrifice other performance requirements, and the optimal values of the parameters may be completely changed for this reason. This is, in a sense, putting the cart before the horse. Customers primarily want a product with good basic performance in its original form. If the noise level is not very high, noise is not a problem for customers; overthinking and including noise as a functional requirement, with a resulting reduction in basic performance, is a real downfall. In other words, the optimal values of the parameters naturally vary depending on what is taken as the functional requirements, so we must be very careful in how we select them. The basic performance should be carefully checked first, and then the necessary and sufficient functional requirements to meet it should be selected. Incidentally, if further improvement is desired with respect to noise, it is possible to first determine the structure (parameters) that maintains the best performance and then, as a second step, take measures to reduce the noise as much as possible. In general, this two-step approach works.

5.2.2 Design Prototypes

The next step is to design prototypes. Design prototypes at the current level of technology or with the best technology available in your company. Ideally, prototypes should be designed with the best technology and the best design, but they can also be imperfect; in fact, most of the time we start with an incomplete design. Although the time to completion will vary somewhat depending on the starting level, as long as the basic structure is correct, Design Navi will lead you to the best one, so you do not need to worry much about the level of the first prototype.

We use the term "prototype design" here, but do not take it narrowly. "Design" refers not only to the design of a product, but also to the specific technical details if it is the development of a technology, to the operating conditions of the machines, equipment, and systems used in manufacturing if it is the improvement of a manufacturing process, or to the development of software. In other words, the object of prototyping here has a broad meaning: specific technologies, conditions, machines, and systems. If simulations are available, general business plans such as management plans and corporate organizations can also be covered.

Prototype design should follow the axiom of functional independence proposed by Suh [3]. While this is a difficult requirement as a practical matter, if the design does not keep the functions independent, it will be a challenge to make it into a good product later. For example, suppose a refrigerator design has two functional requirements: "to put things in and out" and "to maintain a constant temperature in the cabinet." These two functional requirements should be designed to be independent of each other. A structure in which the internal temperature fluctuates as you put things in and out of the cabinet is a bad design, because the functions interfere with each other. In practice, a perfect solution may not be found due to installation area limitations and cost, but the next best design can be.

So, suppose we design the structure of the first prototype as shown in Fig. 5.3. For the cross-sectional shape, we decided on a box shape because it is resistant to torsion. To simplify the story and make the process easier to understand, we will assume that if the functional requirement for deflection is met, the product will not break in terms of stress.

5.2.3 Selection of Critical Parameters and Determination of Levels

The next step is to find the parameters that most affect the aforementioned functional requirements. The four important parameters that immediately come to mind in the example treated here are b1, b2, h1, and h2. Finding which parameters have the greatest influence on the functional requirements depends largely on the knowledge, experience, and intuition of each engineer.

Once the parameters are determined, the next step is to determine the levels, that is, the values of the parameters at which the experiments will be conducted. When developing, it is often difficult to know what values to set for the parameters. In such cases, it is necessary to conduct preliminary experiments to find the important parameters and to determine in advance the range in which the optimal values are likely to exist. In this example, we calculate in advance roughly how large the structure needs to be, to get a rough idea of it.

Once the parameters are selected and the levels are determined, the process of drafting an "orthogonal table" incorporating them begins. Some readers may not be familiar with orthogonal tables, so I will briefly explain them. The orthogonal table is one of the central tools used in Design Navi. It is a table of designed combinations of experimental conditions, originally established in the discipline of experimental design, which was invented by Fisher (1890–1962). The orthogonal table originated in research on wheat breeding and planting conditions, as a way to examine the effects of the factors under study with the fewest possible experimental samples. Parameters (factors) and levels are assigned in a rational arrangement so that the experimental results, once obtained, can easily be processed statistically. In other words, by arranging the experimental conditions in this way, experimental design can use the statistical technique of analysis of variance to find the factors (parameters) that have the greatest influence on the functional requirements, even with a small number of experiments.


One orthogonal table is shown in Table 5.1. There are many types and sizes of orthogonal tables [4]; the one shown here, L9, is the smallest of the three-level tables. This table is the one most often used in Design Navi when the number of parameters is 4 or less, since it requires the fewest experiments. Cells that line up vertically in the table are called columns, and those that line up horizontally are called rows. The first column is the experiment number. Columns 2 through 5 are assigned the four parameters. The sixth and subsequent columns contain the experimental data on the functional requirements to be evaluated, one column per functional requirement. These data are used to determine the system range needed later in the calculation of the feasibility probability.

In an orthogonal table, the columns where experimental or other data are entered are called evaluation items (corresponding to functional requirements). Basically, there is only one evaluation item (one column) in experimental design and the Taguchi Method, but in Design Navi there can be any number; two items are shown here as an example. In the development of actual technologies and products, there are always multiple evaluation items (in the cantilever example, there are two). In this respect, Design Navi is more realistic and easier to use.

A to D in the first row are the selected parameters. The second and subsequent rows contain the specific numerical values of each parameter, denoted A1, A2, A3, and so on. Since three kinds of values are entered, this is called "assigning three levels." Each value is assigned to a defined position as shown in the table. Nine different combinations of experimental conditions are thus obtained, and the data are collected through experiments or simulation calculations under these combinations of conditions.

Even when there is only one evaluation item, we often see the table used incorrectly. The mistake is to adopt, as the best condition, the combination of parameter levels of the experiment number that gave the best data for the evaluation item.

Table 5.1 Orthogonal table L9

No | A  | B  | C  | D  | Evaluation item 1 | Evaluation item 2
1  | A1 | B1 | C1 | D1 | X1 | Y1
2  | A1 | B2 | C2 | D2 | X2 | Y2
3  | A1 | B3 | C3 | D3 | X3 | Y3
4  | A2 | B1 | C2 | D3 | X4 | Y4
5  | A2 | B2 | C3 | D1 | X5 | Y5
6  | A2 | B3 | C1 | D2 | X6 | Y6
7  | A3 | B1 | C3 | D2 | X7 | Y7
8  | A3 | B2 | C1 | D3 | X8 | Y8
9  | A3 | B3 | C2 | D1 | X9 | Y9
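As an aside, the L9 layout above is easy to generate programmatically. The following minimal sketch (Python; an illustration, not given in the text) codes the levels as 0–2 and builds columns C and D by the standard Latin-square construction.

    # Generate the L9 layout of Table 5.1: for each (A, B) pair,
    # C = (A + B) mod 3 and D = (2A + B) mod 3 make any two columns
    # contain all nine level combinations (this is the orthogonality).
    def l9():
        return [(a, b, (a + b) % 3, (2 * a + b) % 3)
                for a in range(3) for b in range(3)]

    for no, (a, b, c, d) in enumerate(l9(), start=1):
        print(no, f"A{a + 1} B{b + 1} C{c + 1} D{d + 1}")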


In general, the best combination of parameter values does not exist in this table. If we experimented with all combinations of the four parameters at three levels, we would have a total of 81 experiments (3 × 3 × 3 × 3 = 81), and then we could see which combination is best. It is therefore wrong to simply choose one of the 9 combinations in the table. Moreover, Design Navi can find the overall optimal set of conditions from these 9 pieces of information as in-between values that are not even among the 81 possible combinations.

I will omit the analytical meaning, but to explain the term "orthogonal" intuitively: any two columns contain all combinations of levels. Design Navi uses this feature of the table to gather information exhaustively and rationally from a small number of experiments and to find the optimal design values.

Parameters do not necessarily have to be continuous quantities. For example, if you want to take an outsourced part as a parameter and assign it to parameter B, you can assign Company J's part, Company K's part, and Company L's part as B1, B2, and B3. In this case, B is a discontinuous quantity.

The point to keep in mind when assigning levels is that they should be spaced as widely as possible, to avoid the cost and time of repeating the experiment. However, if too wide a range of levels is taken, it is obvious that unusable products will be produced, and such data may be meaningless. Moreover, when the cost of materials is high, we want to avoid such waste as much as possible, so the experiments should be conducted at suitable levels that yield realistic data. As far as possible, you should experiment or simulate with the optimal value bracketed between suitable levels. If you are not sure where the optimal value is, conduct a preliminary experiment, as described above, to get a rough idea. Furthermore, it is advisable to space the levels as equally as possible; the problems caused by unequal spacing are discussed below (see the explanation of Fig. 5.13).

Experiments are time-consuming and costly. If an approximate calculation is possible, we would like to collect as much data as possible by calculation. If a simulator is available, it is best to use it, but an approximate calculation will suffice. Please understand that using the optimal values obtained by Design Navi, even from approximate calculations, is dramatically better than developing by trial and error without it.

The notations X1, X2, X3 and Y1, Y2, Y3 shown in the evaluation-item columns generally mean that each cell contains multiple data. Design Navi calculations must not take the average of these multiple data. Design Navi is designed to find the optimal values of the parameters including the variations in performance, so never use averages; always use the multiple raw data.

As mentioned earlier, there is no limit to the number of evaluation items in Design Navi, but it is advisable to limit them to five or six at most, as too many can make it difficult to find the optimal values. Here the number of parameters is 4, so an orthogonal table L9 is used. If the number of three-level parameters increases to, say, eight, then a larger orthogonal table, L18 (in which only the first column is two-level), is used


(see Appendix). As the number of parameters increases, a correspondingly larger orthogonal table is prepared and used. However, L9 has one drawback: when the interaction between the parameters in columns 1 and 2 (i.e., their mutual influence) is strong, that interaction shows up strongly in columns 3 and 4. This is called confounding. The data for columns 3 and 4 will therefore contain some error due to this influence. To avoid it, assign to the first and second columns, as far as possible, parameters that appear to have little interaction (i.e., that are largely independent of each other). Some say that L18 should be used as much as possible because it is said to suffer less from this effect, but using L18 means twice the work and cost. In Design Navi, we do not mind a little interaction effect. Engineering is all about producing a good product, so we recommend using L9 as much as possible to save time and money and to promote efficient development.

Fortunately, as will be explained later, interaction effects are rarely a concern, because Design Navi allows the optimal value of each parameter to be determined independently. Interaction means that, for example, the optimal value of A depends on the values of other parameters such as B and C; in other words, the optimal values of the parameters cannot be determined independently of each other. However, as you can see from the orthogonal table in Table 5.1, when calculating the system range for A1, there are three rows at level A1, and beside them the parameters B, C, and D each appear at all three levels. Thus, no matter what values are finally chosen for B, C, and D, the optimal value of A is determined while their effects are taken into account. In other words, Design Navi also accounts for interaction effects in determining the optimal values of the parameters. In fact, it is clear that L9 is sufficient for many of the products developed with Design Navi, as they have achieved their target performance even with such effects.

Although we have discussed three-level orthogonal tables above, two-level orthogonal tables can also be used. Their disadvantage, however, is that they can only determine which of the two level points is better; if the optimal value lies in between, it cannot be found.

5.2.4 Collection of Data by Experiment or Simulation Based on Orthogonal Table

Now comes the process of obtaining the data needed for the system range, either by experiment or by simulation or approximate calculation, based on the orthogonal table. It is preferable to obtain the data by simulation as much as possible, since avoiding experiments is considerably more advantageous in terms of time and cost. Even if simulation is not possible, an approximate calculation, though slightly less accurate, will give much better results than conventional trial-and-error, half-baked development.


Different simulators can be used for different evaluation items. For example, a thermal simulator can be used for evaluation items related to heat, while another simulator for fluid calculations can be used to evaluate fluid-related characteristics. In any case, we want to save the trouble of experimentation as much as possible. In this sense, we would like to limit the number of parameters to four and use the L9 orthogonal table.

Even so, simulations and approximate calculations are only approximations. In a simulation it is often impossible to vary the external environmental conditions (which subtly affect the results), and especially when developing in a new field, the subtle effects of the external environment cannot be predicted by calculation, so collecting data by experiment remains the royal road to success.

The basic experimental conditions are as given in the orthogonal table, but there are other external conditions that affect the data: for example, room temperature, humidity, variations in characteristic values between lots of purchased parts, variations in the input values to the development target, and many other things. One way to handle these many influences is as follows. If the combination of external conditions that produces large values of the evaluation items and the combination that produces small values are known in advance, data can be taken under these two combinations. In other words, taking data under both extremes allows us to determine a wide system range.

The multiple data that go into each cell of an evaluation item are raw data obtained from experiments or simulations. As mentioned before, never use the average of the data: averaging filters out the variation, and the characteristics of the variation are lost. Design Navi must never use averages, since its main feature is to determine realistic optimal values that take variations in usage conditions into account (i.e., the best performance no matter what the usage conditions are).

Now let us go back to the cantilever example and do the calculation. In this case, the mass (weight) can be obtained by calculation, and the deflection can also be calculated from the mechanics of materials, which is equivalent to obtaining data by calculation. We can also predict by preliminary calculation where the optimum value is likely to be, so we have chosen levels that bracket the optimum value. Table 5.2 shows the calculation results for deflection (evaluation item 1) and mass (evaluation item 2). There is no repeated calculation here under different external conditions, so there is only one piece of data in each cell. The mean in the bottom row is the average of the data in each evaluation-item column; it is calculated because this value may be used as the design range when no special value is specified. In that case, it simply means that a product better than the average should be developed.


Table 5.2 Orthogonal table for finding the optimum shape of the cantilever (dimensions in mm)

No   | b1 | h1 | b2 | h2 | Deflection [mm] | Mass [kg]
1    | 30 | 60 | 27 | 52 | 0.0904          | 1.556
2    | 30 | 65 | 28 | 55 | 0.0678          | 1.611
3    | 30 | 70 | 29 | 58 | 0.0524          | 1.643
4    | 32 | 60 | 28 | 58 | 0.1675          | 1.163
5    | 32 | 65 | 29 | 52 | 0.0515          | 2.248
6    | 32 | 70 | 27 | 55 | 0.0374          | 2.967
7    | 34 | 60 | 29 | 55 | 0.0964          | 1.749
8    | 34 | 65 | 27 | 58 | 0.0596          | 2.531
9    | 34 | 70 | 28 | 52 | 0.0314          | 3.631
Mean |    |    |    |    | 0.0727          | 2.122

5.2.5 Determine the Optimal Values of the Parameters by Determining the System Feasibility Probability

Now, from here, we begin the process of finding the optimum values of each parameter to achieve a light beam with minimal deflection. This is the core of Design Navi. The basic flow of the calculation is the same as that of the FPT. In other words, if we find the feasibility probabilities of deflection and mass, multiply them together to obtain the system feasibility probability, and build the product with the parameter values that have the highest probability of achieving these two requirements, we will have a beam that realizes both requirements with the highest probability.

The concrete calculations are as follows. Let us find the feasibility probability of deflection for the parameter b1. First we find the system range of deflection at the position where b1 is 30. Looking at the orthogonal table, we see that 30 appears in three rows (rows 1 through 3). Therefore, we find the average value m of the three data 0.0904, 0.0678, and 0.0524 corresponding to b1 = 30, treated as a group:

$$m = \frac{0.0904 + 0.0678 + 0.0524}{3} = 0.0702$$

The standard deviation s is calculated by substituting the above three numbers for $x_i$ in the formula from Chap. 2:

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - m)^2} = 0.0191$$

Therefore, the system range at a b1 value of 30 is $m \pm ks$.
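As a quick numeric check, the following minimal sketch (Python; an illustration, not part of the original text) reproduces these values:

    # Mean, sample standard deviation (n - 1 in the denominator), and
    # system range m +/- k*s for the deflection data at b1 = 30 (rows 1-3).
    import statistics

    data = [0.0904, 0.0678, 0.0524]
    m = statistics.mean(data)       # 0.0702
    s = statistics.stdev(data)      # 0.0191
    k = 1.0
    print(m - k * s, m + k * s)     # ~0.0511 and ~0.0893, as used below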


Since the FPT requires an absolute value of the feasibility probability, the value of the system range coefficient k there is 3, as explained in Chap. 2. In Design Navi, however, what matters is the relative feasibility probability between levels of the same parameter. So k is taken as 1, because the sensitivity for finding the optimum value becomes dull if the system range is stretched up to 3 times. But as the design range becomes tighter, the range over which both the upper and lower points of the system range fall outside the design range becomes wider, making it harder to find the true optimum value; in that case, increase k.

In this case, the upper value of the system range is 0.0893 and the lower value is 0.0511. These two points are plotted on the line in Fig. 5.4 where b1 has the value 30. Similarly, the system ranges for the positions 32 and 34 are calculated, and the points on each line are plotted. For example, the system range for 32 is calculated from the three data 0.1675, 0.0515, and 0.0374. The reason there is such a wide variation in deflection is that, if we look at the three rows of the orthogonal table where b1 has the same value, all levels of the other parameters are included, as mentioned above. In other words, the spread of the deflection values is due to the variation of all the other parameters. That is, no matter what the values of the other parameters are, the deflection at that position of b1 always falls within the system range calculated here.

Now that the upper three points and the lower three points of the system range have been obtained, the upper three points are connected with a quadratic curve, and the lower three points likewise. Figure 5.4 shows the upper and lower system range curves obtained in this way. The design range of deflection is the dark area of 0.06 or less.

Fig. 5.4 System range curves of deflection at b1 (k = 1)


Fig. 5.5 Deflection feasibility probability curve for b1 (k = 1)

Based on the above preparation, the feasibility probability at each position is obtained by dividing the interval between 30 and 34 into several equal parts; connecting these points gives the deflection feasibility probability curve. Figure 5.5 shows the result of the calculation with the interval divided into 100 equal parts as an example. Similarly, the feasibility probability curve for mass is shown in Fig. 5.6. The left end of the mass feasibility probability curve is partially straight, indicating that the feasibility probability over that range is 1. The curve for mass has exactly the opposite trend to that for deflection; in other words, deflection and mass are in a trade-off relationship.

Next, the system feasibility probability is obtained by multiplying the two feasibility probabilities at each position along the x-axis. The resulting system feasibility probability curve is shown in Fig. 5.7. The system feasibility probability is maximum at b1 = 30.4 (the location of the optimum value and the value of the system feasibility probability there are shown as xy-coordinates in the figure), so this is the optimum value of b1 that best satisfies the design ranges required for deflection and mass.

In this example, three points are connected by a quadratic curve to obtain the system range curve, but this assumes that there are no abnormal changes between levels (between 30 and 32, and between 32 and 34) that a quadratic curve cannot represent. For example, if there are abnormal changes such as resonance phenomena in electric circuits or mechanical systems, or phase transitions in chemical reactions, the points cannot be connected by a quadratic curve. Only the values at the three level points can be used; in other words, such cases cannot be treated as continuous quantities, so only the system feasibility probability at each level position is usable. If you want to obtain a particular value between levels, you need to set a new level in that vicinity and perform the experiments again.


Fig. 5.6 Mass feasibility probability curve for b1 (k = 1)

Fig. 5.7 System feasibility probability curve for b1
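For readers who want to trace Figs. 5.4–5.7 numerically, here is a minimal sketch (Python). It assumes, for simplicity, a uniform distribution over the system range, so the feasibility probability at each position is taken as the fraction of the system range lying inside the design range; the exact treatment in Chap. 2 uses the normal distribution. The endpoint values are the m ± s system ranges (k = 1) computed from the Table 5.2 data.

    import numpy as np

    levels = np.array([30.0, 32.0, 34.0])
    # System-range endpoints (m - s, m + s) per level, from Table 5.2:
    defl = {"lower": [0.0511, 0.0141, 0.0299], "upper": [0.0893, 0.1569, 0.0951]}
    mass = {"lower": [1.559, 1.218, 1.692],   "upper": [1.647, 3.034, 3.582]}

    def feasibility(x, rng, dr_max):
        # Quadratic curves through the three lower and three upper points
        # (as in Fig. 5.4), then the overlap fraction with the design range.
        lo = np.polyval(np.polyfit(levels, rng["lower"], 2), x)
        up = np.polyval(np.polyfit(levels, rng["upper"], 2), x)
        return np.clip((dr_max - lo) / (up - lo), 0.0, 1.0)

    x = np.linspace(30.0, 34.0, 101)                  # 100 equal divisions
    p_sys = feasibility(x, defl, 0.06) * feasibility(x, mass, 2.0)
    print(x[np.argmax(p_sys)])                        # -> 30.4

Even with this simplified probability model, the maximum lands at b1 = 30.4, the optimum reported in the text.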

There are two points to note here. First, because it is troublesome to calculate the system feasibility probability by subdividing the whole range between the levels, a shortcut is often adopted: calculate the feasibility probabilities at the three levels first, approximate these three points by a quadratic curve, and take the point where that curve is maximum. This is called "approximate calculation" of the feasibility probability. In conclusion, this approximate calculation should not be used, because there is a risk of obtaining the wrong optimal value [5].

The second point concerns the case where the system range curve extends into a region that does not exist in reality. For example, a negative power consumption would mean that power is generated instead of consumed, which is not possible. However, the system range calculation is only a device for determining the feasibility probability, so the calculation must always include the negative side as well.

Next, let us examine h1. The system range curves of deflection for h1 are shown in Fig. 5.8. One thing to note here is that below an h1 value of around 63, both the upper and lower system range curves are outside the design range; in other words, the feasibility probability is zero over this range. The wider the range over which the system range falls completely outside the design range, the more difficult it is to find the correct optimal value. In the extreme case, if the feasibility probability is zero at two of the three levels (here, levels 1 and 2), the optimal value cannot be found. In such cases, the levels must be reassigned and the experiments redone. Re-experimentation is undesirable because it is expensive and time-consuming, so it is preferable to obtain a feasibility probability curve using only the data already taken, even if it is an approximation with reduced accuracy. The method adopted is to increase k so that at least one of the upper and lower system range curves lies largely within the design range and the feasibility probability is greater than zero over most of the range. However, it is recommended to limit k to 2, since raising it beyond 2 may result in a larger error. The discussion in this area has not yet been established academically, and we await further research.

Fig. 5.8 Deflection system range curves for h1 (k = 1)


So, if we calculate the system range with k = 1.5, at least the lower system range curve lies completely within the design range, as shown in Fig. 5.9, and the zero portion of the feasibility probability disappears. The system feasibility probability curve for h1, calculated with k = 1.5, is shown in Fig. 5.10. Since no such problem arises for b2 and h2, their system feasibility probability curves are calculated with k = 1 and shown in Figs. 5.11 and 5.12. Please recall that each parameter can be calculated independently.

Fig. 5.9 Deflection system range curves for h1 (k = 1.5)

Fig. 5.10 System feasibility probability curve for h1 (k = 1.5)

Fig. 5.11 System feasibility probability curve for b2 (k = 1)

Fig. 5.12 System feasibility probability curve for h2 (k = 1)


Fig. 5.13 Case where the upper and lower system range curves are reversed

In this way, the optimal values of all four parameters were obtained; they are summarized in Table 5.3.

To summarize the above treatment of k: when either the upper or the lower system range curve falls within the design range, k = 1, which has good sensitivity, is used in the calculation. For example, when the average value of the data is used as the design range, k = 1 is acceptable, because the feasibility probability is rarely zero. However, as the design range becomes tighter, k = 1 results in a wide range of zero feasibility probability, making it difficult to find the true optimum value. In such cases, k of 1.5 or more, but less than 2.0, is used.

Finally, we discuss the phenomenon of the upper and lower system range curves being reversed. This sometimes occurs when the levels are set at unequal intervals. In the example of Fig. 5.13, when the levels of P are set to 5, 6, and 9, the system range is inverted between levels 6 and 9, making it impossible to calculate the correct optimal value. Therefore, data should be taken with the levels as equally spaced as possible; if this is not possible and the system range curves are inverted, the only correct values are those at the level positions. In other words, the feasibility probability cannot be treated as a continuous quantity, and only the values at the level positions can be used.

Table 5.3 Performances at the optimal values

Parameter / evaluation item   | b1   | h1   | b2 | h2   | Deflection (mm)               | Mass (kg)
Optimum value and performance | 30.4 | 66.1 | 29 | 53.9 | 0.0573 (Target: 0.06 or less) | 1.754 (Target: 2.0 or less)


5.2.6 Build a Product with the Optimal Values Obtained and Check Its Performance

In the final step, the system is built with these optimal values and checked to see whether it achieves the target values. If the performance is in line with the targets, development is complete; if not, the levels are reassigned and the same process is repeated. When making corrections and redoing the process, we can estimate parameter values that are sure to produce better results than before, so we do not proceed with development worrying about whether it will work, as in the conventional trial-and-error process. This point will be explained later.

Now let us go back to the above example and see whether the performance at the optimal values is achieved. In this case, we can check by calculation without experimentation. The results are shown in Table 5.3. It is clear from them that the target values for both deflection and mass are achieved in an overall optimum manner if the cantilever beam is built with these optimum values. The development is therefore complete.

Then what if a more difficult request is made? For example:

FR1: Deflection: 0.05 mm or less.
FR2: Mass: 1.6 kg or less.

In other words, we want less deflection and less mass. The optimal values and final performance in this case are shown in Table 5.4. Here, k = 1 resulted in a feasibility probability of zero over a large range of h1, so k = 2 was used for all parameters. As a result, neither the deflection nor the mass achieved its target.

Now, what should we do when the required values cannot be met in this way? In such a case, we first check whether the optimal value is well captured by the levels of the initial design. Looking at the system feasibility probability curve of h2 in Fig. 5.14, we can expect better results than at present, since the feasibility probability will increase if we move h2 further to the right than the right end of the curve. It is important to note that if it turns out that h2 should be larger, the orthogonal table must be rearranged and re-examined with the new combination of conditions. It is a mistake to estimate only this one parameter from the graph and set it to a larger value. If you move the level of any one parameter outside the examined range, the optimal values of the other parameters will also change, the balance among the optimal values of the parameters as a whole will be lost, and ultimately the performance will not be achieved. You have to experiment again with a new orthogonal table, even if it is troublesome.

Table 5.4 Performances at the optimal values (case where the target values are stricter, k = 2)

Parameter / evaluation item   | b1   | h1   | b2   | h2 | Deflection (mm)               | Mass (kg)
Optimum value and performance | 31.2 | 68.6 | 28.2 | 58 | 0.0521 (Target: 0.05 or less) | 1.986 (Target: 1.6 or less)


Fig. 5.14 System feasibility probability curve for h2 (k = 2)

Another way to improve is to check whether any important parameter was overlooked. In this example, in fact, one important parameter was not taken into account: a parameter addressing the fact that the beam carries extra material (mass) toward the tip, where it contributes little to reducing deflection. If a new parameter that makes the cross section thinner toward the tip is introduced, it may be possible to meet this strict requirement.

If it still does not work, then the basic structure that was initially designed to meet this requirement was wrong, and Design Navi is telling us that this basic structure must be fundamentally changed. In the conventional development process, we often end up wasting time and money on repeated trial and error because we cannot get this kind of information, that is, the information that "the basic structure is incorrect." Design Navi eliminates such waste and leads you to the right solution quickly. In this example, a different basic structure means considering, for example, installing a support at the tip, or adopting expensive but rigid and lightweight materials.

There is one more important point to consider in the verification experiment. It is necessary to confirm, through repeated experiments under different external conditions, that ±3σ of the performance (functional requirement) is within the design range (assuming a normal distribution, 99.7% then falls within the required range) (Fig. 5.15). Here σ is the standard deviation assuming a normal distribution of the data; if this is confirmed, the required performance is guaranteed with good reproducibility.

When changing the external conditions, we must also make sure that the performance stays within the design range even under the extreme conditions under which the user is likely to use the system. If, when taking data according to the orthogonal table in the early stages of development, the experiments were conducted under the extreme conditions of use that can be expected, the optimal values obtained by Design Navi take this into account.

Fig. 5.15 Result of performance confirmation experiments

This is a bit technical, but as I mentioned before, in this type of beam problem we need to confirm that the beam neither breaks nor permanently deforms under this load. More technically, we also need to examine whether the vertical walls of the beam will buckle, since they are quite thin (0.75 mm). These considerations are omitted here; in general, if the deflection is kept below a certain value, there is additional margin for stress.

Even when the optimum values of the parameters have been determined, the question remains how much deviation from these values will result in a loss of performance; in other words, how to determine the tolerances of each parameter to ensure safety in quality control. In this case, an orthogonal table is created by estimating the range within which quality can be maintained (the range within which the tolerances will be set), dividing it into three levels, and re-experimenting. Since the aforementioned performance verification experiment has confirmed that all functional requirements are within the design range, there should naturally be a range in which the system feasibility probability is 1 for every parameter. Therefore, if the parameter tolerances are set within the range of system feasibility probability 1, quality is assured. Needless to say, in view of these data, it is better to set the tolerances as wide as possible to lower the production cost.

There is one caveat in performance verification experiments. Although Design Navi can find the optimum values for the parameters of a given structure, it does not guarantee that the target values will be achieved with those optimum values. The role of Design Navi is only to draw out the best performance of the given system; it is not Design Navi's responsibility if the performance is not achieved in the verification tests. It only means that the basic system is inadequate or incorrect. If the basic system is wrong, the goal cannot be achieved no matter how fully the best performance of that system is realized.


5.3 Performance Prediction

Suppose, for example, that the optimal values for performance P have been obtained as A_OP to D_OP, as shown in Fig. 5.16. Naturally, the system range for performance P has already been obtained for each of the four parameters A–D. From this information, the range of variation (system range) of performance P at the optimum value of each parameter can be obtained as S_A to S_D. What this figure means is that the expected range of performance P corresponding to each parameter is different. For example, if parameter A is fixed at its optimal value A_OP, the expected range of performance P is S_A, but if parameter B is fixed at B_OP, the expected range is S_B, which differs from S_A. The expected range of performance P that the system can achieve when all parameters are fixed at their optimal values must therefore be the intersection of all these system ranges, i.e., the common system range. In the case of this figure, the upper ends of the system ranges satisfy

$$P_{B2} > P_{C2} > P_{A2} > P_{D2}$$

and the lower ends satisfy

$$P_{C1} > P_{D1} > P_{A1} > P_{B1}.$$

The expected range of performance P when the system adopts these optimal parameter values is therefore

$$P_{D2} > P > P_{C1}.$$

Fig. 5.16 Performance prediction

Thus, we can only say that the central value of this range is the most probable expectation. This performance range is generally wide, because the initial orthogonal table has widely spaced levels. If one wants to predict the performance over a narrower range, one can rerun the orthogonal table with narrower levels above and below the optimal values of the parameters. Using the system ranges obtained from this re-experimentation, the above calculation predicts a narrower, more pinpoint performance range. However, since the orthogonal-table experiments must be performed again, this is not very useful unless such a performance range is really needed.
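A minimal sketch of this common-range calculation (Python; the numbers are hypothetical, chosen only to match the ordering of the inequalities above):

    # Per-parameter system ranges S_A..S_D of performance P at the optima
    # (hypothetical illustrative values). The common range is their
    # intersection: lower end = max of lower ends, upper end = min of uppers.
    S = {"A": (1.2, 3.1), "B": (0.9, 3.6), "C": (1.6, 3.3), "D": (1.4, 2.8)}
    lower = max(lo for lo, _ in S.values())    # P_C1 in the notation above
    upper = min(hi for _, hi in S.values())    # P_D2
    print(lower, upper, (lower + upper) / 2)   # common range and its center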

5.4 Case When the Design Range Is a Positive Single Value

So far, we have discussed cases where the design range is below or above a certain value, or has a certain width. In practice, however, there are cases where we want the design range to be a single positive value with no particular width. For example, you may want to produce a film with a thickness of 0.1 mm, or to mass-produce a 50 μF capacitor. In reality, tolerances are specified, so it is sufficient to manufacture the film within a tolerance range of 0.1 ± 0.01 mm, or the capacitor within 50 ± 0.1 μF, and so on. Even so, such a design range is so narrow that it can be regarded as a single value.

A single-valued or very narrow design range will most likely be completely contained within a much wider system range (Fig. 5.17). When this happens, the feasibility probability takes the same value no matter where within the system range the design range is located in the vertical direction. In the case of the figure, the feasibility probability at the center of the parameter axis is small because the system range there is wide, while the values at the two ends are larger because the system range there is narrow. If the system range has the same width at both ends, which end should be adopted? Actually, it does matter which one is adopted: the position of the design range is the issue. The parameter value for which the design range is closer to the vertical center of the system range is the correct optimal value. As you can see in Fig. 5.17, as the system range gets narrower and narrower, the design range at the left end goes out of the system range sooner; in other words, it is more difficult to realize the goal at the left end. Therefore, the rightmost parameter value is the optimal value in this case. In short, when the feasibility probabilities have similar values, the closer the central (mean) value of the system range is to the center of the design range, the better. However, if the design range is a narrow or single positive value, this information does not appear in the calculated feasibility probability.

Fig. 5.17 Case where the design range is narrow

So, what can we do about this? We can transform the data themselves so that values close to the target stay close while values farther away are pushed farther and farther out, thus stretching the system range. The method is to use the square error. The square error, denoted $e_i^2$, is the squared deviation of each measured value from the target value. If we want to keep the deviation within a required range $e_d$, we take a design range of $e_d^2$ or less. If we redefine the data and the design range in this way, the farther a value lies from the target, the larger the transformed number becomes and the wider the system range becomes; the feasibility probability then becomes small, and that parameter value is not chosen as the optimal one. The farther the data lie outside the design range, the more the system range widens, which means the sensitivity is raised in the opposite sense. The advantage is therefore that the optimum value is found with higher sensitivity than with the simple difference e.
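A minimal sketch of this square-error device (Python; the numbers are hypothetical, using the film-thickness example from the text):

    # Transform raw data into squared deviations from the target so that a
    # single-valued target becomes an ordinary smaller-the-better item.
    target = 0.1                       # desired film thickness [mm]
    e_d = 0.01                         # allowed deviation [mm]
    measured = [0.104, 0.097, 0.112]   # hypothetical raw measurements
    eval_item = [(x - target) ** 2 for x in measured]
    design_range = e_d ** 2            # require e_i^2 <= e_d^2
    print(eval_item, design_range)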


5.5 Feasibility of Functional I/O Relationship

Not every product requires a single fixed output. For example, an automobile may be required to achieve a certain relationship between how far the accelerator pedal is pressed and the acceleration of the car; the same is true for deceleration. A lighting fixture with variable illuminance may be required to have a nearly linear relationship between the rotation angle of the adjustment knob and the illuminance, and likewise for the relationship between knob rotation angle and volume in audio equipment. The subject of this section is how to realize a system with such an input–output relationship.

Figure 5.18 shows a simple input–output relationship. To achieve this kind of relationship, you determine the design ranges for a and b and include a and b as evaluation items in the orthogonal table. Of course, if other evaluation items exist, they are added as well. Then, under the conditions corresponding to each row of the orthogonal table, the input is varied and the output is measured to obtain the relationship shown in the figure. From the line representing the input–output relationship, a and b can be entered as data in the cells of the evaluation items. Naturally, the values of a and b will vary as the experiment is repeated, but by applying Design Navi in this way, the optimal parameter values that realize the most totally optimal input–output relationship can be obtained, including these variations and any other evaluation items.

A further extension of this problem is that we may want to produce an output y that correctly follows a nonlinear functional relationship with the input x. Written in the form of an equation:

y = f(x)

If f(x) is simply a linear relationship, the aforementioned approach is sufficient. However, if you want to realize a more complex nonlinear input–output relationship, such as the one shown in Fig. 5.19, you can apply Design Navi using the square error.

Fig. 5.18 Case of a linear functional relation (y = ax + b)
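A minimal sketch of how a and b can be extracted for each orthogonal-table row (Python; measure_output stands in for the actual experiment or simulation and is hypothetical):

    import numpy as np

    def io_evaluation_items(inputs, measure_output):
        """Sweep the input, measure the output, and fit y = a*x + b
        by least squares; a and b become the evaluation-item entries."""
        outputs = [measure_output(x) for x in inputs]
        a, b = np.polyfit(inputs, outputs, 1)   # slope, intercept
        return a, b

    # Example with a hypothetical noisy device:
    rng = np.random.default_rng(0)
    a, b = io_evaluation_items(np.linspace(0.0, 10.0, 11),
                               lambda x: 2.0 * x + 1.0 + rng.normal(0.0, 0.1))
    print(a, b)   # ~2.0 and ~1.0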


Fig. 5.19 Case where the functional requirement is a nonlinear functional relation (definition points x1, x2, x3 on the target curve, with outputs y1, y2, y3 and errors e1, e2, e3)

The case of an L9 orthogonal table is illustrated in Table 5.5. A sufficient number of definition points are placed on the curve (the figure shows 3 points for simplicity, but the number can be increased for more complex curves). The output y is measured for each input x, and the error e from the original curve is obtained. Each evaluation item is calculated as the square of the deviation e from the target value, as described above. In this table, only the functional relationship is listed as an evaluation item, but naturally all other functional requirements would occupy further columns to the right of column y3; for simplicity of explanation, they are omitted here. By devising the evaluation items in this way, general functional relations can be treated in the same way as ordinary evaluation items.

Table 5.5 Case where the functional requirement is a nonlinear functional relation

No | A  | B  | C  | D  | y1   | y2   | y3
1  | A1 | B1 | C1 | D1 | e11² | e21² | e31²
2  | A1 | B2 | C2 | D2 | e12² | e22² | e32²
3  | A1 | B3 | C3 | D3 | e13² | e23² | e33²
4  | A2 | B1 | C2 | D3 | e14² | e24² | e34²
5  | A2 | B2 | C3 | D1 | e15² | e25² | e35²
6  | A2 | B3 | C1 | D2 | e16² | e26² | e36²
7  | A3 | B1 | C3 | D2 | e17² | e27² | e37²
8  | A3 | B2 | C1 | D3 | e18² | e28² | e38²
9  | A3 | B3 | C2 | D1 | e19² | e29² | e39²
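A minimal sketch of how one row of Table 5.5 is filled in (Python; the target curve f and the measured outputs are hypothetical illustrations):

    def squared_errors(xs, f, measured):
        """Squared deviations e_k^2 of measured outputs from the target
        curve f at the definition points xs (one orthogonal-table row)."""
        return [(y - f(x)) ** 2 for x, y in zip(xs, measured)]

    xs = [1.0, 2.0, 3.0]                           # definition points x1..x3
    f = lambda x: x ** 2                           # hypothetical target curve
    row = squared_errors(xs, f, [1.1, 3.8, 9.3])   # one experiment's outputs
    print(row)                                     # -> [e1j^2, e2j^2, e3j^2]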


5.6 How to Choose Parameters

Development with Design Navi is easy if all the important parameters are selected, but if even one important parameter is missing, it is necessary to find it, reassemble the orthogonal table, and conduct the experiments again. As explained earlier, Design Navi will tell you when an important parameter is missing, but if you do not identify the missing parameter from the beginning, you will spend extra time and money. It is therefore preferable, for efficient development, to know all the important parameters from the start.

As mentioned earlier, functional requirements are preferably independent of each other, with no overlap. If they are independent, parameters should be chosen corresponding to each functional requirement; in other words, the number of parameters should match the number of functional requirements. If such a situation is not feasible in practice, however, the numbers need not be the same.

Finding the parameters that matter has long been the kind of story that leads to great scientific discoveries. For example, if the objective is to increase the growth rate or yield of a plant and we are trying to find the factors (parameters) that govern it, we can look at light intensity, light spectrum, light-dark cycles, temperature, humidity, nutrients, new and unconventional nutrients, how the plant is watered, and so on; once you start thinking about it, there is a huge number of factors to examine. Such problems cannot be handled by Design Navi; it is also a matter of the skill of the developer. It would be great if research in this field became more active and produced many important findings.

In experimental design, the influencing factors can be determined by, for example, experimenting with an orthogonal table and using analysis of variance, but unfortunately it can handle only one evaluation item, and it is not a method that can exhaustively find the important factors. In real-world development, where there are multiple evaluation items, it is rather easy to see at a glance which parameters have a significant impact by looking at the system feasibility probability curves in Design Navi. A parameter whose system feasibility probability curve approaches 1, or has a portion equal to 1, is a parameter with a large impact. In particular, if the optimum value is located where the feasibility probability curve rises to and falls from 1 sharply, the performance will deteriorate immediately if the parameter deviates from the optimum (i.e., the feasibility probability will drop), so it must be treated carefully. However, to avoid overlooking important factors, the best way is to find the factors that affect the functional requirements through preliminary experiments and the like, select the parameters that seem right, and confirm them in Design Navi.


5.7 Features of Design Navi

Design Navi has a number of features, but the most important one is that it can systematically develop the best product satisfying multiple functional requirements in a holistically optimal manner, in a shorter period of time and with fewer experiments than the conventional development style of ad hoc, trial-and-error experimentation. Although we have no precise data, most engineers are of the opinion that a product can be reliably developed in about a quarter of the time required by the conventional trial-and-error method, as we mentioned at the outset. Moreover, it is possible to achieve, without compromise, the best performance that the basic structure can offer. When the basic structure is inadequate, Design Navi tells you so during the development process, saving you from wasted effort. In the past, it was difficult to know that the basic structure was not good, and money and time were wasted in the end; merely preventing this can greatly reduce development costs.

Design Navi has the effect that once the basic idea is decided and the basic design is created, the work of realizing the highest level of that design can be done easily. Put another way, the author's argument is: "Don't spend time on the development process of materialization; spend plenty of time on the creation of a good idea (basic structure)."

Design Navi is a method that enables holistic optimization of multiple requirement items. Unlike partial optimization, in which the optimal value is determined for only one particular requirement item, this method can be used with confidence because it optimizes all the requirement items holistically and in a well-balanced manner; a set of values that leaves any item defective is never adopted as the optimum.

Another feature of Design Navi is the ability to set a target value (design range) for each functional requirement. Therefore, once the data have been obtained from an orthogonal table, even if a customer's requirements change or a different requirement is made, simply changing the design range is enough to immediately find the optimum values for that requirement. The ability to provide different customers with the most suitable product in a short time is a major feature of Design Navi. If you anticipate that other requirements will emerge in the future, it is recommended to collect data on them in advance, even if they are not needed at present. Then, when new requirements arise, you can respond instantly and appropriately by recalculating with those items added, without having to conduct the experiments again.

Although we have discussed engineering at length here, Design Navi can be used for other problems as well, as long as the necessary experiments and calculations can be done. For example, it can be used for product planning, as discussed in Chap. 4, and also for management strategy, organizational reform, and agricultural reform. In medicine, it could be used to establish optimal treatment methods.


Appendix: Orthogonal Table L18

No   A  B  C  D  E  F  G  H
 1   1  1  1  1  1  1  1  1
 2   1  1  2  2  2  2  2  2
 3   1  1  3  3  3  3  3  3
 4   1  2  1  1  2  2  3  3
 5   1  2  2  2  3  3  1  1
 6   1  2  3  3  1  1  2  2
 7   1  3  1  2  1  3  2  3
 8   1  3  2  3  2  1  3  1
 9   1  3  3  1  3  2  1  2
10   2  1  1  3  3  2  2  1
11   2  1  2  1  1  3  3  2
12   2  1  3  2  2  1  1  3
13   2  2  1  2  3  1  3  2
14   2  2  2  3  1  2  1  3
15   2  2  3  1  2  3  2  1
16   2  3  1  3  2  3  1  2
17   2  3  2  1  3  1  2  3
18   2  3  3  2  1  2  3  1

Note: A–H indicate the parameters, and the three-level assignment of each parameter is shown below it.

References

1. Nakazawa, H. (2001). Study on products development by Design Navigation method. Transactions of the JSME (Vol. C), 67(658). (In Japanese).
2. Nakazawa, H. (2011). Trump card for manufacturing. Nikkagiren. (In Japanese).
3. Nakazawa, H. (2018). Japanese encyclopedia of design science. Society for the Science of Design.
4. Taguchi, G., & Yokoyama, Y. (2004). Experimental design for quality design. Japan Standards Association. (In Japanese).
5. Nakazawa, H. (2006). Design Navi for products development. Kougyouchousakai. (In Japanese).

Chapter 6

Development Examples with Design Navi

Design Navi has already been put into practice, and many companies have used it to develop products successfully. Some of these development examples, presented courtesy of the companies, are introduced here. Most of the examples in this chapter are mechanical systems, but the method has been applied in various other areas, such as improving resin film manufacturing conditions, developing new batteries, and developing new materials. There is also an example of improving the hydraulic circuit of a hydraulic elevator to realize a comfortable ride by suppressing vibration. The method has also been used in the development of unexplored technologies.

6.1 Improvement of Injection Molding Machine

The horizontal injection molding machine shown in Fig. 6.1 is a machine in which beads of plastic material called pellets are fed into a hopper, heated and melted by a barrel heater, and the melted plastic is pressed through a nozzle into a mold to make plastic products. Let us explain how the performance of this machine was improved, quoting from the paper [1]. The main objective was to reduce wear on the ring valve and screw head. The first evaluation items were therefore wear on the inner diameter of the ring valve, wear in the longitudinal direction of the ring valve, and wear on the screw head. Wear can be reduced simply by lowering the operating conditions, but this would reduce productivity, so productivity was included among the evaluation items as well. Productivity can be expressed as the resin flow rate per unit time. Since increasing productivity increases power consumption, and we want to keep power consumption low, power consumption was also included as an evaluation item. In other words, all the trade-off functions were included, to prevent the development of a machine with unbalanced performance.


Fig. 6.1 Improvement of horizontal injection molding machine

It would be fine if these design ranges could be determined theoretically, but since this was not possible, the next best approach was to take the average of the data from the experiments conducted according to the orthogonal table and require each item to be better than that average. Next, we look for the parameters that most affect the above evaluation items. Here we relied on the experience and intuition of the manufacturer's engineers, and finally selected four items: ring valve bore (with the shaft diameter held constant), ring valve length, screw speed, and barrel temperature. Thus the smallest orthogonal table, L9, can be used (Table 6.1). The experiments were conducted according to the nine combinations of parameter values (the row-direction conditions) in Table 6.1, and data were taken for the five evaluation items. As mentioned in the previous chapter, since each parameter is assigned three levels, 3 × 3 × 3 × 3 = 81 experiments would normally be required to cover all combinations of conditions. It is remarkable that the nine experiments of Table 6.1 alone provide sufficient information.

Table 6.1 Experimental plan of the horizontal injection molding machine by orthogonal table L9

Design parameters: A = ring valve inner diameter, B = ring valve length,
C = barrel temperature, D = screw revolution.
Evaluation items (measured data E–I): wear of ring valve inner diameter, wear of
ring valve length, wear of screw head, resin flow rate, and power consumption.

No.  A   B   C   D    E   F   G   H   I
1    A1  B1  C1  D1   E1  F1  G1  H1  I1
2    A1  B2  C2  D2   E2  F2  G2  H2  I2
3    A1  B3  C3  D3   E3  F3  G3  H3  I3
4    A2  B1  C2  D3   E4  F4  G4  H4  I4
5    A2  B2  C3  D1   E5  F5  G5  H5  I5
6    A2  B3  C1  D2   E6  F6  G6  H6  I6
7    A3  B1  C3  D2   E7  F7  G7  H7  I7
8    A3  B2  C1  D3   E8  F8  G8  H8  I8
9    A3  B3  C2  D1   E9  F9  G9  H9  I9
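For readers who want to script this kind of plan, the standard L9(3^4) array used above can be written down directly. The sketch below (Python; the parameter names and level labels are placeholders of mine, since the actual numerical levels are not given in the text) simply enumerates the nine experimental conditions.

```python
# The standard L9(3^4) orthogonal array: 9 runs covering 4 three-level
# parameters so that each pair of levels of any two columns appears
# equally often. Row i gives the levels (1-3) of parameters A-D in run i.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Hypothetical level labels for the four design parameters of Table 6.1;
# substitute the real numerical values when planning an experiment.
levels = {
    "ring_valve_inner_diameter": ["small", "medium", "large"],
    "ring_valve_length": ["short", "medium", "long"],
    "barrel_temperature": ["low", "medium", "high"],
    "screw_revolution": ["low", "medium", "high"],
}

for run, row in enumerate(L9, start=1):
    # Map each column's level index onto the corresponding parameter.
    condition = {name: levels[name][lv - 1] for name, lv in zip(levels, row)}
    print(run, condition)
```

Any four three-level parameters can be mapped onto the columns in the same way.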


Table 6.2 Results of the performance verification experiment

                       Wear (mm)                           Resin flow rate    Power
Type                   Ring valve           Screw head     (productivity)     consumption
                       Inner dia.   Length  Length         (g/min)            (kWh)
Improved machine       0.001        0.148   0.247          130                15.3
Conventional machine   0.005        0.234   0.533          54                 32.3

The resin material was polypropylene, but since wear takes too long to develop with the resin alone, abrasive grains (WA#1000) were mixed in to accelerate the wear experiments; the mass ratio of resin to abrasive grain was 3:1. The procedure of calculating the feasibility probability for each evaluation item from the experimental data and multiplying the probabilities together to obtain the system feasibility probability is as described in the previous chapter. Here, the coefficient k, which determines the system range, was set to 1.5. In paper [1], the lower limit of the system range was set to zero whenever the system range extended to the negative side. It was treated this way because the theory was not yet complete at that point; as mentioned in the previous chapter, such an operation is of course wrong, and the entire range must be adopted. Nevertheless, the remarkable results shown below were obtained. Those interested in the details of this work should also refer to my book [2]; here I follow the paper.

Table 6.2 shows the results of the final performance verification experiment using the optimal parameter values obtained from the above experiments and analysis. Wear on the inner diameter of the ring valve was reduced to 1/5 of that of the conventional machine, wear in the longitudinal direction to about 1/2, and screw head wear to less than half. One might worry that productivity decreased, but to our surprise, productivity (resin flow rate) increased by a factor of 2.5. And when we checked whether power consumption had increased accordingly, we found that it had dropped to less than half. Thus the conventional injection molding machine was transformed into an amazing machine. Because the work was carried out during normal business hours the development stretched over calendar time, yet it was completed in four months.
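As a minimal sketch of the calculation just described, the Python below models each evaluation item's system range as a normal distribution fitted to the run data, widened by the coefficient k, and takes the feasibility probability as the probability mass falling inside the design range; the per-item probabilities are then multiplied into the system feasibility probability. Treating k as a multiplier on the standard deviation is an assumption made here for illustration, not necessarily the book's exact definition, and the data below are hypothetical.

```python
import math

def feasibility_probability(data, lower, upper, k=1.5):
    """Probability that the item falls inside its design range [lower, upper],
    assuming (for this sketch) a normal system range with spread k * std.
    Pass float('-inf') or float('inf') for one-sided design ranges."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    sigma = k * std  # k = 1.5 was used in the injection molding study
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sigma * math.sqrt(2.0))))
    return cdf(upper) - cdf(lower)

def system_feasibility_probability(items):
    """Multiply the per-item feasibility probabilities together."""
    p = 1.0
    for data, lower, upper in items:
        p *= feasibility_probability(data, lower, upper)
    return p

# Hypothetical two-item example: smaller-the-better wear data and
# larger-the-better flow data, each with a one-sided design range.
wear = [0.30, 0.42, 0.55, 0.28, 0.61, 0.33, 0.47, 0.52, 0.39]
flow = [60, 85, 72, 110, 95, 66, 120, 78, 102]
print(system_feasibility_probability([
    (wear, float('-inf'), 0.45),   # wear must be at most 0.45
    (flow, 90, float('inf')),      # flow must be at least 90
]))
```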

6.2 Development of Dental Air Grinder

Figure 6.2 shows the chuck of an air grinder used for dental treatment; the figure shows the structure after the improvements were made. In the older model of this manufacturer, which makes and sells such products for the domestic market, the chuck parts broke after 2,000 to 3,000 replacements of the treatment tool.


Fig. 6.2 Improved dental treatment air grinder

If the tool were to eject during treatment, it would be extremely dangerous. The manufacturer tried various improvements but could not make progress: it was a whack-a-mole trial-and-error process in which changing one part caused other parts to break. The author suspected that this might be due to a bad basic structure and proposed the new structure shown in Fig. 6.2. The conventional structure gripped the tool tightly; the new structure supports it with four blades, as shown in the figure. When a load is applied to the tool, the blades of the chuck turn with a prying motion and lock the tool, so the greater the load, the more tightly the tool is held and the less it can be pulled out. In the usual way of developing, one gets stuck on the structure of the old model and wastes time and money unnecessarily; as mentioned above, had Design Navi been used from the beginning, it would have pointed out the structural defect at an early stage and this waste could have been avoided. At the time, Design Navi had only just been established, and the manufacturer did not realize that improvement of the old model could never succeed. Fortunately, however, the author's intuition led them to change direction and adopt the new structure of Fig. 6.2, and an excellent chuck was developed in a short time using Design Navi.

The functional requirement was a chuck for dental air grinders from which the tool does not pull out. The design range for the number of tool changes until the tool is withdrawn was set to be longer than the average of the experimental data obtained according to the orthogonal table. The following four parameters were chosen as having a strong influence on this functional requirement:

(1) Thickness of the chuck
(2) Taper angle of the collet chuck
(3) Hardening hardness of the chuck
(4) Spring length (natural length of the spring)


The experiment for this functional requirement consisted of manually repeating the cycle of attaching the tool, cutting a steel plate, and removing the tool again. The levels of the four parameters were assigned to an L9 orthogonal table, the feasibility probability for the number of cycles to failure was determined, and the optimal values were obtained. At the time we were still using the concept of information; here it is converted to feasibility probability for the explanation. Since neither the upper nor the lower system range curve deviated significantly from the design range, the optimal values were determined by an approximate calculation in which the feasibility probabilities at the three level points were computed first and then connected by a quadratic curve. As explained in the previous chapter, the detailed calculation method should be used instead of this approximation, but the theory had not advanced that far at the time. The respective feasibility probability curves are shown in Figs. 6.3, 6.4, 6.5 and 6.6; the parameter value at which each feasibility probability is maximized is the optimal value.

Fig. 6.3 Feasibility probability curve versus chuck thickness

Fig. 6.4 Feasibility probability curve versus taper angle


Fig. 6.5 Feasibility probability curve versus chuck hardening hardness

Fig. 6.6 Feasibility probability curve versus spring length (feasibility probability from 0 to 1 over spring-length levels D1–D3; the optimum lies at D3 = Dop)

What we would like you to notice here is the hardening hardness of the part. Generally, the hardening hardness of a part is an important factor affecting fatigue strength, yet calculating its optimum value is almost impossible even with current technology. Design Navi, however, can easily determine the optimum hardening hardness in this way. The feasibility probability curves for hardening hardness and chuck thickness are quite steep, from which it can be read that if these parameter values deviate even slightly from their optimum values, the performance drops immediately. These graphs show that quality control of the chuck thickness and hardening hardness is very important for the performance of this chuck. Table 6.3 shows the results of the verification tests conducted with an air grinder manufactured using these optimum values. With the conventional machine the tool pulled out after 2,000 cycles, but with the new product developed by Design Navi it had not pulled out even after 34,000 cycles. The experiment was simply ended at that point because, after repeating it that far with no sign of failure, the researcher was exhausted; the data therefore do not indicate how much longer the chuck would have lasted.


Table 6.3 Development results

Type                               Results
Conventional grinder               Tool pulls out after 2,000 cycles
Grinder developed by Design Navi   Tool does not pull out even after 34,000 cycles
Grinder made by German maker       Tool pulls out after 9,000 cycles

The product of the German manufacturer with the largest market share at the time failed after 9,000 cycles, which means the new model had the best performance in the world.
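As a side note, the approximate method used above, connecting the feasibility probabilities at three level points with a quadratic curve, is easy to script. The sketch below is a generic illustration with hypothetical numbers, not the data of this chuck; as noted, the detailed calculation method is preferable when the system range strays far from the design range.

```python
def quadratic_peak(levels, probs):
    """Fit the unique quadratic through three (level, probability) points
    and return the level at which it is maximized, clamped to the tested
    range. A sketch of the approximate method described above."""
    (x1, x2, x3), (y1, y2, y3) = levels, probs
    # Newton divided differences give y = a*x^2 + b*x + c.
    d1 = (y2 - y1) / (x2 - x1)
    d2 = (y3 - y2) / (x3 - x2)
    a = (d2 - d1) / (x3 - x1)
    b = d1 - a * (x1 + x2)
    if a >= 0:               # no interior maximum; best point is an end level
        return max(zip(probs, levels))[1]
    x_opt = -b / (2 * a)     # vertex of the parabola
    return min(max(x_opt, x1), x3)

# Hypothetical feasibility probabilities 0.4, 0.9, 0.7 at levels 1, 2, 3:
print(round(quadratic_peak((1, 2, 3), (0.4, 0.9, 0.7)), 2))  # -> 2.21
```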

6.3 Productivity Improvement of Grinding of Gas Turbine Parts

Mitsubishi Heavy Industries, Ltd. Takasago Machinery Works applied this method to find the optimum conditions for rough grinding of the curvic coupling (Fig. 6.7) on the turbine disk of a gas turbine. A turbine disk is a disk that fixes the turbine blades to the outer circumference of the turbine. A curvic coupling is machined in the form of a ring on its side so that the turbine rotor can be assembled by bolting the disk to its neighbor in the axial direction and stacking them in layers; it is a ring-shaped coupling with convex and concave tooth profiles that mesh without gaps to transmit torque and secure the position. Grinding these couplings was an arduous process, requiring approximately 200 h per rotor for rough grinding alone, so Design Navi was applied to improve productivity and reduce costs. The six evaluation items were the time required to grind a specified number of teeth, the dressing time, the difference in grinding depth between the first and last tooth, chattering noise (kansei evaluation), sparks (kansei evaluation), and spindle load. Four parameters were selected: dressing cycle, initial depth of cut, pulse cycle, and wheel speed.

Fig. 6.7 Turbine disc and curvic coupling (courtesy of Mitsubishi Heavy Industries, Ltd.)


The dressing cycle is the number of teeth machined between dressings, dressing being the operation that re-shapes the grinding wheel by machining it. The initial depth of cut is the cut taken in the first step of roughing, which is performed in a fixed number of steps while gradually decreasing the depth of cut. The pulse cycle is the time required for one cutting step. The wheel speed is the number of revolutions per minute of the grinding wheel. An L9 orthogonal table was created under these conditions, and the optimum values were obtained accordingly. When machining with these optimum values, the roughing time per rotor was reduced by approximately 47 h (hours, not minutes!) compared with conventional grinding, a reduction of 36% of the total machining time. This is a remarkable increase in productivity and, at the same time, a significant cost reduction. An additional result is that the life of the expensive grinding wheels was extended by a factor of 2.6; the economic effect alone is impressive. In this way, Design Navi has a remarkable effect not only on upstream processes such as research, development, and design, but also on downstream processes such as improving production conditions at the manufacturing site.

6.4 Development of Coating Tools

OSG Corporation applied Design Navi to the development of a new coating for cutting tools. The functional requirements (evaluation items) for the new coating were as follows.

(1) A long time before adhesion occurs (hereinafter "adhesion generation time").
(2) A small initial maximum coefficient of friction (hereinafter "initial maximum coefficient of friction").
(3) A small wear area over a certain period of time (hereinafter "wear area").

The design ranges for these functional requirements were as follows.

(1) Adhesion generation time of more than 600 s.
(2) Initial maximum coefficient of friction of less than 0.2.
(3) Wear area of less than 6,000 µm².

This quantitative clarity of the development goals is ideal from a management perspective. The following four parameters were selected as those most likely to affect the functional requirements:

(1) Coating voltage
(2) Coating time
(3) Pretreatment time
(4) Pretreatment voltage


Table 6.4 Coating film quality

Evaluation item                           Conventional coating   New coating
Adhesion generation time (s)              300                    900
Initial maximum coefficient of friction   0.3                    0.2
Wear area (µm²)                           130,000                0

Based on these, an L9 orthogonal table was created, the experiment was conducted, and the optimum values were determined by Design Navi. Coating again under the optimum conditions and checking the performance yielded the results in Table 6.4. Much better performance was clearly achieved than with the conventional coating: the time until adhesion occurs was tripled, the initial maximum coefficient of friction was reduced by approximately 30%, and the wear area was reduced from 130,000 µm² to zero, an astonishing level of quality.

6.5 Hand Washing (Work Optimization)

Finally, using hand washing, an action everyone performs in daily life, we introduce a practical application to making routine work more efficient [3]. Every winter the spread of influenza becomes a hot topic, and in 2020 a new type of coronavirus spread throughout the world, causing great damage. If hand washing becomes a hassle and is done carelessly, however, many areas are left unwashed. Once you know how to scrub your hands, the more often you repeat each motion, the better the dirt is removed; the reality, though, is that one cannot always spend much time on hand washing. Also, since dirt is invisible, you cannot simply stop when it is gone. We therefore want the most effective way to wash our hands within a set amount of time. As with many relationships between time and effectiveness, this is a trade-off problem in which one side stands in the way of the other, and it can serve as an example of how to optimize similar tasks. We set a time limit and aimed for the most effective way of washing hands within it.

The recommended hand rubbing method is available on the website of the Ministry of Health, Labour and Welfare, and we used it (Fig. 6.8). It shows six different rubbing motions, but the appropriate number of repetitions of each depends on how dirty the hands are, the temperature of the water, and other factors, and is difficult to determine since each person may be good or bad at each motion. We started from the required time, about 30 s, and determined the numbers of repetitions that would produce the most efficient and overall good washing effect within this time. First, the parameters and evaluation items (functional requirements) were determined.


Fig. 6.8 Hand washing poster (Ministry of Health, Labour and Welfare)

The parameters were four variables: of the six rubbing motions in the figure, ① and ④ rub the same places and are performed the same number of times, as do ⑤ and ⑥. It would have been desirable to have a parameter for each of the six motions, but that would not fit in an L9 orthogonal table and would have required a larger table with more experiments; nine experiments were considered the limit, so four parameters were chosen. One evaluation item was time, and the other was the degree of finish after washing. Since dirt is not visible, we used a commercially available hand-wash checker: a cream containing a fluorescent substance that glows under ultraviolet light, so that the fluorescent substance acts as a stain and the degree of cleanliness can be checked visually. Since it is difficult to remove 100% of the residue that has penetrated the skin tissue, a score of 2 was given if no residue remained on the skin surface, 1 if only a small portion remained, and 0 if there was clear residue. The five evaluation sites were the palm, the back of the hand, between the fingers, the fingertips, and the thumb, for a total score of 10 points.

Table 6.5 shows the parameters used and the results of the experiment. Since the average time over the experiments was 26.3 s, the design range for time was set to 25 s or less; the design range for finishing quality was set to the experimental average of 7 points or more, on the grounds that the finish should be as good as possible.


Table 6.5 Parameters of the hand washing experiment and experimental results

Experimental condition (number of repetitions):
①④ = palm to palm / between fingers (same count), ② = palm to palm with fingers
interlaced, ③ = rub palms and fingertips and under nails, ⑤⑥ = thumb / wrist
(same count).

No.  ①④  ②  ③  ⑤⑥   Time (s)  Quality (points)
1    2    2   2   2     15.88     4
2    2    4   4   4     28.64     6
3    2    6   6   6     33.05     7
4    4    2   4   6     29.95     8
5    4    4   6   2     24.54     6
6    4    6   2   4     24.77     6
7    6    2   6   4     24.08     9
8    6    4   2   6     30.53     8
9    6    6   4   2     25.75     9

The optimal values were then calculated, as shown in the upper row of Table 6.6. For parameter ③ in the third column, however, the calculated optimum was 2.4 times; since the motion can only be performed an integer number of times, 2 times was adopted as the realizable solution after comparing the feasibility probability at 2 times with that at 3 times. The bottom row of Table 6.6 shows the optimal conditions and the results of the verification experiment conducted under them: a time of 22.38 s and a quality of 8 points, both satisfying the design ranges. In other words, the palm-to-palm and between-fingers motions should be rubbed 6 times, and the other motions 2 times each. With several evaluation items in a trade-off relationship, it is difficult to know which item should be prioritized and by how much; Design Navi is useful because, once target values are set for each item, it automatically yields a balanced solution. This example also suggests that Design Navi can be used in the medical field, provided there is a way to measure the results.

Table 6.6 Calculated optimum values and inspection results

                        ①④  ②  ③    ⑤⑥   Time (s)  Total quality (points)
Optimum value           6    2   2.4   2     –         –
Inspection experiment   6    2   2     2     22.38     8


We would like to discuss these results a little further. In this experiment two evaluation items were used: time E_1 and finish quality E_2. Since the items have different dimensions, they cannot be evaluated by simple addition, so many people would reach for the weighted-sum method: set up an evaluation function f with weights w_1 and w_2, as in Eq. (6.1).

f = w_1 E_1 + w_2 E_2   (6.1)

However, we do not know how large these weights should be. There are countless "optimal" values, one for each balance of weights, and we would have to search among them for the weights closest to the desired balance. With specialized software one could predict the result for each weight and derive suitable weights, but without it, further experiments would be needed to find well-balanced weights. In contrast, Design Navi defines target values as design ranges instead of weights: in the hand-washing example, the explicit targets of 25 s for time and 7 points for finish quality are used directly. Design Navi thus has the advantage that a balanced optimal value can be calculated at the experimental site and immediately tested for confirmation.

In 2020, adequate hand washing is required as a countermeasure against the new coronavirus, and no amount of hand washing is ever really enough; the above data should be understood as the minimum hand-washing procedure to be followed. Here the method was applied to the routine task of hand washing, but as this example suggests, it can also be applied to medical problems. The role of Design Navi is becoming more and more important in this age of speed, where efficient and economical results are required.
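To make the contrast concrete, the sketch below applies both approaches to the nine runs of Table 6.5. It only screens the raw run data, purely as an illustration of the weighting problem; Design Navi itself, as described above, interpolates feasibility probabilities over the parameter levels rather than merely picking a run.

```python
# Results of the nine hand-washing runs from Table 6.5: (time_s, quality).
runs = [(15.88, 4), (28.64, 6), (33.05, 7), (29.95, 8), (24.54, 6),
        (24.77, 6), (24.08, 9), (30.53, 8), (25.75, 9)]

# A weighted sum needs weights whose "right" values are unknown, and the
# winning run changes as the weights change.
def weighted_score(time_s, quality, w1, w2):
    return -w1 * time_s + w2 * quality  # shorter time, higher quality = better

best_a = max(range(9), key=lambda i: weighted_score(*runs[i], w1=1.0, w2=1.0))
best_b = max(range(9), key=lambda i: weighted_score(*runs[i], w1=0.1, w2=1.0))
print(best_a + 1, best_b + 1)  # -> 1 7: different weights pick different runs

# Design ranges need no weights: time <= 25 s and quality >= 7.
ok = [i + 1 for i, (t, q) in enumerate(runs) if t <= 25 and q >= 7]
print(ok)  # -> [7]: only run 7 (24.08 s, quality 9) satisfies both targets
```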

6.6 Design of Stabilizing Power Circuit

Design Navi is especially powerful when multiple functional requirements are given, but for the sake of clarity we discuss here a case with only one functional requirement, to show that Design Navi can also be used for robust design (designing a system that is resistant to noise and parameter variations). Let us apply Design Navi to the example of a stabilizing power circuit (Fig. 6.9) taken from the literature [4], and compare it with the Taguchi Method, which claims to provide robust design. The purpose of this circuit is to provide a stable output of 10 V DC when an input of 20 V DC is applied.

Fig. 6.9 Stabilizing power circuit

Using the element constants in the figure, the output voltage can be calculated theoretically from Eqs. (6.2)-(6.5) given in that reference:

E_0 = \frac{P_2 + P_3}{P_1}   (6.2)

P_1 = \frac{\left(R_1 + \tfrac{1}{2}R_2\right)\left(1 + h_{fe1}\right) + R_4 h_{fe2}\left(1 + h_{fe1}\right) + R_4}{\left(R_1 + \tfrac{1}{2}R_2\right)\left(1 + h_{fe1}\right)}   (6.3)

P_2 = E_i - V_{BE1} + \frac{\left(V_{BE2} + V_z\right) h_{fe2} R_4}{R_1 + \tfrac{1}{2}R_2} + \frac{\left(V_{BE2} + V_z\right) h_{fe2} R_4}{R_3 + \tfrac{1}{2}R_2}   (6.4)

P_3 = \frac{V_{BE2} R_4 + V_z R_4 - I_0 R_4 \left(R_1 + \tfrac{1}{2}R_2\right)}{\left(R_1 + \tfrac{1}{2}R_2\right)\left(1 + h_{fe1}\right)}   (6.5)

Among the parameters, the resistance values R_1 to R_4 and the transistor current amplification factors h_{fe1} and h_{fe2} are assigned levels in the orthogonal table L18 (Table 6.7), following the same assignment as in the literature. The table also lists, for each row's combination of parameter values, the output obtained from the theoretical equations and the squared error of the output with respect to the target value E_0 = 10 V. Note that the load current is assumed to vary in the range 0 to 0.8 A when determining the output; 0.4 A is used here. V_{BE1}, V_{BE2}, and V_z are taken as constant at 0.6 V, as in the literature. Using the squared error, the design range was given in Design Navi as

(E_0 - 10)^2 ≤ 1,

which is equivalent to the design range

9 ≤ E_0 ≤ 11.
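Equations (6.2)-(6.5) are easy to evaluate numerically. The sketch below (Python; function and variable names are mine, not the book's) reproduces the output-voltage calculation and checks it against row 1 of Table 6.7.

```python
def e0(r1, r2, r3, r4, hfe1, hfe2, ei=20.0, i0=0.4,
       vbe1=0.6, vbe2=0.6, vz=0.6):
    """Output voltage of the stabilizing power circuit, Eqs. (6.2)-(6.5).
    Resistances in ohms, currents in amperes, voltages in volts."""
    a = r1 + r2 / 2.0        # R1 + R2/2, which appears throughout
    b = 1.0 + hfe1
    p1 = (a * b + r4 * hfe2 * b + r4) / (a * b)
    p2 = (ei - vbe1
          + (vbe2 + vz) * hfe2 * r4 / a
          + (vbe2 + vz) * hfe2 * r4 / (r3 + r2 / 2.0))
    p3 = (vbe2 * r4 + vz * r4 - i0 * r4 * a) / (a * b)
    return (p2 + p3) / p1

# Row 1 of Table 6.7 (resistances converted from kΩ to Ω):
print(round(e0(2200, 110, 300, 80, 18, 50), 2))  # -> 12.02, as in the table
```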


Table 6.7 Assignment of parameters to the L18 orthogonal table and the corresponding outputs (columns headed 1 and 8 appear to be the unassigned first and eighth columns of the L18 array)

No.  1  R1 (kΩ)  R2 (kΩ)  R3 (kΩ)  R4 (kΩ)  hfe1  hfe2  8  Output voltage (V)  Squared error
 1   1  2.2      0.11     0.3      0.08     18    50    1  12.02               4.09
 2   1  2.2      0.56     1.5      0.42     35    100   2  3.53                41.83
 3   1  2.2      2.7      7.5      2.2      70    200   3  1.72                68.49
 4   1  11       0.11     0.3      0.42     35    200   3  35.79               665.26
 5   1  11       0.56     1.5      2.2      70    50    1  8.64                1.86
 6   1  11       2.7      7.5      0.08     18    100   2  11.88               3.53
 7   1  56       0.11     1.5      0.08     70    100   3  22.14               147.25
 8   1  56       0.56     7.5      0.42     18    200   1  10.15               0.02
 9   1  56       2.7      0.3      2.2      35    50    2  26.47               271.14
10   2  2.2      0.11     7.5      2.2      35    100   1  1.49                72.4
11   2  2.2      0.56     0.3      0.08     70    200   2  8.02                3.9
12   2  2.2      2.7      1.5      0.42     18    50    3  3.83                38.08
13   2  11       0.11     1.5      2.2      18    200   2  8.83                1.37
14   2  11       0.56     7.5      0.08     35    50    3  14.43               19.65
15   2  11       2.7      0.3      0.42     70    100   1  11.74               3.02
16   2  56       0.11     7.5      0.42     70    50    2  15.14               26.46
17   2  56       0.56     0.3      2.2      18    100   3  88.16               6108.63
18   2  56       2.7      1.5      0.08     35    200   1  20                  100.04

Reference [4] states that, owing to variations in device performance and to temperature changes, aging, and the like, each parameter is subject to variations of ±10% for the resistors R_1 to R_4, ±30% for the transistor current amplification factors h_{fe1} and h_{fe2}, and ±10% for the supply voltage E_i, while the specified current I_0 ranges from 0 to 0.8 A. The problem is to determine the element constants so that the output stays as close to 10 V as possible even when these fluctuations occur, i.e., so that robustness is high. Let us examine the robustness of this case by using Design Navi to find the optimal values. Since it would be very time-consuming to search all combinations of these variations, the Taguchi Method considers only two cases: the combination N1, which produces a smaller output, and the combination N2, which produces a larger output; this is how it formulates the noise conditions. Table 6.8 shows the combinations of parameter variations that produce N1 and N2.

Table 6.8 Relations of parameter variation values and output

                  R1 (%)  R2 (%)  R3 (%)  R4 (%)  hfe1 (%)  hfe2 (%)  Ei (%)  Io
N1 (low output)   −10     +10     +10     +10     −30       +30       −10     0.8 A
N2 (high output)  +10     −10     −10     −10     +30       −30       +10     0 A


In Design Navi, the optimal values including robustness are obtained by the usual procedure based only on the orthogonal table of standard values shown in Table 6.7; in the Taguchi Method, by contrast, two calculations, one each for N1 and N2, must be made to obtain the optimal value of a parameter, requiring twice the time and effort. The Taguchi Method takes a two-step design approach using the SN ratio (an evaluation of variation) and the average output voltage (an evaluation of sensitivity): first, the parameters with a strong influence on the SN ratio are identified from a factorial effect diagram and set to the levels at which the SN ratio is high (i.e., the variation is small); next, the levels of the remaining parameters are set where the average output (sensitivity) is appropriate. See the Taguchi Method literature [4] for details.

Table 6.9 shows the most robust parameter values obtained by each method. The two sets of values are quite different: R_4 takes the same value, and h_{fe1} is almost the same. Table 6.10 shows the outputs for N1 and N2. The fluctuation range is 6.30 V for Design Navi and 5.73 V for the Taguchi Method, nearly the same though slightly larger for Design Navi, while the central (average) value of the output swing is closer to the 10 V target for Design Navi. The Taguchi Method was developed expressly to achieve highly robust products, yet these results show that Design Navi is no less robust. Moreover, with Design Navi the robustness is obtained even though the calculation uses only the orthogonal table of standard values, in half the calculation time; and considering the time and effort of the two-step design with human judgement in the loop, Design Navi is much easier to use.

The reason that Design Navi, used as-is, simultaneously yields a robust design is that, when the feasibility probability is determined, the system range (the range of performance variation) already reflects the variation caused by assigning all levels of the other parameters (which can also be seen as variation in the specified values of those parameters). In other words, finding the optimum value is a holistic optimization of the parameters such that as much of the system range as possible falls within the design range, taking into account the effects of variations in the values of the other parameters.

Table 6.9 Element constants capable of stably outputting 10 V

                R1 (kΩ)  R2 (kΩ)  R3 (kΩ)  R4 (kΩ)  hfe1  hfe2
Design Navi     2.2      2.70     7.5      0.08     70    50
Taguchi method  6.8      0.56     1.5      0.08     70    200

Table 6.10 Comparison of output (V) for the N1 and N2 cases

                   Design Navi  Taguchi method
N1 (low output)    7.05         7.52
N2 (high output)   13.35        13.25
Average output     10.20        10.39

Now let us check the actual output at the optimum values obtained by Design Navi (the calculation uses the standard values in Table 6.9): we get 9.80 V. This is within the design range of 9-11 V set at the beginning, so the development is a success. If we needed to get closer to 10 V, the present optimum values lie at the end points of the levels, so we could re-set the levels (i.e., re-create the orthogonal table) and recalculate (or re-run the experiments where calculation is not possible). If that were still not sufficient, the circuit itself would have to be re-examined, because the goal cannot be achieved with the circuit initially assumed. As the levels are set here, the intervals for R_4 are unequal and its system range curve reverses within them, so interpolation does not yield the expected good values; Design Navi obtains better values when the level intervals are roughly even.

Finally, an important difference from the Taguchi Method must be pointed out once more. The only functional requirement in this case was a constant output voltage, but if we add, for example, a requirement for low cost, the Taguchi Method cannot handle it. Only Design Navi can achieve multiple functional requirements.
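Using the e0 function sketched earlier in this section, the figures of Table 6.10 for the Design Navi optimum can be checked directly; the N1/N2 variations follow Table 6.8, and small last-digit differences may remain due to rounding.

```python
# Design Navi optimum from Table 6.9, converted to ohms.
nom = dict(r1=2200, r2=2700, r3=7500, r4=80, hfe1=70, hfe2=50)
print(round(e0(**nom), 2))                  # -> 9.8, the nominal output

# N1 (low-output) and N2 (high-output) noise conditions from Table 6.8.
n1 = {k: v * f for (k, v), f in zip(nom.items(), (0.9, 1.1, 1.1, 1.1, 0.7, 1.3))}
n2 = {k: v * f for (k, v), f in zip(nom.items(), (1.1, 0.9, 0.9, 0.9, 1.3, 0.7))}
print(round(e0(ei=18.0, i0=0.8, **n1), 2))  # -> ~7.04 (Table 6.10: 7.05)
print(round(e0(ei=22.0, i0=0.0, **n2), 2))  # -> 13.35
```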

References

1. Nakazawa, H. (2001). Study on products development by Design Navigation method. Transactions of the JSME (Vol. C), 67(658). (In Japanese).
2. Nakazawa, H. (2011). Trump card for manufacturing. Nikkagiren. (In Japanese).
3. Tateno, T. (2021). Materials provided by T. Tateno.
4. Tatebayashi, K. (2020). Introduction of Taguchi method. Nikkagiren. (In Japanese).

Part IV

Correct Way of Thinking-Meta-Concept Thinking

Chapter 7

Way of Thinking

It is obvious, but properly conceiving and planning a project is even more important than predicting its feasibility. The early stages of conceiving a project are the most important: if the project is conceived incorrectly, its feasibility probability will likely be quite low, and no results can be expected even if it is pushed through. This chapter therefore introduces the cognitive biases that lead to wrong ideas, and the Meta-Concept Thinking that guides you to successful ones.

7.1 Beware of Cognitive Bias and Symptomatic Thinking

Predicting the feasibility of a proposed project can be accomplished with the new feasibility study presented in this book. However, if the initial conception is wrong, the project will naturally fail. The conception is crucial in the sense that it is the starting point of all action. Failures are common in everyday problems, in product planning projects such as those described in the previous chapter, and in large-scale national projects, and most of these failures trace back to mistakes in the way of thinking. Everyday problems aside, product development projects and large-scale national projects absolutely cannot afford to fail, so sufficient money and time should be spent on them at the planning stage. I would therefore like to introduce the method of thinking that I always use as a foundation for everything explained so far in this book.

Two perspectives matter in the ideation process. One is idea creation itself. The other is attention to the causes of wrong ideas, that is, the pitfalls that make ideas fail: "cognitive bias" and "symptomatic thinking". After explaining these, I will introduce the main theme, "Meta-Concept Thinking," which enables correct thinking.


7.1.1 Cognitive Bias Leads You into Unexpected Pitfalls

The first mistake we tend to make in our daily thinking is cognitive bias. Cognitive bias is a concept in social and cognitive psychology describing systematic, unconscious distortions of judgment. It arises in the process of "unconsciously" judging, interpreting, and discarding perceived stimuli (information): the stimuli are recognized through cognition, but the cognitive process itself can be biased, and because the bias is unconscious, the person is unaware of it. The psychologists Amos N. Tversky and Daniel Kahneman argue that part of cognitive bias arises when people solve problems through heuristics (methods of thinking that cannot guarantee accuracy but reach an answer in a short time) or mental shortcuts [1]. Mistakes are committed because human judgment and decision making do not follow strictly rational reasoning. This cognitive bias is often the cause of wrong thinking. Picking out only the biases related to errors in thinking, they include the halo effect, anchoring bias, conservatism bias, and confirmation bias.

The halo effect is a phenomenon in which attention is drawn to the most conspicuous features of an object, distorting the evaluation of its other features; judgments and evaluations are pulled toward whatever stands out. When superior characteristics stand out we tend to evaluate the subject positively, and when inferior characteristics stand out, negatively. For example, it is a pitfall to think, "He is a well-educated person and can come up with good ideas," and to leave matters to that person alone, or to accept that person's ideas as always correct. It is better to share ideas with others regardless of academic background; this yields superior ideas. Anyone can come up with great ideas using the correct thinking method described below, so do not rely on academic background or expertise alone to produce ideas; in other words, do not be misled by outstanding characteristics.

Anchoring bias is the tendency to be drawn to the first numerical value (information) presented, distorting the judgment of subsequent values. For example, when asking, "What is the pass rate for this exam?", the answers will tend to change depending on whether you add, "Is it higher than 70%?" or "Is it higher than 10%?". Note that it is easy to mislead people by prompting them with overly specific information, not just numerical values.

Conservatism bias is adherence to particular views, expectations, or personal beliefs: the human tendency, when confronted with new facts, to cling to previously held ideas and change them only gradually. A good example is success stories, about which we must be very careful, especially when a supervisor or leader is pulled along by the bias of past success. Success is never due to individual ability alone; the external


and internal environment at the time may simply have happened to work in one's favor, and both environments have naturally changed since the time of that success.

Confirmation bias is a cognitive bias that causes people, when testing their hypotheses and beliefs, to take in only information favorable to them and to ignore unfavorable information. Making naive predictions is also part of this bias. For example, someone fixed on the idea that "production should be done in China, where labor costs are low" will not accept information that contradicts this opinion, such as "labor costs will rise in China too," "high quality cannot be expected from Chinese production," or "Vietnam can produce better quality products", and will even avoid people who hold such opinions. This is often seen in politics, but it is also a serious pitfall in matters of ideas. To avoid this kind of failure, it is important to ask yourself whether an idea is really correct and to listen to others' opinions with humility. For example, suppose you think, "I have a meeting this afternoon, so I will finish this work before then". The assumption here is "there is a meeting this afternoon". You then look for information that either contradicts or confirms this assumption: you wonder, "Is the meeting really this afternoon?", check the schedule in your notebook, and discover that the meeting started at 10:00 a.m., sending you running to the conference room in a panic. Being willing to question facts you were once convinced of is one way to escape cognitive bias. In short, abandon preconceived notions about people and specific information, do not be swayed by your own or your boss's success stories, and have the courage to accept information unfavorable to your own opinions and ideas.

There are many other interesting cognitive biases affecting social and human relationships. There is the "fundamental attribution error", a bias that places heavy emphasis on internal factors such as personality and temperament while disregarding situational factors, and the "hindsight bias", which makes people believe, after an event has occurred, that it was predictable. The ultimate example is the "Dunning-Kruger effect", a phenomenon in which the less capable a person is, the more highly he or she rates his or her own speech, appearance, and behavior, creating an illusion of superiority over reality. We must watch for this in ourselves. If you are interested, please research further.

7.1.2 Symptomatic Thinking is Dangerous

The most common failed way of thinking is symptomatic thinking. Symptomatic treatment often fails to solve the underlying problem and can even make the situation worse. When a person has a cold with fever, the symptomatic treatment is to use antipyretics to lower the fever; unless the fever is unusually high, this is considered bad treatment because it weakens the immune response and slows recovery. There must be a more fundamental treatment that restores the body to health. This example clearly shows that symptomatic treatment is not a good idea.


Nevertheless, people unconsciously think in a symptomatic manner. The root cause probably lies in the essential workings of the brain, whose natural function is thought to be creating new information by connecting things to things. Worse, the brain seems to have a propensity to short-circuit: to connect a new problem directly to data it has already organized, categorized, and stored, i.e., to memory. When a problem arises, the brain immediately sets about linking it with past memories and categorized data, which blocks more fundamental, higher-level, original ideas and leads to symptomatic thinking. We are stuck in a kind of stereotype that has nested in our brains for many years. If we can figure out how to break this habit of the brain and escape the stereotype, we will be able to come up with the right ideas. This is made possible by the Meta-Concept Thinking described in Sect. 7.3, in which one climbs to the meta-concept of the problem in order to think about it.

Here is a good(?) example of symptomatic therapy. The following article appeared in the February 9, 2004 issue of Nikkei Business under the title "World's First Bicycle with Automatically Inflated Tires, Columbus Egg Born in a Town Factory". Let me quote: "There is a device called an 'air hub' at the axle of the front and rear wheels, from which a hose extends to the tire tube. The rotation of the wheels activates a pump in the air hub, which sends compressed air to the tires. Each rotation of the tire replenishes 0.5 cc of air, and when the air pressure reaches a set value, the excess air is discharged outside." In other words, the idea is a symptomatic one: bicycle tires often lose air, so air should be supplied to them constantly. Is this really an invention worthy of "Columbus' Egg"?

If you think about it, bicycles left unridden for long periods often have tires so flat that they cannot be ridden and must be inflated with a pump; the above invention has the disadvantage of being useless unless the bicycle is ridden. Another drawback is that the pumping action continues even after the tire reaches the set pressure, wasted work that increases the riding resistance, however small the amount may be.

Let us instead think about a more fundamental solution. What is the problem when tires lose air? It is that "the impact from bumps in the ground is transmitted to the person riding the bicycle". From here, the real purpose of the higher-level solution becomes clear: the real objective (i.e., the higher-level concept) is "to prevent, as far as possible, the impact of bumps in the ground from being transmitted to the rider". Thinking in terms of this higher-level objective, we can come up with a variety of better, more fundamental solutions. One is to develop a tire that does not use air and does not transmit vibrations from uneven ground: if a resilient and lightweight sponge-like tire material were developed, elaborate inventions such as the air-refill system would be completely unnecessary, the tires would not flatten after long periods of standing, and there would be no side effect of increased riding resistance. Or we could go a step further and invent a pneumatic tire system that does not lose air for years. This way of thinking can lead


to a solution that costs less and performs better than the invention above. This is Meta-Concept Thinking. As you can see, the world is infested with solutions that fall prey to symptomatic thinking and cognitive bias. They are far from ideal, waste money and time, and have terrible side effects: the development of cancer drugs that damage the living body, increasing the burden on the public by tinkering with medical fees and drug prices, suppressing production to maintain the price of rice, the belief that children's future happiness lies in getting into a good university, and the list goes on. Collecting examples like these could be quite an interesting game, though.

7.2 Information Collection and Correct Value Criteria

It is said that one of the reasons Japan lost the Pacific War was that the U.S. military had deciphered all the Japanese codes; in other words, Japan lost the war through information warfare. And since the decision to start the war was wrong in the first place, the value criteria behind that decision were wrong as well. This example shows how important information gathering and value criteria are when starting a project or business. Let us consider these two things here.

7.2.1 Information Gathering: Have You Done Your Research?

It is more foolish than reckless to make up ideas on the spur of the moment without careful research. We should be more aware of the importance of information. Let me tell a story about Eizaburo Nishibori. On November 8, 1956, the Antarctic research vessel Soya departed Tokyo's Harumi Pier, sent off by a large crowd, bound for the South Pole. Only 11 years had passed since the end of the war, and the Japanese economy was finally recovering from the devastation. Japan was participating in the International Geophysical Year, a project launched to observe, simultaneously and around the world, many items related to earth phenomena, and had been assigned East Ongul Island in the Antarctic Circle as an observation base. Takeshi Nagata, a professor in the Faculty of Science at the University of Tokyo, was chosen as leader of the Antarctic expedition team, and Eizaburo Nishibori as vice leader. In fact, when Nishibori was 11 years old he had been struck by the magnificent scenery of Antarctica on seeing a motion picture of Antarctic activities taken by Lieutenant Shirase on his return from an expedition to the South


Pole, and he had nursed the desire to visit Antarctica one day should the chance arise. In his book [2], Nishibori writes:

"If you have this kind of ambition, wish, or dream, you will find a way to realize it someday. In the course of our lives we always come to forks in the road, but if we have a dream or ambition, we will choose the path that leads toward it and seize the opportunity."

Forty-two years later, at the age of 53, he became vice leader of the Antarctic expedition and captain of the first Antarctic wintering party. When he was 36 he had gone to the U.S. to study, and there he began to gather information about Antarctica; his dream of going there drove him to gather it.

"When I was studying in the United States, I was thinking about what to do on my Saturdays and Sundays off, and I thought: yes, let's visit people who have been to Antarctica. There are people in the U.S. who went to the South Pole with Byrd and others, so I decided to visit them. This was a fork in the road. When I met these people they were very kind, as if we had been friends for many years; we became very close, beyond race and beyond history. Some of them were dog handlers, some were cooks. These people were not 'important people' and would never have come to us about Antarctica, but since I had come all the way from Japan, they were very happy to see me and we got to know each other very well. Also, when I went to used bookstores, my hand would reach for books on Antarctica of its own accord, as if following an instruction from the depths of my heart. I bought many of those books and returned to Japan. In those days we traveled by ship rather than airplane, so even with a lot of luggage it did not cost much. Fortunately or unfortunately, the war (World War II) broke out and I had no other books to read, so the only books I could read were about Antarctica. At that time I did not think I could actually go to Antarctica. However, even if a dream is out of reach and aims too high, the thought of it is a source of emotional support, and that is what encourages you. Isn't that what a dream is all about?"

In Nishibori's case, he did not gather information after deciding to go to Antarctica; he had a dream beforehand and was inspired to gather the information for his future. Information is not something that can be gathered quickly once you suddenly need it. We tend to collect information only when we sense we will need it, and this shows how important it is to collect information consciously at all times. The premonition that information is worth gathering comes, as he himself said, from the dream one holds; in this sense, having a dream or vision is essential to information gathering. Thus, at the time, the only person who really knew about Antarctica was Nishibori, and he guided the team in every detail of the plan, leading the first wintering in Antarctica to success.

Of course, in Nishibori's case knowledge alone was not enough; he also took action to gather information through experience. He took a group of 30 observers to Abashiri, Hokkaido, where they experienced temperatures of 20 degrees C below zero


and strong winds. There, the winds blew the tents away without a second thought, and the oil in the generators froze and stopped them from working. This taught the team that firsthand experience is an important way to obtain necessary information. It was only with all of the above information that such a large project succeeded. What I am trying to say is that you cannot succeed at something new unless you gather enough information in advance. It is not good to set off on a mere hunch when information can be obtained with a little effort; however troublesome the research may be, do not neglect the time and effort needed to gather information. Some may object that the various national projects must have spent a great deal of money gathering information, so why do they fail? There are many reasons, and lack of information is not the whole cause, but in the end their feasibility studies were inadequate.

The next question is how, exactly, to gather information. Simply telling people to gather information is not enough, and neither is checking the literature, surfing the Internet, or listening to others. Something is missing, and that something, as some readers may have noticed from Eizaburo Nishibori's story, is to have a dream or vision related to the project. As to why the necessary information is then found, the mechanism is generally thought to be as follows. The human brain has a reticular activating system (RAS), a bundle of nerves through which all external information (auditory, visual, tactile, taste, smell, and so on) passes. We are surrounded by all kinds of sounds in daily life; if the brain took them all in at once it would be overwhelmed, and the same goes for the information coming from the other sense organs. Fortunately, the RAS lets us live calmly by taking in only the information we need; it selects the necessary information on its own. If you have a vision (or a problem in your daily work), the RAS will find a wide range of information that it judges necessary for the vision and take it into the brain. In such a state, if you go into the field, conduct interviews, watch newspapers and TV news, or travel, the necessary information enters the brain easily.

Information taken in this way is meaningless if it is merely stored as knowledge; it must be combined with other information to create new information. This is where the hippocampus, one of the brain's organs, comes in. The information selected by the RAS is sent to the hippocampus. Since the hippocampus commits to memory only the information that is absolutely necessary for survival, we must first be aware that our vision is very important for our survival (so to speak). The hippocampus performs a variety of combining and integrating operations to match the gathered information with the vision, and moreover it does this work unconsciously while we sleep. As a result, one day a solution suddenly emerges. To add one more point, near the hippocampus there is a structure called the amygdala, which deals with emotions and at the same time interacts with the hippocampus. The desire


to realize a vision or an exciting emotion activates this amygdala, which in turn works on the hippocampus, making the hippocampus even more active in the above tasks.

7.2.2 Let's Have the Correct Value Criteria: The Wrong Ones Will Lead Us Astray

There have been many examples of corporate leaders who led their companies and organizations down the wrong path by making decisions based on the wrong value criteria. It is not right to think in terms of wrong value criteria such as putting company profit first and serving customers second. Conversely, if an organization is run on correct value criteria, it will develop and stabilize. Konosuke Matsushita, the founder of Panasonic Corporation, is a good example. One of his value criteria is described in his English biography [3] as follows:

"The mission of the manufacturer is to overcome poverty, to lift society as a whole out of poverty and to bring about wealth. The goal of the entrepreneur is to produce all products as cheaply and inexhaustibly as water. When this is achieved, poverty will be exterminated from the earth."

Since this was written in 1932, the word "poverty" reflects the background of the times, but Panasonic has continued to develop because it was managed on these correct value criteria. Whatever the period, there is no doubt that if you think and act on the correct value criteria for that period, you will be able to achieve your goals.

The Toshiba Corporation example [4] is the opposite. Fujio Masuoka, Professor Emeritus of Tohoku University, applied for a patent on flash memory during his tenure at Toshiba in the 1980s. At that time, the mainstream storage device for computers was DRAM, a volatile semiconductor (data is lost when the power is turned off), and Toshiba was moving headlong into manufacturing that type of device. Masuoka, however, saw an overwhelming market for non-volatile semiconductors (data is retained even when the power is turned off), which are smaller, faster to read, and more shock-resistant than magnetic tape and hard disks, not only as computer storage but also in other kinds of devices. This was his value criterion. In February 1985, he presented a 256 K prototype flash memory at an international conference, and Intel immediately asked for chip samples. Yet no Japanese company paid any attention to him; Toshiba ignored him, paid him nothing for his research, and instead shunted him off to a sidelined post. He moved to Tohoku University, where he could do research freely, and people inside Toshiba were happy to see him leave.

The subsequent fate of flash memory is tragic. Toshiba decided to jointly develop NAND flash memory with Samsung Electronics and licensed the technology to them. Had Toshiba not sold the technology to Samsung, it could have monopolized the market. Toshiba sold the seeds of its future profits. IBM and Intel praised the invention highly, but there was no reaction in Japan. At that time, Toshiba had been so successful in developing the 1-megabit DRAM that it had conquered the world. Toshiba's value criterion was therefore "to further develop the business with DRAM" (or perhaps the value criterion of Toshiba's corporate culture was "not to allow a single employee to do anything on his own"). This value criterion clearly came from the aforementioned conservatism bias and confirmation bias, and it failed to recognize the tremendous value of today's flash memory. In FY2017, Toshiba earned nearly 90% of its 430 billion yen in operating income from NAND flash memory; it was said that if that memory business were sold, Toshiba would be left with nothing but scraps [5]. This shows how important a product flash memory was. As Toshiba's decline in the 2000s shows, a mistake in value criteria can decide the fate of a company. As I mentioned earlier, there is more than one value criterion. We must identify all the value criteria and use them as the basis for our ideas and evaluations.

7.3 Correct Way of Thinking 1: Meta-Concept Thinking

So how do we come up with the right ideas? There are many ways of thinking in the world, yet I feel something is missing in all of them: a procedure for conceiving ideas. Meta-Concept Thinking is a way of thinking that teaches such a procedure, and it can be called the starting point of a way of thinking.

The author has often failed to come up with new ideas, both at work and in daily life at home, and Meta-Concept Thinking was created after much worrying over why such failures occur. I made the following mistake when I was younger. I had a hard time using a straight toothpick on my back teeth. So, as a symptomatic remedy, I thought a toothpick with a curved tip would be a good idea. Thinking it would be quicker than obtaining a patent, I tried to sell the idea to the branch manager of a large supermarket that handled household products, but he politely refused my proposal. I remember leaving the store after the negotiation with warm encouragement such as "Good luck". A few years later, I found dental floss for sale at a drugstore. Obviously, this was far better than a toothpick with a curved tip. I was troubled as to why I had made such an ill-conceived mistake. This thinking method was learned from repeated experiences of such bitter failures and summarized as a principle.

Meta-concept means "higher-level concept". In other words, when a problem needs to be solved, we should think from the higher-level concept of "what is the real purpose of the problem?" In this way, we can free ourselves from the spell of symptomatic thinking. This way of thinking about solutions from the perspective of the real purpose is called Meta-Concept Thinking.


The basis of this thinking method is that when you start anything, you must think back to the origin, or you will fail. If you think only about the branches and leaves, you drift away from the trunk and lose sight of the essence. Once you go back to the trunk, climb to a higher level, and look far into the future, you will be able to see the right path and reach the right solution. Especially in large projects, where failure is unacceptable, Meta-Concept Thinking is very important.

Another way to look at it is that when trying to achieve a certain goal, improving the status quo is not a fundamental solution. For example, no matter how fast we make automobiles, if we want to reach a destination quickly, innovation will not happen as long as we are stuck on the ground. You have to get off the ground and think in terms of flying. Innovative ideas will not emerge unless you reject the status quo. Thinking in terms of meta-concepts means denying the status quo, rising above it, and coming up with unconventional ideas.

So how easy is it to find a meta-concept? Surprisingly easy, if you use the method presented here. When you are given a problem, the first thing to think about is: "What will be the problem if this problem is not solved?" The solution to this second problem is the real objective. A solution that serves that purpose, in principle, automatically solves the problem originally set. Moreover, it is the best solution, subsuming any solution that symptomatic thinking would produce. Meta-Concept Thinking can be acquired by keeping the habit of asking whether the current way of doing things is right and what the real purpose of the work is, rather than simply trying, out of inertia, to improve the efficiency of the work at hand.

Figure 7.1 illustrates how to find the true objective. First, when a problem or need [A] is proposed, think about what will happen if it is not solved [B]. This is easy to think about. Next, the real purpose, or meta-concept [C], which amounts to solving [B], is made explicit by rephrasing [B]. Since [C] is just a rewording of [B], anyone can do it. In this way, the meta-concept [C] can be found easily. There are many solutions to the meta-concept [C]. Conventional methods, such as brainstorming with several members, can be used to generate various solutions from [C]. If you find the best one among them, it is necessarily the fundamentally best solution to the original problem/need [A].

Fig. 7.1 Meta-concept thinking: problem/need [A] → if the problem/need is not solved, what is at stake? [B] → rephrased as the meta-concept [C]

Now let me illustrate with a concrete example. Example: "In a downpour, the driver cannot see the road ahead even with the windshield wipers on." What should we do?


Fig. 7.2 Meta-concept development of the example

In the usual symptomatic approach, one might think of making the wipers move faster or changing their shape. In Meta-Concept Thinking, the first step is to find a meta-concept (Fig. 7.2). If this problem is not solved, the trouble is that "in a downpour, forward visibility becomes poor, resulting in collisions with obstacles and other cars" (simplified in the figure). The meta-concept that solves this is then "I want to drive without colliding with obstacles even in an environment with poor visibility."

Notice the expression of the meta-concept here. It does not represent only the case of pouring rain. Its scope has expanded to include cases such as headlights failing at night or poor visibility in dense fog. The versatility of the solution can thus be broadened by how the meta-concept is expressed. Now let us think of solutions that would accomplish this.

(1) A millimeter-wave sensor that shows obstacles ahead on the center display screen
(2) A forward-recognition warning system using radar sensors (already in practical use)
(3) An automatic driving system that automatically avoids obstacles, ditches, and cliffs
(4) Roadside guidance systems for automobiles, stopping the car with system (2) if there is an obstacle ahead
(5) A line embedded in the road that switches cars to automatic driving based on its signals, as in the automatic driving of golf carts, again stopping the car with system (2) if there is an obstacle ahead
(6) Putting roads underground

The solution to the initial problem can thus be approached through a variety of comprehensive ideas, including outlandish ones that require technological development. This is a far more effective and versatile fundamental solution than symptomatic improvement of the wiper itself. If any of these were realized, there would be no need to "improve" the wipers.
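For readers who think in code, the [A] → [B] → [C] procedure of Fig. 7.1 can be replayed as a minimal sketch in Python. This is only an illustration of the bookkeeping, not part of the method itself; the class name MetaConceptSession and its helper methods are hypothetical, and the strings simply restate the wiper example above.

from dataclasses import dataclass, field

@dataclass
class MetaConceptSession:
    """Toy record of the Fig. 7.1 procedure: problem [A] -> stake [B] -> meta-concept [C]."""
    problem_a: str                       # the problem/need as first stated [A]
    stake_b: str = ""                    # what is at stake if [A] stays unsolved [B]
    meta_concept_c: str = ""             # [B] reworded as the real purpose [C]
    solutions: list = field(default_factory=list)

    def ask_stake(self, answer: str) -> None:
        # Step 1: "What will be the problem if [A] is not solved?"
        self.stake_b = answer

    def rephrase(self, purpose: str) -> None:
        # Step 2: reword [B] as the real purpose, the meta-concept [C].
        self.meta_concept_c = purpose

    def brainstorm(self, *ideas: str) -> None:
        # Step 3: collect candidate solutions aimed at [C], not at [A].
        self.solutions.extend(ideas)

# The wiper example, replayed through the sketch.
s = MetaConceptSession("In a downpour the driver cannot see ahead, even with wipers")
s.ask_stake("Poor visibility leads to collisions with obstacles and other cars")
s.rephrase("Drive without colliding with obstacles even when visibility is poor")
s.brainstorm(
    "Millimeter-wave sensor showing obstacles on the center display",
    "Radar-based forward-recognition warning system (in practical use)",
    "Automatic driving system that avoids obstacles, ditches and cliffs",
)
print("Meta-concept [C]:", s.meta_concept_c)
for idea in s.solutions:
    print(" -", idea)

Any of the six solutions listed above could be appended in the same way; the point is simply that every idea is generated against [C], never against the symptomatic [A].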

Now I have a question for you, the readers. What is the real purpose of being healthy? Is it simply to live longer? To find this real purpose, we can start from the meta-concept. What is the trouble if you are not healthy? The real purpose of being healthy is to solve that trouble. This meta-concept will be different for each person. It may even be the purpose of your life. Why not take this opportunity to think again about the real purpose of your life? The same is true for companies. While revenue is generally important to a company, when one considers what would be at stake without revenue, it becomes clear that revenue itself is not the objective. There must be a higher purpose for a company.

Now, dear readers, as an exercise, please think about the following problems using Meta-Concept Thinking. (Those with a liberal arts background may skip exercise (1).)

(1) I want to reduce the number of defective drawings (troubled chief of the design section)
(2) Recently, working on a computer for a long time makes my eyes tired and my shoulders stiff (senior manager)
(3) I want to control the increase in medical expenses (government)
(4) I am having trouble with my child not studying (mother eager to send her child to a good school)

One example of a solution might look like the following; what was your answer?

(1) I want to reduce the number of defective drawings (troubled chief of the design section)

Sample answer: If the above problem is not solved, "we cannot deliver good products to our customers on time and at a profit; if we keep producing defective designs and having to rework them, costs will increase and profits will decrease." To solve this, we should find solutions that satisfy the following two meta-concepts.

Meta-concept 1: "Deliver good products on time."
Meta-concept 2: "Increase profits."

If it is too cumbersome to keep the two meta-concepts separate, they can be combined into one: "We want to deliver good products to our customers on time and increase profits."

With this goal in mind, there is no need to focus narrowly on reducing design defects on the drawings. To put it another way, even if there are some design defects, it is enough if we can deliver good products to customers on time and increase profits. A symptomatic approach will only yield ideas such as expanding the existing checklist, having the group leader do a thorough check, or trying to make do with the current personnel. One cause of many design defects is a lack of manpower in the design section. For example, retired design engineers who are experienced, creative, and still energetic could be rehired at a reasonable salary and asked to check drawings professionally and to train younger members of the design department. In this way, retirees can contribute to the company and society through their work and enjoy the satisfaction of educating their juniors, thus killing three birds with one stone. This seems to be the current trend in corporate personnel affairs. There may be other approaches, such as combining CAD with AI to build new design software that checks for design errors, and selling it as a product if it can be commercialized, but I will leave the rest to the reader.

(2) Recently, working on a computer for a long time makes my eyes tired and my shoulders stiff (senior manager)

Sample answer: The meta-concept is "I want to keep the work flowing smoothly," because if the above problem is not resolved, we will have trouble keeping the work flowing. One possible solution is to reform the way work flows throughout the company, since the problem is solved if work can flow smoothly without the use of computers. For example, instructions from a senior manager could be given directly or by telephone in an analog manner while being recorded on a computer by a voice-recognition system, and the person receiving the instructions would automatically return a digital confirmation. The results of the instructions would likewise be submitted both in analog form (e.g., by phone or in person) and digitally (for confirmation and storage). These digital data would be organized and managed by AI and used to improve work efficiency in the future. In this way, the senior manager's time at the computer is greatly reduced, eye fatigue and stiff shoulders are alleviated, and direct telephone communication conveys detailed nuances more easily, keeping work flowing smoothly. We often hear these days about people sitting next to each other yet exchanging e-mails, which dilutes human relationships; this approach can also help improve relationships. Recently, with the benefit of in-ear headsets, receiving reports and giving instructions while on the move seems to be becoming more common. With a symptomatic approach, you might rush to improve the input environment or change your glasses, and be frustrated that the results (improvement in work efficiency) fall short of your hopes.

(3) I want to control the increase in medical expenses (government)

Sample answer: If we try to solve this problem symptomatically, we end up with poor ideas such as lowering the unit cost of medical insurance or increasing the medical-care burden on the late-stage elderly. What happens if we use Meta-Concept Thinking? If the problem is not resolved, we first consider the trouble that "people cannot receive adequate treatment when they fall ill," but this is still not enough, so let us move up one more level. If that problem is not solved, the trouble becomes that "people cannot lead healthy lives." The meta-concept is therefore "We want to enable the people to lead healthy lives."

If this is the true purpose, there is no need to ask the elderly (I am already that age, and I do not like the word because it sounds really bad) to bear a newly increased burden, or to negotiate with medical associations to lower the unit cost of medical care. This is because if the people are healthy, the government's medical expenditures will inevitably decrease. In other words, the government should invest more in measures to improve the health of the people. This would be much cheaper, and moreover it would benefit the people, since their health would be maintained and their true happiness realized. Of course, if the cost of medical care is genuinely too high, it needs to be corrected immediately.

One way to achieve this is to research and develop a national health exercise (or game) that is fun for as many people as possible to participate in and that also helps them maintain good health. It would then be necessary to develop and disseminate inexpensive instruments that can easily measure one's level of health, for example not only body fat but other indicators as well (equivalent to biochemical tests). It would also be effective to promote and support restaurants serving nutritionally balanced, delicious food, and to provide the public with vouchers subsidizing such restaurants. We hear that Swedish gymnastics has contributed to the health and physique of the Swedish people. Japan is poor in the number and quality of its parks, and it would be a good idea to invest in creating more parks with lush lawns; the cost of maintaining them would be far lower than the cost of medical care. It may be a bit of a stretch, but it might also be a good idea to provide financial support for the purchase of health and fitness equipment such as the virtual bike mentioned in Chap. 6. Furthermore, it would be effective for the government to support hobbies that are inexpensive, enjoyable, effective in maintaining the physical fitness of the elderly, and that everyone wants to play. For example, I enjoy golf, and if the government subsidized the cost of playing, more people could play. Golf involves walking in nature (10,000 steps in one round), uses the muscles of the body, exercises the mind in thinking about strategy, lets players interact with one another, and can counteract social withdrawal, making it one of the best sports for the elderly to maintain their health. More important, however, is government support for further development of preventive medicine.

(4) I am having trouble with my child not studying (mother eager to send her child to a good school)

Sample answer: If this is not resolved, "her child cannot get into a good school." The mother is probably thinking that if her child does not get into a good school, "her child will not lead a happy life." There is quite a logical leap in this train of thought, but assuming it is correct, the meta-concept is "I want my child to be happy."

In light of this meta-concept, nothing needs to be done to force the child into a high-level school in order to make the child happy. If we support the child in finding a dream better suited to him or her, understand that dream, and guide the child on a path to realize it, the child will be happy. That path surely does not have

to be all about graduating from a top university, getting a job at a big company, and spending the rest of one's life as a salaryman. A child who likes to make things and wants to become a carpenter does not have to go to college, but can apprentice himself or herself to a good carpenter somewhere. A child who wants to become a French pastry chef can apprentice at a French pastry shop and later consider studying in France; for that purpose, he or she will study French voluntarily and enthusiastically. Children find fulfillment in the process of realizing their dreams, and they will pursue the studies necessary to reach their goals without being forced by their parents.

I believe that Meta-Concept Thinking is an important method of thinking in every sense of the word and should be included in school education. It can be used not only for product development but also for everyday work and everyday problems; if you don't use it, you are missing out. Once acquired, it becomes a habit, and you will start thinking this way without even being conscious of it.

This way of thinking is also very useful in meetings. Suppose your colleagues come up with a variety of ideas. When you propose an idea based on the meta-concept, you will see all your colleagues' ideas fade away: your idea will be the best, because it encompasses and surpasses all of theirs. The meta-concept may be just one level up, or in some cases you may have to start from two or three levels up to make it work. But be careful not to go too high; if you do, the concept becomes too abstract and it is difficult to derive concrete ideas.

As a change of subject, some of you may think that the Why-Why Analysis [6] method developed by former Toyota Motor Corporation (TMC) Vice President Taiichi Ohno is similar. However, Meta-Concept Thinking is not the same. Let me explain the difference by quoting an example of Why-Why Analysis from his book.

Problem: suppose a machine stopped working.

(1) "Why did the machine stop working?" ... "Because it was overloaded and blew a fuse."
(2) "Why did it overload?" ... "Because the bearings were not sufficiently lubricated."
(3) "Why was there not enough lubrication?" ... "Because the lubrication pump is not pumping up enough lubricating oil."
(4) "Why is it not pumping enough lubricant?" ... "Because the pump shaft is worn and rattles."
(5) "Why is it worn out?" ... "Because it has no strainer (filtration unit) and chips got in."

By repeating "why" five times, we discover the solution of installing a strainer. If the "why" is not pursued far enough, the process will end at the


stage of replacing the fuse or the pump shaft, and the same problem will recur a few months later. This is an effective way to avoid repeating trouble when a particular system or element malfunctions. A closer look, however, reveals that it is actually a repetition of symptomatic ideas, even though the individual expressions differ slightly. In other words, why-why analysis is suited to searching for symptomatic countermeasures.

Now, as a similar problem, suppose the machine is an NC lathe and the machine, the lubrication pump, and all other systems are working properly, yet the NC lathe stopped due to overload. What happens if we analyze this with why-why analysis?

(1) "Why did the machine stop?" ... "Because it overloaded and blew a fuse."
(2) "Why did the overload occur?" (Here we differ from before; assume all mechanical systems are normal.) ... "Because the depth of cut was set too large."
(3) "Why was the depth of cut set too large?" ... "Because I want to shorten the machining time."
(4) "Why do you want to shorten the machining time?" ... "Because the delivery date is imminent."
(5) "Why is the delivery date imminent?" (Don't you feel we are heading in a strange direction?) ... "Because an unscheduled job just came in."

You may feel that something is somehow wrong here. Would eliminating unscheduled jobs solve this problem? I doubt it, because a different cause might just as well have been found. For example, at step (4) one could answer "Because the worker's capacity is inadequate," or "Because the salesperson took the order with an unreasonable deadline." In other words, you can construct any number of causal chains. When there is only one possible answer, such as a mechanical failure, the symptomatic approach arrives at the correct answer. But when the problem is complex and there are multiple possible causes, it is difficult to know which cause to pursue. In such a case, it is better to use Meta-Concept Thinking to find a solution from the big picture.

Here, Meta-Concept Thinking proceeds as follows. Think about what is at stake if this problem is not solved: "We won't be able to meet the delivery date for this part." Therefore, the meta-concept is "I want to process this part on time." From here, you generate every conceivable idea. In the above example, many problems must be solved in total, including sales, worker capacity, machine capacity, and optimization of machining conditions. Thinking from this meta-concept yields ideas that encompass all of them; anything less will not be a fundamental solution. Thus, you can see that it is necessary to


distinguish between determining the cause of failure of a system, such as a machine, and a much broader problem that requires creative thinking.

Next, I would like to consider how to find a solution from the meta-concept. In the example above I explained the solution quite simply, but finding one is surprisingly difficult, perhaps more difficult in the sense that it cannot be reduced to a pattern. It depends largely on each person's creativity, but, if possible, I would like a way for people (including the author) who are not considered especially creative to come up with decent ideas. In reality there is no ideal way to fulfill such a selfish request, but the combination of Meta-Concept Thinking and brainstorming can be quite effective; it can be called "evolutional brainstorming". Brainstorming is characterized by ideas being stimulated by other people's ideas, so everyone can experience how easily a chain of ideas forms; and since each person has his or her own personality and experience, a wide range of ideas will emerge. An efficient approach is to use a personal computer and Microsoft's Visio application. We use Visio because its card-like templates are easy to fill in with meta-concepts and ideas, easy to move around when grouping them later, easy to modify, and neatly organized; the result can be projected on a large screen as the session proceeds. Of course, you can also write on sticky notes and paste them on large paper, but these are hard to rewrite; on a computer you can write in clean type and use your favorite colors. This method can generate quite a few ideas even when used alone. Please give it a try.

7.4 Correct Way of Thinking 2: Total Design Thinking

Whenever I see or hear news about failures in the world, I am often reminded that if the people involved had known this way of thinking, they would never have made such mistakes. Here we are talking about the design of a system. Let me share it with you below.

The target of Total Design Thinking is a system that "already exists" or has already been designed. That is, when a system with fixed functional requirements is given new functional requirements, this is a way of thinking about how the system should be redesigned. Although it may seem a very limited way of thinking, it is in fact important because the situation arises often in daily work. As a personal example, deciding whether to add an extension to a house or to scrap and rebuild it falls into this category.

There are four major types of design: total design, add-on design, combinatorial design, and improving design. Their meanings are as follows.

Total design means that when new functional requirements are given, a whole new system is designed for all functional requirements, the existing ones together with the new, without being confined to the existing system. It is a kind of scrap-and-build.

Add-on design is a design in which parts of an "existing" system are changed, or new subsystems are added to it, to satisfy the newly given functional requirements.

Combinatorial design is the design of a system that satisfies all given functional requirements by combining multiple "existing" systems. In this case, the presence or absence of new functional requirements is irrelevant.

Improving design refers to designing a system that is even better than the current one without changing any of the functional requirements.

Design can be described as the process of materializing all functional requirements so that the system's performance falls within the required range, the design range. In reality, however, it is not always possible to achieve what is desired, because there are conditions that constrain the system range when we try to bring it into the design range. Such a condition is defined as a system constraint condition. Figure 7.3 shows such a condition.

Fig. 7.3 System constraint condition

If a system constraint condition is removed, the freedom to move the system range increases, and the feasibility probability of the evaluation item increases; in other words, the system becomes better. From this it follows that, for evaluation items subject to system constraint conditions, total design is superior to add-on or combinatorial design. Add-on design must use an already existing system and add or change parts to meet the new functional requirements, so the existing system itself is a system constraint and ideal design is impossible. In combinatorial design, likewise, each existing system is a constraint on the others, so optimal design is impossible. Ideal, optimal design is therefore out of reach as long as such constraints exist. In other words, add-on or combinatorial design is not only an inadequate design, it can also cause problems. Total design, having no such constraints, is the ideal and superior design method. These considerations must not be forgotten when designing.

The last type, improving design, should generally make things even better, because it tries to improve the original system while no new


functional requirements are given and the functional requirements remain the same as before. Put another way, in this case there is generally some functional requirement whose system range lies outside the design range, and the design is good because it attempts to bring it back inside. This cannot be discussed in the same breath as the other types. An example is an improving design intended to reduce cost without changing the existing functional requirements.

For example, if the earthquake resistance of a building must be increased, it may not be feasible to scrap the building and rebuild it from scratch as a total design, whether in terms of cost or time, or because the environment currently in use cannot be replaced. In other words, if the feasibility probability of scrap-and-rebuild is zero or close to zero, the conclusion will be an add-on design. It becomes a matter of examining the project with the FPT to determine which alternative has the higher feasibility probability of realization. In the long run, the design ranges for cost and delivery time may themselves be inappropriate: the budget may be significantly exceeded at the time the project is considered, but the cost design range must be revised if the benefits recouped later well exceed the investment. Delivery times should likewise be reviewed if there is some inconvenience now but significant benefit in the long run. There are countless examples of people who created something new and wonderful with the right insight, unconstrained by such short-sighted design ranges.

Here is one example of a successful total design. Forgive the old example: Minolta (which merged with Konica Corporation in 2003 to become Konica Minolta, Inc.; the camera business was later transferred to Sony) developed and sold an autofocus single-lens reflex (SLR) camera, the Minolta α-7000, in 1985. At the time there were a few autofocus cameras on the market, but they were all of the type with a motor built into the lens (an add-on design), and they did not sell well. In addition to the multifunctionality expected of conventional SLR cameras, Minolta set forth a new functional requirement: ease of operation. To achieve it, Minolta identified a meta-concept function (Minolta arrived at such thinking unconsciously at the time), the details of which are omitted here. They compared in detail whether to put the motor in the body or in the lens, and confirmed that the total design with the motor in the body was by far the most user-friendly and also made interchangeable lenses cheaper, although it sacrificed compatibility with the existing camera system (the ability to use interchangeable lenses on both the new and the existing cameras). They therefore decided to manufacture and sell a camera with a total design. The result was the Minolta α-7000, a wonderful autofocus SLR camera that no other manufacturer of the day had achieved. It won more than a dozen camera-related awards and held the top share of the SLR camera market for several years after its launch, an astonishing achievement. It is truly a success story of total design.
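Returning to the earthquake-retrofit comparison above, here is a minimal sketch in Python of how such an FPT comparison might look numerically. It assumes the uniform-distribution form of the feasibility probability (the common range of the system range and design range, divided by the width of the system range) and combines independent evaluation items as a compound probability; the ranges below are invented purely for illustration and are not data from the book.

def feasibility(system, design):
    # Feasibility probability of one evaluation item, assuming a uniform
    # system-range distribution: common range / system range.
    s_lo, s_hi = system
    d_lo, d_hi = design
    common = max(0.0, min(s_hi, d_hi) - max(s_lo, d_lo))  # common range
    return common / (s_hi - s_lo)

def system_feasibility(items):
    # Compound probability over all evaluation items (assumed independent).
    p = 1.0
    for name, (system, design) in items.items():
        pi = feasibility(system, design)
        print(f"  {name}: p = {pi:.2f}")
        p *= pi
    return p

# (system range, design range) per evaluation item; all numbers invented.
total_design = {                       # scrap and rebuild
    "cost (million yen)": ((800, 1500), (0, 1000)),
    "downtime (days)":    ((200, 500),  (0, 250)),
}
add_on_design = {                      # reinforce the existing building
    "cost (million yen)": ((200, 600),  (0, 1000)),
    "downtime (days)":    ((30, 120),   (0, 250)),
}

for label, alt in (("total design", total_design), ("add-on design", add_on_design)):
    print(label)
    print(f"  system feasibility probability = {system_feasibility(alt):.2f}")

Under these made-up ranges, the add-on alternative scores 1.00 while scrap-and-rebuild scores about 0.05, mirroring the conclusion in the text: add-on design is the answer when the feasibility probability of rebuilding is close to zero.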
A further word about compatibility: when total design is adopted, situations often arise in which compatibility must be sacrificed, as in the Minolta example above.


However, if we insist on compatibility, we may fail. The reason is that compatibility means sharing with the existing system, so the existing system becomes a system constraint condition. This is why the cameras manufactured and sold before Minolta's (which attached motors to their lenses in order to preserve compatibility) were not successful. That said, when a new functional requirement is given, if the meta-concept of the new requirement is analyzed and the meta-concept function remains satisfied, there is no problem in adopting compatibility. Note that this means the feasibility probability of the system must be considered in deciding which is better.

There are many examples of failures caused by clinging to compatibility (i.e., add-on design). One is the post-cassette-tape war that took place around 1996, a period of great change in music systems from analog to digital. New functional requirements were placed on music recording and playback systems to go digital. Analyzing the meta-concept function, three things were essential: high sound quality, compactness, and quick cueing (jumping straight to the start of a track). The Matsushita Electric (now Panasonic) and Philips groups of the time clung to compatibility, insisting that the new system should also play conventional cassette tapes on the same deck. The Sony group, on the other hand, entered the sales competition with a new medium called MD (Mini Disc), designed to fulfill the meta-concept function above. With compatibility, the quick-cueing function could not be fulfilled, because tapes were still used; compatibility also increased the volume of the media for the same recording capacity. The result of the sales competition was clear: in September 1996, the Matsushita group finally entered the MD market in earnest. We must therefore be very careful when adopting compatibility (add-on design).

When I discovered (invented) this Total Design Thinking, I thought it quite impressive, but on closer examination I found that the concept has existed for some 2000 years. In the New Testament, Matthew 9:17, Jesus Christ says:

New wine (new functional requirements) should be put into new leather bags (total design).

This means that if you put new wine in an old (existing) leather bag, the old bag is weak and will burst from the gas pressure. It teaches that the new teachings of Christ cannot be understood within the framework of the old religion. From this story you can see that Total Design Thinking applies to social issues as well.

This concludes our discussion of ways of thinking. Thank you for your patience on this long journey through the book. I am confident that the FPT and Design Navi will be useful in your work. I hope you will make use of these methods and contribute to society.


References

1. https://ja.wikipedia.org/wiki/cognitive bias (in Japanese).
2. Nishibori, E. (2001). You can't cross a stone bridge without hitting one. Productivity Publisher (in Japanese).
3. Matsushita, K. (1988). Quest for prosperity. PHP Institute, Inc.
4. Business Journal (26 Dec 2017). Web.
5. Weekly Diamond (May 2017). Special report (in Japanese).
6. Ohno, T. (2009). Toyota production system. Diamond Inc. (in Japanese).

Index

A
Action matrix, 70
Active listening, 66
Add-on design, 154
AI, 39
Anchoring bias, 138
Antarctic expedition, 141
Apollo program, 48
Approximate calculation, 103
Axiom, 9, 18

B
Backtracking, 67
Benchmarking method, 92
Boeing, 49, 51

C
Cantilever, 91
Coating tools, 126
Cognitive bias, 138
Combinatorial design, 154
Common range, 15
Common range coefficient, 24
Compatibility, 155
Compound probability, 17
Conceptualize, 12
Confirmation bias, 139
Confounding, 97
Conservatism bias, 138
Continuous random variables, 29
Core competence, 74
Cost benefit analysis, 4
Curvic coupling, 125
Customers, Competitors and Company (3C), 56

D
Dental air grinder, 121
Design, 3, 12
Design Navi, 79, 80, 87, 90
Design range, 15, 19
Diffuses, 60
Discrete random variables, 29
Dispersion, 59
Disruptive technology, 71

E
Evolutional brainstorming, 153
Expected utility, 4
Experiment design, 94

F
Feasibility Prediction Theory (FPT), 3, 11, 25
Feasibility probability, 7, 14
Feasibility Study (FS), 6, 47
Fisher, R.A., 94
Flash memory, 72
Focus-object method, 77
Functional Independence, 93
Functional Requirements (FR), 3, 12, 19
Function oversupply, 68
Future manifestations, 58
Future prediction, 58

G
Gas turbine, 125

H
Halo effect, 138
Hand washing, 127
Heijden, 62

I
Improving design, 154
Information gathering, 141
Information Integration Method, 8
Injection molding machine, 119
Interaction, 97
Interviews/surveys, 66
Intuition, 4

K
Kansei evaluation, 32
Kansei (sensibility), 32
Kim, W. Chan, 68
KJ method, 74
Kotler, Philip, 56

L
Laplace's Demon, 62
Law of increasing entropy, 60
Level, 94
Line, 59
Literacy, 62
Logical thinking, 61

M
3M, 65
Market segment limitation principle, 66
Masked need, 67, 70
Matsushita, Konosuke, 144
Mauborgne, Renee, 68
McCarthy, Jerome, 56
MD player, 68
MECE, 56
Meeting, 35
Meta concept, 69
Meta-concept thinking, 51, 68, 145, 152
Minimum information axiom, 8
Minolta α-7000, 155

N
Nakazawa method, 87
Need, 66
New Feasibility Study, 48
New Testament, 156
Nishibori, Eizaburo, 141

O
Options, 12
Orthogonal table, 94
Outside-in, 75

P
Parameters, 94, 115
Performance prediction, 110
∏-theory, 19
Point evaluation method, 5
Probability, 14
Product, Price, Promotion and Place (4P), 56
Prototypes, 93

R
Recsat, 8
Request, 12
Reticular activating system, 143
Robust design, 130

S
Scenarios, 62
Seed, 65
Squared deviation, 112
Stabilizing power circuit, 130
Standard deviation, 23
Standard normal distribution, 30
Strategic campus method, 68
Success stories, 138
Suh, N.P., 93
Sustainable Development Goals (SDGs), 57
SWOT analysis, 48, 73
Symptomatic thinking, 139
System, 3
System constraint condition, 154
System feasibility probability, 7, 12, 18
System feasibility probability curves, 104
System range, 15, 22
System range coefficient, 23
System range curves, 100

T
Taguchi method, 6, 130
Timing, 63
Total design, 153

U
Uncertain events, 61
Uniform probability density distribution, 15

V
Value criteria, 144
Value curve, 68, 70
Virtual bike, 76
Visio, 153
Vision, 50, 71
Visionary companies, 11

W
Why-Why Analysis, 151