
Integrated Statistical and Automatic Process Control Hybrid Process and Quality Control


G. Venkatesan

Alpha Science International Ltd., Oxford, U.K.

Integrated Statistical and Automatic Process Control
248 pages | 31 figures | 14 tables

G. Venkatesan
567 Clayton Road, Clayton South, Victoria 3168, Australia

Copyright © 2014 ALPHA SCIENCE INTERNATIONAL LTD.
7200 The Quorum, Oxford Business Park North, Garsington Road, Oxford OX4 2JZ, U.K.
www.alphasci.com

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the publisher.

ISBN 978-1-84265-881-9
E-ISBN 978-1-78332-062-2

Printed in India

Preface

The material contained in this book is based on the author's academic and industrial experience of process modelling and product quality control. The fields of 'Stochastic Control' and 'Control Engineering' are vast subjects that span long periods of time. Situations that arise in the practice of process control have been addressed by both statistical and process control practitioners, applying the concepts of stochastic and engineering control theories. This book reviews some of the relevant situations and cites references published in the process control literature. It covers the application of Statistical Process Control (SPC) and Automatic Process Control (APC) principles, practices, tools and techniques for process and product quality control at the interface of the two disciplines. The core contributions and fundamental work on time series analysis, forecasting and control by Professor George Box, Gwilym Jenkins, K. J. Astrom and others working in the stochastic process control and control engineering areas will always be of immense value to process control practitioners. This book can assist and guide process control and quality control practitioners to develop a process control methodology for quality production by applying stochastic and mathematical modelling concepts, and to develop and simulate a stochastic feedback control algorithm and analyse and interpret the results. For this purpose, the book works at the interface of the two disciplines, which share the common objective of process and product quality control. The book explains the use of the developed feedback control algorithm in simulation, and the interpretation of the simulation results, to develop statistical process control procedures and process regulation schemes.

It is expected that the parallel processing computer architectures described in this monograph, and other computer architectures currently available, can be used to incorporate the stochastic feedback control algorithm suitably in the process control domain, in order to achieve efficient and cost-effective process and product quality control. Some mistakes and errors may have crept in, in spite of best efforts to keep the book free of omissions, commissions and repetition and to maintain the flow of text and continuity of discussion.

G. Venkatesan

Acknowledgement

I wish to thank my academic colleagues at Monash, RMIT and Victoria University for valuable guidance, suggestions and contributions that have appeared in international published papers and conference proceedings, for which references are provided in the relevant chapters. I acknowledge the permission given by the publishers of journals and conference proceedings to reproduce the tables and figures. I wish to thank Narosa Publishing House, its management and its production department for their meticulous attention to detail in making this publication possible.

G. Venkatesan

Contents

Preface  v
Acknowledgement  vi

1. Introduction to the Development of Process Control Methodologies  1.1
   1.1 Objectives  1.1
   1.2 Control Charts and Statistical Process Control (SPC)  1.4
   1.3 Motivation and Background  1.7
   1.4 Terms and Definitions used in SPC and APC  1.8
   1.5 History of Statistical Process Control Practitioners' Work  1.11
   1.6 Mathematical Models – Integrated Statistical and Process Model for Regulator Product Quality Control – An Overview  1.12
   1.7 Organization of the Book  1.15
   1.8 Conclusion  1.15
   References  1.16
   Suggested Exercises  1.17

2. Mathematical, Statistical and Cost Models  2.1
   2.1 Objectives  2.1
   2.2 Mathematical Models  2.2
   2.3 Stochastic Models  2.7
   2.4 Cost Models  2.10
   2.5 Conclusion  2.14
   References  2.14
   Suggested Exercises  2.15


3. Process Control Algorithm for Process and Product Quality Control  3.1
   3.1 Introduction  3.1
   3.2 Time Series Model and Time Series Controllers  3.4
       3.2.1 Time Series Model  3.4
       3.2.2 Autoregressive Integrated Moving Average (ARIMA) Model  3.4
       3.2.3 Prediction  3.5
       3.2.4 Time Series Controllers  3.5
   3.3 Stochastic Feedback Control Algorithm  3.5
       3.3.1 Characteristics and Features  3.5
       3.3.2 Stochastic (Statistical) Process Control Algorithm – Criterion  3.6
   3.4 A Comparison of Time Series and PID Controller Performances  3.7
   3.5 Development of Time Series Models  3.8
       3.5.1 Feedback Control Difference Equation  3.8
       3.5.2 Symbols used in the Feedback Control Model Block Diagram of Figure 3.1  3.9
   3.6 Justification for Considering Second-order Dynamic Model for the Process  3.10
       3.6.1 Important Considerations for Feedback (Closed-Loop) Control Stability  3.13
   3.7 Expression for the Control Adjustment in the Input Variable of a Time Series Controller  3.20
   3.8 Time Series Controllers – Forecast Error Variance Feature  3.24
   3.9 Assumptions in the Formulation (Design) of Time Series Controller Parameters  3.26
   3.10 Time Series Controller Performance Measures  3.26
       3.10.1 Time Series Controller Tuning Parameter Combinations  3.26
       3.10.2 Need for Simulation Study of Statistical Process Control Algorithms for Drifting Processes  3.27
   3.11 Simulation Study of Statistical Process Control Algorithm Processes with Drifts – A Review  3.27
   3.12 Conclusion  3.28
   References  3.29
   Suggested Exercises  3.30

4. Discussion and Analysis of Stochastic (Statistical) Feedback Control Adjustment  4.1
   4.1 Introduction  4.1
   4.2 Statistical Process Monitoring and Feedback Control Adjustment Methodology  4.3
   4.3 The EWMA Chart  4.7
   4.4 The Null Hypothesis  4.9
   4.5 EWMA Forecasting  4.10
       4.5.1 Fast and Slow Drifts  4.11
   4.6 Development of Simulation Strategy  4.13
   4.7 Controller Gain (CG)  4.14
   4.8 Simulation Methodology and EWMA Process Control  4.15
   4.9 Feedback Control Adjustment  4.16
   4.10 Benefits and Limitations of Integral Control  4.17
       4.10.1 Dead Band, Dead-Zone, Non-Self-Regulation – Explanation  4.19
       4.10.2 Limitations of Integral Control  4.20
   4.11 Analysis of Simulation Results for Dead-Time 1.0  4.20
   4.12 The Effect of Control Limits on Product Variability  4.30
   4.13 Dependence of Adjustment Variance on Dynamic Parameters and Process Drift  4.31
   4.14 Constrained Minimum Variance Control and Control Action Variance  4.34
   4.15 Effect of Increase in Dead-Time on Control Error Standard Deviation (Product Variability) and Adjustment Interval  4.41
   4.16 Dead-Time and Feedback Control Scheme  4.42
   4.17 Feedback Control Process Regulation Schemes  4.44
       4.17.1 Average Run Length and Control Procedure  4.44
       4.17.2 Process Regulation Schemes  4.46
   4.18 Sample Size  4.51
   4.19 Probability Model for Feedback Control Adjustment  4.51
   4.20 Regression Analysis  4.54
   4.21 Discussion of Simulation Results  4.55
   4.22 Conclusion  4.58
   References  4.59
   Suggested Exercises  4.60

5. Sampled Data – Control to Minimise Output Product Variance  5.1
   5.1 Introduction  5.1
   5.2 Principles of Direct Digital (Discrete) Controller (I) Design: Process Identification  5.3
   5.3 Direct Digital (Sampled-Data) Controller Performance Measures  5.5
   5.4 Process Control Methodology  5.6
   5.5 Benefits of Integral Control and Dead-Time Compensation  5.10
   5.6 Discrete (Sampled-Data) Controller Performance  5.12
   5.7 Remarks, Conclusions and Industrial Applications  5.12
   References  5.14
   Suggested Exercise  5.14


6. Process Control for Product Quality  6.1
   6.1 Introduction  6.1
   6.2 Process Control by Digital Computer  6.2
   6.3 Direct Digital Control (DDC) or Sampled-Data Systems  6.4
   6.4 Justification for the use of DDC  6.5
   6.5 Self-Tuning (Adaptive Control)  6.6
   6.6 Dead-Time or Time-Delay  6.6
       6.6.1 The Need to Identify Dead-Time  6.6
       6.6.2 Sampled-Data (Discrete) Control System and Dead-Time  6.7
   6.7 Identification of Process Dead-Time  6.8
   6.8 The Role and Characteristics of Inertia in a Process  6.9
   6.9 The Rate of Drift of the Process ('r')  6.10
   6.10 Sampling and Feedback Control Performance  6.11
   6.11 The Need for a Dead-Time Compensator  6.12
   6.12 The Smith's Predictor and the Dahlin's Controller  6.13
   6.13 Requirements of Controller for Product Quality  6.15
   6.14 Product Quality Control  6.15
       6.14.1 Engineering Control Application  6.15
       6.14.2 Statistical Control Application  6.16
   6.15 Feedback Control Procedure for Adjusting Controllers  6.17
   6.16 Benefits of Process Control for Product Quality  6.19
   6.17 Conclusion  6.19
   References  6.21
   Suggested Exercises  6.21

7. Parallel Processing Computer Architectures for Process Control  7.1
   7.1 Introduction  7.1
   7.2 The Need for Parallel Processing  7.3
   7.3 MIMO Process Control System  7.4
   7.4 Multi-Plant/Multi-Process Control System  7.5
   7.5 Problem Formulation  7.5
       7.5.1 Feedback Control Algorithm for a Multi-Input/Multi-Output (MIMO) and Multi-Plant/Multi-Process Control System  7.7
   7.6 MIMO Process Control and Multi-Plant/Multi-Process Control  7.8
       7.6.1 MIMO Process Control  7.8
       7.6.2 Multi-Plant/Multi-Process Control  7.8
   7.7 MIMO and Multi-Plant/Multi-Process Feedback Control Methodology  7.8
       7.7.1 MIMO Feedback Control Methodology  7.8
       7.7.2 Multi-Plant/Multi-Process Feedback Control Methodology  7.9
   7.8 Application of Feedback Control Algorithm for MIMO and Multi-Plant Process Control  7.11
       7.8.1 Multiport-Memory Architectural Design, Star–Ring Parallel Processing System for MIMO Process Control  7.11
   7.9 Design of Advanced Computer Architecture for Multi-Plant/Multi-Process Control  7.13
   7.10 Efficiency of Parallel Processing in Process Control Systems  7.14
   7.11 Advantages  7.16
   7.12 Conclusion  7.17
   References  7.17
   Suggested Exercise  7.18

8. Cost Modelling for Process and Product Quality Control  8.1
   8.1 Introduction  8.1
   8.2 Background and Review of the Cost Model  8.2
   8.3 Motivation for Cost Model Development  8.4
   8.4 Feedback Control Adjustment Methodology  8.6
   8.5 Cost Model for Feedback Control Adjustment  8.7
       8.5.1 Assumptions in Developing the Cost Model  8.7
   8.6 Dead-Time and Feedback Control Scheme  8.8
   8.7 Cost Model  8.9
       8.7.1 Method of Developing the Cost Model  8.10
   8.8 Cost Model and Process Regulation Schemes  8.14
   8.9 Comparison of Control Schemes  8.15
       8.9.1 Application of the Cost Model: Comparison of Control Schemes  8.15
   8.10 Benefits  8.16
   8.11 Conclusion  8.16
   References  8.18
   Suggested Exercises  8.19

9. Applications in Product Quality Control  9.1
   9.1 Introduction  9.1
   9.2 Integral Controller  9.3
       9.2.1 Integral Controller Design  9.3
       9.2.2 Implementation of Integral Controller  9.4
       9.2.3 Integral Controller Performance Measures and Parameters  9.5
       9.2.4 Limitations of Integral Controller in Controlling Product Quality  9.8
       9.2.5 Integral Controller Applications  9.11
   9.3 Characteristics and Requirements for Controller Applications  9.12
       9.3.1 Dynamic Optimization  9.12
       9.3.2 Application Requirements for Controller  9.13
   9.4 Feedback (Process) Control Systems  9.13
   9.5 Sampled-Data Feedback Control Systems and Digital Controllers  9.13
       9.5.1 Sampled-Data Feedback Control Systems  9.13
       9.5.2 Digital Controllers  9.14
   9.6 Principle of Operation of Discrete Digital Feedback Controllers  9.14
   9.7 Means to Achieve Damping in a Feedback Control Loop  9.15
   9.8 Requirements of a Sampled-Data Controller Algorithm  9.15
   9.9 Controller Performance and Limitations  9.16
       9.9.1 Performance  9.16
       9.9.2 Robustness  9.17
       9.9.3 Performance Limitations  9.17
   9.10 Computer Control Implementation on a Process Loop  9.18
   9.11 Application of a Model Based Controller to Control Output Product Quality  9.19
       9.11.1 Model Based Controllers  9.19
   9.12 Conclusion  9.20
   References  9.20
   Suggested Exercises  9.21

10. Conclusion  10.1

Index  I.1

Chapter 1

Introduction to the Development of Process Control Methodologies

1.1 OBJECTIVES

Modern high-tech, sophisticated manufactured products should be of high quality, with minimum variation, to satisfy the needs and requirements of customers. Quality products with minimum variances are the outcome of successful control of production processes. Statistical Process Control (SPC) monitors a process and detects special-cause variations. Automatic Process Control (APC) compensates for 'disturbances', adjusts a manufacturing process and reduces variation in output product quality. APC, based on real-time feedback, is popular as a process control tool in manufacturing applications. It is a challenge for process control practitioners to develop methodologies that integrate both SPC and APC to control processes subject to various unknown disturbances. The objective of this book is to describe, from mathematical and statistical principles, the development of a suitable feedback control algorithm: an integrated, hybrid process control methodology for manufacturing processes that have both 'time delays' and unknown disturbances. The integrated process control methodology proposed and discussed in this book will assist in better process control and product quality control, thus reducing manufacturing costs.

In engineering manufacturing (machines, machine components and parts) and in chemical process manufacturing (petroleum, oil refining, mining), it is quite common to encounter problems associated with control of output product quality. A physical or chemical change of matter, or a conversion of energy, is called a process; examples include changes in pressure, temperature, speed and electrical potential. A process can be anything from a control valve in a length of pipe (simple flow control) to an enormously comprehensive and complicated physical and/or chemical complex, such as a system of distillation columns or any complex process comprising many process subsections. The outcome of a manufacturing


process is not always the same as per the original manufacturing plans. For example, if the measurements of a finished product are not exactly as intended, the product may not perform exactly to specification. Examples are numerous in every field of human endeavour. Let us consider a few to explain this point.

1. An automobile tyre sustains a small increment of permanent damage when it hits a sharp stone. The vehicle can still be driven with the damaged tyre, but its efficiency at high speed may not match that of an undamaged tyre.

2. A tiny crater on the surface of a driving shaft caused by corrosion remains there until the shaft is inspected and the surface is smoothed out for normal operation. The machine may still perform its intended function, but the output product may vary in its dimensions and consequently its quality may not be the same.

3. Suppose a certain detail in the standard hospital procedure for taking blood pressure is inadvertently omitted or changed from some point onward. This omission may give a wrong measurement of blood pressure and consequently may lead to a wrong diagnosis and wrong further treatment.

4. A similar case arises in the hospital and health industry. Consider the intravenous administration of a life-saving drug in small doses by slow release. The quantity of drug dispensed is of vital importance: incorrect measurement leads to variances in the quantity of drug received by the patient. If the drug is expensive, there are financial implications as well; the injecting machine will consume more drug for a fixed number of injections if it is set to dispense a larger quantity per injection. Alternatively, if the machine dispenses less, the patient will not receive the quantity of drug recommended by the doctor, which may have consequences for the patient's life.

5. In metal/ore mining and refining operations, a small increase or decrease in output variance may change the quality of the outgoing product, which can have a huge impact on the profit or loss of a business.

Variance can be minimised by closely monitoring processes through process control and control charting, and by making input adjustments when they are essentially required. Many more examples can be cited in this connection. Minimising these variances by applying statistical techniques and tools (control charts) is called Statistical Process Control (SPC); the disturbances being countered are sometimes known as 'variance shocks' or 'sticky innovations'. Process control is the regulation or manipulation of variables influencing the conduct of a process in such a way as to obtain a product of desired quality and quantity in an efficient manner. A variable is a quantity or condition associated with a process whose value is subject to change with time.


One non-automatic example of (technical) feedback is the use of quality control charts to highlight any abnormal variations in the behaviour of a process (Figure 1.1).

[Figure 1.1 Statistical Process Control (SPC): the process and its quality characteristics are monitored by control charts to detect (abnormal) variations and assignable causes; an (informal) control rule is applied and the process adjusted only when it is 'statistically out of control'.]

Statistical and Engineering or Automatic Process Control (APC) practitioners (Figure 1.2) have made efforts for more than half a century to integrate SPC and APC to control product quality and minimise the output variance of a process by defining clearly the contexts for which each is best suited. APC uses an appropriate feedback control procedure, which tells the process operator or technician when, and by how many input units, to 'adjust' an existing process so as to obtain maximum efficiency from it. This system of feedback control is known as 'adaptive quality control' (Box and Jenkins [1962]). Efficiency of process operation can be achieved by applying automatic process control techniques in different ways in production processes. Various forms of feedback and feed-forward regulation schemes are used for process adjustment in APC.

[Figure 1.2 Automatic Process Control: a control algorithm determines the process adjustment from the process and quality characteristics, and is updated if adaptive.]
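The feedback adjustment loop of Figure 1.2 can be illustrated with a minimal simulation. The sketch below (Python; the parameter values, the linear-drift disturbance and the single-gain integral rule are illustrative assumptions for this chapter, not the algorithm developed later in the book) shows how repeated feedback adjustment keeps a drifting process near its set point:

```python
import random

def simulate_integral_control(n_steps=200, target=100.0, gain=0.2,
                              drift=0.05, noise_sd=1.0, seed=1):
    """Simulate discrete integral feedback adjustment of a drifting process.

    Each period the controller observes the deviation (error) from the
    set point and accumulates a correction of -gain * error, which is
    applied to the next observation.  All parameter values are illustrative.
    """
    random.seed(seed)
    level = target          # underlying (unobserved) process mean
    adjustment = 0.0        # cumulative controller action
    errors = []
    for _ in range(n_steps):
        level += drift                              # slow deterministic drift
        y = level + adjustment + random.gauss(0.0, noise_sd)
        e = y - target                              # deviation from set point
        adjustment -= gain * e                      # integral control action
        errors.append(e)
    return errors

errors = simulate_integral_control()
# Without feedback the drift alone would push the error to ~10 units by the
# end; with integral action the late-run error stays small and bounded.
mean_abs_late = sum(abs(e) for e in errors[100:]) / 100
```

With integral action the error settles around drift/gain plus noise, rather than growing without bound, which is the essence of the 'adjust an existing process' role attributed to APC above.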

SPC and APC originated in the component parts manufacturing and the process industries respectively, and have developed in relative isolation from one another. These distinct, even divergent, methodologies have each been significantly successful in improving the quality of processes and products. The purpose of this chapter is to present the history and background of integrating SPC and APC in product quality control. The terms, nomenclature and technical definitions used in the practice of both SPC and APC are explained in this chapter. The ideas and methodology proposed in this monograph are drawn from the existing work of process control practitioners available in the stochastic process and control engineering literature. Further, the process control methodology described in this book can be used, modified and integrated into a hybrid (process) control system to make the best use of both APC and SPC methodologies.

The common practice in the engineering and process control industries is to apply SPC methodologies, as a basis for achieving fundamental process improvement, to processes in which successive observations are ideally 'independent and identically distributed' ('iid'). Stochastic process control addresses the situation where observations are dynamically related over time; it helps to run the existing process well within statistical control limits, rather than taking process improvement as the main objective of control. Tucker et al. (1991) presented a scheme based on this control philosophy. SPC is a collection of techniques (which include statistical monitoring and Shewhart charts as well as other 'techniques') found useful in improving product quality by helping a statistical quality control practitioner locate and remove root causes of quality variation (Tucker et al. (1991)). Cumulative Sum (CUSUM) charts and Exponentially Weighted Moving Average (EWMA) charts are frequently employed along with Shewhart charts in SPC.

1.2 CONTROL CHARTS AND STATISTICAL PROCESS CONTROL (SPC)

Control charts, the tools of SPC, are used to decide when to adjust the process and when to leave it alone, as a result of either identifying special or assignable causes of variability or concluding that only a chance-cause system of variation is present in the process control system. Special causes should be duly accounted for when reacting to control warnings. Control charts are used for detecting apparent departures from a model. A model is a theoretical description that adequately accounts for the current behaviour of the system and allows reasonable prediction of its future behaviour within limits of accuracy. A model can be constructed by direct analysis of the physical laws governing the system; however, due to lack of knowledge of all the factors governing the system's behaviour, it may not be possible to build a complete model. Modelling is therefore combined with experimentation to obtain data from the system, and various techniques are used to determine the model which best fits the measured data (Hunt [1989]).
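As a small worked example of 'detecting apparent departures from a model', the sketch below (Python; the data values are invented for illustration, and the constant 1.128 is the standard d2 moving-range factor for an individuals chart) computes 3-sigma control limits from baseline data and flags points falling outside them:

```python
def shewhart_limits(baseline):
    """3-sigma limits for an individuals (single-value) control chart.

    Sigma is estimated from the average moving range of successive
    observations, divided by the d2 constant 1.128 for subgroups of 2.
    """
    n = len(baseline)
    mean = sum(baseline) / n
    moving_ranges = [abs(baseline[i] - baseline[i - 1]) for i in range(1, n)]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(points, lcl, ucl):
    """Indices of points signalling a departure from the in-control model."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

# Invented in-control baseline data for a quality characteristic
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lcl, centre, ucl = shewhart_limits(baseline)

# New observations: the third point departs from the chance-cause model
signals = out_of_control([10.0, 9.9, 12.5, 10.1], lcl, ucl)   # → [2]
```

Points inside the limits are attributed to the chance-cause system; a flagged point prompts a search for an assignable cause, exactly the decision rule described above.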


Process monitoring and control is concerned with regularly checking that the process control system continues to follow the assumed model; as such it parallels hypothesis testing as a statistical procedure. SPC techniques help an analyst monitor a process so as to detect and remove special causes of variability that are inconsistent with the working model. SPC improves the process over the long term by finding and removing these special causes. It may also prompt subsequent action to reduce 'product variability' (control error standard deviation). Single-value, X-bar and range control charts, cumulative sum (CUSUM) charts and exponentially weighted moving average (EWMA) charts are the tools of the trade employed in SPC (Figure 1.1). Traditional SPC and Statistical Quality Control (SQC) monitor and control production processes and process parameters by detecting temporary departures of process performance from its stable state, identifying the assignable or special causes of variation indicated by these deviations, and eliminating them, thus 'improving' the process in the end.

APC is a collection of techniques for devising (feedback and feed-forward) algorithms to manipulate the adjustable variables of a process to achieve the desired process behaviour (for example, output close to a desired or target value may be the objective in some cases), Tucker et al. (1991). APC performs the monitoring and control functions by adjusting relevant process variables to maintain a performance criterion in some desirable neighbourhood of a target value, usually a quality characteristic of a consistent end product of the process.

At this stage, it is pertinent to mention that in the modern practice of industrial process control there may be specific processes where SPC (in parts manufacturing and engineering industries) and APC (in chemical, petroleum-refining and mining industries) are used as stand-alone techniques for process monitoring and control. Because there is a plethora of industries, it is difficult to draw a line of demarcation where either SPC or APC should be used in isolation; there are no specific rules or guidelines recommending the use of SPC or APC alone in a particular industry. Much depends on the quality control requirements and the nature of the process to be controlled. However, it is appropriate to use the statistical monitoring charts of SPC when successive process measurements can reasonably be modelled as independent and identically distributed ('iid') and when the major concern is to detect departures from such an ideal. APC is most effective in the context of a 'wandering' process, in which the product mean constantly changes; the term 'wandering mean' takes its name from the field of Statistics. Such wandering processes can be modelled by autoregressive integrated moving average (ARIMA) time series, and APC techniques are effective on an iid process only when used in a tactical manner. Feedback controllers are typically commissioned to maintain the set points of important process parameters.
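The 'wandering mean' idea can be made concrete with a short simulation. The sketch below (Python; the IMA(1,1) parameterisation, lambda value and seed are illustrative assumptions) generates a wandering process and shows that an EWMA one-step forecast, which is the minimum mean square error forecast for an IMA(1,1) series, tracks it far better than a fixed grand mean would:

```python
import random

def ima11(n, lam=0.3, sd=1.0, seed=7):
    """Simulate an IMA(1,1) 'wandering mean' series.

    y_t = mu_t + a_t, where the mean wanders as mu_t = mu_{t-1} + lam * a_{t-1}
    and a_t is white noise.
    """
    random.seed(seed)
    mu, prev_a, ys = 0.0, 0.0, []
    for _ in range(n):
        mu += lam * prev_a
        a = random.gauss(0.0, sd)
        ys.append(mu + a)
        prev_a = a
    return ys

def ewma_forecasts(ys, lam=0.3):
    """One-step-ahead EWMA forecasts: z_t = lam * y_t + (1 - lam) * z_{t-1}."""
    z, forecasts = ys[0], []
    for y in ys:
        forecasts.append(z)            # forecast made before observing y
        z = lam * y + (1.0 - lam) * z  # update the EWMA statistic
    return forecasts

ys = ima11(500)
fc = ewma_forecasts(ys)
mse_ewma = sum((y - f) ** 2 for y, f in zip(ys, fc)) / len(ys)
ybar = sum(ys) / len(ys)
mse_mean = sum((y - ybar) ** 2 for y in ys) / len(ys)
# For a wandering process the EWMA forecast beats the fixed grand mean,
# which is why EWMA-based monitoring and adjustment suit APC contexts.
```

This contrast is the practical content of the iid-versus-wandering distinction above: on an iid process the grand mean would do as well, but on a drifting mean only the locally weighted EWMA keeps up.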


In Algorithmic Statistical Process Control (ASPC), traditional SPC and feed-forward/feedback APC methods are integrated into an algorithmic process control system (APCS). The ASPC methodology seeks to exploit the strengths of both SPC and APC, two fields that have developed in relative isolation from one another. The goal of ASPC (Figure 1.3) is to reduce predictable quality variations using the feedback and feed-forward control techniques discussed in Section 1.1, while monitoring the complete process control system to detect and remove unexpected root causes of variation. Variations also arise in the flow, measurement and control of fluids in a production process. For example, ASPC drew on the knowledge and experience gained in controlling and monitoring intrinsic viscosity in a particular General Electric polymerization process; this led to a better understanding of how to unify SPC and feedback control into a single APCS.

[Figure 1.3 Algorithmic Statistical Process Control (ASPC): a control algorithm determines the process adjustment from the process and quality characteristics; a monitoring algorithm watches the signals and seeks/corrects root causes, updating the control algorithm.]

For this purpose, Tucker et al. (1991) built this unique APCS on the results and achievements of past work by process control specialists and practitioners; see, for example, the work by Harris (1989), Baxley (1991), Palmer and Shinnar (1979), MacGregor (1987), Box and Jenkins (1976) and Astrom (1970). Scott et al. (1992) showed that it is practically possible to monitor, control and measure product variability and its performance measures, viz., the variance and standard deviation of an outgoing product quality variable. At the same time, the authors resolved the technical issues that arose in making the union between the two process control methodologies to minimise product variability a practical proposition. Moreover, the authors endeavoured to bring both approaches into better perspective by delineating more clearly the contexts in which each is best suited: the engineering parts manufacturing industry, or the automatic process control of the chemical, pulp and paper, petroleum-refining and numerous other process industries.

Statistical process monitoring and control is achieved through constant monitoring of the process and control charting. SPC is looked upon more as a control charting tool/technique, heavily dependent on sampling procedures and laboratory analysis, and somewhat slow to diagnose and correct the process owing to factors such as the sampling method used and the accuracy and representativeness of the samples drawn from the population. APC, also referred to by engineers as Engineering Process Control, is immediate and automatic: a correction is applied as soon as an error is detected and the process is brought under control. In these respects SPC and APC are two distinct, even divergent, methodologies, each of which has scored significant successes in the drive for quality improvement. Tucker et al. (1991) contend that substantial improvements to product quality are often best attained through an integration of techniques from both methodologies, whereby one exploits the benefits of each. Algorithmic statistical process control (ASPC) is such an integrated approach to quality improvement: an approach that realizes quality gains through appropriate adjustment (that is, process control) and through elimination of the root causes of variability signalled by statistical process monitoring. Tucker et al. (1991) explained the underlying philosophy of SPC and APC, which forms the fundamental basis, application context and traditional line of development for combining them; it is beyond the scope of this book to reproduce the quintessence of that paper.

The motivation and background presented in this book differ from the aims and goals of ASPC and APCS. Instead, SPC and APC are integrated into a hybrid control system by means of a 'statistical approach to automatic process control' at the interface of the two methodologies. A (stochastic) feedback control algorithm is developed with two dynamic parameters, considering dead-time (time-delay) compensation in making process adjustments.
Process control regulation schemes are also devised to achieve minimum mean square error (MMSE) deviation from target of the outgoing product quality variable, with suitable Adjustment Intervals (AIs), within the Lower and Upper Control Limits (LCL and UCL) of the product specification. The motivation and background of this approach are explained in the next section.
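The idea of adjusting only at suitable intervals while keeping the output near target can be sketched with a simple bounded ('dead-band') EWMA adjustment rule. This is an illustrative toy only: the IMA(1,1) disturbance, the band width and the resetting of the forecast after each adjustment are assumptions made for the sketch, not the regulation scheme derived in later chapters:

```python
import random

def bounded_adjustment(n=300, lam=0.3, band=1.0, sd=1.0, seed=3):
    """Bounded (dead-band) EWMA adjustment of a wandering process.

    The deviation from target is forecast with an EWMA; a compensating
    adjustment is made only when the forecast leaves the band +/- band,
    so adjustments occur at intervals rather than every period.
    Returns (output deviations from target, number of adjustments made).
    """
    random.seed(seed)
    mu, prev_a = 0.0, 0.0          # IMA(1,1) disturbance state
    comp = 0.0                     # cumulative compensation applied
    z = 0.0                        # EWMA forecast of the output deviation
    outputs, n_adj = [], 0
    for _ in range(n):
        mu += lam * prev_a                      # mean wanders
        a = random.gauss(0.0, sd)
        y = mu + a + comp                       # observed deviation from target
        outputs.append(y)
        z = lam * y + (1.0 - lam) * z           # update forecast
        if abs(z) > band:                       # outside the dead band:
            comp -= z                           #   cancel the forecast deviation
            z = 0.0                             #   forecast now back at target
            n_adj += 1
        prev_a = a
    return outputs, n_adj

outputs, n_adj = bounded_adjustment()
# The output stays near target while adjustments happen only occasionally,
# trading a small variance increase for far fewer control actions.
```

Widening the band lengthens the adjustment interval at the cost of more deviation from target; the cost models of Chapter 8 formalise exactly this trade-off.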

1.3 MOTIVATION AND BACKGROUND

It is interesting to note that the SPC and APC concepts have: (i) independent origins, growth, development and history in different manufacturing environments; (ii) independent purposes, uses and applications in the parts manufacturing and Engineering Process Control industries; and (iii) differences and limitations when SPC and APC principles are applied independently, in isolation. In this context, it is pertinent to know that the integration of SPC and APC was discussed as early as 1963; see Box and Jenkins (1963). Process control practitioners originated this idea independently of control engineers, who were developing Control Systems Theory on their own and were trying to tackle the problem of controlling continuous chemical processes automatically, without human intervention. The control engineers developed 'pole' placement ('pole' being a mathematical term in transfer function control engineering terminology, explained subsequently) and stability theories, and used transfer functions, frequency response methods, etc.

The origin, growth and development of Control Systems Theory gave birth to new terms in Control Engineering terminology; to quote a few at this introductory juncture:

Set point: Desired value or target.

Deviation: Error.

Controller: A device which operates automatically to regulate a controlled variable.

Feedback: Control in which a measured variable is compared to its desired value to produce an actuating error signal that is acted upon in such a way as to reduce the magnitude of the error.

Feed-forward control: Control in which relevant information concerning one or more conditions that can disturb the controlled variable is converted, outside of any feedback loop, into corrective action to minimize deviations of the controlled variable.

There are many terms similar to those mentioned above; the purpose here is to familiarise the reader with the literature. For further details, see the Instrument Society of America Standard on Process Instrumentation Terminology (ISA – S51.1/1976). The technical terms and definitions in process control used in this book are explained in Section 1.4. The focus is on the integration of the SPC and APC process control methodologies and its industrial and scientific applications.

1.4 TERMS AND DEFINITIONS USED IN SPC AND APC In this monograph, the process control methodology is developed to control output product quality by minimising its variance when the input is subject to random noise. The statistical process control and control engineering terms used are explained, with definitions. To begin with, basic terms such as Automatic Process Control, Statistical Process Control, feed-forward control, feedback control, deviation (error), simulation, model, algorithm and controller are explained; a description is then given of the organisation of the book chapters and the subject matter contained in each chapter. Terminology useful in the context of the book is explained here; any other terms not explained in this book can be found in scientific and technical dictionaries.

Introduction to the Development of Process Control Methodologies

It is desirable that the reader familiarises himself or herself with these process control terms in order to comprehend the text material presented in this monograph. Terms and Definitions The definitions presented are an extract from the Instrument Society of America Standard on Process Instrumentation Terminology (ISA-S51.1/1976). The ISA Standard contains additional terms, including many notes and amplifying figures that relate to APC terminology. The reader is recommended to keep a copy of the ISA Standard available for routine reference; see also Appendix B, Glossary of Standard Process Instrumentation Terminology, an extract from the text by Paul W. Murrill (1981). "A control system which operates without human intervention is called an 'Automatic Control System', whilst a control system is a system in which deliberate guidance or manipulation is used to achieve a prescribed value of a variable." Though this definition is a general statement, there are exceptions to a pure (100 per cent) automatic control system, in that some form of human intervention is at least necessary to re-start or re-energise the system. By contrast, 'Statistical Process Control' (SPC) is, in general, the control of processes by statistical techniques and methodology, mainly control charts, in which human interaction in process monitoring and control charting is essential to bring the process under control and into stable operating conditions. 'Adaptive control' is control in which automatic means are used to change the type or influence of control parameters in such a way as to improve the performance of the control system. A 'device' is an apparatus for performing a prescribed function. To 'calibrate' is to ascertain the outputs of a device corresponding to a series of values of the quantity which the device is to measure, receive or transmit.
Data obtained in such a manner are used to (i) determine the locations at which scale graduations are to be placed, (ii) adjust the output to bring it to the desired value within a specified tolerance, and (iii) ascertain the 'error' (explained subsequently) by comparing the device output reading against a standard. 'Deviation' is another name for error. The term 'signal' is explained from the control engineer's point of view: a 'signal' is a physical variable, one or more parameters of which carry information about another variable, which the signal represents.

'Ideal value' is the value of the indication, output or ultimately controlled variable of an idealized device or system. 'Error' is the algebraic difference between the indication and the ideal value of the measured signal; it is the quantity which, algebraically subtracted from the indication, gives the ideal value. In this context, a 'parameter' is a quantity or property that is treated as constant but which may sometimes vary or be adjusted. A 'summing point' is a junction or node at which algebraic addition of signals is performed. A 'feedback loop' or 'closed loop' is a signal path which includes a forward path, a feedback path and a summing point, and forms a closed circuit. To understand the meaning of a closed loop further, the terms 'loop gain' and 'closed-loop gain' are defined first. 'Loop gain' is the ratio of the change in the return signal to the change in its corresponding error signal at a specified frequency. 'Closed-loop gain' arises in a closed-loop system and is the ratio of the output change to the input change at a specified frequency; working with such frequency-dependent gains is referred to as the frequency-response approach in Control Engineering. 'Compensation' is the provision of a special construction, a supplemental device, circuit or special materials to counteract sources of error due to variations in specified operating conditions. A 'compensator' is a device which converts a signal into some function which, either alone or in combination with other signals, directs the final controlling element to reduce deviations in the directly controlled variable. A 'feedback signal' is a value associated with the controlled variable, which is sensed to originate the feedback. 'Control action', of a controller or of a controlling system, is the nature of the change of the output effected by the input. The output may be a signal or the value of a manipulated variable.
Similarly, the input may be the control-loop feedback signal when the 'set point' (desired value or target) is constant, an actuating error signal, or the output of another controller. A 'manipulated variable' is a quantity or condition which is varied as a function of the actuating error signal so as to change the value of the directly controlled variable. An 'error signal' is, in a closed loop, the signal resulting from subtracting the return signal from its corresponding input signal, whereas an input signal is a signal applied to a device, element or system. 'Set point' is an input variable which sets the desired value of the controlled variable, the desired value being the preferred value of the controlled variable. 'Derivative' (D) control action is control action in which the output is proportional to the rate of change of the input.
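As an illustration of how several of these terms fit together, the following minimal sketch (not taken from the ISA Standard or from this book's algorithm; the gains and the process model are assumptions made purely for illustration) shows a discrete controller with proportional and integral action driving a simple first-order process toward its set point:

```python
# Minimal sketch (illustrative assumptions only): a discrete PI controller
# driving an assumed first-order process toward a set point.

def simulate_pi(set_point=1.0, kp=0.5, ki=0.2, steps=50):
    y = 0.0          # controlled (output) variable
    integral = 0.0   # accumulated error (integral action)
    for _ in range(steps):
        error = set_point - y           # deviation from the desired value
        integral += error
        u = kp * error + ki * integral  # manipulated variable
        y += 0.3 * (u - y)              # assumed first-order process response
    return y

print(round(simulate_pi(), 3))  # settles near the set point
```

The integral term accumulates past deviations, so any persistent offset between output and set point is eventually driven to zero; derivative action, defined above, would add a further term proportional to the rate of change of the error.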

As technology advanced, the control engineers faced more technological problems and challenges in controlling processes, and introduced terms such as 'filter', 'white noise' and 'coloured noise' to describe specific control functions or characteristics of a signal. It became necessary to predict the feedback control error in order to make the input adjustment that keeps the controller output on target or set point. For this purpose, Smith (1959) devised the 'Smith Predictor', while Kalman (1960) came up with the idea of the filter named after him, popularly known as the 'Kalman Filter'. Then the problem arose of compensating for the time delay taken by an input adjustment to travel through the entire process and reach the output. For this purpose, process control practitioners designed 'dead time' (time delay) compensators and developed process control algorithms, some of which became popular and some not, such as Dahlin's algorithm, referred to later in this monograph. As mathematical and computing tools advanced, the control engineers made considerable use of simulation principles to replicate actual processes that occur in industrial environments and practice; such efforts and experiments met with great success. Among the other types of control that were introduced and experimented upon were the model-based controller, the self-adapting or self-tuning controller, the Proportional Integral (PI) controller, the Proportional Integral Derivative (PID) controller, the Direct Digital Controller (DDC) and many more. Minimum variance control is believed to be one of the oldest of these, though it did not take off with the same resounding success as the other types of controllers introduced by the engineers.
One reason was that those other controllers devised by engineers were robust, stable, easy to tune and adapt, and fitted quite well within the principles of control systems theory and the emerging automatic process control technology.
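The dead-time problem mentioned above can be made concrete with a small sketch. The first-order-plus-dead-time model and every numerical value below are illustrative assumptions, not the process model developed in this book:

```python
from collections import deque

# Illustrative sketch only: a first-order process whose input takes effect
# after a dead time of `delay` sampling intervals. All numbers are assumed.

def step_response(delay=3, gain=2.0, tc=0.7, steps=12):
    """Unit-step response of y[t] = tc*y[t-1] + (1-tc)*gain*u[t-delay]."""
    pipeline = deque([0.0] * delay)    # inputs still travelling through the process
    y, out = 0.0, []
    for _ in range(steps):
        pipeline.append(1.0)           # unit step applied at the input
        u_eff = pipeline.popleft()     # the input that reaches the output now
        y = tc * y + (1 - tc) * gain * u_eff
        out.append(round(y, 3))
    return out

print(step_response())
# the first `delay` samples stay at 0.0; only then does the output move
```

Because the controller sees no response at all during the dead time, naive feedback tends to over-correct; this is the difficulty that Smith predictors and dead-time compensators address.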

1.5 HISTORY OF STATISTICAL PROCESS CONTROL PRACTITIONERS' WORK Turning now to the contribution of statistical process control practitioners: Shewhart (1931) emerged on the international scientific stage in the USA and developed control charts. This was followed by other noteworthy and monumental work on statistical process control charts, one example being Barnard (1959). We mention here other statistical process control charts that use Exponentially Weighted Moving Averages (EWMA) and Cumulative Sums (CUSUM). These charts are: (i) "The Exponentially Weighted Moving Average" by Hunter (1986). (ii) "Design of Exponentially Weighted Moving Average Schemes" by Crowder (1989).

(iii) CUSUM charts by Lucas (1976) and "Fast Initial Response for CUSUM Quality-Control Schemes" by Lucas and Crosier (1982). In between, the control engineers were equally up to the challenging task of independently developing APC and publishing their work on Automatic Process Control. Control engineering theory developed so rapidly that a few noteworthy milestone contributions are mentioned in order to complete the discussion. Noted among them are the famous Ziegler and Nichols (1942) controller tuning rules. Mayr (1970) describes the origins of feedback control, and Prett and Garcia (1988) explain the fundamental process control principles. Shinskey's (1988) contributions to automatic process control are important. Earlier, Astrom (1970) set the basis for stochastic control theory and its applications. Meanwhile, Box and Jenkins (1962, 1963, 1970, 1994) developed stochastic process control theory and its practice; their monograph on time series analysis, forecasting and control is a front-runner for advanced process control. Contributions by MacGregor (1988) and by Astrom and Wittenmark (1984) are also well known in on-line statistical process control. Baxley (1991) focussed attention on drifting processes and, based on industrial work in the oil and petroleum refining industries, carried out a simulation study of statistical process control algorithms for drifting processes. Palmor and Shinnar (1979) worked on the design of sampled data controllers.
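The EWMA and CUSUM statistics behind the charts cited above follow standard textbook recursions, which can be sketched as follows (the smoothing constant, the reference value k, and the data are illustrative assumptions only):

```python
# Minimal sketch of the two chart statistics, using the standard textbook
# recursions; lambda, k and the data below are illustrative assumptions.

def ewma(data, lam=0.2, start=0.0):
    """EWMA recursion z_t = lam*x_t + (1-lam)*z_{t-1}."""
    z, out = start, []
    for x in data:
        z = lam * x + (1 - lam) * z
        out.append(round(z, 3))
    return out

def cusum_high(data, target=0.0, k=0.5):
    """One-sided upper CUSUM: C_t = max(0, C_{t-1} + x_t - (target + k))."""
    c, out = 0.0, []
    for x in data:
        c = max(0.0, c + x - (target + k))
        out.append(round(c, 3))
    return out

shifted = [0.1, -0.2, 0.0, 1.2, 1.0, 1.3]   # mean shifts upward mid-stream
print(ewma(shifted))
print(cusum_high(shifted))
```

Both statistics stay near zero while the data vary about the target, and climb steadily once the mean shifts, which is what makes them sensitive to the small sustained shifts that a Shewhart chart can miss.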
For a more comprehensive literature review, visit the ADT website link "http://wallaby.vu.edu.au/adt-VVUT/public/adt-VUT20050404.122353/index.html" of Victoria University of Technology, Melbourne, and in particular Chapter 2 of the thesis on the statistical approach to automatic process control by Venkatesan (1997). See also Box and Kramer (1992) on statistical process monitoring and feedback adjustment, and Tucker et al. (1991) for more details about Algorithmic Statistical Process Control.

1.6 MATHEMATICAL MODELS – INTEGRATED STATISTICAL AND PROCESS MODEL FOR REGULATOR PRODUCT QUALITY CONTROL – AN OVERVIEW The concept of an Integrated Statistical and Process (S&P) Model for Regulator Product Quality Control developed in this monograph is based on the monumental

work of Professor George Box (University of Wisconsin, U.S.A.), the late Dr. Jenkins (U.K.), Professor MacGregor (McMaster University, Canada), Scott Vander Wiel (AT&T Bell Laboratories, U.S.A.), William Tucker and Frederick Faltin (researchers, GE Centre, U.S.A.), and on important contributions from many other process control practitioners who have endeavoured to integrate Statistical Process Control (SPC) and Automatic/Engineering Process Control (APC/EPC). The integrated S&P model introduced in this book draws upon the knowledge and expertise presented in the monographs of Box and Jenkins and of K.J. Astrom, and in technical publications in Technometrics, IEEE Transactions, the Journal of the Royal Statistical Society and the Journal of Quality Technology, to name a few among the many publications available in the process control literature. The integrated S&P model considers the integration of SPC and APC at their interface and makes use of principles and techniques from both. Among the principles used is Box and Jenkins's Auto Regressive Integrated Moving Average (ARIMA) time series model, for describing the stochastic output when identically and independently Normally distributed random shocks pass through a second-order filter. The ideas conveyed in Smith's predictor for dead time compensation and in Dahlin's algorithm are also used, along with the results of Baxley's (Monsanto Corporation, U.S.A.) paper on 'Simulation Study of Statistical Process Control Algorithms for Drifting Processes'. Design principles from Palmor and Shinnar's paper on 'Design of Sampled Data Controllers' are also discussed in Chapter 6. In this book, a second-order process with two time constants (inertia terms) is modelled mathematically to describe the process, and a stochastic ARIMA (0, 1, 1) time series model is used to describe the stochastic output.
These two models are combined to develop a stochastic feedback control algorithm with integral controller and dead time compensation properties to reduce product variability. The integral controller element in the feedback control algorithm calculates the input adjustment required to bring the outgoing product quality control variable as close as possible to the controller set point (target), minimising the control error standard deviation. Simulation of the feedback control algorithm yields control data about the number of adjustment intervals required to bring the output close to the desired target. A cost control model is also developed to show that it is economically and practically possible to control a process effectively and efficiently within the constraints of feedback control stability, controller gain (set to 1.00) and appreciable process gain (PG). The book concludes with applications to product quality control in the paper manufacturing industry, in food container packing (over- or under-filling of food containers), and in other industries where producing a product outside the specification limits would infringe consumer protection laws. The integral controller has minimum variance and adaptive control features along with dead-time compensation. A parallel processing architecture for implementing the stochastic feedback control algorithm for Single Input Single Output (SISO) and Multi-Input Multi-Output (MIMO) processes is also discussed in Chapter 7.
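The flavour of such a scheme can be conveyed by a hedged sketch. This is not the book's algorithm (which includes second-order process dynamics and dead time compensation); it simply simulates an ARIMA (0, 1, 1) disturbance and applies an integral (EWMA-type) input adjustment, with the responsiveness 1 − theta and the seed chosen as assumptions for illustration:

```python
import random

# Hedged sketch: an ARIMA(0,1,1) disturbance N_t = N_{t-1} + a_t - theta*a_{t-1}
# drifts without control; an integral (EWMA-based) adjustment of the input
# largely cancels the drift. All tuning values here are assumptions.

def simulate(theta=0.6, steps=2000, adjust=True, seed=1):
    rng = random.Random(seed)
    a_prev, noise, comp, errors = 0.0, 0.0, 0.0, []
    for _ in range(steps):
        a = rng.gauss(0.0, 1.0)               # random shock
        noise += a - theta * a_prev           # ARIMA(0,1,1) increment
        a_prev = a
        e = noise + comp                      # deviation of output from target
        errors.append(e)
        if adjust:
            comp -= (1 - theta) * e           # integral adjustment of the input
    return sum(x * x for x in errors) / steps  # mean squared deviation

print(simulate(adjust=False) > simulate(adjust=True))  # drift inflates variance
```

Left unadjusted, the ARIMA (0, 1, 1) disturbance wanders like a random walk and the mean squared deviation grows with time; with the integral adjustment the deviation settles down to roughly the variance of the underlying random shocks, which is the minimum-variance idea in miniature.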

This monograph covers the origin and development of SPC and APC technologies and the integration of principles from the two methodologies for process control, with relevant and suitable references. It reviews the situation at the interface of the SPC and APC areas; its scope covers the application of stochastic control and automatic process control theory for integrated process and product quality control at the interface of the two disciplines. The monograph acknowledges the core and fundamental work on Time Series Analysis, Forecasting and Control by Box and the late Jenkins, and lays much emphasis on their fundamental contributions, which will always be of immense value to process control practitioners. It develops control methodologies for quality production by building stochastic and mathematical models and a feedback control algorithm, carrying out analysis of the models, and interpreting the results, with the common objective of process and product quality control. The book explains the practical use of the developed stochastic feedback control algorithm in simulation, and the interpretation of the simulation results to develop statistical process control procedures and process regulation schemes for implementation, through illustrative figures, graphs and tables with numerical values. The figures and tables show that minimum variance control can be achieved economically within the operating process control constraints. It is expected that the simple parallel processing computer architectures described in this monograph, and the more advanced computer architectures developed in later years, can be used to implement the algorithm in process control computers to achieve the purposes of process and product quality control.
It is important to note that most of the references in this book relate to research and international journal publications of the past 60-70 years. For recent advances currently in vogue in the 21st century, and the current state of SPC and APC, the reader is directed to publications in journals such as Technometrics and those published by the American Society for Quality Control, and to IEEE and SIAM journals on control. The book by Box et al. (2010) on 'Statistical Control by Monitoring and Adjustment', 'Process Control' (2011) by Myke King, and other similar books such as 'Process Dynamics and Control – Modelling for Control and Prediction' by Roffel and Betlem (2006), 'Process Dynamics and Control' by Seborg, Edgar and Mellichamp (2010), and 'Automation, Production Systems, and Computer-Integrated Manufacturing' by Groover (2007) will also provide additional reading material and give an insight into process control. In addition to the above monographs, this book may be very useful as a textbook for a course on Advanced Process Control.

Quality control methodologies and process control technologies continue to advance dynamically through continuous, on-going research. This monograph can be used as a textbook by undergraduates learning process and product quality control and by process control practitioners in industry. For more information on the further development of process control methodologies, with recent references, the reader is referred to the author's forthcoming book "Process Control – 2000 and Beyond". Some recent references on the integration of Statistical and Automatic Process Control are given in Chapter 10, Conclusion, for the benefit of readers in the two process control disciplines.

1.7 ORGANIZATION OF THE BOOK Before concluding this chapter, an overview of the organisation of the remaining chapters of the book is presented below. 2. Mathematical, Statistical and Cost Models. 3. Stochastic Process Control Algorithm for Process and Product Quality Control. 4. Discussion and Analysis of Stochastic (Statistical) Feedback Control Adjustment. 5. Sampled Data Control to Minimise Output Product Variance. 6. Process Control for Product Quality. 7. Parallel Processing Computer Architectures for Process Control. 8. Cost Modelling for Process and Product Quality Control. 9. Applications in Product Quality Control: (i) Integral Controller Application (ii) Model Based Controller to Control Output Product Quality. 10. Conclusion.

1.8 CONCLUSION In this chapter, the process control concepts were introduced, along with the S&P model for process control and the origin and development of the two process control methodologies. The discussion on integrating the SPC and APC techniques for product quality control was accompanied by an explanation of the motivation and background of the proposed integrated process control system. The technical terms and terminology used in SPC and APC were given so that the reader may obtain a better understanding. The history of both the statistical and engineering

process control practitioners' work in controlling production processes was also explained in detail, along with the organisation of the book chapters. It is a common observation that if a process or process control system is not capable of producing quality parts consistently, owing to instability of the process, the process control system is unreliable for producing quality parts efficiently. The reliability concept is brought into the probability model because quality and reliability are inter-connected: each has a bearing, influence and dependency on the performance of the other. If the process is not reliable enough to produce quality products with minimum variance and standard deviation, the product becomes unmarketable and of no commercial value to the manufacturer. This, in a way, forms the underlying basis for this monograph. In the next chapter, mathematical and process modelling are described, along with an explanation of mathematical, statistical and cost models.

REFERENCES
Astrom, K.J., (1970). Introduction to Stochastic Control Theory, New York: Academic Press.
Astrom, K.J. & Wittenmark, B., (1984). Computer Controlled Systems, Englewood Cliffs, NJ: Prentice-Hall.
Barnard, G.A., (1959). Control Charts and Stochastic Processes, Journal of the Royal Statistical Society, Ser. B, 21, 239-271.
Baxley, R.V., (1991). A Simulation Study of Statistical Process Control Algorithms for Drifting Processes, in: SPC in Manufacturing, New York and Basel: Marcel Dekker.
Box, G.E.P. & Jenkins, G.M., (1963). Further Contributions to Adaptive Quality Control: Simultaneous Estimation of Dynamics: Non-Zero Costs, Bulletin of the International Statistical Institute, 34, 943-974.
Box, G.E.P. & Jenkins, G.M., (1970, 1976). Time Series Analysis: Forecasting and Control, Oakland, CA: Holden-Day.
Box, G.E.P. & Kramer, T., (1992). Statistical Process Monitoring and Feedback Adjustment – A Discussion, Technometrics, 34(3), 251-267.
Box, G.E.P., Jenkins, G.M. & Reinsel, G.C., (1994). Time Series Analysis: Forecasting and Control, NJ: Prentice Hall.
Box, G.E.P. et al., (2010). Statistical Control by Monitoring and Adjustment, Wiley, UK.
Crowder, S.V., (1989). Design of Exponentially Weighted Moving Average Schemes, Journal of Quality Technology, 21, 155-162.
Groover, M.P., (2007). Automation, Production Systems, and Computer-Integrated Manufacturing, Prentice-Hall.
Harris, T.J., (1989). Interfaces between Statistical Process Control and Engineering Process Control, University of Wisconsin.
Hunter, J.S., (1986). The Exponentially Weighted Moving Average, Journal of Quality Technology, 18, 203-210.

Kalman, R.E., (1960). A New Approach to Linear Filtering and Prediction Problems, Journal of Basic Engineering, Transactions ASME, Series D, 82, 35-45.
King, Myke, (2011). Process Control, Wiley, United Kingdom.
Lucas, J.M., (1976). The Design and Use of V-Mask Control Schemes, Journal of Quality Technology, 8, 1-12.
Lucas, J.M. & Crosier, R.B., (1982). Fast Initial Response for CUSUM Quality-Control Schemes: Give Your CUSUM a Head Start, Technometrics, 24, 199-206.
MacGregor, J.F., (1987). Interfaces Between Process Control and On-Line Statistical Process Control, Computing and Systems Technology Division Communications, 10(2), 9-20.
MacGregor, J.F., (1988). On-line Statistical Process Control, Chemical Engineering Progress, 84, 21-31.
Mayr, O., (1970). The Origins of Feedback Control, Cambridge, MA: MIT Press.
Murrill, Paul W., (1981). Fundamentals of Process Control Theory, Instrument Society of America, North Carolina, USA.
Ziegler, J.G. & Nichols, N.B., (1942). Optimum Settings for Automatic Controllers, ASME Transactions, 64, 759-768.
Palmor, Z.J. & Shinnar, R., (1979). Design of Sampled Data Controllers, Industrial and Engineering Chemistry Process Design and Development, 18(1), 8-30.
Prett, D.M. & Garcia, C.E., (1988). Fundamental Process Control, McGraw-Hill, N.Y.
Roffel, B. & Betlem, B., (2006). Process Dynamics and Control – Modelling for Control and Prediction, Wiley.
Seborg, D.E., Edgar, T.F. & Mellichamp, D.A., (2010). Process Dynamics and Control, Wiley.
Shewhart, W.A., (1931). Economic Control of Quality of Manufactured Product, New York: Van Nostrand.
Vander Wiel, S.A., Tucker, W.T., Faltin, F.W. & Doganaksoy, N., (1992). Algorithmic Statistical Process Control: Concepts and an Application, Technometrics, 34(3), 286-297.
Shinskey, F.G., (1988). Process Control Systems, NY: McGraw-Hill.
Smith, O.J.M., (1959). A Controller to Overcome Dead Time, ISA Journal, 6(2), 28-33.
Tucker, W.T., et al., (1991). Algorithmic Statistical Process Control: An Elaboration, Statistical Report 102, AT&T Bell Laboratories, Murray Hill, New Jersey.
Venkatesan, G., (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes), Ph.D. Thesis, Victoria University of Technology, Melbourne, Australia. ADT website link: "http://wallaby.vu.edu.au/adt-VVUT/public/adt VVUT20050404.122353/index.html"

Suggested Exercises 1. Conduct an Internet literature search for the latest technical publications on SPC-APC integration. 2. Make a classroom presentation of an article found. 3. Conduct SPC, APC and ASPC studies on a process operation.

Chapter

2 Mathematical, Statistical and Cost Models

2.1 OBJECTIVES In this chapter, definitions of mathematical, stochastic and cost models are explained in the context of quality. Particular attention is paid to model parameters and to the effect of variation of these parameters on model performance and behaviour, with some examples. Problems may arise from the selection and choice of model parameters: an incorrect choice has deleterious consequences, so that the model no longer truly represents the actual real-life conditions. Briefly, a model is a true representation of the behaviour of a process, one which not only reflects its past behaviour but will also represent the process in the future (within reasonable accuracy limits). A model is characterised by its parameters and derives its character, properties and features from them. A test of the effectiveness of the model and its parameters is to substitute different values and verify that the results remain consistent with the behaviour of the process. A pre-requisite for the model to possess this feature is a proper model validation test conducted before the model is put into actual use. As such, along with the model parameters, model validation also plays an important part in determining a suitable model for a process. There are different kinds of models used in real-life situations. Economic, population, stochastic and cost models are just a few, illustrating the diversity of situations and uses. Each model is unique and different from the others because of its properties, its intended use and the data to be collected and used for it. For example, population models are built on statistical data of calendar years and the numerical values of the age of the population. An economic model may represent the

gross domestic product (GDP), the calendar years again being used for the corresponding GDP values. It is common practice to build economic models for big projects in which millions of dollars of investment are involved and the outcomes are measured against different criteria of benefit to the community at large. There are meteorological models that explain weather patterns and changes in climatic conditions; these use parameters such as wind force (speed), atmospheric pressure, temperature and rainfall, together with past data collected over a number of years and scientific forecasting methods, to predict future weather. Another example is the huge water reservoirs and hydraulic dams that store and supply water to an entire town or city. Water storage models use water usage and storage levels to predict the availability of water at a future date. Governments and municipal councils responsible for water supply use these models to impose restrictions on the use of water and to regulate supply, ensuring that sufficient water will be available for industrial, domestic and agricultural use; they also calculate the associated risk of reduced rainfall and forecast future demand and population growth. There are inventory models used in manufacturing, and transport models to regulate traffic flows along highways and major arterial routes. This chapter briefly explains the methodology, background, necessity and characteristic features of modelling a particular process, which may be stochastic in nature; cost models are developed to study the associated economics of the process. Thus this chapter is intended to give an insight into the mathematical and stochastic models that will be encountered in this monograph.

2.2 MATHEMATICAL MODELS From an engineering and scientific point of view, mathematical models involve a direct correlation between the inputs and outputs associated with any process, arising in manufacturing or otherwise: a certain relationship exists between the input and the output. Under ideal conditions it can usually be expected that, in a well-defined model, a change in the input parameters will cause a change in the output parameters; the input has a direct bearing on the output. One of the main objectives of this monograph is process and product quality control. As such, the focus will be mainly on mathematical process models and, later, on statistical (stochastic) models. In particular, attention will be on a process model (of second order) and a particular stochastic model (the Time Series Auto Regressive Integrated Moving Average model) that are of interest. These models are used in the monograph to describe the process and the stochastic (random) variations that occur at the output due to changes made in the input. There are other mathematical process models and stochastic models available in the literature, but interest in this book is focussed on, and limited to, models that deal with process and product quality control.

Two dynamic parameters that describe the 'inertial' (dynamic) characteristics of the process are considered, and a mathematical model is developed for the process. The mathematical description of the dynamic characteristics of a (process control) system is a 'mathematical model'. In modelling a relation between the input and output of a process, it is of utmost importance to consider the inertial characteristics of the process, so that the developed mathematical model describes exactly the process under consideration. An example that can be cited in this context is a feedback control scheme (Figure 2.1) in which control is achieved by changing the input catalyst formulation, which in turn changes the output viscosity. For simplicity, discussion is limited to elementary 'linear' feedback control schemes, and 'linear difference' equations are used to model the input-output relationship of a process. Linear systems are ones in which the unknown quantities appear in linear form: a differential or difference equation is 'linear' if its coefficients are constants or functions of the independent variables only. Difference equations are used to represent 'discrete time systems', which deal with sequences of numbers separated by defined intervals. In discrete systems, the input and output control signals at the sampling instants are assumed sufficient to describe the behaviour of the system at those discrete times. For further details, see Computer Controlled Systems: Theory and Design, by K.J. Astrom and B. Wittenmark, Prentice Hall, New Jersey, 1990. A dynamical system presents an output that depends not only on its present input but also on its past history; in other words, such a system has memory. See Lecture Notes in Control and Information Sciences by K.J. Hunt (1989). Control engineers call this mathematical characterisation of a process 'System Identification'.
A system is a ‘mechanism’ or a collection of entities, physical or otherwise, that ‘transforms’ a set of clearly distinguishable quantities called ‘inputs’ into a set of ‘outputs’. For practical, realistic, real-life situations, non-linear considerations have to be taken into account; this is discussed in Chapter 3 while developing the ‘Stochastic Feedback Control Algorithm for Process and Product Quality Control’. A chemical reactor whose input is feed concentration and whose output is specific gravity is an example of a ‘Single Input, Single Output’ (SISO) system. A Feedback Control Scheme, or Feedback Control System as it is usually referred to, is one in which an ‘output controlled variable’ is measured and this measured value is compared with some ‘desired value’ (usually called the ‘Target’ or ‘Set point’). Figure 2.1 is a simplification of a real process that also involves ‘error’ and influences forcing a change in the output or the values of the controlled variables. The terms input, output and error were explained in Section 1.4. The term disturbance (noise) is explained shortly.

2.4

Figure 2.1 A Simple Feedback Control Scheme (block diagram: input Xt and disturbance Zt act on the process to produce output Yt; a control equation closes the feedback loop)

The scheduling and regulation of a production line or a manufacturing process is a feedback (control) system. The output controlled variable is the production rate or the inventory level, and the desired value is obtained from a computer that minimizes costs or maximizes profits (taking into account the characteristics of the market). The ‘error’ is used to compute the ordering rate for raw materials and labour. Some more relevant examples that can be cited in this context are:
1. The levels of the inventories of various canned goods on a retailer’s shelf,
2. The production rates of a machine shop,
3. Prices controlled by subsidies and tariffs, and
4. Quantities controlled by ‘rationing’ and wages that are tied to the cost-of-living; see Otto J.M. Smith, McGraw Hill Book Publishing Company, Inc. [1958], N.Y.
One non-automatic example of ‘technical feedback’ is the use of quality control charts to indicate when the process is out of control and to highlight any abnormal variations in the behaviour of a process (see Figure 2.2). A more precise definition of a Feedback Control System is ‘one which tends to maintain a prescribed relationship between the output and the reference input by comparing these and using the difference as a means of control’ (Ogata, K., Modern Control Engineering [2001]). It is called feedback control when it is possible to use the deviation from target, or error signal, of the output characteristic itself to calculate the appropriate compensatory changes that need to be made (Harrison [1964]), (Figure 2.2). In this context, it is pertinent to know about feed forward control also. When knowledge of the value of some fluctuating measured input variable is used to partially cancel out deviations of the output from the target value, the action is called feed forward control (Figure 2.3).
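The feedback idea just described, measuring the output, comparing it with the target, and using the deviation to compute a compensatory adjustment, can be sketched in a few lines; the proportional gain and the starting level are illustrative assumptions, not from the text.

```python
# Sketch of the feedback loop of Figure 2.2: the deviation (error) of the
# measured output from the target is used to compute a compensating
# adjustment to the input. Gain and initial level are made-up values.

def feedback_step(output, target, gain=0.5):
    """Return the input adjustment computed from the deviation (error)."""
    error = target - output      # deviation from target
    return gain * error          # compensatory adjustment

# A process starting off target is pulled back by repeated adjustment:
level, target = 10.0, 0.0
for _ in range(20):
    level += feedback_step(level, target)
```

With this gain the deviation halves at every step, so the level settles close to the target after a handful of adjustments.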


Figure 2.2 Feedback Control Model (block diagram: the output controlled variable is compared with the target value; the resulting deviation or error drives the controller, whose output is the input manipulated variable to the process, which is also subject to disturbance (noise))

Process control can be simple or complex. For example, certain steel-making processes are inherently complex in nature; for satisfactory production and control, it is essential that their needs be understood properly. Blast furnaces, petroleum and petrochemical plants are examples of complex processes that are difficult to control. Some thermal processes can be modelled as linear ‘stationary’ systems for small deviations from the set-point temperature. The process variables in the bleaching of pulp are dependent on one another, which makes process modelling complex. Many wood-fibre treatment processes are inherently ‘non-linear’ in nature. Wood is a natural material used in paper-making processes; because of its complicated structure, it causes constant changes that make control processes complex. Raw materials used in continuous process industries, such as crude oil, are heterogeneous, which complicates process modelling.

Figure 2.3 Feed Forward Control Model (block diagram: a measured input variable feeds the feedforward control system; the controller output and the disturbance act on the process, which gives the controlled output variable)

In industrial process control, a first-order dynamic model with a single ‘time constant’ (defined in Chapter 3) is commonly used to describe the process behaviour. Instead, the second-order transfer function [(ω0 − ω1B)/(1 − δ1B − δ2B²)] considered in this monograph has two time constants and a ‘dead-time’ (time-delay). The objectives of ‘discrete’ (sampled-data) control in controlling some time-delay models can be realised through the use of digital techniques. Dead-time occurs in process control
systems and makes it difficult to achieve adequate feedback control stability and speed of response of the output product quality variable to process adjustments; it makes process control difficult through sluggish response to input adjustments. A dead-time process afflicted by disturbance offers a challenge to the process control practitioner. Efforts must be made, therefore, to reduce the effects dead-time has on a process. There is a need to model a process in a way that adequately describes its dynamic behaviour. The benefits of improved process modelling of time-delay control systems provide the necessary motivation to use a second-order dynamic model to represent dead-time processes. The justification and reasons for considering the second-order dynamic model for the process are discussed in Chapter 3. Process lags are encountered while controlling chemical processes; the cumulative effects of such lags can be conveniently replaced by a single time-delay for modelling purposes. The effects of ‘transmission’ lags between the controller, the product quality variable and the process are to be considered in the feedback control strategy to determine the controller settings. If these lags are large in comparison with other process lags, they will have a significant effect on the operation and performance of the process and the controller. ‘Disturbance’ (noise) (Zt in Figure 2.1) causes undesirable changes in a process and makes the process output drift off target if no ‘control’ action is taken to compensate for the deviation (error) of the product quality output from target (Box and Kramer [1992]). ‘Control’ compensates for the disturbances which ‘infect’ a system. In control engineering, a disturbance is a signal which tends to adversely affect the value of the output of the system.
Disturbance causes variability in the output or outputs of an otherwise stable process by producing undesirable changes in the (output) mean (Venkatesan [1997]). The need for process regulation arises when the system is afflicted with disturbances, unless proper control action is taken to compensate for them. Feedback control is an operation which, in the presence of disturbances, tends to reduce the difference between the output of a system and the reference input, and which does so on the basis of this difference. A disturbance causes a process to drift and makes the process output variable move away from the target or controller set point (desired value), so it is necessary to compensate for this by taking proper control action. For process drifts arising from disturbances, the true process level is not a stationary stochastic process (Box and Kramer [1992]). A process in which the mean of the output product variable varies with respect to time is described as subject to a non-stationary disturbance. Alternatively, a stationary disturbance represents the process situation in which there is no drift in the mean of the output product quality variable and the process is in a ‘state of control’, meaning that it is in an equilibrium condition. Disturbances entering at various points of a process are often persistent in nature. For example, there may be variations in ambient temperature or a change
in the properties of feed stocks that are supplied to a process input by different vendors or suppliers. In many instances, it may not be economically feasible or physically possible to eliminate these disturbances. Disturbances envisaged as the result of independent random shocks can be conveniently represented by a first-order (linear) differential equation and as dynamic control systems in continuous time. Such a system is referred to as a ‘first-order dynamic system’, and the process model that describes the dynamic process is said to be of first order since it contains only one dynamic (inertial) parameter that describes the nature and character of the first-order process model. The dynamic-stochastic model of the process under consideration is a general-purpose three-parameter transfer function-noise model with two dynamic (inertial) parameters δ1 and δ2. The second-order dynamic model of the time-delay system, for some fairly common complex processes, is a discretely coincident (2,1) continuous system described by

Yt = [(ω0 − ω1B)/(1 − δ1B − δ2B²)] B^(b+1) Xt, ...(2.1) (Box and Jenkins 1970, 1976)

where δ1 = e^(−T/t1) and δ2 = e^(−T/t2) are its dynamic parameters, t1 and t2 are the process time constants for the second-order process model and T is the sampling period. ‘b’ is the dead-time (time delay) in the process, which is distributed between the (ω0 − ω1B) and B^(b+1) terms. Xt is the input manipulated variable and Yt is the output controlled variable. B is the (mathematical) backward shift operator (BXt = Xt−1) and ω denotes the dynamic response; the terms t1, t2 and T are explained in Chapter 3. For a given stable dynamic model (transfer function), the higher terms in (ω0 − ω1B) and δ(B) in the backward shift operator B decrease exponentially with increasing sampling period. The (ω0 − ω1B) term in the numerator can be extended further if there is a large variation in the assumed ‘a priori’ dead-time value. The transfer function [(ω0 − ω1B)/(1 − δ1B − δ2B²)] depends on T and approximates a linear response at fixed time intervals. A controller designed on the second-order dynamic model can therefore be adjusted, and the use of this model thus provides a justification for representing some complex processes. The justification and reasons for considering and choosing this particular second-order dynamic process model, with two inertial parameters, two time constants and a dead-time element, to represent the process are explained further in Chapter 3. Having dealt in detail with mathematical and feedback control models, let us turn attention to stochastic (statistical) models.
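Equation (2.1) can be rearranged into a difference equation, Yt = δ1Yt−1 + δ2Yt−2 + ω0Xt−b−1 − ω1Xt−b−2, and simulated directly. The following is a minimal sketch; the parameter values are illustrative assumptions chosen to satisfy the feedback control stability conditions, not values from the text.

```python
# Sketch: simulate the second-order transfer-function model of Eq. (2.1),
#   Yt = [(w0 - w1*B)/(1 - d1*B - d2*B^2)] * B^(b+1) * Xt,
# written as the difference equation
#   Yt = d1*Y[t-1] + d2*Y[t-2] + w0*X[t-b-1] - w1*X[t-b-2].
# All parameter values below are illustrative assumptions.

def simulate_second_order(x, w0, w1, d1, d2, b):
    """Return the output series y for an input series x (zero initial state)."""
    y = [0.0] * len(x)
    for t in range(len(x)):
        y[t] = (d1 * (y[t - 1] if t >= 1 else 0.0)
                + d2 * (y[t - 2] if t >= 2 else 0.0)
                + w0 * (x[t - b - 1] if t - b - 1 >= 0 else 0.0)
                - w1 * (x[t - b - 2] if t - b - 2 >= 0 else 0.0))
    return y

# Unit-step input: nothing happens during the dead time b, then the output
# rises toward the steady-state gain (w0 - w1)/(1 - d1 - d2) = 1.0.
y = simulate_second_order([1.0] * 30, w0=0.5, w1=0.2, d1=0.9, d2=-0.2, b=2)
```

The flat initial segment of length b followed by a gradual rise illustrates why dead-time makes feedback control sluggish: an adjustment made now has no visible effect until b sampling periods later.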

2.3 STOCHASTIC MODELS A ‘deterministic model’ makes possible the exact calculation of the value of some time-dependent quantity at any instant of time. In many process control problems, it
might not be possible to write a deterministic model to calculate the future behaviour exactly, because of unknown factors. However, it might be possible to derive a model that could be used to calculate the probability of a future value lying between two specified limits. Such a model is called a ‘probabilistic’ or ‘stochastic’ model. Models that have no random input are deterministic; for example, a strict appointment schedule with fixed service times would be a deterministic situation. Stochastic models, on the other hand, operate with inputs that are random in nature. A service provided by a bank to customers who arrive randomly and require a variety of services is a classic example, the service time being a random variable. A model can have both deterministic and random inputs in different components of its process, resulting in a stochastic model, which usually is the case in a real-life situation. It is clear also that stochastic models will result in uncertain outputs, which once again is closer to reality. Deterministic information is relatively simple, but stochastic information requires additional information to describe the nature and characteristics of the randomness; thus stochastic models are relatively more complex than deterministic models. Control system engineers describe system model behaviour in which the response of a system to a given input is certain and well defined (deterministic), and use Laplace transforms to obtain simplified solutions (Deshpande and Ash [1988]). The linearity assumption supplies an approximation for many practical situations. In a similar manner, in dealing with discrete processes, linear difference equations are employed to represent processes in which the sampling intervals are short enough that the dynamic or inertial properties of the process cannot be ignored.
A first-order process may be represented by the first-order transfer function or ‘filter’ (the term used in engineering terminology by control system engineers; cf. MacGregor [1987]). A statistical model is a formalisation of relationships between variables in the form of mathematical equations. A statistical model describes how one or more random variables in the model are inter-related to one another. The model is statistical because the variables are related not deterministically but stochastically. In mathematical terms, a statistical model is frequently thought of as a pair (Y, P), where Y is the set of possible observations and P is the set of possible probability distributions on Y. It is assumed that there is a distinct element of P which generates the observed data; statistical inference enables us to make statements about which element(s) of this set are likely to be the true one. Most statistical tests can be described in the form of a statistical model. For example, the Student’s t-test for comparing the means of two groups of data is a test of whether the groups are really two different populations. Error is assumed to be normally distributed in most models, which may not be true in some cases. Mathematical models formulated from statistical measurements such as height, weight, physical features and length of arms and legs are used to build statistical models for analysis, inference and interpretation of results. These models provide vital data that are useful to prevent malnutrition in growing
children, to monitor growth and progress and to take corrective action where necessary so that expected growth is achieved over a specific period of time. Another well-known and common example is the monitoring of agricultural crop yields against fertilizer, water usage and the use of better harvesting methods. One such statistical (stochastic) model used in practice is a time series model. A time series model describes a series of events that occur sequentially over time in a time-sequence pattern. Typical examples of stochastic models are stock market variations in share prices, drastic changes in annual rainfall for a particular city over a number of years, and the yearly yield of agricultural produce with respect to use of water, fertilizers, and changes in harvesting criteria and methods. In a similar manner, statistical models and process control charts, such as Shewhart charts, are used to test and predict the randomness and variations in process control variables and to take corrective (control) action to bring the process under control. Some of the plausible causes of output variations are also explained while developing ‘The Stochastic Feedback Control Algorithm for Process and Product Quality Control’ in Chapter 3. The effects of disturbance on a process and its behaviour can be predicted by suitable statistical models. There exist many situations where there is a need to model the disturbances in a proper and fitting manner; statisticians describe such disturbances in the form of stochastic (statistical) time series models. The idea of using time series for this purpose was proposed by Alwan and Roberts in 1986 through their paper on time series modelling for statistical process control, published in the Proceedings of the American Statistical Association.
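The two-sample Student's t-test mentioned above as an example of a statistical model can be computed from first principles. The equal-variance (pooled) form is shown below; the two small samples are made-up illustrative data.

```python
# Sketch: pooled two-sample Student's t statistic for comparing the
# means of two groups of data. The sample values are illustrative.

import math
import statistics

def t_statistic(x, y):
    """Equal-variance two-sample t statistic."""
    nx, ny = len(x), len(y)
    pooled_var = (((nx - 1) * statistics.variance(x)
                   + (ny - 1) * statistics.variance(y))
                  / (nx + ny - 2))
    return ((statistics.mean(x) - statistics.mean(y))
            / math.sqrt(pooled_var * (1 / nx + 1 / ny)))

group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [4.6, 4.8, 4.5, 4.7, 4.9]
t = t_statistic(group_a, group_b)  # compare with t tables on nx+ny-2 df
```

A large value of t (relative to the t distribution on nx + ny − 2 degrees of freedom) suggests the two groups really are different populations.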
A class of stochastic time series models called autoregressive integrated moving average (ARIMA) models (Box and Jenkins [1970, 1976]; Box, Jenkins and Reinsel [1994]) provides an approach for handling drifting processes that cause output variations. ARIMA models characterize and forecast the drifting process behaviour and describe the dynamic relationship between an (output) controlled variable and an (input) manipulated variable. The reader is advised to refer to the various texts on time series by K.J. Astrom, Box and Jenkins and other authors to gain more information about the ARIMA class of models and their use in statistical analysis, results and interpretation. A disturbance model is needed to derive an optimal (feedback control) scheme. This model should be simple yet have sufficient flexibility to describe the different situations that may occur in practice. In this monograph, an ARIMA (0, 1, 1) model is used to describe the stochastic output process. Box and Jenkins [1970, 1976] have demonstrated that this particular type of model describes the most commonly occurring disturbances (noise) that cause the product quality output to move away from target and controller set point. Again, disturbances, their causes and how to control drifts due to disturbances are explained in Chapter 3. A feature of this particular ARIMA model is that a ‘non-stationary’ disturbance is described and represented in the following manner.

Figure 2.4 Block Diagram for the Autoregressive Integrated Moving Average Model, adapted from Box and Jenkins [1970, 1976] (white noise at passes through a moving average filter (or transfer function) Θ(B), a stationary autoregressive filter Ø⁻¹(B) and a non-stationary summation filter S^d to produce the time series Zt)

The drifting behaviour of a process due to disturbance Zt is characterised by the ARIMA (0, 1, 1) time series model,

zt = zt−1 + at − Θat−1, 0 < Θ < 1, ...(2.2)

where zt is the stochastic variable and at the random variable at time t (Box and Jenkins [1970, 1976]). In Equation (2.2), the disturbance Zt is the output (at time t) of a linear filter subjected to a sequence of uncorrelated random shocks {at}. It is assumed that at follows a Normal distribution with mean 0 and standard deviation σa, written N(0, σa). Θ is the integrated moving average (IMA) parameter, 0 < Θ < 1, of the ARIMA (0, 1, 1) time series model that is used to model the random output variations due to disturbance (noise). A ‘(linear) filter’ can be regarded as a ‘device’ for transforming the highly dependent and possibly non-stationary process Zt into a sequence of uncorrelated random variables at. In Figure 2.4, note the control engineering terminology: engineers refer to random (input) shocks as ‘white noise’, and to the moving average filter (or transfer function) Θ(B), the stationary autoregressive filter ø⁻¹(B) and the non-stationary summation filter S^d. For more explanation of the ARIMA model and the meaning of the symbols and notation, the reader is advised to refer to Box and Jenkins (1970, 1976), Box, Jenkins and Reinsel (1994) and other relevant texts on time series. Attention is now focussed on the last class of models developed in this monograph, namely cost models.
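Before moving on, a minimal sketch of the ARIMA (0, 1, 1) disturbance of Equation (2.2); the values of Θ, σa and the seed are illustrative assumptions, not values used in the monograph.

```python
# Sketch: simulate the IMA(1,1) disturbance of Equation (2.2),
#   z_t = z_{t-1} + a_t - Theta * a_{t-1},   0 < Theta < 1,
# with a_t drawn from N(0, sigma_a). Theta, sigma_a and the seed
# are illustrative assumptions.

import random

def simulate_ima11(n, theta, sigma_a, seed=42):
    """Return n successive values of the non-stationary disturbance z_t."""
    rng = random.Random(seed)
    z, a_prev, series = 0.0, 0.0, []
    for _ in range(n):
        a = rng.gauss(0.0, sigma_a)
        z = z + a - theta * a_prev   # Equation (2.2)
        a_prev = a
        series.append(z)
    return series

z = simulate_ima11(200, theta=0.6, sigma_a=1.0)  # a slowly drifting series
```

Plotting such a series shows the slow wander away from any fixed level: exactly the drifting, non-stationary behaviour that motivates feedback adjustment rather than mere monitoring.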

2.4 COST MODELS**

** Prof. S. Kumar, Melbourne, Australia

Cost models are necessary to derive the actual costs involved in operating a process efficiently and at costs that are affordable for consumers. Cost models are a tool for analysing processes, measuring efficiency, controlling costs and monitoring improvements. Cost models are used in business and industry before actually
commencing the production, to go into exhaustive detail on each and every step of the process and to make reasonable and meaningful changes where necessary, for example increasing the batch size or production volume, substituting materials or working with better machinery and equipment. A simple cost model arises in a variety of situations when one type of cost increases while another type of cost decreases and the total cost is the sum of these two costs. In such situations, one has to compromise between the two costs and accept the combination where the total cost is minimized. In this context, we may quote a few relevant examples.
1. Traffic flows on roads: The ideal situation for smooth traffic flow is a two-lane one-way highway; most of the freeways in Australia and the USA are at least two-lane one-way roads. However, the same approach can be very expensive, and may not be feasible for roads at high altitudes. A compromise is achieved by a two-way two-lane road, i.e., one lane for traffic in each direction. This type of road has the disadvantage that a slow vehicle can delay faster vehicles, as overtaking is possible only when it is safe. Hence traffic engineers try to provide overtaking lanes for the traffic on each side when possible. Here the cost increases with each overtaking lane, but traffic flow improves. The optimal compromise results in reasonable traffic flow within a manageable cost.
2. Inventory items in storage: The ideal situation is to store enough items to be able to meet all possible demands. However, the associated risk is that items may become obsolete or deteriorate before they are used; a large number of items will also increase the holding cost of the inventory in the form of insurance, maintenance etc. If fewer items are kept in storage, one may face being out of stock when an item is required. Thus, once again, one has to strike a compromise between inventory cost and demand.
3. Production run: Generally, one machine is used for the production of several items. For example, a soft drink plant usually produces several flavours using the same machine. When the plant switches from one flavour to another, the machine has to be thoroughly cleaned to avoid mixing the two flavours, which would otherwise result in ‘carry-over’ (auto-correlation) effects. This is known as ‘down time’, as no real output takes place during this period. Thus down time is less when a large quantity is produced in one production run; but market demand requires that several flavours be made available at the same time. Here the compromise is to produce the different flavours in adequate quantities, making optimal decisions on production runs.
4. Software development: With the ever-increasing automation of the modern day, more and more machines are designed to be computer controlled, keeping human intervention to a minimum. The software controlling these automatic machines should be error free, that is, able to perform the desired function without operational failure. Thus ideal software must be
error free. However, developed software is tested for errors, and detected errors are corrected before release. Error detection can be a very long process; it may be time consuming and require a lot of capital. As the software must be released before it loses its market value, a compromise is required between cost and the errors that may still be present in the software when it is released into the market. These are a few examples of different situations in which the mathematical model would be similar. A numerical illustration from inventory control is given below using the following notation. (i) Quantitative Illustrative Example:

Q = the order quantity, to be determined.

C0 = the fixed cost of placing an order, independent of the size of the order.

In a production environment, C0 may be the fixed cost to start a production run.

Ci = unit cost of an item, making the purchase cost a function of the order quantity Q, denoted by CiQ.

Ch = the inventory holding cost per item per year.

Let D be the annual demand for the item.

It is obvious that a number of orders will have to be placed over the year to obtain the D units needed to meet the annual demand.

Thus if each order is of size Q, the number of orders will be equal to D/Q.

The total order cost per year will be equal to C0 (D/Q).

Holding cost, in the form of maintenance and insurance etc., per year = Ch(Q/2), since on average the system holds (Q/2) items in the inventory.

Thus the annual total inventory cost for the item will be the sum of (i) the Order cost, (ii) the Cost associated with purchases of the item and (iii) the Inventory holding cost.

Thus the total cost, C = C0 (D/Q) + CiD + Ch(Q/2)

This total cost will vary as a function of the size of one order, Q.

The optimum value can be obtained by differentiating C with respect to Q and setting the first derivative equal to zero.
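Written out, the differentiation step is as follows (the purchase cost CiD does not depend on Q and so drops out; the positive second derivative confirms a minimum):

```latex
\frac{dC}{dQ} \;=\; -\frac{C_0 D}{Q^{2}} + \frac{C_h}{2} \;=\; 0
\quad\Longrightarrow\quad
Q^{*} \;=\; \sqrt{\frac{2\,C_0 D}{C_h}},
\qquad
\frac{d^{2}C}{dQ^{2}} \;=\; \frac{2\,C_0 D}{Q^{3}} \;>\; 0 .
```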

This optimal order quantity is given by Q* = (2C0D/Ch)^(1/2)

The model can be made more realistic by associating shortage cost in the form of goodwill or penalty for not providing the item when required by the customer.

Harris [1915] developed a simple formula to find the ‘Economic Order Quantity (EOQ)’ based on these four costs.
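The cost model above can be sketched in a few lines of code, using for concreteness the figures of the numerical example that follows (C0 = $75, D = 3600 units, Ci = $16, Ch = $4.00).

```python
# Sketch of the EOQ cost model: annual total cost and the optimal
# order quantity Q* = sqrt(2*C0*D/Ch), with the text's example figures.

import math

def total_cost(Q, C0, D, Ci, Ch):
    """Annual cost = order cost + purchase cost + holding cost."""
    return C0 * (D / Q) + Ci * D + Ch * (Q / 2)

def eoq(C0, D, Ch):
    """Economic Order Quantity Q* = sqrt(2*C0*D/Ch)."""
    return math.sqrt(2 * C0 * D / Ch)

Q_star = eoq(C0=75, D=3600, Ch=4.0)   # about 367.4, i.e. order ~367 items
```

Evaluating total_cost at quantities on either side of Q_star confirms that the total cost curve has its minimum near the EOQ.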

(ii) Numerical Example: Let the stock-out cost be denoted by Cs.

Let the cost of placing an order C0 = $75, the annual demand D = 3600 units, the cost of one unit item Ci = $16 per unit and the holding cost per unit per year Ch = $4.00. Then the optimal order quantity is Q* = (2C0D/Ch)^(1/2) = (2 × 75 × 3600/4)^(1/2) ≈ 367; hence the order quantity should be around 367 items.

The cost model described in this monograph aims to make the process control practitioner aware of the costs of sampling and adjusting a process, in order to minimize the manufacturing cost without compromising the objectives of process control. This fact is kept as the basis of the proposed cost modelling methodology, which uses some of the sampling and cost modelling principles available in the stochastic process control literature. The sampling (adjustment) intervals are obtained by direct simulation of the stochastic feedback control algorithm described in Chapter 3. The developed cost model and the associated cost functions use adjustment intervals and are less complex than the method suggested by Abraham and Box [1990]. The cost model and the corresponding process control regulation scheme have applications in basis-weight control on a paper machine and in a batch-polymerisation process that produces polymer resins in two groups of batch reactors that run in parallel and share common raw materials. The cost model can have practical applications in continuous process industries such as industrial paper machines, the commercial production of polymer resin, and the control of the absolute viscosity of a fluid in a chemical plant. The purpose of these process ‘regulation schemes’ is to obtain an idea of the dynamic nature of the process behaviour, the range of adjustment intervals and the corresponding increase in Control Error Standard Deviation (CESTDDVN). The regulation schemes use combinations of the Integrated Moving Average (IMA) parameter Θ of the stochastic ARIMA (0, 1, 1) time series model and the process dynamic parameters δ1 and δ2, taken from a large data set.
This data is generated by the process control computer (in a sub-programme of the simulation code) for each set of combinations of the dynamic parameters δ1 and δ2 that satisfy the feedback control stability conditions, as it iterates through the values of the IMA parameter Θ of the ARIMA (0, 1, 1) stochastic time series model from 0.05 to 0.95. The combinations of these parameters are neither designed nor hand-picked values, and can be obtained for any large or small value of Θ. The process computer uses the mathematical equations and performs the computations necessary for the calculations. The process control practitioner has the option to choose a process regulation scheme depending on the desired control error standard deviation (CESTDDVN) and adjust the process accordingly. The different process regulation schemes are described in Chapter 4. It is shown that frequent adjustment of the process with small Adjustment Intervals (AIs) results in large CESTDDVNs.

The tables show that the process regulation and adjustment intervals result from adjusting the process after long AIs with small increases in CESTDDVN. The alternative process regulation schemes presented in the tables offer an insight into the adjustment intervals and the corresponding CESTDDVN that can result for each particular adjustment interval.

2.5 CONCLUSION In this chapter, an insight into and background on mathematical, stochastic and cost models is presented, with examples, together with the need for considering such models to represent processes and their behaviour when subjected to white noise and disturbance. We explained in particular the first-order and second-order mathematical (process) models that can be conveniently represented by linear differential and difference equations. It is expected that the reader will appreciate the detailed explanations provided for using the second-order mathematical process model, the ARIMA (0, 1, 1) stochastic model in particular to describe the disturbance, and the need for the derivation of the cost model. In the next chapter, the development of the stochastic feedback control algorithm is described in detail.

REFERENCES
Alwan, L. & Roberts, H.V., (1986). Time Series Modelling for Statistical Process Control, Proceedings of the Business & Economic Statistics Section, American Statistical Association, 315-320.
Astrom, K.J., (1970). Introduction to Stochastic Control Theory, Academic Press, New York.
Astrom, K.J. & Wittenmark, B., (1990). Computer Controlled Systems: Theory & Design, Prentice Hall, New Jersey.
Box, G.E.P. & Jenkins, G.M., (1970, 1976). Time Series Analysis: Forecasting & Control, Holden-Day, San Francisco, U.S.A.
Box, G.E.P. & Kramer, T., (1992). Statistical Process Monitoring & Feedback Adjustment – A Discussion, Technometrics, vol. 34, No. 3, pp. 251-257.
Box, G.E.P., Jenkins, G.M. & Reinsel, G.C., (1994). Time Series Analysis: Forecasting & Control, Third Edition, Prentice Hall, N.Y.
Deshpande, P.B. & Ash, R.H., (1981). Elements of Computer Process Control with Advanced Control Applications, Instrument Society of America, North Carolina, U.S.A.
Harris, T.J., (1989). Interfaces Between Statistical Process Control & Engineering Process Control, University of Wisconsin.
Harris, T.J., (1992). Optimal Controllers for Non-symmetric & Non-quadratic Loss Functions, Technometrics, vol. 34, No. 3, pp. 298-305.

Harrison, H.L., (1964). Control System Fundamentals, International Textbook Company, Scranton, Pennsylvania, U.S.A.
Hunt, K.J., (1989). Stochastic Optimal Control Theory with Applications in Self-Tuning Control, Lecture Notes in Control & Information Sciences, 117, Springer-Verlag, Berlin.
MacGregor, J.F., (1987). Interfaces between Process Control & On-Line Statistical Process Control, Computing & Systems Technology Division Communications, 10 (No. 2): pp. 9-20.
Ogata, K., (2001). Modern Control Engineering, Prentice Hall, N.Y.
Shewhart, W.A., (1931). Economic Control of Quality of Manufactured Product, D. Van Nostrand Company, Inc., New York.
Smith, Otto J.M., (1958). Feedback Control Systems, Chapters 9 & 10, McGraw Hill Book Publishing Company, Inc., New York.
Venkatesan, G., (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes), Ph.D. Thesis, Victoria University of Technology, Melbourne, Australia.

Suggested Exercises
1. Develop a mathematical model for a simple known chemical process with associated process dead-time and dynamics.
2. Discuss a practical cost model application in business or industry.

Chapter 3
Process Control Algorithm for Process and Product Quality Control

3.1 INTRODUCTION
The terms process and process control were explained in Chapter 1. An algorithm refers to a control function, procedure or mathematical equation. The term 'stochastic' means random: when a process or variable is referred to as stochastic, it means that the variable or the process is changing in its character and numerical value with respect to time. This chapter describes 'time series controllers' and their use in deriving the stochastic feedback control algorithm. A 'time series' is a sequence of events occurring in time one after another. A controller that is built and works on the basic principle of time series is called a time series controller. A time series controller is used to control a process in a statistical sense, say, to minimize product variance at the output. Time series controllers employ stochastic characteristics to regulate production processes. In this modern era, highly advanced, high-speed automatic and digital computer-controlled process systems are ubiquitous. A minimum variance time series controller works on the concept of minimum variance to control continuous processes. A common understanding among process control practitioners is that minimum variance controllers produce controller outputs that are aggressive on the plant floor under set-point changes. However, the characteristics and features of time series controllers favour the design of a 'quality regulator' for stochastically controlling the output product quality of a continuous chemical process. These controllers provide a means to perform an integrated operation at the interface of Statistical Process Control (SPC) and Automatic Process Control (APC). In general, in most of the quality control situations that are not very much subject to frequently varying


set-point changes, quality targets are well known and determined for most manufacturing processes in industry. Hence, this fact may not pose a major problem when using a time series controller as a minimum variance controller for product quality control. Over the years, as automatic process control technology and methodology advanced and made substantial progress, control engineers developed robust and efficient controllers, which were built and put into practical use successfully. Hence, time series or 'minimum variance' controllers, as they were known before the advent and introduction of Proportional Integral (PI) and Proportional Integral and Derivative (PID) controllers, were replaced over time by the latter, as modern processes required and demanded quick and efficient automatic control of continuous processes. In brief, a PID controller operates on three modes of 'control action', namely proportional, integral and derivative, to automatically control chemical and industrial refining processes. PID controllers, also known as three-term controllers, are automatic, continuous-time controllers. The control action of a controller is the nature of the change of the output effected by the input. Proportional plus integral ('reset') plus derivative ('rate') control action is a 'control action in which the output is proportional to a linear (straight-line variation) combination of the input ('proportional', P), the time integral of the input ('integral', I), and the time rate-of-change of the input ('derivative', D)' (Paul W. Murril [1981]). A PID controller combines the three response control actions additively in such a manner that the 'proportional band' adjustment affects all three proportional, integral (reset) and derivative (rate) control actions simultaneously.
Proportional band is the 'change in input required to produce a full range change in output due to proportional control action' (Paul W. Murril [1981]). Range is the region between the limits within which a quantity is measured, received or transmitted, expressed by stating the lower and upper range values. Time series controllers are considered in this monograph since they have the 'one-step-ahead' forecasting feature which enables derivation of the stochastic feedback control algorithm. Forecasting is the term used to describe the methodology for determining the future value of a variable. This feature is helpful in deriving a 'feedback' control model in this chapter. Control in which a measured variable is compared to its desired value to produce an actuating error signal, which is acted upon in such a way as to reduce the magnitude of the error (deviation), is called 'feedback' (or closed-loop) control. 'Deviation' is any departure from a desired or expected value or pattern. Feedback or closed-loop control uses past output deviations from target to determine an input process adjustment. The feedback process control approach makes use of the 'error' (the difference between the output value and the target value, also known as the 'deviation') as the mechanism for identifying changes to the process input. Using time series analysis, the effect of the disturbance in the absence of a control action is estimated and a dynamic model is developed linking the input and the process output. The feedback from the output


changes the input process variables to maintain the output close to some desired target. The intention of feedback control is to minimize the process output variation, which in the process industries includes 'drifts' along with measurement error. Drift is an undesired change in the output-input relationship over a period of time. The derivation and characteristics of a stochastic feedback control algorithm for process and product quality control are explained, to calculate the 'control adjustment' required in the input variable for a dynamic process. The statistical feedback control algorithm developed for the time series controller and the controller performance measures (explained in Section 3.9) are discussed in this chapter, and a brief review is made of a method suggested by Baxley to control drifting processes (Baxley [1991]). A process is said to be 'drifting' when it has highs and lows in its values for the output variable about its target or desired value ('set point'), such as stock share prices, seasonal commodity prices, the annual yield of an agricultural field, etc. Time series controllers are described and a feedback control model is given in this chapter. Time series controllers are used in the chemical and process industries for regulating quality variables measured at discrete (non-continuous) time intervals. Their 'stochastic feedback control algorithms' are used to calculate a series of adjustments which compensate for the 'disturbances' (Baxley [1991]). Also referred to as 'noise', a disturbance affects the process behaviour and causes variability in the process outputs. The exact cause of a disturbance is not known, but from constant observation and monitoring of the process it is believed and assumed to be the result of (i) sudden changes in external atmospheric conditions and environment, (ii) the chemical and process changes that take place in a process, and (iii) the mixing of inputs from different suppliers.
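The feedback mechanism just described, in which past output deviations from target determine the input adjustment, can be sketched in a few lines of code. This is an illustrative fragment only, not the book's algorithm: the pure unit-gain process and the full-correction rule are assumptions for demonstration.

```python
# Minimal discrete feedback loop: measure the output, form the deviation
# (error) from target, and adjust the input to cancel it next period.
# Illustrative sketch: the process is a pure unit-gain element with an
# additive disturbance (both assumptions, not from the text).

def run_feedback_loop(disturbance, target=0.0, periods=10):
    """Return the sequence of output deviations under full feedback correction."""
    x = 0.0               # input (manipulated) variable
    deviations = []
    for t in range(periods):
        y = x + disturbance(t)    # unit-gain process plus disturbance
        e = y - target            # deviation (error) fed back
        deviations.append(e)
        x -= e                    # adjust input to cancel the observed error
    return deviations

# A constant (step) disturbance of size 2.0 is fully rejected after one period.
devs = run_feedback_loop(lambda t: 2.0)
```

With no dead-time and a known gain, one adjustment removes the deviation; the rest of the chapter is concerned with what happens when dynamics and dead-time break this ideal picture.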
For example, in petroleum refining, the crude petroleum may be supplied by more than one supplier. Examples of changing external atmospheric conditions are the sudden change from a hot climate to the weather conditions present in the event of a thunderstorm, or a sudden drop in temperature due to rain. Feedback control may be applied and employed to control stochastic process outputs when the primary sources of disturbance are either not known or cannot be measured accurately, without any initial information regarding the disturbances. Making use of the available knowledge of the production process and the serially occurring (industrial) process data (which are very likely correlated), it is often possible to build 'stochastic models' to represent the disturbances. Box and Jenkins [1970, 1976] expressed the process inputs and outputs in terms of time series models in order to manipulate the system for control purposes. A model that can be used to calculate the probability of a future value lying between two specified limits is called a 'probabilistic' or 'stochastic' model.


3.2 TIME SERIES MODEL AND TIME SERIES CONTROLLERS
3.2.1 Time Series Model
In statistical process control practice, recourse to Autoregressive Integrated Moving Average (ARIMA) models is often made in order to forecast the drifting behaviour of a process (Box and Jenkins [1970, 1976], Box, Jenkins and Reinsel [1994]). An ARIMA model (Box and Jenkins [1970, 1976], Box, Jenkins and Reinsel [1994]) characterizes and captures the stochastic nature of a statistical 'time series'. Such a time series can be viewed as the sequence generated when independent and identically distributed random shocks, Normally distributed with mean zero and standard deviation one, pass through a 'filter'. A filter is the technical term used by control engineers in Automatic Process Control (APC) to describe a first-order or second-order process model; the random shocks are described as 'white noise'. Time series data are sequences of measurements that follow non-random orders. The analysis of time series is based on the assumption that successive values in the data file represent consecutive measurements taken at equally spaced time intervals.
3.2.2 Autoregressive Integrated Moving Average (ARIMA) Model
ARIMA models belong to a special class of stochastic time series models. These models are used to describe the stochastic disturbances to the process control system and to provide a means of modelling the process dynamics (inertia). These time series models characterise and forecast the drifting behaviour of the process when no control action is taken, and describe the dynamic relationship between the controlled variables (outputs) and the manipulated variables (control inputs). When the time series model predicts an out-of-control signal for shifts in the mean of the quality deviations from target, changes are made to the compensating variable to offset the effects of the predicted situations (Keats and Hubele [1989]).
Models of this kind are used to solve inventory control problems, in econometrics, and to characterize certain disturbances that regularly occur in industrial processes. The general model introduced by Box and Jenkins [1970, 1976] includes autoregressive as well as moving average parameters, and explicitly includes 'differencing' in the formulation of the model. Specifically, the three types of parameters in the model are: the autoregressive parameters (p), the number of differencing 'passes' (the number of times the differencing operation is performed) (d), and the moving average parameters (q). In the notation introduced by Box and Jenkins [1970, 1976], models are summarized as ARIMA (p, d, q). For example, a model described as (0, 1, 2) contains 0 (zero) autoregressive parameters (p = 0) and 2 moving average parameters (q = 2), which were computed for the series after it was differenced once (a single pass, d = 1).
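As a concrete illustration of the ARIMA (0, 1, 2) example above, the model can be simulated directly from its difference-equation form; differencing the simulated series once recovers the underlying MA(2) component. The parameter values and random seed below are assumptions chosen for illustration, not values from the text.

```python
import random

# Simulate an ARIMA(0, 1, 2) series: the first difference of z_t is an MA(2)
# in the random shocks a_t, i.e.  z_t - z_{t-1} = a_t - th1*a_{t-1} - th2*a_{t-2}.
# Parameter values (th1, th2) are illustrative assumptions.

def simulate_arima_012(n, th1=0.5, th2=0.2, seed=1):
    rng = random.Random(seed)
    a_prev1 = a_prev2 = 0.0
    z = 0.0
    series = []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)                   # white-noise shock, N(0, 1)
        z += a - th1 * a_prev1 - th2 * a_prev2    # d = 1: accumulate the MA(2)
        a_prev2, a_prev1 = a_prev1, a
        series.append(z)
    return series

z = simulate_arima_012(200)
diffs = [b - a for a, b in zip(z, z[1:])]   # differencing once recovers the MA(2)
```

The undifferenced series wanders (it is non-stationary), while its first difference is a stationary moving average of the shocks, which is exactly the (0, 1, 2) structure described above.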


3.2.3 Prediction
It is fundamental to process control that it is possible to affect only the future behaviour of a process. It is advantageous to know the future disturbances so as to minimize their effect by timely manipulation of the control variables. It should be possible to predict the future with a reasonable degree of accuracy if the phenomenon that describes the future is known. Predicting (i) the arrival time of a train knowing its location en route, average speed, etc., and (ii) the quality changes in a material flowing out of a pipe by measuring the material properties somewhere upstream, are examples of prediction.
3.2.4 Time Series Controllers
Time series controllers require an adjustment for every sample and provide a performance benchmark by giving the minimum control error variance so long as the underlying process model remains correct. This benchmark is used to test the performance of feedback control strategies formulated (designed) using various available techniques, whether methods used in statistical process control (SPC) or more elaborate strategies such as the 'pole placement designs' of control engineers in APC and 'linear quadratic controllers'. However, as these topics are beyond the scope of this monograph on Integrated SPC-APC, Hybrid Process Control, the reader may refer to standard textbooks on 'Control Engineering' by B.C. Kuo, Ogata and other popular authors on the subject. Time series controllers are capable of giving the one-step-ahead forecast error variance over the time delay (dead-time) period in a process. Dead-time is the interval of time between the initiation of an input change or stimulus and the start of the resulting observable response. With time series controllers, it is possible to provide tight control of processes with dead-time and to provide minimum variance at the same time.
It may be possible to restrict sampling and adjustment of a process until an acceptable control error variance is achieved by making use of the time series controller's forecast error variance property (explained in Section 3.8), and so to minimise monitoring and adjustment costs.
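The one-step-ahead forecasting feature can be illustrated for the simplest drifting disturbance, an ARIMA (0, 1, 1) (IMA) series, whose minimum-variance one-step-ahead forecast is an exponentially weighted moving average of past observations. This is a hedged sketch; the parameter θ, the sample size and the seed are assumptions for illustration.

```python
import random

# One-step-ahead forecasting of an IMA(1, 1) disturbance
#   z_t = z_{t-1} + a_t - theta * a_{t-1}
# whose minimum-variance forecast is the EWMA update
#   f_t = f_{t-1} + lam * (z_t - f_{t-1}),  lam = 1 - theta.
# Values below (theta = 0.6, n = 2000) are illustrative assumptions.

def ewma_forecast_errors(n=2000, theta=0.6, seed=7):
    rng = random.Random(seed)
    lam = 1.0 - theta
    z, a_prev, forecast = 0.0, 0.0, 0.0
    errors = []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        z += a - theta * a_prev           # disturbance evolves (IMA model)
        errors.append(z - forecast)       # one-step-ahead forecast error
        forecast += lam * (z - forecast)  # EWMA update
        a_prev = a
    return errors

errs = ewma_forecast_errors()
var = sum(e * e for e in errs) / len(errs)
# With the matched lam, the forecast errors reduce to the shocks a_t,
# so the forecast-error variance approaches sigma_a^2 = 1.
```

An adjustment that cancels each such forecast is the idea behind the minimum variance controller; the achievable control error variance over the dead-time is then the forecast error variance, as the text notes.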

3.3 STOCHASTIC FEEDBACK CONTROL ALGORITHM
3.3.1 Characteristics and Features
The terms 'stochastic feedback control algorithm' and 'statistical time series (feedback) control algorithm' are synonymous and used interchangeably in the monograph. The stochastic feedback control algorithm or equation derived from the ARIMA models is often coded in a specific high-level computing


language, for easy computerization and 'simulation' of the feedback control algorithm under process conditions that actually prevail in practice. Simulation is a transcription into computing terms of a natural stochastic process: the repetitive solution of a set of mathematical equations that describe a dynamic (constantly changing) process. Thus, the 'time series (feedback) control algorithm' helps to calculate and make a series of adjustments, and minimizes the variance of the output controlled variable at every sample point by making an adjustment that exactly compensates for the forecasted disturbances. It will be shown in Section 3.3 that the stochastic feedback control algorithm derived for the time series controller has 'reset' or 'integral' action, and a controller having integral action can eliminate (steady-state) 'offset' (deviation from set point) (Shinskey [1988]). The derivation of an approximate feedback control difference equation, together with an algorithm for calculating the control adjustment required in the input variable when there are 'dynamics' (inertia) and 'time delay' in a process, is presented in this chapter. An input adjustment made in a process takes time to have its effect on the output because of the time delay or dead-time. The statistical (feedback) control algorithm developed for the time series controller and the controller performance measures are also explained in Section 3.10 and discussed in detail in this chapter. A brief review of a method suggested by Baxley to control 'drifting' processes (Baxley [1991]) is also presented. A process is said to be drifting when its output variable is often changing with respect to time and does not have a constant mean value.
3.3.2 Stochastic (Statistical) Process Control Algorithm – Criterion
A background in the Shewhart process control charts is required to understand the principles on which the time series controller algorithm is developed and formulated. The modified Shewhart approach of splitting the control chart into six zones and employing run rules cannot be applied here, as it takes a hypothesis-testing approach to decide whether or not the process mean is on target. For accurate and precise process and product quality control, in order to deliver stable and reliable output product quality, the product quality mean is required to be exactly on target or as close as possible to the desired target, and this requirement rules out the hypothesis-testing approach. If a hypothesis-testing approach were used, an alternative method would be necessary to calculate the process adjustment from statistics computed from the historical data: when the null hypothesis of being on target is rejected, there is no statistic built into the control logic which provides an estimate of the new process level, and no adjustment can be calculated. The statistical control algorithm employed by a time series controller makes an adjustment to compensate exactly for the 'forecasted


deviation’ from target. There are also the (i) Exponentially Weighted Moving Average (EWMA) controller and (ii) the Cumulative Sum (CUSUM) controller which meet the same criterion; but these controllers are different from time series controllers in that they use the EWMA and the CUSUM statistics respectively to derive their own individual ‘stochastic feedback control algorithms’ to calculate the adjustment that may exactly compensate for the disturbance (Baxley [1991]). This monograph focuses on the application and use of time series controller only.

3.4 A COMPARISON OF TIME SERIES AND PID CONTROLLER PERFORMANCES
At this juncture, it is necessary to compare the performances of time series and proportional integral derivative (PID) controllers in order to appreciate the process control properties of the time series controller for continuous chemical processes that have a dead-time element. The PID controller, though simple, does not possess the capability of providing tight control over processes with dead-time, as explained in the next paragraph. A PID controller works on the principle of making proportional, integral and derivative (rate) changes in the input process variable such that the output variable responds to these changes in the same manner as the input variable. For example, consider a discrete PID controller taking a control action on an output deviation from target occurring at time t. This control action will not affect the output until the dead-time has elapsed. At time t + 1, the PID controller makes another correction for the same output error if there are no (new) disturbances to the process. The effect of the first correction will come through to compensate for the original error over the next adjustment interval, but overcompensation of the output error then results when the second correction comes through a period later. A controller that is tuned tightly compensates for the disturbance and not the real changes in the process, and so leads to overcompensation. This problem may be overcome by 'tuning' the controller correctly (adjusting the controller parameters in small amounts) and doing away with overcompensation (Wardrop and Garcia [1992]). PID controllers tend to give unstable (oscillatory, unsteady, 'wave-like') performance in the face of dead-times unless they are 'detuned' so that only part of the necessary action is taken at each instant.
The problem of tight control tends to be compounded as more periods of dead-time (time delay) are introduced by sampling. On the other hand, the optimal time series controller, in which the prediction of the disturbance is made at time t for b + 1 periods into the future (where b is the dead-time), over the period of the process dead-time, will not compensate for the same error a second time (Harris, MacGregor and Wright [1982]). Dead-time compensators are built on a similar principle. A time series controller performs better with respect to dead-time compensation than the


EWMA controller, which requires the controller gain to be less than 1.0 and has no dead-time compensation term in its control algorithm (page 486, Baxley [1991]). The effect of the choice of the sampling interval on controller performance can be considered by comparing the minimum output error variance obtainable at the sampling instants for various sampling intervals. Since the sampling intervals differ from one another, the same error variances are not compared at these sampling instants; simulation provides a mechanism to evaluate these variances at intermediate times. The technique of dead-time simulation and control used for sampled-data control of processes with time delay (dead-time) is explained in Chapter 4. Thus, by building a dynamic-stochastic model of the process based upon data collected at a single interval, the time series controller performance is predicted at longer sampling intervals, and thereby a reasonable choice of the sampling interval is arrived at, which is used for sampling, adjustment and process control.
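The double-correction problem discussed in this section can be demonstrated with a small simulation: a controller that cancels the full observed error every period keeps re-correcting errors whose compensation is still 'in the pipe' during the dead-time, while an adjustment rule that accounts for pending corrections settles at once. Everything here (the pure-delay process, the unit step disturbance, b = 2) is an illustrative assumption, not the book's algorithm.

```python
# Compare a naive full-correction controller with a dead-time-aware one on
# a pure-delay process  y_t = u_{t-b} + d  with a step disturbance d = 1.
# The numbers (b = 2, horizon = 12) are illustrative assumptions.

def simulate(b=2, horizon=12, compensate=False):
    u = {-k: 0.0 for k in range(1, b + 2)}   # past input settings (all zero)
    errors = []
    for t in range(horizon):
        y = u[t - b] + 1.0                   # delayed input effect + step disturbance
        e = y                                 # target is 0, so error = output
        errors.append(e)
        if compensate:
            # subtract corrections already 'in the pipe' (made but not yet seen)
            delta = -(e + u[t - 1] - u[t - b])
        else:
            delta = -e                        # naive: cancel the full observed error
        u[t] = u[t - 1] + delta
    return errors

naive = simulate(compensate=False)   # keeps re-correcting and oscillates
aware = simulate(compensate=True)    # settles once the dead-time has elapsed
```

The naive errors cycle indefinitely (1, 1, 0, -1, -1, 0, ...), whereas the dead-time-aware rule leaves zero error from period b onwards, which is the behaviour the text attributes to the optimal time series controller and to dead-time compensators.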

3.5 DEVELOPMENT OF TIME SERIES MODELS
3.5.1 Feedback Control Difference Equation
The 'stochastic difference equation' for the feedback control model is derived with the help of the block diagram shown in Figure 3.1.

[Block diagram: disturbance Zt and process input Xt+ act on the process; controlled output Yt; error et = Zt + Yt; adjustment xt+ = f(et, et–1, ...)]

Figure 3.1 Block Diagram for the Feedback Control Model

In the feedback control scheme shown in Figure 3.1, the process is regulated by manipulating the input variable Xt, which in turn affects the controlled output Yt. Xt+ is the setting of the input variable (the plus sign on the subscript implies that the adjustment is made at time t, during the interval between t and t + 1). A definite deterministic relationship, which does not exhibit stochastic characteristics, exists between the process input Xt and its output Yt. Zt, the non-stationary disturbance, is the output of the (linear) system when subjected to a sequence of uncorrelated random shocks {at}, where at ~ N(0, σa²).


3.5.2 Symbols used in the Feedback Control Model Block Diagram of Figure 3.1
at: random shocks, N(0, σa²)
Zt: disturbance¹
et: forecast error
Xt+: input manipulative variable (a linear function of et and of the integral over time of past errors)
Yt: output or controlled variable
et = Zt + Yt.

Box and Jenkins [1970, 1976] described some dynamic models of order (r, s) by

δr(B)Yt = ωs(B)B^b Xt, (Table 10.1, page 330, Box and Jenkins [1970, 1976]),

'b' being the number of whole periods of dead-time, where δr(B) and ωs(B) are polynomials in B; B is the backward shift operator, so that BXt = Xt–1 and B^b Xt = Xt–b.

A first-order discrete ('sampled-data') single input single output (SISO) dynamic system is 'parsimoniously' represented ('parsimonious' means that it has the fewest parameters and the greatest number of degrees of freedom among all models that fit the data) by the general (linear) difference equation

(1 + ξ∇)Yt = gXt–b, 0 ≤ δ < 1 ...(3.1)

where ξ = δ/(1 – δ) and ∇ = 1 – B is the backward difference operator. The terms g(ain) and δ are explained subsequently. This discrete dynamic model is of order (1, 0) and has the form

(1 – δB)Yt = ω0 B^b Xt, 0 ≤ δ < 1.

With s = 0, the impulse response tails off exponentially (geometrically) from the initial starting value ω0 = g(1 – δ) = g/(1 + ξ), (page 332, Box and Jenkins [1970, 1976]), where g, the (steady-state) gain, denotes the ratio of the change in the steady-state process output to the change in the input which caused it (Deshpande and Ash [1981], Shinskey [1988]).

¹ 'z' denotes the stochastic variable and 'Z' represents the stochastic disturbance. The same logic holds good for at, which denotes the variable, while {at} represents the sequence of random variables.


δ represents the inertial capacity or dynamics of a process, that is, its tendency to recover to its equilibrium conditions after an adjustment is made, where the adjustments do not have an immediate effect on the process. It is connected to the sampling interval and the time constant by means of the relation δ = e^(–T/τ), where T is the sampling interval of the discrete process and τ the process time constant. The time constant is the time required for the process output to complete 63.2% (1 – 1/e = 1 – 1/2.718 ≈ 0.632) of its final steady-state value after a (step) change is made in the input. The time constant is also the ratio of the change in the output controlled variable to the product of the process (static) gain and the input step change (Shinskey [1988]). So the recursive feedback control (linear) difference equation for the discrete dynamic model with b units of delay (dead-time) can be written in the form

(1 – δB)Yt = ω0 Xt–b = g(1 – δ)Xt–b = g(1 – δ)B^b Xt, 0 ≤ δ < 1.

[A portion of the original text, covering the second-order dynamic model and Equations (3.2) to (3.5), is not recoverable here.] For the second-order dynamic model, the roots of the characteristic equation determine whether the system is (i) over-damped, when the roots are real and distinct, that is when δ1² + 4δ2 > 0, (ii) under-damped, when the roots are complex, that is when δ1² + 4δ2 < 0, and (iii) critically damped, when the roots are real and equal, that is when δ1² + 4δ2 = 0. Stability is achieved when the point (δ1, δ2) lies in a triangular region defined by the conditions,

δ2 – δ1 < 1, δ1 + δ2 < 1 and –1 < δ2 < 1. ...(3.6)
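The first-order relation between the dynamic parameter, the sampling interval T and the time constant τ (δ = e^(–T/τ)), and the 63.2% time-constant property, can be checked numerically. A sketch with assumed values (T = 1, τ = 5, unit step, unit gain):

```python
import math

# First-order discrete model  y_t = delta*y_{t-1} + g*(1 - delta)*x_t
# with delta = exp(-T/tau).  After tau/T sampling periods of a unit step,
# the output should reach 1 - 1/e (about 63.2%) of its final value g.
# T, tau and g below are assumed for illustration.

T, tau, g = 1.0, 5.0, 1.0
delta = math.exp(-T / tau)

y = 0.0
for _ in range(int(tau / T)):                  # simulate tau/T = 5 periods
    y = delta * y + g * (1.0 - delta) * 1.0    # unit step input, no dead-time
# After n steps, y = g * (1 - delta**n); at n*T = tau, y = g * (1 - 1/e).
```

This is just the discrete restatement of the time-constant definition in the text: one time constant after the step, the response has covered 63.2% of its final change.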

The triangular region is shown in Figure 3.4.

[Figure: the stability triangle in the (δ1, δ2)-plane, bounded by the lines δ2 – δ1 = 1, δ1 + δ2 = 1 and δ2 = –1]

Figure 3.4 Triangular Region Defined by the Inequality Conditions for Achieving Stability
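The triangular region can be cross-checked against the root condition for the characteristic polynomial: the second-order model is stable exactly when both roots lie inside the unit circle. An illustrative sketch (the helper names and the parameter grid are assumptions):

```python
import cmath

# Cross-check the triangular stability region for (d1, d2) against the
# roots of the characteristic polynomial  z^2 - d1*z - d2 = 0: the model
# is stable when both roots lie strictly inside the unit circle.

def stable_by_triangle(d1, d2):
    return (d1 + d2 < 1) and (d2 - d1 < 1) and (-1 < d2 < 1)

def stable_by_roots(d1, d2):
    disc = cmath.sqrt(d1 * d1 + 4.0 * d2)   # complex sqrt handles both cases
    r1 = (d1 + disc) / 2.0
    r2 = (d1 - disc) / 2.0
    return abs(r1) < 1 and abs(r2) < 1

# The two criteria agree on a grid of parameter values avoiding the boundary.
grid = [(-1.9 + 0.38 * i, -1.4 + 0.28 * j) for i in range(11) for j in range(11)]
agree = all(stable_by_triangle(a, b) == stable_by_roots(a, b) for a, b in grid)
```

The agreement illustrates that the three inequalities are not an extra assumption but a restatement of the unit-circle root condition for the second-order difference equation.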

For dead-time b = 1, Equation (3.5) becomes

Yt = δ1Yt–1 + δ2Yt–2 + ωB^(1+1) Xt,

that is,

Yt = δ1Yt–1 + δ2Yt–2 + ωXt–2.

For dead-time b = 2, Equation (3.5) becomes

Yt = δ1Yt–1 + δ2Yt–2 + ωB^(2+1) Xt,

which gives rise to Yt = δ1Yt–1 + δ2Yt–2 + ωXt–3. These equations are directly built into and used in the subsequent simulations. The following points in regard to the feedback control Equation (3.5) should be noted:
1. A (linear) difference equation is employed to represent the discrete (sampled-data) second-order dynamic system. This is similar to representing continuous dynamic systems by linear differential equations. The (linear) model implies that the responses to a set of impulses of an input series can be added to provide the output, and suggests an approximation for practical situations. In dealing with discrete processes, linear difference equations representing processes in which the sampling intervals are short take care of the dynamic or inertial properties of the process. A second-order process is represented by the second-order difference equation when sampled at discrete intervals, or by the second-order transfer function or 'filter' (the term used in engineering terminology by control system engineers), (c.f. MacGregor [1983]).


2. The initial parameter errors are assumed to be small in order to reduce the sensitivity of the difference equation describing the discrete dynamic control system to computer round-off errors when conducting simulation experiments.
3. The second-order model is capable of representing some dynamic systems with dead-time for some reasonable value ranges of Xt and Yt. Moreover, Equation (3.5) reduces to Baxley's [1991] first-order dynamic model, namely,

Yt = δYt–1 + ωB^(b+1) Xt,

which describes the first-order system with dead-time (delay), when δ2 = 0 and δ1 = δ. The steady-state gain g of such a second-order discrete dynamic model is given by

g = (ω0 – ω1)/(1 – δ1 – δ2). (Equation 10.2.5, page 346, Box and Jenkins [1970, 1976])
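The steady-state gain formula can be verified by simulating the step response of the second-order difference equation: the output settles at g = (ω0 – ω1)/(1 – δ1 – δ2). The parameter values below are assumptions chosen to satisfy the stability conditions, not values from the text.

```python
# Step response of the second-order discrete model
#   Y_t = d1*Y_{t-1} + d2*Y_{t-2} + w0*X_{t-b} - w1*X_{t-b-1}
# settles at the steady-state gain  g = (w0 - w1)/(1 - d1 - d2).
# All parameter values are illustrative assumptions (d1, d2 satisfy
# d1 + d2 < 1, d2 - d1 < 1, -1 < d2 < 1, so the model is stable).

d1, d2 = 1.2, -0.4
w0, w1 = 0.3, 0.1
b = 2                                   # whole periods of dead-time

g = (w0 - w1) / (1.0 - d1 - d2)         # predicted steady-state gain

x = lambda t: 1.0 if t >= 0 else 0.0    # unit step input applied at t = 0
y_prev1 = y_prev2 = 0.0
for t in range(200):
    y = d1 * y_prev1 + d2 * y_prev2 + w0 * x(t - b) - w1 * x(t - b - 1)
    y_prev2, y_prev1 = y_prev1, y
# y_prev1 now holds the settled output, which matches g.
```

Nothing happens for the first b periods (the dead-time), after which the response converges to the gain predicted by the formula.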

Further analysis of the (closed-loop) feedback control system stability of Equation (3.7) is performed in the following manner:

(1 – δ1B – δ2B²)Yt = (ω0 – ω1B)B^b Xt ...(3.7)

that is, (1 – S1B)(1 – S2B)Yt = (ω0 – ω1B)B^b Xt, where

ω0 = [PG/(τ1 – τ2)]{τ1(1 – S1) – τ2(1 – S2)},
ω1 = [PG/(τ1 – τ2)]{(S1 + S2)(τ1 – τ2) + τ2S2(1 + S1) – τ1S1(1 + S2)}, (Palmor and Shinnar [1979]),
S1 = e^(–1/τ1), S2 = e^(–1/τ2),
δ1 = S1 + S2 = e^(–1/τ1) + e^(–1/τ2),
δ2 = –S1S2 = –e^(–(1/τ1 + 1/τ2)),

and PG represents the process gain, realised as the total effect in the output caused by a unit change in the input variable after the completion of the dynamic response (Baxley [1991]). Now,

ω = ω0 – ω1 = [PG/(τ1 – τ2)]{τ1(1 – S1) – τ2(1 – S2)} – [PG/(τ1 – τ2)]{(S1 + S2)(τ1 – τ2) + τ2S2(1 + S1) – τ1S1(1 + S2)},

which on simplification gives, for a critically damped system,

ω = PG[1 – S1 – S2 + S1S2]


= PG[1 – (S1 + S2) + S1S2] = PG[1 – (e^(–1/τ1) + e^(–1/τ2)) + e^(–(1/τ1 + 1/τ2))] = PG[1 – δ1 – δ2]. Therefore, the steady-state or system gain is g = (ω0 – ω1)/(1 – δ1 – δ2) = ω/(1 – δ1 – δ2) = PG[1 – δ1 – δ2]/[1 – δ1 – δ2] = PG, the process gain. Baxley [1991] used PG = 1/(1 – δ) and made PG = 1.0 by setting δ = 0, meaning that there are no carry-over effects (inertia) into the next observation, and appears to have tackled the problem of feedback control stability in a convincing manner in his simulation studies of drifting processes. Kramer [1990] derived expressions for the disturbance and the output effect of control actions as functions of random shocks, independent of the control scheme. Moreover, Kramer [1990] considered approaches for reducing adjustment variability. Since the interest here is in reducing the product variability (control error standard deviation, CESTDDVN) at the output, it is worthwhile to consider the critically damped behaviour of the second-order dynamic system, for which the time constants are real and equal, thus ensuring closed-loop stability. Furthermore, the steady-state gain of such a critically damped second-order system is shown above to be PG, the process gain itself. The additional term in the parameter δ, (δ2), of the second-order dynamic model makes it possible to account for more of the process dynamics for both small and large values of δ, and to represent the dynamic nature of the process more adequately. It is easier to control a dead-time process having an additional dynamic lag than a (pure) delay process (Chandra Prasad and Krishnaswamy [1975]). The additional term Yt–2 defines the input-output relationship better than the first-order dynamic model. For stability of the second-order dynamic model, the parameters δ1 and δ2 must satisfy the following inequality conditions:

δ1 + δ2 < 1, δ2 – δ1 < 1, –1 < δ2 < 1.

The 'characteristic equation' for the second-order dynamic system is

(1 – δ1B – δ2B²) = 0. ...(3.8)

When the roots of this equation are real, that is, when δ1² + 4δ2 ≥ 0, the solution will be the sum of two exponentials. Description and details on obtaining the roots of a 'characteristic equation' can be found in a standard mathematics textbook. The roots of the characteristic Equation (3.8) determine the stability of the second-order dynamic feedback control system. When these roots are real and positive, the 'step response' (see a textbook on Control Engineering for the meanings of 'step response', 'impulse response' and other terms), which is the sum of two exponential terms, approaches its asymptote g, the steady-state gain, without crossing it. When the roots are complex, as can be seen from Figure 3.5


(reproduced from Figure 10.5, page 344, Box and Jenkins [1970]), the step response of the output variable overshoots the value g above the target value, which is a problem in APC caused by the complex roots. From Figure 3.5, it can also be seen that the feedback control system has no overshoot when the characteristic equation has real positive roots. This explains the interest in the critically damped second-order discrete dynamic model, which ensures closed-loop (feedback control) stability. The focus on the critically damped second-order dynamic model is one of many justifications for restricting attention to such a special case. It is shown in Venkatesan (1997), Chapter 9, how the advantages of having an integral term in the stochastic feedback control algorithm (Equation 3.16) help determine the damping of the feedback control loop and guard it and the controller from the occurrence of over-damped and under-damped oscillations leading to an unstable feedback loop.


Figure 3.5 Step Responses of Coincident, Discrete and Continuous Second-order Systems having Characteristic Equations with Real Roots (Curve R) and Complex Roots (Curve C)

It is known that for the critically damped second-order dynamic system, t1 = t2 = t, so that

d1 = 2e^(–1/t) and d2 = –e^(–2/t).

As per Figures 3.2 and 3.3, for the feedback control system to approach its asymptote g, the steady-state gain, without overshoot when the characteristic equation has real positive roots, the values of d1 and d2 should also satisfy

–2 < d1 < 2, –1 < d2 < 1.

These conditions are built into subsequent simulations.
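The relations above can be checked numerically. The sketch below is my own illustrative code, not from the text (the function name and the sample PG values are assumptions): it simulates the step response of the second-order discrete model Yt = d1·Yt–1 + d2·Yt–2 + w·Xt with a unit step input and w = PG(1 – d1 – d2). For real equal time constants (t1 = t2) the response climbs to the asymptote g = PG without crossing it.

```python
import math

def second_order_step_response(t1, t2, PG=1.0, n=500):
    """Step response of Y_t = d1*Y_{t-1} + d2*Y_{t-2} + w*X_t to a unit step,
    with d1 = e^(-1/t1) + e^(-1/t2), d2 = -e^(-(1/t1 + 1/t2)), w = PG*(1 - d1 - d2)."""
    d1 = math.exp(-1.0 / t1) + math.exp(-1.0 / t2)
    d2 = -math.exp(-(1.0 / t1 + 1.0 / t2))
    # Stability conditions of the text: d1 + d2 < 1, d2 - d1 < 1, -1 < d2 < 1
    assert d1 + d2 < 1 and d2 - d1 < 1 and -1 < d2 < 1
    w = PG * (1.0 - d1 - d2)
    y = [0.0, 0.0]                       # zero initial conditions
    for _ in range(n):
        y.append(d1 * y[-1] + d2 * y[-2] + w)   # X_t = 1 for all t
    return y

# Critically damped case (t1 = t2): monotone approach to g = PG, no overshoot.
resp = second_order_step_response(5.0, 5.0, PG=2.0)
```

The final value of `resp` approximates the steady-state gain PG, illustrating that the gain of the critically damped second-order system is the process gain itself.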


3.7 EXPRESSION FOR THE CONTROL ADJUSTMENT IN THE INPUT VARIABLE OF A TIME SERIES CONTROLLER

An expression is derived for the feedback control adjustment required in the input manipulated variable of a time series controller for a dynamic process with dead-time (delay). This expression is different from Equation (3.6), which explains the feedback control model. Figure 3.6 shows the feedback control scheme to compensate a disturbance Zt by means of a time series controller. Baxley [1991] considered the dead-time equal to one period when deriving the feedback control equation for a process with drifts (with one dynamic parameter and a single time constant only). In this Section, the feedback control (adjustment) algorithm is derived considering b periods of dead-time. It conforms to the minimum variance (mean square) control equation derived by Kramer [1990] for a system in which adjustments to the input variable are made after the process is observed, so that their effects are first seen at the next observation, for which the dead-time or process time delay b = 0.

Figure 3.6 Feedback Control Scheme to Compensate a Disturbance Zt by a Time Series Controller in the Presence of Dynamics and Dead-time (time series controller adjustment xt applied to the process input Xt+; controlled output error et = Zt + Yt about the set point T)

Using the same symbols and notation of Section 3.3 for the second-order dynamic model, Equation (3.5) (repeated from page 3.13 for convenient reference) is

(1 – d1B – d2B^2)Yt = wB^(b+1) Xt. ...(3.5)

Changes are made in the input X at times t, t – 1, t – 2, ..., immediately after observing the disturbances zt, zt–1, zt–2, .... Because of this, a pulsed input results, and the level of X in the interval t to t + 1 is denoted by Xt+. For this pulsed input, assume that the dynamic model which connects the input manipulated variable Xt+ and the controlled output Yt is

Yt = L1^(–1)(B)L2(B)B^(b+1)Xt+, ...(3.9)

where L1(B) is a polynomial in B of degree r, L2(B) is a polynomial in B of degree s, and b is the number of complete intervals of delay before an adjustment in the input Xt+ begins to affect the output Yt. The non-stationary disturbance is represented by the ARIMA (0,1,1) model

∇Zt = (1 – QB)at,

where ∇ = 1 – B is the backward difference operator and B the backward shift operator; Zt is the disturbance and at the random shock.


That is,

zt = zt–1 + at – Qat–1. ...(3.10)

Zt measures the effect at the output of an unobserved disturbance, that is, an uncompensated non-stationary disturbance that reaches the output before it is possible for the compensating control action to become effective. This causes the process to wander off target and is defined as the deviation from the target that would occur if no control action were taken. The effect of the disturbance would be cancelled if it were possible to set

xt+ = –L1(B)L2^(–1)(B)Zt+b+1, b > –1.

This control action is not realisable since (b + 1) is positive; but the minimum mean square error of the deviation of the output from its target value can be obtained by replacing Zt+b+1 by its forecast estimate ẑt(b + 1) made at time t, that is, by taking the minimum variance control action

xt+ = –L1(B)L2^(–1)(B) ẑt(b + 1). ...(3.11)

The change or adjustment to be made in the input manipulated variable is then

xt = –L1(B)L2^(–1)(B) {ẑt(b + 1) – ẑt–1(b + 1)}. ...(3.12)

The error at the output, or deviation from the target, at time (t + b + 1) is the forecast error et(b + 1) at lead time b + 1 for the Zt disturbance. That is,

et(b + 1) = Zt+b+1 – ẑt(b + 1),

made (b + 1) steps ahead at time t. The error observed at time t is

et = et–b–1(b + 1) = Zt – ẑt–b–1(b + 1).

The difference ẑt(b + 1) – ẑt–1(b + 1) can be deduced from the observed error sequence et, et–1, et–2, .... Here et(b + 1) and ẑt(b + 1) are linear functions of the {at}. So

Zt+b+1 = L4(B)at+b+1 + L3(B)at,

where L3(B) and L4(B) are operators in B which can be deduced from the relations

et–b–1(b + 1) = L4(B)at and ẑt(b + 1) = L3(B)at.

From these,

ẑt(b + 1) = {L3(B)/L4(B)}et–b–1(b + 1) = {L3(B)/L4(B)}et

and

ẑt(b + 1) = {(1 – Q)/(1 – B)}at = L3(B)at.

Similarly, L4(B) is found by expressing the forecast errors as a linear function of future shocks (Box and Jenkins [page 128, 1970, 1976]). Then

L1(B) = (1 – d1B – d2B^2),
L2(B) = PG(1 – d1 – d2),
L3(B) = (1 – Q)/(1 – B) and
L4(B) = 1 + (1 – Q)B.

So, for a time series controller, when the disturbance is described by the ARIMA (0,1,1) model and there are definite carry-over effects, the setting (Xt+) of the input manipulated variable required to make the control and forecast error variances equal is given by

xt+ = –{L1(B)L3(B)/(L2(B)L4(B))}et. (Box and Jenkins [1970, 1976])

The control action, in terms of the adjustment xt = Xt+ – Xt–1+ to be made at time t, is

xt = –{L1(B)L3(B)(1 – B)/(L2(B)L4(B))}et. (Equation 12.2.8, page 435, Box and Jenkins [1970, 1976])

This 'feedback control equation defines the adjustment to be made to the process at time t which would produce the feedback control action yielding the smallest possible mean square error since it exactly compensates the predicted deviation from target' (page 213, Box and Jenkins [1968]). Substituting the expressions for L1(B), L2(B), L3(B) and L4(B) into the above equation gives:

xt = –[(1 – d1B – d2B^2)(1 – Q)] / [PG(1 – d1 – d2)(1 + (1 – Q)B)] · et, ...(3.13)

where Q is the moving average (operator) parameter. The control (forecast) errors, which turn out to be the one-step-ahead forecast errors, are measured in practice. It is known that the (forecast) error et at the output at time t is the forecast error at lead time b + 1 for the Zt disturbance. So

et = et–b–1(b + 1) = ψ0·at + ψ1·at–1. ...(3.14)

For the ARIMA (0, 1, 1) model, the weights are ψ0 = 1 and ψ1 = 1 – Q, so

et = at + (1 – Q)at–1 = (1 + (1 – Q)B)at

and further, from Equation (3.13),

xt = –[(1 – d1B – d2B^2)(1 – Q)(1 + (1 – Q)B)] / [PG(1 – d1 – d2)(1 + (1 – Q)B)] · at. ...(3.15)

Since (1 – Q) × 100 per cent of the control error will affect the future process behaviour as per the disturbance model, for a dead-time b,

et = at + (1 – Q)at–b = [1 + (1 – Q)B^b]at, and so

at = et/[1 + (1 – Q)B^b].

Therefore, the control adjustment equation for b periods of dead-time is

xt = –[(1 – d1B – d2B^2)(1 – Q)] / [PG(1 – d1 – d2)(1 + (1 – Q)B^b)] · et.

That is,

xt + (1 – Q)xt–b = –[(1 – d1B – d2B^2)(1 – Q)] / [PG(1 – d1 – d2)] · et,

giving

xt = –[(et – d1et–1 – d2et–2)(1 – Q)] / [PG(1 – d1 – d2)] – (1 – Q)xt–b. ...(3.16)

Equation (3.16) is derived based on the method and steps used in deriving Equation 12.2.8, page 435, Box and Jenkins [1970, 1976].

Note that this stochastic feedback control algorithm (Equation 3.16) will be referred to throughout this monograph for further analysis and discussion, and for explaining the integral and dead-time compensation terms in the algorithm. It can therefore be concluded that the control adjustment action given by Equation (3.16) minimises the variance of the output controlled variable, as this 'feedback control equation defines the adjustment to be made to the process at time t which would produce the feedback control action yielding the smallest possible mean square error since it exactly compensates the predicted deviation from target' (page 213, Box and Jenkins [1968]; Box and Jenkins [1970, 1976]; Box and Reinsel [1994]). Equation (3.16) is in conformance with the feedback control action adjustment equation of Kramer [1990] when the output variance is made equal to the variance (sa^2) of the random shocks, the at's, for achieving minimum variance or mean square control when b = 0; the control adjustment action is made up of the current deviation (et) and the past adjustment action xt–b (Kramer [1990]). It is similar also to the feedback control action adjustment equation for one period of dead-time for processes with drifts, derived by Baxley [1991], on taking the value 1 for the dead-time b and when there are no carry-over effects, for a 'standard' time series controller. On comparison with the equation of Baxley [1991], it is found that the first term in Equation (3.16) gives the integral action and the second term the dead-time compensator developed by Smith [1959] (Baxley [1991]). Some simulation results of Equation (3.16) for the control error standard deviation (CESTDDVN), obtained when d1 = d2 = 0, PG = g = 1 and b = 1, are shown in Table 3.1. These results match closely Baxley's [1991] values for the time series controller. The feedback control adjustment action given by Equation (3.16) is also in conformance with the control action cited in Baxley [1991] when the output variance is made equal to the variance (sa^2) of the input random shocks for achieving minimum variance or mean square error control when the dead-time b = 0; the control action comprises the current deviation (et) and the past adjustment.

Table 3.1 Simulation Results of Equation (3.16)

d2 = d1 = d = 0 (no carry-over effects); Dead-Time b = 1.0; Controller Gain CG = 1.0; Process Gain PG = 1/(1 – d) = 1.0

Q (theta)    CESTDDVN    Control error sigma (SE) (Baxley)
0.25         1.260       1.250
0.50         1.112       1.118
0.75         1.010       1.031

The control adjustment action given by Equation (3.16) minimises the variance of the output controlled variable. The feedback control algorithm, Equation (3.16), has both integral action and dead-time compensation terms (Baxley [1991]). Integral control is used in continuous process industries for the control of drifting flow processes. This stochastic feedback control algorithm provides the discrete analogue of integral control and also has a stabilising effect on a feedback control system through adequate dead-time compensation. Process dynamics are accounted for by representing the process with two dynamic parameters in the second-order dynamic model used to derive the stochastic feedback control algorithm (Venkatesan [1997]).
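The entries of Table 3.1 can be reproduced with a short simulation of Equation (3.16). The sketch below is my own code, not the author's simulator: it assumes unit sa and zero initial conditions, and with d1 = d2 = 0, PG = 1 and b = 1, Equation (3.16) reduces to xt = –(1 – Q)et – (1 – Q)xt–1.

```python
import numpy as np

def cestddvn(Q, n=20000, seed=1):
    """Simulate Equation (3.16) with d1 = d2 = 0, PG = 1, b = 1 against an
    ARIMA (0,1,1) disturbance z_t = z_{t-1} + a_t - Q*a_{t-1}, sigma_a = 1."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(n)
    z = 0.0
    X = np.zeros(n)          # level of the input manipulated variable, X_{t+}
    x_prev = 0.0             # previous adjustment x_{t-1} (= x_{t-b} for b = 1)
    e = np.empty(n)
    for t in range(n):
        z += a[t] - Q * (a[t - 1] if t > 0 else 0.0)
        y = X[t - 2] if t >= 2 else 0.0          # b + 1 = 2 whole periods of delay
        e[t] = z + y                             # control error at the output
        x_t = -(1 - Q) * e[t] - (1 - Q) * x_prev   # Equation (3.16)
        X[t] = (X[t - 1] if t > 0 else 0.0) + x_t
        x_prev = x_t
    return e.std()           # CESTDDVN, since sigma_a = 1

table = {Q: cestddvn(Q) for Q in (0.25, 0.50, 0.75)}
```

By Equation (3.20), the theoretical CESTDDVN for b = 1 is sqrt(1 + (1 – Q)^2), i.e. 1.250, 1.118 and 1.031 for Q = 0.25, 0.50 and 0.75, in close agreement with the simulated values and with Baxley's column of Table 3.1.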

3.8 TIME SERIES CONTROLLERS - FORECAST ERROR VARIANCE FEATURE

The time series controller has the characteristic that its control error variance is the (b + 1) step-ahead forecast error variance. This is explained as follows. The ARIMA (0,1,1) disturbance model (Box and Jenkins [1970, 1976]) is represented by

F(B)Zt = Q(B)at,

where F(B) is the stationary autoregressive operator and Q(B) is the moving average operator. The ℓ step-ahead forecast error for this model can be shown to be

et(ℓ) = Zt+ℓ – ẑt(ℓ), ...(3.17)

where t represents the time at which the forecast is being made and ℓ the lead time (the time forecast in terms of the sample periods); that is, the forecast is made at origin t for lead time ℓ. In this Equation (3.17), Zt is the effect of the disturbance at the origin t and ẑt is an estimate of the expected


value of the disturbance (Z) for any future time, conditional upon the realisation of Z up to time t. The forecast errors help determine the appropriate adjustment in the input manipulated variable for returning the process to target by making the forecast and control errors equal. The derivation of the expression for the control adjustment in the input variable was shown in Section 3.7. Although the forecasts are the same for all future sample points (values of ℓ), the forecast error variance increases with ℓ. This can be seen by expressing the forecast errors as a linear combination of future shocks:

et(ℓ) = ψ0·at+ℓ + ψ1·at+ℓ–1 + ... + ψℓ–1·at+1. ...(3.18)

The forecast error variance is

Var[et(ℓ)] = [1 + (ℓ – 1)(1 – Q)^2]sa^2, ...(3.19)

where ℓ is the lead time (defined above) in terms of the number of sample intervals into the future and sa^2 is the variance of the random shocks, the random component of the disturbance. So, the control error variance for the ARIMA (0,1,1) time series disturbance model is

Var[et(b + 1)] = [1 + b(1 – Q)^2]sa^2. ...(3.20)

This is the (b + 1) step-ahead forecast error variance. In this, the effect of dead-time (b) is to increase the control error variance by an amount which depends on Q, the moving average operator, also called the smoothing or time series constant. For slow drifts (values of Q closer to 1), the dead-time causes only a small increase in the control error variance, while for fast drifts (values of Q closer to 0), the increase is large (Baxley [1991]). For processes with fast drifts (explained in Venkatesan [1997], Section 6.3.2), since the effect of dead-time is more pronounced, it is important to reduce dead-time; this is achieved by employing a dead-time compensator as shown in Section 4.3 (Venkatesan [1997]). This particular characteristic regarding control error variance enables a time series controller to provide an important baseline that can be used for studying the particular class of controllers which require occasional adjustments. As mentioned earlier, the controller's objective is to minimise the mean squared deviation from target of the quality characteristic. This is accomplished by positioning the process in order to exactly compensate for the forecasted deviation from target at the time when the current adjustment will take effect, b + 1 periods into the future. If this is done, then the deviation from target or control error et is just the error from a forecast originating b + 1 periods in the past. As the level of the input manipulated variable at time t, Xt+, is placed to compensate for the forecast, the adjustment or change xt (that is, Xt+ – Xt–1+) in the input manipulated variable is calculated to compensate for the change in the forecast from the previous sample period. This feature of a time series controller makes the control error variance of a process with b periods of dead-time (time-delay) known: it is the (b + 1) step-ahead forecast error variance.
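Equation (3.20) can be evaluated directly to quantify the slow-drift versus fast-drift effect. A minimal sketch (the function name is mine; sa = 1 is assumed):

```python
def control_error_variance(b, Q, sigma_a=1.0):
    """(b + 1) step-ahead forecast error variance, Equation (3.20)."""
    return (1 + b * (1 - Q) ** 2) * sigma_a ** 2

# Slow drift (Q near 1): dead-time adds little; fast drift (Q near 0): a lot.
slow = control_error_variance(b=2, Q=0.9)   # 1 + 2*(0.1)^2 = 1.02
fast = control_error_variance(b=2, Q=0.1)   # 1 + 2*(0.9)^2 = 2.62
```

Two periods of dead-time inflate the control error variance by only 2% for the slow drift but by 162% for the fast drift, which is why dead-time compensation matters most for fast-drifting processes.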


3.9 ASSUMPTIONS IN THE FORMULATION (DESIGN) OF TIME SERIES CONTROLLER PARAMETERS

Before proceeding with further discussions, the following assumptions are made in order to simplify the formulation (design) of the tuning parameter combinations for the time series controller.
1. There are only b full time periods of dead-time (delay) and no fractional periods of delay in the system. Any fractional periods of dead-time will be rounded off to the nearest integer.
2. There is no effect of additional noise on the input manipulated variable.
3. There are no large observational errors in the measurement of the manipulated variable, and these uncorrelated errors, even if present, are assumed to be negligible compared with the errors in forecasting or prediction. The measurement errors are independent of the (conditional maximal) setting of the controller.
4. Continuous plant process production records are available, so that it is possible to obtain an approximate knowledge of the process and system behaviour under different operating conditions.
5. The present study is not for new or start-up processes or initial pilot-run production schemes.
6. There is no model error in the assumed process model.

3.10 TIME SERIES CONTROLLER PERFORMANCE MEASURES

3.10.1 Time Series Controller Tuning Parameter Combinations

The performance measures for a time series controller are:
1. CESTDDVN, the control error standard deviation sE (expressed as a multiple of sa, the standard deviation of the random noise component of the process disturbance {at}): CESTDDVN or sE = se/sa, where se is the forecast error standard deviation and sa the standard deviation of the random shocks.
2. The average number of sample periods between adjustments, denoted by 'AI', the adjustment interval.

An optimal controller is one which, for any adjustment interval (AI), value of r (= 1 – Q), a measure of the rate of drift of the process, and transfer function, gives the lowest process variability (CESTDDVN or sE) (Baxley [1991]). The controller gain (CG) is set equal to 1.0 so that the size of the adjustment exactly compensates for the forecast deviation from target (Baxley [1991]).


The parameters which determine the process simulation are (i) b, the full number of periods of dead-time, (ii) w = PG(1 – d1 – d2), a measure of the amount of process response carrying over into additional sample periods, and (iii) r, a measure of the rate of drift of the process, called the IMA parameter (page 248, Baxley [1991]). Though Baxley [1991] did not show the explicit relationship between r and Q, it is shown in Venkatesan [1997] that the values of the parameter r are similar to the EWMA weights used for statistical process control. It is proposed to determine these tuning parameter combinations, CESTDDVN (sE) and AI, so as to eliminate over-control, which is characterised by more variable control errors (Baxley [1991]).

3.10.2 Need for Simulation Study of Statistical Process Control Algorithms for Drifting Processes

In practice, it has been found that for many feedback control loops the performance improves significantly with a decrease in the sampling interval time. However, when there are delays, sampling at time periods much shorter than the dead-time may result in little improvement in the performance of a time series controller. The sampling time should be such that there will not be too many adjustments in the process during the sampling time period. The sampling interval is related to the type of process loop to be controlled and to the process parameters, namely the time constant and the dead-time. Some feedback control loops respond faster to adjustments than others, and for these the samples are required to be taken at regular and faster intervals. One method of finding the effects of the sampling interval on controller performance is to compare the minimum output error variances at the sampling instants for different intervals; this method, however, does not compare the error variances at the intermediate times, and simulation is a way to evaluate these variances at the intermediate times. In this context, a review of the simulation study of statistical process control algorithms for processes with drifts (Baxley [1991]) is given, and some of the principles of this simulation methodology are used to predict the performance of the time series controller (Chapter 4) at longer sampling interval time periods.

3.11 SIMULATION STUDY OF STATISTICAL PROCESS CONTROL ALGORITHMS FOR PROCESSES WITH DRIFTS - A REVIEW

The study has been limited to six sets of the three parameters: (i) the dead-time (b),


(ii) the inertia constant (d) and (iii) the moving average parameter Q (which is set to match the IMA disturbance), for reasons (such as cost factors and dead-time) explained in this chapter. Response surface experiments were run for the feedback control algorithms on the six sets in order to determine optimal tuning parameter combinations. The two responses of interest were the control error sigma (SE or sE) and the adjustment frequency (AF), the reciprocal of which is the average adjustment interval (AI). The tuning parameters served as experimental factors. Each experiment consisted of a simulation run of 2000 sample intervals. The experimental design used was a Central Composite Design, with the relative spacing of star and factorial points set to give uniform precision. Empirical models of the form given in the study (Baxley [1991]) were fitted to the data in each of the 12 experiments using least squares regression analysis. Baxley [1991] used an optimisation procedure based on the Nelder-Mead Simplex Search Algorithm (Nash [1979]) to find tuning parameter combinations giving minimum control error sigma subject to an upper constraint on the adjustment frequency and, by varying these constraints, generated a series of optimal parameter combinations covering a range of adjustment intervals from 3 to 20. Then, an additional simulation run of 10,000 sample periods was made for each optimal set of tuning parameters in order to estimate the controller performance more precisely. Baxley [1991] made a stepwise regression analysis of the simulated data. From the analysis of variance (ANOVA) tables for SE (or sE) and AF, Baxley [1991] examined the variability among the simulation runs to detect evidence of lack of fit. Contour plots of AI = 1/AF and SE versus the control parameter L (the number of multiples of sa used for the control limits) and CG (controller gain) were drawn. A scatter plot was also made of SE versus AI for the extended runs at optimal settings (CG = 1.0) and of the 'experimental data' for other values of CG. Baxley [1991] determined the 'form' of the model by observing that, for any adjustment interval, the slope of the relationship between SE and AI varies with Q in the same manner as the fractional increase in SE caused by one period of dead-time. As a check on the model, Baxley [1991] compared the performance model predictions with the work of Box, Jenkins and MacGregor [1974]. Baxley [1991] also discussed in detail the simulation results for both the EWMA and the CUSUM controllers.
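Baxley's constrained tuning search can be illustrated in miniature. The sketch below is hypothetical throughout: the two response-surface functions merely stand in for Baxley's empirical least-squares fits (his actual fitted forms are not reproduced here), and the adjustment-frequency constraint is folded in as a quadratic penalty before a Nelder-Mead search:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical response surfaces standing in for Baxley's empirical fits:
# SE grows as L widens and as CG moves away from 1; AF falls as L widens.
def se_model(L, CG):
    return 1.0 + 0.05 * L ** 2 + 0.40 * (CG - 1.0) ** 2

def af_model(L, CG):
    return np.exp(-L) * CG

def tune(af_max, penalty=1e3):
    """Nelder-Mead search for (L, CG) minimising SE subject to AF <= af_max,
    with the constraint handled by a quadratic penalty term."""
    def objective(p):
        L, CG = p
        violation = max(0.0, af_model(L, CG) - af_max)
        return se_model(L, CG) + penalty * violation ** 2
    res = minimize(objective, x0=[2.0, 1.0], method="Nelder-Mead")
    return res.x

L_opt, CG_opt = tune(af_max=0.2)   # one point on the SE-versus-AI trade-off curve
```

Sweeping `af_max` over a range of values traces out the series of optimal parameter combinations (the SE-versus-AI curve) that Baxley tabulated.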

3.12 CONCLUSION

In this chapter, the characteristics, features and the criterion for a statistical control algorithm for time series controllers were presented. An approximate feedback control equation was derived and the time series controller performance measures discussed in the light of the statistical control algorithm developed for this controller. The justification for considering a higher (second-order) dynamic model under


critically damped conditions was also given. The time series controller performance measures and the tuning parameter combinations were explained. A review of a simulation study of statistical process control algorithms for drifting processes was given. The (dead-time) simulation of the stochastic feedback control algorithm and EWMA process control are explained and the simulation results discussed in Chapter 4.

REFERENCES

Astrom, K.J. (1970). Introduction to Stochastic Control Theory. New York: Academic Press.
Box, G.E.P., Jenkins, G.M., MacGregor, J.F. (1974). Some Recent Advances in Forecasting and Control, Part II. Applied Statistics; 23 (2): 138–139.
Box, G.E.P., Jenkins, G.M. (1963). Further Contributions to Adaptive Control: Simultaneous Estimation of Dynamics: Non-zero Costs. Statistics in Physical Sciences; I: 943-934.
Box, G.E.P., Jenkins, G.M. (1968). Some Recent Advances in Forecasting and Control. Journal of the Royal Statistical Society, Applied Statistics Series; C: 91–109.
Box, G.E.P., Jenkins, G.M. (1970, 1976). Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day.
Box, G.E.P., Kramer, T. (1992). Statistical Process Monitoring and Feedback Adjustment—A Discussion. Technometrics; 34 (3): 231–263.
Box, G.E.P., Jenkins, G.M., Reinsel, G.C. (1994). Time Series Analysis: Forecasting and Control. Prentice Hall.
Baxley, R.V. (1991). A Simulation Study of Statistical Process Control Algorithms for Drifting Processes. In SPC in Manufacturing. New York and Basel: Marcel Dekker, Inc.
Bitmead, B. (1994). Process and Control Engineering; 43 (10): 63–66.
Buckley, P.S. (1960). Automatic Control of Processes with Dead-time. Proceedings of IFAC World Congress, Moscow: 33–40.
Carlos, A.S., Corripio, A.B. (1983). Principles and Practice of Automatic Process Control. New York and Singapore: John Wiley and Sons.
Dahlin, E.B. (1968). Designing and Choosing Digital Controllers. Instruments and Control Systems; 61 (6): 33.
Deshpande, P.B., Ash, R.H. (1981). Elements of Computer Process Control with Advanced Control Applications. North Carolina, USA: Instrument Society of America.
Harris, T.J., MacGregor, J.F., Wright, J.D. (1982). An Overview of Discrete Stochastic Controllers: Generalized PID Algorithms with Dead-Time Compensation. The Canadian Journal of Chemical Engineering; 60: 423–432.
MacGregor, J.F. (1983). Interfaces Between Process Control and On-Line Statistical Process Control. Computing and Systems Technology Division Communications; 10 (2): 9–29.
MacGregor, J.F. (1988). On-Line Statistical Process Control. Chemical Engineering Progress; 84: 21–31.


Palmor, Z.J., Shinnar, R. (1979). Design of Sampled Data Controllers. Industrial and Engineering Chemistry Process Design and Development; 18 (1): 8–30.
Shinskey, F.G. (1988). Process Control Systems. New York: McGraw-Hill Book Company.
Van der Wiel, S.A., Vardeman, S.B. (1992). Discussion: Integrating SPC and APC. Technometrics; 34 (3): 238–281.
Venkatesan, G. (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes). Ph.D. Thesis. Melbourne: Victoria University of Technology.

Suggested Exercises

1. Derive a suitable mathematical model for a process and develop a stochastic feedback control algorithm for the disturbance, using the steps indicated in this chapter or otherwise.
2. Derive an algorithm to minimise cash losses due to variations in fluctuations of stock market share prices, describing the share market volatility by an appropriate ARIMA model.

Chapter 4

Discussion and Analysis of Stochastic (Statistical) Feedback Control Adjustment

4.1 INTRODUCTION

The concept that process control techniques from engineering and statistical process control overlap at the interface of the two process control methodologies was introduced in Chapter 1. SPC and APC techniques overlap in the areas of on-line quality control and of control logic based on knowledge about the nature of the process and the disturbances that affect the process. It was explained also that both statistical process control practitioners and engineering (automatic) process control engineers have focused their attention on statistical process monitoring and feedback control adjustment. In the past, several process control practitioners have endeavoured to define the contexts in which various statistical process control (SPC) and automatic process control (APC) techniques can be applied to control an industrial manufacturing process. Application and analysis of engineering (APC) and statistical process control (SPC) techniques for controlling temperature, pressure, flow, etc., of continuous chemical processes have been the focus of process control practitioners at the overlapping interface of the two process control methodologies.

For the past 40-50 years in particular, process control practitioners have focused their attention on statistical process monitoring and feedback control adjustment. They showed that it is possible to control output product quality through process surveillance and feedback control adjustments by means of appropriate process control algorithms. They showed also that the use of the feedback control algorithm that is derived from the transfer function-noise model of a dead-time dynamic process yields minimum mean square error (variance) control at the output (Venkatesan [1997]). Feedback control adjustment of high-volume and complex production processes brings the process under control so that the product quality mean is on the desired target/set-point, resulting in reductions, even of a fraction of a per cent, in output control error variance, and in financial savings.

The derivation of the stochastic feedback control algorithm for input adjustment, by use of techniques from the two process control methodologies, was presented in Chapter 3. While developing the stochastic feedback control algorithm, it was mentioned that process control practitioners encounter problems related to feedback (closed-loop) stability, controller limitations and dead-time compensation in obtaining minimum variance (mean square error, the 'sum of the squared deviations between an output value and target value') at the output in the control of chemical processes. It has been established in the stochastic process control literature that minimum variance control is the best possible control in the mean square error sense for processes described by linear functions with disturbances. Disturbances in these processes can be added and treated as a single disturbance (noise). A control strategy that minimises the variance of the process output variable is called minimum variance control (Astrom and Wittenmark [1984]). Chapter 3 explains how feedback control stability can be achieved by considering the behaviour of the process control system under critically damped conditions. The minimum variance control algorithm brings the process back to the controller set point and helps to accomplish set point changes. The required feedback control adjustments in the input variable are computed from simulation of the stochastic feedback control algorithm. The simulation results are used to formulate process regulation schemes and minimum variance control schemes. It is also demonstrated that it is possible to control output product quality through process surveillance and input feedback control adjustments by means of the stochastic feedback control algorithm (Equation 3.16) developed in Chapter 3.
It is shown also that the feedback control algorithm derived from the dynamic-stochastic model of a dead-time process yields minimum mean square error (variance) control at the output (Venkatesan [1997]). A minimum variance control algorithm brings the process back to set point with less oscillatory behaviour than is usually experienced under manual control. It also helps in accomplishing set point changes in a smooth and rapid manner (Shinskey [1988]). The process control practitioner faces a challenge in the control of processes that involve time delays (dead-time), which are non-linear in nature. The significant lags introduced by dead-time into the system response frequently make it difficult to obtain information about (i) when to make an adjustment to the process and (ii) by how many units (xt) to change the input variable so that the output quality variable is at or near target. In making feedback control adjustments, because of their automatic nature and because of deficiencies in the feedback control loop such as inexact feedback error and incomplete feedback compensation, it is possible and probable that the feedback control adjustment will be incomplete under certain circumstances. For example,


an input adjustment may not be 100% complete due to operating conditions of the process and unexpected and unforeseen malfunction of the controller equipment. Then, a probability test model for feedback adjustment will provide the necessary information about the probability of the feedback control adjustments that are required to bring the product quality mean on target/controller setpoint. This chapter reviews statistical process monitoring, analysis and discussion of stochastic feedback control adjustment and a method to model feedback control adjustment in order to control the quality of products at the output of a continuous process. A method is proposed in this chapter for conducting a test of significance whether a feedback control adjustment is required in a process that is subject to random shocks at the input. The test is a (0, 1) type for feedback control adjustment in that an adjustment is either not made or made at the input. The basic probability test for a process that requires feedback control adjustment can be best described by a suitable probability model. A probability model developed for feedback control adjustment helps in conducting a test of significance for indicating alarm signals (by continuous flashing of light to alert the operator that the process is out of control), that the product quality mean is not on target and the process requires a feedback control adjustment in order to bring the product quality mean on target. The probability test model provides information about the probability of the feedback control adjustments that are required to bring the product quality mean closer to the desired controller setpoint. The feedback control adjustment probability model can be useful in control of high-volume and mass production complex processes to achieve financial savings by reducing its output variance. 
The method takes into consideration situations in which the product quality mean is on target/setpoint (process under control) and situations in which it is not (process out of control) by assigning suitable probabilities to the respective situations. A test of significance is conducted for the situation in which a feedback control adjustment is required to bring the product quality mean on target, and a suitable probability model is developed for it. The method of feedback control adjustment is discussed first, followed by the underlying theory of statistical process monitoring that is used for the feedback control adjustment in the next section.

4.2 STATISTICAL PROCESS MONITORING AND FEEDBACK CONTROL ADJUSTMENT METHODOLOGY

The drifting process behaviour due to disturbance is simulated by standard normal shocks N(0, 1) obtained from a random number generator whose seed is based on the clock time of the digital computer used for process control. The ARIMA (0, 1, 1) time series model (Astrom and Wittenmark [1984]) discussed in


Integrated Statistical and Automatic Process Control

Chapter 3 is fitted to the variable quality data by superimposing the one-step-ahead forecasts along with the control limits. The forecast that originates at any time t is an exponentially weighted moving average (EWMA) of the previous forecast at time t – 1 and the current process data. It represents the minimum mean square error (mmse) forecast of the Auto Regressive Integrated Moving Average (ARIMA) model characterised by Q (the integrated moving average, IMA, parameter), 0 < Q < 1. The feedback control adjustment values calculated from simulation of the stochastic feedback control algorithm (Venkatesan [1997]) are used to compensate for the change in the forecast from the previous sample period. The feedback control algorithm (Equation 3.16) is applied for a trial period to the process for controlling its output product quality. The changes that take place in the process are observed and Exponentially Weighted Moving Average (EWMA) charts are placed in position to monitor the process. Simulation of the process control algorithm under conditions of feedback (closed-loop) control stability gives the EWMA forecasts of process data, the variance of the control error (varCE), the feedback control adjustment (xt) and the adjustment interval (AI). The EWMA forecasts estimate the process deviation from target and help determine the approximate adjustment for returning the process to target. These forecasts are compared with the EWMA chart control limits. The EWMA process-monitoring system notifies, by means of out-of-control signals, the shifts in the quality variable needed to maintain on-target performance. The input control adjustment is made when a forecast crosses the control limits. This is the condition for initiating feedback control. The variance of the output-controlled variable is minimised by making an adjustment (xt) that exactly compensates for the forecast disturbance by setting Q to match the noise (Venkatesan [1997]).
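The disturbance and forecast just described can be sketched in a few lines. This is an illustrative reconstruction, not the book's simulation programme: it assumes the usual ARIMA (0, 1, 1) form z_t = z_{t-1} + a_t - Q a_{t-1} driven by N(0, 1) shocks, and the function names are hypothetical.

```python
import random

def simulate_ima(n, Q, seed=42):
    """Drifting disturbance from an ARIMA (0, 1, 1) model:
    z_t = z_{t-1} + a_t - Q*a_{t-1}, with N(0, 1) random shocks a_t."""
    rng = random.Random(seed)
    z, a_prev, series = 0.0, 0.0, []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        z = z + a - Q * a_prev
        a_prev = a
        series.append(z)
    return series

def ewma_forecasts(z, Q):
    """One-step-ahead EWMA forecasts, zhat_t = Q*zhat_{t-1} + (1 - Q)*z_t,
    the minimum mean square error forecast when Q matches the noise."""
    zhat, out = z[0], []
    for zt in z:
        zhat = Q * zhat + (1.0 - Q) * zt
        out.append(zhat)
    return out

z = simulate_ima(5000, Q=0.75)
f = ewma_forecasts(z, Q=0.75)
# one-step forecast errors z[t+1] - f[t] should behave like the shocks a_t
errors = [zt1 - ft for zt1, ft in zip(z[1:], f[:-1])]
```

When Q matches the disturbance, the sample variance of `errors` settles near the shock variance of 1, the minimum attainable, which is the sense in which the EWMA forecast is mmse.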
As a result of setting the IMA parameter Q to match the noise, Q is not only a parameter associated with the noise characteristics but also changes in value with the change in the random shocks at the input. The shocks, after passing through the dynamic process model (a second-order filter), manifest at the output as noise (disturbance), and the ARIMA (0, 1, 1) model captures the stochastic nature of the disturbance, Q being the IMA parameter of the ARIMA (0, 1, 1) model. It is pertinent to note this point at this juncture, as it will be helpful later when developing the probability model for feedback control adjustment. Figure 4.1 shows the flow chart for the method of feedback control adjustment using the feedback control algorithm (Equation 3.16). The feedback control adjustment (xt) given by the feedback control algorithm (Equation 3.16) derived in Chapter 3 is reproduced for convenient reference and completeness of discussion, along with the notation for the parameters.

xt = – [(et – d1et–1 – d2et–2)(1 – Q)] / [PG(1 – d1 – d2)] – (1 – Q)xt–b,  0 < Q < 1  ...(3.16)

Notation: xt is the adjustment in the input manipulated variable;

et is the forecast error, an integral over time of past errors, independent of (xt), the sequence of random shocks manifesting as disturbance at the output;
d1 and d2 are the parameters that represent the dynamic (inertial) characteristics of the dynamic model;
b is the process dead-time in whole time periods, b ≥ 1;
Q is the integrated moving average (IMA) parameter that represents the noise, 0 < Q < 1;
PG is the process gain, realised as the total effect in the output caused by a unit change in the input variable after the completion of the dynamic response (Baxley, Robert V. [1991]).

For a critically damped second-order dynamic system, when the conditions for feedback control stability are satisfied, it is shown in Venkatesan [1997] that

(i) w = PG (1 – d1 – d2) = 1 and

(ii) PG = 1/(1 – d1 – d2) = g, the steady-state gain.

w is the magnitude of the process response to a unit step change in the first period following the dead-time, which carries over into additional sample periods (Baxley [1991]). It was mentioned in Chapter 3 that only whole periods of dead-time with b = 1 are considered in this book for the discussion and analysis of stochastic feedback control adjustment. A method is proposed for a test of significance of whether a feedback control adjustment is required in a process that is subject to random shocks at the input. The basic probability test for a process that requires feedback control adjustment is best described by a suitable probability model. This model helps to conduct a test of significance for sounding alarm signals that the product quality mean is not on target and that the process requires a feedback control adjustment in order to bring the product quality mean on target. The test is of a (0, 1) type: an adjustment is either made or not made at the input. For the reasons mentioned in Section 4.1, it is likely that the feedback control adjustment may not be 100% complete under certain circumstances. The probability test model will then provide the necessary information about the probability of the feedback control adjustments required to bring the product


quality mean on target (the controller setpoint). In monitoring a closed-loop process operating under a known feedback control algorithm, the information about underlying changes in the process is reflected in the sequences of control actions and process output. This is found from the values of the adjustment (xt) from simulation

[Flow chart: the process under surveillance is monitored and tested for out-of-control signals, and the question "product quality mean on target?" is asked. If yes, the process is not adjusted; if no, the feedback control algorithm is simulated, the input feedback adjustment is computed and the necessary adjustment is made to the process.]

Figure 4.1 Flow Chart for the Method of Feedback Control Adjustment

results. This information is used to detect process changes by means of the EWMA forecasts falling outside the control limits, and the feedback control action is initiated when the forecasts cross the control limits. As long as the forecast falls within the control limits (and hence is considered close to the target), no change is made to the process. An appropriate adjustment is made when a forecast crosses the control limits. The range of control error standard deviations (CESTDDVN) for corresponding values of the IMA parameter Q is used to formulate process regulation (adjustment) schemes. The details of some alternative process regulation schemes presented in Table 4.5 offer an insight into the adjustment intervals and the corresponding CESTDDVN that can result for each particular adjustment interval.
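In code, the adjustment given by the feedback control algorithm (Equation 3.16) is a one-line computation. The following transcription is a sketch based on the equation as printed above; the function and argument names are the author's mathematical symbols, not programme code from the book.

```python
def feedback_adjustment(e_t, e_t1, e_t2, x_lag_b, Q, d1, d2, PG):
    """Input adjustment x_t from the feedback control algorithm (Equation 3.16).

    e_t, e_t1, e_t2 : forecast errors at times t, t-1, t-2
    x_lag_b         : earlier adjustment x_{t-b}, b whole periods of dead-time
    Q               : IMA parameter, 0 < Q < 1
    d1, d2          : dynamic (inertial) parameters of the second-order model
    PG              : process gain
    """
    return (-(e_t - d1 * e_t1 - d2 * e_t2) * (1.0 - Q) / (PG * (1.0 - d1 - d2))
            - (1.0 - Q) * x_lag_b)
```

For the critically damped case the text notes that PG(1 – d1 – d2) = 1, so the first term reduces to –(1 – Q) times the weighted error sum, which makes the unit-gain tuning discussed later in the chapter easy to check numerically.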


4.3 THE EWMA CHART

In the simulation study, the 'geometric moving average' at time t (gma) is used for monitoring by the EWMA chart, with an appropriate alarm criterion based on the EWMA statistic for sounding the 'out-of-control alarm signal'. The EWMA control limits (Section 4.4) give an indication of whether the forecast is significantly different from the target. When an EWMA signal is obtained, appropriate corrective control action based on the forecast is devised. This is explained in detail in the discussion of EWMA forecasting in Section 4.5. An alarm signal which indicates that the process may be out of control is the appearance of the last plotted point falling beyond the 3s control limits. A violation of the quality control 3s limit has only about a 0.1% chance of being a false alarm signal. (Reproduced from Proceedings of the IMechE, Part B, B11, 216, 2002, 1429–1442 by permission of the Council of the IMechE.) One of the purposes of the process control chart is to monitor a stable operation and to reveal special or assignable causes. It is then sensible to react to process changes only when some monitoring criterion is established as statistically significant. A suitable feedback control scheme should then be used to regulate (adjust) the process before making an adjustment that may produce a larger mean squared error than would be obtained from a pure feedback control adjustment scheme alone (Box and Kramer [1992]). In practice, the alarm control signal that the process needs immediate attention is aided by the use of 'runs' (short sequences of observations). There are two possibilities for dealing with data that are serially correlated: one is to use the original observations and suitably modify the control limits and rules to account for the correct process variance; the other is to model the observations as a time series and plot the resulting residuals on a control chart.
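The effect of serial correlation on a variance estimate can be illustrated for a first-order autoregressive disturbance. The sketch below is not Berthouex's [1989] procedure itself; it assumes an AR(1) model x_t = phi*x_{t-1} + a_t, for which Var(x_t - x_{t-1}) = 2*sigma_x^2*(1 - phi), so the usual moving-range estimate understates sigma_x by a factor of sqrt(1 - phi) when phi > 0. All names are hypothetical.

```python
import random
import statistics

def ar1(n, phi, seed=7):
    """Simulate a first-order autoregressive process x_t = phi*x_{t-1} + a_t."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def moving_range_sigma(x):
    """Usual short-term sigma estimate: mean moving range / d2, d2 = 1.128."""
    mr = [abs(b - a) for a, b in zip(x, x[1:])]
    return statistics.mean(mr) / 1.128

phi = 0.6
x = ar1(20_000, phi)
naive = moving_range_sigma(x)                 # understates sigma when phi > 0
corrected = naive / (1.0 - phi) ** 0.5        # autocorrelation-corrected estimate
true_sigma = (1.0 / (1.0 - phi ** 2)) ** 0.5  # theoretical sd of a unit-shock AR(1)
```

The corrected estimate recovers the true process standard deviation, in the spirit of 'correcting' the variance before computing modified control limits.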
If the observations have positive serial correlation, the variance of the process will be underestimated. If the observations are negatively correlated, the resulting estimate of the variation will be overestimated. It can be shown that, for data from a first-order autoregressive process, a negative lag-one correlation will decrease the variance of the time series and also the average run length (explained in Section 4.17). A positive lag-one correlation, while reducing the average run length (ARL) compared with a normal process, would increase the estimated variance over the true value. If this is ignored, the result is a control chart that has limits set too far from the mean and that may fail to indicate problems when they truly exist. The way to analyse such a situation is to 'correct' the estimated variance to account for autocorrelation and to use the corrected value to compute modified control limits (Berthouex [1989]). This is discussed in Section 4.17. The test of hypothesis is discussed in Section 4.4 before reverting to EWMA forecasting in Section 4.5. An advantage of conducting the simulation study is that the values of the output variances (control error variance, abbreviated varCE) and control adjustment action


variances (the square of the standard deviation of the input feedback control adjustment, SDdxt for short in the computer simulation program code) are obtained directly from the simulation results. This obviates the need for developing complex expressions, as shown by Kramer [1990] for the ('constrained') variance control schemes. It is pertinent at this juncture to explain the term adjustment interval (AI) that is used for making the feedback control adjustments. The term adjustment interval used in this monograph and in the process control literature should not be confused with the sampling interval used by statisticians and books on statistics. Kramer [1990] and Baxley [1991] used the terms monitoring interval and adjustment interval in reference to their respective process regulation scheme and simulation study, and so had specific intent. Kramer [1990] defined the monitoring ('sampling') interval as the multiple of the initial short base (unit) interval at which the process is experimentally monitored. Baxley [1991] defined the adjustment interval as the average number of sample periods (intervals) between adjustments, an adjustment being made to the process when the EWMA statistic violated the control limits for that statistic. So, as per Baxley's [1991] definition, an adjustment interval is made up of a number of sample intervals (periods). This means that the process is under constant surveillance, the 'gma' and the value of the adjustment are calculated at each instant, and the necessary adjustment is made to the process, if required, when the 'gma' crosses the control limits. This is an advantage for monitoring the process closely, but it may become costly if the adjustments cannot be automated, thus increasing the total cost of the process regulation scheme.
Another advantage of using the concept of the adjustment interval is that, for the time series controller, the sampling and control adjustment actions need not be done separately or independently of each other: the adjustment is made as soon as an out-of-control signal appears on the control chart, the required adjustment being known from computing the feedback control algorithm. The use of AI presents an opportunity to sample the process at every instant of time (at each iteration of the simulation run) if knowledge of the state of the process at every instant is required. This action may not be necessary, as the process may need to be sampled only when it is out of control in order to know the exact reason for the out-of-control signal. This may help in reducing the sampling cost and eventually the overall cost of the process regulation scheme. While Kramer [1990] explained the effects of altering the monitoring (sampling) interval on the dynamic parameter d, Baxley [1991] showed the effect of the IMA parameter Q on the adjustment interval. Baxley's [1991] notation and meaning of AI are followed in this monograph to discuss the simulation results. The value of AI (the reciprocal of the mean frequency MFREQ in the simulation computer programme) corresponds to the average number of sample periods (intervals) per adjustment.
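As a quick numerical illustration of Baxley's definition, AI can be computed from the 0/1 per-period adjustment indicator (FREQ in the simulation); the helper below is a hypothetical sketch, not part of the book's programme.

```python
def adjustment_interval(freq):
    """AI = 1/MFREQ, where freq holds the indicator FREQ: 1 for a sample
    period with an adjustment, 0 otherwise (Baxley's definition)."""
    mfreq = sum(freq) / len(freq)
    return float('inf') if mfreq == 0 else 1.0 / mfreq

# e.g. an adjustment in 2 of 20 sample periods: MFREQ = 0.1, so AI = 10
freq = [0] * 9 + [1] + [0] * 9 + [1]
ai = adjustment_interval(freq)
```

A run with no limit violations (all zeros) gives an infinite AI, matching the table entries later in the chapter where MFREQ = 0.000 is reported with no adjustments.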


4.4 THE NULL HYPOTHESIS

The IMA property of the EWMA forecast is used to (i) compare the quality deviations from target, that is, the mean of the quality data GMA (geometric moving average), with the control limits (LCL and UCL), and (ii) adjust only when the GMA is greater than L sẐ or less than –L sẐ, where L denotes the number of multiples of sa used for the control limits. These limits are set at

LCL = – L sẐ ...(4.1)

and

UCL = L sẐ ...(4.2)

where sẐ is the standard deviation of an estimate of the expected value of the disturbance z for any future time, conditional upon the realisations of z up to time t. Note: the statistic at time t is denoted by the small letters 'gma', and the capital letters GMA denote the geometric moving average in general in the discussion. The null hypothesis, denoted by H0, is that the (process) mean of the quality variable is on target. The EWMA forecasts estimate the process deviation from target and help determine the approximate adjustment for returning the process to target. The 'test' statistic (which is a function of the observed random sample) is calculated from the quality deviations from the target and compared with the control limits for that statistic. Under drifting behaviour and special causes, there is a probability of the test statistic falling outside the critical region defined by the control limits. The rejection of H0 would mean that there is evidence to suggest that the product mean is not on target, and would lead to the acceptance of the alternative hypothesis, denoted by H1 (mean not on target). Accepting H0 when H1 is true, that is, not taking action when the process is not in control, leads to a Type II error: judging the process to be in control when it is actually not in control. The risk of committing a Type II error is minimised in the following manner. When a limit violation occurs, it is assumed that a special cause is present in the process and needs to be corrected. The adjustment is then calculated to compensate for the change in the forecast from the previous sample period. This approach seems better than the hypothesis-testing Shewhart approach, which is aimed at minimising the risk of taking action when the process is in control (explained below) (Baxley [1991]). That action can lead to a Type I error: rejecting H0 when H0 is true, by judging a process to be out of control when in fact it is in control.
This would initiate a search for assignable causes that do not exist. The probability of committing such a Type I error, the 'level of significance' of the test, a, is pre-assigned (at 1%, 5% or 10%). The critical region is chosen so that it is compatible with a. This critical region is the one with the highest power


of the hypothesis-testing procedure. Since the test statistic falls inside the critical region, this hypothesis-testing approach does not help in providing an estimate of the new process level from which to calculate the required adjustment. For a time series controller, a sample is taken and corrective action is taken at once, following an out-of-control alarm signal for level shifts in the mean of the quality data away from the target, in such a manner as to return the quality index to target. This is achieved for values of Q closer to 0 and 1.0 for drifting processes, with the EWMA control limit parameter L fixed by Equations (4.1) and (4.2) for controlling the average number of sample periods. The term control is used here in the sense of the stationary or iid variation about a target value (page 266, Box and Kramer [1992]) and embraces Shewhart's [1931] definition of control: to predict, within limits, the future behaviour of a process, with the possibility of bringing the process into a state of statistical control by adjustment (page 257, Box and Kramer [1992]). The adjustment interval (AI) and the control error standard deviation (CESTDDVN), sE (control error sigma), are obtained from the simulation results of the computer programme for simulation of the stochastic feedback control algorithm (Equation 3.16). These are important for determining the time series controller performance measures (referred to earlier in Section 3.10).

4.5 EWMA FORECASTING

Equation (3.10) for the non-stationary disturbance represented by the ARIMA (0, 1, 1) model is rewritten to give the following recursive formula for updating a forecast of the process level

ẑt(l) = Q ẑt–1(l) + (1 – Q) zt ...(4.3)

The current forecast ẑt(l) of Equation (4.3) for lead time l is re-expressed by making successive substitutions for the previous forecasts to obtain

ẑt(l) = (1 – Q)(zt + Q zt–1 + Q² zt–2 + ...) ...(4.4)

The EWMA forecasts are a weighted average of the current and historical data, where the weights decrease exponentially for data further back in time. The weights are (1 – Q), (1 – Q)Q, (1 – Q)Q², ..., which sum to unity since 0 < Q < 1 and

∑ (i = 0 to ∞) x^i = 1/(1 – x), 0 < x < 1.

These weights are relatively heavy on recent data for ‘fast’ drifts but spread back in time for ‘slow’ drifts (refer to Figure 4, page 256, Baxley [1991]). Fast and slow drifts are explained first before proceeding further.
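The weight structure of Equation (4.4) can be checked numerically; `ewma_weights` is a hypothetical helper written for this illustration.

```python
def ewma_weights(Q, n):
    """First n EWMA weights (1 - Q), (1 - Q)Q, (1 - Q)Q^2, ... of Equation (4.4)."""
    return [(1.0 - Q) * Q ** i for i in range(n)]

# heavy weight on recent data when Q is small (fast drifts);
# weights spread back in time when Q is near 1 (slow drifts)
w_fast = ewma_weights(0.05, 400)
w_slow = ewma_weights(0.95, 400)
```

Both weight sequences sum to 1 - Q^n, which approaches unity since 0 < Q < 1, confirming that the EWMA is a genuine weighted average.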


4.5.1 Fast and Slow Drifts

A process with 'fast' drifts is one whose true process level can move rapidly over a range equal to the magnitude of the variance of the random shocks, NID(0, sa), because of the relatively large variance of the disturbance (Zt). The IMA parameter Q, with values from 0 to 1, approaches zero for processes with fast drifts. As Q approaches zero, the process becomes less and less stable and closer to a 'random walk'. The random walk is an IMA (0, 1, 1) process with Q = 0. Processes with slow drifts, that is, processes with values of Q approaching 1, have a small variance of the shocks and can be modelled as IMA processes with the parameter Q approaching 1 (Baxley [1991]). As Q approaches unity, the time series model representing the drifting disturbance (Equation 3.10) behaves more like a stationary model. When Q = 1, Equation (3.10) becomes the stationary model in which the errors (measurement and sampling errors) are 'iid' (independent and identically distributed) about a fixed mean. See Section 3.9, where it is assumed that 'there are no large observational errors in the measurement of the manipulated variable and these uncorrelated errors, even if present, are assumed to be negligible compared with the errors in forecasting or prediction'. The rate of drift of the process, r, away from the target for drifting processes is determined by the IMA parameter Q. Baxley [1991] described the 'rate of drift' for the IMA (0, 1, 1) disturbance model with values of Q ranging from those approaching 0 up to 0.25 as fast drifts, and values of Q above 0.75 and approaching 1.0 as slow drifts. For more information on identifying the IMA parameter, with r = (1 – Q), as an EWMA forecast parameter, see Section 6.7.3 (Venkatesan [1997]). The control limits are set at multiples of (usually 3 times) the standard deviation of ẑt+1(l). It is known that

sẐ = ([1 – Q^2t][(1 – Q)/(1 + Q)])^1/2 sa

which asymptotically converges to [(1 – Q)/(1 + Q)]^1/2 sa after a small number of sample periods, since Q < 1. At this stage, it is assumed that the value of sẐ when there are no drifts will be the same as when there are process drifts. The control parameter L denotes the number of multiples of sa used for the control limits. The upper and lower control limits are set at the values given by Equations (4.1) and (4.2). The one-step-ahead forecasts are plotted about a target value T against control lines drawn at a distance of ± L sẐ, that is, ± L [(1 – Q)/(1 + Q)]^1/2 sa, above and below the target. Such a plotting procedure provides timely warning of a deviation from target and of the possible need for corrective feedback control action, and may also provide clues as to possible assignable causes of variation


which may subsequently be eliminated or compensated for, in order to improve the process. As long as the predicted forecast falls within these control limits (and hence is considered close to the target), no change is made to the process. An appropriate adjustment is made when a forecast crosses the control limits. This is similar to keeping a Shewhart chart on the predicted deviation from target one step ahead, particularly when the disturbance series is a sequence of highly dependent random deviates about a fixed mean with a tendency to drift. These control limits are related to the cost of making a change relative to that of being off-target, and to the parameters of the non-stationary stochastic model. The minimum mean square error forecast of the non-stationary stochastic model is the geometrically (exponentially) weighted moving average of the previous observations, with the IMA parameter Q of the ARIMA (0, 1, 1) process as the discount factor. For a dynamic system where a cost is associated with making a change, we obtain the standard Shewhart chart with different control limits, which are not related to any tests of significance or probabilities of being outside the control limits. This control action is the discrete analogue of integral control action, which accumulates the deviations from target when action is taken at every sampling or adjustment interval. This action is equivalent to taking an exponentially weighted moving average of past disturbances zt, zt–1, zt–2, ... (Box, Jenkins and MacGregor [1974]). In the computer simulation, the quality deviations from target, that is, the mean of the quality data 'gma' (the notation used for the geometric moving average at time t in the computer simulation), are compared with the control limits, UCL and LCL.
Then, the adjustment required at time t, dxt (the notation used in the computer simulation for the control adjustment required in the input manipulated variable at time t), is calculated by means of the feedback control algorithm, and the necessary corrective action is applied to compensate for the change in the forecast from the previous sample period and return the quality index to target. From the simulation results, it is possible to know (a) when to make an adjustment to the process and (b) by how much (how many units) to change the input variable so that the output product quality variable is at or near target. The output results of the simulation programme, given in Table 4.1, titled 'Time Series Controller Performance Measures', show the variance of the control errors (VarCE) and an average (mean of FREQ, MFREQ) of an indicator variable (adjustment frequency, FREQ), which takes the value 1 for sample periods with an adjustment and zero otherwise in the computer simulation of the stochastic feedback control algorithm (Equation 3.16). The results are shown for values of Q of 0.05 and 0.25 (fast drifts) and 0.75 and 0.95 (slow drifts) only.
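The monitoring-and-adjustment cycle described above can be sketched end to end. The loop below is an illustrative simplification, not the book's simulation programme: the adjustment simply cancels the EWMA forecast deviation rather than applying the full Equation (3.16) with process dynamics and dead-time, and all names are hypothetical.

```python
import random

def monitor_and_adjust(n, Q, L=3.0, seed=11):
    """Monitoring loop sketch: update the IMA(0, 1, 1) disturbance, form the
    EWMA forecast ('gma'), and adjust only when the forecast crosses the
    control limits of Equations (4.1) and (4.2). Returns the 0/1 indicator
    FREQ of sample periods with an adjustment."""
    rng = random.Random(seed)
    limit = L * ((1 - Q) / (1 + Q)) ** 0.5    # asymptotic UCL = -LCL
    z = gma = a_prev = 0.0
    freq = []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        z += a - Q * a_prev                   # drifting disturbance
        a_prev = a
        gma = Q * gma + (1 - Q) * z           # EWMA forecast (Equation 4.3)
        if abs(gma) > limit:                  # out-of-control signal
            z -= gma                          # compensating input adjustment
            gma = 0.0
            freq.append(1)
        else:
            freq.append(0)
    return freq

freq = monitor_and_adjust(10_000, Q=0.75)
mfreq = sum(freq) / len(freq)                 # mean adjustment frequency
```

The recorded indicator gives MFREQ directly, and its reciprocal gives the adjustment interval AI reported in Table 4.1.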


Table 4.1 Time Series Controller Performance Measures for the IMA Parameter Q, Rate of Drift r = 1 – Q, Dynamic Parameters d1, d2 and Dead-time b = 1.0

IMA parameter Q   Rate/drift r = 1 – Q   d1      d2      MFREQ   AI      Var. of E (VarCE)
0.05 (fast)       0.95                   –1.82   –0.83   0.000    0.00   1.000
0.05 (fast)       0.95                   –1.00   –0.25   0.102    9.80   1.079
0.25 (fast)       0.75                   –1.00   –0.25   0.050   20.00   1.060
0.25 (fast)       0.75                   –0.27   –0.02   0.075   13.33   1.152
0.75 (slow)       0.25                    0.00    0.00   0.000    0.00   1.000
0.75 (slow)       0.25                   –1.00   –0.25   0.000    0.00   1.000
0.95 (slow)       0.05                   –1.82   –0.83   0.116    8.62   1.000
0.95 (slow)       0.05                   –0.10    0.00   0.187    5.34   1.007
MFREQ = mean frequency of process adjustment; adjustment interval AI = 1/MFREQ. The performance of the time series controller, measured by the control error variance (varCE) and the average adjustment interval (the mean AI being equal to 1/MFREQ, MFREQ being the mean adjustment frequency), is obtained from the simulation results of the feedback control algorithm. Control error standard deviation (CESTDDVN) = sE/sa, the standard deviation of the forecast (control) error divided by the standard deviation of the random shocks.

There is no prior work that estimates the control error sigma (CESTDDVN) or the adjustment interval (AI) when the disturbances follow an ARIMA (0, 1, 1) process (Baxley [1991]). Graphical plots (Figures 4.2, 4.3 and 4.4) of the values of CESTDDVN versus Q and AI show the variation in CESTDDVN due to the IMA parameter Q and AI.
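The performance measures just tabulated are direct summaries of the simulation output. A sketch (with hypothetical names) of how VarCE, CESTDDVN, MFREQ and AI would be computed from recorded arrays:

```python
import statistics

def performance_measures(control_errors, shocks, freq):
    """Time series controller performance measures from simulation output:
    VarCE, CESTDDVN = s_E/s_a, MFREQ and AI = 1/MFREQ."""
    var_ce = statistics.pvariance(control_errors)
    cestddvn = statistics.pstdev(control_errors) / statistics.pstdev(shocks)
    mfreq = sum(freq) / len(freq)
    ai = float('inf') if mfreq == 0 else 1.0 / mfreq
    return var_ce, cestddvn, mfreq, ai
```

Because the measures come straight from recorded arrays, no closed-form expressions of the kind derived by Kramer [1990] are needed to evaluate a regulation scheme.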

4.6 DEVELOPMENT OF SIMULATION STRATEGY

The standard deviation of the random shocks (sa) is set to unity. Vander Wiel and Vardeman [1992] opined that a process disturbance, if different from an IMA, will result in poor performance of a feedback control algorithm formulated for an IMA disturbance; its performance may even be worse than with no feedback control if the disturbance is really a moving average. Feedback control based on the IMA disturbance model works whenever the variance of the error of the EWMA forecast of the disturbance is (substantially) lower than the variance of the


original disturbance (Box and Kramer [1992]). So, the IMA parameter Q is set to match the disturbance. Baxley [1991] used experimental design to find the control limits L and the controller gain (CG) (explained in Section 4.7) in his simulation study of SPC algorithms for drifting processes. It is not necessary to run 'response surface experiments' on the sets of values of Q, d and b for the time series controller to determine its optimal tuning parameter combinations. This is because (i) the controller gain (CG) is set to 1.0 and (ii) the control limits given by Equations (4.1) and (4.2) depend only on Q. Yet the control error sigma (CESTDDVN) and the adjustment frequency (FREQ in the computer simulation) are maintained as the responses of interest since, as noted earlier in Section 3.9, CESTDDVN and AI, the average number of sample periods (intervals) between adjustments, are the performance measures for the time series controller. It was mentioned in Section 4.5 that the reciprocal of the average adjustment frequency (MFREQ in the computer simulation programme) is the adjustment interval (AI). A controller, to be optimal, should give the lowest process variability (CESTDDVN) for (i) an average adjustment interval AI, (ii) the value of r, the rate of the process drift, and (iii) the process dynamic model (transfer function). Instead of Baxley's [1991] analytical approach of using the Nelder-Mead simplex search algorithm (Nash [1979]) to find tuning parameter combinations that give minimal control error sigma, use is made of the following observation of Box and Jenkins [1970, 1976] in Section 3.4: the values of the CESTDDVNs given by the simulation runs are the minimum variance of the output variable, since the feedback control adjustment action xt (dxt in the computer simulation) given by Equation (3.16) exactly compensates for the forecasted disturbance.
The minimal CESTDDVN and the AI are found directly from the simulation results. Since these minimum variance controllers are derived via simulation, it is possible to find adjustment intervals that minimise the sum of the adjustment cost (which includes the sampling cost) and the off-target cost (explained in Section 4.14, along with an outline description of the process regulation schemes and statistical adjustment procedures).

4.7 CONTROLLER GAIN (CG)

The controller gain (CG) represents the change in the controller output in response to a change in the deviation. The principal function of a controller is to provide regulation against changes in load. This is accomplished by making the controller gain as high


as possible. There is, however, an upper limit which the controller gain cannot exceed without leading to undamped oscillations. In this context, this limit of stability must be explored in order to determine how stable a feedback controller will be when it is applied to a process (page 5, Shinskey [1988]). The reasons for setting the controller gain CG to 1.0 in the simulation of the stochastic feedback control algorithm (Equation 3.16) are discussed briefly in this section. The maximum value of the controller gain for stable operation of a (pure) delay process is one (Chandra Prasad and Krishnaswamy [1975]). At this stage, it is intuitively assumed that the tuning parameter for the time series controller is Q only, which depends on the rate of drift (r) of the process, r being equal to 1 – Q. In making this assumption, inferences are drawn from Baxley's [1991] simulation study of the behaviour of the EWMA and CUSUM controllers for drifting processes. Baxley [1991], from the contour plots of AI (= 1/AF) and SE versus CG for the EWMA controller, showed that along the AI contours the control error sigma (SE) is lowest when the controller gain is about 1.0. Baxley [1991] used an optimisation procedure based on the Nelder-Mead search algorithm (Nash [1979]) to find tuning parameter combinations that gave a minimum SE subject to a constraint on the adjustment frequency. Baxley [1991] found, from the sample results of these optimisations along with the results of additional simulation runs of 10,000 sample periods, that the optimum controller gain is near 1.0. The adjustment control action exactly compensating for the forecasted deviation had a strong appeal for Baxley [1991] in setting CG to 1.0 for the zero dead-time case. By drawing a scatter plot of SE versus AI for these runs at optimal settings (CG = 1.0), Baxley [1991] showed that the optimal controllers lie along the lower edge of the scatter plot.
For these reasons, it is assumed that this property of optimal controllers will also hold for a time series controller, which is a minimum mean square error (MMSE) controller. It will then be sufficient to consider Q as the only tuning parameter for the other cases of dead-time, say b = 1, 2. This is also in view of the use of EWMA forecasts, which exhibit the IMA property.
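The role of Q as a drift-rate parameter (r = 1 – Q) can be seen by simulating the IMA (0,1,1) disturbance itself. The sketch below assumes the standard Box and Jenkins form zt = zt–1 + at – Q·at–1 with NID(0, 1) shocks; the function name is illustrative and is not part of the book's simulation programme.

```python
import random

def ima_disturbance(theta, n, seed=0, sigma_a=1.0):
    """Generate an IMA(0,1,1) disturbance z_t = z_{t-1} + a_t - theta*a_{t-1}.

    Small theta (fast drift, r = 1 - theta near 1) wanders quickly;
    theta near 1 drifts only slowly about its level.
    """
    rng = random.Random(seed)
    z, a_prev = 0.0, 0.0
    series = []
    for _ in range(n):
        a = rng.gauss(0.0, sigma_a)
        z = z + a - theta * a_prev
        a_prev = a
        series.append(z)
    return series

fast = ima_disturbance(theta=0.05, n=2000)   # r = 0.95, drifts rapidly
slow = ima_disturbance(theta=0.95, n=2000)   # r = 0.05, drifts slowly

spread = lambda s: max(s) - min(s)
# with the same shock sequence, the fast-drift series wanders over a far wider range
print(spread(fast) > spread(slow))
```

Because both calls use the same seed, the comparison is over identical shock sequences and isolates the effect of Q alone.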

4.8 SIMULATION METHODOLOGY AND EWMA PROCESS CONTROL

Satisfactory control of a process with time-delay was made possible because (i) the process output followed the command input (signal) closely and remained unaffected by variations in the process parameters, (ii) the second-order dynamic model with time delay (Equation 3.6) matched the process and served as an adaptive model of it, and (iii) as the feedback control system operated, the model tracked the variations in the parameters of the process. Under steady-state conditions, owing to drifting of the (sensitive) feedback control algorithm (Equation 3.16), a small mismatch between the process and the model did
not significantly affect the operation of feedback control stability. This is important since closed-loop stability cannot be guaranteed when there is process/model mismatch (page 1486, Harris and MacGregor [1987]). See Section 3.6.1 for the discussion on feedback control (closed-loop) stability. The property of the IMA (0,1,1) model that the forecast for all future times is an exponentially weighted moving average (EWMA) of current and past values of the disturbance z's (pages 106 and 145, Box and Jenkins [1970, 1976]) is used to predict the future GMA, the geometric moving average. An ARIMA (0, 1, 1) time series model is fitted to the variable quality data by superimposing the one-step-ahead forecasts along with the control limits. The forecast originating at any time t is a weighted average of the previous forecast (at time t – 1) and the current data. Box and Jenkins (page 128, [1970, 1976]) showed that, for a lead time ℓ, these forecasts estimate the process deviation from target without bias and the forecast errors,

et(ℓ) = zt+ℓ – ẑt(ℓ)

have a lower variance than those of any other statistic calculated from the historical data. The forecasts also help determine the appropriate adjustment for returning the process to target. By making an adjustment at every sample point which exactly compensates for the forecasted disturbance, the variance of the output controlled variable can be minimised. The time series controller feedback control algorithm (Equation 3.16) fits this criterion since it requires an adjustment for every sample and gives the minimum control error variance as long as the dynamic model describing the process and the stochastic model describing the disturbance are correct. An advantage of conducting the simulation study is that the values of the output variance (control error variance, varCE) and the control adjustment variance (variance of the control adjustment, var dxt in the computer simulation, that is, (Sddxt)2, the square of the standard deviation of dxt) are obtained directly from the simulation results. This obviates the need to develop complex expressions, as shown by Kramer [1990], for the ('constrained') variance control schemes.
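The forecasting property described above can be checked numerically: with smoothing weight λ = 1 – Q, the one-step-ahead EWMA forecast of an IMA (0,1,1) disturbance is the MMSE forecast, and the one-step forecast errors reduce to the random shocks. The sketch below is a minimal illustration with hypothetical names, assuming unit shock variance; it is not the book's simulation code.

```python
import random

def ewma_forecasts(z, lam):
    """One-step-ahead EWMA forecasts: zhat_{t+1} = lam*z_t + (1-lam)*zhat_t."""
    zhat = z[0]                # start the recursion at the first observation
    forecasts = []
    for obs in z:
        forecasts.append(zhat)
        zhat = lam * obs + (1 - lam) * zhat
    return forecasts

# generate an IMA(0,1,1) series and check the forecast-error variance
theta, n = 0.7, 20000
rng = random.Random(1)
z, zt, a_prev = [], 0.0, 0.0
for _ in range(n):
    a = rng.gauss(0.0, 1.0)
    zt = zt + a - theta * a_prev
    a_prev = a
    z.append(zt)

f = ewma_forecasts(z, lam=1 - theta)
errs = [obs - fc for obs, fc in zip(z, f)]
var_e = sum(e * e for e in errs[100:]) / (n - 100)   # skip the start-up transient
print(round(var_e, 2))   # close to the shock variance of 1.0
```

The start-up error of the recursion decays geometrically with factor 1 – λ, so discarding the first hundred samples leaves essentially the shocks themselves.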

4.9 FEEDBACK CONTROL ADJUSTMENT

The computer calculated the value of the required adjustment, a small increment in the input variable xt (notation dxt in the computer simulation programme), for each sample interval whenever the geometric moving average (gma) was less than the LCL or greater than the UCL, thus fulfilling the monitoring criterion and the feedback control regulation procedure. During simulation runs, the values of xt (x at time t), obtained by adding the adjustment (the increment in xt) to xt–1 (xtm1 in the computer simulation), and the gma were recorded at various sample intervals (instants of time). It was found that whenever gma had
a value lying between the control limits, no adjustment was needed (and hence no sampling was necessary), so the increment in xt (the adjustment dxt) was zero, as was its mean, even though a time series controller calls for an adjustment at every sample interval. Thus, by using the gma statistic in EWMA control for monitoring, unnecessary adjustments are avoided, and so too are the associated costs of sampling and adjustment. As long as dxt was zero, xt was equal to xt–1. On the other hand, when the gma crossed either of the control limits, dxt took some significant value and a control adjustment action was required for that particular sampling interval. It was also found that the value of xt was zero even though et had some significant value. This was because the values of the term xt–b–1 for b = 1, 2, namely xt–2 and xt–3, were set to zero whenever they were sufficiently small. The value of xt then becomes zero, and so does dxt. The process then requires no adjustment, the gma plot falls within the control limits and the process is in control. Table 4.1, titled 'Time Series Controller Performance Measures' (Section 4.5.1), shows the values of the CESTDDVN and the adjustment interval (AI) (1/MFREQ) for values of Q ranging from 0.05 (fast drifts) to 0.95 (slow drifts). Values of Q from 0.30 to 0.65 were of less interest since only fast and slow process drifts are considered for the ARIMA (0, 1, 1) disturbance model. Disturbances with Q closer to 0 may be termed less noisy, while a value of Q = 0.7 denotes a fairly noisy non-stationary disturbance. The simulation results indicate that the feedback control algorithm (Equation 3.16) holds potential for reducing product variability (control error sigma, CESTDDVN).
If the input to the system (SISO, single input single output, considered in this monograph) is zero, the process described by Equation (3.5) will return to the desired final state Yt = 0, owing to the iterative nature of the feedback control algorithm (Equation 3.16), even if no control is applied. There are definite benefits in applying this stochastic feedback control algorithm to the control of product variability, as will be shown subsequently.
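The monitoring rule described above (adjust only when the gma statistic crosses a control limit; otherwise dxt stays at zero) can be sketched in miniature. This is a simplified stand-in with hypothetical names: the 'adjustment' here simply cancels the forecast deviation rather than applying the full dead-time algorithm of Equation 3.16.

```python
import random

def run_control(theta=0.75, limit=3.0, n=10000, seed=2):
    """EWMA (gma) monitoring with a deadband: adjust only when the gma
    statistic crosses a control limit; count the adjustment frequency."""
    rng = random.Random(seed)
    lam = 1 - theta
    level, a_prev, gma = 0.0, 0.0, 0.0
    n_adjust, errs = 0, []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        level += a - theta * a_prev        # IMA(0,1,1) disturbance increment
        a_prev = a
        e = level                          # control error (deviation from target 0)
        errs.append(e)
        gma = lam * e + (1 - lam) * gma    # EWMA of the observed deviations
        if abs(gma) > limit:               # outside LCL/UCL: make an adjustment
            level -= gma                   # compensate the forecast deviation
            gma = 0.0
            n_adjust += 1
    sigma = (sum(x * x for x in errs) / n) ** 0.5
    return sigma, n_adjust / n

sigma, af = run_control()
print(round(sigma, 2), round(af, 4))
```

Between limit crossings no adjustment (and hence no adjustment cost) is incurred, which is the point of using the gma for monitoring.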

4.10 BENEFITS AND LIMITATIONS OF INTEGRAL CONTROL

Automatic process control (APC) techniques have been applied to process variables such as feed rate, temperature, pressure, viscosity, etc. The application of APC or engineering process control techniques to issues of product quality is not new either; automatic process control has been applied to product quality variables as well. Conventional engineering control practice uses the potential for step changes to justify an integral term in the controller algorithm, giving (long-run) compensation for a shift in the mean of the product quality variable. Process regulation is an important function of a controller, intended to keep the output controlled variable at the desired set point by changing the input as often as necessary. Every process is subject to load variations. In a (well-)regulated feedback control loop, the input manipulated variable will be driven to balance the load, as
a consequence of which the load is usually measured by engineers in terms of the corresponding value of the output controlled quality variable. The block diagram for the feedback control model illustrated in Figure 3.1 (Section 3.5, Chapter 3) assumed that the adjustment xt = f(et, et–1,...) can be written as a (linear) function of past deviations from target, which is appropriate if the costs of adjustment and of sampling are negligible. This is likely to be the situation in process industries where adjustments are automated, so the assumption of a linear function holds for the form of feedback control considered. This is evident from the feedback control adjustment equation (Equation 3.16), which shows the adjustment xt as a (linear) function of the past deviations et, et–1, ... . A controller with integral action changes the output variable as long as a deviation from target or set point exists (page 15, Shinskey [1988]) and produces a slightly greater mean square error (MSE) at the output than actually required. The rate of change of the output (variable) with respect to time is proportional to the deviation. As mentioned earlier in Section 3.6, the closed-loop gain must be 1 in order to sustain oscillations in the feedback control (closed) loop. A process variable with a uniform 'cycle' and sustained oscillations does not threaten the stability of the feedback control system. Under integral control, the (feedback) closed loop oscillates with uniform amplitude. The feedback loop tends to oscillate at the period where the system gain is unity; the integral (also known as 'reset') time, that is, the time constant (I) of the controller, then affects only the period of oscillation, which increases with damping for a controller with integral action in a dead-time loop.
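The dead-time loop quantities cited from Shinskey [1988] in this section, a natural period of 4Td under integral control and a zero-damping integral time of 0.64 PG Td, can be checked with a few lines; the helper below is illustrative only.

```python
def integral_tuning(process_gain, dead_time):
    """Relations quoted from Shinskey [1988] as cited in the text:
    a pure dead-time loop under integral control cycles with period 4*Td,
    and the integral time for zero damping is 0.64 * PG * Td."""
    period = 4.0 * dead_time
    i_zero_damping = 0.64 * process_gain * dead_time
    return period, i_zero_damping

period, i_u = integral_tuning(process_gain=1.0, dead_time=1.0)  # Td = 1 minute
print(period, i_u)  # 4.0 0.64
```

For the 1-minute dead-time example discussed in the text, this reproduces the 4-minute cycle and the 0.64 PG minute integral time.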
It can be shown that the integral time (Iu) for zero damping (which requires a loop gain of 1.0) is 0.64 PG Td, where PG is the process gain and Td the dead-time (page 16, Shinskey [1988]). A process with a dead-time of 1 minute would cycle with a period of 4 minutes under integral control, with oscillations sustained by an integral time constant of about 0.64 PG minutes. Control engineers endeavour to make this integral time as nearly equal to the process dead-time as possible so that the process variable can take the same path as the dead-time. Damping can be achieved by reducing the closed-loop (feedback) gain and by increasing the integral time, as shown in Chapter 9. Sampling actually improves control of a dead-time process: it allows a controller with integral action to approach best-possible performance (explained in Chapter 9, Section 9.9.1) if its sample interval* (or 'scan period') is set equal to the process dead-time. The loop is even 'robust' (also explained in Section 9.9.2) against increases that occur in dead-time, provided the sample interval is set at the maximum expected dead-time. For this, the integral time should be set at the product of PG and AI (the sample period).

*Sample interval or scan period of a digital controller, in process control terminology, is the interval between executions of a digital controller operating intermittently at regular intervals.

The integrating control action succeeds in eliminating 'offset' (the deviation obtained with proportional control) at the expense of reduced speed of response and
increasing the period of the feedback control loop with its phase lag. When the integral time is too long, the feedback loop is slightly overdamped, but this does not lead to instability. Integral action can, however, cause limit-cycling with an 'integrating process' in the presence of valve 'dead band'. In the absence of dead band, loop damping will depend on the integral time setting (Shinskey [1988]). Long integral times do not lead to control instability, not even when controlling an integrating process; in the latter case a slightly damped response is produced in the output controlled variable, which is taken care of by suitably modelling the randomness of the output variable with the ARIMA (0,1,1) stochastic model. Refer to pages 23-24 of Shinskey [1988] for the meaning of an integrating process.

4.10.1 Dead band, Dead-zone, Non-self-regulation: Explanation

In this connection, a brief explanation is given of the difference between 'dead band' and 'dead zone'. Dead band is distinguished from dead zone in that different paths are taken for increasing and decreasing signals, producing phase lag as well as attenuation. Dead band is also referred to as 'square-loop hysteresis' by process control practitioners. Dead band is commonly observed in the operation of valve motors, where it is caused by friction in the packing and valve guides. Friction opposes motion in either direction; therefore, on a change in the direction of the input signal, motion ceases until the deviation between the input manipulated variable and the output controlled variable develops enough force to overcome the friction (page 177, Shinskey [1988]). The main difference between a dead band and a dead zone is that a dead-zone element removes the centre of a sine wave. The dead-zone element is used in some control systems to filter amplitude-sensitive 'noise' (a disturbance or an unwanted signal) and to prevent overlapping of sequenced functions.
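A dead-zone element of the kind just described, one that removes the centre of the signal, can be sketched as a simple static nonlinearity. This is an illustrative sketch assuming a symmetric zone of half-width `width` (a hypothetical parameter name).

```python
def dead_zone(x, width):
    """Dead-zone element: output is zero inside +/-width, and the signal
    shifted toward zero outside it (removes the centre of a sine wave)."""
    if x > width:
        return x - width
    if x < -width:
        return x + width
    return 0.0

print(dead_zone(0.3, 0.5), dead_zone(1.0, 0.5), dead_zone(-1.5, 0.5))  # 0.0 0.5 -1.0
```

Inputs inside the zone produce no output at all, which is precisely why a signal left inside the zone receives no corrective action.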
An example is the sequential delivery of acid and base reagents to control the pH of a solution. The valves may be adjusted to allow a finite dead zone between one closing and the other opening in order to avoid concurrent addition. The valves would then respond to a sine wave at the controller output. An element of caution is required when working with a loop containing a dead zone. The signal will not remain in the dead zone if the process is 'non-self-regulating', or self-regulating* but with a large time constant and high steady-state gain (as in pH control). Because of the lack of control action there, the signal will drift into one live zone or the other, eventually coming to rest at one of the corners or limit-cycling between them (page 176, Shinskey [1988]). An integrating process, with its characteristic called non-self-regulation, cannot balance itself; the process has no natural equilibrium or steady state. To understand the principle and concept of non-self-regulation, consider as an example the non-self-regulating process in a fluid control system in which liquid constantly flows into and out of a tank, with a manually controlled valve to adjust the valve position for
the inflow and a metering pump to deliver constant outflow. In this process, if the inflow varied in the slightest from the outflow, the tank would eventually flood or run dry. This characteristic is called non-self-regulation (see pages 23-25, Shinskey [1988]). A non-self-regulating process cannot be left unattended for long periods of time without automatic process control, and most liquid-level processes are non-self-regulating. If the metering pump is replaced with a valve, an increase in liquid level inherently increases the outflow. This action works towards restoring equilibrium and is called self-regulation*.

4.10.2 Limitations of Integral Control

Another limitation is that there may be a maximum integral (reset) rate which cannot be exceeded without encountering stability difficulties, and the controller saturates its integral mode when the input exceeds the range of the input manipulated variable. This condition is called 'integral wind-up' by engineers and results in overshoot before control is restored. Overshoot can be avoided by setting the integral time higher than that required for (load) regulation, and can also be minimised by limiting the rate of set-point changes (Shinskey [1988]). During a 'step' change (an instantaneous change of the input process variable from a steady level of zero to a steady level of unity), the closed-loop output is serially independent when pure one-step minimum mean square control is used, which is often the practice in process control operations. A time-delay of two time periods was taken care of by considering dead-time b = 2.0 in the feedback control algorithm. A dynamic element such as integral action, within the domain of (linear) controllers, has both beneficial and undesirable properties. The selection of the control mode requires a prior understanding of its benefits and drawbacks.
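Integral wind-up and the usual clamping remedy can be illustrated with a toy discrete integrator. This is a generic anti-windup sketch (conditional integration against an assumed actuator range), not the book's controller.

```python
def integrate_with_clamp(errors, ki=0.5, out_lo=-1.0, out_hi=1.0, anti_windup=True):
    """Discrete integral-only controller; optionally clamp the integral term
    so it cannot wind up beyond the actuator range (a common anti-windup fix)."""
    integ, outputs = 0.0, []
    for e in errors:
        integ += ki * e
        if anti_windup:
            integ = max(out_lo, min(out_hi, integ))   # conditional integration
        outputs.append(max(out_lo, min(out_hi, integ)))
    return integ

# a long saturating error followed by a sign reversal
errs = [1.0] * 20 + [-1.0] * 2
wound = integrate_with_clamp(errs, anti_windup=False)   # integral grows to 10 - 1 = 9
clamped = integrate_with_clamp(errs, anti_windup=True)  # held at the limit, recovers fast
print(wound, clamped)  # 9.0 0.0
```

Without the clamp, the integral keeps accumulating while the actuator is saturated, so a large overshoot must be 'unwound' after the error reverses sign; with the clamp, recovery is immediate.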
It may once again be emphasised that, although there are criticisms of and drawbacks to (pure) integral control, such as integral wind-up and overshoot, the objective here is to reduce the CESTDDVN ('product variability') of the outgoing product quality, and so such criticisms can be put to rest.

4.11 ANALYSIS OF SIMULATION RESULTS FOR DEAD-TIME 1.0

The mean of the forecast error (ME in the computer simulation programme), the standard deviation of the control error (the control error standard deviation, CESTDDVN), the mean and standard deviation of the adjustment (Mdxt and SDdxt in the computer simulation) and the values of the process gain (PG) are shown in Table 4.2. In monitoring a closed-loop process operating under a known control algorithm, the 'information' about underlying changes in the process is reflected in the sequences of control actions and process output. This can be found from the values
of adjustment (dxt) and adjustment variance (vardxt) in the simulation results. The effect of control actions needs to be taken into account by an effective process monitoring scheme.

Table 4.2 Process Model and Time Series Controller Parameters for Dead-Time b = 1.0

Notation: PG - Process Gain, CG - Controller Gain. Second-order model with dead-time b = 1.0. For every case, j (No. of runs) = 10000 and CG = 1.0. Each row lists the disturbance parameter Q, the second-order model parameters d1 and d2 and the process gain PG, followed by the MEAN and STD.DVN of the control error (E), the adjustment (dxt) and FREQ.

   Q     d1     d2     PG |  E: MEAN  STD.DVN | dxt: MEAN  STD.DVN | FREQ: MEAN  STD.DVN
 0.05  –1.82  –0.83  0.27 |   –0.012    1.000 |     0.000    0.000 |      0.000    0.000
 0.05  –1.00  –0.25  0.44 |    0.138    1.039 |    –0.072    0.450 |      0.102    0.303
 0.05  –0.27  –0.02  0.78 |    0.311    1.157 |    –0.096    0.558 |      0.190    0.393
 0.05  –0.10   0.00  0.91 |    0.371    1.139 |    –0.088    0.522 |      0.199    0.399
 0.05   0.00   0.00  1.00 |    0.447    1.217 |    –0.100    0.568 |      0.224    0.417
 0.25  –1.82  –0.83  0.27 |    0.033    1.018 |    –0.074    0.404 |      0.086    0.280
 0.25  –1.00  –0.25  0.44 |    0.104    1.030 |    –0.027    0.248 |      0.050    0.217
 0.25  –0.27  –0.02  0.78 |    0.272    1.073 |    –0.025    0.256 |      0.075    0.264
 0.25  –0.10   0.00  0.91 |    0.361    1.109 |    –0.020    0.226 |      0.061    0.240
 0.25   0.00   0.00  1.00 |    0.332    1.099 |    –0.022    0.242 |      0.085    0.279
 0.70  –1.82  –0.83  0.27 |   –0.003    1.001 |    –0.006    0.082 |      0.017    0.128
 0.70  –1.00  –0.25  0.44 |    0.073    1.002 |     0.000    0.008 |      0.001    0.024
 0.70  –0.27  –0.02  0.78 |    0.230    1.033 |     0.000    0.014 |      0.002    0.045
 0.70  –0.10   0.00  0.91 |    0.200    1.040 |     0.000    0.014 |      0.002    0.042
 0.70   0.00   0.00  1.00 |    0.081    1.010 |     0.000    0.011 |      0.001    0.033
 0.75  –1.82  –0.83  0.27 |    0.029    1.002 |    –0.004    0.058 |      0.019    0.135
 0.75  –1.00  –0.25  0.44 |   –0.002    1.000 |     0.000    0.000 |      0.000    0.000
 0.75  –0.27  –0.02  0.78 |    0.002    1.000 |     0.000    0.004 |      0.000    0.010
 0.75  –0.10   0.00  0.91 |    0.093    1.011 |     0.000    0.009 |      0.002    0.040
 0.75   0.00   0.00  1.00 |    0.000    1.000 |     0.000    0.000 |      0.000    0.000
 0.95  –1.82  –0.83  0.27 |   –0.009    1.000 |    –0.006    0.031 |      0.116    0.320
 0.95  –1.00  –0.25  0.44 |    0.021    1.001 |    –0.011    0.039 |      0.389    0.488
 0.95  –0.27  –0.02  0.78 |    0.086    1.002 |    –0.002    0.024 |      0.213    0.410
 0.95  –0.10   0.00  0.91 |    0.108    1.004 |    –0.001    0.022 |      0.187    0.390
 0.95   0.00   0.00  1.00 |    0.114    1.005 |    –0.001    0.018 |      0.125    0.330
This information is used to detect process changes by means of the EWMA forecasts and the gma statistic falling outside the control limits. By virtue of the observations made earlier in Section 4.2, a range of minimal control error sigmas (CESTDDVN) is available for values of Q, and hence of the rate of drift r = (1 – Q), in Table 4.1. As the process drift r decreases from fast drift [that is, as the IMA parameter Q increases from a value of zero (random walk)] to slow drift [that is, as Q approaches 0.70, a non-stationary disturbance], the CESTDDVNs also decrease, to a value close to 1.0 when Q = 0.75 ± 0.05, and the EWMA appears to give good control of the process. Around this value of Q, EWMA forecasts are effective in controlling a process. This inference is possible because the control error sigma achieved in controlling a process with no dead-time (and no carry-over effects) is 1.0 (page 286, Baxley [1991]). Since CESTDDVN values close to 1.0 are obtained for the second-order dynamic process with dead-time b = 1, 2, it is possible to achieve good (feedback) control possessing features such as (i) permissible gain of the feedback (closed) loop,
(ii) stability of the feedback control loop and (iii) precise regulation of loops containing dead-time, mentioned in Section 3.2. The range of control error sigmas (CESTDDVN) for corresponding values of Q and of the process drift r = (1 – Q) can be used to formulate process regulation schemes. Since only whole periods of dead-time (b = 1, 2) are considered, it is possible to estimate the best achievable performance, measured by the variance of the output (mean square error), knowing the ratio of the integer portion of the process time delay (b = 1 and 2 in this monograph) to the control interval (the AI values obtained from the simulation results) (Harris [1989]). These values of Q and AI are used to formulate process regulation schemes in Section 4.17.2. The control error sigma (SE) and adjustment frequency (AF) obtained by Baxley [1991] for extended simulation runs with IMA parameter Q = 0.75 are: (i) for the EWMA controller, SE = 1.15 and AF = 0.035 for dead-time b = 1.0 and controller gain CG = 1.0, with no carry-over effects (dynamics or inertia d = 0) and control limits L = 3.15 (page 278, Baxley [1991]); and (ii) for the CUSUM controller, SE = 1.13 and AF = 0.064 (page 280, Baxley [1991]) for dead-time b = 1.0 and CG = 1.04, again with no carry-over effects and with 'h' (the CUSUM controller tuning parameter, analogous to the spacing between control limits) equal to 3.98. Note that the simulation results obtained for the time series controller with IMA parameter Q = 0.75 for dead-time b = 1.0, d1 = d2 = 0 are CESTDDVN = 1.0 and AF = 0 (refer to Table 4.1). The SE and AF obtained by Baxley [1991] with IMA parameters Q = 0.25 and Q = 0.50 are (1.77, 0.065) (L = 3.63) and (1.37, 0.059) (L = 3.12) for the EWMA controller, and (1.73, 0.105) (h = 3.64) and (1.48, 0.053) (h = 4.50) for the CUSUM controller (extended) simulation runs, respectively.
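Since the adjustment interval is simply the reciprocal of the adjustment frequency, the AF figures quoted here convert directly to AI values. The helper below is illustrative only.

```python
def adjustment_interval(af):
    """Adjustment interval (AI) is the reciprocal of adjustment frequency (AF)."""
    return 1.0 / af

# Baxley's [1991] quoted adjustment frequencies for Q = 0.75, dead-time b = 1.0
print(round(adjustment_interval(0.035), 1))   # EWMA controller: one adjustment per ~28.6 samples
print(round(adjustment_interval(0.064), 1))   # CUSUM controller: one per ~15.6 samples
```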
The controller gains (CG) in these situations were 0.83 and 0.85 for the EWMA controller, and 0.77 and 0.85
obtained with the CUSUM controller. The EWMA controller has no dead-time compensation term and requires a controller gain below one in order to avoid overcontrol or overcompensation of the output controlled variable. Again, this is one of the issues raised by Kramer (Box and Kramer [1992]) which is taken care of by the time series controller feedback control algorithm with integral and dead-time compensation terms (Equation 3.16). The corresponding CESTDDVN and AF given by the time series controller are 1.099 and 0.085 for Q = 0.25, and 1.068 and 0.06 respectively for Q = 0.50 for inertia d = 0, (dead-time, b = 1.0). In fact, these CESTDDVN values are far below the SE values (1.25 for Q = 0.25 and 1.118 for Q = 0.50) presented by Baxley (page 286, [1991]) as indicated earlier in Table 3.1 (Chapter 3). Hence, it can be concluded that the feedback control provided by the time series controller adjustment Equation (3.16) is superior in performance to the performance of the EWMA and CUSUM controllers. The model and controller parameters for dead-time, b = 2.0 are given in Table 4.3. Notation used in computer simulation and Table 4.3 A - Random Shock NID(0, SA) SA - Standard Deviation of Random Shock AL - Control Parameter = 3 * SA, PG - Process Gain, CG - Controller Gain Table 4.3 Process Model and Time Series Controller Parameters for Dead-Time b = 2.0 Model Parameters Controller Parameters Second-order model with dead-time b = 2.0 Q = 0.05,

d1 = –1.82,

d2 = –0.83, AL = 3.02, PG = 0.27, CG = 1.0,

SA = 1.01

ARIABLE

j(No.of runs)

MEAN

STD.DVN VARIANCE

Control error (E)

10000

0.454

1.514

2.292

Adjustment (dxt)

10000

–0.013

0.438

0.191

FREQ

10000

0.019

0.137

AI = 1/0.019 = 52.63

Q = 0.05,

d1 = –1.00,

d2 = –0.25, AL=3.02, PG = 0.44,

CG = 1.0,

SA = 1.01

VARIABLE

j(No.of runs)

MEAN

STD.DVN

VARIANCE

Control error (E)

10000

0.829

1.567

2.457

Adjustment (dxt)

10000

–0.012

0.382

0.146

FREQ

10000

0.018

0.132

AI = 1/0.018 = 55.55

4.27

Discussion and Analysis of Stochastic (Statistical) Feedback Control Adjustment Q = 0.05,

d1 = –0.27,

d2 = –0.02, AL = 3.00,

PG = 0.78,

CG = 1.0,

SA = 1.00

VARIABLE

j(No.of runs)

MEAN

STD.DVN VARIANCE

Control error (E)

10000

1.069

1.795

3.222

Adjustment (dxt)

10000

–0.033

0.509

0.259

FREQ

10000

0.031

0.177

AI = 1/0.031 = 30.30

Q = 0.05,

d1 = –0.10,

d2 = 0.00, AL = 2.97,

PG = 0.91, CG = 1.0,

SA = 0.99

VARIABLE

j(No.of runs)

MEAN

STD.DVN

Control error (E)

10000

1.222

2.081

4.329

Adjustment (dxt)

10000

–0.028

0.453

0.206

FREQ

10000

0.025

0.155

AI = 1/0.025 = 40

Q = 0.05,

d1 = 0.00,

d2 = 0.00, AL = 3.00,

PG = 1.00,

VARIANCE

CG = 1.0,

SA = 1.00

VARIABLE

j(No.of runs)

MEAN

STD.DVN

Control error (E)

10000

1.002

1.859

Adjustment (dxt)

10000

–0.041

0.460

0.211

FREQ

10000

0.038

0.191

AI = 1/0.038 = 26.31

Q = 0.25,

d1 = –1.82,

d2 = –0.83, AL = 3.00,

VARIANCE 3.457

PG = 0.27,

CG = 1.0,

SA = 1.00

VARIABLE

j(No.of runs)

MEAN

STD.DVN

Control error (E)

10000

0.281

1.306

1.705

Adjustment (dxt)

10000

–0.006

0.202

0.041

FREQ

10000

0.007

Q = 0.25,

d1 = –1.00,

d2 = –0.25, AL = 2.98,

0.084

VARIANCE

AI = 1/0.007 = 142.86

PG = 0.44,

CG = 1.0,

SA = 0.99

VARIABLE

j(No.of runs)

MEAN

STD.DVN

Control error (E)

10000

0.449

1.388

1.926

Adjustment (dxt)

10000

–0.008

0.241

0.058

FREQ

10000

0.007

0.083

AI = 1/0.007 = 142.86

Q = 0.25,

d1 = –0.27,

d2 = –0.02, AL = 3.03,

PG = 0.78,

VARIANCE

CG = 1.0,

SA = 1.01

VARIABLE

j(No.of runs)

MEAN

STD.DVN

Control error (E)

10000

0.810

1.635

2.672

Adjustment (dxt)

10000

–0.011

0.240

0.058

FREQ

10000

0.011

0.104

AI = 1/0.011 = 90.90

Q = 0.25,

d1 = –0.10,

d2 = 0.00, AL = 2.98,

VARIANCE

PG = 0.91, CG = 1.0,

SA = 0.99

VARIABLE

j(No.of runs)

MEAN

STD.DVN

Control error (E)

10000

0.489

1.555

VARIANCE 2.419

Adjustment (dxt)

10000

–0.019

0.250

0.062

FREQ

10000

0.015

0.123

AI = 1/0.015 = 66.66

Q = 0.25, d1 = 0.00, d2 = 0.00, AL = 3.02, PG = 1.00, CG = 1.0, SA = 1.01
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.450     1.570     2.465
Adjustment (dxt)     10000             –0.017    0.239     0.057
FREQ                 10000             0.014     0.118
AI = 1/0.014 = 71.42

Q = 0.70, d1 = –1.82, d2 = –0.83, AL = 3.00, PG = 0.27, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.027     1.055     1.114
Adjustment (dxt)     10000             –0.005    0.087     0.008
FREQ                 10000             0.006     0.079
AI = 1/0.006 = 166.66

Q = 0.70, d1 = –1.00, d2 = –0.25, AL = 3.00, PG = 0.44, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.311     1.092     1.192
Adjustment (dxt)     10000             0.000     0.041     0.002
FREQ                 10000             0.002     0.047
AI = 1/0.002 = 500

Q = 0.70, d1 = –0.27, d2 = –0.02, AL = 2.99, PG = 0.78, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.598     1.181     1.394
Adjustment (dxt)     10000             –0.001    0.046     0.002
FREQ                 10000             0.001     0.056
AI = 1/0.001 = 333.33

Q = 0.70, d1 = –0.10, d2 = 0.00, AL = 3.02, PG = 0.91, CG = 1.0, SA = 1.01
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.308     1.144     1.309
Adjustment (dxt)     10000             –0.002    0.052     0.003
FREQ                 10000             0.005     0.068
AI = 1/0.005 = 200

Q = 0.70, d1 = 0.00, d2 = 0.00, AL = 3.00, PG = 1.00, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             2.144     1.196     1.429
Adjustment (dxt)     10000             0.000     0.036     0.001
FREQ                 10000             0.004     0.060
AI = 250

Q = 0.75, d1 = –1.82, d2 = –0.83, AL = 3.01, PG = 0.27, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             –0.010    1.035     1.071
Adjustment (dxt)     10000             –0.007    0.081     0.007
FREQ                 10000             0.009     0.095
AI = 1/0.009 = 111.11

Q = 0.75, d1 = –1.00, d2 = –0.25, AL = 2.98, PG = 0.44, CG = 1.0, SA = 0.99
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.165     1.054     1.110
Adjustment (dxt)     10000             0.000     0.021     0.000
FREQ                 10000             0.001     0.030
AI = 1000

Q = 0.75, d1 = –0.27, d2 = –0.02, AL = 3.01, PG = 0.78, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.841     1.162     1.349
Adjustment (dxt)     10000             0.000     0.024     0.001
FREQ                 10000             0.001     0.037
AI = 1000

Q = 0.75, d1 = –0.10, d2 = 0.00, AL = 2.98, PG = 0.91, CG = 1.0, SA = 0.99
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.131     1.052     1.106
Adjustment (dxt)     10000             –0.002    0.040     0.002
FREQ                 10000             0.004     0.064
AI = 1/0.004 = 250

Q = 0.75, d1 = 0.00, d2 = 0.00, AL = 2.98, PG = 1.00, CG = 1.0, SA = 0.99
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.910     1.274     1.622
Adjustment (dxt)     10000             0.000     0.025     0.001
FREQ                 10000             0.001     0.036
AI = 1000

Q = 0.95, d1 = –1.82, d2 = –0.83, AL = 2.99, PG = 0.27, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.066     1.005     1.011
Adjustment (dxt)     10000             –0.001    0.031     0.001
FREQ                 10000             0.188     0.391
AI = 1/0.188 = 5.319

Q = 0.95, d1 = –1.00, d2 = –0.25, AL = 2.99, PG = 0.44, CG = 1.0, SA = 1.00
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.265     1.005     1.010
Adjustment (dxt)     10000             0.000     0.028     0.001
FREQ                 10000             0.330     0.470
AI = 1/0.33 = 3.03

Q = 0.95, d1 = –0.27, d2 = –0.02, AL = 3.04, PG = 0.78, CG = 1.0, SA = 1.01
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             0.830     1.004     1.008
Adjustment (dxt)     10000             0.000     0.017     0.000
FREQ                 10000             0.229     0.420
AI = 1/0.229 = 4.366

Q = 0.95, d1 = –0.10, d2 = 0.00, AL = 2.98, PG = 0.91, CG = 1.0, SA = 0.99
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             1.007     1.009     1.018
Adjustment (dxt)     10000             0.000     0.015     0.000
FREQ                 10000             0.202     0.402
AI = 5

Q = 0.95, d1 = 0.00, d2 = 0.00, AL = 2.98, PG = 1.00, CG = 1.0, SA = 0.99
VARIABLE             j (No. of runs)   MEAN      STD.DVN   VARIANCE
Control error (E)    10000             1.193     1.008     1.017
Adjustment (dxt)     10000             0.000     0.011     0.000
FREQ                 10000             0.169     0.375
AI = 1/0.169 = 5.9

The analysis of the simulation results for dead-time b = 2, given in Table 4.3 above, is left as an exercise for the reader; a similar analysis can be made as shown in Section 4.11.

4.12 THE EFFECT OF CONTROL LIMITS ON PRODUCT VARIABILITY

To determine the effect of the control limits L on varCE and vardxt for different monitoring intervals (AIs), the increases in varCE for different AI and L are found from Table 4.4 (Section 4.14 on ‘Constrained Variance Control’). The choice of L is based on statistical considerations; the limits have been set to L = 3sa (3 times the standard deviation of the random shocks). The lower CESTDDVN values for L = 3 are the result of more frequent interventions, with smaller adjustment intervals, to return a process with fast drifts to target. If adjustments are made automatically, as in some chemical processes involving small adjustment costs, and if the specification limits are narrow relative to the process variability, then it may be proper to set a low value of L, to minimise control error variability so that the production of off-quality material is kept to a minimum. Conversely, the increase in product variability due to over-control can be avoided by widening the control limits (increasing L). This is substantiated by Box, Jenkins and MacGregor [1974], who showed that the motivation for increasing L is to reduce costs when process adjustments are expensive. On the contrary, if automatic adjustments are not possible and if the specification limits are wide, then it may be proper to set L = 3 to minimise adjustment costs. Since the values of L in the simulation are around 3, depending upon the nature of the drifts, the performance for values of L around 3 is similar to that of a statistical process control strategy with an average run length (ARL) of 400 under the assumption that the process is on target with no drifts (Baxley [1991]).

For a process with dynamic parameters d1, d2 and process gain PG (defined in Section 3.6 as ‘the eventual effect of a unit change in the input manipulative variable after the dynamic response has been completed’), the value of w (‘the magnitude of the response to a unit step change in the first period following the dead-time’) is equal to

PG(1 – d1 – d2) = [1/(1 – d1 – d2)] × (1 – d1 – d2) = 1.

This shows that the response to a unit step change is fully (100%) reflected in the process despite the dynamic parameters d1 and d2. The dynamic parameters measure the carry-over of the exponential process response into succeeding sample periods. If there are no dynamics in the system (d1 and d2 are very nearly 0), there are no carry-over effects and so PG = w.
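The carry-over described above can be checked numerically. The sketch below is a hypothetical illustration: the difference-equation form y_t = d1·y_{t-1} + d2·y_{t-2} + w·x_{t-b} is assumed from the second-order model of Section 3.6. A unit step input shows the first response after the dead-time equal to w, and the eventual response equal to PG = w/(1 – d1 – d2):

```python
def step_response(d1, d2, w=1.0, b=1, n=500):
    """Output of y_t = d1*y_{t-1} + d2*y_{t-2} + w*x_{t-b} for a unit step in x."""
    y = []
    for t in range(n):
        y1 = y[t - 1] if t >= 1 else 0.0
        y2 = y[t - 2] if t >= 2 else 0.0
        x_lag = 1.0 if t - b >= 0 else 0.0   # step seen only after the dead-time b
        y.append(d1 * y1 + d2 * y2 + w * x_lag)
    return y

resp = step_response(d1=-1.00, d2=-0.25)   # a stable parameter pair from the tables
first = resp[1]       # first response after the dead-time: equals w = 1
eventual = resp[-1]   # steady state: w/(1 - d1 - d2) = 1/2.25, i.e. PG
```

For d1 = –1.00, d2 = –0.25 and w = 1 this gives PG = 1/2.25 ≈ 0.44, in line with the PG values tabulated above for that parameter pair.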

4.13 DEPENDENCE OF ADJUSTMENT VARIANCE ON DYNAMIC PARAMETERS AND PROCESS DRIFT

In Section 3.6, the expressions for d1 and d2 for a ‘critically damped’ second-order system were shown to be d1 = 2e^(–T/t1) and d2 = –e^(–2T/t2), where t is the process time constant and T is the sampling interval. Since t1 = t2 = t for the critically damped second-order dynamic system,

d1 = 2e^(–T/t) and d2 = –e^(–2T/t).

Since d1 and d2 are functions of the sampling interval T, the control adjustment action variance (vardxt) depends on the rate of drift r and also on the inertial process lags d1 and d2. The expected variance of the control adjustment action (vardxt) (from the expression for xt, Equation (3.16)) is evaluated by using the fact that minimum variance control generates deviations et from target that are equivalent to the random shocks {at}. That is,

xt = [(1 – Q)/(PG(1 – d1 – d2))] × [1/(1 – d1B – d2B²)] at

The second term on the right is in the form of a general ARIMA (2, 0, 1) process. It can be shown that the variance of this process is

(1 – d2)sa² / [(1 – d1 – d2)(1 + d1 – d2)(1 + d2)].

[Refer to Equation (3.2.28), page 62, Box and Jenkins [1970, 1976], which gives the expression for the variance of the second-order autoregressive process described by Equation (3.2.17), page 58 of that monograph.]

So,

Var(xt) = Var{[(1 – Q)/(PG(1 – d1 – d2))] × [1/(1 – d1B – d2B²)] at}

= (1 – d2)(1 – Q)² sa² / [PG²(1 – d1 – d2)² (1 – d1 – d2)(1 + d1 – d2)(1 + d2)]

= (1 – d2)r² / [w² (1 – d1 – d2)(1 + d1 – d2)(1 + d2)]

= (1 – d2)r² / [(1 – d1 – d2)(1 + d1 – d2)(1 + d2)]    ...(4.5)

because sa = 1.0, r = 1 – Q and PG(1 – d1 – d2) = w = 1.

Table 4.4 shows the adjustment variance (vardxt) values for dead-time b = 1 and for values of d1 and d2 that satisfy stability conditions. It can be shown that the values of vardxt obtained from simulation of Equation (3.16) compare reasonably well with the numerical values calculated from Equation (4.5) above. Since d1 and d2 are functions of the monitoring (sampling) interval, the effect of inertia may be reduced by lengthening the monitoring interval. The adjustment variance is a minimum (0.003) for large monitoring intervals (52.6), for example for Q = 0.75, d1 = –1.82 and d2 = –0.83 with dead-time b = 1.0.

From Table 4.4 (Section 4.14) it is also observed that, for an increase in varCE and a particular value of the IMA parameter Q, large values of d1 and d2 yield considerable reductions in the adjustment control action variance (vardxt). The dependence on past control actions increases as d1 and d2 get larger, which is in agreement with the observation of Kramer [page 157, 1990]. For varCE equal to 1.002 and Q = 0.70, the longest monitoring interval (58.8) occurs when d1 = –1.82 and d2 = –0.83. When d1 and d2 are both equal to 0, the control adjustment action (dxt) leads to an immediate adjustment in the controller set point and there is no bias due to the process dynamics. Longer monitoring intervals are possible as the bias due to the process dynamics is reduced.

Plots of CESTDDVN and AI = 1/MFREQ against Q, for various values of the parameters d1 and d2 and dead-time b = 1, are shown in Figures 4.2 and 4.4. From the plots of Q versus CESTDDVN in Figure 4.2, it can be seen that as the values of d1 and d2 approach 0 the plot becomes flatter, and it is almost straight for d1, d2 equal to 0.
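Equation (4.5) can be evaluated directly. The following sketch simply transcribes it, with sa = 1 and w = 1 as in the derivation above, using parameter names that mirror the text:

```python
def vardxt_min_var(theta, d1, d2):
    """Equation (4.5): adjustment variance under minimum variance control,
    with sigma_a = 1 and PG*(1 - d1 - d2) = w = 1; the rate of drift is
    r = 1 - theta."""
    r = 1.0 - theta
    return (1.0 - d2) * r ** 2 / (
        (1.0 - d1 - d2) * (1.0 + d1 - d2) * (1.0 + d2))
```

With no dynamics (d1 = d2 = 0) the expression reduces to r². For the pair d1 = –1.82, d2 = –0.83 the factor (1 + d1 – d2) in the denominator is small, so the unconstrained minimum variance adjustment variance becomes very large, which is consistent with the large alternating control actions minimum variance control requires when d1 and d2 are large.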
The peaks in Figure 4.4 for the plots of Q versus AI are due to the process requiring adjustments only after long sampling periods (in fact, zero adjustments, dxt = 0, for 1000 adjustment intervals at Q = 0.70 and for 100 adjustment intervals at Q = 0.75). This is because the EWMA has fairly good control of the process around


these Q values. It is shown also that the control adjustment action variance (vardxt) is a fraction of the adjustment variance obtained under minimum variance control for large d1 and d2 and small values of r (large Q values). When r is small and the drifts are slow, long monitoring intervals can be considered: the increases in varCE lead to reductions in the control adjustment action variance (vardxt). The amount of increase in CESTDDVN from extending the AIs is the same irrespective of dead-time, since dead-time does not contribute to an increase in the control error sigma; this can be seen from the results.

Figure 4.2 CESD against Q (for various delta values), for Dead-time b = 1
(Two panels of Theta versus CESD: delta1 = –1.82, delta2 = –0.83; and delta1 = –1.0, delta2 = –0.25.)

Minimum variance control requires large alternating control actions to give minimum output variance when d1 and d2 are large. The alternating character is eliminated by allowing slight increases in output variance (varCE). Substantial reductions in control action variance can be achieved, for minor increases in output variance, by constraining the gma theta. By means of this constraint, minimum variance (or MMSE) control with minimised control actions was achieved independent of the monitoring interval. Note that the control error standard deviation (CESTDDVN) is also denoted as CESD in Figure 4.2 and in some places in this monograph.

Figure 4.3 Variation in CESTDDVN versus Values of the IMA Parameter, Q

Figure 4.4 Variation in Theta versus Values of Adjustment Interval AI

4.14 CONSTRAINED MINIMUM VARIANCE CONTROL AND CONTROL ACTION VARIANCE

The objective in (feedback control) schemes that employ minimum variance control is to find a control scheme (a control strategy relating control adjustment actions at time t to previous control adjustments and to the current and previous deviations) such that the variance of the forecast error, E(et²), is minimised. The aim in constrained variance control schemes is to find a control strategy that minimises the gma theta subject to a constraint that the calculated individual gma values are less than some limit, say L, where L (referred to in Section 4.4) denotes the number of multiples of sa (L = 3sa, 3 times the standard deviation of the random component of the disturbance) used for the EWMA control limits. These control limits were set as in Equations (4.1) and (4.2). As the control limits increase, more emphasis is placed on reducing the variance of control adjustment actions and less on the variance of deviations from


target, or output variances (also termed by Kramer the MSD, mean squared deviation from target) (page 155, Kramer [1990]). Following the principle of the EWMA simulation programme of Baxley [1991] (page 293, Baxley [1991]), the gma theta is constrained to be the same as the IMA parameter theta (the process drift r being equal to 1 – Q) for the disturbance in the simulation of the stochastic feedback control algorithm (Equation 3.16). Table 4.4 shows that the constrained variance control scheme generates a smaller control adjustment action variance (variance of the control adjustment, (SDdxt)², that is, vardxt in the computer simulation) than would have been possible with minimum variance control, at the expense of a larger output variance (control error variance, varCE in the computer simulation). The position of the control limits, which determined the control scheme, also changed with Q, as can be seen from Equations (4.1) and (4.2).

Table 4.4 is used to identify a range of L that will yield an increase in varCE, and the corresponding increase in control action adjustment variance, for a combination of Q, d1 and d2. The constrained variance control scheme is determined once a value of L is chosen, usually corresponding to a particular increase in varCE. From Table 4.4, it is found that the constrained control adjustment action variance can be reduced to within 0.7% of the minimum variance control action variance (namely 0) when a 0.2% increase in varCE can be allowed in the final product (for Q = 0.70, d1 = –1.82 and d2 = –0.83, dead-time b = 1.0). The value of L corresponding to this constrained variance control scheme is between 2.97 and 3.04. It can be shown that substantial reductions in vardxt can be achieved, in some instances with only minor increases in varCE, by using a constrained control scheme. Table 4.4 gives information on the AIs (adjustment intervals) for a certain reduction in vardxt that will result in an increase in varCE.

In some situations, there may be substantial increases in varCE due to the control scheme being based only on the deviations associated with the AIs. With higher AIs, it is possible to have a larger L, and hence the process is allowed to drift farther from target before an adjustment is made, causing an increase in varCE.

Notation used in the computer simulation and Table 4.4:

A       - Random shock, NID(0, SA)
SA      - Standard deviation of the random shock
AL      - Control parameter = 3 × SA
FREQ    - Adjustment frequency; MFREQ - mean of FREQ
CE      - Control error; varCE - control error variance
dxt     - Control adjustment (xt)
vardxt  - Variance of the control adjustment, (SDdxt)²
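The simulation behind Table 4.4 can be sketched in outline. The following is a simplified, hypothetical reconstruction, not the author's programme: it assumes no process dynamics or dead-time (d1 = d2 = 0), generates an IMA(0, 1, 1) disturbance, and applies a bounded (deadband) EWMA adjustment, reporting the quantities defined in the notation above:

```python
import random
import statistics

def simulate_scheme(theta, L, n=20000, seed=1):
    """Bounded EWMA adjustment of an IMA(0,1,1) disturbance: the process is
    adjusted only when the EWMA forecast crosses +/- L*sigma_a. Returns the
    Table 4.4 quantities varCE, vardxt, MFREQ and AI = 1/MFREQ."""
    rng = random.Random(seed)
    lam = 1.0 - theta              # rate of drift, r = 1 - theta
    prev_a = 0.0                   # previous random shock
    disturbance = 0.0              # IMA(0,1,1) level
    compensation = 0.0             # total adjustment applied so far
    ewma = 0.0
    ce, dxt = [], []               # control errors and per-period adjustments
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)                 # random shock A, NID(0, 1)
        disturbance += a - theta * prev_a       # IMA(0,1,1) increment
        prev_a = a
        e = disturbance + compensation          # control error CE
        ce.append(e)
        ewma = lam * e + theta * ewma           # EWMA forecast of the level
        if abs(ewma) > L:                       # outside the action limits AL
            dxt.append(-ewma)                   # adjust by minus the forecast
            compensation -= ewma
            ewma = 0.0                          # restart the EWMA after adjusting
        else:
            dxt.append(0.0)
    mfreq = sum(1 for d in dxt if d != 0.0) / n  # MFREQ: mean of the indicator
    return {"varCE": statistics.pvariance(ce),
            "vardxt": statistics.pvariance(dxt),
            "MFREQ": mfreq,
            "AI": 1.0 / mfreq if mfreq else float("inf")}
```

Widening the limit L trades a larger varCE for fewer and hence cheaper adjustments (a larger AI), which is the trade-off Table 4.4 quantifies.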

Table 4.4 Constrained Variance Control Scheme

Q = 0.05, d1 = –1.82, d2 = –0.83, b = 1.0, AL = 3.02, SA = 1.01
VARIABLE             VARIANCE
Control error (E)    1.000
Adjustment (dxt)     0.000
FREQ                 0 (MFREQ)
AI = 0

Q = 0.05, d1 = –1.00, d2 = –0.25, b = 1.0, AL = 3.02, SA = 1.01
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.079            7.90%
Adjustment (dxt)     0.203                                20.30%
FREQ                 0.102 (MFREQ)
AI = 1/MFREQ = 9.80

Q = 0.05, d1 = –0.27, d2 = –0.02, b = 1.0, AL = 3.00, SA = 1.00
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.338            24.00%
Adjustment (dxt)     0.311                                53.2%
FREQ                 0.19 (MFREQ)
AI = 1/MFREQ = 5.26

Q = 0.05, d1 = –0.10, d2 = 0.00, b = 1.0, AL = 2.97, SA = 0.99
VARIABLE             VARIANCE         Decrease in varCE   Decrease in vardxt
Control error (E)    1.298            2.98%
Adjustment (dxt)     0.273                                12.22%
FREQ                 0.199 (MFREQ)
AI = 1/MFREQ = 5.03

Q = 0.05, d1 = 0.00, d2 = 0.00, b = 1.0, AL = 3.00, SA = 1.00
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.480            14.02%
Adjustment (dxt)     0.323                                18.32%
FREQ                 0.224 (MFREQ)
AI = 1/MFREQ = 4.46

Q = 0.25, d1 = –1.82, d2 = –0.83, b = 1.0, AL = 3.00, SA = 1.00
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.036            14.02%
Adjustment (dxt)     0.163                                18.32%
FREQ                 0.086 (MFREQ)
AI = 1/MFREQ = 11.63

Q = 0.25, d1 = –1.00, d2 = –0.25, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.061            2.41%
Adjustment (dxt)     0.062                                61.96%
FREQ                 0.05 (MFREQ)
AI = 1/MFREQ = 20.00

Q = 0.25, d1 = –0.27, d2 = –0.02, b = 1.0, AL = 3.03, SA = 1.01
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.152            8.57%
Adjustment (dxt)     0.066                                6.45%
FREQ                 0.075 (MFREQ)
AI = 1/MFREQ = 13.33

Q = 0.25, d1 = –0.10, d2 = 0.00, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Increase in varCE   Decrease in vardxt
Control error (E)    1.229            6.68%
Adjustment (dxt)     0.051                                22.72%
FREQ                 0.061 (MFREQ)
AI = 1/MFREQ = 16.39

Q = 0.25, d1 = 0.00, d2 = 0.00, b = 1.0, AL = 3.02, SA = 1.01
VARIABLE             VARIANCE         Decrease in varCE   Increase in vardxt
Control error (E)    1.208            1.71%
Adjustment (dxt)     0.059                                15.69%
FREQ                 0.085 (MFREQ)
AI = 1/MFREQ = 11.76

Q = 0.70, d1 = –1.82, d2 = –0.83, b = 1.0, AL = 3.00, SA = 1.00
VARIABLE             VARIANCE         Decrease in varCE   Decrease in vardxt
Control error (E)    1.002            17.05%
Adjustment (dxt)     0.007                                88.14%
FREQ                 0.017 (MFREQ)
AI = 1/MFREQ = 58.82

Q = 0.70, d1 = –1.00, d2 = –0.25, b = 1.0, AL = 3.00, SA = 1.00
VARIABLE             VARIANCE         Increase in varCE   Decrease in vardxt
Control error (E)    1.004            0.19%
Adjustment (dxt)     0.000                                100%
FREQ                 0.001 (MFREQ)
AI = 1000.00

Q = 0.70, d1 = –0.27, d2 = –0.02, b = 1.0, AL = 2.99, SA = 1.00
VARIABLE             VARIANCE         Increase in varCE   Change in vardxt
Control error (E)    1.067            6.27%
Adjustment (dxt)     0.000                                Nil
FREQ                 0.002 (MFREQ)
AI = 500.00

Q = 0.70, d1 = –0.10, d2 = 0.00, b = 1.0, AL = 3.02, SA = 1.01
VARIABLE             VARIANCE         Increase in varCE   Change in vardxt
Control error (E)    1.081            1.31%
Adjustment (dxt)     0.000                                Nil
FREQ                 0.002 (MFREQ)
AI = 500.00

Q = 0.70, d1 = 0.00, d2 = 0.00, b = 1.0, AL = 3.00, SA = 1.00
VARIABLE             VARIANCE         Decrease in varCE   Change in vardxt
Control error (E)    1.020            5.64%
Adjustment (dxt)     0.000                                Nil
FREQ                 0.001 (MFREQ)
AI = 1000.00

Q = 0.75, d1 = –1.82, d2 = –0.83, b = 1.0, AL = 3.01, SA = 1.00
VARIABLE             VARIANCE         Decrease in varCE   Increase in vardxt
Control error (E)    1.005            1.47%
Adjustment (dxt)     0.003                                0.3%
FREQ                 0.019 (MFREQ)
AI = 1/0.019 = 52.63

Q = 0.75, d1 = –1.00, d2 = –0.25, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Decrease in varCE   Decrease in vardxt
Control error (E)    1.000            0.49%
Adjustment (dxt)     0.000                                0.30%
FREQ                 0.000 (MFREQ)
AI = 0.00

Q = 0.75, d1 = –0.27, d2 = –0.02, b = 1.0, AL = 3.01, SA = 1.00
VARIABLE             VARIANCE         Change in varCE     Change in vardxt
Control error (E)    1.000            Nil
Adjustment (dxt)     0.000                                Nil
FREQ                 0.000 (MFREQ)
AI = 0.00

Q = 0.75, d1 = –0.10, d2 = 0.00, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Increase in varCE   Change in vardxt
Control error (E)    1.023            2.30%
Adjustment (dxt)     0.000                                Nil
FREQ                 0.002 (MFREQ)
AI = 500.00

Q = 0.75, d1 = 0.00, d2 = 0.00, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Decrease in varCE   Change in vardxt
Control error (E)    1.000            2.24%
Adjustment (dxt)     0.000                                Nil
FREQ                 0.000 (MFREQ)
AI = 0.00

Q = 0.95, d1 = –1.82, d2 = –0.83, b = 1.0, AL = 2.99, SA = 1.00
VARIABLE             VARIANCE         Decrease in varCE   Change in vardxt
Control error (E)    1.000            Nil
Adjustment (dxt)     0.001                                0.10%
FREQ                 0.116 (MFREQ)
AI = 1/0.116 = 8.62

Q = 0.95, d1 = –1.00, d2 = –0.25, b = 1.0, AL = 2.99, SA = 1.00
VARIABLE             VARIANCE         Increase in varCE   Increase in vardxt
Control error (E)    1.001            0.10%
Adjustment (dxt)     0.002                                100.00%
FREQ                 0.389 (MFREQ)
AI = 1/0.389 = 2.57

Q = 0.95, d1 = –0.27, d2 = –0.02, b = 1.0, AL = 3.04, SA = 1.01
VARIABLE             VARIANCE         Increase in varCE   Decrease in vardxt
Control error (E)    1.003            0.19%
Adjustment (dxt)     0.001                                50.00%
FREQ                 0.213 (MFREQ)
AI = 1/0.213 = 4.69

Q = 0.95, d1 = –0.10, d2 = 0.00, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Increase in varCE   Decrease in vardxt
Control error (E)    1.007            0.39%
Adjustment (dxt)     0.000                                100.00%
FREQ                 0.187 (MFREQ)
AI = 1/0.187 = 5.34

Q = 0.95, d1 = 0.00, d2 = 0.00, b = 1.0, AL = 2.98, SA = 0.99
VARIABLE             VARIANCE         Increase in varCE   Change in vardxt
Control error (E)    1.009            0.19%
Adjustment (dxt)     0.000                                Nil
FREQ                 0.125 (MFREQ)
AI = 1/0.125 = 8.00

Table 4.4 shows the variance of the control error (varCE), and variance of the feedback control adjustment (vardxt) along with the mean frequency of process adjustment (MFREQ) for different values of the IMA parameter Q and the dynamic process parameters d1 and d2. The stochastic feedback control algorithm minimises the variance of the output at the sampling instants (Venkatesan [1997]). Table 4.4 gives some of the output results of the simulation programme. This table shows the standard deviation of the control error (CESTDDVN) and an average (MFREQ) for an indicator variable (FREQUENCY), which takes the value 1 for sample periods with an adjustment and zero otherwise. The results are shown for various values of Q and d1 and d2 for dead-time b = 1.0.


For the minimum variance (time series) controller built on the stochastic algorithm, a sample is taken and the process is adjusted immediately at the adjustment interval AI = 1/MFREQ. Kramer [1992] observed that the effect of control adjustment variance on the process must be considered when making feedback control adjustments. This is one of the reasons for presenting some of the variance values obtained from simulation of the stochastic feedback control algorithm in this section, for completeness of the discussion on minimum variance control.

Minimum variance control requires large alternating control actions to give minimum output variance when the process dynamic parameters d1 and d2 are large. The effect of this alternating character can be either reduced or (partially) eliminated by allowing slight increases in the output control error variance (varCE). In constrained minimum variance control schemes, reduced control action (in the input feedback adjustments) is achieved at the cost of small increases in the mean square error (MSE) at the output, by placing a constraint on the input manipulated variable. Kramer [1990], in order to evaluate the trade-off between the two variances, developed a constrained variance control scheme to demonstrate the effect on both the adjustment variance and the specified output variance. Kramer [1990] derived expressions for the disturbance and the output effect of control actions as functions of random shocks, independent of the control scheme, and considered approaches for reducing adjustment variability (vardxt). Interest here is focused on reducing product variability (CESTDDVN) only and on minimising the variance of the product quality variable at the output. The principle of the EWMA simulation programme (Baxley [1991]) is followed in constraining the geometric moving average (gma) Q to be the same as the IMA parameter Q itself in the simulation of the stochastic algorithm.
Table 4.4 shows the values of the adjustment variance (vardxt) for dead-time b = 1 and for values of d1 and d2 that satisfy feedback control stability conditions. The feedback control input adjustment variance is a minimum (0.003) for large monitoring intervals (52.6), for example for Q = 0.75, d1 = –1.82 and d2 = –0.83 with dead-time b = 1.0. From Table 4.4, it can also be observed that, for an increase in varCE and a particular value of Q, large values of d1 and d2 yield considerable reductions in the feedback control adjustment variance (vardxt). The dependence on past control action increases as d1 and d2 get larger, which is in agreement with the observation of Kramer [1990]. For a control error variance varCE equal to 1.002 and Q = 0.70, the longest monitoring interval (58.8) occurs when d1 = –1.82 and d2 = –0.83. When d1 and d2 are both equal to 0, the feedback control adjustment (xt) leads to an immediate adjustment in the controller set point and there is no bias due to the process dynamics. Larger monitoring intervals are possible as the bias due to the process dynamics is reduced.


4.15 EFFECT OF INCREASE IN DEAD-TIME ON CONTROL ERROR STANDARD DEVIATION (PRODUCT VARIABILITY) AND ADJUSTMENT INTERVAL

Dead-time and inertia influence the determination of the adjustment interval (AI) when a process is drifting. Tables 4.2 and 4.3 are used to discuss the effect of an increase in dead-time from b = 1 to b = 2. For the set of values d1 = –1, d2 = –0.25 and Q = 0.05, the CESTDDVN and AI from Table 4.2 are 1.039 and 9.8 for dead-time b = 1.0. The corresponding values for dead-time b = 2, from Table 4.3, are 1.567 and 55.56. On comparing these sets of values, it is found that for a process with the same parameters d1 = –1, d2 = –0.25 and Q, the CESTDDVN has increased from 1.039 to 1.567. However, the process gain is the same in both cases, showing that dead-time does not contribute to the steady-state feedback control (closed-loop) gain or process gain; dead-time offers no gain contribution. It has, however, increased the CESTDDVN from 1.039 for b = 1 to 1.567 for b = 2, the corresponding adjustment intervals (AIs) being 9.8 for b = 1 and 55.56 for b = 2. Thus it is seen that the penalty for the existence of dead-time in a process is an increase in the CESTDDVN, or product variability.

It can also be shown that the penalty for a process with dead-time is more severe when the process has fast drifts than when the drifts are slow. This can be seen by considering the decrease in the rate of drift from r = 0.95 (Q = 0.05) to r = 0.30 (Q = 0.70): the CESTDDVNs achieved are near 1.0 and comparable with those for a process with no dead-time. As the drifts decrease further and become slow, the EWMA weights have less effect on the forecasts and there is a slight increase in CESTDDVN. The situation is similar with the adjustment intervals.
In these situations, if the control scheme is based only on the deviations associated with the adjustment intervals (monitoring periods), there is a (substantial) increase in the output variance (varCE). For a dead-time b = 1, the adjustment interval (AI) is 0 for d1 = –1.82, d2 = –0.83 and Q = 0.05. On comparing these sets of values, it is found that for a process with the same parameters d1 = –1.82, d2 = –0.83 and Q, the AI has increased from 0 to 52.63 for b = 2.0. Again, as the drifts decrease and reach about r = 0.30 (Q = 0.70), the EWMA has effective control of the process and the process requires no adjustment (AI = 0). As the drifts decrease still further and the process becomes almost stationary, larger adjustment intervals are required to bring the already stationary process back to control. The AIs decrease from large values for a process with fast drift to a process with slow drift, as also do the CESTDDVNs. It is observed from Table 4.3 that for longer adjustment (monitoring) intervals, both the output variance (varCE) and the control action (adjustment) variance (vardxt) are large.

It is necessary to know how sensitive the simulation results are to the assumptions made in deriving the feedback control algorithm. It may be a matter of surprise to note that small values of CESTDDVN could be associated with large


AIs. These simulation results should hold up (at least approximately) in a general practical context. It can be seen that the output variance (varCE) depends on the IMA parameter Q and, in turn, on the rate of drift of the process, r = (1 – Q), whereas the control action (adjustment) variance (vardxt) depends on the process dynamic parameters as well as on the process drift r, as shown in Section 4.13. As Q gets larger while the parameters (d1, d2, b) remain constant, changes in the monitoring interval (AI) have a smaller effect on the output variance when Q is near 1.0 (slow drifts; for example 0.95, when the process is tending to become stationary) than when Q takes smaller values such as 0.05 (fast drifts, close to a random walk).

From Table 4.1, it can be seen that for values of r = 0.95 and 0.25, the AIs are 0 and 1.62. As r decreases from 0.95 to 0.05, the AI increases from 0 to 8.62, showing that larger adjustment intervals are required for controlling slow drifts and smaller AIs for fast drifts; it is comparatively easier to control fast drifts than slow drifts. This might be expected, since when the process drifts are slow the disturbance is almost stationary, and it may be practically possible, and sufficient, to adjust a process that is already nearly under control after a long adjustment interval. The plausible reason for requiring small AIs for fast drifts is that the process is close to a random walk: when an adjustment is made to a process experiencing fast drifts, the process goes out of control within a short period of time and another adjustment immediately becomes necessary.

4.16 DEAD-TIME AND FEEDBACK CONTROL SCHEME

It is generally assumed, in controlling a process by applying SPC techniques, that (i) the true process level is a constant and (ii) the common-cause variation and the process state of statistical control follow only a (stable or) stationary model. If this assumption proves to be incorrect, then there may be slight autocorrelation in the process level, affecting the run length and the control chart limits (Harrison and Ross [1991]). This is the scenario in continuous process industries, where the true process level is not constant due to process drifts. Feedback control can be an approach to compensate for the drifts in these circumstances. If the autocorrelations are large and persistent, then a feedback control approach may be more appropriate than an SPC approach for controlling the process. Feedback control can compensate only for the predictable component of the uncompensated process output (MacGregor [1992]). So the effectiveness of feedback control will depend on how much of the output process variance it is possible to predict probabilistically. Thus a situation arises that calls for a perspective view of a given process control situation: when to use SPC and when to use APC. This depends upon the process level remaining


constant, or upon changes in the process being indicated by significant autocorrelation estimates of the process data. In the final analysis, the suggestion is a possible integration of both the SPC and APC procedures, by judicious use of techniques from both disciplines as may be warranted by the current process control conditions.

The EWMA forecast of the (simulated) data was plotted against two parallel action lines in a geometric moving average (gma) control chart. In a feedback control scheme, the position of these control limits is determined by (i) the relative costs of adjustment and of being off-target, and (ii) the degree of non-stationarity of the process. The relative value of these costs is an important factor in deciding the optimal choice of a feedback control scheme. One procedure for reducing the cost of the scheme is to lengthen the sampling (monitoring) interval; this may be less satisfactory in that it may slightly increase the mean square error (page 261, Box and Kramer [1992]).

A second-order dynamic model was assumed to represent the complex dynamic process. The focus now is on the influence and effects of dead-time on a feedback control scheme. For ARIMA (transfer function) models with b > 0 periods of delay, minimum mean square error control action yields a process output that becomes a moving average of at, at–1, ..., at–b (page 279, Vander Wiel and Vardeman [1992]). Due to the delay in the process, the process deviation is a moving average time series model of order b – 1. For b > 2, adjacent values of the process output will be autocorrelated. When the delay exceeds one period, this autocorrelation will still be present, regardless of the feedback control scheme; there will not be any significant autocorrelations beyond lag 2 (MacGregor [1992]). The geometric moving average (gma) Q was used for monitoring, the out-of-control signal being sounded on the basis of this gma statistic.
The process with dead-time will still be in statistical control though the observations may be serially dependent (Kotz and Johnson [1985]). The Shewhart chart helps to monitor stable operation due to common causes and to reveal special causes. So, only when it is possible to establish some statistically significant monitoring criterion will it be proper to react to process changes. A suitable feedback control scheme should then be specifically designed and used to regulate (adjust) the process (Box and Kramer [1992]). This is to prevent a (pure) feedback control adjustment scheme from producing a large mean square error. However, it is possible to obtain the residual sequence at, which can be used for process monitoring even with dead-time (in the feedback control loop). Having taken care of both the system dynamics and dead-time, the discussion is focussed on developing an approximate feedback control process regulation scheme.


4.17 FEEDBACK CONTROL PROCESS REGULATION SCHEMES

4.17.1 Average Run Length and Control Procedure

The 'average run length' (ARL) measures the performance of a control procedure. A percentage point of the run length distribution can also be an appropriate measure in some applications. The average run length (ARL) is the mean number of points plotted on a control chart before a control action is signalled. This number should be large when the process is stable and the average quality of the process output is acceptable to both the manufacturer and the consumer. The ARL should be small if there is a shift in the mean. For a (frequently sampled) continuous industrial process, the ARL is the average number of sample intervals from the time a shift in the mean occurs until the control chart signals it. So that there will be few false alarm (out-of-control) signals, it is desirable that a charting method has a large ARL when the process is in control. On the contrary, it is desirable to have a short average run length when a shift in the mean has occurred. Disturbances interrupt stable periods of operation of a manufacturing process, result in drifting behaviour of the process and shift the output mean from target. Because of this, the product quality data do not fit in with this characterisation for measuring the performance of a control procedure using the ARL (Baxley [1991]). The ARIMA (0,1,1) model characterised the disturbance. The objective in process regulation schemes is to regulate a process, not to discover the cause of disturbance. So, stochastic process control by the ARIMA model approach is preferable to the ARL and Shewhart process control approach. Also, a first-order autoregressive integrated moving average model is an appropriate choice. The disturbance model represents the correlation structure of the data and the dynamic model represents the feedback control system. Then, a process control EWMA chart scheme was formulated based on this modelling procedure.
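For a standard Shewhart chart on independent, identically distributed normal observations, the run length is geometric, so the ARL is the reciprocal of the per-sample signal probability. A minimal sketch of this textbook relationship (the i.i.d. normal assumption is exactly the one that autocorrelated process data violate):

```python
# Sketch: in-control and shifted ARL for a Shewhart chart with L-sigma limits,
# assuming i.i.d. normal observations. The run length is geometric, so
# ARL = 1/p, where p is the per-sample probability of a point beyond the limits.
from statistics import NormalDist

def shewhart_arl(shift: float, L: float = 3.0) -> float:
    """ARL for a mean shift of `shift`, in units of the standard error."""
    phi = NormalDist().cdf
    p_signal = 1.0 - phi(L - shift) + phi(-L - shift)
    return 1.0 / p_signal

print(round(shewhart_arl(0.0)))   # in control: large ARL (370)
print(round(shewhart_arl(2.0)))   # 2-sigma shift: small ARL (6)
```

This reproduces the familiar behaviour described above: a large ARL (about 370 samples) when the process is in control, and a short one once a sizeable shift has occurred.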
Box and Jenkins [1970, 1976] showed that the residuals of a correctly identified and fitted model form an independent identically distributed (i.i.d.) sequence. Special causes were identified and associated with temporary deviations from the modelled process, leading to departures from the stochastic time series model. Thus, outliers (independent observations) were indicated in the error sequence {et} of the deviations from target. An adjustment control action is then triggered when some function of the et's exceeds certain boundary values. Special causes were highlighted by applying Shewhart control charting techniques to the residual series of a properly tuned feedback control system. For information on identifying the model parameters by an examination of the correlation structure of the residuals from a fitted ARIMA model, reference may be made to Box and Jenkins [1970, 1976]. In relation to performance measurement, it is required that a control procedure have a large ARL when the process is in control and a small ARL otherwise.
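This residual-monitoring idea can be sketched as follows (illustrative only; the value theta = 0.5 and the 3-sigma rule are assumptions, not values from the text). For an IMA(0,1,1) disturbance the one-step minimum-MSE forecast is an EWMA of past observations, so the one-step forecast errors should behave as i.i.d. shocks when the model is correct, and a Shewhart chart on them can reveal special causes:

```python
# Sketch: charting the residuals (one-step forecast errors) of a fitted
# IMA(0,1,1) model. When the model is correct, the residuals are i.i.d.
# shocks, and very few points should fall outside 3-sigma limits.
import random

def ewma_residuals(z, theta):
    """One-step forecast errors of an IMA(0,1,1) model via its EWMA forecast."""
    lam = 1.0 - theta
    forecast = z[0]
    residuals = []
    for obs in z[1:]:
        residuals.append(obs - forecast)
        forecast += lam * (obs - forecast)
    return residuals

random.seed(1)
theta, n = 0.5, 500
shocks = [random.gauss(0, 1) for _ in range(n)]
z = [0.0]                              # IMA(0,1,1) series fed by NID(0,1) shocks
for t in range(1, n):
    z.append(z[-1] + shocks[t] - theta * shocks[t - 1])

e = ewma_residuals(z, theta)
sigma = (sum(x * x for x in e) / len(e)) ** 0.5
out_of_control = [t for t, x in enumerate(e) if abs(x) > 3 * sigma]
print(len(e), len(out_of_control))     # 499 residuals, very few 3-sigma signals
```

With a correctly specified model the chart on the residuals gives only the expected handful of false alarms; a special cause injected into the series would show up as an outlying residual.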


The run length of a control procedure is the number of samples taken before the process gives an out-of-control signal; the average run length is its mean. An out-of-control signal indicates that a shift in the mean is likely to have occurred and that control action should be taken to find and correct the assignable or special cause of this shift in the mean (Woodall [1985]). Attempting to adjust a process for a slight shift in the mean leads to overcorrection and introduces variability into the process. The average run length should be large if the process is stable and the average outgoing product quality level (AOQL) is acceptable. The ARL should be small if the mean has shifted to a particular quality level (Devore [1982]) and remains (steady) there for a certain time. It is considered disadvantageous for a control procedure to have low ARL values for shifts in the mean that are of little significance; low ARL values for small deviations from target are a drawback when some slack in the process is acceptable (Page [1961, 1962], Ewan [1963], Duncan [1974], Wetherill [1977]). A method to construct 'modified' control limits is described in the Western Electric Company's handbook [1956] for the case when the product specification limits are wide compared to the process standard deviation. The modified control limits obtained according to this method are wider than Shewhart chart limits. Freund [1957] and Duncan [1957] recommended such Shewhart charts with modified limits for process control. However, the use of modified control limits is acceptable to quality control chart practitioners only when some shift in the mean can occur without a significant increase in the percentage of products that are non-conforming with product specifications.
Hill [1956] pointed out that the calculations of the modified control limits for a Shewhart chart may cover all values of the shift in the mean for which the percentage of non-conforming product is between two specified (probability) values, measured in units of the standard error of the sample mean. These calculations depend on the probability distribution of the quality characteristic and may result in a control region that is too large and contains shifts in the mean that ought to be detected quickly and corrected. The underlying principle in quality control is that the product not only meets specifications but also achieves a distribution of the quality characteristic that is concentrated as closely as possible about the desired target value. The basic idea is that the process should be so controlled that if the process mean shifts to within 3σ of the specification limits, then this shift should be detected immediately. This means that the shifted mean should be three standard errors beyond the control action limits (Wetherill and Rowlands [1991]). It is often not practical to detect quickly shifts from the target value that are too small to be of practical importance. In relation to the modified control limits, if the time series model has been correctly identified, then the sample means will appear to be independent identically distributed variables. This once again emphasises the need for identifying the time series model correctly.
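One common construction of modified limits follows the standard textbook treatment (e.g. Montgomery's; the formula, and the 1% nonconforming allowance below, are assumptions and not taken from this book): the mean is allowed to drift as long as the fraction nonconforming stays below a chosen value delta, which places the upper chart limit at UCL = USL − (Z_delta − 3/√n)σ.

```python
# Sketch of modified control limits (textbook construction, used here as an
# assumption): the mean may drift provided the fraction nonconforming stays
# below delta, giving UCL = USL - (Z_delta - 3/sqrt(n)) * sigma.
from math import sqrt
from statistics import NormalDist

def modified_ucl(usl, sigma, n, delta=0.01, z_chart=3.0):
    z_delta = NormalDist().inv_cdf(1 - delta)  # furthest allowable mean: USL - z_delta*sigma
    return usl - (z_delta - z_chart / sqrt(n)) * sigma

# Hypothetical example: USL = 10, sigma = 1, subgroups of n = 4, delta = 1%
print(round(modified_ucl(10, 1, 4), 3))  # 9.174, wider than the Shewhart limit
```

The resulting limit is wider than the corresponding Shewhart limit, which is exactly the property of modified limits discussed above.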


4.17.2 Process Regulation Schemes

The purpose of these regulation schemes is to obtain an idea of the dynamic nature of the process behaviour, the range of adjustment intervals and the corresponding increases in CESTDDVN. The regulation schemes use some combinations (not specific ones) of the IMA parameter Q and the process dynamic parameters d1 and d2 that are taken from a large data set. This data is generated by the process control computer (in a sub-programme of the simulation code) for each set of combinations of d1 and d2 (that satisfy the feedback control stability conditions) as it iterates through the values of Q from 0.05 to 0.95. The combinations of these parameters are neither designed nor specially chosen values and can be obtained for any large or small value of Q. The process computer uses the mathematical equations and performs the required computations.

Table 4.5 Alternative Process Regulation Schemes for Various Values of Dynamic Parameters that Satisfy Feedback Control Stability Conditions and Dead-Time, b = 1.0

Scheme   Adj. Int. AI   CESTDDVN
A        10.9           1.02
B        10.5           1.03
C        8.2            1.04
D        10.0           1.03
E        9.0            1.05
F        20.0           1.03
G        12.20          1.03
H        12.35          1.03
I        5.10           1.12
J        5.68           1.18
K        5.26           1.15
L        5.02           1.15
M        4.46           1.21
N        9.52           1.09
O        6.94           1.12

From Table 4.5, it can be seen that for schemes A, B, C, D, E, F, G and H, the adjustment intervals (AIs) vary from 8.2 to 20.0. Correspondingly, the control error standard deviations (CESTDDVN) range only from 1.02 to 1.05. This shows that for larger AIs the CESTDDVNs are small, which implies that the process can be adjusted after long AIs with only small increases in standard deviation. For schemes I, J, K, L, M, N and O, the adjustment intervals (AIs) vary from 4.46 to 9.52, and for these small AIs the standard deviations are greater than those obtained for the large adjustment intervals (AIs) of group 1. An exception among these schemes is scheme C, which requires an adjustment interval (AI) of only 8.2 sample periods to achieve a control error standard deviation (CESTDDVN) of 1.04. A possible reason is that the dynamic parameters for this particular scheme are d1 = –1.82 and d2 = –0.83. The inference is that the process control practitioner has the option to choose a scheme depending on the desired control error standard deviation (CESTDDVN) and adjust the process accordingly.
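The practitioner's choice described above can be mechanised as a simple look-up: given the largest tolerable CESTDDVN, pick the feasible scheme with the longest adjustment interval (fewest adjustments). A minimal sketch using the Table 4.5 values:

```python
# Sketch: using Table 4.5 as a look-up table. Given a tolerable control-error
# standard deviation, choose the scheme with the longest adjustment interval.
SCHEMES = {  # scheme: (AI, CESTDDVN), transcribed from Table 4.5
    "A": (10.9, 1.02), "B": (10.5, 1.03), "C": (8.2, 1.04), "D": (10.0, 1.03),
    "E": (9.0, 1.05), "F": (20.0, 1.03), "G": (12.20, 1.03), "H": (12.35, 1.03),
    "I": (5.10, 1.12), "J": (5.68, 1.18), "K": (5.26, 1.15), "L": (5.02, 1.15),
    "M": (4.46, 1.21), "N": (9.52, 1.09), "O": (6.94, 1.12),
}

def choose_scheme(max_cestddvn: float) -> str:
    feasible = {s: ai for s, (ai, sd) in SCHEMES.items() if sd <= max_cestddvn}
    return max(feasible, key=feasible.get)

print(choose_scheme(1.03))  # F: AI of 20.0 at a CESTDDVN of 1.03
```

With a budget of 1.03, scheme F wins with the longest interval of 20.0 sample periods; tightening the budget to 1.02 leaves only scheme A.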


The bias due to process dynamics on the adjustment interval (AI) is discussed in Section 4.15. The different process regulation schemes show that frequent adjustment of the process with small AIs results in large CESTDDVNs, while adjusting the process after long AIs results in only small increases in CESTDDVN, as shown by group 1 (Schemes A to H). Table 4.5 can be used as a 'look-up' table and a reference guide to adjust the process. The dynamic process parameter values d1 and d2 were the result of extensive simulation runs (10,000), so the set of parameter values will closely represent actual tests on a real process plant. Hence, it is believed that it will give a realistic picture of the process so long as the process and stochastic models truly represent the actual process and the operating conditions of feedback control stability, etc. The variation of AI with respect to Q for various values of the dynamic parameters d1 and d2 is illustrated in Figures 4.5, 4.6 and 4.7 (used for explaining the process regulation scheme). The dynamic parameters satisfy the feedback control stability conditions, and as the IMA parameter Q is set to match the disturbance, the values of Q change as and when there is a change in the input random shocks.

Figure 4.5 Variation in Adjustment Interval, AI with Respect to the IMA Parameter, Q

Figure 4.6 Variation in Adjustment Interval, AI with Respect to the IMA Parameter Q, for d1 = –1.00 and d2 = –0.25

Figure 4.7 Variation in Adjustment Interval, AI with Respect to the IMA Parameter Q, for d1 = –0.27 and d2 = –0.02

Figures 4.5 and 4.6 show that the EWMA forecasts give good control of the process around the value of Q = 0.70. From Figures 4.6 and 4.7, it can be observed that as the bias due to the process dynamics is reduced from [(d2 – d1) = ((–0.25) – (–1.00))] = 0.75 (in Figure 4.6) to [(d2 – d1) = ((–0.02) – (–0.27))] = 0.25 (in Figure 4.7), the process requires an adjustment interval, AI, of 400 (in Figure 4.6) and of 100 (in Figure 4.7) for the same value of the IMA parameter Q = 0.65. So, from these figures, it can be inferred that for less dynamic processes the adjustment intervals will be smaller for random shocks that are similar in nature. If the performances of two processes of different dynamics are compared, this may result in identifying smaller adjustment costs for the process with less dynamics, provided these processes are affected by random shocks of similar nature and characteristics. The discrete time constant of the closed-loop process is also set equal to Q (whose values range from 0 to 1) to compensate for the dead-time (Venkatesan [1997]). The integral and dead-time compensation terms in the feedback control algorithm act to minimise overcorrection of the control error. The controller gain (CG) is set at a value of 1.00, as it is reported in the literature that the maximum value of controller gain is one for (pure) dead-time processes. The EWMA controller described in Baxley [1991] has no dead-time compensation term and requires a controller gain below one in order to avoid over-control or overcompensation of the output controlled variable. The performance of the minimum variance controller built on the feedback control algorithm is thus superior to that of the EWMA and CUSUM controllers discussed in Baxley [1991].
This is a distinct advantage/benefit resulting from the use of the stochastic feedback control algorithm (Equation 3.16) discussed in this chapter. Process regulation is an important function of a controller, intended to keep the output controlled variable at the desired set point by changing it as often as

Discussion and Analysis of Stochastic (Statistical) Feedback Control Adjustment

4.49

necessary. Every process is subject to load variations. In a (well-)regulated feedback control loop, the input manipulated variable will be driven to balance the load, as a consequence of which the load is usually measured by engineers in terms of the corresponding value of the output controlled quality variable. The choice of feedback control process regulation schemes depends on how capable the controlled process is of providing quality products within manufacturing specifications. If the process capability index is high, then a moderate increase in the control error deviation (product variability) might be tolerated if this action results in savings in sampling and adjustment costs. More detailed and accurate values of the simulation results of Table 4.5 are given in Table 4.6 in order to discuss alternative process regulation schemes for increases in standard deviation (ISTD) and different control limit values. Table 4.6 shows the exact adjustment interval (AI) (to an accuracy of two decimal places) and the corresponding CESTDDVN (to an accuracy of three decimal places) for some alternative schemes using combinations of L = 2.98, 3.0 and 3.04. These schemes are denoted by A, B, ..., O. They are based on how much the CESTDDVN would need to increase to achieve the advantage of taking samples and making adjustments less frequently. This approach avoids the direct assignment of values to the costs Ca (cost of adjustment and sampling) and Ct (cost of being off-target). The table shows, for the various values of the standardised action limit L/sa = 3.010, 3.000, 3.009 and the adjustment interval (AI), the percentage 'Increase in CESTDDVN' (ISTD) with respect to sa and AI. The IMA parameter value Q, which determines the process drift, and the dynamic parameters d1, d2 are given alongside the scheme, AI and CESTDDVN values in order to show that for different sets of values of Q, d1, d2, the process regulation scheme varies according to the AI values.
Table 4.6 Alternative Process Regulation Schemes for Increase in CESTDDVN (ISTD) and L/sa Values

1. Control Limits L (= 3 sa) = 2.98, L/sa = 3.010

Scheme   AI      CESTDDVN   ISTD
A        10.99   1.022      0       (Q = 0.10, d1 = –1.51, d2 = –0.57)
B        10.52   1.032      0.98    (Q = 0.05, d1 = –1.50, d2 = –0.56)
C        8.20    1.043      1.066   (Q = 0.05, d1 = –1.06, d2 = –0.28)
D        5.10    1.127      8.05    (Q = 0.05, d1 = –0.20, d2 = –0.01)
E        5.68    1.186      5.24    (Q = 0.05, d1 = –0.01, d2 = 0.00)

2. Control Limits L (= 3 sa) = 3.0, L/sa = 3.000

Scheme   AI      CESTDDVN   ISTD
F        10.0    1.033      0       (Q = 0.05, d1 = –1.79, d2 = –0.80)
G        9.0     1.058      2.42    (Q = 0.05, d1 = –0.80, d2 = –0.16)
H        5.26    1.157      9.36    (Q = 0.05, d1 = –0.27, d2 = –0.02)
I        5.02    1.158      0.09    (Q = 0.05, d1 = –0.08, d2 = –0.00)
J        4.46    1.217      5.09    (Q = 0.05, d1 = 0.00, d2 = 0.00)

3. Control Limits L (= 3 sa) = 3.04, L/sa = 3.009

Scheme   AI      CESTDDVN   ISTD
K        20      1.025      0       (Q = 0.15, d1 = –1.44, d2 = –0.52)
L        12.20   1.032      0.68    (Q = 0.10, d1 = –1.22, d2 = –0.37)
M        12.35   1.037      0.48    (Q = 0.10, d1 = –1.06, d2 = –0.28)
N        9.52    1.089      5.01    (Q = 0.15, d1 = –0.27, d2 = –0.02)
O        6.94    1.119      2.75    (Q = 0.10, d1 = –1.06, d2 = –0.28)
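A pattern worth noting (an observation, not stated explicitly in the text): the ISTD column appears to be the percentage increase in CESTDDVN relative to the preceding scheme within the same control-limit group. A small check in code:

```python
# Checking an apparent pattern in Table 4.6: ISTD looks like the percentage
# increase in CESTDDVN over the preceding scheme in the same group.
group1 = {"A": 1.022, "B": 1.032, "C": 1.043, "D": 1.127, "E": 1.186}  # L = 2.98

def istd(group):
    names = list(group)
    out = {names[0]: 0.0}
    for prev, cur in zip(names, names[1:]):
        out[cur] = 100 * (group[cur] - group[prev]) / group[prev]
    return out

for scheme, value in istd(group1).items():
    print(scheme, round(value, 2))   # ~0, 0.98, 1.07, 8.05, 5.24 (cf. Table 4.6)
```

The computed values match the tabulated ISTD figures for group 1 (0.98, 1.066, 8.05, 5.24) to the rounding used in the table, which supports this reading of the column.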

The alternative schemes are, for example: (i) Scheme B: set L = 2.98 and adjust the process at 10.5 sample periods, with an increase in CESTDDVN (ISTD) of 0.98; or (ii) Scheme E: adjust the process at 5.7 sample periods with an ISTD of 5.24; or (iii) Scheme J: by setting L = 3.0 and adjusting the process at 4.46 sample periods, a similar ISTD (5.09) can be achieved with an AI of about 4.5.


4.18 SAMPLE SIZE

The choice of a particular control procedure out of a number of feedback control procedures depends on the selection of in-control and out-of-control regions. The effect of an in-control region increases as the sample size increases, so it is better to have a large sample size. Typically, samples are taken in groups of size 4 in the component parts manufacturing industries and of size 1 in the process industries (Wetherill and Rowlands [1991]). The ability of a process control chart to signal trouble depends on the sampling plan used. In general, it is preferable to sample less frequently in order to eliminate extreme autocorrelation and thus provide a tool for assessing the process behaviour over a long time period. By careful selection of sampling and subgrouping, it may be possible to control the sources of variation that may show as special causes based on the control limits or the in-control run length (ARL) (Hoerl and Palm [1992]). Using this principle, Wheeler [1991] generalised the concept of rational sub-grouping as 'rational sampling'. A sampling scheme is rational when all of the ranges (moving or sub-group) are generated by the same common cause system and so have a consistent interpretation. The connection between sample size and the in-control run length is illustrated by the following example. The in-control run length may be the statistic used to describe the performance of an X chart. By considering the problem of model identification described in Box and Jenkins [1970, 1976], it can be shown for an AR(1) process, with theoretical autocorrelations ρk = φ^|k|, that the approximate significance bound on the sample estimates rk of ρk is |rk| = 2/√N. It can also be shown that a large sample of N ≥ 100 observations is required to identify a small lag 1 autocorrelation of r1 = 0.2 that has a significant effect on the in-control run length.
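The 2/√N bound can be illustrated with a short simulation (illustrative values: φ = 0.2 and N = 400 are assumptions chosen so that the bound, 2/√400 = 0.1, sits below the theoretical lag-1 autocorrelation of 0.2):

```python
# Sketch: the +/- 2/sqrt(N) significance bound for sample autocorrelations.
# For a lag-1 autocorrelation near 0.2 to register as significant, the bound
# 2/sqrt(N) must fall below 0.2, i.e. N >= 100 observations are needed.
import math
import random

def lag1_autocorr(x):
    mean = sum(x) / len(x)
    num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, len(x)))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

random.seed(3)
phi, n = 0.2, 400                      # AR(1): x_t = phi*x_{t-1} + a_t
x = [random.gauss(0, 1)]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))

bound = 2 / math.sqrt(n)               # 0.1 for N = 400
r1 = lag1_autocorr(x)
print(bound, round(r1, 2))             # r1 is typically near phi = 0.2
```

With N = 400 the bound of 0.1 comfortably detects an autocorrelation near 0.2; at N below 100 the same autocorrelation would fall inside the bound and go unnoticed.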

4.19 PROBABILITY MODEL FOR FEEDBACK CONTROL ADJUSTMENT

A probability model for the feedback control adjustment (xt) in the input manipulated variable is given in this section. After making an adjustment, the CESTDDVN of the output quality variable is checked for any off-quality product and, if the product is within specification limits, the process is continued. Otherwise, a sample is taken based on the number of AIs from the simulation results and the process is adjusted. The test for feedback control is that an adjustment is either made or not made following the 'gma' statistic falling outside the control limits L given by Equations 4.1 and 4.2. The input control adjustment is assumed to be, say, 'a' units made at a certain instant in a process under observation and control. Depending on the amount of feedback, the adjustment quantity changes and differs from the previous one. Here, the feedback is characterised by two probabilities: firstly, the probability that an adjustment will bring the mean of the output variable on target conditional

4.52

Integrated Statistical and Automatic Process Control

upon it not currently being on target and, secondly, the probability that there will not be an adjustment that moves the output variable away from target conditional upon it already being on target. Let p = Pr(Adjustment/Mean Not On Target), i.e., Pr(A/MNOT), where A denotes an adjustment requirement. Let q = Pr(No Adjustment/Mean On Target), i.e., Pr(NA/MOT), where NA denotes the stage of the process requiring no adjustment. It is assumed that

p + q = 1

The state of the process is characterised by α, where

α = Pr(MNOT), that is, Pr(Mean Not On Target).

The ideal situation will be α = 0. p and q are the properties of the adjustment applied to the feedback for the mean not on target (MNOT) and mean on target (MOT) situations respectively, and α measures the probability that adjustment is required. The larger the deviation (the error), the greater is the opportunity for feedback control to adjust the process, and so the feedback will be associated with a larger α level. The basic feedback control adjustment is shown in Figure 4.8.

[Flowchart: Process → test for out-of-control signals → if the mean is not on target (Yes), adjust the process; otherwise (No), do not adjust the process]

Figure 4.8 Basic Probability Test for Feedback Adjustment

If the adjustment is not complete, out-of-control signals will again appear on the EWMA chart. Interest is focussed on: (i) the required adjustment, that is a, to avoid out-of-control signals and further adjustment; and (ii) the values of p and q for the required feedback control adjustment. A basic probability model is developed in which an adjustment is either made or not made depending upon the process conditions, that is, whether the process is in control or out of control, as identified by out-of-control signals. If an adjustment is


required, the process is not in control and the mean of the product quality variable is not at or near target. The probabilities of no adjustment and of adjustment are then given by

Pr(No Adjustment) = Pr(NA/MOT) Pr(MOT) + Pr(NA/MNOT) Pr(MNOT)
                  = q(1 – α) + (1 – p)α                                ...(4.6)

Pr(Adjustment) = Pr(A/MOT) Pr(MOT) + Pr(A/MNOT) Pr(MNOT)
               = (1 – q)(1 – α) + pα                                   ...(4.7)

It is obvious that

Pr(No Adjustment) + Pr(Adjustment) = 1

Having adjusted or not adjusted the process, a new feedback control is imminent depending on the position of the product mean relative to the target. In the case of no adjustment, the appropriate α value is given by Pr(MNOT/No Adjustment); that is, from Equation (4.6),

Pr(MNOT/NA) = [Pr(NA/MNOT) Pr(MNOT)] / Pr(NA)
            = (1 – p)α / [q(1 – α) + (1 – p)α]                         ...(4.8)

Similarly, the value of α for the situations when the mean is not on target and hence requires adjustment is given, from Equation (4.7), by

Pr(MNOT/Adjustment) = [Pr(Adjustment/MNOT) Pr(MNOT)] / Pr(Adjustment)
                    = pα / [(1 – q)(1 – α) + pα]                       ...(4.9)

Equations (4.8) and (4.9) provide the appropriate measures of the probability α of the mean-not-on-target situations, which would apply to further adjustments when the mean is either on target or not on target. Equations (4.7), (4.8) and (4.9) provide the basis for the probabilistic expressions of interest in an adjustment situation. An expression for a likelihood estimate of the parameters α, p and q is not required at this stage, since the feedback control adjustment requires that the adjustment of a units exactly compensates for the deviation from target and the disturbance, bringing the mean closer to or on target and the process under control. Alternatively, the parameters α, p and q can be approximately estimated for a process from a batch of test data collected on a sample test of the process, noting the number of times the process has to be adjusted for mean-not-on-target situations. A mathematical expression for the likelihood of the observations can then be maximised with respect to the parameters α, p and q to provide maximum likelihood estimates of these parameters. If the value of α is
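Equations (4.6) to (4.9) are straightforward to evaluate. The sketch below uses hypothetical values of p, q and α (not from the text, and deliberately not constrained to the p + q = 1 assumption made there, so that the conditional probabilities are informative):

```python
# Sketch of Equations (4.6)-(4.9); the values of p, q and alpha are hypothetical.
def pr_no_adjustment(p, q, alpha):
    return q * (1 - alpha) + (1 - p) * alpha                # Equation (4.6)

def pr_adjustment(p, q, alpha):
    return (1 - q) * (1 - alpha) + p * alpha                # Equation (4.7)

def pr_mnot_given_na(p, q, alpha):
    return (1 - p) * alpha / pr_no_adjustment(p, q, alpha)  # Equation (4.8)

def pr_mnot_given_adj(p, q, alpha):
    return p * alpha / pr_adjustment(p, q, alpha)           # Equation (4.9)

p, q, alpha = 0.9, 0.8, 0.2            # hypothetical illustrative values
total = pr_no_adjustment(p, q, alpha) + pr_adjustment(p, q, alpha)
print(round(total, 10))                       # 1.0: the two probabilities sum to one
print(round(pr_mnot_given_adj(p, q, alpha), 3))   # 0.529
print(round(pr_mnot_given_na(p, q, alpha), 3))    # 0.03
```

As expected, an adjustment signal sharply raises the probability that the mean is off target (from the prior α = 0.2 to about 0.53), while the absence of a signal lowers it to about 0.03.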


found to be high, meaning that the mean of the product quality variable is consistently not on target for a considerably long time and requires frequent adjustments, then the production process would likely be investigated to identify and correct the underlying problem. Knowing the required adjustment (xt) from the algorithm (or a in the probability model) in the input manipulated variable, the probability of the mean not being on target and hence requiring adjustment can be calculated by applying Equation (4.7). Knowing the required feedback adjustment, a units, through the test of significance, the process can be adjusted for this situation. After making an adjustment, the control error standard deviation (CESTDDVN) of the output quality variable is checked for any off-quality product and the process is continued if the product is within specification limits. This type of feedback control adjustment probability model can be useful in the control of complex processes when there is uncertainty whether, after making the necessary feedback adjustment to bring the process under control, the product quality mean is on the desired target/setpoint. For high-volume and mass production processes, it is reported in the literature that reductions of even a fraction of a per cent can amount to large financial savings. A good example of such a process is paper making, where it is believed that a reduction in moisture variation of 1% in paper-machine control amounts to an annual saving of $100,000.

4.20 REGRESSION ANALYSIS

A simple regression analysis was performed on the simulation results for the control error standard deviation (CESTDDVN) and its dependence on the parameters Q, d1 and d2. Table 4.7 shows the coefficients of the fitted model for CESTDDVN and Table 4.8 the analysis of variance. The linear regression model is appropriate for the values of the control error standard deviation (CESTDDVN) obtained via simulation. The coefficients of the dynamic process parameters Q, d1 and d2 are significantly different from 0. A value of 71.1% for R-squared (adjusted) suggests that about 71% of the total variability in the observed response (CESTDDVN) is explained by the model, thus indicating a good fit. There is no strong evidence of lack of fit for the model for CESTDDVN. In Section 4.2, it was mentioned that the process drift, r = 1 – Q, was simulated by the use of an ARIMA (0, 1, 1) model fed by standard normal shocks NID (0, 1). The random shocks were obtained from a random number generator in the computer. These shocks are only hypothetically generated by the computer and are not a true and exact representation of the real process conditions. So, in actual practice (real life), it is expected that the value of R-squared will be greater than 71%, thus indicating a better fit. The adjustment interval also changes depending on the values of the dynamic parameters d1 and d2 and on the value of the IMA parameter Q for the fast and slow


process drifts. The actual regression analysis for AI shows a varying non-linear relationship with the process parameters and indicates a value of only 35% for R-squared. For these reasons, the regression analysis results are not furnished for the adjustment interval (AI).

Table 4.7 Regression Coefficients for CESTDDVN

Predictor    Coefficient    Std. deviation    t-ratio    p       VIF
Constant     1.10440        0.00150           738.26     0.000   —
C1 (Q)       –0.074723      0.001536          –48.64     0.000   1.0
C2 (d1)      0.081039       0.003233          25.07      0.000   14.8
C3 (d2)      –0.115836      0.006953          –16.66     0.000   14.8

Standard error in CESTDDVN (s) = 0.01809
R-square = 71.2%    R-square (adjusted) = 71.1%

The regression equation is
C4 = 1.10 – 0.0747 C1 + 0.0810 C2 – 0.116 C3

Table 4.8 Analysis of Variance for CESTDDVN

Analysis of Variance (ANOVA) Table for Control Error Standard Deviation

SOURCE        DF      Sum Sq.    Mean Sq.    F-ratio    p
Regression    3       1.25505    0.41835     1278.51    0.000
Error         1552    0.50784    0.00033
Total         1555    1.76289

SOURCE     DF    SEQ SS
C1 (Q)     1     0.77445
C2 (d1)    1     0.38978
C3 (d2)    1     0.09082

Lack of fit test: the overall lack of fit test is significant at P = 0.000
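The fitted equation from Table 4.7 can be used directly to predict CESTDDVN for a given parameter combination. A minimal sketch (the scheme-A parameter values plugged in below are taken from Table 4.6; everything else follows the regression equation as printed):

```python
# Sketch: predicting CESTDDVN from the regression equation of Table 4.7,
# C4 = 1.10 - 0.0747*C1 + 0.0810*C2 - 0.116*C3, where C1 = Q, C2 = d1, C3 = d2.
def predict_cestddvn(theta: float, d1: float, d2: float) -> float:
    return 1.10 - 0.0747 * theta + 0.0810 * d1 - 0.116 * d2

# Scheme A of Table 4.6: Q = 0.10, d1 = -1.51, d2 = -0.57 (simulated CESTDDVN: 1.022)
print(round(predict_cestddvn(0.10, -1.51, -0.57), 3))  # 1.036
```

The predicted value of about 1.036 sits close to the simulated 1.022 for the same parameters, consistent with the model explaining roughly 71% of the variability.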

4.21 DISCUSSION OF SIMULATION RESULTS

The characteristics of the simulation change when the rate of drift either increases or decreases, or when there is a change in dead-time or in the dynamic properties of the process. From simulation of the stochastic feedback control algorithm


(Equation 3.16), the amount of control adjustment action, the frequency of adjustment (MFREQ), the adjustment interval (AI = 1/MFREQ) and the output control error sigma for a particular rate of drift (r) can also be found. A dead-time compensation scheme has been devised which provides a process gain (PG) in the feedback path whose value depends on both the process output and the model. This scheme is suited to situations where the process dead-time results from a measurement device in a laboratory and is a known quantity. A process control approach to product quality based on discrete laboratory data has the potential for improvements (in product quality). A practical control strategy would then be based on (i) the use of quality control laboratory analyses and (ii) a process model based on a time series analysis of plant data collected from a designed closed-loop experiment, using the laboratory data to update the set point of the minimum variance time series controller to verify the quality of outgoing product. The intelligent use of any control algorithm is an iterative process which depends upon the skill and judgement of the designer. Most controller designs involve compromises and intelligent choices. For successful use of complex algorithms, it is imperative that results are presented to the designer in such a way that they are assisted in making the compromises more intelligently. This allows for more efficient use of the designer's judgement and experience. Competent use of modern control theory involves principles of interactive trial and error that can be successfully utilised to improve practical controller design. Algorithms derived from stochastic optimal control theory have the potential for efficient control, and modern control methods are useful tools in an iterative design method. Stochastic (optimal) design algorithms provide valuable clues as to the controller structure and an understanding of the controller design.
These clues lead to controller designs which appear to be better than those obtained by conventional approaches. If proper account is taken of stability (as done in this formulation), the resulting design will lead to (efficient) controllers that are not sensitive to the dynamic model (the exact form of the transfer function) of the process or to small perturbations in its parameters. Practical control strategies result from employing an appropriate model for the process dynamics (such as the second-order model considered in this book) and disturbances (ARIMA (0, 1, 1)). It has been reported in the literature that the performance of optimal algorithms for sampled-data controllers appears to be sensitive to the structure of the disturbance but not to the parameters of the (noise) model. The dynamics may be known from earlier experimental and theoretical work. In these cases, the suggested second-order model can be used with the appropriate dynamic parameters set to their pre-determined values, since (optimal) algorithms appear also to be
insensitive to small deviations between the real process model and the model used for controller design. Process dead-time can be determined from the process step response under manual control. The control algorithm thus derived not only has desirable properties (integral control and adequate dead-time compensation) but is also practically 'robust' in nature due to the basic assumptions made in developing the control algorithm. It may be difficult to obtain satisfactory control due to process characteristics and the control system's inability to operate in a proper manner. A thorough analysis of the process and periodical inspection of the output controlled product variable, a tabulation of the various rates and magnitudes of load changes (changes in process conditions requiring a change in the average value of the input manipulated variable to maintain the output controlled variable at the desired value or target), a study of the degree of the process lags (retardation or delay in response of the output controlled variable, at the point of measurement, to a change in the value of the input manipulated variable), and an observation of the relations among these factors would assist in overcoming these difficulties. It may also be difficult at times to assign a proper relationship between the output controlled variable and the state of balance of a process, which may not be satisfactory even though there is satisfactory control of that (output) controlled variable. Under these circumstances, it is important to control directly from the final output product in order to eliminate the possibility of any variance between the controlled variable and the control conditions of a process. It may be difficult to maintain the balance of a process when the load or any of the uncontrolled variables associated with the process is subject to frequent and fast changes. The deviation of the controlled variable is in direct proportion to the rate of these changes.
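As noted above, process dead-time can be read off a step response recorded under manual control. The following is a minimal sketch of that estimate, not taken from the text: the sampling period, noise band and response record are all assumed values, and the function name is hypothetical.

```python
# Illustrative sketch (not from the text): estimating process dead-time
# from a recorded step response under manual control. The sampling
# period T and the noise band are assumed values.

def estimate_dead_time(response, T, noise_band):
    """Return the dead-time, i.e. the elapsed time before the sampled
    output first moves outside the noise band after a step input at t = 0."""
    baseline = response[0]
    for k, y in enumerate(response):
        if abs(y - baseline) > noise_band:
            return k * T          # first sample showing a real response
    return None                   # no response detected within the record

# Hypothetical step-response record sampled every T = 0.5 minutes:
# the output stays flat for three samples, then begins to rise.
record = [0.0, 0.0, 0.0, 0.0, 0.4, 1.1, 1.9, 2.5, 2.8, 3.0]
print(estimate_dead_time(record, T=0.5, noise_band=0.2))  # -> 2.0
```

In practice the noise band would be chosen from the observed measurement noise under steady conditions, so that ordinary fluctuations are not mistaken for the start of the response.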
In practice, it is found that it is often necessary to add a controller to the changing variable in order to eliminate its effect on the control of the main process variable. For example, in large (ceramic) kilns and furnaces, it is necessary to control the pressure in the furnace in order that the furnace temperature may be sufficiently stabilised. Maximum efforts should be made to eliminate the effect of supply variations and disturbances. It is possible to improve the control of a process by making minor alterations or by redesigning to obtain smaller lag due to the inertial characteristics of the process or shorter dead-time. The process improvement may be handicapped unless these steps are undertaken with careful consideration of the effects of dynamic process reactions. By a re-arrangement of the supply medium, it may sometimes be possible to reduce transfer lag, (retardation in response of output controlled variable), caused by large temperature or pressure differences. The most serious lag in automatic control, namely, the dead-time, must be kept to a minimum in the controlled system. This type of lag should be investigated to determine if it can be reduced or eliminated.


4.22 CONCLUSION

The effects of the rate of process drift, r, on the control error standard deviation (CESTDDVN) have been discussed in this chapter. The simulation results were analysed and the performance measures of the time series controller discussed. The combined effects of the process dynamics and dead-time on CESTDDVN and AI were explained for b = 1, as was the effect of an increase in dead-time from b = 1 to b = 2 on CESTDDVN and AI. The benefits and limitations of integral control were briefly discussed, along with details of a constrained variance control scheme. It was shown that the EWMA gives fairly good control of the process for values of Θ in the interval 0.75 ± 0.05. The stochastic feedback control algorithm methodology discussed in this chapter helps to bring the process back to the controller set point and to accomplish set point changes. It is demonstrated, through explanation of the feedback control adjustment methodology, the EWMA chart and simulation, that it is possible to control output product quality through process surveillance and feedback control adjustments by means of the simulation of the stochastic process control algorithm. The required feedback control adjustments in the input variable are obtained and compared from the simulation results. It is shown that some of the simulation results (Table 4.1) can be used to formulate process regulation schemes, which give the operator an idea of the output variance for a particular adjustment interval, and minimum variance control schemes (Table 4.3). The method proposed in this chapter to conduct a test of significance to find out whether a feedback control adjustment is required in a process that is subject to random shocks at the input, together with the probability model for feedback control adjustment, helps in indicating alarm signals (a flash of a light) that the product quality mean is not on target and the process requires a feedback control adjustment.
It is shown that the probability model provides information about the feedback control adjustments that are required to bring the product quality mean closer to the controller set point. The feedback control adjustment probability model can be useful in the control of high-volume, mass-production complex processes to achieve financial savings by reducing output variance. For such high-volume, mass-production processes, it is reported in the literature that even a reduction of a fraction of a per cent can amount to large financial savings. Feedback control adjustment of high-volume and complex production processes brings the process under control so that the quality mean is on the desired target/set point, and even a fraction of a per cent reduction in output control error variance results in financial savings. A practical example of such a process is paper making, where it is reported that a reduction in moisture variation of 1% in paper-machine control amounts to an annual saving of $100,000. Hence, it can be concluded that definite financial benefits will result from conducting the type of test for stochastic feedback control adjustment discussed in this chapter.


REFERENCES

Astrom, K.J. & Wittenmark, B., (1984). Computer Controlled Systems: Theory and Design, Englewood Cliffs, NJ: Prentice-Hall.
Baxley, Robert V., (1991). A Simulation Study of Statistical Process Control Algorithms for Drifting Processes, in Keats, J.B. & Montgomery, D.C. (eds), Statistical Process Control in Manufacturing, Marcel Dekker, Inc., New York and Basel.
Box, G.E.P. & Jenkins, G.M., (1970, 1976). Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco.
Box, G.E.P. & Kramer, T., (1992). Statistical Process Monitoring and Feedback Adjustment – A Discussion, Technometrics, vol. 34, No. 3, 251-267.
Box, G.E.P. & Luceno, A., (1994). Selection of Sampling and Action Limit for Discrete Feedback Adjustment, Technometrics, vol. 36, No. 4, 369-378.
Capilla, Carmen, Ferrer, Alberto, Romero, Rafael & Hualda, Angel, (1999). Integration of Statistical and Engineering Process Control in a Continuous Polymerization Process, Technometrics, vol. 41, Issue 1, pp. 14-28.
Deshpande, P.B. & Ash, R.H., (1981). Elements of Computer Process Control with Advanced Control Applications, Instrument Society of America, North Carolina, U.S.A.
Devore, J.L., (1982). Probability & Statistics for Engineering and the Sciences, Brooks/Cole & Nelson, Singapore/Melbourne.
Del Castillo, Enrique, (2001). Some Properties of EWMA Feedback Quality Adjustment Schemes for Drifting Disturbances, Journal of Quality Technology, vol. 33, Issue 2.
Hoerl, R.W. & Palm, A.C., (1992). Discussion: Integrating SPC and APC, Technometrics, vol. 34, No. 3, 268-272.
Janakiram, Mani & Keats, J. Bert, (1998). Combining SPC & EPC in a Hybrid Industry, Journal of Quality Technology, vol. 30, No. 3, pp. 189-200.
Keats, J.B. & Hubele, N.F., (1989). Statistical Process Control in Automated Manufacturing, Marcel Dekker, Inc., New York and Basel.
Kramer, T., (1990). Process Control from an Economic Point of View – Dynamic Adjustment and Quadratic Costs, Technical Report No. 44 of the Centre for Quality and Productivity Improvement, University of Wisconsin, USA.
MacGregor, J.F., (1987). Interfaces Between Process Control and On-Line Statistical Process Control, Computing & Systems Technology Division Communications, 10 (No. 2): 9-20.
MacGregor, J.F., (1988). On-Line Statistical Process Control, Chemical Engineering Progress, 84, 21-31.
MacGregor, J.F., (1992). Discussion: Integrating SPC & APC, Technometrics, vol. 34, No. 3, 273-275.
Ruhhal, Nasser H., Runger, George C. & Dumitrescu, Monica, (2000). Control Charts and Feedback Adjustments for a Jump Disturbance Model, Journal of Quality Technology, vol. 32, Issue 4, pp. 379-394.


Nembhard, Harriet Black & Mastrangelo, Christina M., (1998). Integrated Process Control for Startup Operations, Journal of Quality Technology, vol. 30, Issue 3, pp. 201-211.
Shinskey, F.G., (1988). Process Control Systems, McGraw-Hill Book Company, New York.
Vander Wiel, S.A., Tucker, W.T., Faltin, F.W. & Doganaksoy, Necip, (1992). Algorithmic Statistical Process Control: Concepts and an Application, Technometrics, vol. 34, No. 3, 286-297.
Vander Wiel, S.A. & Vardeman, S.B., (1992). Discussion: Integrating SPC & APC, Technometrics, vol. 34, No. 3, 278-281.
Venkatesan, G., (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes), PhD Thesis, Victoria University of Technology, Melbourne.
Venkatesan, G., (2002). An Algorithm for Minimum Variance Control of a Dynamic Time-Delay System to Reduce Product Variability, Computers & Electrical Engineering, 28 (3), pp. 229-239.
Wardrop, D.M. & Garcia, C.E., (1992). Discussion: Integrating SPC & APC, Technometrics, vol. 34, No. 3, 281-282.
Western Electric Company, (1956). Statistical Quality Control Handbook, New York.

Suggested Exercises
1. Construct an EWMA chart for a production process.
2. Develop a suitable probability model for adjusting a chemical process.
3. Develop minimum variance control schemes for the process.

Chapter 5

Sampled Data – Control to Minimise Output Product Variance

5.1 INTRODUCTION

In this chapter, the focus is on the design of a digital (sampled-data) controller to minimize output product variance. The term sampled-data implies literally that data are sampled in a discrete manner at regular intervals of time, say every minute or every hour, instead of continuously. Some practical examples are: (i) checking the body temperature of a terminally ill patient, (ii) checking the pressures of vehicle tyres that are constantly on the road for long periods of time, and (iii) checking the yield and growth of vegetation fed by fertilizers to find their effect on growth. All these measurements are made at constant and definite intervals of time. The term digital also refers to discrete measurements made on a process at well-defined discrete intervals of time. A digital controller is a controller which works using the principles of digital technology to control a process, which can be a chemical or a mechanical process. For example, in an automobile or a mechanically driven vehicle, the fuel flow for combustion must be controlled: the valve mechanism meters the flow into the engine, in measured quantities, as the engine's speed requirements decrease or increase. This mechanical control mechanism is known as a servo-controller. The digital controller design is based on the stochastic feedback control algorithm (Equation 3.16) derived in Chapter 3 of this monograph. In that chapter, it is explained how automatic and statistical process control techniques can be
used to develop the control algorithm. Chapter 4, on "Discussion and Analysis of Stochastic Feedback Control Algorithm", explains the dead-time simulation of the feedback control algorithm. The simulation results give information on when to make an adjustment in the input variable and the number of units required to adjust the input variable so that the mean of the quality variable is brought closer to the controller set point. The term set point is the desired value or target value for that variable. The term dead-time is explained in that chapter as the time taken for a process adjustment to travel the entire length of the process and to make an effective change in the output. The objectives of sampled-data (discrete) control can be achieved by employing digital techniques in process control, along with some benefits that provide motivation to use a second-order model for the time-delay control system. In practice, it is easier to control a dead-time process that has an additional lag in the form of an exponential term (q = e^(–T/τ), where T is the sampling period and τ the time constant) than a pure dead-time dynamic process. In particular, industrial processes such as thermal processes and distillation processes can be represented by inclusion of an element of time delay in the process model (Shinskey [1988]). Digital controllers can be applied to control systems with large time constants. A control program from the digital computer can be used to compute commands to the plant at sampling intervals. The exact form of the digital controller will depend upon the required application. The technique of dead-time simulation is used for discrete (sampled-data) control of processes with time delay.
It is explained in Chapter 4 that the term dead-time simulation means that dynamic simulation is conducted for a process which has an element of dead time, varying from a few seconds to several minutes, as one of its process parameters, along with dynamic (inertial) process parameters (more than one inertial element under dynamic process conditions). Dead-time running into hours is virtually unknown, except in very rare and most complicated physiological processes, where the effect is realised only after very long periods of time spread over a number of days and the progress is too slow to be actually timed and accounted for in real-life situations. It is shown in Chapter 4 that the minimization of the variance of the output product quality variable is possible by computing the input feedback control adjustment and by compensating exactly for the forecast disturbance (noise) that afflicts a dynamic process. The discrete (sampled-data) (stochastic integral) controller built on this feedback control algorithm is used to control noisy, drifting processes and has the potential to reduce output product variance. The term noise again refers to a process which has wide and random (stochastic) fluctuations in its output behaviour. Processes are said to be 'drifting' (explained in Chapter 6) when the output has no fixed mean and the mean is constantly changing with respect to time. It is important
to minimize the variance of the output in any production process and this type of control action can play a significant role in manufacturing.
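The noisy, drifting behaviour described above is modelled later in this chapter by an ARIMA (0, 1, 1) disturbance fed by random shocks. The following is a minimal sketch of such a simulation, using the recursion z_t = z_{t−1} + a_t − Θ·a_{t−1}; the function name and all parameter values are illustrative assumptions, not taken from the text.

```python
import random

# Sketch (assumed values): simulating an ARIMA(0, 1, 1) -- i.e. IMA(0, 1, 1) --
# drifting disturbance z_t = z_{t-1} + a_t - theta * a_{t-1}, fed by
# standard normal random shocks a_t ~ N(0, 1).

def simulate_ima(n, theta, seed=None):
    rng = random.Random(seed)     # seeded generator for reproducible runs
    z, a_prev = 0.0, 0.0
    path = []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)   # random shock at time t
        z = z + a - theta * a_prev  # IMA(0, 1, 1) recursion
        a_prev = a
        path.append(z)
    return path

# A 'fairly noisy' non-stationary disturbance (theta = 0.70):
disturbance = simulate_ima(100, theta=0.70, seed=1)
```

In the text, the seed is taken from the computer's clock to randomise the simulation runs; a fixed seed is used here only so that the example is reproducible.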

5.2 PRINCIPLES OF DIRECT DIGITAL (DISCRETE) CONTROLLER DESIGN

(I) PROCESS IDENTIFICATION

One of the main focuses of direct digital control (DDC) is on the basic control function problems that are related to the choice of sampling period, control algorithms and the reliability of the processors. Researchers in modern control theory have done extensive studies to minimize the output product variance under feedback control by using 'state-space' models. A salient feature of the direct digital controller design, suggested and proposed for process and product quality control in this book, is the use and application of the stochastic feedback control algorithm (Equation 3.16) derived in Chapter 3, which is developed based on techniques from Automatic Process Control (APC) and Statistical Process Control (SPC) (Venkatesan [1997]). A detailed discussion and analysis of 'Stochastic (Statistical) feedback control adjustment' is given in Chapter 4. 'On-Line Statistical Process Control' has also been discussed in detail in MacGregor [1988]. There are many research publications available in the stochastic process control literature to show that it is possible to achieve the objectives of sampled-data (discrete) control by using digital techniques in process control of some time-delay models (Dahlin [1968], Palmor and Shinnar [1979]). There are some benefits that accrue from improved process modelling of time-delay control systems. Incidentally, this explains the motivation to use a second-order model with two dynamic parameters (δ1 and δ2) and two time constants (δ1 = e^(–T/τ1) and δ2 = e^(–T/τ2), where T is the sampling period and τ1 and τ2 are the process time constants). Moreover, it has been observed in practice that it is easier to control a dead-time process that has an additional lag in the form of an exponential term than a pure dead-time dynamic process.
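The relations δ1 = e^(−T/τ1) and δ2 = e^(−T/τ2) quoted above can be evaluated directly. The sketch below uses assumed values for T, τ1 and τ2 (not taken from the text) and illustrates the critically damped case, where the two time constants are real and equal so that δ1 = δ2.

```python
import math

# Sketch (illustrative values): the second-order dynamic parameters
# delta1 = e^(-T/tau1) and delta2 = e^(-T/tau2), where T is the sampling
# period and tau1, tau2 are the process time constants.

def dynamic_parameters(T, tau1, tau2):
    return math.exp(-T / tau1), math.exp(-T / tau2)

# Critically damped case: tau1 = tau2, hence delta1 = delta2.
d1, d2 = dynamic_parameters(T=1.0, tau1=4.0, tau2=4.0)
print(round(d1, 3), round(d2, 3))  # -> 0.779 0.779
```

A shorter sampling period T relative to the time constants pushes the parameters towards 1, i.e. the sampled dynamics retain more of the process inertia between samples.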
It is reported in Shinskey [1988] that if a process with dead time and two dynamic elements is approximated as a single dynamic element, the digital (sampled-data) process controller settings based on that approximation will be safe provided the ratio of the second time constant to the dead time does not exceed a value of 1.0. It is shown in Chapter 4 that the stochastic feedback control algorithm (Equation 3.16) calculates a series of adjustments in the input manipulated variable that compensate for the forecast disturbances by making an adjustment at every sample point, and minimizes the mean squared deviation (error) from target of the quality index. The stochastic time series feedback control algorithm (Equation 3.16) (Venkatesan [1997]) developed in Chapter 3 is given by

    xt = –(1 – Θ)(et – δ1et–1 – δ2et–2) / [PG(1 – δ1 – δ2)] – (1 – Θ)Xt–b    ...(3.16)


where δ1 and δ2 are the dynamic process parameters that represent the process dynamics, Θ is the integrated moving average (IMA) parameter that represents the noise, 0 < Θ < 1, and et is the forecast error; the algorithm acts on an integral over time of past errors. δ1 = e^(–T/τ1) and δ2 = e^(–T/τ2), where τ1 and τ2 are the process time constants and T the sampling period; τ1 and τ2 are real and equal for the 'critically damped' feedback control path, ensuring closed-loop stability. PG represents the process gain, realised as the total effect in the output caused by a unit change in the input variable after the completion of the dynamic response (Baxley [1991]). When δ1 and δ2 satisfy the conditions for feedback control stability given in (Box and Jenkins [1970, 1976]) and (Venkatesan [1997]), ω = PG(1 – δ1 – δ2) = 1 and PG = 1/(1 – δ1 – δ2) = g, the steady-state gain (Venkatesan [1997]). The feedback control algorithm (Equation 3.16) provides important clues as to the structure of the direct digital controller, with features of both adaptive and predictive control, and the design of the digital (integral) controller (Venkatesan [1997]). It is explained in Chapter 6 that the stochastic feedback control algorithm (Equation 3.16) has both integral action and dead-time compensation terms. Integral control is used in continuous process industries for control of noisy, drifting flow processes. This (feedback control) algorithm provides the discrete analogue of integral control and also has a stabilising effect on a feedback control system through adequate dead-time compensation. It is also explained that the feedback controller algorithm defines the feedback control adjustment (xt) that is to be made at the process input to compensate for the output deviation from the target/controller set point due to the disturbance, and minimises the variance of the output controlled variable (Venkatesan [1997]).
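A minimal sketch of the adjustment computed by Equation 3.16, under the reading x_t = −(1 − Θ)(e_t − δ1·e_{t−1} − δ2·e_{t−2})/[PG(1 − δ1 − δ2)] − (1 − Θ)·X_{t−b}. The function name and all numeric values below are illustrative assumptions, not taken from the text.

```python
# Sketch of the feedback control adjustment of Equation 3.16; names and
# numeric values are illustrative assumptions.

def control_adjustment(theta, d1, d2, pg, e, X, b):
    """e: forecast errors, e[-1] being e_t; X: past total adjustment
    actions ending at X_{t-1}, so X[-b] is X_{t-b}; b: dead-time in
    whole sampling periods."""
    integral_term = (e[-1] - d1 * e[-2] - d2 * e[-3]) * (1.0 - theta)
    integral_term /= pg * (1.0 - d1 - d2)   # equals 1 when PG = g
    dead_time_term = (1.0 - theta) * X[-b]  # dead-time compensation
    return -integral_term - dead_time_term

# Illustrative values (not from the text):
theta, d1, d2 = 0.75, -1.0, -0.25
pg = 1.0 / (1.0 - d1 - d2)        # steady-state gain g
errors = [0.3, -0.2, 0.5]         # e_{t-2}, e_{t-1}, e_t
past_actions = [0.1]              # ... , X_{t-1}
x_t = control_adjustment(theta, d1, d2, pg, errors, past_actions, b=1)
print(round(x_t, 5))
```

With PG set to the steady-state gain g = 1/(1 − δ1 − δ2), the denominator PG(1 − δ1 − δ2) collapses to 1, which is the condition ω = 1 stated above.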
It is also shown that the feedback control algorithm (Equation 3.16) is in conformance with the feedback control action equation of Kramer [1990] when the output variance is made equal to the variance (σa)2 of the random shocks, the at’s,
for achieving minimum variance or mean square control when dead-time b = 0. The control action is made up of the current deviation (et) and the past adjustment action xt–b. This control algorithm requires sampling to be done only at the adjustment intervals obtained via simulation of Equation 3.16, and results (only) in a slight increase in control error variability. Some of the features of good digital feedback control are: (i) permissible gain of the closed loop, (ii) stability of the feedback control loop and (iii) precise regulation of loops containing dead-time. In addition, the design of a direct digital (discrete) sampled-data controller should meet the following requirements: (i) the controller must be able to maintain the desired output variable at a given set point, (ii) it should be possible to design the digital sampled-data controller with a minimum of information with respect to the nature of the inputs and the dynamic structure of the discrete (sampled-data) control system, and (iii) the sampled-data controller must be reasonably insensitive to changes in the system limits. It must be stable and perform well over a reasonable range of system parameters (Palmor and Shinnar [1979]). The dead-time compensator removes the time-delay from closed-loop stability considerations and provides a stabilising effect on the feedback control system. These principles are used for designing the discrete (sampled-data) controller. Such a controller will maintain the mean of the process quality variable at or near some measurable target and will allow a rapid response to process disturbances without much overcompensation or overcorrection. Feedback control stability is achieved by considering the critically damped behaviour of the second-order dynamic system, as explained in Chapter 4, and by keeping the closed-loop gain less than or equal to 1.0 (Shinskey [1988]).
The forecast errors help determine the appropriate adjustment given by the control algorithm (Equation 3.16) in the input manipulated variable for returning the process to target by making the forecast and control errors equal.

5.3 DIRECT DIGITAL (SAMPLED-DATA) CONTROLLER PERFORMANCE MEASURES

It is explained in Chapter 4 that the performance measures of the stochastic time series controller are: (i) the 'Control Error Standard Deviation' (CESTDDVN) and (ii) the average 'Adjustment Interval' (AI), the mean AI being equal to the reciprocal of the mean 'Adjustment Frequency' (MFREQ), that is, AI = 1/MFREQ.
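A sketch of how these two performance measures could be computed from a simulation record. The indicator convention (1 for sample periods with an adjustment, 0 otherwise) follows the description of the simulation output given with Table 5.1; the data values and function name are assumed for illustration.

```python
import statistics

# Sketch (illustrative data): computing MFREQ, AI and CESD from a
# simulation record. CESD is the control error standard deviation
# relative to the standard deviation of the random shocks, sigma_a.

def performance_measures(control_errors, frequency, sigma_a=1.0):
    mfreq = sum(frequency) / len(frequency)          # mean adjustment frequency
    ai = 1.0 / mfreq if mfreq > 0 else float("inf")  # adjustment interval
    cesd = statistics.pstdev(control_errors) / sigma_a
    return mfreq, ai, cesd

errors   = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4, 0.0, 0.1]
adjusted = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]   # FREQUENCY indicator
mfreq, ai, cesd = performance_measures(errors, adjusted)
print(mfreq, ai)  # -> 0.2 5.0
```

With two adjustments in ten sample periods, MFREQ = 0.2 and the average adjustment interval is AI = 1/0.2 = 5 sample periods, matching the reciprocal relation stated above.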


It is pointed out that the same stochastic time series controller can be used as (i) a sampled-data controller (as shown in this chapter) and (ii) an integral controller and a model-based controller, as described in Chapter 9. The simulation results, performance measures and figures can be used for all these controllers, as these controller designs are derived from and based on the same stochastic feedback control algorithm (Equation 3.16). NOTE: CESTDDVN is also referred to as CESD for short in this monograph, and the mean frequency as MFREQ. As mentioned, the direct digital (sampled-data) controller performance measures are obtained from the simulation results of the feedback control algorithm (Equation 3.16). This dead-time simulation approach is used for comparing inter-sample variances at various sampling points and is also suggested for sampling periods considering two process time constants. The values of the control error standard deviation (CESD) and the adjustment interval (AI) for values of the ARIMA model noise parameter Θ ranging from 0.05 to 0.95 are shown in Table 5.1. These output results of the simulation programme show the standard deviation of the control errors (CESD) and an average (MFREQ) for an indicator variable (FREQUENCY), which takes the value 1 for sample periods with an adjustment and zero otherwise. The results are shown for values of Θ of 0.05, 0.25, 0.75, 0.90 and 0.95. Values of the noise parameter Θ closer to 0 may be termed less 'noisy', while a 'fairly noisy non-stationary disturbance' is characterised by a value of Θ = 0.70. The simulation results indicate that the stochastic feedback control algorithm (Equation 3.16) holds potential for reducing product variability (control error sigma, CESTDDVN).
For a single input-single output (SISO) system with zero input, the process described by the second-order model will return to the desired final (output) state Yt = 0, due to the iterative nature of the feedback control algorithm even if no control is applied.

5.4 PROCESS CONTROL METHODOLOGY

Process control methodology has been described in Chapter 4. However, the feedback control procedure is briefly reproduced here for better understanding and completeness of this chapter. In practice, the mean square error (MSE), or minimum variance, control algorithm is applied for a trial period to the process whose product variability is required to be reduced. Exponentially weighted moving average (EWMA) charts are put in place after observing the changes that take place in the process due to the input feedback adjustments. These changes in the process cause the control algorithm to estimate the required feedback control adjustment and bring the product quality variable exactly on target. The changes also make it possible to realise a reduction in the output control error standard deviation (CESD). The IMA parameter Θ is used for monitoring and indicating the out-of-control signal.


Table 5.1 Direct Digital (Sampled-data) Controller Performance Measures for Minimum Variance Control and Dead-Time, b = 1.0

 Θ        d1       d2       MFREQ     AI       CESD
0.05     –1.82    –0.83     0.000     0.00     1.000
0.05     –1.00    –0.25     0.102     9.80     1.039
0.25     –1.00    –0.25     0.050     20.0     1.030
0.25     –0.27    –0.02     0.075     13.33    1.073
0.75     –0.27    –0.02     0.000     0.0      1.000
0.75     –1.00    –0.25     0.000     0.0      1.000
0.90     –1.00    –0.25     0.402     2.4      1.002
0.90     –0.82    –0.83     0.121     8.2      1.000
0.95     –0.27    –0.02     0.213     4.7      1.002
0.95     –0.01     0.00     0.187     5.34     1.004

CESD = standard deviation of the forecast error / standard deviation of the random shocks = σe/σa
MFREQ = the mean frequency of process adjustment
Adjustment interval, AI = 1/MFREQ

The EWMA process monitoring system notifies, by means of out-of-control signals, the shifts in the quality variable, so that on-target performance can be maintained and further reductions in product variability achieved. The EWMA control limits give an indication of whether the forecast is significantly different from the target. Appropriate corrective control action based on the forecast is devised when an EWMA signal is obtained. The continuous feedback is temporarily removed and connected again after a short period of time. This is done to minimise the cost of adjustment and the cost of sampling, which can be significant in product-quality loops. This action is necessary to prevent overcompensation of the output product variable, which is characterised by frequent adjustments to the process. A minimum variance control algorithm helps in accomplishing set point changes in a smooth and rapid manner (Shinskey [1988]). The direct digital feedback controller designed and built on the principles enunciated in Section 5.2, with dead-time compensation and integral action terms in the feedback control algorithm (Equation 3.16), will also possess these characteristics. The drifting behaviour of the process is simulated using the first-order ARIMA (0, 1, 1) model fed by standard normal shocks N(0, 1). The input shocks are obtained from a random number generator with the seed based on the clock time in the computer to ensure complete randomization of the simulation runs. The property of the IMA (0, 1, 1) model that the forecast for all future time is an exponentially weighted moving average (EWMA) of current and past values of the disturbance is made use of to predict the disturbance. The ARIMA (0, 1, 1) time series model
is fitted to the variable quality data by superimposing the one-step-ahead forecasts along with the control limits. The forecast originating at any time t is a weighted average of the previous forecast (at time t – 1) and the current data. The variance of the output-controlled variable is minimised by making an adjustment which exactly compensates for the forecast disturbance. The process control methodology is explained with the help of graphical plots (Figures 5.1 and 5.2) of the values of Θ versus CESD and AI. These graphs show the variation in CESD due to the IMA parameter Θ and AI. The direct digital (sampled-data) controller gain (CG) is set at the value of 1.0 in the simulation since, for stable operation of a pure delay process, the maximum value of the controller gain reported in the literature is one.

[Figure: two plots of Theta versus CESD, one for δ1 = –1.82, δ2 = –0.83 and one for δ1 = –1.0, δ2 = –0.25]

Figure 5.1 Graph of Theta Versus CESD for Various Delta Values
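The EWMA forecast recursion described above, in which the forecast at any time is a weighted average of the previous forecast and the current data, can be sketched as follows. The smoothing constant is 1 − Θ, and the series values are assumed for illustration, not taken from the text.

```python
# Sketch of the EWMA one-step-ahead forecast used to predict the
# IMA(0, 1, 1) disturbance; series values are illustrative.

def ewma_forecasts(data, theta):
    lam = 1.0 - theta
    forecast = data[0]             # initialise at the first observation
    out = []
    for z in data:
        # weighted average of the current data and the previous forecast
        forecast = lam * z + theta * forecast
        out.append(forecast)       # one-step-ahead forecast of the next value
    return out

series = [10.0, 10.4, 10.1, 10.6, 10.3]
print([round(f, 5) for f in ewma_forecasts(series, theta=0.75)])
# -> [10.0, 10.1, 10.1, 10.225, 10.24375]
```

A Θ close to 1 gives a small smoothing constant, so the forecast discounts new observations heavily, which is consistent with the less aggressive adjustment implied by (1 − Θ) in Equation 3.16.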

The integrating control action is successful in eliminating 'offset' (the deviation obtained with proportional control), at the expense of reduced speed of response and an increase in the period of the feedback control loop due to its phase lag. The feedback control loop is over-damped when the integral time is too long; too short an integral time leads to unstable conditions. This fact, in a way, may also explain why it is necessary to consider the performance of the feedback control loop under 'critically damped' conditions.


[Figure: two plots of Theta versus AI, one for δ1 = –1.82, δ2 = –0.83 and one for δ1 = –1.0, δ2 = –0.25]

Figure 5.2 Graph of Theta Versus AI for Various Delta Values

The closed loop, under integral control, oscillates with uniform amplitude at the period where the system gain is unity. The integral or reset time is the time constant (I) of the controller. It affects only the period of oscillation, which increases with damping for a controller with integral action in a dead-time loop. Zero damping requires a loop gain of 1.0. The 'integral time' (Iu) for zero damping is 0.64PGTd, where PG is the process gain and Td the dead-time (Shinskey [1988]). Caution: please note that different notations are used for denoting dead-time in the control engineering literature. Damping can be achieved by reducing the closed-loop gain, that is, by making the integral time as nearly equal to the process dead-time as possible so that the process variable can follow the same path as the dead-time. Effective control of a process depends on the characteristics of the discrete feedback control loop and on the controller settings required to produce minimum output variance of the product. The period and the integral time required for damping of the feedback control loop are established by the process characteristics and by minimising the integral error. The integral error can be obtained from the feedback control algorithm and is given by the expression (et – δ1et–1 – δ2et–2).
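Shinskey's zero-damping tuning relation quoted above, Iu = 0.64PGTd, and the integral-error expression (et – δ1et–1 – δ2et–2) can be sketched numerically; the process gain, dead-time and error series below are illustrative values, not from the text.

```python
# Integral-time tuning for a pure dead-time loop (after Shinskey [1988]):
#     Iu = 0.64 * PG * Td   gives zero damping (uniform oscillation).
# The integral-error expression used in the text is
#     e_t - d1*e_{t-1} - d2*e_{t-2}.

def undamped_integral_time(process_gain, dead_time):
    """Integral time giving zero damping in a dead-time loop."""
    return 0.64 * process_gain * dead_time

def integral_error(e, d1, d2):
    """Evaluate e_t - d1*e_{t-1} - d2*e_{t-2} over a control-error series."""
    return [e[t] - d1 * e[t - 1] - d2 * e[t - 2] for t in range(2, len(e))]

if __name__ == "__main__":
    print(undamped_integral_time(process_gain=1.0, dead_time=2.0))  # 1.28
    print(integral_error([0.5, 0.3, 0.2, 0.1], d1=-1.0, d2=-0.25))
```

Setting the integral time above Iu lowers the loop gain below unity and so damps the oscillation.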

Integrated Statistical and Automatic Process Control

5.5 BENEFITS OF INTEGRAL CONTROL AND DEAD-TIME COMPENSATION It is reported in Baxley [1991] that the dead-time compensator term in the feedback control algorithm (Equation 3.16) is a direct result of a minimum variance strategy and that minimum variance control of processes with dead-time includes this type of dead-time compensation. The inclusion in a feedback control algorithm of the dead-time compensation term of either the Smith predictor (Smith [1959]) or Dahlin's [1968] on-line tuning parameter, whose values range from 0 to 1, will also result in a minimum variance control strategy for processes with dead-time (Palmor and Shinnar [1979]). Harris, MacGregor and Wright [1982] derived the minimum variance controller for a process with dead-time for which the number of whole periods of delay was equal to 2. They showed that, upon setting the value of Dahlin's [1968] parameter (the discrete time constant of the closed-loop process in the Dahlin controller) equal to Θ, the IMA parameter in the stochastic disturbance model, the two controllers were identical, thereby reconciling the different approaches. Harris, MacGregor and Wright [1982] noted: 'the IMA parameter Θ provides information about the magnitude of the closed-loop time constant'. The stochastic feedback control algorithm (Equation 3.16) is identical to Baxley's [1991] algorithm for a first-order SISO dynamic dead-time process model with no carryover effects (dynamics = 0). Similar to Baxley's [1991] statistical process control algorithm for drifting processes, the feedback control algorithm (Equation 3.16) includes both integral action and dead-time compensation terms, and the IMA parameter Θ (whose values range from 0 to 1) is set to take care of Dahlin's [1968] parameter to compensate for the dead-time.
So, it follows that the variance of the output product variable achieved by using Equation 3.16, with its integral control action and dead-time compensation terms, is a minimum. The dead-time compensation term removes the delay from stability considerations and provides a stabilising effect on the sampled-data process control system. These principles are used for designing the discrete (sampled-data) feedback controller. Such a sampled-data feedback integral controller will maintain the mean of the process quality variable at or near target and will allow a rapid response to process disturbances without much overcompensation or overcorrection (Shinskey [1988]). The minimum integral absolute error for integral control of dead-time is denoted by IAE. The value of IAE is achieved by setting the integral time constant of the feedback control loop to I = 1.6PGTd, where Td is the dead-time (Shinskey [1988]). The period of oscillation changes to about half of the original period for a value

of a time constant that is equal to the dead-time. An increase in the integral time constant above its un-damped value (Iu) adds damping to the feedback control loop and extends its period of oscillation; with the integral time constant raised, the controller contributes less phase lag (Shinskey [1988]). The deviation between the output controlled variable and its set point can be related linearly to the cost of process operation by integrating the deviation over time and equating it to the accumulated excess operating cost. The control objective, under such circumstances, is to minimise the integrated error. This criterion can be applied, for example, to control the quality of a product flowing into a storage tank: keeping the integrated error as low as possible keeps the quality of the product close to the set point. Integrated error can be significant in product-quality loops, where it may represent excessive manufacturing cost such as 'product giveaway'. Loops such as product-quality loops are characterised by lag-dominant dynamics, similar to the second-order dynamic model with two exponential terms, and the integral error for these processes varies with time. Sampling actually improves control of a dead-time process: it allows a controller with integral action to approach the best possible performance if its sample interval or 'scan period' is set equal to the dead-time. The scan period is the interval between executions of a digital controller operating intermittently at regular intervals. Setting the sample interval at the maximum expected dead-time makes the feedback control loop robust to increases that occur in the dead-time. To make the feedback control loop robust, the integral time should be set equal to the product of the process gain and the sampling period.
A controller with integral action changes the output variable so long as the deviation from target or set point exists and produces a slightly greater mean square error at the output than actually required (Shinskey [1988]). The rate of change of the output variable with respect to time is proportional to the deviation. The closed-loop gain must be 1 in order to sustain oscillations in the closed loop. A limitation of integral control is that there may be a maximum integral ('reset') rate, which cannot be exceeded without encountering stability difficulties, and the integral mode saturates when the input exceeds the range of the input manipulated variable. This condition, called 'integral windup', results in overshoot (of the output controlled product quality variable) before control is restored. Overshoot can be avoided by setting the integral time higher than is required for load regulation and can also be minimised by limiting the rate of set-point changes (Shinskey [1988]). A dynamic element, such as the integral, within the domain of linear controllers, has both beneficial and undesirable properties, so selection of the control mode requires a prior understanding of its benefits and drawbacks. There are criticisms of pure integral control, such as integral windup and overshoot; since the objective here is to minimise the variance of the output product (quality) variable, such criticisms can be put to rest.
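A minimal sketch of the integral-windup problem discussed above, assuming an integral-only controller with an actuator range of 0 to 1 and illustrative gains: clamping the integral term to the output range is a simple anti-windup measure, so the integral cannot wind up while the actuator is saturated.

```python
# Integral-only controller with a simple anti-windup clamp: the
# integral term is held inside the actuator range, so it cannot
# 'wind up' beyond the output limits while the actuator saturates.

class IntegralController:
    def __init__(self, integral_time, dt, out_min, out_max):
        self.ki = dt / integral_time     # integral gain per sample period
        self.out_min = out_min
        self.out_max = out_max
        self.integral = 0.0

    def update(self, set_point, measurement):
        error = set_point - measurement
        self.integral += self.ki * error
        # Anti-windup: keep the integral inside the actuator range.
        self.integral = min(max(self.integral, self.out_min), self.out_max)
        return self.integral

if __name__ == "__main__":
    c = IntegralController(integral_time=5.0, dt=1.0, out_min=0.0, out_max=1.0)
    for _ in range(10):
        u = c.update(set_point=10.0, measurement=0.0)
    print(u)   # stays at 1.0; without the clamp it would wind up to 20.0
```

Because the stored integral never exceeds the actuator range, control resumes without the large overshoot that windup would otherwise cause.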

5.6 DISCRETE (SAMPLED-DATA) CONTROLLER PERFORMANCE The digital controller required for a particular application depends upon the objectives of the process control system and the dynamic process model. The preferred controller for a specific task depends on the type of process to be controlled and the relative importance of performance and robustness. Control performance depends on the controller tuning. The performance of a (sampled-data) controller is usually expressed by a 'performance index', which provides a measure of the proximity of the control to minimum variance control; a value of zero for this index denotes minimum variance control. An advantage of discrete model-based controllers is the increased robustness that results from sampling the process at slower intervals. Robustness can be improved by detuning a controller, but its performance is decreased at the same time: performance and robustness are inversely related. Lowering the controller gain and slower sampling improve robustness at the cost of performance, and the highest performance brings with it the lowest robustness. High-performance controllers should therefore be capable of on-line self-tuning; i.e., they should be adaptive in nature.
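One common way to compute such a performance index, sketched here as an assumption in the style of the minimum-variance benchmark of Harris, is one minus the ratio of the minimum achievable variance to the actual output variance; as in the text, zero denotes minimum variance control.

```python
# Controller performance index relative to minimum variance control:
#     index = 1 - (minimum achievable variance / actual output variance)
# index == 0 -> the controller achieves minimum variance control
# index -> 1 -> output variance is far above the achievable minimum

def performance_index(actual_variance, minimum_variance):
    if actual_variance < minimum_variance:
        raise ValueError("actual variance cannot be below the minimum")
    return 1.0 - minimum_variance / actual_variance

if __name__ == "__main__":
    print(performance_index(actual_variance=1.0, minimum_variance=1.0))  # 0.0
    print(performance_index(actual_variance=2.0, minimum_variance=1.0))  # 0.5
```

Detuning for robustness shows up directly as a larger index, which is the performance/robustness trade-off described above.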

5.7 REMARKS, CONCLUSIONS AND INDUSTRIAL APPLICATIONS This chapter discussed the principles and design of the direct digital (discrete) controller along with an analysis of the controller performance measures. The process control methodology to minimise output product variance was explained, as were the benefits of integral control and dead-time compensation. The chapter concludes with some remarks and industrial applications. The discrete (sampled-data) controller performance measures for control of output product quality by sampled data are obtained by simulating the stochastic feedback control algorithm (Equation 3.16) developed for the critically damped second-order dynamic process control system. A model-based controller provides good feedback control of a dead-time process, giving the so-called 'dead-beat' response (with a closed-loop gain of 1.0). The dead-beat response of a model-based sampled-data controller under critically damped conditions can be exploited to the advantage of the process control industry. By considering feedback control stability, the design of the direct digital controller has resulted in an efficient controller that is not very sensitive to the form of the dynamic process model or to small perturbations in its parameters. This control strategy is also the outcome of employing an appropriate second-order model for the process and the popular ARIMA (0,1,1) time series model for the disturbance. An industrial application of this type of direct digital controller can be in the area of continuous process industries such as petroleum refining and petrochemical

engineering. In these industries, even a small but significant reduction in output variance can result in huge savings in cost and a great improvement in process control system performance. The performance enhancement can result in reduced cost, increased productivity and improved quality. In paper-making, for example, a reduction in moisture variation of 1% in paper-machine control amounts to an annual saving of $100,000. A prime factor in paper-making is the 'basis weight', the weight per unit area of the paper. The performance assessment of 'basis-weight' control on an industrial paper machine minimises the control error variance of the basis weight. Weight control is directed towards reduction of basis-weight, moisture and fibre variations. Digital controllers automatically level the distribution of paper stock (the thin pulp suspension) on the wire to optimise weight and moisture profiles. Digital control in pulp and paper production simplifies the co-ordination between the (digital) motor drives to maintain speed and draw profiles for different grades of paper and stock material. When a recipe is changed, the digital controller can recalculate 'draw' values automatically and then change speeds in a section to match requirements in the rest of the machine (Moore [1986]). Adaptive (self-tuning) control of the digital controller is achieved by setting the IMA parameter Θ of the ARIMA model to match the disturbance so that the process adapts itself to the changes in the white noise. Adaptive tuning takes into consideration the change in the digital controller response that becomes necessary as the conditions in the digester operations change. Gains that can be realised from digital computer control are higher yields, steam savings, decreased quality variations, reduced load on the recovery area and higher production rates.
In the 'reheat' facilities for steel and aluminium (preceding rolling operations), single-loop digital controllers are used for three zones, in which temperature, fuel/air ratio and furnace pressure are controlled by interfacing a mini computer to the steel mill's scheduling computer to coordinate the work flow. In such operations, the mill products are constantly changing in accordance with customer order requirements. Billets of different sizes and compositions require different reheat cycles and close timing to have them ready for rolling, but not so early that energy must be used to maintain their condition. For this energy-intensive operation, a mathematical model of the 'reheat' furnace continually computes the thermal inertia of the furnace and of the work to optimise combustion and cause each piece of work to leave the furnace at conditions optimised for production and quality (Moore [1986]). It may be concluded that the proposed sampled-data (digital) controller can be a proper choice in its application to reduce output variance and hence to aid in the control of product quality.

REFERENCES

Baxley, Robert V., (1991). A Simulation Study of Statistical Process Control Algorithms for Drifting Processes, SPC in Manufacturing, Marcel Dekker, Inc., New York and Basel.
Box, G.E.P. & Jenkins, G.M., (1970, 1976). Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco.
Box, G.E.P. & Kramer, T., (1992). Statistical Process Monitoring and Feedback Adjustment: A Discussion, Technometrics, vol. 34, no. 3, pp. 251-267.
Dahlin, E.B., (1968). Designing and Tuning Digital Controllers, Instruments and Control Systems, 61 (6), pp. 77.
Harris, T.J., MacGregor, J.F. & Wright, J.D., (1982). An Overview of Discrete Stochastic Controllers: Generalized PID Algorithms with Dead-Time Compensation, The Canadian Journal of Chemical Engineering, vol. 60, pp. 425-432.
Kramer, T., (1990). Process Control from an Economic Point of View: Dynamic Adjustment and Quadratic Costs, Technical Report No. 44, Centre for Quality and Productivity Improvement, University of Wisconsin, USA.
MacGregor, J.F., (1988). On-Line Statistical Process Control, Chemical Engineering Progress, 84, pp. 21-31.
MacGregor, J.F., (1992). Discussion: Integrating SPC and APC, Technometrics, vol. 34, no. 3, pp. 273-275.
Moore, J.A., (1986). Digital Control Devices, Equipment and Applications, Instrument Society of America, ISA Press, Research Triangle Park, NC, USA.
Palmor, Z.J. & Shinnar, R., (1979). Design of Sampled Data Controllers, Industrial and Engineering Chemistry Process Design and Development, vol. 18, no. 1, pp. 8-30.
Shinskey, F.G., (1988). Process Control Systems, McGraw-Hill Book Company, New York.
Smith, O.J.M., (1959). A Controller to Overcome Dead-Time, ISA Journal, 6 (2), pp. 28-33.
Vander Wiel, S.A. & Vardeman, S.B., (1992). Discussion: Integrating SPC and APC, Technometrics, vol. 34, no. 3, pp. 278-281.
Venkatesan, G., (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes), Ph.D. thesis, Victoria University of Technology, Melbourne, Australia.

Suggested Exercise 1. Discuss a practical sampled data control application in industry.

Chapter 6

Process Control for Product Quality

6.1 INTRODUCTION This chapter describes a method to control output product quality (product variability) by applying the stochastic feedback control algorithm developed in Chapter 3. It is well known that Automatic Process Control (APC) techniques are widely used to control process variables such as feed rate, temperature, pressure and viscosity, as well as product quality variables. Statistical Process Control (SPC) techniques have also been applied to monitor a process and to control product quality. It is shown in Chapter 4 that it is possible to reduce or minimise variation in the measured output characteristics to an acceptable level by means of constrained variance control schemes and process regulation schemes. APC aims to maintain certain key process variables as near to their set points for as much of the time as possible. There are situations in process control where some form of feedback control exists and yet stability cannot easily be attained in the feedback control loop. It is also known that disturbance (noise) afflicts a process and, together with issues of dynamics (the inertia of the process) and dead-time (time-delay), compounds the process control problem. The process control practitioner faces a challenge in tackling issues of process delay and dynamics (inertia). Process control of product variability is possible by simulation of the stochastic feedback time series controller algorithm (Equation 3.16) for drifting (dynamic) processes in the presence of dead-time (time-delay). Chapter 3 explains how to tackle issues connected with feedback (closed-loop) control stability, controller limitations and dead-time compensation to obtain minimum variance control at the output. Details of a method to control the quality of a product at the output by applying statistical process monitoring and feedback control adjustment are presented in this chapter.
The focus of this chapter is on the issues of process delay ('dead-time') and dynamics ('inertia') at

the interface between statistical process control (SPC) and automatic process control (APC) to control output product quality. It is necessary to control manufacturing processes during operation so as to provide products that satisfy customer requirements. Products of the desired quality can be manufactured by statistical monitoring of production processes and by proper feedback control adjustment of the process input(s) in order to compensate for the deviation of the output(s) from the target/controller set point. The process drifts off target due to noise (disturbance) if no compensatory adjustments are made. The stochastic feedback control algorithm (Equation 3.16) developed in Chapter 3 gives information about the number of units of adjustment required at the input to compensate for the effects of the disturbance at the output. This algorithm minimises the variance of the output-controlled variable by making an input adjustment at every sample point that exactly compensates for the disturbance. Conventional practice in engineering or automatic process control uses the potential for step changes to justify an integral term in the feedback controller algorithm, giving long-run compensation for a shift in the product quality mean. By identifying and minimising dead-time in a process, a reduction in product variability (control error standard deviation) can be achieved. A process control method for control of output product quality is proposed in this chapter by simulating the feedback control algorithm (Equation 3.16) for dead-time processes. The feedback time series controller algorithm is simulated and the simulation results are used to compute a series of input adjustments, which compensate for the forecast error and minimise the variance (mean square error) of the output product quality. Computation of the feedback control algorithm gives the number of units of input adjustment.
An 'out of control' signal, obtained by means of a plot of the Exponentially Weighted Moving Average (EWMA) forecasts on an EWMA chart, indicates to the process control practitioner how many units of input control adjustment should be made to the process and the time at which to make them.
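The EWMA-chart signalling described above can be sketched as follows; the control-limit half-width uses the standard steady-state EWMA limit L·σ·√(λ/(2−λ)), and the weight λ, width L and data values are illustrative assumptions.

```python
# EWMA chart: signal that an input adjustment is needed whenever the
# EWMA forecast falls outside control limits placed around the target.

def ewma_chart_signals(data, target, sigma, lam=0.4, width=3.0):
    # Steady-state control-limit half-width for an EWMA chart.
    half_width = width * sigma * (lam / (2.0 - lam)) ** 0.5
    ewma = target                      # start the EWMA at the target
    signals = []
    for t, z in enumerate(data):
        ewma = lam * z + (1.0 - lam) * ewma
        if abs(ewma - target) > half_width:
            signals.append((t, ewma))  # sample time and forecast at signal
    return signals

if __name__ == "__main__":
    data = [0.1, -0.2, 0.3, 1.5, 2.0, 2.2]   # drifting away from target 0
    print(ewma_chart_signals(data, target=0.0, sigma=0.5))
```

Each signalled point tells the practitioner when an adjustment is due; the distance of the forecast from the target indicates how large that adjustment must be.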

6.2 PROCESS CONTROL BY DIGITAL COMPUTER The use of automation techniques in industry makes it possible to apply digital computer capabilities to the solution of process control problems. The development of computer technology, combined with the knowledge gained about computer process control, has led to an increase in the application of digital computers to process control. The control engineer faces a challenge in the control of production processes involving time-delays (dead-time) because of their non-linear nature. The significant lag introduced by the dead-time into the system response frequently makes conventional control algorithms a poor prospect. The innovative

feedback control algorithm (3.16) derived in Chapter 3 gives minimum variance control even in the presence of dead-time, and it is shown that it has both integral control action and dead-time compensation (explained in Chapter 4). Integral control is extensively used in the continuous process industries for control of noisy, drifting flow processes (Shinskey [1988]). No constraint need be placed on the input variable in order to obtain minimum variance at the output. Process control loops containing pure dead-time are difficult to stabilise with conventional three-mode or three-term (PID) controllers (explained in Section 2.5). The controller suggested in the literature (Smith [1957]) for control of a dynamic process with dead-time is an integral or 'floating' controller (Buckley [1960]). The integral controller is preferable because of its simplicity, low cost and easy adjustment. The time series control algorithm (Equation 3.16) derived in this monograph requires sampling to be done only at the adjustment intervals (sample periods) obtained via simulation, resulting in a slight increase in control error variability, and does not call for an adjustment every time a sample is taken. Digital computers are used for on-line automatic process control (APC) to monitor processes, where the output is measured and the input is changed only at discrete intervals of time. Computer-based controllers incorporating tuning tools make it practical to use complex algorithms. Direct digital control (DDC) is the term used for controlling processes directly by computers. DDC focuses on the basic control functions such as the choice of sampling period ('the time interval between observations in a periodic sampling control system'), control algorithms and reliability of processors.
It is reported in Baxley [1991] that the 'Smith predictor (Smith [1959]) is a direct result of a minimal variance control strategy and that minimal variance control for processes having dead-time includes this type of dead-time compensator'. Moreover, the inclusion in a feedback control algorithm of the dead-time compensation term of either the Smith predictor or Dahlin's [1968] on-line tuning parameter, whose values range from 0 to 1, will also result in a minimum variance control strategy for processes with dead-time (Baxley [1991]). Harris, MacGregor and Wright [1982] observed that minimum variance control could be achieved by setting the IMA noise parameter Θ to match the disturbance, as well as the closed-loop time constant (Dahlin's parameter) to compensate for the dead-time. In line with this observation, a conjecture is made that the variance of the output product variable achieved by the feedback control algorithm (3.16), with its integral control action and dead-time compensation terms, is a minimum (Venkatesan [1997], Venkatesan [2002]). The dead-time compensator provides a stabilising effect on the feedback control system. Feedback control stability is achieved by keeping the closed-loop gain less than or equal to 1.0 (Shinskey [1988]). The control adjustment (xt) given by the feedback control algorithm (3.16) minimises the variance or mean square error (MSE) of the

output-controlled variable by providing adequate dead-time compensation through the IMA noise parameter Θ (Venkatesan [1997], Venkatesan [2002]). A minimum variance feedback control algorithm will bring the process back to the set point with less oscillatory behaviour than is usually experienced under manual control. It will also help in accomplishing set-point changes in a smooth and rapid manner (Shinskey [1988]). The feedback controller designed and built on the principles of integral control and dead-time compensation will also possess these characteristics. The controller's objective is to minimise the mean squared deviation (error) from target of the quality characteristic. As the level of the input manipulated variable at time t, Xt, is set to compensate for the forecast, the adjustment or change xt (that is, Xt – Xt–1) in the input manipulated variable is calculated to compensate for the change in the forecast from the previous sample period (Venkatesan [1997], Baxley [1991]). The (time series) feedback integral controller has the characteristic that its control error variance is the (b + 1)-step-ahead forecast error variance, where 'b' is the dead-time. This feature of the feedback controller allows the control error variance to be known from the (b + 1)-step-ahead forecast error variance.
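The adjustment rule described above, where the change xt in the input manipulated variable compensates for the change in the forecast from the previous sample period, can be sketched as follows; the process gain used to scale the adjustment is a hypothetical illustrative parameter, not from the text.

```python
# Input adjustments from forecast changes: each adjustment x_t offsets
# the change in the disturbance forecast since the previous sample
# period, scaled by the inverse of an (assumed) process gain.

def input_adjustments(forecasts, process_gain):
    adjustments = []
    for t in range(1, len(forecasts)):
        delta_forecast = forecasts[t] - forecasts[t - 1]
        adjustments.append(-delta_forecast / process_gain)
    return adjustments

if __name__ == "__main__":
    forecasts = [0.0, 0.5, 1.0, 1.0, 0.75]   # illustrative forecast series
    print(input_adjustments(forecasts, process_gain=2.0))
```

When the forecast does not change, no adjustment is made; a rising forecast calls for a compensating move of the input in the opposite direction.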

6.3 DIRECT DIGITAL CONTROL (DDC) OR SAMPLED-DATA SYSTEMS In this section, we explain Direct Digital Control (DDC), also called Sampled-Data Systems. Later, in Section 6.6.2, we explain the effect of dead-time in sampled-data (discrete) control systems. In the traditional (conventional) feedback control system, a measuring system senses the value of the output controlled variable and transmits a message dependent on it to the controller. The controller compares this value with the desired value of the controlled variable, set by an input variable called the 'set point', so as to generate a deviation called the 'error'. The controller acts on this error to produce a control signal. This signal is then fed to a final control element, which is an automatic positioning valve, to reduce the error. In sampled-data (discrete) digital control systems, in contrast to conventional control systems, the discrete (digital) signals represent information by a set of discrete values in accordance with a prescribed law. In a basic sampled-data (discrete) digital control system, the electrical signal represents the output controlled variable and is fed to a device called an analogue*-to-digital converter, where it is sampled. The sampling period, a constant in process control applications, is called the 'clock period' in digital computer terminology. The value of the discrete (digital) signal is compared with the discrete form of the set point in the digital computer to produce an error. A control algorithm is executed, yielding a discrete controller output. This discrete (digital) signal is then converted to

an electrical signal by a digital-to-analogue converter and then fed to a final control element. The control strategy is repeated so as to achieve closed-loop (feedback) computer control of the process and this type of sampled-data (discrete) control technique is referred to as ‘direct-digital computer control’ (*An analogue system is one in which the data are everywhere known or specified at all instants of time and the (input, output) variables are continuous functions of time). In a sampled-data (discrete) control system, the analogue controller in a conventional control system is replaced by a digital computer and the control action produced by the controller in the feedback (closed) loop is initiated by the computer programme. The feedback controller is a special-purpose analogue computer used in the direct digital (discrete) (sampled-data) control of production processes. Digital computers automatically collect data about a process and its operating conditions, provide details about the product produced by the plant, its reliability and specifications. It is possible to achieve the objectives for sampled-data (discrete) digital control (DDC) by using digital techniques in the process control of some time-delay models.
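The direct digital control cycle described above (sample the output, compare with the set point, execute the control algorithm, send the result to the final control element) can be sketched as a loop; the first-order process model and integral gain here are illustrative stand-ins, not the process model of this monograph.

```python
# Minimal direct digital control (DDC) cycle: at each clock period the
# output is sampled, compared with the set point, a discrete control
# algorithm is executed, and the result goes to the final control
# element. Process model and gain are illustrative stand-ins.

def run_ddc_loop(set_point, n_periods, ki=0.5):
    y = 0.0          # sampled (digitised) output controlled variable
    u = 0.0          # controller output to the final control element
    history = []
    for _ in range(n_periods):
        error = set_point - y          # compare with the set point
        u += ki * error                # discrete integral control algorithm
        y = 0.5 * y + 0.5 * u          # illustrative first-order process
        history.append(y)
    return history

if __name__ == "__main__":
    trace = run_ddc_loop(set_point=1.0, n_periods=20)
    print(trace[-1])   # close to the set point of 1.0
```

In a real DDC installation the two assignments to `y` and `u` correspond to the analogue-to-digital and digital-to-analogue conversions around the process.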

6.4 JUSTIFICATION FOR THE USE OF DDC The use of digital controllers offers advantages such as (i) making available a wider selection of process control algorithms than analogue controllers, (ii) faster calculations, (iii) logic capabilities at both the controller input and output, (iv) on-line restructuring of (control) loops and (v) adaptive-control features. Factors in justifying computer control for a given application include the number of conventional controllers that are to be replaced by digital computers and whether better process control performance results. The computer should be used to automate functions and operations that could not be automatically accomplished earlier. Feedforward control, dead-time compensation and optimal control techniques can be implemented by exploiting the capabilities of the computer and the use of DDC hardware systems. Control strategies can be implemented that are otherwise impractical or impossible with conventional analogue hardware. The availability of control computers makes possible a hybrid approach to process control involving both digital and conventional analogue capabilities. The implementation of control strategies is achieved by leaving with conventional analogue control systems those (feedback) loops where feedback control is envisaged, and employing direct digital control (DDC) only for those process loops in which significant improvements in control performance can be made.

6.5 SELF-TUNING (ADAPTIVE CONTROL) Marshall [1979] showed that process control is possible if a system is sufficiently well understood to be modelled. Time-delay control systems benefit from improved process modelling, which makes possible the application of the principles of prediction to the stochastic disturbances in order to improve control. On-line identification methods for systems afflicted by disturbance give recursive estimates for use in adaptive controllers; parameter estimation (identification) methods thus lead to adaptive control. Control systems combining recursive methods of estimation with minimum variance controllers are called 'self-tuning' (adaptive) controllers. Digital computation facilitates simultaneous estimation of parameters and on-line control and provides the computer solutions required for adaptive control. Process regulation schemes that involve recursive techniques are programmed using microprocessors. Modelling and formulation (design) of self-tuning regulators proceeds by (i) determining suitable model structures, (ii) estimating model parameters recursively and (iii) using the estimates to calculate the control. A regulator (controller) with facilities for tuning its own parameters is called a 'self-tuning regulator'. The self-tuning regulator controls processes by suitably altering its algorithms to track process parameters that change with time.
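Step (ii), recursive estimation of model parameters, is commonly implemented with recursive least squares (RLS); the sketch below assumes a simple first-order model y_t = a·y_{t−1} + b·u_{t−1}, which is an illustrative stand-in rather than the model of this monograph.

```python
# Recursive least squares (RLS): the recursive identification step used
# inside a self-tuning regulator. Estimates a and b in the assumed
# model y_t = a*y_{t-1} + b*u_{t-1} from input/output data.

def rls_estimate(y, u, forgetting=1.0):
    theta = [0.0, 0.0]                     # [a, b] estimates
    P = [[1000.0, 0.0], [0.0, 1000.0]]     # large initial covariance
    for t in range(1, len(y)):
        phi = [y[t - 1], u[t - 1]]         # regressor vector
        err = y[t] - (theta[0] * phi[0] + theta[1] * phi[1])
        # gain K = P*phi / (forgetting + phi' * P * phi)
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = forgetting + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]
        # update estimates and covariance: P <- (P - K*(P*phi)') / forgetting
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        P = [[(P[0][0] - K[0] * Pphi[0]) / forgetting,
              (P[0][1] - K[0] * Pphi[1]) / forgetting],
             [(P[1][0] - K[1] * Pphi[0]) / forgetting,
              (P[1][1] - K[1] * Pphi[1]) / forgetting]]
    return theta

if __name__ == "__main__":
    # data generated from y_t = 0.8*y_{t-1} + 0.5*u_{t-1} (noise-free)
    u = [1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
    y = [0.0]
    for t in range(1, len(u)):
        y.append(0.8 * y[t - 1] + 0.5 * u[t - 1])
    print(rls_estimate(y, u))   # estimates approach [0.8, 0.5]
```

In a self-tuning regulator, step (iii) would recompute the control law from these estimates at every sample period; a forgetting factor below 1 lets the estimates track parameters that change with time.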

6.6 DEAD-TIME OR TIME-DELAY

6.6.1 The Need to Identify Dead-time

Identifying and minimising dead-time in production processes is one of the measures that can be adopted to achieve a reduction in product variability (control error standard deviation) if the best achievable performance is not adequate to provide minimum variance control. Dead-time is the property of a production system by which the response to a control adjustment is delayed in its effect. It is 'the interval of time between initiation of an input change and the start of the resulting observable response'. There may be a finite delay before any effect is observed in the output when changes are made to a process input. Dead-time occurs when process materials move from one processing stage to another without any change taking place in the properties or characteristics of the processed materials. Such delays are caused by flows of liquids or gases through pipes. Time-delays are also known by various other names, such as transportation lag, velocity-distance lag or pure delay. Dead-time may be a problem of transportation and is present in (process) control systems. Time-delays also occur in human biological, political and economic systems. The effect of dead-time in these systems is discussed particularly


in the works of Bateman [1945], Justin [1953], Howarth and Parks [1972], York [1972], Howarth [1973] and Smith [1974] (Marshall [1979]) and in other publications in the literature. Dead-time hinders satisfactory control of processes by making the response to control actions sluggish and so, where possible, efforts must be made to reduce it. Time-delays are often created by sampling systems, so it may be necessary to decrease the frequency of taking samples from a process; sampling at periods shorter than the delay period may not be useful when delays occur in a process. An effective way of improving process control is to reduce or eliminate the (feedback) dead-time, since a feedback control strategy by itself cannot return the process output to its target value until the process dead-time has elapsed. A feedback controller applies corrective action to the input of a process based on the present observation of its output; in this way, control action is moderated by its effect on the process. A process containing dead-time produces no immediate effect and thereby delays control action. Dead-time is one of the most difficult dynamic elements occurring in many production systems. The delay produces a change* in slope of the input-output curve, and this property becomes an essential consideration in feedback loops characterised by the behaviour of the critical quality variable (during transition between two steady states). In view of this, feedback control-system design techniques must be capable of identifying and dealing with dead-time (*called 'phase shift' or 'span shift' in control theory terminology). There is a time-delay, for example, when an adjustment is made to the flow rate of a liquid travelling a significant distance between two receptacles, or in measuring the thickness of insulation while coating a wire.
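As a toy illustration of this behaviour (not taken from the book), the sketch below simulates a first-order process with a hypothetical transport delay of f = 3 sampling periods. A step change in the input produces no observable response at all until the dead-time has elapsed, which is exactly why a feedback loop cannot correct the output any sooner.

```python
def step_response(gain=1.0, lag=0.6, f=3, n=12):
    """Step response of a first-order process with f periods of dead-time
    (all parameter values hypothetical)."""
    u_hist = [0.0] * f           # input values still 'in the pipe'
    y, out = 0.0, []
    for t in range(n):
        u = 1.0                  # unit step applied from t = 0
        u_hist.append(u)
        u_delayed = u_hist.pop(0)                    # input from f periods ago
        y = lag * y + (1 - lag) * gain * u_delayed   # first-order lag
        out.append(y)
    return out

resp = step_response()
# resp[0:3] stay at 0.0 (no response during the dead-time); thereafter
# the output rises toward the steady-state gain.
```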
A time-delay is significant over long distances in remote control systems and in processes which involve complex chemical reactions. There will be cases of delay in control because of remote operations, which may be several time periods in duration. Many industrial processes, particularly thermal and distillation processes, may be best represented by including an element of time-delay in the system model.

6.6.2 Sampled-data (Discrete) Control System and Dead-time

Sampled-data techniques involve storing (holding) information and releasing it when required, which is itself a delay process. In this manner, a connection exists between sampled-data (discrete) control systems and delays. Systems involving the use of digital computers in process control rely on stores of memory: reliable storage or holding of data introduces a delay, between the input or the calculation and the output, of some multiple of the clock period. Sampled-data techniques enable algorithms to be used in numerical analysis (digital computing methods). Formulating a process control problem in sampled-data form enables a solution to be found where it is difficult to analyse the corresponding control


problem in continuous time. The control problem in the sampled-data (discrete) case is solved by modelling the disturbance by time series techniques; the characterisation of the disturbance in continuous time is difficult to treat in a rigorous fashion. Smith's [1957] prediction technique and its extensions are well suited to digital computing (numerical) methods. In production systems where there are lags, as in production involving chemical reactions, it is often convenient, from a modelling point of view, to replace the accumulated effects of the lags by a single time-delay; there are many complex processes in which this assumption is helpful. Process control schemes incorporate information regarding the process into the controller by having a process model (with delay) built into the controller mechanism. The dead-time element required for building the controller mechanism is not usually physically realisable and, even if approximated, results in increased costs and inaccuracies in process modelling. So it is usual to assume that the value of the time-delay of the process in discrete (sampled-data) control systems is 'a priori' information. Sampled-data (discrete) control is used to provide the plant operator with information about the control actions (adjustments) that should be taken to account for the plant dynamics (inertia) and the nature of the stochastic (random) disturbances.

6.7 IDENTIFICATION OF PROCESS DEAD-TIME

For satisfactory operation and the best possible performance of the process, it is necessary to ensure that a process containing an element of time-delay is not affected by parametric variations or extraneous noise (disturbance). Suitable (feedback) control strategies may be employed to minimise the effects of external disturbance and of variations in the process parameters. A control strategy may be defined as a set of rules by which a control action is determined when an output deviates from a desired set point; it is an algorithm or control equation that determines the controller output as a function of the present and past measured errors (Deshpande and Ash [1981]). An appropriate (feedback) control strategy for a process containing an element of time-delay is to assume a dynamic model which adequately represents the process to be controlled. This model should be capable of tracking any variations in the parameters of the process; thus the process must be identified continually and the parameters of the model adapted accordingly. Identification of a process consists of deriving a suitable form for the model and fitting it with the required parameters. The form of the model and its initial values are determined beforehand and, as the process operates, it is usual in practice to determine the changes in the process parameters. For this purpose, an ARIMA (0, 1, 1) time series model, described earlier in Chapter 3, is assumed for the process. In order to reduce the effects of the disturbance on the system output, an estimate of the disturbance is required (Mitchell [1987]).
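A minimal sketch of such a disturbance estimate follows, under the standard time series result (used in Chapter 3 of the book) that the minimum-MSE one-step forecast for an ARIMA (0, 1, 1) disturbance is an EWMA of the past data. The parameter value Q = 0.6 and the function name are illustrative assumptions.

```python
def ewma_forecasts(z, Q=0.6):
    """One-step-ahead EWMA forecasts of an ARIMA(0,1,1) disturbance.
    Q is the IMA parameter; lam = 1 - Q is the EWMA weight."""
    lam = 1.0 - Q
    zhat = z[0]                  # initialise the forecast at the first value
    out = []
    for obs in z:
        zhat = zhat + lam * (obs - zhat)   # recursive EWMA update
        out.append(zhat)
    return out

f = ewma_forecasts([5.0, 5.0, 5.0, 5.0])
# for constant data the disturbance estimate stays at the constant level
```

The recursion needs only the previous forecast and the latest observation, which is what makes it attractive for on-line identification.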


Having discussed dead-time and its characteristics, focus now turns to the other dynamic parameters of the process, namely the dynamics (inertia) and 'r', the rate of process drift.

6.8 THE ROLE AND CHARACTERISTICS OF INERTIA IN A PROCESS

The concept of inertia is explained by the term 'capacity'. In automatic control, a capacity is a location where mass or energy can be stored; it acts as a buffer between inflowing and outflowing streams and determines how fast the level of mass or energy can change. The mechanical measure of the property 'capacitance' is 'inertia', which determines the amount of energy that can be stored in a stationary or flowing liquid, fluid, gas or fine granular material. Inertia is an important determinant of an optimal process control system. A control action applied to a process at time zero may not be fully effective until some significant time has elapsed, owing to the system dynamics (inertia). This is particularly true in the process industries, where attempts to compensate for disturbances while ignoring the dynamics may lead to inappropriate control actions. The need to allow for dynamics is less common in the parts industries and, in view of this, controllers specifically built to deal with the dynamics but not tuned properly may be ineffective in such situations. Excessive changes in the input variable may be required when a minimum variance feedback control scheme is applied to a monitored process. This may be because (i) the parameter governing the dynamics (inertia) of the system, d, is large in relation to the monitoring interval and (ii) there may be no penalty associated with large adjustments. Kramer [1990] gave a method for evaluating the expected variance of the control actions (the adjustment variance), using the fact that minimum variance control generates deviations from target that are equivalent to the uncorrelated random shocks. The adjustment variance becomes larger as d approaches the value one. Since the dynamic parameter d is a function of the monitoring (sampling) interval, it is possible to reduce its inertial effects by lengthening the (monitoring) interval.
As d gets larger, the adjustment variance can likewise be reduced by suitably lengthening the monitoring interval. Kramer [1990] substantiated this with arguments leading to the conclusion that altering the monitoring interval also changes the variance. Abraham and Box [1979] showed that changes in d affect the optimal control adjustment and also the resulting variance of the optimum control adjustment, var(∇xt). The parameter d plays a minor role in determining the monitoring interval corresponding to an increase in the mean square error (variance) of the deviation, whereas the rate of drift of the process,


'r', plays a dominant role. This is true when the value of d is not near one, as a result of the small bias arising from the dynamic nature of the input-output relationship (Baxley [1991]). In view of this, the role of the parameter 'r' in changing the variance is discussed in Section 6.9.

6.9 THE RATE OF DRIFT OF THE PROCESS ('r')

For the ARIMA disturbance model for processes with drifting behaviour from a given fixed target value, the disturbance process zt is

    zt = ẑt + at    ...(6.1)

where ẑt is an estimate of zt which is independent of at and is an EWMA of the past data defined by

    ẑt = r (zt−1 + Q zt−2 + Q² zt−3 + ...),  0 ≤ Q < 1.    ...(6.2)

The coefficients r, rQ, rQ², ... in Equation (6.2) form a convergent sequence that sums to unity (which requires r = 1 − Q). Combining Equations (6.1) and (6.2),

    zt = ẑ1 + at + r Σ(i=1 to t−1) ai,  0 < r ≤ 1.    ...(6.3)

In particular, if the process mean is set on target at time t = 1 by adjusting its level so that ẑ1 = 0, the subsequent course of the deviations from target is represented by

    zt = at + r Σ(i=1 to t−1) ai,  0 < r ≤ 1.    ...(6.4)

Equation (6.4) is an interpolation between the sequence of uncorrelated random shocks, NID(0, σa²), of the stationary disturbance equation zt = at, obtained as 'r' approaches 0 (a process in a perfect state of statistical control with no drift), and the highly non-stationary random-walk model

    zt = Σ(i=1 to t) ai.    ...(6.5)

Equation (6.5) is obtained when 'r' = 1 in Equation (6.4). The purpose of this discussion is to show that, for intermediate values of 'r', the process can represent slight, moderate or severe degrees of non-stationarity (drift). When the process drift 'r' = 0, the disturbance is a sequence of random shocks and the process is in a perfect state of control, requiring no control action. When the drift 'r' = 1, the degree of non-stationarity is so extreme that it can hardly be regarded as describing any control situation likely to be met in real life; it has been shown in the literature that the variance then doubles after only two monitoring intervals.
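Equation (6.4) is straightforward to simulate. The sketch below (illustrative values only) generates zt for a chosen r and checks, by Monte Carlo, the statement that with r = 1 the variance doubles after only two monitoring intervals: z2 = a2 + a1, so Var(z2) = 2σa².

```python
import random

def simulate_z(r, n, rng):
    """Generate z_t = a_t + r * sum_{i<t} a_i for t = 1..n (Equation 6.4)."""
    cum, path = 0.0, []
    for _ in range(n):
        a = rng.gauss(0, 1)        # random shock, sigma_a = 1
        path.append(a + r * cum)   # current shock plus drift term
        cum += a                   # running sum of past shocks
    return path

rng = random.Random(7)
# Monte Carlo estimate of Var(z_2) for the random-walk case r = 1
z2 = [simulate_z(1.0, 2, rng)[1] for _ in range(20000)]
m = sum(z2) / len(z2)
var2 = sum((v - m) ** 2 for v in z2) / len(z2)
# var2 is close to 2: the variance has doubled after two intervals
```

Setting r near 0 in `simulate_z` instead reproduces the in-control case, where the path is simply white noise about the target.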


6.10 SAMPLING AND FEEDBACK CONTROL PERFORMANCE

Sampling at periods much shorter than the time-delay (dead-time) is likely to result in poor control. The sampling rate in sampled-data control influences the closed-loop (feedback) behaviour; considering this influence, together with the recommendations for its selection, leads to a rational choice of sampling rate. Sampling is economically advantageous where high production rates are combined with relatively expensive or time-consuming measurements of individual items. Output sampling is a practical necessity in the control of a large variety of continuous processes such as paper and sheet plastic. There is a myth on the shop floor that time series controllers respond aggressively to set point changes and may therefore be less preferred. Such criticism can be put to rest: the integral regulator described in this book for process and product quality control is an 'automatic quality regulator', similar to the pressure, temperature and other regulators used in the process control industries. Moreover, the set point of the automatic quality regulator will be known well in advance and predetermined for a definite period of time, not subject to frequent changes. The automatic quality regulator will therefore be an aid in advanced process and quality control. It is observed that as the sampling interval is decreased, the feedback control loop performance improves, but at the same time the effort necessary to accomplish this also increases. MacGregor [1976] introduced a variance constraint on the manipulated input variable, since the control error variance often increases for a decreasing sampling period (relative to the time response of the process). Box and Jenkins [1970, 1976] and Astrom [1970] showed that under minimum variance control, the error in the process output is the forecast error of the effective disturbance at the output.
It is interesting to evaluate how the variance of this output error at the sampling instants changes as the sampling interval is increased. According to Abraham and Box [1979], the effect of lengthening the sampling interval is (i) to increase the mean square error slightly and (ii) to reduce the cost of the feedback control scheme. Control performance is degraded by too large a sampling period, and long time-delays tend to reduce the controller gain (CG); so there is a need for an optimal choice of sampling interval. The control achieved using a sampling interval larger than the time-delay may already be 'tight', requiring less frequent sampling, and there may be little economic incentive for tighter control. Tighter control can also reduce the stability of process operation. However, some processes, such as polymerisation, sheet forming, fibre and other 'no blend' type processes, require tight control, and efforts are constantly made to control the quality variables as tightly as possible by minimising


the variance of the output deviation about given set points (Kelly, MacGregor and Hoffman [1987]). A controller gives different levels of performance for the same process depending upon how tightly it is tuned. In some other situations there can be definite economic incentives for moving process set points closer to the process or quality constraints. It is usual to minimise the product variability (control error sigma) for the required adjustment interval in order to achieve this objective (Harris and MacGregor [1987]). This can be done by simulating the feedback control algorithm (derived in Chapter 3 under the effects of dead-time) for values of the IMA parameter Q ranging from 0 to 1.0, with the simulation results shown and discussed in Chapters 3 and 4.
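The Abraham and Box [1979] trade-off above can be made concrete with a standard time series result (our choice of illustration, not the book's derivation): for an IMA (0, 1, 1) disturbance with parameter Q, the m-step-ahead forecast error variance, and hence the minimum achievable output error variance when sampling every m base periods, is σa²[1 + (m − 1)(1 − Q)²]. The function name below is ours.

```python
def mse_inflation(Q, m):
    """Minimum output-error variance relative to sigma_a^2 when the
    sampling interval is m base periods (IMA(0,1,1) disturbance)."""
    lam = 1.0 - Q                       # EWMA weight
    return 1.0 + (m - 1) * lam * lam    # 1 at m = 1, growing slowly in m

# e.g. with Q = 0.8 (lam = 0.2), doubling the sampling interval raises
# the output variance by only about 4 per cent while halving the
# sampling and adjustment effort.
factor = mse_inflation(0.8, 2)
```

This is why lengthening the interval "increases the mean square error slightly" while reducing the cost of the scheme: the inflation term is quadratic in the small weight (1 − Q).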

6.11 THE NEED FOR A DEAD-TIME COMPENSATOR

A time lag (time-delay or dead-time) limits the permissible process gain (PG) and so reduces the ability to control the process; a controller mechanism is therefore necessary to reduce this limitation. This mechanism is called the 'dead-time compensator'. Its working principle is explained below. Assume that a small adjustment is made in the input variable at the nth sample. The adjustment will have no effect on the next sample if the sampling interval is smaller than the dead-time (made up of the process delay and the measurement delay). If the adjustment has no appreciable effect on the process, the same control error deviation from the desired target will be measured at the output, and there is a tendency to overcorrect the deviation if another adjustment is made. In this scenario, there are two options: either (i) reduce the controller gain (CG) and apportion a part of the adjustment to each of the samples occurring during the dead-time (time-delay), or (ii) reduce the control action by accounting for all the control actions already taken during the time-delay, the effects of which are not yet perceived. The first option is achieved if the controller gain is chosen by a proper stability analysis; long time-delays reduce the CG (Palmor and Shinnar [1979]). Baxley [1991], in his simulation study by the 'Central Composite Design' method, found different values for the CG and the corresponding standard deviation of the control error, along with the mean adjustment interval (AI). The maximum value of the controller gain for stable operation of a time-delay process is 1.0 (Chandra Prasad and Krishnaswamy [1974]). So the first option is chosen by setting the automatic quality regulator gain to 1.0 and minimising the control actions by accounting for all the control actions taken during the time-delay period.
Plausible reasons for setting the controller gain to 1.0 are given in Chapter 4. This is in contrast to the EWMA controller, for which a controller gain below 1.0 is required in order to avoid overcompensation of the control error and over-control of the process (Baxley [1991]).


The second option results in deriving an equation for the dead-time compensator from optimal control algorithms. This type of compensator is advantageous in that the problem of over-correction during the time-delay is reduced, and it may be possible to choose the value of the controller gain without the aid of the dead-time compensator. The dead-time compensator, though it may not (completely) eliminate the dead-time in real systems, has a stabilising effect on the process. The response of a dead-time compensator is faster and smoother than that of an analogue (continuous) conventional controller, in spite of infrequent sampling (Deshpande and Ash [1981]). The Smith predictor (or Smith dead-time compensator) is a result of the minimal variance strategy, and minimal variance control for processes having dead-time includes this type of dead-time compensation (Smith [1957]). A description of the Smith predictor is given in the next section.

6.12 THE SMITH PREDICTOR AND THE DAHLIN CONTROLLER

In this section, the Smith predictor and the principle of operation of the Dahlin controller are described briefly. Smith's [1959] principle provides a criterion for selecting a control strategy for time-delay processes and dead-time compensation techniques; the technique is an approach to the control of systems with long dead-times. This principle, known as the Smith predictor, states that the response of a process with a time-delay should be the same as that of the same process without the delay, but delayed by a time equal to that of the delay. Smith [1959] proposed a discrete version of a dead-time compensator based on this principle. This (linear) predictor consists of a conventional PID controller in combination with a process model, used as a predictor of the output over the interval of the dead-time, in a feedback loop around it. Figure 6.1 gives a block diagram of the Smith predictor (controller, process model, predictor, delay element and load input).

Figure 6.1 Dead-Time Compensation with Smith Predictor


The Smith predictor contains two feedback loops: a positive loop containing the dead-time and a negative loop without it. The positive feedback loop cancels out the effect of the negative feedback loop through the process, leaving the negative feedback loop in the predictor with only the lag and gain of the model in it. This arrangement makes the predictor input identical to that which would exist if there were no dead-time in the process, resulting in better control. The compensation technique involves predicting the process output through a process model which does not contain the dead-time. The output of this predictor element is also delayed by a time-delay element which constitutes a separate model of the process dead-time. With the model dead-time, lag and controller gain matched to the process, the Smith predictor reproduces a step change exactly one dead-time later. A Smith predictor achieves some form of the derivative action required for compensating dead-time in first-order processes by a lag in its feedback path. By matching the lag in the Smith predictor to the lag (inertia) of a dead-time process, the manipulated input variable follows the process lag exactly but delayed by the dead-time. The delayed predictor output is compared with the measured process output, and the resulting model error quantity is added to the current predictor output to correct for predictor deficiencies, provided that the model is a true representation of the process and there are no further disturbances to the process during the dead-time period. It is observed that the optimal predictor part of the controller algorithm also changes with the time-delay. The Smith predictor is an optimal dead-time compensator only for those systems having disturbances for which the optimal prediction is constant over the period of the dead-time.
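The two-loop arrangement above can be sketched in a few lines. This is a minimal illustration with hypothetical parameter values: the model is assumed to match the process exactly, and a simple integral controller stands in for the PID controller mentioned above.

```python
def smith_run(n=60, f=4, lag=0.7, gain=1.0, setpoint=1.0):
    """Smith predictor on a first-order process with f periods of delay."""
    y, buf = 0.0, [0.0] * f        # real process state and its delay line
    ym, bufm = 0.0, [0.0] * f      # delay-free model and a delayed copy of it
    integ, u = 0.0, 0.0
    out = []
    for t in range(n):
        # process: first-order lag driven by the delayed input
        buf.append(u)
        u_del = buf.pop(0)
        y = lag * y + (1 - lag) * gain * u_del
        # model without delay, plus a delayed copy of the model output
        ym = lag * ym + (1 - lag) * gain * u
        bufm.append(ym)
        ym_del = bufm.pop(0)
        # Smith feedback signal: undelayed model output corrected by the
        # model error (y - ym_del); zero error when the model is exact
        fb = ym + (y - ym_del)
        e = setpoint - fb
        integ += 0.3 * e           # integral control acting on the
        u = integ                  # dead-time-free feedback signal
        out.append(y)
    return out

traj = smith_run()
# the output is silent during the dead-time, then settles at the set point
```

Because the positive loop (the delayed model copy) cancels the delayed measurement when the model is exact, the controller effectively tunes itself against the delay-free model, which is why a respectable integral gain can be used despite the four-period dead-time.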
In brief, the Dahlin controller works on the principle proposed by Dahlin [1968] that digital controllers be designed to yield a desired first-order-plus-dead-time response to a set point or load change. Dahlin's algorithm specifies that the sampled-data (discrete) closed-loop (feedback) control system behave as though it were a continuous first-order process with dead-time. For designing sampled-data controllers, Dahlin considered a tuning parameter ranging from 0 to 1, whereas in the original formulation the parameter could take values from –1 to 1. Thus a dead-time compensator allows the use of a large process gain. In order to select a suitable value for the time constant of the closed-loop response, namely the Dahlin tuning parameter, a trial (initial) value is assumed and the control system is simulated on a computer. A proper selection of this parameter can be made by repeatedly varying it and examining the closed-loop response. The Dahlin computer-control algorithm is designed for a specific input, for example a step change in set point. If a load change occurs in a process for which the control algorithm is based on a change in set point, the response may not be equally good. The usual procedure, therefore, is to design for the worst possible change in either set point or load that is likely to occur. Dead-time compensators are usually complex to deal with in real systems. Nevertheless, they have a stabilising effect on the process in a manner similar to that of a controller working on an unconstrained optimal control algorithm. The


precaution to be taken against instability at large gains in real systems when using a dead-time compensator is to ensure that there is no deviation between the assumed dynamic (transfer function) model and the real system (Palmor and Shinnar [1979]). As explained in Section 3.7, building the Dahlin dead-time compensator into the stochastic feedback control algorithm, Equation (3.16), helps to achieve the required dead-time compensation and minimum variance.

6.13 REQUIREMENTS OF A CONTROLLER FOR PRODUCT QUALITY

Many processes are equipped with automatic control to ensure stable operation and to gain improvements in productivity and product quality. The requirements for quality control take different forms in the process industries and in the component parts manufacturing industries. It is, however, common to both that the factors determining product quality should have the least variation around the target value. Often, an output quality variable cannot be allowed to exceed (or fall below) a specified value, because the product would then be unacceptable to the customer. In many industrial situations, an increase of just a few per cent (or even a fraction of a per cent) in the average value of the output quality variable represents large economic gains in terms of savings in energy consumption, raw materials utilization or customer satisfaction and acceptance of a quality product. This implies strong links between the concepts of quality and productivity.

6.14 PRODUCT QUALITY CONTROL

6.14.1 Engineering Control Application

To control the quality of a product at the output, the set point of a product-quality controller is adjusted so that the product remains within its specification limits following expected load changes or disturbances. In product quality control, the product-quality set point is adjusted away from the specification limit in proportion to the peak deviation expected from the controller; again, the adjustment is in a direction that increases operating costs. Deviation in the 'safe' direction increases operating costs in proportion to the deviation. The quality controller's set point is positioned relative to the specification limit so that the limit will not be violated for most upsets (load changes). Since the average output product quality will equal the set point, the product will be more expensive to make than if the set point were positioned exactly at the specification limit. Excess manufacturing cost is proportional to the difference between the set point and the specification limit, and so to the peak deviation expected. By limiting the peak deviation, excess manufacturing cost and product quality are controlled in engineering control. Peak deviation of the controlled variable from set point is significant when excessive deviation will cause an incident such as rejection of a product for failure to meet the specification (Shinskey [1988]).


Process control provides operating conditions under which a process will function safely, productively and profitably. Ineffective control can be costly: it can cause, among other things, plant shutdown, excessive consumption of resources, the making of off-specification product and unnecessary reduction of the production rate. For a particular control loop, it is often possible to relate operating cost to the deviation of the output controlled variable. In a product-quality control loop, it is not surprising that the cost function is (usually) found to be different on opposite sides of the set point; that is, it is possible to have both positive and negative cost functions. Process operators frequently place a large margin between the measured quality of a product and its specification. This is done to counteract the loss of economic performance when a product specification is violated; but it costs more to produce a higher-quality product. Maximum profit is realized when product quality meets the specification exactly, yet variations in product quality are not equally acceptable on both sides of the specification. As a consequence, the quality set point must be positioned far enough on the more acceptable side of the specification, without incurring excessive operating costs. The operating cost can be reduced by better control and smaller variation in quality, allowing the set point to be moved closer to the specification. The deviation between the output controlled variable and its set point can be related (linearly) to operating cost. For a specific production rate, each increment of time corresponds to a quantity of product manufactured over that time, so the integration of deviation (error) over time can be equated to accumulated (excess) operating cost. Under such circumstances, the control objective would be to minimize the integrated error*. This criterion could be applied to control the quality of a product flowing into a storage tank.
This can be achieved by keeping the integrated error as low as possible and the quality of the product close to the set point. (*Integrated error can be estimated from the feedback control algorithm, Equation (3.16), being equal to (et – d1et–1 – d2et–2).) It is a function of the change required in the manipulated input variable and the setting of the integral mode of the (time series) controller. Integrated error can be significant in product-quality loops, where it may represent excessive operating cost such as product giveaway. Lag-dominant dynamics (similar to the second-order dynamic model with two exponential terms) characterize most of the important plant loops, such as product quality. For these processes, the integral error varies linearly with time (Shinskey [1988]).

6.14.2 Statistical Control Application

For the sample values of a product variable whose measurements are normally distributed, the mean will equal the set point if the integral of the error approaches zero over a period of time. In minimizing the deviation of the output controlled variable, the standard deviation is a transformation of that deviation over a statistically significant number of samples or time of operation. The economic


incentive behind the standard-deviation criterion is that it estimates the percentage of time the controlled variable violates the specification, based on a normal distribution. If samples of an output controlled variable that is 'cycling' uniformly are averaged over a complete cycle to form a subgroup, then the mean of the subgroups will lie on the set point and their standard deviation will approach zero (if there are no disturbances in the feedback control system) (Shinskey [1988]). The assignment of subgroup size should reflect the capacity of a process to absorb variations in product quality. The method used to average samples also needs to be selected to match the characteristics of the process. If the product is segregated into lots, then the samples should be segregated into the same lots and averaged equally. Different criteria can be applied to set point and load disturbances that affect a control loop, and different controller settings will be required to satisfy these criteria. Overshoot of the output controlled variable can be minimized by limiting the rate of the set point changes that are likely to be introduced by the operator during the course of plant operation and process control (Shinskey [1988]).

6.15 FEEDBACK CONTROL PROCEDURE FOR ADJUSTING CONTROLLERS

In the closed-loop method of 'identifying' a process, the controller excites the loop into oscillation and the integral time of the controller is set at a high value. The output is 'pulsed' manually to produce a large cycle that can be observed easily; an undamped cycle gives accurate results. After introducing the settings into the controller, the response is checked for period and damping. The IMA noise parameter Q is used for monitoring by the exponentially weighted moving average (EWMA) chart, with appropriate alarm criteria based on the EWMA statistic for indicating (process) out-of-control alarm signals. The EWMA control limits give an indication of whether the forecast is significantly different from the target. The quality deviations from target, that is, the mean of the quality data, are compared with the EWMA chart limits (L) (Equations 4.1 and 4.2). These upper and lower control limits (UCL and LCL respectively) are given by

    L = ±3σa √[(1 − Q)/(1 + Q)],

where σa, the standard deviation of the random shocks {at} ~ N(0, σa²), is equal to 1. Then the required adjustment (xt) in the manipulated input variable X is calculated through the feedback control algorithm (Equation 3.16). Due to automatic feedback input adjustments, changes brought about in the process can cause the controller algorithm to underestimate or overestimate the control adjustment required to realize a reduction in the output control error standard deviation (CESTDDVN).
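The limit formula and the CG = 1 adjustment rule can be sketched as follows. The function names are ours, and resetting the EWMA statistic after an adjustment is our assumption, made so that a compensated deviation is not corrected twice.

```python
import math

def ewma_limit(Q, sigma_a=1.0):
    """EWMA chart limit L = 3 * sigma_a * sqrt((1 - Q)/(1 + Q))."""
    return 3.0 * sigma_a * math.sqrt((1.0 - Q) / (1.0 + Q))

def monitor(deviations, Q=0.6):
    """Track the EWMA forecast of the deviations; when it crosses a limit,
    apply an adjustment that exactly cancels the forecast (CG = 1)."""
    lam, L = 1.0 - Q, ewma_limit(Q)
    zhat, actions = 0.0, []
    for e in deviations:
        zhat = zhat + lam * (e - zhat)     # EWMA forecast of the deviation
        if abs(zhat) > L:
            actions.append(-zhat)          # CG = 1: cancel the forecast...
            zhat = 0.0                     # ...and reset (our assumption)
        else:
            actions.append(0.0)            # within limits: no action taken
    return actions
```

For Q = 0.6 the limits are at ±3√(0.4/1.6) = ±1.5; a sustained level shift of 5 units in the deviations therefore triggers an adjustment as soon as the forecast (0.4 × 5 = 2.0) crosses the limit, while small deviations produce no action at all.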


Appropriate corrective control action based on the forecast is devised when an EWMA (process) out-of-control signal is obtained. A feedback control input adjustment is made when a forecast crosses the control limits following the alarm signal for level shifts in the mean of the quality data away from the target. The required corrective action is applied to compensate for the change in the forecast from the previous sample period and return the quality index to target. No change is made in the process as long as the predicted forecast falls within the control limits and hence is considered close to the target. The number of sample periods (adjustment interval, AI) after which the process should be adjusted, and the adjustment units (xt) required to change the input variable so that the output quality variable is at or near target, are obtained directly from the simulation of the stochastic feedback control algorithm (Equation 3.16). Following the EWMA feedback adjustment controller procedure suggested in Chapter 4, data are collected at a single interval in order to predict the performance of the (time series) feedback controller. The performance of the controller at longer sampling intervals is predicted from these data, and a sampling interval is then chosen for sampling, adjustment and process control. The control error standard deviation (CESTDDVN) and the adjustment interval (AI, in sample periods) are the controller performance measures; both are obtained from the simulation results. Dead-time simulation results reported in Venkatesan [1997] show that this method of simulating the feedback control algorithm gives a good measure of the performance of the feedback controller in controlling dead-time processes.
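A minimal sketch of such a simulation is given below. It is not the author's simulation program: the IMA(0, 1, 1) disturbance generation, the adjustment rule (compensate the EWMA forecast whenever it crosses the chart limit, with CG = 1) and all constants are simplifying assumptions, but it shows how CESTDDVN and AI emerge as the two performance measures:

```python
import math
import random
import statistics

random.seed(1)
Q = 0.6                                      # IMA noise parameter (illustrative)
LAM = 1.0 - Q                                # EWMA smoothing constant
LIMIT = 3.0 * math.sqrt((1 - Q) / (1 + Q))   # +/- chart limit (sigma_a = 1)

level, ewma_z, prev_a, n_adjust, errors = 0.0, 0.0, 0.0, 0, []
for t in range(5000):
    a = random.gauss(0.0, 1.0)
    level += LAM * prev_a          # random-walk part of the IMA(0, 1, 1) drift
    prev_a = a
    e = level + a                  # control error observed this period
    errors.append(e)
    ewma_z = LAM * e + Q * ewma_z  # EWMA forecast of the disturbance level
    if abs(ewma_z) > LIMIT:        # out-of-control signal: adjust the input
        level -= ewma_z            # CG = 1: cancel the forecasted deviation
        ewma_z = 0.0
        n_adjust += 1

cestddvn = statistics.pstdev(errors)      # control error standard deviation
ai = len(errors) / max(n_adjust, 1)       # average adjustment interval (AI)
```

Running the loop at different sampling intervals (i.e. different effective values of Q and of the dynamic parameters) gives the trade-off between CESTDDVN and AI that the text describes.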
It is shown that, even under the combined effects of the dynamics and the dead-time, the time series feedback controller, whose design is based on the stochastic feedback controller algorithm (Equation 3.16), still gives minimum mean square error (MMSE) or minimum variance control with minimal adjustments, at the cost of slight increases in the control error sigma (CESTDDVN). The feedback controller gain (CG) is set at 1.0 since, for stable operation of a pure delay process, the maximum value of the controller gain is one (Krishnaswamy and Prasad [1975]). Setting CG = 1.0 results in an adjustment that exactly compensates for the forecasted deviation from target and provides a convenient means of reducing the adjustment interval, AI (Baxley [1991]). A dead-time compensation scheme with provision for a process gain (PG) in the feedback path is suited to situations where the process dead-time results from a measurement device in a laboratory and is a known quantity. A process control method based on discrete laboratory data has the potential to improve product quality. A practical control strategy based on (i) a process modelled on the basis of statistical analysis of plant data collected from a designed closed-loop (feedback) experiment and
(ii) on the use of quality control laboratory analyses and data to update the set point of the minimum variance time series feedback controller can be employed to verify the quality of outgoing product (Venkatesan [1997]). Improvements in controller designs can be made possible by the competent use of modern control theory principles and the iterative use of a control algorithm. Stochastic optimal control design algorithms provide valuable clues to understand the controller structure and lead to good controller designs (Palmor and Shinnar [1979]). It is possible to design efficient controllers that are not sensitive either to the process dynamic model or to small parameter perturbations by properly considering feedback control stability as shown in this book. Practical feedback controller designs can result from employing an appropriate model for the process dynamics such as the second-order dynamic model considered in this book and the ARIMA (0, 1, 1) stochastic time series model for the disturbance.

6.16 BENEFITS OF PROCESS CONTROL FOR PRODUCT QUALITY

The following are some of the possible benefits and shortcomings of the feedback control method proposed for the process control of product quality:
(A) Benefits:
(i) Reduced manufacturing and energy costs in terms of power, material use, re-work, etc.;
(ii) Increased production and efficiency of process operation;
(iii) Better process control/quality control of product quality output.
(B) Shortcomings:
(i) Controller limitations;
(ii) Inexact 'feedback control error' (input feedback) in quantity or number of units;
(iii) Incomplete or improper feedback compensation, that is, the effects of disturbance (noise) or dead-time not compensated completely;
(iv) Process control may not be effective in the face of oncoming, continuous disturbances that may occur in other parts of the process control system;
(v) Malfunctioning of the controller.

6.17 CONCLUSION

In this chapter, the influence of the sampling interval on feedback control performance was discussed. A brief explanation of the need to compensate dead-time and a description of the working of some of the dead-time compensators
was also given. Dead-time simulation is a method to control sampled-data processes with time-delay. A description and an explanation of direct digital control (DDC) were provided, together with the justification for its use. The need to identify dead-time has been explained, as have the process inertia and the rate-of-process-drift parameter 'r'. In Chapter 4, it is mentioned that statistical process control charts can be considered an appropriate engineering control strategy under certain specific conditions. One of these is specifying a loss function that quantifies the cost of being away from the desired or target value and the cost of making an adjustment to the process. In light of optimal control theory, and by using the quadratic criterion function (explained in Chapter 4), it is shown that it is possible to derive minimum variance controllers. The principle employed in the quadratic loss function is that the penalty or loss associated with being off target depends only on the squared magnitude of the deviation, that is, on the mean square error (variance). The quadratic loss function so derived therefore depends only on the magnitude, not the sign, of the deviation from target. It is shown in Chapter 4 that the control adjustment equation of the MMSE (minimum mean square error) controller is the discrete equivalent of a properly tuned integral controller. This form of the minimum variance controller minimises the mean overall adjustment cost when it is possible to neglect other variable costs. If, apart from the process adjustment costs, there are other costs in monitoring and controlling a process and in taking observations, then the resulting minimum-cost feedback adjustment schemes have to be formulated on the basis of different configurations. These aspects are considered in Chapter 8, Cost Modelling for Process and Product Quality Control, for the stochastic feedback control algorithm (3.16) of the time series controller derived in Chapter 3.
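The equivalence between the MMSE adjustment rule and discrete integral action can be illustrated with a toy calculation. The per-period rule x_t = −(1 − Q)/g·e_t used here is the standard result for an IMA(0, 1, 1) disturbance with steady-state gain g and no dynamics or dead-time, i.e. a simplification of Equation 3.16, and the error values are invented:

```python
# Sketch: for an IMA(0,1,1) disturbance with noise parameter Q, a unit-gain
# process (g = 1) and no dynamics or dead time, the MMSE adjustment each
# period is x_t = -(1 - Q)/g * e_t.  Accumulating these moves gives
# X_t = -(1 - Q)/g * (e_1 + ... + e_t): a discrete integral controller.
g, Q = 1.0, 0.6
errors = [0.5, -0.2, 0.8, 0.1]                      # illustrative errors

adjustments = [-(1 - Q) / g * e for e in errors]    # per-period moves
X_direct = sum(adjustments)                         # accumulated input position
X_integral = -(1 - Q) / g * sum(errors)             # integral-controller form
```

The accumulated input position equals the integral-controller expression, which is why the text can speak of the MMSE controller as a properly tuned integral controller.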
The performance, limitations, robustness and function of a controller are explained in this chapter. The characteristics and requirements of a feedback controller are also given. A brief description of the working of a direct digital sampled-data (discrete) feedback controller is provided, and examples of a model-based controller and an integral controller are also given. This chapter also explained a method for the process control of product quality by applying automatic and statistical process control techniques. Simulation of the time series feedback controller algorithm and its performance measures are used to control product quality. The process control method described in this chapter can be applied to reduce output control error variance and hence to aid in the control of product quality. The objectives of applying engineering and statistical techniques to the product quality control problem are also explained. Discussion of the engineering and statistical process control applications to product quality shows that the stochastic time series feedback controller can be a proper choice to reduce output control error variance and to aid in the control of product quality output.


REFERENCES

Baxley, Robert V. (1991). A Simulation Study of Statistical Process Control Algorithms for Drifting Processes, in SPC in Manufacturing, Marcel Dekker, Inc., New York and Basel.
Box, G.E.P. and Jenkins, G.M. (1970, 1976). Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco.
Box, G.E.P. and Kramer, T. (1992). Statistical Process Monitoring and Feedback Adjustment: A Discussion, Technometrics, 34(3), 251-267.
Dahlin, E.B. (1968). Designing and Tuning Digital Controllers, Instruments and Control Systems, 41, 77.
Harris, T.J., MacGregor, J.F. and Wright, J.D. (1982). An Overview of Discrete Stochastic Controllers: Generalized PID Algorithms with Dead-Time Compensation, The Canadian Journal of Chemical Engineering, 60, 425-432.
Krishnaswamy and Prasad (1975). Control of Pure Time-Delay Processes, Chemical Engineering Science, 30, 207-221.
MacGregor, J.F. (1988). On-Line Statistical Process Control, Chemical Engineering Progress, (10), 21-31.
Palmor, Z.J. and Shinnar, R. (1979). Design of Sampled Data Controllers, Industrial and Engineering Chemistry Process Design and Development, 18(1), 8-30.
Shinskey, F.G. (1988). Process Control Systems, McGraw-Hill Book Company, New York.
Shinskey, F.G. (1991). Control Engineering, September, pp. 75-78.
Smith, O.J.M. (1959). A Controller to Overcome Dead Time, ISA Journal, 6(2), 28-33.
Venkatesan, G. (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes), Ph.D. Thesis, Victoria University of Technology, Melbourne, Australia.
Venkatesan, G. (1999). Process Control of Product Quality, Instrumentation Control Exhibition and Symposium Workshop, ICEX'99, Institute of Instrumentation and Control Australia (IICA), Melbourne.
Venkatesan, G. (2002). An Algorithm for Minimum Variance Control of a Dynamic Time-Delay System to Reduce Product Variability, Computers and Electrical Engineering, 28, 229-239.

Suggested Exercises

1. Discuss the importance of sampled-data control.
2. Explain why it is necessary to consider the effect of dead-time in a process.
3. Research the internet and technical journals for modern methods of dead-time compensation and discuss them.
4. Discuss the application of statistical and automatic process control to product quality control.

Chapter 7

Parallel Processing Computer Architectures for Process Control

7.1 INTRODUCTION

Modern, large-scale, high-speed process control automation is quite complex in nature. In actual practice, modern chemical plants employing continuous processes, such as petroleum refining and petrochemical operations, are huge in size and scale and are located in vast land areas. Moreover, these plants are so complex in their day-to-day operations that it is not possible to control them manually by moving plant operators and other personnel from plant to plant without loss of man and machine hours, productivity, efficiency, and the other costs associated with such complex operations. For these reasons, these plants usually have high-speed process control automation built into their structure. Continuous manufacturing processes currently in use require advanced computer technologies, such as simultaneous parallel processing architectures, for synchronous process operations that take place independently of each other. The modern-day use of supercomputers for satellite navigation and tracking, meteorological weather forecasting, flood-warning and earthquake systems is well known, and such systems are currently in wide use in all parts of the world. Similarly, some complex processes can be computer controlled by a single supercomputer that is responsible for all the inter-connected process operations. However, the capital investment cost and the costs of installation, operation and maintenance may be very high, sometimes prohibitively so, for building and using such a supercomputer for complex process plant operations, and many organisations may not have the technical know-how, scientific manpower, capability or financial resources for such high-investment, high-cost process operations. Alternatively, instead of using a single supercomputer, the process and chemical plants within a huge plant area can be connected to each other through single independent (stand-alone) computers that are responsible for each
plant/process operation and connected to a local area network (LAN). The outputs of a process/plant may be inputs to a neighbouring process/plant; for example, in the refining of petroleum crude, an output of one stage, say a naphtha catalytic cracker, may be an input to the next stage in the refinery. In processing high-value output products, such as gold, iron ore or uranium refining and ore-dressing operations, it is imperative that a very high degree of care and consideration be given to operating costs and to the cost of the output end-product. In these complex operations, even a small but significant reduction in output variance can result in huge cost savings and improve system performance to a large extent. It is also essential that these process plants operate at optimum efficiency at all times, in order to bring the total operating costs down to the lowest possible minimum. The performance of these complex process plant control systems can be optimised by taking output variance as a measure. This can be achieved by suitably designing the robust computing structures that are necessary for multi-input/output process control. Such robust computer structures are realised in practice by designing, building and employing appropriate parallel processing computer architectures that are ideally suited to the purpose and objectives of plant operations. This chapter describes (i) a Multi-ported-Memory, Star-Ring parallel processing computer architecture for process control of multiple inputs/outputs, called MIMO process control, and (ii) a suitable parallel processing (Slave-Master) architecture for optimising the performance of an inter-connected process control system, called Multi-plant/Multi-process control.
Both architectures use the feedback control algorithm (Equation 3.16): for multi-input/output process control, performance is measured through the output product quality, and for optimising the performance of the inter-connected process control system, the product quality output is again used as the performance measure. In the process control industry, the conventional Proportional, Integral, Derivative (PID) regulator is in wide use due to its easy implementation. A 'regulator' is equipment in which all the energy to operate the final controlling element is derived from the control system. In practice, extensive simulation studies are carried out in industrial processes in order to apply modern control theory (Baxley [1991]). Modern control theory applications lead to an algorithmic approach to solving the feedback control problem. These simulation studies (Baxley [1991]) and experiments offer insight for making the necessary modifications to the feedback control algorithm in order to improve the performance of the process control system. The weighting elements of the process control algorithms are chosen 'a posteriori' so that satisfactory optimization criteria can be applied to the system parameters.
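For reference, the conventional discrete PID regulator mentioned above is often written in the incremental (velocity) form sketched below. This is a textbook form, not the stochastic controller of Equation 3.16, and the tuning values are arbitrary:

```python
def pid_velocity_step(e, e1, e2, Kp, Ti, Td, T):
    """One step of a discrete PID regulator in incremental (velocity) form:
    returns the CHANGE in controller output from the last three control
    errors e (current), e1, e2, given proportional gain Kp, integral time Ti,
    derivative time Td and sampling period T."""
    return Kp * ((e - e1) + (T / Ti) * e + (Td / T) * (e - 2.0 * e1 + e2))

# Illustrative tuning; with Td = 0 only the proportional and integral
# contributions remain.
delta_u = pid_velocity_step(e=1.0, e1=0.8, e2=0.5, Kp=2.0, Ti=10.0, Td=0.0, T=1.0)
```

The incremental form outputs adjustments rather than absolute positions, which is the same style of output as the stochastic feedback control algorithm discussed in this book.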


The implementation of these regulators entails a complex procedure. The existence of time-delay/dead-time ("the interval of time between initiation of an input change or stimulus and the start of the resulting observable response") requires 'a priori' information in order to design a suitable and appropriate regulator that meets the system requirements and control objectives. Dead-time is widely prevalent in process control systems, and it is quite common to encounter such time-delays while operating a process control system. It is useful to apply information about the known 'a priori' dead-time to improve the performance of the system through adequate dead-time compensation. This chapter discusses the basic theoretical aspects of two different parallel processing computer architectures, which use the stochastic feedback control algorithm described in Chapter 3 (Venkatesan [1997]). The chapter is organised in the following manner. We recall from Chapters 4 and 6 the main steps of the development of the stochastic feedback control strategy for product quality control that is common to both the Multi-Input Multi-Output (MIMO) process control ('Star-Ring') computer architecture and the Multi-plant/Multi-process control ('Multi-ring, Multi-bus') parallel computer architecture. The need for parallel processing computer architectures is discussed in Section 7.2; Section 7.3 describes the MIMO process control system, while Section 7.4 gives an account of the Multi-plant/Multi-process control system. Then, the star-ring parallel computer architecture (Venkatesan et al. [1998]) for multi-input/output process control and the multi-ring, multi-bus parallel computer architecture (Venkatesan [2002c]) are explained and discussed in detail, along with the advantages of the respective parallel processing architectures in applications.

7.2 THE NEED FOR PARALLEL PROCESSING

There are some definite advantages of parallel processing. Some of these are:
(i) By means of faster processing of data, performance is increased manyfold;
(ii) It makes it practically possible to find solutions to bigger problems;
(iii) The physical limits for single processors have "almost" been reached, and increasing performance with a single processor is only possible at a very high cost.
Parallel processing is a natural form of information processing. The high performance of the brain derives from its parallel processing through the function of neurons: there are about 10^10 neurons ("CPU + memory") in the brain, and they are always active. In the modern high-tech world, parallelism is everywhere. Some examples are:
(i) Processes in nature, in society and in technical systems;
(ii) Even in a simple PC, we can find I/O, DMA, microcode and 16-bit arithmetic;
(iii) Parallel processing is the natural form of processing information;
(iv) By contrast, sequential processing is often an artificial restriction brought about by historical factors;
(v) Problems suited to parallel processing can be better written and solved in a suitable parallel programming language than in a sequential one;
(vi) Parallel processing is efficient and affords easy readability and clarity.
In computers, parallel processing is the processing of program instructions by dividing them among multiple processors, with the objective of running a program in less time. In one form of parallel processing, the interleaved execution of two programs is possible: the computer starts an I(nput)/O(utput) operation and, while waiting for that operation to complete, executes a processor-intensive program. The total execution time for the operation completion and program execution is thereby reduced. Parallel processing helps to achieve increased performance by doing more than one operation at a time.
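The interleaving of an I/O wait with processor-intensive work can be sketched with threads; the task functions and timings here are purely illustrative:

```python
import concurrent.futures
import time

def io_task():
    """Stands in for an I/O operation: the thread just waits for it."""
    time.sleep(0.2)
    return "io-done"

def cpu_task():
    """Processor-intensive work executed while the I/O operation is pending."""
    return sum(i * i for i in range(200_000))

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    io_future = pool.submit(io_task)   # start the 'I/O operation'
    cpu_result = cpu_task()            # run CPU work during the wait
    io_result = io_future.result()
elapsed = time.perf_counter() - start  # close to the I/O wait alone,
                                       # not I/O wait + CPU time
```

Because the CPU work runs while the I/O task is blocked, the total elapsed time is roughly the longer of the two, not their sum.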

7.3 MIMO PROCESS CONTROL SYSTEM

There is an ever-increasing demand for on-line process control and chemical analysis in terms of quick response of process variables to adjustments, fast processing power and high accuracy in the increasingly competitive world of process control engineering. The computing power of many existing process control systems currently in use does not satisfy these specific operational demands. Parallel processing systems with a wide range of architectures are implemented in many areas of process control engineering, such as the petrochemical industries, to achieve the required level of performance. The parallel processing architecture for MIMO process control is formulated and discussed in this chapter based on these special needs of the petrochemical and petroleum refining industries. In order to further improve the fault-tolerance and self-diagnostic capability of the overall parallel processing system, system designers can incorporate a degree of Artificial Intelligence (AI) and expert systems methodology into the database of the proposed 'Star-Ring' parallel processing system for MIMO process control. The application of Artificial Neural Network (ANN) techniques in conjunction with a human knowledge-based system can also significantly enhance the overall performance of the proposed parallel processing architecture (Hwang [1993]). The Multi-Input/Multi-Output (MIMO) ('Star-Ring') architecture considers the possibility of applying the feedback controller algorithm (Equation 3.16). It uses the dynamic-stochastic model of the single-input, single-output process control system (Venkatesan [1997]), extended to multi-input/multi-output process operations (Venkatesan et al. [1998]). A minimum variance type (integral) regulator is suggested, which works on the basis of the summation (integration) of the control error. The process model of the control unit representing the system dynamics is
assumed to be approximately known, and only the forecast error has to be predicted to control the multi-input/output process. The feedback control algorithm (Equation 3.16) assists in the control of a stable discrete process, has the potential to reduce the variances of the corresponding output product quality variables, and results in better quality products at the outputs, as shown in Chapter 6.

7.4 MULTI-PLANT/MULTI-PROCESS CONTROL SYSTEM

In parallel processing, multi-processing makes it possible for two or more processors in a computer system to share the work to be done. We will be using one of the earlier versions, a master/slave configuration: one processor (the master) is programmed to be responsible for all of the work in the system; the other (the slave) performs only those tasks it is assigned by the master. This slave-master parallel processing has been further developed, to overcome some of its shortcomings, into Symmetric Multiprocessing (SMP) and Massively Parallel Processing (MPP) systems. We focus our attention on the simple Slave-Master configuration parallel processing architecture for Multi-Plant/Multi-Process control in order to appreciate the use of the stochastic feedback control algorithm, as the majority of modern process industries are operated by highly sophisticated, fast-acting, high-performance computers. The second parallel processing architecture (Venkatesan [2002c]) investigates the computer control of an inter-connected process control system by the application of the same feedback controller algorithm (Equation 3.16), extended to multi-plant/multi-process operations (Venkatesan [2002c]). The feedback control strategy is developed, and the adaptive regulator can be implemented in the central computer system of the process control plants. Details of the discussion and analysis of stochastic feedback control adjustment are given in Chapter 3. This chapter shows that this type of regulator design approach is feasible for successful Multi-Plant/Multi-Process operations that use a common feedback control algorithm in the regulators and a parallel processing computer architecture (Venkatesan [2002c]).
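The master/slave division of work can be sketched in miniature, with threads standing in for the slave processors; the plant names and measurement data are invented for illustration:

```python
import queue
import threading

# Master/slave sketch: the master owns the task list; slaves perform only
# the tasks assigned to them via the shared queue.
tasks = queue.Queue()
results = {}
lock = threading.Lock()

def slave():
    """Slave: performs only the tasks assigned to it by the master."""
    while True:
        item = tasks.get()
        if item is None:              # master's signal to stop
            break
        plant_id, measurements = item
        mean = sum(measurements) / len(measurements)
        with lock:
            results[plant_id] = mean

# Master: starts the slaves, assigns each task, then shuts the slaves down.
slaves = [threading.Thread(target=slave) for _ in range(2)]
for s in slaves:
    s.start()
for task in [("plant-A", [1.0, 2.0, 3.0]), ("plant-B", [4.0, 6.0])]:
    tasks.put(task)
for _ in slaves:
    tasks.put(None)                   # one stop signal per slave
for s in slaves:
    s.join()
```

The master never computes plant results itself; it only distributes work and collects the outcome, which is the essential asymmetry of the configuration described above.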

7.5 PROBLEM FORMULATION

Various multi-variable process (feedback) control strategies have been successfully implemented in the chemical industries for more than two decades. Due to limitations and space restrictions, the main thrust and focus of this chapter is on the parallel processing computer architectures for process control. For more information on multi-variable control and its application in the chemical process industries, the reader may refer to the literature, which is replete with articles and scientific papers in technical journals and monographs, as evidenced by the references ((Balchen et al. [1971]), (Balchen et al. [1973]), (Doyle and Stein [1981]), (Fisher and Seborg [1976]), (Foss et al. [1980])).


The MIMO process control system (Venkatesan et al. [1998]) and Multi-Plant/Multi-Process control (Venkatesan [2002c]) discussed in this chapter deal with the modelling of MIMO process control systems, process control plants and their inter-connected control systems, for the design of suitable and appropriate feedback controllers for MIMO and Multi-plant/Multi-process control systems. This chapter presents an adaptive regulator which uses the 'a priori' known dead-time (time-delay) information and satisfies the objectives of feedback control. The basic task of either a MIMO or a Multi-plant/Multi-process control system is to satisfy the minimum variance control requirements of the plant/process outputs. For this purpose, the process plants have to ensure the required output product quality in such a way that the output standard deviation (mean square error) can be kept within the product specification limits. It is shown in Chapter 4 that the feedback control algorithm (Equation 3.16) meets the following objectives:
(i) It statistically (stochastically) models and minimises/reduces the effect of plant/process disturbances in the form of changes in the input raw material characteristics, changes that take place within the processes/plants, and environmental changes due to changes in atmospheric conditions.
(ii) It ensures a satisfactory operation of the inter-connected processes/plants, which depend on one another for inputs/outputs in succeeding stages of plant operation.
(iii) Most importantly, it minimises the variance of the output control variables by making input adjustments at every sample point that exactly cancel the output deviations (errors) in the respective output variables and compensate for the forecast disturbances that affect the processes/plants.
Figure 7.1 shows the schematic diagram of the adaptive control in one of the feedback paths of the MIMO and Multi-plant/Multi-process control system.
Note that the same time series controller, used both in MIMO process control and in Multi-Plant/Multi-Process control, functions in the same manner in both parallel processing process control architectures, namely, to bring the output variable close to the controller set point.

[Figure 7.1: Schematic Diagram of the Adaptive Process Control System. A disturbance Zt is added to the output of the process model, wB^(b+1)/(1 − d1B − d2B²), which relates the input adjustments xit to the output Yit; the error eit = Zt + Yit is fed back to the time series controller, xit = f(eit, eit−1, eit−2, …), which computes the adjustment applied to the input.]


7.5.1 Feedback Control Algorithm for a Multi-Input/Multi-Output (MIMO) and Multi-Plant/Multi-Process Control System

The feedback control algorithm (Equation 3.16) derived in Chapter 3 for a Single-Input Single-Output (SISO) process control system is modified for use in both the MIMO and the Multi-Plant/Multi-Process control system. Figure 7.1 shows the feedback control scheme to compensate a disturbance Zt by means of time series controllers installed in the feedback paths (MIMO control); in the case of Multi-Plant/Multi-Process control, the control algorithm is stored in the memory of the individual computers controlling the processes in the respective plants. In Figure 7.1, Xit (i = 1, 2, …, n) is the input manipulated variable and Yit (i = 1, 2, …, n) is the corresponding output variable; w is the magnitude of the process response to a unit step change in the first period following the dead-time, carrying over into additional sample periods (Baxley [1991]); and B is the backward shift operator, BXt = Xt−1. The respective adjustments (xit) in the input variables are given by

xit = (1 − Q)(eit − d1·eit−1 − d2·eit−2) / [PG(1 − d1 − d2)] − (1 − Q)·xit−b,  i = 1, 2, …, n  …(3.16a)

where d1 and d2 are the dynamic process parameters; eit is the forecast error for the input variable being controlled in the i-th feedback control path, i = 1, 2, …, n; Q is the integrated moving average (IMA) parameter, 0 < Q < 1; and the dead-time b > −1 (Venkatesan et al. [1998]). PG represents the feedback gain realised by the effect on the corresponding output variable caused by a unit change in an input variable after completion of the dynamic response (Venkatesan [1997]). The parameters d1 and d2 are functions of the process time constants t1 and t2 and the sampling period T; t1 and t2 are real and equal for the 'critically damped' feedback control path. As shown in Chapter 3, d1 and d2 satisfy the following conditions for feedback control stability:

d1 + d2 < 1,  d2 − d1 < 1,  −1 < d2 < 1,  and  d1² + 4d2 = 0.

Also, w = PG(1 − d1 − d2) = 1 and PG = 1/(1 − d1 − d2) = g, the steady-state gain (Venkatesan [1997]).
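Equation (3.16a) can be sketched directly in code. The function below reflects one reading of the algorithm, an integral term built from the last three forecast errors minus a dead-time compensation term; it is not the author's simulation program, and the parameter values are chosen only to satisfy the critically damped condition d1² + 4d2 = 0:

```python
def control_adjustment(e, x_hist, t, Q, d1, d2, PG, b):
    """Adjustment x_t from Equation (3.16a): an integral term built from the
    last three forecast errors, minus a dead-time compensation term using the
    adjustment made b periods ago.  e: forecast errors e[0..t]; x_hist: past
    adjustments; Q: IMA parameter; d1, d2: dynamic parameters; PG: feedback
    (process) gain; b: dead time in sample periods."""
    integral = (1.0 - Q) * (e[t] - d1 * e[t - 1] - d2 * e[t - 2]) \
               / (PG * (1.0 - d1 - d2))
    compensation = (1.0 - Q) * (x_hist[t - b] if t - b >= 0 else 0.0)
    return integral - compensation

# Critically damped parameters must satisfy d1**2 + 4*d2 == 0, e.g.:
d1, d2 = 0.8, -0.16
PG = 1.0 / (1.0 - d1 - d2)    # steady-state gain g, so that w = 1
x = control_adjustment(e=[0.0, 0.2, 0.5], x_hist=[0.0, 0.1], t=2,
                       Q=0.6, d1=d1, d2=d2, PG=PG, b=1)
```

With PG set to the steady-state gain, the denominator PG(1 − d1 − d2) is unity, so the integral term reduces to (1 − Q) times the filtered forecast error, from which the dead-time compensation is subtracted.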


7.6 MIMO PROCESS CONTROL AND MULTI-PLANT/MULTI-PROCESS CONTROL

7.6.1 MIMO Process Control

The feedback control algorithm (Equation 3.16a) minimises the variance of the output control variables by making an input adjustment at every sample point that exactly cancels the output deviation (error) in the respective output variable and compensates for the forecast disturbances that affect the process. The control algorithm (Equation 3.16a) is simulated to find the control error standard deviations (CESTDDVN) of the output variables and the frequency of adjustment (MFREQ in the computer simulation program, with FREQ = 1 for adjustment and 0 for no adjustment; adjustment interval AI = 1/MFREQ) of the time series controllers in the corresponding feedback control paths. The required adjustments in the different input variables are fed back into the inputs through separate individual feedback paths from the outputs. The input adjustments are asynchronous in that they have their effects only on the respective input variables and not on the other input variables. It is assumed that the feedback control action on one input variable does not affect the other input variables, although in practice all are equally important in the control of the process. This is achieved through a parallel processing computer architecture and through a Multivariate Exponentially Weighted Moving Average (MEWA) chart, which signals, by means of out-of-control signals, when the process control practitioner should make the input adjustments to the process (Venkatesan [1998]). The principles of EWMA chart control explained in Chapter 4 can be extended and suitably applied for MIMO process control.

7.6.2 Multi-Plant/Multi-Process Control

A certain number of feedback integral controllers (regulators) ensure the satisfactory performance and operational requirements of the MIMO and the Multi-plant/Multi-process control system. One element of a hierarchical control system is the adaptive control of the system under its dynamic conditions.
The task of adaptive control is to keep the output variable as close as possible to controller setpoints while adapting itself to the dynamic changes that take place in individual processes/plants.
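As a concrete illustration of the controller performance measures introduced in Section 7.6.1, the following sketch computes CESTDDVN and the adjustment interval AI = 1/MFREQ from a simulation record; the data and the function name are illustrative, not taken from the book's simulation program:

```python
import statistics

def controller_performance(control_errors, adjust_flags):
    """Return (CESTDDVN, AI): the control error standard deviation and the
    adjustment interval AI = 1/MFREQ, where MFREQ is the fraction of sample
    points at which an adjustment was made (FREQ = 1 for adjustment, 0 for
    no adjustment)."""
    cestddvn = statistics.pstdev(control_errors)
    mfreq = sum(adjust_flags) / len(adjust_flags)
    ai = float("inf") if mfreq == 0 else 1.0 / mfreq
    return cestddvn, ai

# Synthetic record: 8 sample points, adjustments made at 2 of them,
# so MFREQ = 2/8 and AI = 4 samples between adjustments on average.
errors = [0.4, -0.2, 0.1, -0.5, 0.3, 0.0, -0.1, 0.2]
flags = [1, 0, 0, 1, 0, 0, 0, 0]
cestddvn, ai = controller_performance(errors, flags)
print(cestddvn, ai)
```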

7.7 MIMO AND MULTI-PLANT/MULTI-PROCESS FEEDBACK CONTROL METHODOLOGY

7.7.1 MIMO Feedback Control Methodology

The feedback control algorithm (Equation 3.16) developed for a SISO process control system is used to compensate for disturbances by means of time series controllers installed in the feedback paths of the MIMO process control system.

Parallel Processing Computer Architectures for Process Control

7.9

The input control adjustments (x1t, x2t, x3t, x4t, …, xnt) given by the feedback control algorithm (Equation 3.16a) minimise the variance of the output controlled variables, as shown in Chapters 3 and 4. The first term in the control algorithm (Equation 3.16a) gives the integral action and the second term the dead-time compensator. The dead-time compensator removes the delay from the closed-loop stability considerations and provides a stabilising effect on the feedback control paths. The discrete (sampled-data) time series controllers in the feedback paths maintain the product quality mean at or near some measurable target and allow a rapid response to process disturbances without overcompensation or overcorrection (Venkatesan [1997]). Feedback control stability is achieved by keeping the closed-loop gain less than or equal to 1.0 (Shinskey [1988]; Venkatesan et al. [1998]). The minimum variance control algorithm is applied for a trial period to the MIMO process whose output product quality means are required to be reduced. Changes in the process are observed that could cause the control algorithm to underestimate or overestimate the input control adjustments required to bring the output product quality means exactly on target, making it possible to realise a reduction in the output control error standard deviations (CESDs). A Multivariate Exponentially Weighted Moving Average (MEWA) chart (Venkatesan et al. [1998]) is then installed to signal, by means of out-of-control indications, shifts in the product quality means, so that on-target performance is maintained. The continuous feedback can be temporarily disconnected from the respective loops when the costs of adjustments and of sampling the process are significant, and connected again after a fairly short period of time.
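The two terms described above (integral action and a dead-time compensator) and the stability rule of keeping the closed-loop gain at or below 1.0 can be illustrated with a toy discrete feedback loop. Equation 3.16a itself is not reproduced here; the process model, gain and delay below are assumptions for illustration only:

```python
from collections import deque

def regulate(disturbance, gain=0.8, delay=2):
    """Toy discrete feedback loop: integral action against the control error,
    with a dead-time compensator that subtracts the adjustments still in
    transit, so the delay is removed from the stability loop. Returns the
    control errors observed at the output."""
    assert gain <= 1.0, "keep closed-loop gain <= 1.0 for stability"
    pending = deque([0.0] * delay)  # adjustments made but not yet effective
    applied = 0.0                   # cumulative adjustment now acting (integral)
    errors = []
    for d in disturbance:
        applied += pending.popleft()  # the oldest adjustment takes effect
        e = d + applied               # control error seen at the output
        errors.append(e)
        # integral action on the error, compensated for in-transit adjustments
        pending.append(-gain * (e + sum(pending)))
    return errors

# A unit step disturbance is cancelled as soon as the dead time allows:
print(regulate([1.0] * 6, gain=1.0, delay=2))  # [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

With a gain below 1.0 the correction is spread over several samples, trading speed of response for robustness.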
Temporarily disconnecting the loops in this way prevents overcompensation of the output product quality means, which is characterised by more variable control errors and more frequent adjustments to the individual process inputs (Baxley [1991]). It also helps to accomplish set point changes in a smooth and rapid manner (Shinskey [1988]).

7.7.2 Multi-Plant/Multi-Process Feedback Control Methodology

The process plants consist of similar units, each having a function of its own to minimise the output deviation from set point. The plants may have different functions: one controlling output product quality, another the viscosity of a flowing liquid, a third the temperature of the liquid and a fourth the pressure inside a chemical reactor in which chemical reactions take place that need to be controlled by the feedback control system. This is the usual scenario in the iron and steel making, manufacturing and petroleum refining industries. There are two possible methods for the control of such multi-plants. (i) A central computer (master computer) stores the feedback control algorithm, computes the required input feedback adjustment and transmits it
to the plant affected by the disturbance, upon receipt of an 'out-of-control' signal from that particular plant; or (ii) the "individual" process control computer stores the feedback algorithm and calculates the required input adjustment for its (adaptive) self-control. In this case, the "individual" computer receives an alert signal from the main central computer to trigger its own algorithm and initiate computation of the required input feedback control adjustment. The central computer monitors the performance and stability condition of each individual plant and, by sending an appropriate alarm signal, alerts the computer controlling the individual process plant that the process is out of control and needs feedback control action to bring the output product quality variable close to the controller set point. The adaptive controller system performs the following control objectives:
1. The individual regulator in each process/plant regulates to its own controller set point.
2. It contributes to the stability of the control system.
3. During periods of disturbance, the regulator adapts (self-tunes) itself to the disturbance by setting the closed-loop time constant of the respective feedback control loop of each individual process/plant to match the disturbance (which is assumed to be of the same nature and character throughout the Multi-Plant/Multi-Process control system).
The respective regulator in each process/plant satisfies the minimum variance control requirements by virtue of its integral control action and provides adequate dead-time compensation in the feedback control loop. The respective regulators send the necessary feedback control signals to the process inputs without compromising other control objectives such as, for example, holding temperature and pressure constant during process operations.
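Option (ii) above can be sketched as follows; the class and method names, the alarm limit and the stand-in adjustment rule are illustrative assumptions rather than the book's algorithm:

```python
class PlantComputer:
    """Stores its own copy of the feedback algorithm (here a stand-in
    proportional rule) and computes its own input adjustment when alerted."""
    def __init__(self, name, gain=0.8):
        self.name, self.gain = name, gain

    def on_alert(self, control_error):
        # Triggered by the central computer's alarm signal: compute the
        # required input feedback control adjustment for (adaptive) self-control.
        return -self.gain * control_error

class CentralComputer:
    """Monitors every plant and sends an alarm only to plants whose control
    error lies outside the out-of-control limit."""
    def __init__(self, plants, limit=3.0):
        self.plants, self.limit = plants, limit

    def monitor(self, errors):
        return {name: self.plants[name].on_alert(e)
                for name, e in errors.items() if abs(e) > self.limit}

plants = {n: PlantComputer(n) for n in
          ("quality", "viscosity", "temperature", "pressure")}
central = CentralComputer(plants)
adjustments = central.monitor({"quality": 4.0, "viscosity": 1.0,
                               "temperature": -3.5, "pressure": 0.5})
print(adjustments)  # only the out-of-control plants compute adjustments
```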
This implies that the adaptive regulators in each process/plant perform only the function of enhancing the performance of the complex process control system in terms of output product quality and do not interfere with the functioning of the other controllers/regulators in the control system. The minimisation of disturbances (caused by local fluctuations in input load, changes in raw material input, changes that take place within the process and atmospheric changes), and hence of the output variance, is taken care of either by the individual process computers or by the master computer in Multi-Plant/Multi-Process control, as explained by the advanced parallel processing computer architecture. This architecture is described after first explaining the application of the feedback control algorithm (Equation 3.16a) for MIMO and Multi-Plant process control in Section 7.8 and then the Star-Ring Parallel Processing System for MIMO process control (Venkatesan [1998]) in Section 7.8.1.

7.8 APPLICATION OF FEEDBACK CONTROL ALGORITHM FOR MIMO AND MULTI-PLANT PROCESS CONTROL

The MIMO feedback control methodology explained earlier can be applied to a wide range of MIMO process control applications, such as an inner (SISO) loop controlling some output product quality variable of interest in a continuous manufacturing process. The scope is extended here to embrace a physical implementation with actual computing hardware in a large-scale Multi-Input/Multi-Output (MIMO) and Multi-Plant/Multi-Process control system. This scenario can quite comfortably be realised in a parallel processing system by using the same feedback control algorithm (Equation 3.16a) for the different input variables and feedback control paths. The input variables can be adjusted in such a manner that the effects of the individual adjustments are independent of one another, occurring in an asynchronous fashion.

7.8.1 Multiport-Memory Architectural Design, Star-Ring Parallel Processing System for MIMO Process Control

In the proposed architecture, parallel processing is implemented via a dual approach involving parallelism within the CPUs (Central Processing Units) and the architectural design of the overall system. By appropriate choice of the master processor, internal parallel processing is implemented by means of 'super-scaling' and 'pipelining'. In addition, external parallelism is incorporated within the overall system by implementing a multi-ported-memory, Star-Ring high-performance computer architecture (Venkatesan [1998]). The proposed architecture is shown in Figure 7.2, in which only one of the slave sub-systems (slave n) is shown in expanded form; the other slaves, represented only as blocks, have the same internal structure as slave n. The structure described in Figure 7.2 is based on the Multiport-Memory, Star-Ring Parallel Processing System architecture (Venkatesan [1998]).
In this arrangement, which utilises the master-slave structure, the star node is implemented as a Pentium microprocessor system with a 400 MHz clock frequency and acts as the master processor. The satellite computers in the ring, which are interfaced via a multi-bus highway to the master processor, play the role of slave processors and also utilise Pentium-based technology. Within the multi-ported memory architecture, each processor module (P1, P2, …, Pp) is based on a high-performance 32-bit microprocessor. The memory modules (M1, M2, …, Mm) are memory units of varying capacity, which can be chosen to satisfy different performance requirements. In this architecture, switching and priority arbitration logic circuitry is positioned at the interface to the memory unit. Owing to the similarity among the
slave units, the process of connecting or removing slave processors is relatively straightforward from both hardware and software points of view. In the event of a localised hardware or communication malfunction, the system will still function, although at a slightly reduced efficiency.

[Figure: each slave sub-system contains processor modules P1 to Pp, interface units I1 to Im and memory modules M1 to Mm on a system bus; slaves 1 to n are linked by a multi-system bus to the master processor with its Pentium processor, AI/expert system, interrupt priority controller, cache interface circuitry, memory system controller and I/O interface.]

Figure 7.2 Architectural Design of Multi-ported Memory, Star-Ring Parallel Processing System for MIMO Process Control

In relation to throughput and speed of performance, parallelism is exploited in two ways: (i) within the individual processors and (ii) via the overall system architecture.

In order to increase reliability and fault tolerance within the overall system, sub-tasks are run in multiple distributed processing units and correspondence checking is implemented to detect malfunctions. Detected faults will result in automatic self-reconfiguration of the system, whereby a faulty unit is switched out and a maintenance condition is flagged.
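A minimal sketch of the correspondence-checking idea, assuming a simple majority vote among redundant processing units (the unit names and the voting rule are illustrative):

```python
from collections import Counter

def correspondence_check(results):
    """results maps each processing unit to its answer for the same sub-task.
    Returns the majority answer and the units that disagree with it; a
    disagreeing unit would be switched out of the system and a maintenance
    condition flagged."""
    majority, _ = Counter(results.values()).most_common(1)[0]
    faulty = [unit for unit, value in results.items() if value != majority]
    return majority, faulty

# The same sub-task run on three units; P3's result disagrees.
value, faulty = correspondence_check({"P1": 42, "P2": 42, "P3": 41})
print(value, faulty)  # 42 ['P3']
```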

7.9 DESIGN OF ADVANCED COMPUTER ARCHITECTURE FOR MULTI-PLANT/MULTI-PROCESS CONTROL

The process plants are similar units, each with a function of its own to minimise the output deviation from set point: one plant controls output product quality, another the viscosity of a flowing liquid, a third the flow rate of the liquid and a fourth the pressure inside a chemical reactor. The following design methodology can be adopted for the computer control of inter-connected process plants, as shown in Figure 7.3, with the following requirements. A central super-computer (master computer) stores the feedback control algorithm (Equation 3.16a), referred to as the 'control algorithm' in this discussion, and controls and monitors the overall system operations. The database management system of the satellite computers can include reliability analysis, quality control schemes and charts, safety procedures and analysis, simulation modelling and analysis, management control systems and an Artificial Intelligence (AI) system. To perform the operation of each plant within the proposed architecture, it is assumed that the required input feedback adjustment is transmitted to the plant affected by the disturbance upon receipt of an 'out-of-control' signal from that plant. In addition, there are several slave processors that perform allocated tasks in parallel. Each plant is connected to a slave processor, and the "individual" process control computer stores the feedback control algorithm and calculates the required input adjustment for its (adaptive) self-control. In this case, the "individual" computer receives an alert signal from the main central computer to trigger its own control algorithm and initiate computation of the required input feedback control adjustment.
The central computer monitors the performance and stability condition of each individual plant and, by sending an appropriate alarm signal, alerts the computer responsible for controlling that plant that the process is out of control and needs feedback control action. In the proposed architecture, parallel processing is implemented via a dual approach involving parallelism within the central processing units (CPUs) and the architectural design of the overall computer system. By appropriate choice of the master processor, internal parallel processing is implemented by means of 'super-scaling' and 'pipelining'. The reader may refer to books on
computer technology for an explanation of these terms. In addition, external parallelism is incorporated within the overall system by implementing a Multi-Bus, Multi-Ring high-performance computer architecture. The proposed architecture is shown in Figure 7.3. The structure described in Figure 7.3 is based on PC technology incorporating a parallel processing system architecture (Hwang [1993]). In this arrangement, which utilises the master-slave structure, the star node is implemented as a Pentium microprocessor system with a 1.5 GHz clock frequency and acts as the master processor. The satellite computers in the ring, which are interfaced via a Multi-Bus highway to the master processor, play the role of slave processors and also utilise Pentium-based technology. In this architecture, switching and priority arbitration logic circuitry is positioned between the level 1 and level 2 computer configurations. Owing to the similarity among the slave units, the process of installing or removing slave processors is relatively straightforward from both hardware and software points of view. In the event of a localised hardware or communication malfunction, the system will still function, although at a slightly reduced efficiency. A method to compute the efficiency of a parallel processing system is explained in the next section. In relation to throughput and speed of performance, parallelism is exploited in two ways: (i) within the individual processors and (ii) via the overall system architecture. In order to increase reliability and fault tolerance within the overall parallel processing computer system, sub-tasks are run in multiple distributed processing units, and correspondence checking is implemented to detect malfunctions. Detected faults result in automatic self-reconfiguration of the system, whereby a faulty unit is switched out and a maintenance condition is flagged.

7.10 EFFICIENCY OF PARALLEL PROCESSING IN PROCESS CONTROL SYSTEMS

In order to achieve an improvement in speed of operation through the use of parallelism, it is necessary to be able to divide the overall computation task into sub-tasks or processes which can be executed simultaneously. A measure of relative performance between a parallel processing system (multi-processor system) and a single-processor system is the speed-up factor s(n), defined as

s(n) = (Execution time using one processor) / (Execution time using n processors)

which identifies the increase in speed gained by using the multiprocessor methodology.
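In code, the speed-up factor is a one-line computation; the timing figures below are made up for illustration:

```python
def speedup(t_one, t_n):
    """Speed-up factor s(n): execution time using one processor divided by
    execution time using n processors."""
    return t_one / t_n

# A task taking 120 s on one processor and 20 s on eight processors:
print(speedup(120.0, 20.0))  # 6.0, less than n = 8, i.e. imperfect parallelism
```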

[Figure: at Level 1, the master processor holds the stochastic feedback control algorithm, the AI/expert system, the interrupt priority controller and database functions (simulation modelling and analysis, reliability analysis, quality control schemes and charts, safety procedures and analysis, management control systems); a Multi-Ring highway and Multi-Bus connection on the main highway link it to the Level 2 slave processors connected to the different plants: Slave 1, output product quality control; Slave 2, control of a flowing liquid; Slave 3, control of temperature of the liquid; Slave 4, control of pressure inside a chemical reactor.]

Figure 7.3 Multi-Ring Multi-Bus Parallel Computer Architecture for Multi-Plant/Multi-Process Control

This relationship is dependent upon the ability of the overall system to fully and simultaneously utilise the computing power of all processors; if this requirement is not fulfilled, the effective speed-up factor will be reduced. This is also reflected in the system efficiency E, which is defined as

E = (s(n) / n) × 100%
which indicates the extent to which the processors are usefully employed on the computation. For example, if E = 50%, the processors are, on average, being used for half the time on the actual computation. The maximum efficiency of 100% occurs when all the processors are used on the computation throughout, and the speed-up factor s(n) is then equal to the number of processors n in the parallel processing system. This means that the sum total of the process execution times using n processors in parallel at 100% efficiency equals the execution time using a single processor. This further explains the advantage of the parallel processing architecture: higher speed of execution with less processing time and cost. An alternative descriptor, applied parallelism, is used to indicate the degree of parallelism achieved in particular systems that have a limited parallel processing capability; it can be defined as the logarithmic quantity log2 n. Natural parallelism is used to express the potential in a system for simultaneous execution of independent processes and is often quantified as n log2 n, where n is the number of processors (Hwang [1993]). Factors such as communication overhead, delay in task allocation and task duration depend on the type of the software and hardware system design and on the dependencies of the various tasks upon each other. These aspects have been considered in formulating the proposed Multi-Ring, Multi-Bus parallel computer architecture. Appropriate software can be developed for the parallel processing computer architectures described in this chapter. Enquiries show a positive response from industry to applying these computer architectures to multi-input/multi-output process control and inter-connected process plants for commercial exploitation.
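The efficiency and parallelism measures above translate directly into code; the worked numbers are illustrative:

```python
import math

def efficiency(s_n, n):
    """System efficiency E = s(n)/n x 100%."""
    return 100.0 * s_n / n

def applied_parallelism(n):
    """Degree of parallelism achieved in a limited system, log2 n."""
    return math.log2(n)

def natural_parallelism(n):
    """Potential for simultaneous execution of independent processes, n log2 n."""
    return n * math.log2(n)

# With 8 processors and a measured speed-up of 4, the processors are on
# average busy on the computation only half the time:
print(efficiency(4.0, 8))      # 50.0
print(efficiency(8.0, 8))      # 100.0 (s(n) = n: all processors fully used)
print(applied_parallelism(8))  # 3.0
print(natural_parallelism(8))  # 24.0
```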

7.11 ADVANTAGES

The main central computer should be large enough to monitor all the process plants and should have sufficient capacity (in terms of RAM) to store the feedback control algorithm, compute the required input feedback control adjustment and send information about the required adjustment to the individual process plants. Alternatively, the main computer may be chosen so that it performs only the monitoring of the process plants and lets the respective process control computers compute their "own" feedback control input adjustments to bring their processes back into a state of stable operation and control. In either method, the cost of providing individual process control computers should not present a problem, given the availability of modern high-tech computers at reasonable prices.

7.12 CONCLUSION

In this chapter, parallel processing methods have been suggested (i) for implementing a discrete feedback controller strategy by means of a parallel processing architecture and (ii) for modelling process plants and their inter-connected control systems for the design and use of appropriate feedback controllers in a Multi-Plant/Multi-Process control system. It has been demonstrated that, through the Multi-Ported Memory, Star-Ring Parallel Processing System architecture, it is possible to control a process with multiple inputs and outputs and to optimise the performance of the MIMO process control system. The modelling of process plants and the inter-connected control systems for the Multi-Plant/Multi-Process control system presents an adaptive regulator which uses 'a priori' known dead time (time delay) information and satisfies the objectives of feedback control. It minimises the effect of plant/process disturbances in the form of changes in input raw material characteristics, changes that take place within the processes/plants and environmental changes (due to changes in atmospheric conditions). It also ensures satisfactory operation of inter-connected processes/plants that depend on one another for inputs/outputs in succeeding stages of plant operation. The MIMO process control methodology suggested in this chapter is well suited to high-speed, automatic control of modern complex processes. It has also been demonstrated that, through the massively parallel Multi-Ring, Multi-Bus computer configuration for the Multi-Plant/Multi-Process control of inter-connected plants, it is possible to control multiple plants/processes and optimise the performance of the process control system.
This methodology is also well-suited to high speed, automatic control of modern inter-connected process plants which make use of a common feedback control algorithm in their regulators and operate under almost exactly identical operating conditions with respect to noise (disturbance) and other process parameters. With the wide use and availability of high-speed computers, the system requirements can be readily met in practice for both types of parallel processing architectures.

REFERENCES

Baxley, R.V., (1991). "A Simulation Study of Statistical Process Control Algorithms for Drifting Processes", in Keats, J.B. and Montgomery, D.C. (eds.), Statistical Process Control in Manufacturing, Marcel Dekker, New York and Basel.

Balchen, J.G., Endresen, T., Fjeld, M., and Olsen, T.O., (1971). "Multivariable control with approximate state estimation of a chemical tubular reactor", IFAC Symposium on Multivariable Systems, Dusseldorf, October.
Balchen, J.G., Endresen, T., Fjeld, M., and Olsen, T.O., (1973). "Multivariable PID estimation and control with biased disturbances", Automatica, 9, 295-307.
Doyle, J.C., and Stein, G., (1981). "Multivariable feedback design: Concepts for a classical/modern synthesis", IEEE Trans. Automatic Control, 26, 1.
Fisher, D.G., and Seborg, D.E., (1976). Multivariable Computer Control, North-Holland Publishing Company, New York.
Foss, A.S., Edmunds, J.M., and Kouvaritakis, B., (1980). "Multivariable control systems for two-bed reactors by the characteristic locus method", Ind. Eng. Chem. Fund., 19, 1, 109-117.
Hwang, K., (1993). Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw-Hill, New York.
Shinskey, F.G., (1988). Process Control Systems, McGraw-Hill, New York.
Venkatesan, G., (1997). A Statistical Approach to Automatic Process Control (Regulation Schemes), Ph.D. Thesis, Victoria University of Technology, Melbourne.
Venkatesan, G., Abachi, H., Lisner, R., and Abdollahian, M., (1998). "Multi-Input/Output Process Control Utilising a Parallel Computer Architecture", Proceedings of the Fifth International Conference on Control, Automation, Robotics and Vision (ICARCV '98), vol. 2, pp. 1598-1602.
Venkatesan, G., (2002c). "Parallel Processing Computer Architectures for Process Control", The 34th Southeastern Symposium on System Theory (SSST), University of Alabama in Huntsville, ISSN 0094-2898, pp. 234-238.

Suggested Exercise

1. Discuss modern techniques of parallel processing architectures.

Chapter 8

Cost Modelling for Process and Product Quality Control

8.1 INTRODUCTION

Process control is a management problem in the performance analysis of a continuous production process. Statistical process monitoring and feedback control have recently been of interest to process control practitioners. It is imperative for the process control practitioner to know the costs of sampling and adjusting a process in order to minimise the manufacturing cost without compromising the objectives of process control. This view underlies the cost modelling methodology proposed here, which uses some of the sampling and cost modelling principles available in the literature. The sampling (adjustment) intervals are obtained by direct simulation of the feedback control algorithm (Equation 3.16). It was shown in Chapter 3 that the stochastic feedback control algorithm is developed by the application of statistical and automatic process control techniques. The cost model method and the associated cost functions discussed in this chapter use 'adjustment intervals' and are less complex than the method suggested by Abraham and Box [1979]. The cost function for feedback control adjustment is based on the second-order transfer function process model and the first-order stochastic ARIMA noise model used to derive the stochastic feedback control algorithm (Equation 3.16). The cost model and the corresponding process regulation schemes have applications in basis-weight control on a paper machine and in a batch-polymerisation process that produces polymer resins in two groups of batch reactors that run in parallel and share common raw materials. It is mentioned in Chapter 4 that a simulation study of statistical process control algorithms with fast and slow drifts and process regulation schemes is reported in Baxley [1991]. The design of feedback control schemes described in Box et al. [1994] includes a method to choose an appropriate sampling interval for making control adjustments in a feedback control system (Abraham and Box [1979]). Optimal
control can be achieved in feedback control schemes by optimising input control actions with respect to a given criterion. Optimal control schemes account for costs associated with deviations of the controlled variable from its target value (called 'off-target' costs), as well as adjustment costs incurred because of the input feedback control adjustments. It has been established (Box and Kramer [1992]) that a process regulation scheme employing a minimum mean square error (MMSE) or minimum variance controller would minimise its overall cost if it were assumed that (i) the 'off-target' cost is the only cost and (ii) the cost is a quadratic function of the output deviation from target. It is mentioned in previous chapters that mean square error (MSE) is the sum of the squared deviations between an output controlled variable and the target or controller set point. The MMSE controller principle is used to develop a cost model, which also uses the simulation results of the feedback control algorithm, Equation 3.16 (Venkatesan [1997], [2002a]), and the performance measures of the minimum variance (time series) controller. The control error standard deviation (CESTDDVN) and the adjustment interval (AI) are the performance measures of a time series controller (Baxley [1991]). The range of CESTDDVNs and the AI are used to formulate process regulation schemes. Kramer [1990] observed that the cost of being 'off-target' is proportional to the square of the deviation from target and that other variable costs are negligible; development of the cost function makes use of this principle. This chapter is organised as follows. A review of the literature background and the motivation to develop a cost model for feedback control adjustment are explained in Section 8.2. The method to develop the cost model for feedback control adjustment is explained in Sections 8.3, 8.4 and 8.5. A comparison of some cost control schemes is given in Section 8.6.
Benefits that can be derived from the proposed cost model are given in Section 8.7. The Chapter is concluded in Section 8.8 with some industrial applications.
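A rough numerical sketch of the cost structure described in this chapter combines a quadratic off-target cost (proportional to CESTDDVN squared) with adjustment and sampling costs spread over the adjustment interval AI; the cost coefficients below are illustrative assumptions, not values from the book:

```python
def cost_per_interval(cestddvn, ai, c_offtarget=1.0, c_adjust=0.2, c_sample=0.05):
    """Approximate expected cost per sampling interval of a regulation scheme:
    a quadratic off-target cost proportional to the control error variance
    (CESTDDVN squared), plus the adjustment cost spread over the adjustment
    interval AI, plus a fixed sampling cost. Coefficients are illustrative."""
    return c_offtarget * cestddvn ** 2 + c_adjust / ai + c_sample

# Tighter control (smaller CESTDDVN, more frequent adjustment) versus
# looser, less frequent control:
tight = cost_per_interval(cestddvn=0.5, ai=2.0)
loose = cost_per_interval(cestddvn=0.8, ai=10.0)
print(tight, loose)  # with these coefficients the tighter scheme is cheaper
```

Varying the coefficients shows how the minimum-cost scheme shifts between frequent and infrequent adjustment, which is the trade-off the cost model in this chapter formalises.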

8.2 BACKGROUND AND REVIEW OF THE COST MODEL

At the outset, details of a few seminal papers on process regulation schemes and fixed monitoring and adjustment costs are given in the literature review. In one of these papers, Box and Jenkins [1963] evaluated the performance of an exponentially weighted moving average (EWMA) controller in the mass production of ball bearings and discussed the effect of adjustment cost and the cost of being off-target. The authors showed that the sum of the adjustment and off-target costs could be minimised with an appropriate choice of EWMA control limits (L). Calculations were made for the expected 'run length' between adjustments and the mean square deviation from target. These run lengths were used to determine the values of L
and the CESTDDVN of the diameter of the ball bearings, the quality index chosen for the performance evaluation. Box and Jenkins [1963] considered the fact that an adjustment to the machine, requiring an interruption of production, incurred an adjustment cost. They assumed that the diameters deviated from target according to an IMA (0, 1, 1) process, that there was no (time) delay (dead-time) in realising the adjustments, and that the cost of being off-target was proportional to the square of the deviation; the IMA parameter θ was chosen to generate the forecast errors. For values of θ ranging from zero to one, they calculated the expected run lengths between adjustments and, from these, the corresponding values of L and the control error standard deviation. Box and Jenkins [1970, 1976] further described an approach to the formulation of feedback control schemes, assuming that the sampling interval had been decided before the control schemes were designed. Box, Jenkins and MacGregor [1974] showed that a control strategy comparing the EWMA to a set of fixed limits is optimal when the adjustment cost is significant compared with the off-target cost; they calculated the effect of changing the EWMA limits on the average adjustment interval and the control error variance when dead-time is not present. Kramer [1990], considering only quadratic off-target cost for adjustments made without delay, developed minimal-cost process regulation schemes for the case of fixed monitoring and adjustment costs.
For a dynamic system without delay (dead-time), the adjustment cost can be proportional to the square of the required adjustment, which was also considered by Kramer [1990]. An outline of a process regulation scheme using the feedback control algorithm (Equation 3.16) derived in Chapter 3 was formulated and discussed in Chapter 6. Box and Jenkins [1970, 1976] developed the MMSE controller with a fixed sampling interval. In practice, however, it is often possible and desirable to change the sampling interval. Abraham and Box [1979] suggested a method of carrying out pilot runs on a process, with a short sampling interval, to determine its dynamic and stochastic characteristics, and used the information obtained to develop an optimal feedback control scheme. Some of these EWMA controller principles are used in Abraham and Box [1979] and also in developing the approximate feedback control cost model in this chapter. A review of research articles on the cost aspects of feedback control adjustment reveals that there are only a few publications related to cost models. For a brief review of process control and quality engineering, we draw attention to Elsayed et al. [1995] and English [1994], where processes with damped controllers and the use of fixed control limits for continuous flow processes are discussed. For details of quality control monitoring of flow processes

8.4

Integrated Statistical and Automatic Process Control

and regulation schemes for quality control, see English et al. [1991] and Superville and Adams [1994]. Tseng and Adams [1994] have discussed the use of EWMA forecasts for monitoring auto-correlated processes, and Yourstone and Montgomery [1989] applied a time-series approach to discrete real-time process quality control.

8.3 MOTIVATION FOR COST MODEL DEVELOPMENT It is reported that an MMSE controller for a fixed sampling interval can be developed (Box et al. [1994]). Nevertheless, it is possible and desirable in practice to change the sampling interval. Abraham and Box [1979] describe a method of carrying out pilot runs from a process with short sampling intervals to determine its dynamic and stochastic characteristics. In this method, the information was used to develop process regulation schemes assuming a specific cost function and considering fractional periods of delay and a second-order integrated moving average (IMA) disturbance model. A second-order autoregressive integrated moving average (ARIMA) model was used to represent the disturbance in Abraham and Box [1979]. Instead, the cost model methodology proposed in this chapter considers a first-order disturbance model as a specific case and whole periods of delay (dead-time), since naturally occurring industrial disturbances can be assumed to be of first order (Box et al. [1994]). The focus is to show that minimum cost regulation schemes can still be achieved with ease in a less rigorous manner. The complexity in developing the cost function is reduced by considering the dynamic process with whole integer periods of (time) delay. A distinct advantage of considering a first-order ARIMA disturbance model is that only one moving average noise parameter is used, instead of the two noise parameters θ1 and θ2 of the second-order disturbance model considered in Abraham and Box [1979]. The development of the cost model uses the principles of sampling interval and feedback control (Abraham and Box [1979]) and enhances the cost modelling for feedback control adjustment. A method to choose a sampling interval is described in Abraham and Box [1979].
The authors followed the usual notation for the IMA parameter θ and the backward difference operator, ∇ = 1 – B, where B is the backward shift operator, B(Xt) = Xt–1, for an ARIMA (p, d, q) disturbance process (Zt) of the form (Box et al. [1994]).

∇dZt = (1 – θ1B – ... – θqBq)at.

Abraham and Box [1979] used a second-order moving average process as a special case for deriving a cost function and illustrated it with a numerical example. The process is sampled at a sampling (integer) interval ('h') subject to the conditions (i) h > q – d and (ii) q ≥ d, without considering delay in the system. The authors also showed, by imposing an additional restriction (h > b, the delay (dead-time) in the system), that the sampling interval obtained is optimum even when there are

Cost Modelling for Process and Product Quality Control

8.5

whole or fractional periods of delay in effecting the adjustments after sampling. The sampling interval was obtained by assuming a certain class of stochastic disturbance models and a specified cost function. General results were derived for the 'new' (the process resulting after sampling) sampled process and its parameters. Instead of deriving an optimum sampling interval, the (stochastic) time series feedback control algorithm (Equation 3.16) (Venkatesan [1997]) is simulated to obtain the adjustment intervals (AIs), as discussed in Chapter 4. Note that the same parameter θ can be used for both the IMA (0, 1, 1) process and the ARIMA disturbance model. The method of Abraham and Box [1979] is modified to show that the use of sampling periods (adjustment intervals, AIs) obtained directly from simulation of the stochastic feedback control algorithm (3.16) still leads to minimum cost control schemes even with time delay (dead-time) in the system. This is because these adjustment intervals are for minimum mean square error (variance) (MMSE) control of the second-order dynamic model with delay considered in this book. By assuming an adequate 'transfer function (dynamic second-order) noise model' to describe the process and the disturbance (noise), the effect of 'model misspecification' on optimization (a problem encountered by the authors) is minimized. The disturbance model considered is the ARIMA (0, 1, 1), and it is shown that the derived cost model leads to a minimum cost regulation scheme. The Control Error Standard Deviation (CESTDDVN) and the Adjustment Intervals (AIs), being the time series controller 'performance measures', are obtained from simulation of the stochastic feedback control algorithm (Equation 3.16) (Venkatesan [1997]), as shown in Chapter 4. This is a distinct advantage of considering a second-order transfer function process model that describes the dynamic nature of the process.
Incorrect specification of the process model might otherwise lead to an incorrect noise model that does not adequately describe the drifting nature of the process due to disturbance (noise). It is reported that it may be rather difficult in general to discuss the effect of 'model misspecification' on optimization (Abraham and Box [1979]). This chapter considers the second-order dynamic model for the process and overcomes this difficulty by using the AIs. Hence, it is not necessary to devise a control adjustment scheme after rebuilding the transfer function-noise model with the optimum interval, as suggested in Abraham and Box [1979]. The values of AI are used to sample the process, and an adjustment is made immediately when an EWMA forecast crosses the control limits, as the time series controller requires an adjustment for each sample. The EWMA feedback control method of process adjustment is explained in the feedback control methodology of Chapter 4. It is shown in Venkatesan [1997] that these AIs are for MMSE or minimum variance control of the process. This method of obtaining the sampling interval obviates the need for locating an optimum sampling interval by the algorithmic procedure given in Abraham and Box [1979]. The IMA parameter θ determines the instant at which compensatory action is to be taken, depending on the EWMA control limit lines L given by Equations (6.1) and (6.2), L = ±3sa√((1 – θ)/(1 + θ)), where sa is the standard deviation of the random shocks, {at} ~ N(0, sa) (Baxley [1991]). Adjustments (xt) in the input variable are made, when needed, to bring the mean of the output quality variable close to the target value. Use of the IMA parameter θ as an on-line self-tuning ('adaptive') parameter for the minimum variance controller is suggested in Venkatesan [1997], with θ set to match the disturbance. Hence, θ and θAI (as defined subsequently) are the same, and the resulting control regulation scheme depends only on the cost ratio C = Ct/Ca, where Ct is the off-target cost and Ca the adjustment cost, and on the process drift, r = (1 – θ). A complex algorithm is given in Abraham and Box [1979] to locate the optimum θh (the IMA parameter for the 'sampled' process) in the cost model, and a table of sampling interval values is also provided as a function of overall cost to find optimum sampling intervals for the cost ratio C. The suggested method eliminates the need to use this table and also makes it easier to locate θAI, the IMA parameter for the new 'sampled' process. This is shown while developing the equivalent expression for the cost function, and it can be claimed that the proposed method is an enhancement of the traditional method outlined by Abraham and Box [1979].
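The EWMA limit expression L = ±3sa√((1 – θ)/(1 + θ)) can be computed directly. The sketch below is illustrative (the function name and the loop values are not from the book); it shows how the limits narrow as θ grows, i.e. as the drift slows:

```python
import math

def ewma_control_limits(theta, sigma_a):
    """EWMA action limits L = +/- 3*sigma_a*sqrt((1 - theta)/(1 + theta)).

    theta is the IMA(0, 1, 1) noise parameter and sigma_a the standard
    deviation of the random shocks {a_t}."""
    half_width = 3.0 * sigma_a * math.sqrt((1.0 - theta) / (1.0 + theta))
    return -half_width, half_width

# Fast drifts (small theta) give wide limits; slow drifts give narrow ones.
for theta in (0.05, 0.50, 0.95):
    lo, hi = ewma_control_limits(theta, sigma_a=1.0)
    print(f"theta = {theta:.2f}: L = +/-{hi:.3f}")
```

Note that at θ = 0 the limits reduce to the familiar Shewhart-style ±3sa.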

8.4 FEEDBACK CONTROL ADJUSTMENT METHODOLOGY It is shown in Chapter 4 that simulation of the stochastic feedback control algorithm (Equation 3.16) also gives the EWMA forecasts of the process data. These forecasts are compared with the control limits (L) of an EWMA chart to monitor the process. The IMA parameter θ is used for indicating process out-of-control signals based on the EWMA statistic. In a process regulation scheme, the position of these control limits is determined on the basis of the relative costs of adjustment and of being off-target. The relative value of these costs is an important factor in choosing a process regulation scheme. One procedure to reduce the cost of the scheme is to lengthen the sampling interval when there is a possibility that frequent sampling and adjustments to the process will result in high costs of sampling and adjustment. The sampling interval is lengthened relative to pre-determined and fixed sampling intervals, thereby reducing the cost of a regulation scheme. This type of control action, incidentally, may also slightly increase the 'Mean Square Error' ('MSE') due to the dynamics (inertia) of the system. A suitable feedback control scheme (with a lengthened sampling interval, which may result in a larger MSE than would normally be expected under similar circumstances) should then be specifically designed and used to regulate (adjust) the process, in order to avoid a large MSE at the output (Box and Kramer [1992]).
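As a concrete illustration of this methodology, the sketch below simulates an IMA(0, 1, 1) disturbance, forecasts it with an EWMA of discount 1 – θ, and applies a compensatory reset whenever the forecast crosses the limits ±L, recording the adjustment intervals (AI) and the control error standard deviation (CESTDDVN). This is a simplified bounded-adjustment scheme written for this illustration, not the algorithm of Equation (3.16); all names and the choice of reset rule are assumptions:

```python
import math
import random
import statistics

def simulate_bounded_adjustment(theta, sigma_a=1.0, n=20000, seed=1):
    """Bounded-adjustment sketch: EWMA forecast of an IMA(0,1,1)
    disturbance, with a compensatory reset when the forecast crosses
    the action limits L = +/-3*sigma_a*sqrt((1-theta)/(1+theta))."""
    random.seed(seed)
    L = 3.0 * sigma_a * math.sqrt((1 - theta) / (1 + theta))
    lam = 1.0 - theta                  # EWMA discount for the IMA forecast
    dev, a_prev, ewma = 0.0, 0.0, 0.0  # deviation from target, last shock, forecast
    errors, intervals, since = [], [], 0
    for _ in range(n):
        a = random.gauss(0.0, sigma_a)
        dev += a - theta * a_prev      # IMA(0,1,1): del(Z_t) = a_t - theta*a_{t-1}
        a_prev = a
        errors.append(dev)
        since += 1
        ewma = lam * dev + (1.0 - lam) * ewma   # one-step-ahead forecast
        if abs(ewma) > L:              # forecast beyond the action limits:
            dev -= ewma                # adjust by minus the forecast
            ewma = 0.0
            intervals.append(since)    # record the adjustment interval
            since = 0
    mean_ai = statistics.mean(intervals) if intervals else float("inf")
    return mean_ai, statistics.pstdev(errors)

ai, cestddvn = simulate_bounded_adjustment(theta=0.5)
print(f"mean AI = {ai:.2f}, CESTDDVN = {cestddvn:.3f}")
```

Running this for several θ values reproduces the qualitative trade-off discussed above: longer intervals between adjustments come at the price of a larger control error standard deviation.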


8.5 COST MODEL FOR FEEDBACK CONTROL ADJUSTMENT
8.5.1 Assumptions in Developing the Cost Model
(i) The cost of sampling the process is negligible and, even if slightly significant, is included in Ca, the cost of adjustment. This is usually the situation in continuous process industries, where the cost of sampling is negligible or insignificant.
(ii) The process is represented by a second-order model and the disturbance by a first-order model.
(iii) The overall cost ratio C = Ct/Ca of off-target cost to adjustment cost is known.
In Chapter 3, it is mentioned that the time series controller given by the feedback control adjustment Equation (3.16) is the discrete equivalent of a properly tuned integral controller. This equation defines the adjustment to be made to the process at time t that would produce the feedback control action compensating for the forecasted disturbance, yielding the smallest possible mean square error (variance). In other words, the control adjustment action given by Equation (3.16) minimizes the variance of the output controlled variable. A feedback control scheme employing a Minimum Mean Square Error, MMSE (or minimum variance), controller would minimize the overall cost if it is assumed that (i) the off-target cost is the only cost, and (ii) the cost is a quadratic function of the output deviation from the target (Box and Kramer [1992]). So, the time series controller based on Equation (3.16) would minimize the mean overall cost of the feedback control scheme if the cost of being off-target is assumed to be proportional to the square of the deviation from target and the other variable costs (given below) are negligible. Minimum variance control achieved with feedback control schemes will give minimum cost schemes under the assumption that the off-target cost is the cost associated with the deviation of the output quality characteristic from the desired target.
This assumption is possible since the minimization of the mean square error at the output is equivalent to minimization of the quadratic off-target cost (Kramer [1990]). However, there are other variable costs, such as (i) the cost of adjustment, and (ii) the cost associated with the frequency of sampling to examine the process, called the monitoring or 'observation cost'. The resulting minimum-cost feedback adjustment schemes then have to be formulated in a different manner from the minimum cost schemes based on minimization of the mean square error at the output. This is because the cost of the feedback control scheme changes as the objectives of the scheme are changed to minimize the deviation and also to adjust the process (Box and Kramer [1992]). With this as our objective, an outline of a process regulation scheme is given for the case where there are (i) off-target costs and (ii) adjustment costs (which include sampling costs) in a stochastically controlled process employing feedback control.

8.6 DEAD-TIME AND FEEDBACK CONTROL SCHEME It is generally assumed, in controlling a process by applying SPC techniques, that (i) the true process level is constant and (ii) the common-cause variation and the process state of statistical control follow a (stable or) stationary model. If this assumption proves incorrect, there may be slight autocorrelation in the process level, affecting the run length and the control chart limits (Harrison and Ross [1991]). This is frequently the scenario in the continuous process industries, where the true process level is not constant due to process drifts. Feedback control can be an approach to compensate for the drifts in these circumstances. If the autocorrelations are large and persistent, then a feedback control approach may be more appropriate than an SPC approach to control the process. Feedback control can compensate only for the predictable component of the uncompensated process output (MacGregor [1992]). So, the effectiveness of feedback control will depend on how much of the output it is possible to compensate. Thus a situation arises in which a decision must be taken as to when to use SPC and when to use APC. This depends upon whether the process level remains constant or there are changes in the process reflected in significant autocorrelation estimates of the process data. In the final analysis, a suggestion is made that integrates the SPC and APC procedures by judicious use of techniques from both disciplines, as considered necessary under current process control conditions. The EWMA forecasts of the (simulated) data have been plotted against two parallel action lines in a geometric moving average (gma) control chart. In a feedback control scheme, the position of these control limits is determined on the basis of: (i) the relative costs of adjustment and of being off-target, and (ii) the degree of non-stationarity of the process.
The relative value of these costs is an important factor in deciding the 'optimal' choice of a feedback control scheme. One procedure to reduce the cost of the scheme is to lengthen the sampling (monitoring) interval. This procedure may be less satisfactory in that it may slightly increase the mean square error due to the dynamics (inertia) of the system (page 261, Box and Kramer [1992]). Since an assumption is made to represent a complex dynamic process by a second-order dynamic model, the focus is currently on the influence and effects of dead-time on a feedback control scheme. For ARIMA (transfer function) models with b > 0 periods of delay, minimum mean square action yields a process output that is a moving average of at, at–1, ..., at–b (page 279, Vander Wiel and Vardeman [1992]). Due to the delay, the process deviation is a moving average time series model of order b – 1. For b > 2, adjacent values of the process output will be auto-correlated. When the delay exceeds one period, this autocorrelation will be present regardless of the feedback control scheme. There will not be any significant autocorrelations beyond lag 2 (MacGregor [1992]). The geometric moving average (gma) parameter θ was used for monitoring and indicating the 'out-of-control' signal based on the gma statistic. The process with dead-time will still be in statistical control though the observations may be serially independent (Kotz and Johnson [1985]). The Shewhart chart helps to monitor stable operation due to common causes and to reveal special causes. So, only when it is possible to establish some statistically significant monitoring criterion will it be proper to react to process changes. A suitable feedback control scheme should then be specifically designed and used to regulate (adjust) the process (Box and Kramer [1992]). This is to avoid a (pure) feedback control adjustment scheme yielding a large mean square error. However, it is possible to obtain the residual sequence ât, which can be used for process monitoring even with dead-time (in the feedback control loop). Thus both the system dynamics and dead-time are taken care of, which enables attention to be focused on developing the cost model for the feedback control process regulation schemes proposed in this monograph.

8.7 COST MODEL Briefly, the method of Abraham and Box [1979], mentioned earlier in Section 8.3, is reproduced here for the sake of continuity of discussion and review. Abraham and Box [1979] modified the approach to the design of feedback control schemes of Box and Jenkins [1970, 1976], in which the monitoring interval had been decided prior to the design of the scheme. Abraham and Box [1979] derived general results for the ('new' sampled) process and its parameters. They considered the process resulting after sampling in the presence of the disturbance

∇dZt = (1 – θ1B – ... – θqBq)at,

sampled at a sampling (integer) interval ('h') subject to the conditions (i) h > q – d and (ii) q ≥ d, without considering delay in the system. The authors also showed, by imposing an additional restriction (h > b, the delay (dead-time) in the system), that the sampling interval obtained is 'optimum' even when there are whole or fractional periods of delay in effecting the adjustments after sampling. The 'sampling' interval was obtained by assuming a certain class of stochastic disturbance models and a specified cost function. They considered a second-order moving average process as a special case and illustrated it with a numerical example. The method of Abraham and Box [1979] is modified to show that the use of sampling periods (adjustment intervals, AIs) obtained directly from simulation of the stochastic feedback control algorithm (Equation 3.16) still leads to minimum cost control schemes even with time delay (dead-time) in the system. This is because these Adjustment Intervals (AIs) are for minimum mean square error (variance) (MMSE) control of the second-order dynamic model with delay considered in this book. By assuming an adequate 'transfer function (dynamic second-order) noise model' to describe the process and the disturbance (noise), the effect of 'model misspecification' on optimization (a problem encountered by the authors) is minimized. The disturbance model considered is the ARIMA (0, 1, 1), and it is shown that the derived cost model leads to a minimum cost regulation scheme.
8.7.1 Method of Developing the Cost Model
The cost model is developed by considering the different forms of the cost function as derived in Equations (8.10), (8.12) and (8.13). This cost modelling approach is suggested in order to reduce the degree of complexity in arriving at the conclusion that the minimizations of the cost functions G(AI) and F(AI) are equivalent.
Notation
AI > b, the dead-time in whole periods of delay;
Ct is the cost associated with being off-target (off-target cost);
Ca is the cost of sampling and adjustment;
C = Ct/Ca is assumed to be known;
se² is the variance of the observed error from the target corresponding to the sampling interval (AI).
Consider a cost function of the form

F*(AI) = Ct(se²/sa²) + Ca(1/AI). ...(8.1)

An economical sampling interval will be the value of AI that makes the above cost function a minimum. It is assumed that the system has no dead-time, (b = 0), and that the disturbance Zt is an integrated moving average process of order 1 given by

∇Zt = (1 – θB)at. [d = q = 1 for the first-order ARIMA (0, 1, 1) disturbance process.]


Sampling and adjustment (because the time series controller requires an adjustment for every sample) is done with an adjustment (sampling) interval such that AI > 0 and q = d = 1. Then the resulting process Mt, with aAI,t the random shocks assumed to be N(0, sAI) with variance sAI², is given by

∇AIMt = (1 – θAIBAI)aAI,t ...(8.2)

where ∇AI = (1 – BAI) is a differencing operator associated with the adjustment (sampling) interval AI, and θAI is the IMA parameter of the new process Mt. Let γ0(AI) be the variance of ∇AIMt and γ1(AI) the first lag auto-covariance of ∇AIMt. Abraham and Box [1979] used the following Lemma from the monograph 'The Statistical Analysis of Time Series' of Anderson [1971] (a) to obtain the result given in Equation (8.2) and (b) to obtain expressions for the variance γ0(AI) and the first lag autocovariance γ1(AI) of ∇AIMt.
Lemma. "Given any arbitrary covariance or correlation sequence with only a finite number of non-zero elements, there is a finite moving average process corresponding to the sequence" (Anderson [1971]; Abraham and Box [1979]).
(a) Proof of Equation (8.2):

∇AIMt = [(1 – BAI)/(1 – B)](1 – B)Zt
= (1 + B + … + BAI–1)(1 – θB)at ...(8.3)

This can be written as ∇AIMt = at + ψ1at–1 + ... + ψAIat–AI (where ψ1 = ... = ψAI–1 = 1 – θ and ψAI = –θ).

Let γk(AI) denote the kth lag auto-covariance of ∇AIMt. Then it is enough to show that (i) γ1(AI) ≠ 0 and (ii) γ1+k(AI) = 0 for all k > 0. Now,

∇AIMt–(1+k)AI = at–(1+k)AI + ψ1at–(1+k)AI–1 + ... + ψAIat–(1+k)AI–AI. ...(8.4)

Since q = d = 1, using k = 0 in Equation (8.4), it can be seen that γ1(AI) = E(∇AIMt ∇AIMt–AI) ≠ 0, where E denotes the expected value.

It is obvious that (1 + k)AI > AI for all values of k ≥ 1, which is what is needed to show that γ1+k(AI) = 0 for all k ≥ 1. Since sampling and adjustment are done with an adjustment (sampling) interval AI > (q – d = 1 – 1 =) 0,

(AI)(d) + (q – d) < (AI)(d) + AI ≤ (AI)(d) + k(AI) for all k ≥ 1 (page 7, Abraham and Box [1979]).

Now, using the facts that γ1(AI) ≠ 0 and γ1+k(AI) = 0 for all k ≥ 1, together with the Lemma, the result of Equation (8.2) follows.
(b) Expressions for the variance γ0(AI) and the first lag autocovariance γ1(AI) of ∇AIMt:
With d = 1 in Equation (8.2) and using Equation (8.3), it can be shown that

∇AIMt = at + (1 – θ)(at–1 + ... + at–AI+1) – θat–AI.

Hence,

γ0(AI) = [1 + (AI – 1)(1 – θ)² + θ²]sa².

Now

γ1(AI) = E((∇AIMt)(∇AIMt–AI)) = –θsa²

and after some tedious algebraic manipulations,

[γ0(AI) + 2γ1(AI)]/γ1(AI) = –AI(1 – θ)²/θ. ...(8.5)

For a first-order moving average process with parameters θAI and sAI², it can also be shown that

γ1(AI) = –θAIsAI²,

and [γ0(AI) + 2γ1(AI)]/γ1(AI) = –(1 – θAI)²/θAI. ...(8.6)

Hence, from Equations (8.5) and (8.6), the following relations connecting the parameters of the processes Zt and Mt are obtained:

AI(1 – θ)²/θ = (1 – θAI)²/θAI, or [(1 – θAI)²/AI] = [(1 – θ)²/θ][θAI], ...(8.7)

and sAI² = (sa²)(θ/θAI), or [θAI/θ] = [sa²/sAI²]. ...(8.8)

The variance of the error et in the output at time t is given by

se² = sAI²[1 + (b/AI)(1 – θAI)²]. ...(8.9)
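Relations (8.7)–(8.9) can be checked numerically. The sketch below is illustrative (the function name is an assumption, not from the book): it solves the quadratic form of Equation (8.7) for θAI, then applies Equations (8.8) and (8.9):

```python
import math

def sampled_process_params(theta, AI, sigma_a=1.0, b=0):
    """Given theta of the original IMA(0,1,1) disturbance and an
    adjustment interval AI, return (theta_AI, s_AI^2, s_e^2) from
    Equations (8.7)-(8.9).

    (8.7) reads (1 - theta_AI)^2/theta_AI = AI*(1 - theta)^2/theta,
    a quadratic in theta_AI whose root in (0, 1] is taken."""
    c = AI * (1.0 - theta) ** 2 / theta
    theta_ai = ((2.0 + c) - math.sqrt((2.0 + c) ** 2 - 4.0)) / 2.0
    s_ai2 = sigma_a ** 2 * theta / theta_ai                  # Equation (8.8)
    s_e2 = s_ai2 * (1.0 + (b / AI) * (1.0 - theta_ai) ** 2)  # Equation (8.9)
    return theta_ai, s_ai2, s_e2

# Sanity check: AI = 1 (sampling every period) recovers the original parameters.
print(sampled_process_params(0.5, 1))
```

Longer adjustment intervals give a smaller θAI, i.e. the sampled process drifts relatively faster between observations, as expected.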


When b = 0, eAI,t, the observed error from target at time t, is the one-step-ahead forecast error aAI,t+1, and hence se² = sAI². Using this, Equation (8.1) is simplified and the cost function

F(AI) = C(sAI²/sa²) + (1/AI) ...(8.10)

is considered, where C = Ct/Ca.

The equation for the cost function that includes fractional periods of delay (Equation 5.4, page 6, Abraham and Box [1979]) is modified for integer periods of delay to give the cost function corresponding to Equation (8.10):

G(AI) = C(sAI²/sa²)[1 + (1 – θAI)²(b/AI)] + (1/AI). ...(8.11)

On using Equations (8.7) and (8.8), Equation (8.11) becomes

G(AI) = C(sAI²/sa²) + (1/AI) + C(sAI²/sa²)[b(1 – θ)²(sa²/sAI²)].

That is,

G(AI) = C(sAI²/sa²) + (1/AI) + Cb(1 – θ)² ...(8.12)

= F(AI) + k,

where k = Cb(1 – θ)² is a known constant, since the dead-time b, the rate of drift r = (1 – θ) and the cost ratio C, information about which will always be available, are known quantities. The minimizations of G(AI) and of F(AI) are therefore equivalent, and hence the sampling period (adjustment interval) obtained will provide minimum cost control schemes even when there is a delay in the system. A similar result is obtained by Abraham and Box [1979] for a general case and a second-order disturbance process. The function F(AI) given by Equation (8.10), when written as a function of θAI, becomes

F1(θAI) = C(θ/θAI) + [(1 – θ)²/(1 – θAI)²](θAI/θ). ...(8.13)
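A minimizing θAI for Equation (8.13) can be located by a simple grid search; the sketch below is illustrative (the book relies on the algorithmic procedure of Abraham and Box [1979] instead, and the function names here are assumptions):

```python
def f1(theta, theta_ai, C):
    """Cost function F1 of Equation (8.13)."""
    return (C * theta / theta_ai
            + ((1.0 - theta) ** 2 / (1.0 - theta_ai) ** 2) * (theta_ai / theta))

def minimize_f1(theta, C, points=5000):
    """Grid search over theta_AI in (0, 1) for the minimum of F1."""
    candidates = [(k + 1) / (points + 2) for k in range(points)]
    best = min(candidates, key=lambda t_ai: f1(theta, t_ai, C))
    return best, f1(theta, best, C)

theta_opt, f_opt = minimize_f1(theta=0.5, C=10.0)
print(f"theta_AI* = {theta_opt:.4f}, F1* = {f_opt:.4f}")
```

Given the minimizing θAI, the corresponding AI and control scheme follow from Equations (8.7) and (8.8).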

After locating the θAI which minimizes Equation (8.13), a corresponding control scheme can be obtained using Equations (8.7) and (8.8) and the control adjustment Equation (3.16) derived in Chapter 3. In this case, since the disturbance is described by the ARIMA (0, 1, 1) model and the IMA parameter θ is the tuning parameter for the time series controller, θ and θAI are the same. Hence the resulting cost regulation scheme depends only on the cost ratio C = Ct/Ca, where Ct is the off-target cost and Ca the sampling and adjustment cost, and on the process drift r = (1 – θ), as shown subsequently. As mentioned earlier, the value of the adjustment interval obtained from the simulation is used to sample the process, and an adjustment is made immediately. The geometric moving average (gma) parameter θ determines the instant at which compensatory action is to be taken, depending on the EWMA control limit lines L given by Equations (4.1) and (4.2), L = ±3sa√((1 – θ)/(1 + θ)), where θ is the IMA parameter of the stochastic disturbance and sa the standard deviation of the random shocks {at} ~ N(0, sa). Adjustments, when needed, are made to bring the mean of the output quality variable close to the target value.

8.8 COST MODEL AND PROCESS REGULATION SCHEMES The need for process regulation arises when the process control system is afflicted with disturbances, which cause it to drift off target if no control action is taken to compensate for them (Kramer [1990]). This fact emphasizes the need to model the disturbance by an appropriate noise model. It was mentioned earlier that the EWMA control limits are determined on the basis of the relative costs Ra and the degree of non-stationarity of the process. The latter is taken care of by the noise parameter, which is incorporated in the expression for the control limits L. Thus, both these factors are given due consideration in choosing a particular control regulation scheme. The control limits are determined by the relative costs Ra and by the degree of non-stationarity of the process:

Ra = (Ca/Ct)/(1 – θ)²

where Ca is the cost of adjustment, $a, per unit AI (Kramer [1990]) and

Ct = kt(sa)²

where kt = (reprocessing or processing cost in terms of material cost per hour, C0)/(deviation from target, D)², since as the deviation D from target increases, it will reach a point at which the manufactured material must be discarded or reprocessed at a cost C0. For compensating a non-stationary disturbance, the only cost is that of being off-target (Box and Kramer [1992]). The process is sampled and adjusted regularly at the AIs (sample periods given by simulation), resulting in a substantial cost Ca for adjusting the process. The overall cost C per unit time is

C = (AI)$a + kt(sa)²

where AI is the number of sample periods required to make an adjustment and $a the cost per unit AI. When a digital computer is used to execute the single integral mode control equations, an approximate rule for the lower limit on the adjustment interval (sampling period, T) for integral control is

T > t1,

where t1 is the 'integral time constant' of the controller (Deshpande and Ash [1981]). It is sometimes difficult to judge the costs directly, since different kinds of cost models may be appropriate in various circumstances. The alternative is then to draw up a list of options from which the choice among minimum-cost regulation schemes can be made empirically, by balancing the advantage of longer AIs against the consequential increase in the mean square error about the target value, as illustrated in Tables 4.4 and 4.5 discussed in Section 4.17. Such a table is provided by Box [1991b]. Process regulation schemes depend upon the capability of the 'controlled process' for providing quality products within manufacturing specifications. A moderate increase in the control error deviation (product variability) might be tolerated if this action resulted in savings in sampling and adjustment costs for a process with a high process capability index.
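The cost quantities of this section can be combined in a small helper. This is an illustrative sketch using the symbols defined above (Ra, Ct = kt·sa²); the function names and the numerical example are assumptions, not the book's code:

```python
def off_target_cost(k_t, sigma_a):
    """Ct = kt * sa^2, where kt = C0 / D^2 (reprocessing cost per
    squared deviation from target)."""
    return k_t * sigma_a ** 2

def relative_cost(c_adj, c_off, theta):
    """Relative cost Ra = (Ca/Ct)/(1 - theta)^2, used together with
    the noise parameter to position the EWMA control limits."""
    return (c_adj / c_off) / (1.0 - theta) ** 2

# Hypothetical example: reprocessing cost C0 = 50 per hour, tolerable
# deviation D = 2, unit shock variance, adjustment cost Ca = 5, slow drift.
ct = off_target_cost(k_t=50 / 2 ** 2, sigma_a=1.0)
print(relative_cost(c_adj=5.0, c_off=ct, theta=0.9))
```

A large Ra (cheap off-target cost relative to adjustment, or slow drift) argues for wider limits and longer adjustment intervals.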

8.9 COMPARISON OF CONTROL SCHEMES
8.9.1 Application of the Cost Model: Comparison of Control Schemes
A comparison of different control schemes for fast and slow drifts (Table 8.1) shows the application of the proposed feedback control cost model for the IMA parameter θ values 0.05 and 0.95. Table 8.2 shows that, as the process drift ranges from fast through fairly noisy to slow, the cost function, which is made up of the overall cost (C) of the process regulation scheme, also varies: it is high for fast drifts, decreases for fairly noisy processes around the noise parameter values θ = 0.70 and 0.75, and falls to low values for slow drifts.

Table 8.1 Comparison of Control Schemes for Fast and Slow Drifts

IMA θ | Rate of Drift, r = 1 – θ | Adj. Interval, AI | Control Adj., dxt | Var. of Adj., var dxt | Cost, F
0.05 | 0.95 (Fast) | 10.52 | –0.092 | 0.266 | 0.590
0.95 | 0.05 (Slow) | 8.62 | –0.006 | 0.001 | 0.616

where F = C[(sAI²/sa²) + b(1 – θ)²] + 1/AI = C[1 + b(1 – θ)²] + 1/AI.

Table 8.2 Cost Functions for Fast and Slow Drifts for Dead-Time b = 1.0

IMA θ | Rate of Drift, r = 1 – θ | CESTDDVN | Adj. Interval, AI | Cost, F = C[(sAI²/sa²) + b(1 – θ)²] + 1/AI
0.05 | 0.95 (fast) | 1.039 | 9.80 | 1.903C + 0.102
0.25 | 0.75 (fast) | 1.030 | 20.00 | 1.563C + 0.05
0.70 | 0.30 (noisy) | 1.002 | 1000.00 | 1.09C + 0.001
0.75 | 0.25 (noisy) | 1.000 | 0.0 | 1.063C
0.90 | 0.10 (slow) | 1.010 | 8.26 | 1.01C + 0.121
0.95 | 0.05 (slow) | 1.001 | 2.57 | 1.003C + 0.389

Note: C(sAI²/sa²) is taken as C itself since, from Equation (8.8), sAI²/sa² = 1 and, as mentioned in Section 8.7, the tuning parameter θ is the same as θAI.
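The cost entries of Table 8.2 can be regenerated from the formula in its heading. The sketch below (function name is an assumption) takes sAI²/sa² = 1, as in the table's note, and b = 1:

```python
def cost_coefficients(theta, AI, b=1.0):
    """Cost F = C[(s_AI^2/s_a^2) + b(1 - theta)^2] + 1/AI with
    s_AI^2/s_a^2 = 1; returns (coefficient of C, constant term)."""
    coef = 1.0 + b * (1.0 - theta) ** 2
    const = 1.0 / AI if AI > 0 else 0.0
    return coef, const

# Rows of Table 8.2 (theta, AI):
for theta, ai in [(0.05, 9.80), (0.25, 20.00), (0.70, 1000.00),
                  (0.90, 8.26), (0.95, 2.57)]:
    coef, const = cost_coefficients(theta, ai)
    print(f"theta = {theta:.2f}: F = {coef:.3f}C + {const:.3f}")
```

The printed coefficients match the tabulated cost functions, e.g. 1.903C + 0.102 for θ = 0.05 with AI = 9.80.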


8.10 BENEFITS The possible risk of misrepresenting the weight or volume of food and other packaged products can be avoided. In countries where the Weights and Measures, Food and Drug Administration (FDA) and consumer protection laws are strong and rigorously enforced in practice, this type of cost modelling can be helpful and beneficial to manufacturers in avoiding penalties. A company's name, reputation and public image may be enhanced in national and international markets. The results are reduced cost, increased productivity and improved quality.

8.11 CONCLUSION A brief background and literature review of cost modelling are given in this chapter. It is shown that a minimum cost regulation scheme can be obtained for a process with delay, and an approximate cost model was presented based on the cost function methodology developed in this chapter. A comparison of processes with both fast and slow drifts (Table 8.1) is given, together with a review of some process regulation schemes and a comparison of some control schemes (Table 8.2). The outcome of this cost modelling is useful for process regulation and as an effective cost control tool that can be applied in manufacturing practice. Brief descriptions of some industrial applications are given, as the main objective is to develop a cost model for feedback control adjustment. In the performance assessment of 'basis-weight' control on two different industrial paper machines, minimizing the control variance of the 'basis weight', the weight per unit area of the paper, is a prime economic factor in paper manufacturing. Weight control is directed towards a reduction of basis weight, moisture and fibre variations. Control systems automatically level the distribution of paper stock (the thin pulp suspension) on the wire to optimize weight and moisture profiles. Measured values at each position can be compared with optimum profiles so that control actions (input feedback control adjustments) take place at the same time. Moisture control, which is critical for the thicker paper board and boxboard products, may require a balance between reducing speed and re-wetting the paper surface. Calliper measurements use measurement heads that float above the paper sheet. The calliper and smoothness profiles are controlled by positioning the space between steamrollers, the temperature of the rolls and their speed (Moore [1986]).
Polymer viscosity in the commercial production of polymer resin is controlled in two groups of batch reactors running in parallel asynchronously and sharing common raw materials. The main purpose of this application is to determine the exact amount of catalyst that needs to be added in order to minimize the variation of the resin viscosity about its target level. The key

Cost Modelling for Process and Product Quality Control


quality characteristic of the polymer resin is the viscosity. In this application, changes in viscosity are made by regulating the amount of catalyst added to the resin, and the manufacturing cost is thereby minimized. A moderate change in the amount of catalyst added represents negligible cost or savings when compared with the cost incurred by a batch of off-target material. This process regulation scheme is based on the criterion of minimizing the mean square deviation (MSE) of viscosity from its target value. The adjustment is a linear function of the catalyst added to the previous batch and the previous viscosity measurement. This batch polymerization process application can result in a reduction in viscosity variability and the elimination of material that is outside the specification limits (Vander Wiel et al. [1992]). In the process control of a fluid in a chemical plant, the fluid's absolute viscosity tends to vary because of variations in the components of the fluid, the plant temperature and humidity. In the minimum variance control of the temperature of the oil bath that surrounds the processing chamber, alternating low and high temperature adjustments are required to maintain the fluid viscosity at a stable level. These drastic changes in the bath temperature are taken care of by considering two dynamic parameters of the process that account for the changes in the process characteristics and at the same time provide minimum variance control. Minimum variance controllers and some of their extensions available in the literature are members of a class of optimal predictive controllers. These controllers are derived using a quadratic loss function, which implies that the penalty associated with being off target depends only on the squared magnitude of the error and not on the sign of the deviation from the target. There are situations in which the penalty associated with not being on target depends both on the magnitude and on the sign of the deviation.
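The distinction between sign-independent and sign-dependent penalties can be illustrated with a short sketch (illustrative Python; the function names and the cost constants are hypothetical, not taken from the text):

```python
def quadratic_loss(e, k=1.0):
    """Quadratic loss: the penalty depends only on the squared error magnitude."""
    return k * e * e

def asymmetric_loss(e, k_under=5.0, k_over=1.0):
    """Sign-dependent loss: under-target deviations (e < 0) are penalised
    more heavily than over-target ones, as with under-filled containers."""
    k = k_under if e < 0 else k_over
    return k * e * e
```

Under the quadratic loss, a deviation of −2 and +2 cost the same; under the asymmetric loss they do not, which is exactly the situation described next.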
In food processing industries, for example, it is not desirable to overfill containers, as there is no cost recovery for the overfilling of containers. If the containers are under-filled, severe penalties may be incurred for misrepresenting the product's actual weight or volume. In these cases, it is desirable to evolve a control strategy for reducing variability that considers the relative costs of not being on target. The product may be deemed acceptable if it meets its specification and totally unacceptable if the product specifications are exceeded by a large margin of error, thus incurring a cost for exceeding the specification limits (Harris [1992]). The choice of an optimal system for process regulation depends on costs as well as on the nature of the disturbance that affects the process and its dynamics. For illustration, some alternative process regulation schemes were proposed (Table 8.1) for off-target cost and cost of adjustment that allow optimal schemes to be chosen for different circumstances. In particular, it is shown that feedback control adjustment schemes that use an EWMA chart for process monitoring and control, similar to the ones considered, are optimal, since the control limit lines are decided by relative costs as shown earlier in this chapter. The process adjustment costs can also be minimized by monitoring the process at larger AIs, as shown by the process regulation schemes.
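The EWMA statistic used for such process monitoring can be sketched as follows (a minimal illustration; the smoothing constant and the start value are assumptions, not values from the text):

```python
def ewma_path(observations, lam=0.25, start=0.0):
    """EWMA monitoring statistic: z_t = lam*x_t + (1 - lam)*z_{t-1}.
    The path of z is what would be plotted on an EWMA chart against
    control limits set, as the text notes, by relative costs."""
    z = start
    path = []
    for x in observations:
        z = lam * x + (1.0 - lam) * z
        path.append(z)
    return path
```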


Integrated Statistical and Automatic Process Control

The cost modelling methodology described in this chapter can be extended to different disturbance models with cyclic and jump disturbance characteristics and to other combinations, such as a first-order process model with a first-order disturbance model and a second-order process model with a second-order disturbance model. An optimal process regulation cost control scheme has been designed by modelling the process dynamics and the disturbance at the output. The proposed cost modelling methodology can have practical applications in the continuous process industries.

REFERENCES

Abraham, B. & Box, G.E.P., (1979). Sampling Interval and Feedback Control. Technometrics, 21, 1–8.
Anderson, T.W., (1971). The Statistical Analysis of Time Series (New York: Wiley).
Baxley, R.V., (1991). A simulation study of statistical process control algorithms for drifting processes. In SPC in Manufacturing, edited by J.B. Keats and D.C. Montgomery (Marcel Dekker), 247–297.
Box, G.E.P., Jenkins, G.M. & Reinsel, G.C., (1994). Time Series Analysis — Forecasting and Control, 3rd edn. (Prentice Hall).
Box, G.E.P. & Kramer, T., (1992). Statistical process monitoring and feedback adjustment — a discussion. Technometrics, 34, 251–267.
Elsayed, E.A., Reibeiro, J.L. & Lee, M.K., (1995). Automated process control and quality engineering for processes with damped controllers. International Journal of Production Research, 33, 2923–2932.
English, J.R., (1994). Fixed control limits for continuous flow processes. IEEE Transactions on Engineering Management, 41, 310–314.
English, J.R., Krishnamurthy, M. & Shastry, T., (1991). Quality monitoring of continuous flow processes. Computers and Industrial Engineering, 19, 258–262.
Harris, T.J., (1992). Optimal controllers for non-symmetric and non-quadratic loss functions. Technometrics, 34, 298–306.
Kramer, T., (1990). Process Control from an Economic Point of View — Fixed Monitoring and Adjustment Costs. Technical Report No. 43 (Centre for Quality and Productivity Improvement, University of Wisconsin).
Moore, J.A., (1986). Digital Control Devices, Equipment and Applications (North Carolina: ISA Press).
Shinskey, F.G., (1988). Process Control Systems (McGraw-Hill).
Superville, C. & Adams, B.M., (1994). An evaluation of forecast-based quality control schemes. Communications in Statistics — Computation and Simulation, 23, 645–661.
Tseng, S. & Adams, B.M., (1994). Monitoring auto-correlated processes with an exponentially weighted moving average forecast. Journal of Statistical Computer Simulation, 50, 187–195.
Vander Wiel, S.A., Tucker, W.T., Faltin, F.W. & Doganaksoy, N., (1992). Algorithmic statistical process control: concepts and an application. Technometrics, 34, 286–297.


Venkatesan, G., (1997). A statistical approach to automatic process control. PhD thesis, Victoria University of Technology, Melbourne, Australia.
Venkatesan, G., (2002a). An algorithm for minimum variance control of a dynamic time-delay system to reduce product variability. Computers and Electrical Engineering, 28, 229–239.
Venkatesan, G., (2002b). Discussion and analysis of stochastic feedback control adjustment. Proceedings of the IMechE, Part B: Journal of Engineering Manufacture, 216 (B11), 1429–1442.
Venkatesan, G., (2003). Cost modelling for feedback control adjustment. International Journal of Production Research, 41 (17), 4099–4114.
Yourstone, S.A. & Montgomery, D.C., (1989). A time-series approach to discrete real-time process quality control. Quality and Reliability Engineering International, 5, 305–317.

Suggested Exercises
1. Discuss the algorithm and cost model procedure described in the paper by Abraham, B. & Box, G.E.P., (1979). Sampling Interval and Feedback Control. Technometrics, 21, 1–8.
2. Develop a cost model for a known process and discuss its benefits.

Chapter 9

Applications in Product Quality Control

9.1 INTRODUCTION In the preceding chapters of this monograph, it is shown that it is possible to develop a practical feedback control strategy by applying Automatic Process Control (APC) and Statistical Process Control (SPC) techniques. It is shown also that skillful execution of the control strategy, statistical control procedures and process regulation schemes thus developed results in a final product of the desired quality. A few examples of specifying product quality output can be cited in relation to product quality control. The quality specifications and the tolerances of deviation of the final product from target quality are different for different processes. For example, steam pressure and degrees of superheat are used to define the product quality output of a steam generator plant. The final output of product quality in a plating plant is in terms of the thickness of the plating material. Other examples of specifying product quality output are chemical plants that specify quality in terms of chemical analysis, acidity of the chemical bath and viscosity of flowing liquid. This fact, namely specifying product quality, underlines the need to control manufacturing processes during operation to provide quality products that satisfy the needs of the customer. More importantly, it is necessary to show by practical examples that the developed control strategy is of use and benefit to the scientific and industrial community and society at large. It may be mentioned that a scientific procedure, formulation or algorithm will be of no use if it cannot be put into practice for the benefit of humanity. With this as background, this chapter discusses two practical applications of (i) an 'Integral Controller' and (ii) 'A Model-Based Controller'. It is worth noting that the same stochastic feedback control algorithm (Equation 3.16) and the time series controller developed in Chapter 3 can be used for demonstrating


the two practical applications, taking advantage of specific parts of the algorithm. The controller is called an integral or a model-based controller depending upon the use of the specific characteristics and part of the stochastic feedback control algorithm (Equation 3.16). It is possible to show that the output controlled quality variable can be maintained at the controller set point (i) by applying an integral controller which uses a second-order transfer function-noise model to represent a continuous process that is sampled at discrete time intervals, (ii) by appropriately modelling the process so that its dynamic behaviour is adequately described, and (iii) by understanding thoroughly the dynamic characteristics of the process so that the controller characteristics necessary to maintain the output at the set point can be determined accurately. In Chapter 3, it is mentioned that the algorithm (Equation 3.16) has an integral term and a dead-time compensation term. The integral term is put to use, and an integral controller is developed on the basis of this term. An 'integral' (or 'reset') controller, which produces integral (summation) action only, can be used to control the quality of products manufactured at the output of a production process. The integral controller can be useful and can find applications in the control of flow, sheet and film forming processes. Points (ii) and (iii) are accounted for by developing a controller on the basis of the second-order process model (Equation 3.5) and by appropriately using the simulation results of the stochastic feedback control algorithm (Equation 3.16) to calculate an input adjustment that exactly cancels the output 'deviation' (error) due to the forecast (predicted) disturbance and minimises the variance of the output variable (Venkatesan [1997]).
Several authors ((Morari and Doyle [1986]) and (Smith [1959])) have postulated a feedback control system that is modelled after the process (Shinskey [1988]). The configuration can take many forms, but most can be reduced to that shown in Figure 4.13, page 152, of Shinskey [1988], the configuration of a model-based controller. The reader is advised to refer to pages 150–156 of Shinskey [1988] for a detailed description of the theory of operation of a model-based controller, as it is not possible to go into the model-based controller details in a monograph of this nature. As will be shown later in this chapter, a model-based controller can provide the best possible feedback control of a dead-time process, called 'dead-beat' response. A (feedback control) loop gain of 1.0 gives dead-beat response. It is recalled that in Chapter 4 it is mentioned that the time series controller gain is set to 1.0 to prevent oscillations in the feedback control loop. In addition, care is taken to see that the model-based controller matches the process quite faithfully and also that the process contains a time constant as little as one-tenth the value of the


dead-time, in order to achieve stability with a model dead-time between 0.5 td and 1.1 td (Shinskey [1988]).

9.2 INTEGRAL CONTROLLER 9.2.1 Integral Controller Design The integral controller design is based on the stochastic feedback controller algorithm (Equation 3.16), which provides integral control and the necessary dead-time (time-delay) compensation in the feedback control loop to achieve minimum variance control. The integral controller provides a one-step-ahead forecast of the disturbance over the dead-time period and adequate dead-time compensation (Venkatesan [1997]). The integral controller thus has both predictive and dead-time compensation characteristics that are essential in the control of output product quality in continuous process industries. Integral control is best suited for control of processes having little or no energy storage (capacitance). The integral controller algorithm is of particular importance in that it has a strong appeal to be of practical utility in the manufacturing industry. The integral controller is used in process industries to control flow processes. Use of a pure integral controller is indicated in the control engineering literature for fast processes such as flow controllers, steam turbine generators and control of outlet temperatures from reformer furnaces. The most suitable general-purpose controller for a process with dead-time would ideally be an integral or reset controller. The effect of automatic reset is to give greater controller gain at low frequencies, starting at the corner frequency 1/TR, where the automatic reset time constant TR = K/KR, K being the proportional gain (the ratio of the change in output due to 'proportional control action' to the input change) and KR the integral or reset gain. It is common in process control to refer to the amount of automatic reset by the magnitude of 1/TR expressed as repeats/unit time. Reset action can be expressed as a 'reset rate' in repeats per minute; the smallest number of repeats per minute gives the minimum 'reset' action.
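The reset relations quoted above (TR = K/KR, and reset rate = 1/TR in repeats per unit time) can be written directly (illustrative helper functions; the sample values used below are arbitrary):

```python
def reset_time_constant(K, KR):
    """Automatic reset time constant TR = K/KR, with K the proportional
    gain and KR the integral (reset) gain, as defined in the text."""
    return K / KR

def reset_rate(TR):
    """Reset rate, 1/TR, expressed in repeats per unit time."""
    return 1.0 / TR
```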
When the main function of a controller is to maintain the controlled variable at a constant set point in the presence of disturbances, it is called a 'regulator'. The integral controller described in this book is a type of quality control regulator in that it maintains the output product quality variable at the controller set point. Thus the 'quality control regulator' differs in its direction and approach from the routine function of controlling process variables such as temperature and pressure. It is shown in Chapter 5 that the objectives for 'discrete' (sampled-data) control in controlling some time-delay models can be realised through the use of digital techniques. The dynamic-stochastic model of the process under consideration is a general-purpose three-parameter transfer function-noise model with two dynamic (inertia) parameters, δ1 and δ2. The justification and reasons for choosing this particular


second-order dynamic process model is explained in detail in Chapter 3 of this monograph. The structure of the integral controller is based upon this dynamic-stochastic model of the process. We recall the stochastic feedback control algorithm developed in Chapter 3 to design the integral controller algorithm:

x_t = −[(e_t − δ1 e_{t−1} − δ2 e_{t−2})(1 − Q)] / [PG(1 − δ1 − δ2)] − (1 − Q) x_{t−b},  0 < Q < 1   (Equation 3.16)

where e_t is the control error, δ1 and δ2 are the dynamic parameters, PG is the process gain, b is the dead-time and Q is the IMA parameter.
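A direct transcription of Equation 3.16 can be sketched as follows (a hedged illustration; the argument names are illustrative and no tuning values are implied):

```python
def feedback_adjustment(e, e1, e2, x_b, Q, d1, d2, PG):
    """Input adjustment x_t from Equation 3.16: an integral term built on
    the last three control errors plus a dead-time compensation term on
    the adjustment made b periods earlier (x_b = x_{t-b})."""
    integral_term = -(e - d1 * e1 - d2 * e2) * (1.0 - Q) / (PG * (1.0 - d1 - d2))
    deadtime_term = -(1.0 - Q) * x_b
    return integral_term + deadtime_term
```

Note that as Q approaches 1 both terms vanish, so a highly smoothed disturbance calls for smaller adjustments, consistent with the role of Q as the IMA parameter.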

Figure 9.1 Flow Diagram for Practical Integral Controller (the integral sum INT drives the output; if Output > Max then INT = INT + Max − Output and Output = Max; if Output < Min then INT = INT + Min − Output and Output = Min; the control signal is the limited Output, and INT and c(nT) are saved for the next scan)

Figure 9.2 Digital Control (set points are applied to the controlled process; measurement values, production data and quality analysis feed the process computer)

Figure 9.3 Variation in VarCE with Respect to Q

Figure 9.4 Variation in AI with Respect to Q

In digital control, the computer keeps the process output constant by means of the feedback control loop. The measurement data gathered as the process operates, along with the quality analysis and the production requirements, provide the computer with the data necessary to adjust the controller set point. The computer determines the process settings and the quality of the product to be manufactured. The computer obtains the control error (Figure 9.3) by comparing the set point with the measured value of the output controlled product quality variable, and by means of the controller algorithm the process computer computes the required input adjustment to update the quality index from time to time. The variation in VarCE and AI with respect to Q is shown in Figures 9.3 and 9.4 respectively. Table 9.1 shows the control error variance and the adjustment interval (AI) for the IMA parameter Q = 0.05, 0.25, 0.75 and 0.95. The simulation results indicate that the integral controller algorithm (Equation 3.16) holds potential for reducing the control error variance. It is shown in (Venkatesan [1997]) that these simulation results, for Q = 0.25, 0.50 and 0.75 and for dead-time b = 1.0 with no carry-over effects (dynamics, d = 0), compare favourably with the control error sigma (SE) reported in (Baxley [1991]).
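The kind of simulation summarised in Table 9.1 can be sketched for the simplest case of a pure-delay process (b = 1, no dynamics) under an IMA(1,1) disturbance, where minimum variance control should drive VarCE towards the unit shock variance. This is an illustrative reconstruction under those stated assumptions, not the author's simulation code:

```python
import random

def simulate_varce(Q, n=20000, seed=1):
    """Closed-loop simulation: IMA(1,1) disturbance with parameter Q,
    EWMA one-step forecast, and full compensation applied one period
    later (pure delay, b = 1). Returns the control error variance."""
    rng = random.Random(seed)
    a_prev, N, X, forecast = 0.0, 0.0, 0.0, 0.0
    errors = []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        N += a - Q * a_prev              # IMA(1,1): N_t - N_{t-1} = a_t - Q*a_{t-1}
        a_prev = a
        e = N + X                        # control error seen at the output
        errors.append(e)
        forecast += (1.0 - Q) * (N - forecast)   # EWMA forecast of N_{t+1}
        X = -forecast                    # cancel the forecast disturbance
    mean = sum(errors) / n
    return sum((x - mean) ** 2 for x in errors) / n
```

Under minimum variance control the error reduces to the unforecastable shock a_t, so the simulated variance settles near 1, in line with the VARCE column of Table 9.1.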

Table 9.1 Integral Controller Performance Measures

IMA Parameter Q | δ1 | δ2 | MFREQ | AI | VARCE
0.05 | −1.82 | −0.83 | 0.000 | 0.00 | 1.000
0.05 | −1.00 | −0.25 | 0.102 | 9.80 | 1.079
0.25 | −1.00 | −0.25 | 0.050 | 20.0 | 1.060
0.25 | −0.27 | −0.02 | 0.075 | 13.33 | 1.152
0.75 | 0.00 | 0.00 | 0.000 | 0.00 | 1.000
0.75 | −1.00 | −0.25 | 0.000 | 0.00 | 1.000
0.95 | −1.82 | −0.83 | 0.116 | 8.62 | 1.000
0.95 | −0.10 | 0.00 | 0.187 | 5.34 | 1.007

MFREQ = mean frequency of process adjustment; VARCE = variance of control error (E). Adjustment interval, AI = 1/MFREQ.
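The relation AI = 1/MFREQ can be checked directly against the non-zero entries of Table 9.1:

```python
# (MFREQ, AI) pairs taken from Table 9.1
rows = [(0.102, 9.80), (0.050, 20.0), (0.075, 13.33), (0.116, 8.62), (0.187, 5.34)]
for mfreq, ai in rows:
    # the reciprocal reproduces the tabulated adjustment interval to rounding
    assert abs(1.0 / mfreq - ai) < 0.05, (mfreq, ai)
```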

It can be seen from Table 9.1 that the integral controller gives minimum control error variance with minimal adjustments. The controller gain (CG) is set at the value of 1.0 in the simulation, since the maximum value of controller gain reported in the literature for the stable operation of a pure delay process is one, as mentioned earlier in Section 9.1. The integral controller parameters are given in Table 9.2 for a second-order model with dead-time. The table shows the mean and standard deviation of the control error (E), the adjustment (xt) and the frequency of adjustment (MFREQ). From Table 9.2, it is possible to read off the process gain for different dynamic process conditions and random shocks that affect the process for a given mean process input adjustment (xt). 9.2.4 Limitations of Integral Controller in Controlling Product Quality (i) Role of Integral Controller in Minimising Integral Error

Under integral control, the feedback control (closed) loop oscillates with uniform amplitude at the period where the system gain is unity. The time constant (I) of the integral controller affects only the period of oscillation, which increases with damping. Damping can be achieved by reducing the closed-loop gain and by setting the integral time equal to the process dead-time. There is a distinct benefit in minimising the integral error, (e_t − δ1 e_{t−1} − δ2 e_{t−2}), which helps in determining the period and the integral time required for damping the feedback control loop. The integral error is computed from the stochastic feedback controller algorithm (Equation 3.16). The period of oscillation of the feedback control loop falls to about half its original value for a controller


time constant equal to the dead-time. More phase lag results from increasing the integral controller time constant (I) above its undamped value (Iu), which further extends the period of oscillation of the feedback control loop. This situation is avoided by making the integral controller contribute less phase lag through raising the integral time constant (Shinskey [1988]).

Table 9.2 Integral Controller Parameters for Second-order Model with Dead-time, b = 1.0

Variable | Q | δ1 | δ2 | Mean | Std.Devn. | PG | CG
Control error (E) | 0.05 | −1.82 | −0.83 | −0.012 | 1.000 | 1.0 | 1.0
Adjustment (xt) | 0.05 | −1.82 | −0.83 | 0.000 | 0.000 | 1.0 | 1.0
Frequency | 0.05 | −1.82 | −0.83 | 0.000 | 0.000 | 1.0 | 1.0
Control error (E) | 0.25 | −1.00 | −0.25 | 0.104 | 1.030 | 0.44 | 1.0
Adjustment (xt) | 0.25 | −1.00 | −0.25 | −0.027 | 0.248 | 0.44 | 1.0
Frequency | 0.25 | −1.00 | −0.25 | 0.050 | 0.217 | 0.44 | 1.0
Control error (E) | 0.75 | −0.27 | −0.02 | 0.002 | 1.000 | 0.78 | 1.0
Adjustment (xt) | 0.75 | −0.27 | −0.02 | 0.000 | 0.004 | 0.78 | 1.0
Frequency | 0.75 | −0.27 | −0.02 | 0.000 | 0.010 | 0.78 | 1.0
Control error (E) | 0.95 | −0.10 | 0.00 | 0.108 | 1.004 | 0.91 | 1.0
Adjustment (xt) | 0.95 | −0.10 | 0.00 | −0.001 | 0.022 | 0.91 | 1.0
Frequency | 0.95 | −0.10 | 0.00 | 0.187 | 0.390 | 0.91 | 1.0

(Q, δ1 and δ2 are the model parameters; PG and CG are the process and controller gains.)
One method of minimising the integral error is to sum the deviation (error) over time and equate it with the operating cost. This principle can be applied to control the quality of a chemical that flows into a storage tank. Quality control can be achieved by keeping the integral error low and by maintaining the output quality of the product variable close to the controller set point. In product-quality loops, integral error can be significant and may represent excessive operating cost such as product give-away. Lag-dominant dynamics are characteristic of most


of the important plant loops such as product quality. These product quality loops are similar to the second-order dynamic model with two time constants considered in this book. These process loops have the common characteristic in that there is a linear variation of the integral error with time. (ii) Discrete Sampling and Integral Controller Performance

Sampling produces a phase lag in digital controllers and improves process control. Discrete sampling makes it possible for an integral controller to approach its best performance when the sample interval or 'scan period' is set equal to the dead-time. The scan period is the interval between executions of a digital controller operating intermittently at regular intervals. An integral controller can be made robust against increases in dead-time by setting the scan period equal to the process dead-time. The integral time constant of the controller should be set to the product of the process gain and the sampling period in order to make the controller robust against variations in dead-time.

Figure 9.5 Practical Realisation of an Integral Controller Algorithm (block diagram: the desired value and the measured value of the process output feed the I-action integral sum INT, with a manual/auto switch and a MAX./MIN. limiter ahead of the actuator output)
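The two tuning rules just stated (scan period set equal to the dead-time, and integral time set to the product of process gain and sampling period) can be captured in a small helper (illustrative names, not from the text):

```python
def robust_integral_settings(process_gain, dead_time):
    """Tuning rules quoted in the text: scan period = dead-time;
    integral time constant = process gain * sampling period."""
    scan_period = dead_time
    integral_time = process_gain * scan_period
    return scan_period, integral_time
```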

(iii) Limitations in Integral Controller Performance

Integral control is limited by 'output wind-up', which arises from saturation of the integral action as a result of prolonged integration of the output error. This causes the output product quality variable to fall outside the final control operating limits and results in overshoot of the output controlled variable before the controller output returns to its normal range. Overshoot can be avoided by setting the controller time constant higher than that required for load regulation, and can also be minimised by limiting the rate of set-point changes (Shinskey [1988]) using 'conditional integration' as shown in Figure 9.5. Final selection of the control mode depends on the amount of deviation that can be tolerated in view of the desired product quality. The minimum number of controller functions should be used for reasons of economy. The purpose of inserting an 'anti-windup' feature for the integral mode is to place an upper limit on the integral sum of the feedback control error, detecting when integral control saturates and terminating the summation until the input manipulated variable returns to within the normal operating range for that variable.
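The limiter logic of Figure 9.1 amounts to one scan of an integral controller with anti-windup, which can be sketched as follows (illustrative Python; the gain, scan period and limit values are assumptions, not values from the text):

```python
def integral_scan(INT, error, KR=1.0, T=1.0, out_min=-1.0, out_max=1.0):
    """One scan: accumulate the integral sum, clamp the output to the
    actuator limits, and bleed the sum back so it cannot wind up."""
    INT += KR * T * error
    output = INT
    if output > out_max:
        INT += out_max - output   # anti-windup: cap the integral sum
        output = out_max
    elif output < out_min:
        INT += out_min - output
        output = out_min
    return INT, output
```

Because the stored sum is pulled back to the limit whenever the output saturates, the controller resumes normal action as soon as the error reverses, instead of overshooting while a wound-up sum unwinds.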

(iv) Dead-Time Compensator

A minimum variance controller for a dead-time process for which the number of whole periods of delay is 2 is described in (Harris [1992]). The minimum


variance controller was found to be identical upon setting the value of the discrete time constant of the closed-loop process equal to Q, the IMA parameter in the disturbance model. Based on this observation, Q is used as the closed-loop time constant and on-line tuning process parameter in the integral controller algorithm (Equation 3.16) for dead-time compensation. The dead-time compensator term in the integral controller algorithm is a direct result of the minimum variance control strategy (Baxley [1991]). The inclusion of the dead-time compensation term (whose values range from 0 to 1) in the integral controller algorithm thus results in a minimum variance control strategy, as explained earlier in Chapter 4. 9.2.5 Integral Controller Applications The integral controller performance measures are obtained by simulating the controller algorithm (Equation 3.16) to control output product variance. An integral controller can be made robust by sampling a continuous process at slow and infrequent intervals; this is an advantage of using discrete model-based controllers. It is mentioned in Section 9.1 that a model-based controller provides good control of a dead-time process, called 'dead-beat' response, with a closed-loop gain of 1.0. The dead-beat control strategy is unique to discrete (sampled-data) systems. An application of model-based control of a sugarcane-crushing mill is described in (Bitmead [1994]). In that application, better-performing control of the sugarcane milling process results in improved extraction of the sugar-bearing juice from crushing the sugarcane fibre (called the 'bagasse'). Another application of model-based control to reduce the product variability at the output is given in (Venkatesan [1998]).
This integral controller application to control output product quality can yield large savings in the manufacture of high-value, large-volume products by reducing output product variability and minimising variance at the output. The output product quality of mill products changes constantly in accordance with customer requirements in the reheat facilities for steel and aluminium preceding rolling operations. Billets of different size and composition require different reheat cycles and close timing to make them ready for rolling, but not so early that energy must be used to maintain their condition. This is an energy-intensive operation in which a 'mathematical model' of the furnace continuously computes the thermal inertia of the furnace and of the work, to optimise combustion and cause each piece of work to leave the furnace at conditions optimised for production and quality. The integral controller also finds application in areas such as control of dye concentration, which is described in Buckley [1960]. Similar to the integral controller application developed in this chapter, the principles of system identification (process modelling), controller synthesis and evaluation by simulation are reported in (Page [1994]), which describes the implementation of model-predictive control of a semi-autogenous (SAG) grinding mill. Another integral controller application is in the


control of a multi-input/output process through a parallel computer architecture, described in (Venkatesan et al. [1998]), and an integral controller application in quality control in (Venkatesan et al. [1999]). Other applications of the integral controller are not discussed in detail, since this chapter mainly analyses its performance and discusses its application to product quality control. Feedback control stability has been considered in the integral controller application, along with an appropriate dynamic process model (Equation 3.5) and the skillful use of the popular and time-proven stochastic time series model for the disturbance (Box and Jenkins [1970, 1976]). This has resulted in an efficient integral controller that is not very sensitive to the form of the dynamic process model or to small perturbations in its parameters. Such an integral controller can be a powerful tool to control product quality by minimising its variance at the output and by maintaining the mean of the product quality at or near the controller set point. The integral controller in similar quality control applications will allow a rapid response to process disturbances without overcompensation. Thus, it can be concluded that an integral or reset controller is the most suitable general-purpose controller for controlling a process with dead-time, and this can be one of the reasons that the integral controller is preferred in continuous process industries, especially to control flow processes.

9.3 CHARACTERISTICS AND REQUIREMENTS FOR CONTROLLER APPLICATIONS In this section, we explain briefly the terms performance and robustness as applied to a process controller. The characteristics and requirements of a controller are also given, as is a brief description of the working of a direct digital (discrete) feedback controller, along with the means to obtain damping in the feedback control loop. These explanations are given to complete the discussion on the economic benefits of controller application to product quality. 9.3.1 Dynamic Optimization Various management objectives are described by profit maximization, loss minimization or by optimization or minimization of other functions. Process dynamics are ignored in the 'steady state' optimization of industrial processes. In these processes, a reference value (set point) is set in the various control systems to hold the process variables within the desired limits. 'Dynamic optimization' considers the dynamic behaviour of the process by manipulating the inputs during non-stationary conditions. By considering a second-order dynamic model and simulating the stochastic feedback control algorithm (3.16) that minimized the variance at the output, dynamic optimization under non-stationary conditions has been achieved in the integral controller application described in this monograph.


9.3.2 Application Requirements for Controller An industrial process control system, regardless of the control structure or algorithm used, must be ‘robust’, that is, it must function reasonably despite process and/or modelling errors over the full range of process operations. The controller required for a particular application depends upon the objectives of the process control system and the dynamic process model (transfer function). The preferred controller for a specific task depends on the type of process to be controlled and the relative importance of ‘performance’ and ‘robustness’ (explained subsequently). The response of the controller and the system depend upon the process disturbances. A detailed model of any industrial process is likely to be complicated. Therefore, the control system engineer faces a situation in which a control system must be formulated (designed) on the basis of a simplified description (model). In this regard, it is often better to work with an ‘acceptable’ solution than to be unable to find the ‘perfect solution’.

9.4 FEEDBACK (PROCESS) CONTROL SYSTEMS

The performance of a feedback control system employing mechanical, hydraulic or pneumatic elements is measured quantitatively by the ratio of the 'root mean square' (rms) error to the 'rms input (signal)'. In many controller applications, it is essential that the desired output be obtained instantaneously in time. In a control problem, an apparently satisfactory (linear) phase shift (explained in Chapter 3) may be the cause of excessive error. The purpose of feedback compensation is to alter the performance of the process control system so that the resulting error will fall within specifications. The requirement that the desired output (signal) be instantaneously equal to the input (signal) imposes a severe specification on the control system design.

9.5 SAMPLED-DATA FEEDBACK CONTROL SYSTEMS AND DIGITAL CONTROLLERS

9.5.1 Sampled-Data Feedback Control Systems

A discrete ('sampled-data') system is defined in Chapter 5 as one of those 'purely digital systems where the input and the output are described only at the sampling instants' (Marshall [1979]). A sampled-data (discrete) system is a control system in which the data appear at one or more points as a sequence of numbers. An analogue system is one in which the data are everywhere known or specified at all instants of time and the variables are continuous functions of time.


If the control system includes elements which feed the output (dependent variables) back to the input (independent variables) and if a sampling operation is included, the system is referred to as a 'sampled-data (discrete) feedback control system'.

9.5.2 Digital Controllers

Some of the salient features of a sampled-data system are discussed in this section. The digital controller is a computer that accepts a sequence of numbers at its input and processes it in accordance with some logical programme (usually linearly) to produce an output. The output (number) sequence is reconstructed into a command signal, and the resultant signal is applied to the controlled element. By properly designing the (linear) computer programme of the digital controller, the system can be stabilized and its dynamic performance made to conform to rigid specifications. The controller can be implemented by means of digital computer techniques, as a mixture of analogue and digital components, or wholly by an analogue computer. If the process programmed in the computer is linear, it can be expressed in terms of a recursion formula (or equation), which is transformed into a generating function. The sampling periods are equal for all the samplers in a (linear) sampled-data dynamical system. Sampled-data (discrete) systems are often subjected to random disturbances. Any compensating device preceded by (synchronous) samplers is referred to as a digital controller. A process control system requires the storage of only a finite number of input and output samples. A pure regulator (controller) system has a fixed reference or set point, and the only dynamic effect is the result of disturbances. In regulator-system design, the input in the form of disturbances has an influence on the controller design.
In a stable digital system, the reference input is assumed to be a constant and if the system is linear, only that component of the output caused by the disturbance need be considered since it can be superimposed on any other outputs produced by other sources. One of the advantages of digital controllers is that they can be applied to systems with large time constants. A control programme from the digital computer could be used to compute commands to the plant at sampling points. The exact form of the digital controller depends upon the required application.
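The recursion-formula view of a linear digital controller can be illustrated with a short sketch. The class below is an assumption for illustration, not an algorithm from this monograph; it shows how such a controller stores only a finite number of past input (error) and output samples.

```python
from collections import deque

# Hypothetical sketch of a linear digital controller expressed as a recursion
# u_k = a1*u_{k-1} + ... + b0*e_k + b1*e_{k-1} + ...; only a finite number of
# input and output samples need to be stored, as noted in the text.
class DigitalController:
    def __init__(self, a, b):
        self.a, self.b = a, b                                # recursion coefficients
        self.u_hist = deque([0.0] * len(a), maxlen=len(a))   # past outputs u_{k-1}, ...
        self.e_hist = deque([0.0] * len(b), maxlen=len(b))   # current and past errors

    def update(self, e):
        self.e_hist.appendleft(e)                            # newest error first
        u = (sum(ai * ui for ai, ui in zip(self.a, self.u_hist))
             + sum(bj * ej for bj, ej in zip(self.b, self.e_hist)))
        self.u_hist.appendleft(u)
        return u

# choosing a = [1.0], b = [2.2, -2.0] gives the familiar discrete
# proportional-plus-integral law u_k = u_{k-1} + 2.2*e_k - 2.0*e_{k-1}
pi = DigitalController(a=[1.0], b=[2.2, -2.0])
```

With these illustrative coefficients, a sustained error makes the output ramp, which is the integral action at work.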

9.6 PRINCIPLE OF OPERATION OF DISCRETE DIGITAL FEEDBACK CONTROLLERS

With a basic idea of sampled-data (discrete) systems and digital controllers as background, the principle of operation of discrete digital feedback controllers is explained briefly in this section.


The characteristics of the discrete (sampled-data) digital feedback controller loop determine how well the process can be controlled and what controller settings are required to produce minimum output variance of the product. The conditions of uniform oscillation of a feedback control loop serve as a convenient reference on which rules for controller adjustment can be based. It is known that the tendency towards oscillation is one of the characteristics of a feedback control loop. So, the feedback control loop as well as the feedback controller should be guarded against the occurrence of over- and under-damped oscillations, which may lead to an unstable feedback control loop.

9.7 MEANS TO ACHIEVE DAMPING IN A FEEDBACK CONTROL LOOP

The integral action term in the control adjustment Equation (3.16) determines the 'damping' of the feedback control loop. Damping is the 'progressive reduction or suppression of oscillation in a device or system' (Murrill [1981]). The period and the integral time required for damping are established by the process characteristics. Minimizing the integral error ensures damping of the feedback control loop. It is shown in Section 5.5 that the minimum integral absolute error (IAE) for (integral) control of dead-time is achieved by setting the integral time constant of the feedback control loop to I = 1.6 PG Td, where Td is the time-delay (dead-time) (Shinskey [1988]). Under integral (also known as 'floating') control, the feedback control loop tends to oscillate with uniform amplitude at the period where the (pure) steady-state gain is unity. For an integral time constant that is equal to the dead-time, the period of oscillation changes to about half of the original period. Decreases in the integral time constant contribute more phase lag to the feedback control loop and extend its period of oscillation. The integral time constant must therefore be raised so that the controller contributes less (phase) lag (Shinskey [1988]). Since the controller algorithm also plays a part in the stable functioning of a feedback control loop, some of the requirements of a sampled-data controller algorithm are discussed in the next section.
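The tuning rule quoted above can be put into a small sketch. The helper names are illustrative assumptions; the phase-lag expression is the standard one for the lag added by the integral term of a proportional-plus-integral controller, included only to show that raising the integral time constant reduces the lag the controller contributes.

```python
import math

# IAE-minimizing integral time for integral control of dead-time, as quoted
# from Shinskey [1988]: I = 1.6 * PG * Td (symbols as in the text, with Td the
# dead-time); an illustrative helper, assuming consistent time units.
def integral_time(PG, Td):
    return 1.6 * PG * Td

# phase lag (degrees) added by the integral term of a PI controller at
# oscillation period P: arctan(P / (2*pi*I)) -- a larger I contributes less lag
def integral_phase_lag_deg(P, I):
    return math.degrees(math.atan(P / (2 * math.pi * I)))
```

For example, `integral_time(PG=2.0, Td=5.0)` gives an integral time of 16 in the same time units as the dead-time, and the phase-lag helper confirms that increasing I lowers the lag the controller adds to the loop.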

9.8 REQUIREMENTS OF A SAMPLED-DATA CONTROLLER ALGORITHM

(i) The first and foremost requirement of a controller is that it must be able to maintain the desired output variable at a given set point.
(ii) The set point changes should be fast and smooth.
(iii) The algorithm must lead to stable overall control and converge fast on the desired steady state, thus ensuring asymptotic stability and satisfactory performance for different types of disturbances that could arise.


(iv) The controller should be designable with a minimum of information with respect to the nature of the inputs and the structure of the system. This is because optimal algorithms, by their nature, tend to be (very) sensitive to both the structure and the exact values of the parameters of the model describing the process.
(v) The controller must be reasonably insensitive to changes in system limits. This means that it must be stable and perform well over a reasonable range of system parameters.
(vi) It is preferable to avoid excessive control actions.

(Palmor and Shinnar [1979])

The performance of the computer-control algorithm depends not only on the tuning constants but also on the sampling period (adjustment interval). It is reported in the control engineering literature that, although a second-order (conventional) control system is stable for all values of the (controller) gain, computer control of the same system can give an unstable response for some specific combinations of the gain and the sampling period. This is one of the main reasons for considering the critically damped behaviour of the second-order model in Chapter 4.
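This effect can be demonstrated numerically. The sketch below is an illustrative assumption, not the monograph's model: the classic double-lag plant 1/(s(s+1)) under proportional feedback, whose continuous-time loop is stable for every positive gain, is discretized with a zero-order hold and checked for closed-loop pole magnitudes. Some gain/sampling-period combinations are unstable.

```python
import math, cmath

def discrete_closedloop_stable(K, T):
    """ZOH-discretized plant 1/(s(s+1)) under proportional gain K, sampled
    every T seconds: stable iff all closed-loop poles lie inside the unit
    circle (an assumed textbook example, not the monograph's process)."""
    e = math.exp(-T)
    a = T - 1 + e                      # numerator coefficients of the
    b = 1 - e - T * e                  # zero-order-hold equivalent
    # closed-loop characteristic polynomial: z^2 + c1*z + c0 = 0
    c1 = K * a - (1 + e)
    c0 = e + K * b
    disc = cmath.sqrt(c1 * c1 - 4 * c0)
    roots = [(-c1 + disc) / 2, (-c1 - disc) / 2]
    return max(abs(r) for r in roots) < 1.0

# the continuous loop is stable for every K > 0, but the sampled loop is not:
print(discrete_closedloop_stable(K=1.0, T=1.0))  # True  (stable combination)
print(discrete_closedloop_stable(K=4.0, T=1.0))  # False (unstable gain/period pair)
```

Shrinking the sampling period recovers the continuous-time behaviour; it is the combination of gain and period that destabilizes the sampled loop.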

9.9 CONTROLLER PERFORMANCE AND LIMITATIONS

9.9.1 Performance

'Performance', an important characteristic of a controller, refers to how closely the unit holds the output controlled variable to the set point in the face of disturbances. The nature of the process, particularly its dead-time, determines the best performance achievable by a feedback controller. One way of improving control performance is to reduce dead-time in the closed loop. Performance depends not only on the controller but on the 'tuning' as well. Tuning means adjusting the controller parameters by small quantities so that stability or equilibrium is restored in the feedback control system. Tuning rules are affected by the process being controlled. When an input change is encountered, the output controlled variable moves away from the set point along a trajectory (path) determined by the disturbance and the lags in the path between the input and the output controlled variable. Concurrently, the controller output changes according to its algorithm and tuning parameters. However, the controlled variable cannot respond to the controller until the dead-time in the feedback control loop lapses. Once a controller is tuned for a given performance, a change in the process gain or dead-time could bring the loop to the limit of its stability (that is, to an 'undamped' oscillation). The smallest change in any process parameter capable of bringing the loop to this point is the 'robustness index'. The robustness of Dahlin's controller (Dahlin [1968]) can be improved by setting its dead-time relative to machine speed.


Sampling, it is believed, produces a phase lag equal to half of that produced by the same dead-time in digital controllers. The performance of a controller is usually referred to by a 'performance index'. Desborough and Harris [1992] introduced a 'normalized performance index' to characterize the performance of feedback control schemes; the method outlined in their paper estimates the index from routine closed-loop process data using linear regression methods.

9.9.2 Robustness

Robustness of a process control loop is that quality which keeps its closed loop (feedback) stable following variations in process parameters. A 'robust' control loop is one that performs well even in the presence of moderate changes in process parameters. Changes in dead-time and gain can drive a controller towards instability. It is known to control engineers and process control practitioners that sustained undamped oscillations in a control loop represent the limit of stability. These can be brought about by increasing the steady-state gain of a process, which reduces the damping of the control loop. The increase in gain required to bring the loop to the limit of stability is a measure of robustness. The performance of a feedback controller in responding to load changes comes at the cost of reduced robustness. Processes having lags (like inertia) form more robust feedback control loops with proportional, integral and derivative (PID) controllers. A process lag, however, requires derivative control action to attain a level of performance that returns the feedback control loop to essentially the same level of robustness as a (pure) dead-time process. There is a trade-off between robustness and performance. The process dead-time determines the best performance achievable by a feedback controller.
For a feedback controller that is not robust enough but provides an acceptable performance, the recourse is to adopt 'self-tuning', adapting the controller settings to follow variations in the process parameters in order to maintain high performance. An advantage of discrete (sampled-data) model-based controllers (Section 9.11) is the increased robustness that results from sampling the process at slower intervals. Robustness can be improved by detuning a controller, but its performance is decreased at the same time. Performance and robustness are inversely related. Lowering the controller gain and slower sampling improve robustness at the cost of performance. The highest performance also brings with it the lowest robustness. So, high-performance controllers should be capable of on-line self-tuning; that is, they should be adaptive in nature.

9.9.3 Performance Limitations

An ('optimal') controller design depends


(i) on defining the input and disturbance to the system in statistical terms and (ii) on a performance criterion such as the mean square error (MSE). The mean square error for stochastic signals is an example of a quality measure for the performance of a process control system. The mean square error is identically equal to the value of the autocorrelation function of the error at zero 'shift' ('a parallel change in slope of the input-output curve'). An attempt to predict the future value of a stochastic (signal) by means of a (linear) system leads to a definite performance limitation in the sense that the mean square error cannot be made (indefinitely) small. Even when the system is adjusted for minimum mean square error, there is a lower limit below which this error cannot be reduced. Imposing a requirement for prediction on a linear system, which operates only on present and past information, limits the performance that can be achieved. Disturbance (noise) makes it impossible for a linear system to establish equality between the ideal output and the actual output. A time-delay (dead-time) must be regarded as a fundamental factor limiting the performance that may be achieved with a linear system. In many industrial processes, it is difficult to properly tune standard regulators (controllers). It becomes necessary to use sophisticated controllers when long delays or time constants are present, especially in complex systems and when minimum output variance conditions are imposed.
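The identity between the mean square error and the error autocorrelation at zero shift can be checked numerically. The simulated zero-mean error sequence below is purely illustrative.

```python
import random

random.seed(1)
# a zero-mean stochastic error sequence (illustrative simulated noise)
e = [random.gauss(0.0, 1.0) for _ in range(5000)]

def autocov(x, k):
    """Sample autocovariance of the sequence x at lag (shift) k."""
    n = len(x)
    return sum(x[i] * x[i + k] for i in range(n - k)) / n

mse = sum(v * v for v in e) / len(e)   # mean square error of the sequence
# at zero shift the autocovariance is identically the mean square error
assert abs(autocov(e, 0) - mse) < 1e-12
```

The nonzero-shift values of `autocov` describe how the error is correlated in time, which is what a linear predictor can exploit; the zero-shift value is the MSE floor the text refers to.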

9.10 COMPUTER CONTROL IMPLEMENTATION ON A PROCESS LOOP

A brief description of implementing computer control on a typical process loop is given in this section. Consider the implementation of computer control on a single-loop process control system. The single-loop system is a quality control loop currently working in (conventional) control mode. The controller of this loop is of the integral type. It is desired to place this loop under computer control, utilizing the discrete equivalent of the integral controller. A suitable measuring device, strategically placed in the process control system, measures some required characteristic of the output quality process variable and converts it into an (electrical) signal. The integral controller compares this signal with the desired value, the set point. If an error exists, the controller outputs a feedback signal which manipulates an input adjustment to eliminate the error. In computer control, the electrical signal is transmitted to the control computer terminals, which represent one of the analogue-to-digital (A/D) converter channels. The computer hardware design is such that it can access the discrete output of the A/D converter. The discrete output from the computer is converted to a continuous signal, on demand, by one of the digital-to-analogue (D/A) converter channels. The D/A output is available at the analogue output terminals of the computer. The control computer is instructed to sample the A/D channel every T seconds, T being the sampling period. The computer programme operates on this measurement,


representing the value of the measured variable at the sampling instant. For this purpose, the computer uses the discrete equivalent of the controller adjustment equation and computes the desired control-algorithm output. The computer is then instructed to forward this output to the D/A converter and to the compensating (correcting) device. This procedure is repeated every T seconds to achieve closed-loop computer control. The benefit of computer application to process control is the facility to implement control strategies that might not be practical with analogue hardware. Development of such strategies requires the analysis of computer-control loops to determine their stability characteristics.
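The sampling cycle just described can be sketched in a few lines. The first-order process model, the gains and the sampling period below are illustrative assumptions, not the loop from this monograph; the point is the repeated read-compute-write cycle performed every T seconds.

```python
import math

# Hypothetical sketch of the closed-loop computer-control cycle: every T
# seconds, read the A/D sample, apply a discrete integral-control adjustment,
# and write the result to the D/A output. Process model and tuning values are
# assumptions for illustration only.
def run_loop(setpoint=1.0, T=1.0, Ki=0.4, tau=5.0, Kp=2.0, steps=200):
    y, u = 0.0, 0.0                       # measured variable, controller output
    a = math.exp(-T / tau)                # first-order process pole over one period
    for _ in range(steps):
        y = a * y + (1 - a) * Kp * u      # process response between samples
        error = setpoint - y              # compare the A/D sample with the set point
        u += Ki * error                   # discrete integral (floating) adjustment
        # u would be written to the D/A converter here
    return y

print(round(run_loop(), 4))   # 1.0 -- settles at the set point, no offset
```

The integral adjustment accumulates the error each sampling period, which is why the loop eliminates steady-state offset; too large a `Ki` or `T` for the assumed process would destabilize it, as discussed in Section 9.8.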

9.11 APPLICATION OF A MODEL-BASED CONTROLLER TO CONTROL OUTPUT PRODUCT QUALITY

9.11.1 Model-Based Controllers

The question arises whether integral control is the best control mode in application to product quality control. Easily controllable processes can justify the use of integral control. It is essential, in practice, to have integral action to control difficult processes. The integral control mode is, in actuality, a time constant just like the time constants in a process, but it bears no resemblance to the dead-time that exists in the process or plant. A feedback control system that is modelled after the process, called a 'model-based' controller, has the feature that critical damping can be achieved with a loop gain of 1.0. This is accomplished by the simultaneous feedback of the controller output through the process and the controller. If both signals arrive at the integral (summing) junction downstream (of the integral block or mode) with the same amplitude at the same time, they will cancel, avoiding further changes in output. This avoids oscillation in spite of a loop gain of 1.0. The controller output follows the load with open-loop response. Both set point and load responses are ideal for a (pure) dead-time process. It is mentioned in Section 9.1 that a model-based controller provides good feedback control of a dead-time process, called 'dead-beat' response (a loop gain of 1.0). Model-based controllers are superior to PID controllers in performance for processes in which dead-time has a dominating effect on the process characteristics and behaviour when the process is subject to different loads and disturbances. Model-based controllers perform well on a dead-time process, but do not possess much robustness. Section 9.2.5 describes two applications of the model-based controller principle: an integral controller for the control of a sugarcane crushing mill, and Buckley's [1960] integral controller for the control of dye concentration.
The reader will appreciate that the time series controller design explained in this monograph is based on a second-order process model (Equation 3.5) with two dynamic parameters and two time constants. This itself provides a good example of a model-based controller. For more information on "Model-Based Control of Product Quality Output", refer to Venkatesan [1998].
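A minimal numeric sketch of the 'dead-beat' behaviour described above, assuming a pure dead-time process y_k = u_{k-d} with unit gain (an illustrative model, not Equation 3.5): feeding the controller output back through the delay with loop gain 1.0 corrects a set point or an unmeasured load change in exactly one dead time, with no oscillation.

```python
from collections import deque

# Hypothetical dead-beat control of a pure dead-time process y_k = u_{k-d}:
# the adjustment u_k = u_{k-d} + (setpoint - y_k) feeds the controller output
# back through the delay (the model of the process) with loop gain 1.0.
def simulate_deadbeat(setpoint=1.0, d=5, steps=30):
    u = deque([0.0] * d, maxlen=d)          # transport delay line (the dead time)
    trace = []
    for k in range(steps):
        load = 0.5 if k >= 15 else 0.0      # unmeasured load disturbance at k = 15
        y = u[0] + load                     # process output: input delayed d samples
        u.append(u[0] + (setpoint - y))     # model-based adjustment, loop gain 1.0
        trace.append(y)
    return trace

out = simulate_deadbeat()
# output reaches the set point one dead time after start-up, and returns to it
# one dead time after the load hits -- with no overshoot or oscillation
```

Because the correction applied now cancels exactly with its own delayed effect, no further changes in output occur once the error is removed, which is the signal cancellation at the summing junction described in the text.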

9.12 CONCLUSION

This chapter explains the objective of applying engineering and statistical techniques, drawn from the two different disciplines at the interface of the two process control methodologies, to find a solution to the product quality control problem. The integral and model-based controller applications of the process model to product quality are also given in this chapter.

REFERENCES

Astrom, K.J. & Wittenmark, B., (1984). Computer Controlled Systems: Theory and Design, Prentice-Hall, Englewood Cliffs, N.J.
Baxley, Robert V., (1991). "A Simulation Study of Statistical Process Control Algorithms for Drifting Processes", SPC in Manufacturing, Marcel Dekker, Inc., New York and Basel.
Bitmead, Bob, (1994). "Model-Based Control of a Sugarcane Crushing Mill", Guest Editorial, Process & Control Engineering, PACE, vol. 47, No. 10, October, pp. 65-66.
Box, G.E.P. & Jenkins, G.M., (1970, 1976). Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco.
Buckley, P.S., (1960). "Automatic Control of Processes with Dead-time", Proceedings of the IFAC World Congress, Moscow, pp. 33-40.
Desborough, L.D. & Harris, T.J., (1992). "Performance Assessment for Univariate Feedback Control", The Canadian Journal of Chemical Engineering, vol. 70, pp. 1186-1197.
Dahlin, E.B., (1968). "Design and Choosing Digital Controllers", Instrum. Control Syst., 4, 77.
Harris, T.J., (1992). "Optimal Controllers for Non-Symmetric and Non-Quadratic Loss Functions", Technometrics, vol. 34, No. 3, pp. 298-306.
Kramer, T., (1990). "Process Control from an Economic Point of View: Industrial Process Control", Technical Report No. 43, Centre for Quality and Productivity Improvement, University of Wisconsin, USA.
Marshall, J.E., (1979). Control of Time-delay Systems, Peter Peregrinus Ltd.
Morari, M. & Doyle, J.C., (1986). "A Unifying Framework for Control System Design", Chemical Process Control III, Elsevier, New York, pp. 5-51.
Murrill, Paul W., (1981). Fundamentals of Process Control Theory, Instrument Society of America, Research Triangle Park, North Carolina, U.S.A.


Page, Graham L.E., (1994). "Experiences with Model-Predictive Control of a Semi-Autogenous Grinding Mill", Process and Control Engineering, PACE, vol. 47, No. 12, December, pp. 49-51.
Palmor, Z.J. & Shinnar, R., (1979). "Design of Sampled Data Controllers", Industrial and Engineering Chemistry Process Design and Development, vol. 18, No. 1, pp. 8-30.
Shinskey, F.G., (1988). Process Control Systems, McGraw-Hill Book Company, New York.
Smith, O.J.M., (1959). "A Controller to Overcome Dead-time", ISA, vol. 2, pp. 28-33.
Venkatesan, G., (1997). "A Statistical Approach to Automatic Process Control (Regulation Schemes)", PhD Thesis, Victoria University of Technology, Melbourne.
Venkatesan, G., (1998). "Model-Based Control of Product Quality Output", Proceedings of the Sixth International Applied Statistics in Industry Conference, Melbourne, December, pp. 147-154.
Venkatesan, G., Abdollahian, M., Abachi, H. & Lisner, R., (1998). "Control of a Multi-input/Output Process through a Parallel Computer Architecture", Proceedings of the Fifth International Conference on Control, Automation, Robotics and Vision (ICARCV '98), Singapore, pp. 1598-1602.
Venkatesan, G., Abachi, H.R., Ibrahim, R.N., Mcormack & Debnath, N.C., (1999). "An Integral Controller Application in Quality Control", Proceedings of the 26th International Conference on Computers and Industrial Engineering, Melbourne, pp. 601-605.

Suggested Exercises

1. Discuss process control applications to product quality control.
2. Explain dead-beat control.
3. Describe a model-based controller.
4. How are robustness and controller performance related? Find out how industries achieve good performance through a robust controller.

Chapter 10

Conclusion

This monograph has achieved the following objectives. This book will assist graduate students, post-graduates and researchers in Mathematics, Statistics, control engineering and other applied or closely related areas/disciplines in which modelling, algorithm development, simulation principles, and the analysis and interpretation of simulation results are applied to achieve reasonable outcomes. The results may be useful to the scientific and technical community, process control practitioners, and quality assurance and quality control practitioners. It will be useful to process control practitioners, irrespective of their background, be it Mechanical, Electrical or Chemical Engineering, working in production, process, quality control and quality assurance areas. An integrated model of Automatic Process Control (APC) and Statistical Process Control (SPC) can be developed especially when the tools and techniques of the two different process control disciplines overlap at their interface, thus reaping the advantages of the best of both worlds. The second-order mathematical model developed in this monograph shows that it is possible to make a marked deviation from the conventional first-order Single-Input, Single-Output (SISO) model while still achieving process control without deviating from the process control principles of either discipline. This book points out that once the stochastic feedback control algorithm is developed, it is possible to interpret and explain its salient features and terms (for example, the second term in the algorithm, which gives integral control). Once the algorithm is simulated in computer code and the results are obtained, this monograph shows possible ways and methods to discuss and analyse the simulated results to develop process regulation schemes, statistical control procedures and a suitable cost model.


The analysis of the results is further developed into control procedures for process and product quality control for minimising output product variance. For the benefit of process control practitioners, process regulation schemes have also been developed in this monograph which, if appreciated in the proper perspective, can be of immense value and benefit in process control and in minimising the variance of the outgoing product quality output by the skilful (heuristic) use of the adjustment intervals. The probability model and cost model developed in this monograph can be suitably modified and adapted, depending upon the process operating conditions and process control circumstances, and put to practical use. For process control practitioners without a control engineering background, the concept of dead-time (time-delay) in a process has been introduced, and the relevant Smith Predictor and Dahlin algorithm concepts have been discussed. The parallel processing computer architectures, though they may be based on and follow concepts of the last decade, when computer science was a nascent, developing technology, show that a feedback control algorithm of this or similar nature can be incorporated suitably in process control computers so that better process control can be achieved for the benefit of the community at large. Lastly and most importantly, the ideas and concepts of sampled-data control and Direct Digital Control have been brought into the text of this monograph in such a way that it will serve the best interests not only of those engaged in modelling and algorithm development but also of general readers who have an interest in digital control. Some recent references on the integration of Statistical and Automatic Process Control are given below for the benefit of readers in the two process control disciplines.

REFERENCES

Tsung, Fugee & Tsui, Kwok-Leung, (2003). "A mean-shift pattern study on integration of SPC and APC for process monitoring", IIE Transactions, vol. 35.
Capilla, Carmen, Ferrer, Alberto, Romero, Rafael & Hualda, Angel, (1999). "Integration of statistical and engineering process control in a continuous polymerization process", Technometrics, vol. 41, pp. 14-28.
Kandananond, Karin, (2010). "Effectively Monitoring the Performance of Integrated Process Control Systems under Non-stationary Disturbances", International Journal of Quality, Statistics, and Reliability, Volume 2010, Article ID 180293, 9 pp., doi:10.1155/2010/180293.


Akram, Muneeb A., Saif, Abdul-Wahid A. & Rahim, M. Abdur, (2012). "Quality monitoring and process adjustment by integrating SPC and APC: a review", Int. J. of Industrial and Systems Engineering, vol. 11, No. 4, pp. 375-405.
Saif, Abdul-Wahid A., Akram, Muneeb A. & Rahim, M. Abdur, (2011). "A fuzzy integrated SPC/APC scheme for optimised levels of process quality, performance and robustness", International Journal of Experimental Design and Process Optimisation, vol. 2, No. 2, pp. 161-189.
Yu, Jianli, Zhang, Zongwei & Xu, Liang, (2009). "Process Monitoring for Integration of SPC and APC Based on BP Neural Network", Intelligent Computation Technology and Automation, 2009 (ICICTA '09).
Tsung, F. & Tsui, K.L., (2003). "A study on integration of SPC and APC for process monitoring", IIE Transactions, 35:231-242.
Wang, Xiu Hong, (2011). "Research on Process Monitoring and Adjusting Methods", Applied Mechanics and Materials (vols. 130-134), Mechanical and Electronics Engineering III, Editor Huan Zhao, pp. 2513-2516.
Wang, Xiu Hong, (2010). "Optimize transition stages of the integrated SPC/EPC process using improved ant colony algorithm", Natural Computation (ICNC); Dept. of Ind. Eng., Zhengzhou Inst. of Aeronaut. Ind. Manage., Zhengzhou, China.
Jiang, W. & Tsui, K.-L., (2000). "An economic model for integrated APC and SPC control charts", IIE Transactions, vol. 32, Issue 6, pp. 505-513.
"A Compositive Method of Neural Networks and Control Charts for Monitoring Process Disturbance Based on Integrated SPC/EPC", Computational Intelligence and Software Engineering (CiSE 2009).
Wang, Hai-yu, (2011). "Statistical process control on time delay feedback controlled processes", International Journal of Experimental Design and Process Optimisation, vol. 2, No. 2, pp. 161-189.
Holmes, Donald S. & Mergen, A. Erhan, (2011). "Using SPC in Conjunction with APC", Quality Engineering, 23:4, pp. 360-364, doi:10.1080/08982112.2011.602941.
Zhang, Zongwei & Xu, Liang, (2009). "Process Monitoring for Integration of SPC and APC Based on BP Neural Network", Intelligent Computation Technology and Automation, 2009 (ICICTA '09), pp. 378-381.
Montgomery, D.C., Keats, J.B., Runger, G.C. & Messina, W.S., (1994). "Integrating statistical process control and engineering process control", Journal of Quality Technology, vol. 26, No. 2, pp. 79-87.
Duffuaa, S.O., Khursheed, S.N. & Noman, S.M., (2004). "Integrating statistical process control, engineering process control and Taguchi's quality engineering", International Journal of Production Research, vol. 42, No. 19, pp. 4109-4118.
Kandananond, K., (2008). "The effect of autocorrelation (stationary data) on the integrated statistical process control system", in Proceedings of the 3rd World Conference on Production and Operations Management, pp. 2433-2439.
Runger, G., Testik, M.C. & Tsung, F., (2006). "Relationships among control charts used with feedback control", Quality and Reliability Engineering International, vol. 22, No. 8, pp. 877-887.


Gultekin, M., Elsayed, E.A., English, J.R. & Hauksottir, A.S., (2002). "Monitoring automatically controlled processes using statistical control charts", International Journal of Production Research, vol. 40, No. 10, pp. 2303-2320.
Jiang, W. & Tsui, K.L., (2002). "SPC monitoring of MMSE- and PI-controlled processes", Journal of Quality Technology, vol. 34, No. 4, pp. 384-398.
Box, G.E.P. & Luceno, A., (1997). Statistical Control by Monitoring and Feedback Adjustment, John Wiley & Sons, New York.
Park, M., Kim, J., Jeong, M.K., Hamouda, A.M.S., Al-Khalifa, K.N. & Elsayed, E.A., (2012). "Economic cost models of integrated APC controlled SPC charts", International Journal of Production Research, vol. 50, Issue 14, pp. 3936-3955.
Tsung, Fugee & Tsui, Kwok-Leung, (2003). "A mean-shift pattern study on integration of SPC and APC for process monitoring", IIE Transactions, Institute of Industrial Engineers, Inc. (IIE).
Tsung, Fugee & Tsui, Kwok-Leung, (2006). Qual. Reliab. Engng. Int., published online in Wiley InterScience (www.interscience.wiley.com), DOI: 10.1002/qre.780.
Brockwell, P.J. & Davis, R.A., (1991). Time Series: Theory and Methods, Springer, New York.
Lowry, C.A., Woodall, W.H., Champ, C.W. & Rigdon, S.E., (1992). "A multivariate exponentially weighted moving average control chart", Technometrics, 34:46-53.
Hu, S.J. & Roan, C., (1996). "Change patterns of time series-based control charts", Journal of Quality Technology, 28:302-312.
Tsung, F., Wu, H. & Nair, V.N., (1998). "On efficiency and robustness of discrete proportional-integral control schemes", Technometrics, 40:214-222.
Shi, D. & Tsung, F., (2003). "Modeling and diagnosis of feedback-controlled processes using dynamic PCA and neural networks", International Journal of Production Research, 41:365-380.
Jiang, W.A., (2004). "A joint SPC monitoring scheme for APC-controlled processes", IIE Transactions, 36:1201-1210.
Zhang, Xiaolei & He, Zhen, (2012). "An Integrated SPC-EPC Study Based on Nonparametric Transfer Function Model", Appl. Math. Inf. Sci., 6-3S, pp. 759-768.
Jiang, W. & Farr, J., (2007). "Integrating SPC and EPC methods for quality improvement", Quality Technology & Quantitative Management, 4, pp. 345-363.
Yang, L. & Sheu, S., (2006). "Integrating multivariate engineering process control and multivariate statistical process control", The International Journal of Advanced Manufacturing Technology, 29, pp. 129-136.
Box, G. & Luceno, A., (1997). "Discrete proportional-integral adjustment and statistical process control", Journal of Quality Technology, 29, pp. 248-261.
Tsung, F., (2000). "Statistical monitoring and diagnosis of automatic controlled processes using dynamic PCA", Int. J. Prod. Res., vol. 38, No. 3, pp. 625-637.
Box, G.E.P., Coleman, D.E. & Baxley, R.V. Jr., (1997). "A comparison of statistical process control and engineering process control", Journal of Quality Technology, 29, pp. 128-130.

Conclusion

10.5

Fugee Tsung, (1999). Improving Automatic -Controlled Process Quality Using Adaptive Principal COMPONENT MONITORING, Qual. Reliab. Engng. Int. 15: 135-142. Messina, W.S., Montgomery, D.C., Keats, J.B. & Runger, G.C. (1996). ‘Strategies for statistical monitoring of integral control for the continuous process industries’, in Keats, J.B. and Montgomery, D.C., (eds), Statistical Applications in Process Control, Marcel Dekker, New York, 1996, pp. 193–215. Rong Pan, Enrique del Castillo, (2001). Integration of Sequential Process Adjustment and Process Monitoring Techniques, Industrial and Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16802. Ruhhal, N.H., Runger, G.C. & Dumitrescu, M., (2000). Control charts and feed-back adjustments for a jump disturbance model. Journal of Quality Technology, 32 (4): 379-394. Guo, R.S., Chen, A. & Chen, J.J., (2000). An enhanced EWMA controller for processes subject to random disturbances. In Run to run control in semiconductor manufacturing, Moyne, J., Del Castillo, E., and Hurwitz, A., eds., CRC press. Ye, Liang, Pan Ershun & Xi Lifeng, (2010). Economic design of integrating SPC and APC with quality constraints, Control and Decision Conference (CCDC), 2010 Journal of Systems Engineering and Electronics, vol. 19, No. 2, 2008, pp.329-336. Box, et.al, (2010). ‘Statistical Control by Monitoring and Adjustment, John Wiley & Sons, UK. King, Myke, (2011). Process Control, Wiley, United Kingdom. Groover, (2007). Automation, Production Systems, and Computer-Integrated Manufacturing – Prentice-Hall. Seborg, Edgar, & Mellichamp, (2010). Process Dynamics and Control – Wiley. Roffel & Betlem, (2006). Process Dynamics and Control – Modelling for Control and Prediction – Wiley.

Index

Adaptive quality control 1.3
Adjusting 2.13
Adjustment Frequency (AF) 3.28, 5.5
Adjustment Interval (AI) 5.5, 8.1
Alarm control signal 4.7
Algorithmic Statistical Process Control (ASPC) 1.6
Amplitude-sensitive 4.19
Annual demand 2.12
Applied parallelism 7.16
Appropriate noise model 8.14
ARIMA model 2.9, 2.10
Artificial Intelligence (AI) 7.4, 7.13
Artificial Neural Network (ANN) 7.4
Autocorrelations 2.11, 4.51, 8.8
Autocovariance 8.12
Automatic Process Control (APC) 1.1, 3.1, 5.3, 6.1, 10.1
Automatic self-reconfiguration 7.14
Automation 2.11
Autoregressive Integrated Moving Average (ARIMA) models 2.9
Average Outgoing Product Quality Level (AOQL) 4.45
Average Run Length (ARL) 4.44
Backward shift operator 2.7
Benefits of integral control 5.10
Bleaching of pulp 2.5

Capital 2.12
Carry-over 2.11
Central composite design 3.28, 6.12
Central Processing Units (CPUs) 7.13
Central super-computer 7.13
Characteristic equation 3.14
Characteristics of inertia 6.9
Chemical 1.1
Chemical plant 2.13
Clock period 6.4
Closed-loop gain 1.10
Closed-loop time constant 5.10
Compensation 1.10
Complex roots (curve c) 3.19
Compromise 2.11, 2.13
Conditional maximal 3.26
Constrained minimum variance control 4.34
Constrained variance control 4.30
Continuous process industries 2.5
Control action 3.2
Control charts 1.4, 1.11
Control Charts and Statistical Process Control (SPC) 1.4
Control drifts 2.9
Control engineering terminology 2.10
Control engineers 1.9
Control error sigma 3.24, 4.10, 4.14, 6.18

Control error variance 4.35, 5.3
Control limit values 4.49
Control philosophy 1.4
Control stability 3.13
Control system 1.9
Control system engineers 2.8
Controller 2.6, 6.6
Controller gain (CG) 4.14
Controller set point 2.6, 2.9
Cost models 2.2, 2.10, 2.13, 8.7
Cost variable 2.12
Cost-of-living 2.4
Costs of sampling 2.13
Criteria 2.9
Critically damped 5.8, 7.7
Cumulative sum (CUSUM) charts 1.4
Current forecast 4.10
Dahlin algorithm 10.2
Dahlin's controller 6.13
Dahlin's parameter 6.3
Dead band 4.19
Dead-beat response 9.2
Dead-time 2.5, 3.11
Dead-time compensation 4.48, 6.3, 6.12, 9.10
Dead-time processes 2.6, 6.2
Dead-time simulation 5.2, 6.20
Dead-zone 4.19
Demand 2.11, 2.13
Design of exponentially weighted moving average schemes 1.11
Desired value 2.4
Deterministic model 2.7, 2.8
Development of time series models 3.8
Deviation (error) 2.6
Deviations 2.4
Device 2.10
Digital controllers 9.14
Digital techniques 2.5
Direct Digital Control (DDC) 6.3, 6.4
Discrete systems 2.3
Discretely coincident 2.2, 2.7

Disturbance 2.6
Disturbance causes variability 2.6
Disturbance model 2.9
Disturbances 2.7, 2.9, 3.3
Down time 2.11
Drift off 2.6
Drifting 3.3, 5.2
Drifting processes 3.27
Dynamic optimization 9.12
Dynamic parameters 2.3, 7
Dynamical system 2.3
Dynamic-stochastic model 4.2
Economic models 2.2
Economic Order Quantity (EOQ) 2.12
Economical sampling interval 8.10
Efficiency 2.10
Engineering control application 6.15
Equilibrium condition 2.6
Error 1.9, 2.4
Error detection 2.12
Error free 2.12
Error signal 1.10
Error variance feature 3.24
EWMA chart 4.7
EWMA chart control limits 4.4
EWMA control limits (l) 8.2
EWMA forecasting 4.10
EWMA process control 4.15
EWMA weights 3.27
Experimental design 4.14
Expert systems methodology 7.4
Exponentially Weighted Moving Average (EWMA) 1.4, 4.4, 6.2
Fast and slow drifts 4.11
Feedforward control 2.4
Feedback (control) system 2.4
Feedback control 2.4
Feedback control algorithm 1.1
Feedback control difference equation 3.8
Feedback control error 6.19

Feedback control loop 4.2
Feedback control scheme 4.42
Feedback control stability 2.6
Feedback control strategy 2.6
First-order (linear) differential equation 2.7
First-order dynamic system 2.7
Fixed cost 2.12
Formalisation 2.8
Fortran programme 3.13
Frequency response analysis 3.14
GDPS 2.2
Geometric Moving Average (GMA) 4.9, 4.16
GMA theta 4.24
Gross domestic product 2.2
Heterogeneous 2.5
Hybrid process control 3.5
IMA disturbance 3.28
IMA parameter 4.4
Important considerations for feedback 3.13
Increases in Standard Deviation (ISTD) 4.49
Inertial 2.3
Input adjustments 1.2
Input manipulated variable 2.7, 3.9
Input parameters 2.2
Input random shocks 4.47
Input variable 2.4
Instrument Society of America Standard on Process Instrumentation Terminology (ISA–S51.1/1976) 1.9
Insurance 2.12
Integral controller 9.1, 9.3
Integral Time (IU) 4.18, 5.9
Integrated hybrid (process) control system 1.4
Integrated hybrid process control methodology 1.1
Integrated moving average (IMA) 5.4
Integrated moving average (IMA) parameter 2.10
Integrating SPC 1.4
Interpretation 2.8, 2.9

Inventory control 2.12
Inventory cost 2.12
Inventory holding cost 2.12
Inventory items 2.11
Inventory level 2.4
Lag-dominant 9.9
Lags 2.6
Laplace transforms 2.8
Lead time l 3.24
Linear difference 2.3
Linearity 2.8
Local Area Network (LAN) 7.2
Long monitoring intervals 4.33
Loop gain 1.10
Machine 2.11
Machine shop 2.4
Maintenance 2.12
Manageable cost 2.11
Manufacturing cost 2.13
Manufacturing process 2.4
Market 2.12
Massively Parallel Processing (MPP) 7.5
Mathematical model 2.3
Mathematical, stochastic and cost models 2.1
Mean 2.6
Mean of forecast error 4.20
Mean square error (MSE) 4.2, 4.18, 6.20, 9.18
Mechanism 2.3
Meteorological models 2.2
MIMO feedback control methodology 7.8
MIMO process control 7.4
Minimization of variance 1.2
Minimize integrated error 6.16
Minimum integral absolute error 5.10
Minimum mean square 8.7
Minimum mean square error 6.20, 8.7
Minimum variance control 4.2, 4.31
Minimum variance controllers 3.2
Model misspecification 8.10

Model parameters 2.1
Model-based control of product quality output 9.20
Model-based controller 9.1, 9.19
Modelled as linear 'stationary' systems 2.5
Monitoring processes 1.2
Moving average filter 2.10
Multi-bus 7.3
Multi-ported memory 7.2
Multi-process control system 7.5
Multi-processor system 7.14
Multi-ring 7.3

Natural material 2.5
Natural parallelism 7.16
Nelder–Mead search algorithm 4.15
Non-conforming product 4.45
Non-linear 2.5
Non-self-regulating 4.19
Non-stationary process 2.10
Normal distribution 2.10
Normalized performance index 9.17
Normally distributed 2.8
Null hypothesis 4.9
Objectives 1.1, 2.13
Observation cost 8.7
Off-target 4.43, 8.2
Off-target cost 8.10
On-line statistical process control 5.3
Operational failure 2.11
Optimal (feedback control) scheme 2.9
Optimal compromise 2.11
Optimal decisions 2.11
Order quantity 2.12
Outgoing product quality variable 1.6
Out-of-control signal 5.6, 9.5
Output controlled variable 2.4, 2.7, 4.26
Output product quality variable 2.6
Output variance (VARCE) 4.3, 4.42
Paper-making processes 2.5
Parallel processing 7.3
Parallelism 7.11
Parameters 2.2
Perfect solution 9.13
Performance 1.2, 2.6
Performance index 5.12
Performance measurement 4.44
Period of oscillation 9.8
Phase shift 6.7
PID controller 3.2
Pipelining 7.11, 7.13
Practical control strategies 4.56
Practical integral controller 9.6
Prices controlled 2.4
A priori information 6.8
Probabilistic 2.8, 3.3
Process adjustments 2.6
Process behaviour 2.5
Process control 1.1, 2.5, 2.6
Process control automation 7.1
Process control charts 2.9
Process control industries 1.4
Process control methodology 1.4, 5.6
Process control practitioner 1.4, 2.13, 4.2
Process control problems 2.7
Process control system 1.4
Process control variables 2.9
Process Gain (PG) 6.18
Process improvement 1.4
Process input 2.7
Process level 2.6
Process modelling 2.5, 6
Process time constants 2.7
Product giveaway 5.11
Product quality control 1.4, 2.9
Product quality output 2.9
Product variability 4.30, 4.41
Production line 2.4
Production rate 2.4
Production run 2.11, 2.12
Progressive reduction 9.15
Proportional Integral (PI) 3.2
Quality index 8.3
Quality variation 1.4
Quantities controlled 2.4
Random (input) shocks 2.10
Random inputs 2.8
Random noise 1.8
Random output variations 2.10
Random shock NID 4.26
Random variable 2.8, 10
Random-walk model 6.10
Rationing 2.4
Raw materials 2.5
Real roots (curve r) 3.19
Reduce transfer lag 4.57
Reference input 2.4
Reformer 9.3
Regulation 2.4
RMS input (signal) 9.13
Robustness 9.17
Robustness index 9.16
Root causes 1.4
Routh's stability criterion 3.14
Routh's test 3.14
Run length 8.2, 8.3
Sample period 4.4
Sample size 4.51
Sampled-data controller 5.6
Sampling 4.8
Scan period 5.11, 9.10
Scheduling 2.4
Second-order dynamic model 2.7, 3.11
Second-order process model 2.7
Second-order transfer function 2.5
Self-regulating 4.19
Self-tuning 6.6
Self-tuning regulator 6.6
Servo-controller 5.1
Set point 2.3
Shewhart charts 1.4
Shewhart charts in SPC 1.4
Shorter dead-time 4.57
Simulation strategy 4.13
Slow drifts 4.10
Smith predictor 6.13, 10.2
Software development 2.11

Span shift 6.7
SPC approach 4.42
SPC methodologies 1.4
Special causes 1.4
Special or assignable causes of variability 1.4
Speed-up factor 7.15
Square-loop hysteresis 4.19
Standard Shewhart chart 4.12
Standardised action limit 4.50
Star-ring 7.3
Star-ring parallel processing system 7.10
State of control 2.6
State-space models 5.3
Stationary autoregressive filter 2.10
Statistical analysis 2.9
Statistical control application 6.16
Statistical control limits 1.4
Statistical model 2.8, 2.9
Statistical monitoring 1.4
Statistical process control (SPC) 1.1, 5.3, 6.1, 9.1
Statistical quality control practitioner 1.4
Steady-state gain 4.5
Steel making processes 2.5
Sticky innovations 1.2
Stochastic (statistical) models 2.7
Stochastic feedback control algorithm 2.9, 3.3, 3.5
Stochastic information 2.8
Stochastic models 2.2, 2.7, 2.8
Stochastic output process 2.9
Stochastic process 1.4
Stochastic process control 1.4
Stochastic variable 2.10
Stochastically 2.8
Stock-out cost 2.12
Student's t-test 2.8
Super-scaling 7.11, 7.13
Symmetric Multiprocessing System (SMP) 7.5
System dynamics 6.9
System gain 3.10

Target 2.3, 2.9
Target value 2.4
Technical feedback 2.4
Three-parameter transfer function-noise model 2.7
Time 2.12
Time delays 1.1
Time series 2.9, 2.10, 3.1
Time series controller performance measures 4.12
Time series controllers 3.4, 3.5, 3.20
Time series model 2.9, 2.10, 3.4
Time-delay system 2.7
Traffic flows on roads 2.11
Transforms 2.3
Transmission lags 2.6
Tuning parameter combinations 3.26

Type I error 4.9
Type II error 4.9
Uncorrelated random shocks 2.10
Uncorrelated random variables 2.10
Unit cost 2.12
Variables 2.8
Variance of control error 4.4
Variance of the control adjustment 4.35
Variance shocks 1.2
Variances 1.1, 1.2, 2.4, 6
Wandering 1.5
White noise 2.10
Wood 2.5
Wood-fibre treatment processes 2.5
Yields 2.9