Risk Management for Medical Device Manufacturers [1 ed.] 1636940137, 9781636940137

As a quality professional in the medical device industry, you know all too well the importance of a risk management process.


English | Pages: 258 [259] | Year: 2022


Table of Contents
Cover
Title page
CIP data
Contents
List of Figures and Tables
Dedication
Acknowledgments
Preface
List of Abbreviations
Section 1_Building a Risk Program
Chapter 1_The Scope of Risk Management
Chapter 2_What is “Risk”?
Chapter 3_The Sequence of Events—What to Measure
Chapter 4_Control and Monitoring of the Sequence of Events
Chapter 5_How to Define Occurrence Criteria in the RMP
Chapter 6_How to Define Severity Criteria—A Master Harms List
Chapter 7_Establishing Design Inputs and Process Controls
Chapter 8_Risk Analysis and Evaluation
Section 2_Risk in Verification/Validation
Chapter 9_Determining What’s Critical to Quality
Chapter 10_AQL or LTPD?
Chapter 11_Confidence
Chapter 12_Reliability
Chapter 13_How to Distribute Samples—The Invalid Assumption
Chapter 14_Continued Process Verification
Chapter 15_Test Method Validation
Section 3_Using Risk
Chapter 16_What is the Requirement?
Chapter 17_Risk-Based Decisions Regarding the Need for an Investigation
Chapter 18_Quality System Nonconformities
Section 4_Information for Users/Patients
Chapter 19_Two Types of Information
Chapter 20_Warnings, Precautions, Contraindications, and Adverse Reactions/Events
Chapter 21_Information for Safety and Training
Chapter 22_Residual Risk
Section 5_Other Information
Chapter 23_More Bad Guidance in ISO 14971:2019
Chapter 24_Linking Your FMEAs (NCEAs)
Chapter 25_Overlapping Definitions
Chapter 26_Quality Data for Post-Market Surveillance
Chapter 27_Why Investigations Are Illegal in a Nonconformance Report (NCR)
Chapter 28_Don’t Blame the People
Chapter 29_Flowcharts
Summary of Key Takeaways
Checklist of Questions
Bibliography
Index
About the Author


Risk Management for Medical Device Manufacturers ●  ●  ●

Joe Simon

Quality Press Milwaukee, Wisconsin

American Society for Quality, Quality Press, Milwaukee, WI, 53203

All rights reserved. Published 2022.
© 2022 by Joe Simon

No part of this book may be reproduced in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Publisher’s Cataloging-in-Publication data
Names: Simon, Joe W., author.
Title: Risk management for medical device manufacturers / Joe Simon.
Description: Includes bibliographical references and index. | Milwaukee, WI: ASQ Quality Press, 2022.
Identifiers: LCCN: 2021951458 | ISBN: 978-1-63694-013-7 (paperback) | 978-1-63694-014-4 (epub)
Subjects: LCSH Medical instruments and apparatus industry—Risk management. | Medical instruments and apparatus—Quality control. | Medical instruments and apparatus—Safety measures. | BISAC BUSINESS & ECONOMICS / Industries / Manufacturing | BUSINESS & ECONOMICS / Industries / Healthcare | MEDICAL / Instruments & Supplies
Classification: LCC R856.6 S56 2022 | DDC 610.284—dc23

ASQ advances individual, organizational, and community excellence worldwide through learning, quality improvement, and knowledge exchange.

Bookstores, wholesalers, schools, libraries, businesses, and organizations: Quality Press books are available at quantity discounts for bulk purchases for business, trade, or educational uses. For more information, please contact Quality Press at 800-248-1946 or [email protected]. To place orders or browse the selection of all Quality Press titles, visit our website at: http://www.asq.org/quality-press.

Printed in the United States of America.
26 25 24 23 22 SWY 7 6 5 4 3 2 1


List of Figures and Tables ●  ●  ●

Figure 0.1  Closed-loop risk management process
Figure 1.1  The scope of risk management
Figure 3.1  The ISO 14971 example sequence of events
Figure 3.2  A typical “risk expert” sequence of events
Figure 3.3  Sequence of events for manufacturers
Figure 4.1  A typical FMEA (NCEA)
Figure 4.2  The sequence of events—initial probability
Figure 4.3  The sequence of events—residual probability
Table 5.1  Example occurrence criteria
Figure 6.1  Building a master harms list
Figure 7.1  Defining user requirements
Figure 7.2  Tracing user requirements to risk
Figure 7.3  Design specifications to control use risks
Figure 7.4  Tracing design specifications to risk
Figure 7.5  Process specifications to control design risks and tracing process specifications to risk
Figure 8.1  Example harm input and summary table
Figure 8.2  Example nonconformance input and summary table
Table 9.1  Example severity criteria for ISO 14971
Figure 10.1  OC curve and example formulas
Table 11.1  Example severity criteria for ISO 14971 with confidence for validations
Figure 12.1  Example risk evaluation criteria
Table 13.1  Example effective sample size spreadsheet
Table 13.2  Limits of testing the sources of variance
Table 15.1  Characteristics for test method validations
Table 15.2  Where test method validation characteristics may be tested
Figure 15.1  Cumulative confounded variances (“Joe’s mustache curve”)
Figure 15.2  Cumulative confounded variances—two sources
Figure 15.3  Cumulative confounded variances—two sources, offset
Figure 15.4  Cumulative confounded variances with measurement system threshold
Figure 15.5  Cumulative confounded variances with measurement system bias
Figure 15.6  Cumulative confounded variances with measurement system bias in limits
Figure 15.7  Cumulative confounded variances with bias (+/–) thresholds
Figure 15.8  Cumulative confounded variances with bias (+/–) thresholds within measurement variance
Figure 17.1  Risk-based decision tree
Figure 18.1  The four pillars of risk
Table 18.1  A comprehensive example of severity criteria
Figure 24.1  Linking FMEAs (NCEAs)
Figure 25.1  Venn diagram of overlapping definitions
Table 25.1  Questions for understanding definitions
Table 25.2  Questions for understanding definitions with actions
Figure 26.1  Quality data for post-market surveillance
Table 26.1  Example strategy for post-market clinical evaluation options
Figure 28.1  Human failure types
Figure 29.1  Legend for flowcharts
Figure 29.2  Risk management process
Figure 29.3  Risk management plan (RMP)
Figure 29.4  Risk tools (e.g., FTA, NCEA)
Figure 29.5  Risk management report (RMR)
Figure 29.6  Risk management post market (EU MDR/IVDR combined)
Figure 29.7  Risk management post market (EU MDR)
Figure 29.8  Risk management post market (EU IVDR)
Figure 29.9  Risk management post-market summary
Figure 29.10  Product development process—clinical (EU MDR)
Figure 29.11  Product development process—clinical (EU IVDR)
Figure 29.12  Risk assessment (the closed-loop detailed view)

Electronic versions of the flowcharts available at https://asqassets.widencollective.com/ portals/lfhmz53z/(H1593)SupplementalFiles-RiskManagementforMedicalDevice Manufacturers.

Dedication

This book is dedicated to my parents, Bette and Orin Simon. Their continuous support and love made my life, education, work, and this book possible.


Acknowledgments ●  ●  ●

To Ravi Bhullar (a genuine Indian Guru), without whom this book would not have happened.

To Steve Stegmeier, who provided the epiphany for the mustache curve (and, as I recall, had a pretty good mustache himself).

To an old supervisor, who said validation was too complicated for me—challenge accepted and obliterated.


Preface ●  ●  ●

You may be asking, “Why do I need to read this book? What makes this different from all the risk books, articles, and presentations I’ve been exposed to in the past?” The short answer is…that there is no short answer.

You’re reading this book because the ISO 14971 standard is not geared toward manufacturers. As a result, your risk documents have been collecting dust in a corner while your teams do the best they can to get product out the door and not hurt people in the process. Your systems are likely disconnected, and your people are aimlessly guessing how to design, validate, monitor, and update your products.

This book will help you fix all that. It will show you how to properly build and connect your risk documents, so you can have an efficient, effective, and compliant risk process that supports (rather than hinders) your people and your processes. The concepts in this book will help you make better products without all the guesswork and help you respond to issues faster and more effectively. As a bonus, when your people see how the work they do connects and helps improve the product (and, by extension, the lives of your users and patients), they will enjoy their jobs more and take better ownership of their work.

After finishing this book, you will understand that risk management really is simple and intuitive when done right. Yes, it’s broad in scope—but it doesn’t have to be so difficult, confusing, and convoluted.

If you only take one thing from this book, I hope it’s that the occurrence rates in your risk documents need to be meaningful. They need to connect directly to your verification/validations and be the thresholds for your nonconformance (NC) and complaint processes (Chapters 3-4, 8, 12, and 16-17). If the only change you are able to make at your company is to implement the concepts in Chapters 3-4 (i.e., to capture your “NC” rates in your risk documents), much of the rest of the changes may happen organically.

The information presented in this book assumes the reader has at least a basic understanding of risk management—specifically, familiarity with the concepts in ISO 14971, hands-on experience with risk tools (for example, fault tree analysis and failure mode and effects analysis), and knowledge of the regulations that apply to their products and to risk (for example, 21 CFR 809, 21 CFR 820, and the EU Medical Device Regulation [MDR] and/or the In Vitro Diagnostic Regulation [IVDR]). Hopefully, you are open to fixing some of the issues you have been struggling with for years. (If you are new to risk management, read through the regulations and standards, and consider taking an introductory-level course prior to reading further.)

As someone experienced with risk management, you already know that there are numerous inputs and outputs for each step in the risk process. Additionally, the process is very iterative. In other words, sometimes you need to begin the process before you’re able to clearly define your criteria. For example, if you are building a risk management process for your company’s first product, you won’t have similar products from which to define a “master harms list.” Because of this variety of inputs/outputs and the iterative nature of the process, it’s not very linear. This made it difficult to write a book and still address all the necessary background before digging into each topic. As a result, some topics may rely on information that is presented later in the book. I have made every effort to minimize this and present the information in as linear a fashion as possible, but the reader should be aware that this is a broad and interdependent process.
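The preface’s central point, that the occurrence rates stated in your risk documents should serve as the thresholds for your NC and complaint processes, can be sketched in a few lines. The names, rates, and escalation rule below are invented for illustration only, not taken from the book:

```python
# Sketch (hypothetical names and numbers): using risk-file occurrence rates
# as escalation thresholds for nonconformances/complaints.

RISK_FILE = {
    # nonconformance type: occurrence rate claimed in the risk analysis
    "luer_leak": 0.001,      # no more than 0.1% of units
    "label_mixup": 0.0001,   # no more than 0.01% of units
}

def needs_capa(nc_type: str, complaints: int, units_distributed: int) -> bool:
    """Escalate when the observed field rate exceeds the rate the risk
    documents claim; below the threshold, the product is performing as
    analyzed and no CAPA is triggered by this rule."""
    observed_rate = complaints / units_distributed
    return observed_rate > RISK_FILE[nc_type]

print(needs_capa("luer_leak", 8, 10_000))   # 0.0008 <= 0.001 -> False
print(needs_capa("luer_leak", 15, 10_000))  # 0.0015 >  0.001 -> True
```

The point is only that the comparison becomes mechanical once the risk file states a meaningful rate; the book develops the real version of this linkage in Chapters 3-4 and 16-17.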



The information presented here is based on my knowledge and experience gained over many years of working and consulting in various areas of risk management. The information is also based on the regulations and standards that were available at the time this text was being written. As regulations and standards are updated, the information and references presented here will undoubtedly become outdated. Therefore, it is recommended that the reader not only assess how best to implement the concepts in this text, but also assess whether any changes have occurred to the referenced regulations and standards that would alter the implementation of the concepts.

Some of the concepts presented here were developed and honed while assisting a medical device company that was dealing with thousands of lawsuits for its trans-vaginal mesh products (you may remember the lawyers’ commercials). We were asked to help the company make its risk processes and labeling “bulletproof” (their words)—specifically, to ensure the benefits of its products objectively outweighed the risks (so the company could defend keeping its products on the market) and to properly disclose the risks to minimize the company’s future liability. While not all of these concepts were developed during that project, it emphasized for me the importance of the risk management process and started an effort to develop more meaningful and applicable risk documents.

With the advent of the EU MDR and IVDR, there has been a focus on making sure risk management processes have the necessary procedural relationships to ensure the proper data are flowing in and the proper decisions are flowing out to drive necessary changes.

A man was so lazy he decided to exercise so he wouldn’t have to carry around those extra pounds all his life. Do you want to carry around the extra weight of an inefficient risk process for your entire career, or are you lazy enough to make it efficient?

If you follow the steps laid out in this book:

✔ Your risk documents will have meaningful criteria and thresholds.
✔ Your products and manufacturing processes will be designed to control risks.
✔ You will have validated and verified your risks (initial and ongoing testing).
✔ Your verification/validation (V/V) sampling plans and sample sizes will be based on risk.
✔ Your V/V sampling plans will be adjusted based on where the variation is in your process.
✔ Your test method validations will be based on risk.
✔ Your NCs and complaints will use the risk thresholds to determine when a corrective action/preventive action (CAPA) is needed.
✔ Your NC process will work for “product, process, and quality system” nonconformances.
✔ Your risks will be properly communicated to the users and patients through labeling, instructions for use (IFU), training, etc.
✔ You will have an efficient “closed-loop” risk management system.

And,

• Your post-market surveillance (PMS) process can feed into the risk management system to determine if you need additional clinical/performance studies.
• Your risk documents will be audit ready.



Figure 0.1  Closed-loop risk management process. [Flowchart elements: design; analyze/evaluate risks; validation/verification (initial and continuous); evaluate quality data; information for user/patient; CAPA. *CAPA is not typically required to drive a re-design during initial development of a product/process.]

These “bulletproof” strategies have been used at large, multinational manufacturers with hundreds of products and various departments working collaboratively. They have also been used at a small dental company with a single quality engineer tasked to do it all herself for a handful of products. They have been used in medical device companies as well as in vitro diagnostic medical device companies. Because the concepts rely heavily on ISO 14971 and ISO 13485, which are medical device standards, they are not specifically tailored to the pharmaceutical industry (though they could be, but I’ll leave that for someone else to write).

Some of the concepts presented involve or rely heavily on statistics. While my degrees are not in statistics, I was blessed to have mathematicians for parents. I have also worked with several statisticians, and much of my career has involved hands-on work with large piles of data, spreadsheets, trending, etc. Most of the formulas presented here reference the source; however, some were created by me or by my coworkers. All formulas should be vetted for your company before applying them to your processes and products.

List of Abbreviations ●  ●  ●

5W2H: Who, What, When, Where, Why, How, and How Much
8D: Eight Disciplines Problem-Solving Process
AAMI: Association for the Advancement of Medical Instrumentation
AFAP: As Far as Possible
ALARP: As Low as Reasonably Practicable
ANOVA: Analysis of Variance
ANSI: American National Standards Institute
AQL: Acceptance Quality Limit
ASQ: American Society for Quality
BOM: Bill of Materials
BS: British Standard
CAPA: Corrective Action/Preventive Action or Corrective and Preventive Action
CEN: European Committee for Standardization
CENELEC: European Committee for Electrotechnical Standardization
CEP/PEP: Clinical Evaluation Plan/Performance Evaluation Plan
CER/PER: Clinical Evaluation Report/Performance Evaluation Report
CFR: Code of Federal Regulations
CGMP: Current Good Manufacturing Practice
CNC: Computer Numerical Control
Cpk: Process Capability Index
CPSP: Clinical Performance Study Plan
CPSR: Clinical Performance Study Report
CQE: Certified Quality Engineer
CS: Common Specification
CTQ: Critical to Quality
dFMEA: See FMEA.
DFU: Directions for Use
DMR: Device Master Record
DOE: Design of Experiments
DV&V: Design Verification and Validation
EC: European Commission
EMA: European Medicines Agency
EN: European Standard (from Europäische Norm)
ERC: Essential Requirements Checklist
ETSI: European Telecommunications Standards Institute
EU IVDR: European Union In Vitro Diagnostic Medical Device Regulation 2017/746
EU MDR: European Union Medical Device Regulation 2017/745
Eudamed: European database on medical devices
FDA: Food and Drug Administration
FD&C Act: Federal Food, Drug, and Cosmetic Act
FMEA: Failure Mode and Effects Analysis (aka FMECA, Failure Mode and Criticality Effects Analysis); may include use, design, and process variations
FTA: Fault Tree Analysis
GR&R: Gage Repeatability and Reproducibility
GSPR: General Safety and Performance Requirements (checklist)
ICH: International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use
ICS: International Classification for Standards
IEC: International Electrotechnical Commission
IFU: Instructions for Use
IQ: Installation Qualification
IOQ: Installation and Operational Qualification
ISO: International Organization for Standardization
ISO/FDIS: International Organization for Standardization/Final Draft International Standard
ISO/TR: International Organization for Standardization/Technical Report
IT: Information Technology
IUS: Intended Use Statement
IVD: In Vitro Diagnostic Medical Device
IVDR: See EU IVDR.
LOD: Limit of Detection (aka Detection Limit or MDL)
LOQ: Limit of Quantitation
LTPD: Lot Tolerance Percent Defective
MAUDE: Manufacturer and User Facility Device Experience
MD: Medical Device
MDL: Method Detection Limit or Minimum Detection Limit (aka LOD)
MDR: See EU MDR.
MDR/MDV: Medical Device Reporting/Medical Device Vigilance
NB: Notified Body
NC: Nonconformance
NCEA: Nonconformance Effect Analysis (may include use, design, and process variations)
NCi: Initial Probability of Occurrence of the Nonconformance
NCr: Residual Probability of Occurrence of the Nonconformance
NCR: Nonconformance Report
NIST: National Institute of Standards and Technology
OC: Operating Characteristic
OOS: Out of Specification
OQ: Operational Qualification
ORA: Office of Regulatory Affairs
OSHA: Occupational Safety and Health Administration
OUS: Outside of the United States
P (including Pi and Pr): (Initial and residual) Probability of occurrence of the harm
Px (including P0, P1, P2, PE, PF): Probability of occurrence of one of the events leading to harm
PDP: Product Development Process
PEP: See CEP/PEP.
pFMEA: See FMEA.
PMCF/PMPF: Post-Market Clinical Follow-up/Post-Market Performance Follow-up
PMS: Post-Market Surveillance
PMSP: Post-Market Surveillance Plan
PMSR: Post-Market Surveillance Report
Ppk: Process Performance Index
PPM: Part Per Million
PPQ: Process Performance Qualification
PQ: Process Qualification
PSUR: Periodic Safety Update Report
P/T: Ratio of the precision of a measurement system to the (total) tolerance of the manufacturing process of which it is a part
PV: Process Verification or Process Validation
QMS: Quality Management System
RAPS: Regulatory Affairs Professionals Society
RCA: Root Cause Analysis
RM: Risk Management
RMP: Risk Management Plan
RMR: Risk Management Report
RQL: Rejectable Quality Limit (See LTPD.)
S-CAP: Standards and Conformity Assessment Program
SCAR: Supplier Corrective Action Request
SDS: Safety Data Sheet
SEMATECH: Semiconductor Manufacturing Technology
SmPC: Summary of Product Characteristics
SOP: Standard Operating Procedure
SPC: Statistical Process Control
SSCP/SSP: Summary of Safety and Clinical Performance/Summary of Safety and Performance
Stewardship: An extension of an internal audit program where samples of records generated for a quality system element (for example, NC, CAPA, design change, etc.) are periodically evaluated against a standard set of criteria.
TF: Technical File (aka Technical Documentation)
TM: Test Method
TMV: Test Method Validation
UAI: Use As Is
uFMEA: See FMEA.
U.K. (or UK): United Kingdom
UNODC: United Nations Office on Drugs and Crime
URA: Use Risk Assessment
U.S. (or US): United States
V/V (including QS): Verification/Validation

Section 1 Building a Risk Program ●  ●  ●

Chapter 1 The Scope of Risk Management
Chapter 2 What is “Risk”?
Chapter 3 The Sequence of Events—What to Measure
Chapter 4 Control and Monitoring of the Sequence of Events
Chapter 5 How to Define Occurrence Criteria in the RMP
Chapter 6 How to Define Severity Criteria—A Master Harms List
Chapter 7 Establishing Design Inputs and Process Controls
Chapter 8 Risk Analysis and Evaluation

Chapter 1 The Scope of Risk Management ●  ●  ●

When most people hear the term “risk management,” they think of an engineer sitting at a desk writing up a couple of documents to file and forget about until an auditor asks for them. For medical device/in vitro diagnostic (IVD) companies, these documents are typically just a failure mode and effects analysis (FMEA) or two. Because of the recent European Union Medical Device Regulation (EU MDR)/In Vitro Diagnostic Regulation (IVDR) changes, they might now include a risk management report (RMR). The risk management plan (RMP) was likely ignored for their legacy products, but some companies may have added a paragraph to a standard operating procedure (SOP) or the RMRs, thinking that is all they need.

However, ignoring or patching over these documents is just asking for trouble—for your organization and for your customers. An inadequate risk management process may, at best, be noncompliant and may, at worst, cause significant inefficiencies within your organization and put users/patients at additional risk.

Risk management is far more than just a few documents thrown in the corner; it’s the link between most of your quality management system (QMS) elements. There are some processes (e.g., post-market surveillance) that are entirely within the scope of risk management. In PMS, we monitor and evaluate the real-life data (e.g., nonconformance reports, complaints, etc.) against the risk documents to see if our products are as safe as we believe them to be. Additionally, several other QMS processes should link to risk management for nearly all of their activities. The scope of risk management is shown in Figure 1.1.

Figure 1.1  The scope of risk management. [Diagram elements: Risk Management (risk management plan, risk tools, risk management report); Clinical/Performance (CEP/PEP, scientific validity, analytical performance, clinical performance, PMCF/PMPF); Post-Market Surveillance (PMSP, PMSR, PSUR); Quality Data (NCRs, complaints, trending, etc.); Regulatory (external standards, GSPR, technical file, Eudamed); Design Inputs/Process Controls; Information for User/Patient (labeling, IFU, SDS, training, marketing material); CAPA; Validation/Verification (CTQ, sample size).]

This book will first describe how to build a risk management program for generating those documents. It will then discuss how to use these tools properly to ensure your products are designed and built to control the associated risks. It will discuss how to use the risks to identify what is critical to the quality of your products and an objective, formula-based method for determining your validation and verification sample sizes and sampling plans. From there, the book will address how to analyze your quality data properly—through continuous monitoring and the PMS process—to determine when you need a CAPA to investigate issues and drive changes. Finally, the book will review how to use your risk documents to properly feed into your labeling, training, marketing, and advertising in order to accurately communicate the risks of your products to the customers/patients—and what they can do (or not do) to help mitigate those risks.
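As a taste of what a formula-based sample size looks like, the standard zero-failure (“success-run”) calculation derives n from a stated confidence C and reliability R via n = ln(1 − C)/ln(R). This is a common industry formula offered only as an illustration; it is not claimed here to be the specific method the book develops:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest zero-failure sample size n such that passing n units
    demonstrates the stated reliability at the stated confidence:
    n = ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Common confidence/reliability pairs:
print(success_run_sample_size(0.95, 0.95))  # 59
print(success_run_sample_size(0.95, 0.99))  # 299
```

Higher-risk characteristics get assigned a higher required reliability, and therefore a larger n, which is exactly the “sample sizes based on risk” linkage described above.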




Regulations and external standards serve as inputs to the user/customer requirements (the regulators are customers too). Additionally, a number of risk documents are required for inclusion in the product/product-family’s technical file (for example, post-market surveillance plan (PMSP) and report (PMSR), periodic safety update report (PSUR), clinical/performance evaluation report (CER/PER), verifications/validations, RMR, and labeling) and are required for submission into the new Eudamed database (for example, PSUR, trend reports, clinical performance study plans (CPSP) and reports (CPSR), and summary of safety and (clinical) performance (SSCP/SSP)).

Note: This book will describe some of the regulatory requirements and procedural relationships (inputs/outputs) as part of some specific topics, but it does not include a separate discussion of regulatory requirements. Similarly, the book will describe how clinical/performance studies can provide data for building your risk documents and how post-market surveillance (including the continuous process monitoring of your quality data) can identify the need for additional clinical/performance studies; however, the clinical/performance activities are not discussed separately.

Key Takeaway
• Some processes are entirely within the scope of risk management, and others should link with the risk management process for nearly all of their activities.

Chapter 2 What is “Risk”?

The ISO 14971 standards have many very good concepts; however, they focus on the users and patients and aren’t always geared toward the manufacturers (e.g., the definitions, the sequence of events in the annexes, etc.). Regulators and notified bodies also, understandably, focus their efforts on protecting users and patients. As an experienced risk professional, you likely already know the definitions of “risk” and “harm” from these regulations and the standards, but they are presented here for context.

risk: “combination of the probability of occurrence of harm and the severity of that harm”

harm: “injury or damage to the health of people, or damage to property or the environment”




Pet Peeve: Note that “detectability” is NOT part of risk. Detectability has never been part of the definition of “risk” in EN 1441 and ISO 14971, and the same definitions of risk were added to the EU MDR/IVDR a couple years ago. By definition, risk is just the occurrence and severity of the harm. Even though detectability, by regulation, is not part of the definition and was removed more than 13 years ago from the risk estimation guidance in ISO 14971, I still find companies trying to use detectability, and I recently read an article from a presumably reliable source touting its use for other industries. If you are one of those people who still think detectability is part of risk: STOP. Not only stop using it, but stop reading right here. Go back to the regulations and standards, and possibly take an introductory-level course prior to reading further.

There are so many reasons why “detectability” is improper, but we will only address some of them here. Readers familiar with test method validation (see Chapter 15) will recognize that detectability is just a euphemism for the power of a test (1 – β), where β is the rate of the type II error (aka false negative—for example, how often a bad product passes). As such, it only applies to a subset of the various types of risk controls and monitoring (see Chapters 3 and 4). By misapplying a factor from a small subset of their risk controls, companies artificially reduce their overall risk evaluation and waste time focusing on activities that may not benefit users and patients.

Besides, a good auditor shouldn’t care about rationalizations/excuses for how obvious your bad products are. They should only care about how often your products harm users/patients and how badly they are harmed—which are the two regulatory parts of risk. See Chapter 3 for additional discussion of detectability.




Notice also that the definition of “harm” covers more than just harm to the users and patients; it also covers harm to property (for example, processes) and the environment. Remember this definition. It will come up again.

Key Takeaway
• The examples in the standards are not geared toward manufacturers.

Chapter 3 The Sequence of Events—What to Measure

EN 1441 Medical devices—Risk analysis was first published in 1997 and required the identification of “possible hazards” and the “risk for a given hazard.” It described the hazards occurring “in both normal and fault conditions,” and in one of the notes it hinted at a link between hazards, failures, and failure modes. The first technique described in the annexes of EN 1441 was for the use of “failure mode and effects analysis (FMEA).” When EN ISO 14971 replaced EN 1441 in 2001, it continued to use similar language and added the concept of the “hazardous situation.” Then with the 2007 revision, it added the concepts of P1 and P2. To this day, most manufacturers don’t use this terminology outside of their risk management processes, and it is this disconnect that has rendered their risk documents meaningless and unusable. The ISO 14971 standard describes there being a “foreseeable sequence of events” that leads up to the harm, and the examples in the standard only describe the last three events in the sequence (that is, a hazard leading to a hazardous situation, which in turn leads to the harm), as shown in Figure 3.1.

Figure 3.1 The ISO 14971 example sequence of events: Hazard →(P1)→ Hazardous situation →(P2)→ Harm, with P as the overall probability of occurrence of harm.




What you may not know is that the standard does make passing reference (in the notes) to the fact that there are events prior to the hazard. While it is understandable why the authors of the standard and the regulators would focus on the harm to the users and patients, the examples they provide have done a huge disservice to the manufacturers for years. This is because manufacturers generally don’t control or measure the occurrence of the hazard or the hazardous situation. Many manufacturers just try to mimic the examples in the standard; however, they end up spending significant time and resources trying to fit a square peg (their manufacturing and complaint data) into a round hole (the examples). Alternately, the engineers sitting at their desks may just use their “best estimates.” Manufacturers aren’t typically in the operating room or the customer’s home to measure how often the hazards happen before or during the use of the products. They also typically are not able to measure how often people are exposed to the hazards (that is, hazardous situations).

Figure 3.2 A typical “risk expert” sequence of events: Fault → Error → Failure → Hazard → Hazardous situation → Harm, with a seemingly random assortment of probabilities (λ, α, β, P0, P1, P2, P?, PE, PF, and P) assigned between the steps.

The examples in ISO 14971 show P1 and P2 and define them as the probabilities of “a hazardous situation occurring” and “a hazardous situation leading to harm,” respectively. If you are familiar with risk management content, you undoubtedly have seen people assign Greek letters, Px, and a seemingly random selection of terms (for example, nonconformance, fault, error, use error, misuse, failure, failure mode, malfunction, etc.) to describe the sequence of events (see Figure 3.2). The probability of occurrence of harm is often labeled as “P,” but even that may vary depending on the company or presenter. These strategies are great for showing the many intricacies that could lead to harm; however, they all have a couple of big problems.

First, they require a lot of data. If you follow these strategies, regardless of which terms you use or which event you choose to start with, you would need to gather data for as many of these steps as possible to defend your estimates of “P.” Since Annex H of the prior revision of EN ISO 14971 (2012) stated that “a review of complaint files can be sufficient verification” for low risks, this level of data analysis may not be necessary to defend your estimates of “P.” (Note that this annex was removed in the 2019 revision.)

The other big problem with these strategies is that most manufacturers don’t measure the rates of their faults, errors, failures, etc. They typically lump all of these into a nonconformance report (NCR) system, which aligns with the expectations of the Food and Drug Administration (FDA) and most outside of the United States (OUS) regulators/notified bodies. Based on the FDA’s definition of a “nonconformity” in 21 CFR 820, the NCR system is designed to cover inherent weakness, deficiency, or imperfection of the design or implementation (that is, nonfulfillment of a specification); acts that depart from or fail to achieve a requirement; and the inability of a product or component to perform its required functions (that is, nonfulfillment of a requirement).

21 CFR 820.3(q): “Nonconformity means the nonfulfillment of a specified requirement.”

As a manufacturer, you undoubtedly already measure and control the rate of nonconformances in your design and manufacturing processes in order to satisfy the FDA (and to align with ISO 13485 for OUS compliance). You undoubtedly also measure your complaint rates. By combining these various types of events (faults, errors, failures, etc.) into the NCR system, you can capture the “probability of nonconformance” in your risk documents. Then you can calculate your “P” value based on whatever intervening data you have available (the same way you would using any Px—it just gives you a much more relevant number for your metrics).
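As a rough sketch (with made-up numbers), both the “probability of nonconformance” and “P” fall straight out of data most manufacturers already track, without estimating any of the intervening Greek letters:

```python
# Hypothetical figures for illustration -- substitute your own quality data.
units_distributed = 500_000   # opportunities during the trend period
nonconformances = 250         # NCR count for the same period (all event types lumped)
harms_reported = 5            # complaints involving harm over the same period

nc_rate = nonconformances / units_distributed   # "probability of nonconformance"
p_harm = harms_reported / units_distributed     # "P": probability of occurrence of harm

# The lumped effect of all the intervening steps, if you ever need it:
p_harm_given_nc = harms_reported / nonconformances
```

Both rates share the same denominator, so they drop directly into the occurrence criteria and trending comparisons discussed in the following chapters.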




It is always remarkable how resistant people can be to change and how tightly people will hold on to their old ways, even when those ways involve additional work and distraction. As discussed in Chapter 2, there are still companies that try to force detectability into their risk programs. So, how does detectability fit into the sequence of events described in Figure 3.2? Detectability would only represent the efficiency of an inspection (for example, part of λ or α, depending on how you define your sequence) and/or part of the users’/patients’ ability to detect a hazardous situation themselves (P2). Even the pharmaceutical industry guideline, International Conference on Harmonization (ICH) Q9, Quality Risk Management, defines it only for the discovery of “hazards.”

Within those steps in the sequence there may also be other controls to which detectability wouldn’t apply (for example, manufacturing controls, filters, etc.), and detectability would only apply at all if the inspection were applied to the entire batch (not just a sample pulled for testing). As such, it only partially contributes to a couple of the steps in the sequence and obviously does not apply to the whole sequence. Also, the only way to determine λ, α, β, or P2 would be to calculate them from the initial and residual probabilities of occurrence at the applicable step anyway (for example, λ = PF/PE from the sequence in Figure 3.2). So, what really matters (and is measured) are the occurrence rates.

Therefore, calculating detectability is a waste of time, and applying some arbitrary detectability factor across the whole sequence not only violates the regulatory definition of “risk,” but it’s also just bad math (specifically, it causes compounding of the factor due to a violation of the distributive property). You wouldn’t take a factor used to calculate P and then apply it again to artificially reduce it further.
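A quick numeric sketch (all rates hypothetical) shows both points: the escape fraction is fully determined by the occurrence rates you already have, and reapplying it compounds the factor:

```python
# Hypothetical occurrence rates at one inspection step in the sequence.
p_e = 1e-3   # PE: rate entering the inspection
p_f = 1e-4   # PF: rate escaping the inspection

# "Detectability" at this step (here, the escape fraction lambda) is just the
# ratio of residual to initial occurrence -- it carries no new information:
lam = p_f / p_e   # the inspection misses 10% of bad product

# Applying a detectability factor AGAIN at risk evaluation double-counts it,
# artificially shrinking the number compared against the acceptance criteria:
double_counted = p_f * lam
```

Since λ was already used to get from PE to PF, multiplying PF by it again has no physical meaning; the resulting number understates the real rate by a factor of ten in this example.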
Rather than wasting your time and improperly distributing and applying arbitrary factors, just use the nonconformance and harm rates that you already track. Focusing on just the nonconformance and harm rates as shown in Figure 3.3 is faster, easier, more intuitive, and more compliant, and helps your organization focus on what really matters.




Figure 3.3 Sequence of events for manufacturers: Nonconformance →(P)→ Harm, with the risk control options (safe design/manufacture, protective measures/alarms, and information for safety) acting between the two events.

By capturing your NC rates in your risk documents, you would be using the data you are already capturing and trending. (A square hole for your square-peg data.) As a manufacturer, you would then use the effectiveness of your various controls (safe design and manufacture, protective measures/alarms, and information for safety) to estimate how often an NC results in harm (P). As mentioned previously, this may only require “a review of complaint files,” without the wasted effort of calculating all the interim rates—unless you need a detailed analysis.

It is definitely more intuitive for most manufacturers to evaluate how often bad things (harms) happen as a result of their NCs. When viewed from this perspective, and using language they use every day, the numbers can finally start to make sense to everyone, from the operator on the floor to the validation engineer to the complaint investigator. The systems will start to link, and the thresholds will be meaningful. It is through this lens that the risk controls also become more intuitive. That NC rate in your risk documents can now be used to determine the verification and validation sampling plans, and can be used as the NC and complaint thresholds to determine when a CAPA is needed.
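As one sketch of that link (using the standard zero-failure “success-run” formula, n = ln(1 − C)/ln(R), which the book develops in its confidence and reliability chapters), an NC-rate threshold from the risk document translates directly into an attribute sample size:

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Zero-failure sample size that demonstrates `reliability` at `confidence`."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# An NC threshold of 1% (i.e., 99% reliability) at 95% confidence:
n_95_99 = success_run_n(0.95, 0.99)   # the familiar 299-sample, zero-failure plan
```

The tighter the NC threshold in the risk document, the larger the verification/validation sample it drives, which is exactly the traceability auditors look for.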

Key Takeaways
• Manufacturers typically have NC data—not P1 or P2 data.
• Risk documents are the input for validation and the threshold for NCs and complaints.

Chapter 4 Control and Monitoring of the Sequence of Events

If you use a typical FMEA format, as shown in Figure 4.1, your documents align with the sections in ISO 14971 by capturing the following:

• initial risk assessment (identification, estimation, and evaluation of risks prior to the implementation of risk controls),
• the risk controls, and
• residual risk assessment (estimation and evaluation of risks after the implementation of risk controls).

During the assessment of the initial risks associated with your product, you would estimate the rates (the rate of harm and any preceding events you monitor) without accounting for any of your controls (see Figure 4.2). You are effectively estimating the rates as if you were to take the product right out of your process, without any inspections, protective measures, or labeling, and send it right to the customer. Even in this situation, there are risk controls outside of your influence that may reduce the likelihood of a harm occurring.


Figure 4.1 A typical FMEA (NCEA). The columns are grouped into three sections: an Initial Risk Assessment (therapy phase/design feature/requirement/process step; requirement (anti-specification); potential nonconformance; hazard ID# and specification reference; harms (potential effects) of the nonconformance; severity ratings and justification; NCi; Pi; risk; acceptability), the Risk Controls (risk control activity, split into inherent safety by design, protective measures, and information for safety; risk control verification plan; whether the risk is reducible), and a Residual Risk Assessment (NCr; Pr; risk; acceptability; risk control verification references; whether new hazards are introduced).



Figure 4.2 The sequence of events—initial probability: Nonconformance (initial probability NCi) → Hazard → Hazardous situation → Harm (initial probability Pi), moderated along the way by the probability of the NC leading to the hazard, the probability of exposure, and user/patient detection.

Where NCi is the initial probability of occurrence of the nonconformance and Pi is the initial probability of occurrence of the harm:

• Not every nonconforming product will lead to a hazard (for example, even if a circuit board short-circuits, it may still provide the function needed by the user).
• People won’t always be exposed to the hazard (for example, if the cap falls off a scalpel, the kit it’s in may be discarded without anyone reaching into that section of the kit).
• Users and patients may detect the issue without any prompting (for example, they see the sharp edge of the scalpel, or they notice the burner is hot).

You can use these controls as justification to reduce your estimates of Pi to a rate that is lower than your nonconformance rate, NCi.
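These reductions can be sketched numerically (every rate below is hypothetical, chosen only to show the shape of the calculation):

```python
# Hypothetical initial estimates, before any controls within your influence.
nc_i = 1e-3            # NCi: initial nonconformance rate
p_nc_to_hazard = 0.5   # not every NC produces the hazard
p_exposure = 0.2       # not everyone is exposed to the hazard
p_undetected = 0.5     # users/patients may spot the issue unprompted

# Pi ends up well below NCi even before you add your own risk controls:
p_i = nc_i * p_nc_to_hazard * p_exposure * p_undetected
```

Here Pi comes out an order of magnitude below NCi purely from factors outside the manufacturer’s influence.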




Once you have estimated the initial probabilities of occurrence for your NCs and the associated harms (NCi and Pi, respectively), you can add the controls that are within your influence to drive down the rates of the NC escaping and the harm occurring (residual probabilities, NCr and Pr, respectively) (Figure 4.3). Your internal controls (for example, inspections, filters, manufacturing controls, etc.) influence the rate at which nonconforming product makes it out the door, NCr. This rate represents the defect rate you test to during your finished goods testing.

Once the product is outside of your control, those same controls that are outside of your influence will still apply; however, you can add some limited influence to reduce the rate of harm once the NCs occur in the field by adding protective measures and by supplementing the users’/patients’ ability to detect the issue by providing them information for safety and training. The information for safety and training “is instructive and gives the user clear instructions of what actions to take or to avoid, in order to prevent a hazardous situation or harm from occurring” (per ISO/TR 24971).

Notice that the priority order of risk controls, as delineated in the EU MDR/IVDR and ISO 14971, follows the same sequence of events. The elimination or reduction of risks through safe design and manufacture are activities that are within the control of the manufacturer and are typically monitored through various metrics (for example, NC rates such as scrap/yield, process trends (inputs), product trends (outputs), etc.). By measuring and monitoring the NC rates you control, you are not only aligning your risk documents with the expectations of the regulation, but you are also making the thresholds in your risk documents meaningful and applicable. This is a revolutionary concept when compared to P1 and P2!
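The step from initial to residual probabilities can be sketched the same way (control-effectiveness figures below are hypothetical):

```python
# Hypothetical initial rates and control effectiveness for illustration.
nc_i, p_i = 1e-3, 5e-5

# Internal controls (inspections, filters, manufacturing controls) catch a
# fraction of NCs before they ship:
inspection_effectiveness = 0.90
nc_r = nc_i * (1 - inspection_effectiveness)   # NCr: rate escaping to the field

# Protective measures and information for safety further cut the harm rate:
protective_effect = 0.50
p_r = p_i * (1 - inspection_effectiveness) * (1 - protective_effect)
```

NCr is the defect rate your finished-goods testing must demonstrate, and Pr is the residual harm rate evaluated against the acceptability criteria.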

Figure 4.3 The sequence of events—residual probability. Internal risk controls reduce the initial nonconformance rate (NCi) to the residual rate that escapes to the field (NCr), and protective measures and information for safety reduce the harm rate from Pi to Pr. Across the sequence, the manufacturer has the ability to control and monitor the internal risk controls (e.g., V/V, metrics), no influence over the probability of the NC leading to the hazard or the probability of exposure, some influence through protective measures, and limited influence over user/patient detection through information for safety.



Many of you will read this chapter and say, “Well, we do this already—we just call it something else (for example, ‘O,’ ‘F’).” If that’s the case, what is preventing people from using the rates from your risk documents in their everyday activities? It’s likely they are either confused by the terminology or unable/unwilling to convert their data to the same units of measure used by the risk documents.

To reduce possible confusion from the terminology, it is recommended that you title the rate/threshold “NC,” not some other term or letter. By using the same language in your risk documents that you use in your other quality system processes, people will be more likely to understand the connections. Manufacturers are interested in how often harms result from the issues they monitor and control (for example, NCs). For consistency in your terminology, you may want to describe your risk tools as nonconformance effect analyses (NCEAs), instead of FMEAs. (They are an analysis of the effects when things are not right.) Note: The term “FMEA” will continue to be used throughout this book to assist experienced risk-people with understanding the concepts, but the term should switch to “NCEA” when you implement the concepts, for consistency with your other QMS processes.

Some people don’t use the thresholds from the risk documents because their quality data (for example, complaint and NC) trending often calculates the number of events, while the risk documents typically delineate a rate (for example, %, ppm, etc.) instead. When their system data do not use the same units of measure as the risk documents, they would need to convert them somehow. Since people often don’t know how to compare their data to the risk thresholds, they create new ones; that is, they guess or pull their own limits out of thin air.
For this reason, it is imperative that whatever formula (denominator) you use to calculate the risk thresholds for your risk documents also be used for the quality data (e.g., complaint and NC) trending. The next chapter will address how




to calculate and define the occurrence criteria in the associated RMPs. Your trending procedures should instruct the analysts to use the same formula (denominator) delineated in the associated product/product-family RMPs, so your people can do apples-to-apples comparisons.
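The apples-to-apples comparison is a one-liner once both sides share the RMP’s denominator (all numbers below are hypothetical):

```python
# The SAME denominator must come from the product RMP for both sides.
opportunities = 200_000      # sales/uses for the trend period, per the RMP
complaint_count = 3          # what the trending system actually counts

actual_rate = complaint_count / opportunities   # convert count -> rate
rmp_threshold = 1e-4                            # occurrence threshold from the RMP

exceeds_threshold = actual_rate > rmp_threshold
```

If the trending procedure used a different denominator (say, lots instead of units), this comparison would be meaningless, which is exactly why the formula must be shared.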

Key Takeaway
• Measure what you can control, control what you can measure.

Chapter 5 How to Define Occurrence Criteria in the RMP

To capture the probability of occurrence, most manufacturers delineate a table of occurrence criteria, similar to the one shown in Table 5.1. The rows within these tables may be described as categories, ratings, or levels. Depending on the company, there may be anywhere from three to ten of these categories. These tables generally include a “common” or “qualitative” term, some qualitative description, and some objective, quantitative rates (typically decimal or part/complaint per million rates). Often these tables are predefined in some SOP, which can make them virtually unusable when the company has a mix of high-sales and low-sales products.

Table 5.1 Example occurrence criteria.

Rating | Qualitative Term | Qualitative Definition | Harm/Reliability Rate (Decimal)
5 | Frequent | The event is expected to occur at a high rate or multiple times during the life of the product. | 10^-3 ≤ x
4 | Probable | The event is expected to occur often during the life of the product. | 10^-4 ≤ x < 10^-3
3 | Occasional | The event is expected to occur sometimes during the life of the product. | 10^-5 ≤ x < 10^-4
2 | Remote | The event is not expected to occur, but could occur in rare situations. | 10^-6 ≤ x < 10^-5
1 | Improbable | The event is extremely unlikely to occur during the life of the product. | x < 10^-6




Per ISO 14971, the risk evaluation shall use the criteria “defined in the risk management plan.” The occurrence criteria need to be delineated for each product or product-family and should be based on the estimated sales and/or use of the product (that is, the opportunities for something to happen). The occurrence table for a given product or product-family needs to provide sufficient resolution of the data to identify trends. In other words, your product trending should be able to use each of the rating levels in your table. If you have an occurrence rating level in your table that you can never achieve, get rid of it and make your rating levels realistic for your product/product-family.

The reliability threshold rate used for the lowest occurrence rating (typically assigned to category “1”) in the table should be no less than 1/(Sales or Uses) for the anticipated monitoring/trending period. If a lower threshold rate is used, a single occurrence of the event will automatically cause the actual rate to exceed the threshold rating. For example, using a 1 ppm reliability threshold rate for a single-use product that sells 200,000 units/year would require five years of sales (one million total units sold) before a single event would be acceptable and remain within a 1 ppm threshold for a rating of “1.”

If the lowest category on your table has an occurrence rate that is set so low that even a single event would bump you out of that category, then you are effectively saying that any events you score at that lowest category won’t actually happen. Since the definition of risk (as previously discussed) includes an element of the occurrence of the harm, if there is no occurrence anticipated, there is no risk. If there is no risk, then it doesn’t belong in your risk documents, and the occurrence criteria should be adjusted to provide resolution for the rates of actual risks for the product or product-family.
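The chapter’s 200,000-units/year example can be checked in a couple of lines:

```python
import math

units_per_year = 200_000   # single-use product from the example
threshold_rate = 1e-6      # a 1 ppm threshold for rating "1"

# Years of sales needed before even ONE event stays within 1 ppm:
years_needed = math.ceil((1 / threshold_rate) / units_per_year)

# A sensible floor for the lowest rating's threshold is 1/opportunities:
min_threshold = 1 / units_per_year   # 5e-6 for a one-year trend period
```

Any threshold below `min_threshold` declares, in effect, that the event cannot happen at all during the trend period.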
As discussed in the previous chapter, it is imperative that the quantitative rates you delineate in your RMPs connect as directly as possible to the trending for your quality data (for example, complaints and NCs). To effectively accomplish this connection, you have two options: (1) adjust your data to align with the criteria, or (2) adjust the criteria to align with the data.

If you use a formula to calculate the threshold rates (for example, occurrences per opportunity for the trend period) in your risk documents, then you need to use that same formula to adjust your quality data (for example, complaints and NCs). Alternately, if the number of opportunities, as described previously, is fairly consistent for your product/product-family, it may be possible to instead adjust the occurrence criteria from a rate to a number of occurrences. This would give your people an easy apples-to-apples number for comparing the threshold number of occurrences to the actual number of occurrences seen in your quality data.
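When the opportunities are steady, the rate criteria convert to whole-number criteria analysts can use directly (the thresholds below are hypothetical):

```python
# Hypothetical rate thresholds from an RMP, with a steady ~500,000
# opportunities (sales/uses) per trend period for this product family.
opportunities = 500_000
rate_thresholds = {5: 1e-3, 4: 1e-4, 3: 1e-5, 2: 4e-6, 1: 2e-6}

# Whole-number thresholds to compare directly against event counts:
count_thresholds = {rating: round(rate * opportunities)
                    for rating, rate in rate_thresholds.items()}
# Note the rating-"1" threshold stays at >= 1 event (the 1/opportunities floor).
```

Now a complaint analyst can say “more than 5 events this period moves us from rating 3 to rating 4” without doing any unit conversion.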

Key Takeaway
• Risk tables need to be realistic for your product/product-family and documented in the RMP, not an SOP.

Chapter 6 How to Define Severity Criteria—A Master Harms List

Remember: From our definitions of “risk” and “harm,” per ISO 14971, harms should be evaluated for effects to people, property, and the environment.

Similar to “occurrence,” most manufacturers delineate a table of severity criteria. Depending on the company, there may be anywhere from three to ten of these categories as well. These tables generally include a summary term and a more detailed description. We will evaluate example criteria for severity ratings later. For now, we will focus on just defining the terminology used to describe the harms and connecting the harms to the ratings.

You should be able to generate a list of the various harms associated with your devices (for example, from clinical/performance studies, quality data from similar products, etc.), as shown in Figure 6.1. The list should use terminology typically used by the users/patients, so it will align with the language used when someone calls in a complaint. This list can then be reviewed by your medical/clinical teams to assign the severity ratings associated with each harm. The list can then be incorporated into a table in your FMEAs and used as a drop-down for the risk team to use when filling in the detailed analysis. This way your team won’t need to pull ratings “out of thin air,” waste time arguing about the ratings, or change them within and between documents. The medical/clinical representative should be someone familiar with the actual clinical use of the product (for example, a doctor who has performed the surgery or diagnosed the disease/condition, a nurse who has used the product to treat patients, etc.).

Figure 6.1 Building a master harms list. Information sources (clinical/performance reports; quality data from the product and similar products; production and post-production data; literature search; review of similar products; external standards) feed a harms list with predefined severity ratings, for example:

Harm | Severity
Burns | 2
Cut, scrape or puncture wound | 2
Death | 5
Delay of procedure | 1
Delay of treatment | 2
Discomfort—moderate | 2
Dissatisfaction | 1
Electrical shock | 2
Hearing damage | 2
Drug toxicity | 4
Infection—local | 2
Infection—systemic | 3
Injury—minor | 2
Injury—moderate | 3
Injury—serious | 4
Irritation/inflammation | 2
Progression of disease | 4
Unnecessary treatment | 3

Note: Some data and testing may not be available when initiating the risk analysis activities, and the analysis may need to be updated as new information becomes available through the development process.




To ensure the severity ratings are consistently applied across all of your products, generate a “master” list by combining the harms identified for each product. By having your medical/clinical representatives predefine the ratings for the harms, you will increase the accuracy of the ratings applied to each harm, increase the efficiency with which your teams can generate risk documents, and add consistency to your risk management process across your entire product portfolio.
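In a spreadsheet this is simply a drop-down backed by a lookup; the same idea, sketched as code with a few illustrative harms and ratings (examples only, to be predefined by your medical/clinical reviewers):

```python
# A few illustrative entries from a master harms list; the ratings here are
# placeholders and must come from medical/clinical review in practice.
MASTER_HARMS = {
    "Burns": 2,
    "Death": 5,
    "Delay of procedure": 1,
    "Drug toxicity": 4,
    "Infection—systemic": 3,
    "Injury—serious": 4,
}

def severity(harm: str) -> int:
    """Look up the predefined rating; unknown harms go back to clinical review."""
    if harm not in MASTER_HARMS:
        raise KeyError(f"{harm!r} is not on the master harms list; route to review")
    return MASTER_HARMS[harm]
```

Because the only way to add a rating is through the master list, risk teams cannot quietly invent or tweak severities document by document.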

Key Takeaway
• Having medical/clinical representatives predefine ratings for the harms increases accuracy, efficiency, and consistency.

Chapter 7 Establishing Design Inputs and Process Controls

Now that we have a risk management plan for the product that delineates the occurrence and severity criteria, and a master harms list that delineates how bad each harm is, we can start developing our product using a methodical process that involves risk assessment and control. As mentioned previously, this may be an iterative process, since we may adjust the sales and use estimates that fed into our occurrence criteria. There may also be a lot of iteration as the product design and manufacturing processes are developed.

As shown in Figure 7.1, to start developing a product, we first need to know what the users want/need. These wants/needs are typically documented in some type of user requirements document (this could go by another name, such as customer requirements or marketing specification). These requirements need to include not only the users’ wants/needs but also the regulatory requirements and any external standards that will be applied to the product.

Figure 7.1 Defining user requirements.




Once you have defined what your product needs to be able to do, you can start to identify the risks associated with your product not meeting those needs. This may be documented in a use failure mode and effects analysis (uFMEA) (Figure 7.2) or a similar document (for example, an application FMEA or use risk assessment). If the instructions for use (IFU) are available for the product or for a similar product, the user errors and reasonably foreseeable misuse can also start to be identified in the same tool.

Figure 7.2 Tracing user requirements to risk: the user requirements feed the uFMEA and trace matrix.

As discussed in Chapter 5, since the definition of risk includes an element of the occurrence of the harm, if there is no occurrence anticipated, there is no risk. As such, if you have real-life data (for example, for a legacy product, from similar products, or from clinical/performance studies), you can use the real-life data to ensure your risk documents are capturing realistic risks. Remember, the purpose of a clinical study is for the patients to report 100% of their symptoms. Therefore, your clinical trials should provide a list of all the harms and the rates of those harms for your product and any associated procedures. Since post-market complaint reporting is generally less than 100% of the actual events (that is, not everything gets reported to you), the clinical study rates provide a good starting point for the master harms list and the occurrence thresholds.

This also provides a basis for determining whether an investigation is needed. For example, if during the life of the product you encounter a harm that wasn’t seen during the clinical studies, or if your rates have exceeded the rates from the clinical studies, you will want to determine why and what changed.
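That comparison against the clinical baseline can be sketched as follows (study rates below are hypothetical):

```python
# Harm rates observed during the clinical study (hypothetical figures),
# used as the baseline for post-market monitoring.
CLINICAL_RATES = {"Infection—local": 0.02, "Irritation/inflammation": 0.05}

def investigation_needed(harm: str, postmarket_rate: float) -> bool:
    """Flag harms never seen in the study, or rates exceeding the study's."""
    baseline = CLINICAL_RATES.get(harm)
    return baseline is None or postmarket_rate > baseline

new_harm = investigation_needed("Hearing damage", 0.001)   # never seen in the study
rate_up = investigation_needed("Infection—local", 0.03)    # exceeds the 2% baseline
in_line = investigation_needed("Infection—local", 0.004)   # within the baseline
```

Either trigger, a brand-new harm or a rate above the study’s, is a signal to ask why and what changed.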




Note: Lawsuits may skew reporting to greater than 100%. Once the lawyers start advertising, you could end up with more lawsuits than products sold.

It is at this point that you may also apply other standards. For example, if you are manufacturing medical electrical equipment, you will likely apply an electrical standard (for example, IEC 60601-1) that requires you to evaluate risks “under normal condition and single fault condition.” If you apply EN 62366-1, Application of usability engineering to medical devices, to your product, you would “assess and mitigate risks associated with correct use and use errors, i.e., normal use.” It is important to note that these standards require you to assess the risks that occur when your product works perfectly; however, few companies actually assess these risks adequately. Everyone wants to blame a problem/harm/complaint on something, so they typically try to force all risks to have a cause (for example, a failure mode/nonconformance) from their device and just ignore the assessment of risks from correct use of the product.

By definition, an FMEA identifies risks associated with “failure modes.” Since this tool may not be applicable for determining the risks under normal/correct use of the device, there are a couple of alternative approaches you could choose to take. One simple method would be to identify each requirement/specification and list the nonconformance as “correct use,” “normal use,” “no nonconformance,” or some similar language, and then analyze and evaluate the associated risks. The other method would be to identify and document these risks through a different tool.

As described in ISO 14971, the criteria for risk acceptability need to “take into account available information such as the generally acknowledged state of the art.” The standard defines “state of the art” to include the applicable “products, processes, and services.” Presumably the assessment of the state of the art would include an evaluation of the


risks associated with the process (for example, even if your pacemaker works properly, the patients are typically already in a compromised condition, and there are risks associated with the surgery itself) as well as the risks associated with the products (for example, similar devices). (See the Risk Evaluation vs. Benefit-Risk and Acceptability section in Chapter 23 for additional discussion of using the state of the art to set the risk evaluation criteria.) The risks associated with the normal/correct use of the product may be assessed and documented along with the assessment of the state of the art.

There are many options available for requirements-management software; however, few include traceability through the associated risks, and fewer companies actually use these software options effectively. Because of the limited use of requirements-management software and the nearly ubiquitous use of spreadsheets (for example, MS Excel, Google Sheets, etc.), the examples in this book are spreadsheets. By linking the user requirements (including the IFU steps) to the uFMEA in a traceability matrix, you can ensure that risks have been identified for all the requirements. Additionally, by linking the requirements and risks through a traceability matrix, the identification of new, additional risks may indicate a user requirement that had been previously overlooked or undocumented. This link between user requirements and risks may be a many-to-one relationship and/or a one-to-many relationship. In other words, there may be many risks associated with a given user requirement; and conversely, a given risk (and associated design control) may apply to a variety of user requirements. The traceability matrix will help map these relationships and ensure the impact of any future changes can be more easily assessed (Figure 7.3). The management of use risks also involves identifying risk control measures.
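The many-to-one and one-to-many links described above can be sketched as a pair of maps, so a change on either side can be impact-assessed in both directions. This is only an illustration; the requirement and risk IDs below are hypothetical, not from any real trace matrix:

```python
# A traceability matrix kept as two maps, so impact assessment
# works in both directions (requirement -> risks and risk -> requirements).
req_to_risks = {
    "UR-001": ["RISK-010", "RISK-022"],  # one requirement, many risks
    "UR-002": ["RISK-022"],              # one risk shared by many requirements
}

# Invert the map to get the risk -> requirements view.
risk_to_reqs: dict[str, list[str]] = {}
for req, risks in req_to_risks.items():
    for risk in risks:
        risk_to_reqs.setdefault(risk, []).append(req)

# A change to the control for RISK-022 flags both UR-001 and UR-002 for review.
```

Real requirements-management tools maintain these links for you; the point is only that the inverted view is what makes change-impact assessment cheap.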
Companies can control how people use their products by the way they design them. They can design their products so people can only use them in a way that is safe and still meets their user requirements. If part of the design does not mitigate/control a risk associated with the use of the device, then we probably don’t need a specification for it.

Figure 7.3 Design specifications to control use risks.

There are a few things to consider regarding linking the design specifications to the use risks, however. If a design specification is deemed to be necessary, then there’s probably an associated user requirement. Some design specifications (for example, color) may be considered as only cosmetic; however, customer dissatisfaction can be considered a harm—typically a low-severity harm, but a harm nonetheless. Also, the design specifications need to be characterized to correspond with the limits of the user requirements. Just as with the traceability matrix from the user requirements to the uFMEA, the matrix should grow as you develop your product and process to link the user requirements and their associated risks to the applicable design specifications that control those risks. When a new, additional design specification is identified, you will need to add the associated user requirement and use risks to the user requirements and uFMEA.

When determining the design specifications, studies should be performed to ensure the specifications are meaningful and actually align with the point where the associated user requirement will no longer be met. In other words, the upper and lower design limits must be established. Some companies may also define another set of limits within the limits of failure (for example, alert or action limits) to guard against limit failures. These extraneous, tighter limits are often referred to as “guard bands”; however, this can be viewed as excessively restrictive in some cases, especially when the variability of the design and process is characterized and the validation (as we will discuss) demonstrates the process to be both capable and stable within the specifications.




If a company wants to establish some type of guard band within the specifications, these should be clearly delineated as target values and not specifications. The company should also delineate in its procedures that NCRs and CAPA investigations are not necessary for results that are outside of these guard bands and are still within the specification that was established to correspond to the user requirement. It is amazing how much time and effort companies waste tracking, evaluating, and investigating results that don’t meet a target but are still within specification—typically only to write a “use-as-is” rationale anyway.

Once the specifications for the design of the product are established, the risks associated with not meeting those specifications can be identified (for example, in a design FMEA [dFMEA]) (Figure 7.4). Again, the traceability matrix should continue to grow to link the design specifications to the associated risks and vice versa. As with the uFMEA, this will ensure that risks have been identified for all the specifications and will provide the mechanism to drive updates to the design specifications and user requirements as new, additional risks are identified.

Figure 7.4 Tracing design specifications to risk.

This process is then repeated as the manufacturing processes are developed. As part of the management of design risks, risk control measures need to be identified. We can control the design of our products by controlling the process used to make them. We can develop our manufacturing process so it only produces products that meet the design requirements, ensuring the risks are controlled. If part of the process does not mitigate/control a risk associated with the design of the device, then we probably don’t need a specification for it. Conversely, if a process specification is deemed to be necessary, then there is probably an associated design requirement; thus, the associated dFMEA, user requirements, uFMEA, and traceability matrix should be updated accordingly (see Figure 7.5).

Figure 7.5 Process specifications to control design risks and tracing process specifications to risk.

Once you establish the specifications for the product manufacturing process, you can identify the risks associated with not meeting those specifications (for example, in a process FMEA [pFMEA]). Again, the traceability matrix should continue to grow to link the process specifications to the associated risks and vice versa. As before, this ensures you have identified risks for all of your specifications and have a mechanism to drive updates.

Key Takeaway • Don’t pull risks out of thin air; product development is a methodical process that involves risk assessment and control.

Chapter 8 Risk Analysis and Evaluation ●  ●  ●

A typical FMEA includes rows and rows of all the things that could possibly go wrong with a device. As discussed earlier, each row identifies a different sequence of events and the associated harm from that sequence of events. Some of these documents may identify more than 1,000 lines of possible events (I know this from experience—I used to make some that big). Typically, the severity of the associated harm and the occurrence of that sequence of events (for that specific row) are then evaluated against the risk criteria to determine if the risk is acceptable or unacceptable. (Note: The terms “acceptable” and “unacceptable” are used in this text; see “Risk Evaluation vs. Benefit-Risk and Acceptability” in Chapter 23.) Some companies color-code this evaluation (for example, red/green, red/yellow/green).

There are, however, compliance and practical issues associated with evaluating thousands of lines of possible events. Per ISO 14971 and the EU MDR/IVDR, the residual risk evaluation is for the risk, not the sequence of events (for example, the hazard and hazardous situation). Therefore, the evaluation of the occurrence of a sequence of events (for a specific row in an FMEA) does not meet the letter or the intent of a risk evaluation. To properly assess residual risks in which multiple nonconformances (that is, sequences of events) lead to the same harm, the sum of the probabilities of occurrence of that harm from all associated nonconformances should be used. In other words, you need to evaluate the severity of a harm and the total residual rate of that harm from all the various causes (for example, the P-Total) against the risk evaluation criteria.

Harms (Potential Effects)

Index  Harm (Clinical/Patient, User, Environment)  Severity  Reported Rate % (if applicable)
1      Burns                                       2         0
2      Cut, scrape or puncture wound               2         0
3      Death                                       5         0
4      Delay of procedure                          1         0
5      Delay of treatment                          2         3
6      Discomfort—moderate                         2         0
7      Dissatisfaction                             1         0
8      Electrical shock                            2         0
9      Hearing damage                              2         0
10     Drug toxicity                               4         0
11     Infection—local                             2         0
12     Infection—systemic                          3         0
13     Injury—minor                                2         0
14     Injury—moderate                             3         0
15     Injury—serious                              4         0
16     Irritation/inflammation                     2         0
17     Progression of disease                      4         0
18     Unnecessary treatment                       3         2

(The full table also includes columns for Information Source, Instances Utilized, Total Threshold Rate (P-total) (for example, 0.000101 and 0.000021), and a color-coded Risk evaluation (for example, Green/Yellow).)

Figure 8.1 Example harm input and summary table.

One way to do this is to use an Excel spreadsheet for your FMEAs (Figure 8.1). Then you can use your harms list (specific to that product/product-family) to count the number of times a harm was identified in the rows of possible events and add up the rates from each of those rows. This will give you the total threshold rate for that harm. You can then use this total threshold rate with the severity for the risk evaluation.
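The same count-and-sum logic the spreadsheet performs can be sketched in a few lines of Python. This is a hypothetical illustration; the harm names and row rates are made up, not taken from a real FMEA:

```python
from collections import defaultdict

# Hypothetical FMEA rows: (harm, residual occurrence rate for that row).
fmea_rows = [
    ("Infection—local", 0.00001),
    ("Infection—local", 0.00002),
    ("Death",           0.000001),
    ("Infection—local", 0.000005),
]

p_total = defaultdict(float)  # total threshold rate per harm (P-Total)
instances = defaultdict(int)  # how many rows lead to that harm

for harm, rate in fmea_rows:
    p_total[harm] += rate
    instances[harm] += 1

# p_total["Infection—local"] sums to ~0.000035: it is this total (not the
# individual row rates) that is evaluated, together with the harm's
# severity, against the risk acceptability criteria.
```

The spreadsheet equivalent is a COUNTIF/SUMIF pair keyed on the harm column; the mechanism, not the tool, is what matters.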




By summing the rows of possible events into a total rate, there is a side effect with extraordinary practical benefit. If your risk team generates more than 1,000 lines of useless estimates for events that will never actually occur, they will drive up the total projected threshold rates for the harms. This may cause the total occurrence rate for these harms to move to an unacceptable level. As discussed in the chapter for setting the occurrence criteria, there is an element of the occurrence of the harm in the definition of “risk.” If there is no occurrence anticipated, there is no risk, and the row should simply be removed. By totaling the rows, the risk team is more likely to make the assessment realistic. This smaller, realistic risk document then becomes far more usable than the 1,000-plus line monstrosities. (Note: I don’t make 1,000-plus line FMEAs any longer, and neither should you.)

One way to ensure you aren’t artificially inflating your risk documents is to start with the data from your NC and complaint processes (for the product and/or similar products) and from your clinical/performance studies. If your product is a legacy product that has been on the market for many years, it could easily be argued that if a nonconformance and risk haven’t happened yet, they probably never will. In that case, all your risk documents really need to include are the risks associated with actual events. (Remember, occurrence is part of risk, and if it hasn’t happened/won’t happen, then it isn’t a risk.) Starting with a risk document that only includes real data will already give you a much more manageable and efficient starting point. Starting with real data will also make the drafting of the risk documents easier and more objective. If a small, realistic risk document makes you nervous, you could add some of the risks you think might still happen someday.
In discussions with risk engineers, some thought it would be value-added to keep (or generate) a version of the FMEAs that included everything that had been considered, not just those that made the final list of risks that may actually occur. While this type of detailed analysis could be beneficial to show the evidence of the risk management activities and beneficial for investigations as new issues occur, it is not a functional, user-friendly document and, as noted, could provide false estimates of the risk. If you want to capture this information, you could include it in an appendix (for example, a separate tab) in the FMEA or file it as a separate document to assist with product/process investigations. To keep your risk documents relevant and functional, non-risks (that is, events that will not actually happen) should not be included.

Now that the rows of possible events have been whittled down to those that may actually occur, there is another step to make the documents even more relevant and functional. By compiling a list of the various nonconformances, you can define the thresholds for your NCRs and complaints (Figure 8.2). Compiling the list of nonconformances has the added benefit of driving consistency to the wording used to describe the events. These lists then become the “buckets” (categories) for the complaint and NCR processes. (You’ll have the terminology for categorizing and trending the events, and the thresholds to determine when to start investigating.)

Nonconformances (Faults/Errors/Failures/Failure Modes)

Index  Nonconformance  Total Threshold Rate (NCi-total)  Total Threshold Rate (NCr-total)
1      Too small       1.000100                          0.100010
2      Too big         1.000000                          0.000100
3      Does not open   0.001000                          0.000100
4      Does not close  0.000100                          0.000100
5      NC 5
6      NC 6
7      NC 7

(The full table also includes columns for Information Source, Instances Utilized, and Reported Rate % (if applicable).)

Figure 8.2 Example nonconformance input and summary table.

As discussed in Chapter 4, “Control and Monitoring of the Sequence of Events,” the initial rates (NCi) represent the rates for issues prior to the risk controls. By totaling the rates for all the rows for each nonconformance, you can use these total initial rates as the thresholds for nonconformances that are identified before or during production (for example, incoming inspections, some events in production, etc.).




Since the residual rates (NCr) represent the rates after all of your risk controls, you can use the total of the rates of all the rows for each nonconformance as the threshold for your complaints and for some nonconformances that are identified during or after production (for example, some events in production, final acceptance testing, etc.). We will discuss how to use these thresholds more in a later chapter.
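How the NCi and NCr totals might be wired into NC and complaint screening can be sketched as follows. The function name and the threshold values are illustrative only, not from the book's example tables:

```python
# Illustrative thresholds for one nonconformance category:
# NCi = total initial rate (before risk controls), used for incoming
# inspection and in-process events; NCr = total residual rate (after
# risk controls), used for final acceptance and complaints.
nc_thresholds = {
    "Does not open": {"NCi": 0.001000, "NCr": 0.000100},
}

def needs_investigation(nc: str, observed_rate: float, residual: bool) -> bool:
    """Compare an observed rate for an NC category against the applicable
    threshold: NCr for post-control data, NCi for pre-control data."""
    threshold = nc_thresholds[nc]["NCr" if residual else "NCi"]
    return observed_rate > threshold

# An in-process rate of 0.0005 is under the NCi threshold (no investigation
# triggered), but the same rate seen in complaints exceeds the NCr threshold.
```

The same comparison can live directly in the NCR/complaint trending spreadsheet; the point is that the two thresholds come from the risk file, not from an arbitrary trending rule.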

Key Takeaways • Sum up how often a harm (risk) can happen from all the various causes (P-Total). • The NCi and NCr totals define how often an NC can occur and how often a complaint can occur.

Section 2 Risk in Verification/Validation ●  ●  ●

Chapter 9 Determining What’s Critical to Quality
Chapter 10 AQL or LTPD?
Chapter 11 Confidence
Chapter 12 Reliability
Chapter 13 How to Distribute Samples—The Invalid Assumption
Chapter 14 Continued Process Verification
Chapter 15 Test Method Validation


Chapter 9 Determining What’s Critical to Quality ●  ●  ●

I’ve worked with several companies that thought it was critically important (pun intended) to define which characteristics were critical to the quality (CTQ) of the product. Some defined a characteristic as CTQ if the process specification or design specification was tied to any risk in a risk document. As mentioned earlier, when designing a product and process, why would you define a specification if it wasn’t associated with some risk? Other companies independently assign some design and process specifications as being CTQs, often using an arbitrary, pull-it-out-of-the-air judgment by some engineer. Either way, if you define CTQ characteristics as being tied to any risk or arbitrary judgment, I’m not sure what it gets you.

There are plenty of regulatory requirements for verifying and validating a design and a process. If you have properly linked your specifications to the risks they control, then you will undoubtedly need to verify and/or validate the effectiveness of that control (that is, you will need to test each specification for the product/process to ensure the NC rate is less than the NC threshold in the risk document). Period. If all the specifications tied to risks are defined as CTQs, then the term becomes redundant, and if CTQs are assigned arbitrarily only to some specifications, you still need to validate them all. Either way, it doesn’t change what you need to do.

However, if you (or your management) are adamant that you need to delineate CTQs for some reason, I recommend doing so using the severity of the risks associated with the characteristic. As described in Chapter 6, most manufacturers delineate a table of severity criteria. Typically, “death” and “serious injury” are used as the top two categories for the patient harms in these tables. Coincidentally, per 21 CFR 803, the FDA requires companies to report “deaths and serious injuries that a device has or may have caused or contributed to.” If your engineers want to tie the CTQs to the function/operation of the device, you could define severity criteria for people, property, and the environment as described in ISO 14971 (see Table 9.1) and align the criteria for the critical harms to the property/process and the environment. If you define “CTQ” as any characteristic associated with a critical risk (for example, in the top two levels of severity), then you can assess the various types of risks associated with your product and process and adjust the amount of verification/validation you do. However, as you will see in the next few chapters, the severity can be used to adjust the amount of verification/validation without the need for defining CTQs. The only possible use I have found for CTQs is to differentiate between warnings and precautions, which we will address in a later chapter.

Key Takeaway • Don’t waste your time defining CTQs. But if you do, tie them to the severity.

Table 9.1 Example severity criteria for ISO 14971.

Severity 5 (Catastrophic)
• Patient/Process Operator: Death/life-threatening.
• Business/Process: Process is inoperable or process is unable to produce product within specification.
• Environment: Extensive, irreparable environment, facility, or equipment damage (e.g., destruction of building or room).

Severity 4 (Serious)
• Patient/Process Operator: Serious injury (permanent impairment of a body function or permanent damage to a body structure, or necessitates intervention to preclude permanent impairment/damage).
• Business/Process: Process is disrupted, adjustment or temporary process is needed to produce product or tool/equipment repair required; components or materials all scrapped.
• Environment: Irreparable damage to the environment, facility, or equipment (e.g., destruction of equipment).

Severity 3 (Moderate)
• Patient/Process Operator: Temporary or reversible illness, injury, or impairment. May require intervention; however, intervention is not to preclude permanent impairment/damage.
• Business/Process: Process performance degraded and subsequent processes or process steps are affected or tool/equipment repair required; components or materials sorted (not at work station) with a portion being scrapped.
• Environment: Environmental, facility, or equipment damage that is reversible only with professional remediation (e.g., damage to equipment requiring engineering repair).

Severity 2 (Minor)
• Patient/Process Operator: Temporary, reversible, or non-serious illness, injury not involving intervention, or moderate discomfort/stress.
• Business/Process: Process performance degraded, but is operable and able to produce product without effect to subsequent processes or process steps; components or materials reworked with little to no associated scrap at the station.
• Environment: Environmental, facility, or equipment damage that is reversible without remediation (e.g., damage to equipment repairable by operator or destruction of other supplies).

Severity 1 (Negligible/cosmetic)
• Patient/Process Operator: Inconvenience, temporary discomfort, nuisance or cosmetic impact, or minor discomfort/stress.
• Business/Process: Negligible/slight disruption to process.
• Environment: Virtually no negative effect. Cosmetic damage to the environment, facility, or equipment.

Chapter 10 AQL or LTPD? ●  ●  ●

Do you use an acceptance quality limit (AQL) sampling plan (for example, ASQ/ANSI Z1.4 and Z1.9, ISO 2859-1, etc.) for initial process qualification (validation) testing? Do you claim your validations are “risk based” but don’t use both severity and occurrence to set the sample size? If you are using AQL sampling for the initial qualification/validation, be sure to do a quick search of the FDA’s warning letter database; there are plenty of examples of warning letters being issued to companies that tried using AQLs improperly. Although it is no longer available on the FDA’s website, a warning letter issued to ONBO Electronics Company Ltd. on August 19, 2008, highlighted the use of the lot tolerance percent defective (LTPD) “for the sampling plan used in process validation.” It noted that the use of AQL testing for batch release “is not adequate” for the validations since “it does not establish that the process is validated with a high degree of assurance.” Conversely, LTPD (aka rejectable quality limit [RQL]) sampling is appropriate for validations but would not be appropriate for batch release acceptance criteria since it is “irrespective of lot size.”

Acceptance quality limit (AQL) evaluates the producer’s risk. Lot tolerance percent defective (LTPD) evaluates the consumer’s risk.




Also, recall that risk, by definition, has two parts: occurrence and severity. If your validations don’t use both the occurrence and severity thresholds from your risk documents, then your validations are not risk based. The American Society for Quality (ASQ) has described a “two-part process for selecting statistically valid sampling plans”1 that includes defining the objective of the plan and demonstrating that the plan meets the objective. It recommends defining an operating characteristic (OC) curve to document the AQL and the LTPD (see Figure 10.1). ASQ goes on to add that “Using recognized standards does not ensure that a sampling plan is statistically valid. It is the responsibility of the user to make sure they are using an appropriate plan based upon the objective of the inspection. It is the actual AQL and LTPD of the sampling plan that describes its protection and determines its validity.”

The National Institute of Standards and Technology (NIST) also provides some guidance for several types of lot acceptance sampling plans,2 of which we will focus on “single sampling plans.” Since the AQL evaluates the producer’s risk, it can be used for routine batch release testing; however, it is not appropriate for the initial validation testing. The initial validation testing is intended to be an evaluation of the protection of the consumer (that is, consumer’s risk). Proper initial validation sampling needs to assess the consumer risk using the LTPD. There are numerous formulas available for calculating LTPD sample sizes for attribute or variable characteristics, including binomial, Poisson, and hypergeometric variations. Most of these formulas calculate the sample size using the desired confidence and reliability. Confidence is the measure of certainty that the testing represents the actual population, and reliability is the measure of the defect/failure rate. (Recall that most manufacturers group failures into their NCR process.)
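As one concrete instance of such a formula (a sketch of the common zero-failure binomial case, not the book’s prescribed method): the “success-run” plan finds the smallest n such that 1 − R^n ≥ C, where R is the required reliability (1 minus the LTPD expressed as a fraction) and C is the confidence:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n with 1 - reliability**n >= confidence,
    assuming zero failures are allowed in the sample."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# 95% confidence that the process is at least 99% reliable (LTPD = 1%):
n = success_run_sample_size(0.95, 0.99)  # 299 samples, zero failures allowed
```

For the familiar 95%/90% case the formula gives the well-known n = 29; allowing one or more failures requires the full binomial (or Poisson/hypergeometric) expression rather than this closed form.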




Figure 10.1 OC curve and example formulas. The OC curve plots the probability of accepting a lot (0–100%) against the theoretical defect rate (fraction defective) of the lot. The AQL is marked at the producer’s risk (Type I, α = 0.05) and the LTPD at the consumer’s risk (Type II, β = 0.10). The example formulas shown are the binomial acceptance probability, Σ from x = 0 to c of (n choose x) p^x (1 − p)^(n−x), and the variable-data sample size, n = (Z_(α/2))² σ² / E².

Source: The Medical Device Validation Handbook and the CQE Primer.

Once the sample size is calculated, the variations of the process (for example, lot-to-lot, sample-to-sample, test-to-test) are used to determine how many lots, samples, and tests-per-sample are needed to meet the effective sample size. Finally, the initial validation needs to link to the strategy and sample sizes established for continued process verification by linking the LTPD and the AQL.

Many of the LTPD (and AQL) formulas used to calculate sample sizes include the use of the Greek letter alpha (α). Be careful not to confuse the alpha in these formulas (typically related to the confidence) with the alpha used to describe type I (false positive) errors. Type I errors are related to the rejection of good product associated with the producer’s risk. Type II errors (false negatives, or beta (β)) are related to the acceptance of bad product associated with the consumer’s risk.




Key Takeaway • Initial validation demonstrates it’s safe for the consumer using “confidence” and “reliability.”

Endnotes 1. American Society for Quality (ASQ), http://asq.org/qic/displayitem/index.html?item=10477. 2. The National Institute of Standards and Technology (NIST), Engineering Statistics Handbook (Gaithersburg, MD: NIST 2012), https://www.itl.nist.gov/div898/handbook/pmc/section2/pmc22.htm.

Chapter 11 Confidence ●  ●  ●

Depending on the formula used (for example, for binomial/attribute or variable data), the probability of finding x defects (for example, 1 – α) or the confidence interval (for example, Z) can be varied based on the severity of the harm, as determined in your risk documentation. For example, you may not need to be very confident whether a defect linked to a low-severity harm gets accepted; however, you may want to be very confident that a defect linked to a high-severity risk doesn’t get accepted. Your product’s risk management plan can define the confidence you want according to the product risks’ severity ratings, as shown in Table 11.1. Note that the confidence values in the table are for example only. Your confidence values should be defined by your company according to the product risks.

“A confidence interval is a range of values that is likely to contain an unknown population parameter. If you draw a random sample many times, a certain percentage of the confidence intervals will contain the population mean. This percentage is the confidence level.”1 The confidence (%) can then be converted to a decimal for use as alpha (α) or be used to calculate the confidence interval (Z) using a normal distribution curve function (for example, “=NORM.S.INV(confidence_pct/100)” for one-sided tolerances or “=NORM.S.INV((100-(100-confidence_pct)/2)/100)” for two-sided tolerances). Note that this assumes that your data are normally distributed.
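The same conversions can be done outside Excel with Python’s standard library (a sketch; `statistics.NormalDist.inv_cdf` plays the role of NORM.S.INV, and the function names are my own):

```python
from statistics import NormalDist

def z_one_sided(confidence_pct: float) -> float:
    # Mirrors Excel: =NORM.S.INV(confidence_pct/100)
    return NormalDist().inv_cdf(confidence_pct / 100)

def z_two_sided(confidence_pct: float) -> float:
    # Mirrors Excel: =NORM.S.INV((100-(100-confidence_pct)/2)/100),
    # i.e., split the alpha risk between the two tails.
    return NormalDist().inv_cdf((100 - (100 - confidence_pct) / 2) / 100)

print(round(z_one_sided(95), 2))  # 1.64
print(round(z_two_sided(95), 2))  # 1.96
```

The familiar 1.64/1.96 values for 95% confidence drop out directly, which is a quick sanity check on whichever tool you use.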


Table 11.1 Example severity criteria for ISO 14971 with confidence for validations.

Severity 5 (Catastrophic): Confidence Required for Validation: 99%
• Patient/Process Operator: Death/life-threatening.
• Business/Process: Process is inoperable or process is unable to produce product within specification.
• Environment: Extensive, irreparable environment, facility, or equipment damage (e.g., destruction of building or room).

Severity 4 (Serious): Confidence Required for Validation: 97.5%
• Patient/Process Operator: Serious injury (permanent impairment of a body function or permanent damage to a body structure, or necessitates intervention to preclude permanent impairment/damage).
• Business/Process: Process is disrupted, adjustment or temporary process is needed to produce product or tool/equipment repair required; components or materials all scrapped.
• Environment: Irreparable damage to the environment, facility, or equipment (e.g., destruction of equipment).

Severity 3 (Moderate): Confidence Required for Validation: 95%
• Patient/Process Operator: Temporary or reversible illness, injury, or impairment. May require intervention; however, intervention is not to preclude permanent impairment/damage.
• Business/Process: Process performance degraded and subsequent processes or process steps are affected or tool/equipment repair required; components or materials sorted (not at work station) with a portion being scrapped.
• Environment: Environmental, facility, or equipment damage that is reversible only with professional remediation (e.g., damage to equipment requiring engineering repair).

Severity 2 (Minor): Confidence Required for Validation: 90%
• Patient/Process Operator: Temporary, reversible, or non-serious illness, injury not involving intervention, or moderate discomfort/stress.
• Business/Process: Process performance degraded, but is operable and able to produce product without effect to subsequent processes or process steps; components or materials reworked with little to no associated scrap at the station.
• Environment: Environmental, facility, or equipment damage that is reversible without remediation (e.g., damage to equipment repairable by operator or destruction of other supplies).

Severity 1 (Negligible/cosmetic): Confidence Required for Validation: 80%
• Patient/Process Operator: Inconvenience, temporary discomfort, nuisance or cosmetic impact, or minor discomfort/stress.
• Business/Process: Negligible/slight disruption to process.
• Environment: Virtually no negative effect. Cosmetic damage to the environment, facility, or equipment.




Some statisticians will balk at this strategy and argue that confidence intervals do not describe the frequency of correct results or the probability of finding defects. While this is correct, the confidence level needs to (somehow) be designated prior to examining the data. They will indicate that the level of confidence is to be chosen by the investigator, but most statisticians don’t have a simple strategy that can be used (and made sense of) by the typical engineer. Linking the confidence to the severity of the risk provides that simple, risk-based strategy.

Key Takeaway • The “confidence” for your sample size calculation can be tied directly to the severity of the associated risk.

Endnote 1. https://blog.minitab.com/en/adventures-in-statistics-2/understanding-hypothesis-tests-confidence-intervals-and-confidence-levels

Chapter 12 Reliability ●  ●  ●

It is amazing how companies determine the reliability values to use for their validations. I have seen companies that default their validations to use 95% reliability and then expect that their processes will produce products with less than 1 ppm of defective product. Some companies link their risk evaluation (for example, red/green, red/yellow/green) to different sample sizes. While this is somewhat risk based, it still falls short. As described in a previous chapter, an LTPD should be used for the initial process qualification (validation), so let’s break down the words in that acronym.

Lot tolerance percent defective: The occurrence rate (percent defective) of the nonconformance (NC) that we as a business are willing to tolerate in each lot. In other words, the “reliability” for your sample size calculation is the occurrence rate you set in your risk document. This point cannot be emphasized enough. Your risk documents are completely useless and should be archived in the “circular file” if the occurrence rates don’t define your thresholds for reliability.

For this discussion, reliability = 1 – p (from the example formulas in Figure 10.1), where p is the proportion defective allowed in the population. In this example, the lower-case p is the occurrence threshold (NCi or NCr) rate expressed as a decimal. Note: There are some slight statistical differences between occurrence and reliability (such as when there are multiple issues that combine to reduce the reliability); however, for the purpose of this book and for the use of most companies, the difference is negligible and not worth differentiating. If your product is 95% reliable, then that means it isn’t reliable 5% of the time (that is, it does not fulfill a specification or requirement).

The tolerable proportion nonconforming (p) or tolerable variance (σ²) typically found in LTPD formulas are measures of your desired reliability. These can be calculated by converting the occurrence rate from your risk document to either a decimal (for p) or converting from a percent to a variance (σ²) using a normal distribution curve (assuming your data are normally distributed). When working with variable data, if any pre-validation studies were performed, use the maximum variance from the risk document, the sample variances, and the population variance to calculate the sample size.

For example, if your pre-validation study data indicated that your one-sided attribute-data (binomial) process is running with a Ppk = 1.33, you could convert this to a decimal (for example, "=(CHISQ.DIST.RT(((Ppk*3)^2),1)*Sided/2)"). A Ppk of 1.33 indicates your process has a maximum historical proportion nonconforming of 0.000032 (0.0032%). Then you can use the maximum of the historical variability or the occurrence threshold percent from the risk documents as the tolerable proportion nonconforming (p) to calculate the validation sample size. (Note: Cpk can be viewed as the short-term capability of a process to meet a specification, while Ppk can be viewed as the long-term performance of the process. Readers will need to be aware of which type of data they are analyzing; however, as described in Chapter 14, when a process is in a state of statistical control, the capability continues to approach the performance and the values become effectively equivalent.)

Similarly, if your pre-validation study data indicated that your two-sided variable-data process had a maximum sample standard deviation of 1.00 and a population standard deviation of 1.15, you could use the greater of these to calculate the maximum variance from the pre-validation (1.15² = 1.3225). Then you could convert your occurrence threshold percent to a variance (for example, "=(NORM.S.INV(1-((LTPD/100)/Sided))/3)^2"). For example, a 0.01% LTPD would convert to a tolerable variance of 1.6819. Then the maximum of the historical or the tolerable variance (σ²) could be used to calculate the validation sample size.

Notice that the term LTPD is based on the "tolerable" occurrence rate. It is because of this concept of tolerability that there are two options for the reliability value you use for your sample size. You can either use the occurrence rate for the particular risk or the highest occurrence rate that would still be tolerable for the particular risk based on the severity of the harm (Figure 12.1).
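Both spreadsheet conversions above can be reproduced outside Excel. The following is a minimal sketch using only Python's standard library; the Ppk of 1.33 and the 0.01% LTPD are the values from the examples, and the chi-square call is replaced by the equivalent normal-tail identity.

```python
from statistics import NormalDist

def ppk_to_p(ppk: float, sided: int = 1) -> float:
    """Proportion nonconforming implied by a Ppk, mirroring the spreadsheet
    formula =(CHISQ.DIST.RT(((Ppk*3)^2),1)*Sided/2). CHISQ.DIST.RT(z^2, 1)
    equals twice the upper normal tail at z, so the 2 and the /2 cancel."""
    return (1.0 - NormalDist().cdf(3.0 * ppk)) * sided

def ltpd_to_variance(ltpd_percent: float, sided: int = 2) -> float:
    """Tolerable variance implied by an occurrence threshold, mirroring the
    spreadsheet formula =(NORM.S.INV(1-((LTPD/100)/Sided))/3)^2."""
    z = NormalDist().inv_cdf(1.0 - (ltpd_percent / 100.0) / sided)
    return (z / 3.0) ** 2

print(f"{ppk_to_p(1.33):.6f}")                    # ~0.000033 nonconforming
print(round(ltpd_to_variance(0.01, sided=2), 4))  # 1.6819, matching the text
```

The first result differs from the 0.000032 quoted in the text only in the last rounded digit.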

Figure 12.1 Example risk evaluation criteria. [Figure: a 5 × 5 grid of occurrence (1–5) against severity (1–5), divided into an unacceptable (red) region and an acceptable (green) region, with two annotations: "Validate here? (might require a lot of testing)" at the low projected occurrence, and "Or here? (likely a lot less testing)" at the highest tolerable occurrence.]

For example, you may have assigned a severity of "2" and an occurrence rate of "1" to a risk using a risk evaluation table like the one in Figure 12.1. In this case, you will need to decide if you really need to be sure that your process is producing product with a defect rate that is at or below the level "1" threshold. This may require a lot of testing (for example, if the "1" corresponded to a 1 ppm rate, to ensure your process is producing less than 1 ppm defective, you would need to test a lot of samples). However, you may instead decide that while the projected occurrence rate for the risk is low, you could tolerate up to an occurrence rate of "4" (using the example table in Figure 12.1) and still have an acceptable level of risk. This would likely require a lot less testing (for example, if the "1" corresponded to a 1 ppm rate and the "4" corresponded to a 1% rate, ensuring your process is producing less than a 1% defect rate would require a lot less testing than a 1 ppm rate). As mentioned previously, however, this decision needs to be based on what you are trying to prove with your testing; that is, can you only tolerate a "1" or can you actually tolerate a "4"?

Note: See the "Risk Evaluation vs. Benefit-Risk and Acceptability" section of Chapter 23 for additional discussion of risk evaluation tables.
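The difference in testing burden can be made concrete with the zero-failure (c = 0) attribute sample-size calculation; this is a sketch, with the 95% confidence assumed for illustration, using the 1% and 1 ppm thresholds from the example above.

```python
import math

def zero_failure_n(confidence: float, p: float) -> int:
    """Smallest n with (1 - p)^n <= 1 - confidence: the zero-failure (c = 0)
    sample size needed to demonstrate a defect rate below p."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(zero_failure_n(0.95, 0.05))   # 59 -- the familiar 95/95 number
print(zero_failure_n(0.95, 0.01))   # 299 samples to demonstrate < 1%
print(zero_failure_n(0.95, 1e-6))   # about 3 million samples for < 1 ppm
```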

Key Takeaway
• The "reliability" for your sample size calculation is the occurrence rate you set in your risk document.

Chapter 13
How to Distribute Samples—The Invalid Assumption
●  ●  ●

If you have ever used an LTPD formula to calculate sample sizes, you are probably familiar with an attribute sample size of 59 samples for a 95/95 confidence/reliability. This is a fairly standard number used at a lot of companies based on formulas in Zero Acceptance Number Sampling Plans, 4th edition by Nicholas Squeglia or The Medical Device Validation Handbook by the Regulatory Affairs Professionals Society (RAPS). Even though the FDA's 1996 CGMP Final Rule stated, "The requirement for testing from the first three production lots or batches has been deleted," many companies still describe either using three lots or a "minimum" of three lots for their validations. Given a calculated sample size of 59 samples and an internal requirement to use a minimum of three lots, most engineers simply say, "I'll do more—I'll test 20 samples/lot for my three lots." Unfortunately, there are a couple of fundamental problems with this approach; specifically, the formulas used by Squeglia and RAPS have an underlying assumption that may render them invalid, and using 20 samples/lot for three lots may not adequately evaluate the variation in the process.

One of the fundamental assumptions that is often overlooked during validation sample size determinations is that the LTPD sample size is used to estimate the defect level in a single population. Many companies argue that this is applicable because it's a single process; however, processes often involve multiple combinations of independent and confounded sources of variance (for example, lot-to-lot variance, the variation/uniformity within a lot, and the variations within the sampling and testing, to name a few), which render the assumption of a single population invalid.

To determine how to distribute the sample size across the sources of variation, you will first need to identify and quantify the sources of variation within your product/process and the amount of their variance. Therefore, to determine the amount of lot-to-lot variance, the within-lot (process) variance, and the measurement variance, you will need to first perform some sort of variance study (for example, analysis of variance [ANOVA], design of experiments [DOE], etc.). If your management is unwilling to allow pre-validation studies in order to understand the sources of variation within your process, you should at least analyze the variation retrospectively within your validation data to confirm that you met the effective sample size; if your analysis indicates that you did not, you may need to retest or perform supplementary testing.

Once you know the sources of variation within your product/process and their variances (for example, from ANOVA, DOE, etc. studies), then you can use the Welch-Satterthwaite equation to distribute the sample size across the sources of variation (for example, multiple lots, samples/lot, and tests/sample). This will allow you to test more where there's more variability and test less where there's less variability. In the example of testing 20 samples/lot for three lots, it wouldn't make sense to test 20 samples/lot if you were testing a highly uniform solution (for example, a reagent, a highly soluble solution, etc.), especially if you had an ANOVA study that showed the variation was lot-to-lot instead.




Using the Welch-Satterthwaite equation:

$$v \approx \frac{\left(\sum_{i=1}^{n} k_i s_i^2\right)^2}{\sum_{i=1}^{n} \dfrac{\left(k_i s_i^2\right)^2}{v_i}}$$

Substituting the population standard deviation, σ, for the sample standard deviation, s, in the WelchSatterthwaite equation may also be done based on the assumption that the process is in a state of statistical control (Cpk ≈ Ppk). That assumption would then be proven or rejected based on the results of the testing.

Where s represents the uncertainties (measurement process errors), k is a real positive number (typically k = 1/(v + 1)), v represents the degrees of freedom, and n represents the number of sample variances. The effective sample size, Neff, can then be calculated using the number of measurements (observations) for each source of variance, Ni, where Ni − 1 = vi.

If the sources of variance are independent (for example, two different dimensions that each affect the fit of a product), then the Welch-Satterthwaite equation would just need to be rearranged as follows:

$$N_{\mathrm{eff}} = 1 + \frac{\left(\dfrac{s_1^2}{N_1} + \dfrac{s_2^2}{N_2}\right)^2}{\dfrac{s_1^4}{N_1^2\left(N_1 - 1\right)} + \dfrac{s_2^4}{N_2^2\left(N_2 - 1\right)}}$$
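A sketch of this two-source calculation follows; the variances and sample sizes are made-up numbers, chosen so that two equally variable, equally sampled sources reduce to the expected Neff = 1 + 2(N − 1).

```python
def neff_independent(s2_1: float, n1: int, s2_2: float, n2: int) -> float:
    """Effective sample size for two independent sources of variance
    (rearranged Welch-Satterthwaite); s2_* are sample variances."""
    numerator = (s2_1 / n1 + s2_2 / n2) ** 2
    denominator = (s2_1 / n1) ** 2 / (n1 - 1) + (s2_2 / n2) ** 2 / (n2 - 1)
    return 1 + numerator / denominator

# Two equally variable sources with 10 samples each: Neff = 1 + 2*(10 - 1)
print(round(neff_independent(1.0, 10, 1.0, 10), 6))  # 19.0
```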

The equation would simply be expanded as needed to account for as many independent sources of variance as necessary. However, if the sources of variance are confounded (for example, lots/samples/tests), the individual sources of variance would be used to calculate the total process variance using the following equation:

$$\sigma^2_{\mathrm{Tot}} = \sigma^2_{\mathrm{Lot}} + \sigma^2_{\mathrm{Proc}} + \sigma^2_{\mathrm{Meas}}$$




Then the Welch-Satterthwaite equation would need to be converted to account for the impact of the confounding. For example, when the lot-to-lot and within-lot (process) variance have been estimated, the following may be used:
• The mean squared error of the lot (MSL) estimates σ²Proc + NProc σ²Lot
• The mean squared error of the process (MSP) estimates σ²Proc

The effective sample size, Neff, is

$$N_{\mathrm{eff}} = 1 + \frac{\left(\sigma^2_{\mathrm{Lot}} + \sigma^2_{\mathrm{Proc}}\right)^2}{\dfrac{\left(\sigma^2_{\mathrm{Lot}} + \dfrac{\sigma^2_{\mathrm{Proc}}}{N_{\mathrm{Proc}}}\right)^2}{N_{\mathrm{Lot}} - 1} + \dfrac{\left(\dfrac{N_{\mathrm{Proc}} - 1}{N_{\mathrm{Proc}}}\,\sigma^2_{\mathrm{Proc}}\right)^2}{N_{\mathrm{Lot}}\left(N_{\mathrm{Proc}} - 1\right)}}$$

When the lot-to-lot, within-lot (process), and measurement variances have been estimated, a similar conversion process would yield the following. The effective sample size, Neff, is

$$N_{\mathrm{eff}} = 1 + \frac{\left(\sigma^2_{\mathrm{Lot}} + \sigma^2_{\mathrm{Proc}} + \sigma^2_{\mathrm{Meas}}\right)^2}{\dfrac{\left(\sigma^2_{\mathrm{Lot}} + \dfrac{\sigma^2_{\mathrm{Proc}}}{N_{\mathrm{Proc}}} + \dfrac{\sigma^2_{\mathrm{Meas}}}{N_{\mathrm{Proc}} N_{\mathrm{Meas}}}\right)^2}{N_{\mathrm{Lot}} - 1} + \dfrac{\left[\dfrac{N_{\mathrm{Proc}} - 1}{N_{\mathrm{Proc}}}\left(\sigma^2_{\mathrm{Proc}} + \dfrac{\sigma^2_{\mathrm{Meas}}}{N_{\mathrm{Meas}}}\right)\right]^2}{N_{\mathrm{Lot}}\left(N_{\mathrm{Proc}} - 1\right)} + \dfrac{\left(\dfrac{N_{\mathrm{Meas}} - 1}{N_{\mathrm{Meas}}}\,\sigma^2_{\mathrm{Meas}}\right)^2}{N_{\mathrm{Lot}} N_{\mathrm{Proc}}\left(N_{\mathrm{Meas}} - 1\right)}}$$

Various combinations of the number of lots (NLot), number of samples per lot (NProc), and number of repetitions (tests) per sample (NMeas), as applicable, can then be used to determine an acceptable sampling plan. An acceptable sampling plan would be one in which the effective sample size calculated from the sources of variation is equal to or greater than the sample size calculated from the LTPD based on the risk (Table 13.1).
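As a sketch, plugging the Table 13.1 pre-validation variances into the three-source effective sample size equation reproduces the accept/reject pattern of the spreadsheet; Neff is truncated to a whole number as in the table, and a plan is acceptable when it meets the LTPD-based sample size (9 in Table 13.1).

```python
def neff_confounded(v_lot: float, v_proc: float, v_meas: float,
                    n_lot: int, n_proc: int, n_meas: int) -> float:
    """Effective sample size for confounded lot/sample/test variance sources."""
    total = v_lot + v_proc + v_meas
    t1 = (v_lot + v_proc / n_proc + v_meas / (n_proc * n_meas)) ** 2 / (n_lot - 1)
    t2 = ((n_proc - 1) / n_proc * (v_proc + v_meas / n_meas)) ** 2 / (n_lot * (n_proc - 1))
    t3 = ((n_meas - 1) / n_meas * v_meas) ** 2 / (n_lot * n_proc * (n_meas - 1))
    return 1 + total ** 2 / (t1 + t2 + t3)

v_lot, v_proc, v_meas = 0.41133, 0.21571, 0.29998  # Table 13.1 values
for plan in [(3, 6, 6), (3, 6, 7), (5, 2, 2)]:
    n_eff = int(neff_confounded(v_lot, v_proc, v_meas, *plan))
    print(plan, n_eff, "Yes" if n_eff >= 9 else "No")  # matches the table rows
```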




Pre-Validation Study Values
Lot-to-lot variance (σ²Lot): 0.41133
Within-lot (process) variance (σ²Proc): 0.21571
Measurement variance (σ²Meas): 0.29998
Total variance (σ²Tot): 0.92702

Sample Size
Type of data: Binomial
Calculated sample size (from the LTPD): 9

| NLot | NProc | NMeas | Neff | Acceptable Sampling Plan | Total Samples | Total Tests |
|------|-------|-------|------|--------------------------|---------------|-------------|
| 3 | 6 | 5 | 8 | No  |    |     |
| 3 | 6 | 6 | 8 | No  |    |     |
| 3 | 6 | 7 | 9 | Yes | 18 | 126 |
| 3 | 6 | 8 | 9 | Yes | 18 | 144 |
| 3 | 6 | 9 | 9 | Yes | 18 | 162 |
| 3 | 7 | 2 | 8 | No  |    |     |
| 3 | 7 | 3 | 8 | No  |    |     |
| 3 | 7 | 4 | 9 | Yes | 21 | 84  |
| 3 | 7 | 5 | 9 | Yes | 21 | 105 |
| 3 | 7 | 6 | 9 | Yes | 21 | 126 |
| 3 | 7 | 7 | 9 | Yes | 21 | 147 |
| 3 | 7 | 8 | 9 | Yes | 21 | 168 |
| 3 | 7 | 9 | 9 | Yes | 21 | 189 |
| 3 | 8 | 2 | 8 | No  |    |     |
| 3 | 8 | 3 | 9 | Yes | 24 | 72  |
| 3 | 8 | 4 | 9 | Yes | 24 | 96  |
| 4 | 2 | 4 | 8 | No  |    |     |
| 4 | 2 | 5 | 9 | Yes | 8  | 40  |
| 4 | 2 | 6 | 9 | Yes | 8  | 48  |
| 4 | 2 | 7 | 9 | Yes | 8  | 56  |
| 4 | 2 | 8 | 9 | Yes | 8  | 64  |
| 4 | 2 | 9 | 9 | Yes | 8  | 72  |
| 4 | 3 | 2 | 9 | Yes | 12 | 24  |
| 4 | 3 | 3 | 9 | Yes | 12 | 36  |
| 5 | 2 | 2 | 9 | Yes | 10 | 20  |
| 5 | 2 | 3 | 10 | Yes | 10 | 30 |

Table 13.1 Example effective sample size spreadsheet.




Now you have an opportunity to use economics in your decision-making for your sampling plan. If you have evaluated various combinations of lots/samples/repetitions that meet the effective sample size, you can select the combination that is most economical. For example, if each lot is very expensive to make (or would take a long time), you may want to select the option that produces an acceptable sampling plan with the fewest lots. However, if your lots are relatively cheap (or you produce them often) but the testing is time-consuming or expensive (for example, destructive testing for $1 million instruments), you may want to select the option that produces an acceptable sampling plan with the fewest samples or tests.

Because "N" is part of the denominator in each factor within the equation, the factors will approach zero (0) as N → ∞. In other words, there is a point of diminishing return for testing within each source of variation. For example, even if the lot-to-lot variance is low, if you don't test enough lots, no number of samples per lot and tests per sample can compensate (Table 13.2).
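The cap shown in Table 13.2 can be reproduced numerically. This is a sketch using the three-source effective sample size equation from earlier in this chapter, with the Table 13.2 variances; Neff is truncated to a whole number as in the table.

```python
def neff_confounded(v_lot, v_proc, v_meas, n_lot, n_proc, n_meas):
    """Effective sample size for confounded lot/sample/test variance sources."""
    total = v_lot + v_proc + v_meas
    t1 = (v_lot + v_proc / n_proc + v_meas / (n_proc * n_meas)) ** 2 / (n_lot - 1)
    t2 = ((n_proc - 1) / n_proc * (v_proc + v_meas / n_meas)) ** 2 / (n_lot * (n_proc - 1))
    t3 = ((n_meas - 1) / n_meas * v_meas) ** 2 / (n_lot * n_proc * (n_meas - 1))
    return 1 + total ** 2 / (t1 + t2 + t3)

v_lot, v_proc, v_meas = 0.21571, 0.41133, 0.29998  # Table 13.2 values
for n in (2, 20, 200, 2_000_000):
    print(n, int(neff_confounded(v_lot, v_proc, v_meas, 3, n, n)))
# With only 3 lots, t2 and t3 vanish as n grows, so Neff is capped near
# 1 + 2 * total**2 / v_lot**2 and never reaches the required 59.
```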

Key Takeaway
• Test more where there's more variability, and test less where there's less variability.




Pre-Validation Study Values
Lot-to-lot variance (σ²Lot): 0.21571
Within-lot (process) variance (σ²Proc): 0.41133
Measurement variance (σ²Meas): 0.29998
Total variance (σ²Tot): 0.92702

Sample Size
Type of data: Binomial
Calculated sample size (from the LTPD): 59

| NLot | NProc | NMeas | Neff | Acceptable Sampling Plan |
|------|-------|-------|------|--------------------------|
| 3 | 2 | 2 | 6 | No |
| 3 | 20 | 20 | 28 | No |
| 3 | 200 | 200 | 36 | No |
| 3 | 2000 | 2000 | 37 | No |
| 3 | 20000 | 20000 | 37 | No |
| 3 | 200000 | 200000 | 37 | No |
| 3 | 2000000 | 2000000 | 37 | No |
| 3 | 20000000 | 20000000 | 37 | No |
| 3 | 200000000 | 200000000 | 37 | No |
| 3 | 2000000000 | 2000000000 | 37 | No |
| 3 | 20000000000 | 20000000000 | 37 | No |
| 3 | 2E+11 | 2E+11 | 37 | No |
| 3 | 2E+12 | 2E+12 | 37 | No |
| 3 | 2E+13 | 2E+13 | 37 | No |
| 3 | 2E+14 | 2E+14 | 37 | No |
| 3 | 2E+15 | 2E+15 | 37 | No |

Table 13.2 Limits of testing the sources of variance.

Chapter 14
Continued Process Verification
●  ●  ●

If you have followed the steps described so far, then you already have your OC curve defined, the severity of the risk identified, and the LTPD successfully tested. The next step, in accordance with the FDA Guidance for Industry—Process Validation: General Principles and Practices, is continued process verification to ensure that your process "remains in a state of control (the validated state) during commercial manufacture." Many companies simply pull an AQL sampling plan "out of thin air" and apply it across the board to all their incoming, in-process, and finished goods testing. However, if you have a defined OC curve for your product/risk (see the example in Chapter 10), then you already have a defined AQL for your testing. In that case, the AQL is already ensured to test to a defect level that is lower than the defect level tested in the LTPD.

If, like many companies, you skipped this step in the process, don't panic. Your initial validation data can be used to calculate the Ppk. This will give you an estimate of where your process is actually running and what your overall defect level is likely to be. Using the following formula, which is similar to the LTPD formulas used earlier:

$$1 - \alpha = \sum_{x=0}^{c} \frac{n!}{x!\,(n-x)!}\,\mathrm{AQL}^{x}\,(1 - \mathrm{AQL})^{n-x}$$

and assuming that c = 0 (no acceptable rejects), the formula can be reduced and solved for the AQL defect rate to become:

$$\mathrm{AQL} = 1 - (1 - \alpha)^{1/n}$$




Then by using the confidence (%) based on the severity of the risk and the n from an AQL table (based on the nominal lot/batch size and the general inspection level), you can calculate the AQL level that you should use. This is accomplished by substituting the confidence (%) (converted to a decimal) for "1 – α" in the formula. Note that you will need to round your calculated AQL down to the next tighter level listed on your AQL table in order to be conservative.

You can then assess whether your estimated actual defect rate (from your initial validation data) is lower than your targeted AQL rate. Also, be sure to check that the AQL rate you are using for releasing product is lower than the LTPD rate you set for the consumer's risk. Some scholars recommend that your actual defect rate should be 1/5 to 1/50 of the tested validation rate (for example, AQL, LTPD). In other words, your actual defect rate needs to be far less than the rate value used for validation for the testing to actually have a reasonable chance of passing; that is, if 1% of the products manufactured on a process are defective and we test that process to an AQL of 1%, we will likely fail.

Just as with the initial validation, once you have a calculated sample size and AQL for your continued process verification, you can use the Welch-Satterthwaite equation and the calculated variances to determine how many samples and tests per sample you should use for each lot to meet the effective sample size for the AQL testing.
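The AQL back-calculation described above can be sketched in a few lines; the 95% confidence and n = 125 are assumed illustration values, not numbers from the text or from an AQL table.

```python
# Solve the reduced c = 0 formula for the AQL defect rate, substituting the
# desired confidence (as a decimal) for "1 - alpha".
confidence, n = 0.95, 125  # illustrative values only
aql = 1.0 - confidence ** (1.0 / n)
print(f"{aql:.5%}")  # about 0.04%; round DOWN to the next tighter AQL table level
```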

A State of Statistical Control (Capability and Stability)

Many companies think of validation as that initial snapshot you take when setting up a process—and then you throw it in the corner and forget about it (just like the risk documents). Some companies may take the extra step and use their initial validations to help them set up their continued process monitoring, as described previously. The initial validations likely only demonstrate that the process is "capable" of producing good products. They may even calculate a Cpk from the validation data, which by definition is an index of the capability. Their continuous monitoring may even repeatedly confirm that their process is still capable. Through their continuous monitoring, most companies likely have captured a lot of data over the years; however, very few companies actually use that data to confirm that their processes are actually "stable" too. For a process to be in a state of control, it should be both capable and stable.

The Global Harmonization Task Force Quality Management Systems—Process Validation Guidance states, "the manufacturing process should be capable and stable to assure continued safe products that perform adequately." It also includes a "primer" that describes requirements for testing both the stability and the capability with some useful pictures to show how they are different.

The FDA Guidance for Industry, Process Validation: General Principles and Practices describes the overall approach to process validation, including guidance that the testing should provide confidence "both within a batch and between batches" and connects the "confidence level" to the risk analysis. It also defines the term "state of control" using a reference from the ICH Q10 guidance:

State of control: A condition in which the set of controls consistently provides assurance of continued process performance and product quality. (ICH Q10)

From the guidance documents and the ICH definition, it becomes clear that for a process to be considered as in control, there must be some analysis of long-term stability "between batches," not just an analysis of the capability within each batch or the short-term stability in the typical process verification or validation (PV). Most sampling standards are geared toward AQL sampling plans; however, there are a few standards that include validation sampling plans.
One of these actually provides some help in determining whether a process is “stable.”




ISO 3951-1, Sampling procedures for inspection by variables—Part 1: Specification for single sampling plans indexed by acceptance quality limit (AQL) for lot-by-lot inspection for a single quality characteristic and a single AQL, defines a technique to calculate whether a process is in a "state of statistical control."

22.2 State of statistical control
Calculate the upper control limit for each of the 10 lots (or other number of lots specified by the responsible authority) from the expression cuσ, where cu is a factor that depends on the sample size n and is given in Table H.1 [found in the standard]. If none of the sample standard deviations, sj, exceeds the corresponding control limit, then the process may be considered to be in a state of statistical control; otherwise, the process shall be considered to be out of statistical control.
Note 1: If the sample sizes from the lots are all equal, then the value of cuσ is common to all the lots.
Note 2: If the sample sizes from each lot vary, it is not necessary to calculate cuσ for those lots for which the sample standard deviation, sj, is less than or equal to σ.

This technique could be used to assess the stability of a process; however, as noted, this technique requires the use of "10 lots (or other number of lots specified by the responsible authority)" to determine whether a process is in a "state of statistical control." Some companies may be hesitant to test 10 lots as part of the initial validations prior to launching a product; however, most companies with legacy products likely already have more than 10 lots of data.

On p. 67, we stated that "Substituting the population standard deviation, σ, for the sample standard deviation, s, in the Welch-Satterthwaite equation may also be done based on the assumption that the process is in a state of statistical control (Cpk ≈ Ppk). That assumption would then be proven or rejected based on the results of the testing."
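That check can be sketched as a few lines of code. Note that the cu factor must be looked up in Table H.1 of ISO 3951-1 for your sample size; the 1.2 below is a placeholder, and the ten lot standard deviations are invented numbers.

```python
def in_statistical_control(lot_sds, sigma, cu):
    """ISO 3951-1 section 22.2 style check: no lot's sample standard
    deviation may exceed the control limit cu * sigma."""
    limit = cu * sigma
    return all(s <= limit for s in lot_sds)

lot_sds = [0.98, 1.05, 1.10, 0.95, 1.02, 1.08, 0.99, 1.01, 1.04, 1.00]  # 10 lots
print(in_statistical_control(lot_sds, sigma=1.0, cu=1.2))  # True for these numbers
```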




If your process is capable and stable (as described in the guidance documents and as calculated in ISO 3951-1), then your mean values aren’t shifting and your variations aren’t increasing and decreasing. As a result, the sample standard deviation, s, continues to approach the population standard deviation, σ, and cu approaches “1.” If your process is not capable and stable, then the strategy for distributing samples in Chapter 13 may not be applicable—and you may need additional controls before you can consider your process to be in a state of statistical control.

Key Takeaways
• Your routine monitoring should also be tied to your risk documents.
• Your routine monitoring should demonstrate that your processes are capable and stable.

Chapter 15
Test Method Validation
●  ●  ●

“5.15 S,” “30% P/T,” or “just do a gage repeatability and reproducibility (GR&R).” If you have ever validated a test method, you are probably familiar with the multitude of bad guidance out there that recommends using predefined criteria rather than properly linking the test method validation (TMV) to risk.

To properly determine whether a method will work for a given application (validate), we first need to know what type of method it is. Once we know the method type, we will know what characteristics we need to evaluate within the method. Then we need to determine how much of the tolerance range is available and how much of that can be taken up by the method. (Hint: The amount of the available tolerance that can be taken up by the method depends on the risk.)

Standard or FDA official methods need verification to ensure that the laboratory is capable of performing the analysis.

Verification of an analytical procedure is the demonstration that a laboratory is capable of replicating with an acceptable level of performance a standard method. Verification under conditions of use is demonstrated by meeting system suitability specifications established for the method, as well as a demonstration of accuracy and precision or other method parameters for the type of method. (ORA-LAB.5.4.5 Section 6.2 A)




Two Types of Test Methods

The International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) provides some excellent guidance for test method validations. Yes, they serve the pharmaceutical industry, not the medical device industry, but their guidance is still good, and medical device manufacturers can still learn a lot from them. The U.S. Department of Commerce's NIST provides an Engineering Statistics Handbook.¹ The information in the handbook aligns with the ICH recommendations and actually provides some additional insight.

The ICH Q2(R1) Validation of analytical procedures: text and methodology was originally drafted in two parts (Q2A and Q2B). In 2005, the ICH incorporated the two into Q2(R1) but did not revise any of the content. The combined version is available through the FDA² and the European Medicines Agency (EMA).³ The various versions of the guidance will be referred to here collectively as "ICH Q2."

ICH Q2 describes four "most common" types of analytical procedures (test methods):
• Identification tests
• Quantitative tests for impurities' content
• Limit tests for the control of impurities
• Quantitative tests of the active moiety

For medical devices, only the identification tests (for example, some attribute tests) and the quantitative tests (for example, variable tests) are typically found. Some IVD methods may involve the testing of impurities and would possibly involve the other two types, but only attribute and variable test methods will be discussed here.

The United Nations Office on Drugs and Crime (UNODC) Guidance for the Validation of Analytical Methodology and Calibration of Equipment used for Testing of Illicit Drugs in Seized Materials and Biological Specimens⁴ confirms that "an important distinction should always be made between qualitative and quantitative methods."




While there are differences in the terms binary, binomial, and Boolean, the term attribute is used here to describe any type of binary or Boolean data for which you would use a binomial distribution to describe it (for example, red/not red, pass/fail, yes/no, go/no-go, etc.). It may also be referred to as "qualitative" data. In ICH Q2, the intent of identification tests is "to ensure the identity of an analyte." In other words, the test is to determine whether the analyte "is" or "is not" the target analyte. Since this describes an attribute, the concepts of the identification tests in ICH Q2 will be discussed and applied to "attribute" methods for medical devices.

The term variable is used here to describe data that are continuous (that is, they may have multiple or an infinite number of possible values between any two values). Variable data are generally numeric and may be used to create X-bar charts, range charts, S-charts, etc. They may also be referred to as "quantitative" data.

The distinction of test methods into the two types and the definition of those types as "attribute" and "variable" will help align the testing and statistics used with the formulas generally applied for validations (cf. "attribute" in the Medical Device Validation Handbook and "variable" in the CQE Primer).

Characteristics of a Test Method Validation

While many people are aware that the FDA has its own laboratories to test product, few people know that the policies and procedures for these laboratories are available to the public. While the regulations generally only tell you "what" you need, if you want an example of "how," it might be beneficial to review how the FDA tells its own people to perform testing in the FDA's Office of Regulatory Affairs (ORA) laboratory manual and procedures.⁵

In ORA-LAB.5.4.5 Methods, Method Verification and Validation, the FDA describes "typical validation characteristics which should be considered." ICH Q2 also describes the "typical validation characteristics which should be considered" and provides a handy table that "lists those validation characteristics regarded as the most important for the validation of different types of analytical procedures" (test methods). Since we have already clarified that most medical device tests can be classified into two types, only those two will be described. Table 15.1 identifies the typical characteristics that are recommended to be evaluated for each type of test method.

| Type of Analytical Procedure/Characteristic | Attribute (Identification) | Variable (Quantitative Test) |
|---------------------------------------------|----------------------------|------------------------------|
| Accuracy (bias)                             | –                          | +                            |
| Precision: Repeatability                    | –                          | +                            |
| Precision: Intermediate precision           | –                          | +                            |
| Precision: Reproducibility                  | –                          | +                            |
| Specificity                                 | +                          | +                            |
| Limit of detection                          | –                          | –                            |
| Limit of quantitation                       | –                          | –                            |
| Linearity (aka curve fitting)               | –                          | +                            |
| Range                                       | –                          | +                            |
| Ruggedness/robustness                       | –                          | +                            |
| System suitability                          | –                          | +                            |
| Resolution (NIST)                           | –                          | +                            |

Where: – signifies that this characteristic is not normally evaluated; + signifies that this characteristic is normally evaluated. Note: The precision of a test method is generally considered by evaluating the three levels.

Table 15.1 Characteristics for test method validations.
The terminology used for these characteristics may vary. For example, the EU IVDR Annex II Paragraph 6.1.2.1 describes “accuracy of measurement” as the combination of “trueness” and “precision.” In Annex I, it also describes trueness as “trueness (bias).” While your organi­zation may use different terminology than what is presented in Table 15.1, it is recommended that you ensure all the characteristics of a test method validation are properly evaluated regardless of what you call them.




The table in the ICH guidance does not include "Reproducibility," "Ruggedness/Robustness," or "System Suitability," and lists "Detection Limit" as applying to pharmaceutical identification tests. Reproducibility, ruggedness/robustness, and system suitability are described elsewhere in ICH Q2 but were just not included in the table in the guidance. The definition of "precision" in the guidance indicates precision comprises repeatability, intermediate precision, and reproducibility. In the paragraphs leading to the table, the guidance states, "it should be noted that robustness is not listed in the table but should be considered at an appropriate stage in the development of the analytical procedure." Additionally, system suitability is described at the very end of the guidance.

While a limit of detection (LOD), detection limit, or method/minimum detection limit (MDL) study would not be applicable to most medical device attribute methods, there may be some methods where it would apply (for example, IVD attribute chemical testing).

The application of "specificity" for attribute test methods may seem like overkill for some simple methods, but even a "red/not red" color check by your operators may need to confirm that the operator isn't red-green color blind. This could also apply to a snap gage caliper trying to be used to test something that is too flexible—or even just to fit the device. It could also apply to a vision system to confirm that it can differentiate the characteristic.

While ICH Q2 discusses "the resolution of the two components which elute closest to each other" as part of the demonstration of specificity, it doesn't identify "resolution" as a distinct characteristic for TMV. NIST uses resolution as part of the analysis of bias and defines resolution as "the ability of the measurement system to detect and faithfully indicate small changes in the characteristic of the measurement result."

***Warning: The number of digits displayed does not indicate the resolution of the instrument.***




Any engineer who has ever commissioned a new piece of equipment, calibrated an instrument, or validated a manufacturing process will look at this list of TMV characteristics and correctly say, "We test those elsewhere—we don't need to duplicate the work in TMV." The title of the United Nations Office on Drugs and Crime (UNODC) guidance even indicates that the validation of methods includes "calibration of equipment." Since many of these characteristics may be more applicable to the instruments used for the testing than for the test itself, it may be possible to reference the existing tests for those instruments. If testing is to be leveraged from a different process (for example, calibration, installation qualification [IQ]/operational qualification [OQ]), then the TMV protocol/report should document how those characteristics were considered, evaluated, and judged to be acceptable. For example, if your calibration of a balance already defines the useable range, you may not need to determine the limit of detection or the limit of quantitation if it's below that range. The TMV report should also point to objective data that confirm the acceptability of the characteristic referenced (not just some vague statement that "it's tested over there").

Be aware, though, that an instrument may be calibrated over a greater range than is necessary for use with a product-specific test method. Just like a process validation engineer does when testing the OQ of a piece of equipment, you need to be aware that you need to test the limits of the equipment itself, as well as the limits of each specific product on that equipment. For process validation, this may take the form of testing an oven from room temperature up to 500°C, as well as testing that a product melts at 250°C. For TMV, this may take the form of OQ testing a vision system under a variety of lighting conditions, as well as restricting or filtering the lighting during TMV for a specific product to get proper contrast.

Based on the data typically tested in other processes, Table 15.2 describes where data may already exist to support the TMV, along with some additional comments. If the data are not likely tested in any other system, they would typically be tested during TMV (for example, specificity and possibly limit of detection and ruggedness/robustness).



● 

Test Method Validation  



85

Table 15.2  Where test method validation characteristics may be tested.

Characteristic | Where it may be tested | Comments
Accuracy (bias) | Calibration | Basic function of calibration.
Precision | GR&R | Note the three kinds of precision. However, typically GR&R refers to gage repeatability and reproducibility: • Repeatability: the variation in measurements taken by a single person or instrument on the same or replicate item and under the same conditions. • Reproducibility: the variation induced when different operators, instruments, or laboratories measure the same or replicate specimen.
Specificity | TMV | Ability of machine to measure (e.g., fit) the device, ability of a vision system to differentiate the characteristic, user's ability to differentiate a dye, etc.
Limit of detection | DOE/IQ/TMV | Part of range? If the lower end of the range is determined, this may not be necessary.
Limit of quantitation | IQ/OQ | Part of range? If the lower end of the range is determined, this may not be necessary.
Linearity (aka curve fitting) | Calibration | Linearity of inputs to outputs. Note that with proper curve fitting (e.g., a least-squares (R²) fit), a non-linear curve could be used.
Range | OQ/calibration | This is often assumed to refer only to testing the accuracy—note that precision and linearity are also tested as part of the range.
Ruggedness/robustness | DOE/TMV | E.g., a drift study.
System suitability (ICH Q2) | Use log/SPC | (E.g., the daily calibration check.) This may involve checking any of the attributes, but may also include performance characteristics such as: • blanks in chemistry, or un-inoculated media in microbiology, to assess contamination; • laboratory control samples (spiked samples for chemistry or positive culture controls for microbiology) to assess accuracy; • precision based on the analysis of duplicates; • calibration check standards analyzed periodically in the analytical batch for quantitative analyses; and • monitoring quality control samples, usually through the use of control charts.
Resolution (NIST) | IQ/calibration | The ability to differentiate between two similar inputs (samples).

Note: OQ would typically refer to the qualification of the instrument, not of the process.


Any existing data within these other processes would need to be evaluated to determine whether they are applicable or whether additional product-specific testing should be performed.

The Amount of Tolerance

ICH Q2 defines precision for a test method as, “the closeness of agreement (degree of scatter) between a series of measurements obtained from multiple sampling of the homogeneous sample under the prescribed conditions.” This sounds a lot like the definition of standard deviation (“a measure of how dispersed the data is in relation to the mean”—NIH). Specifically, it sounds like the standard deviation of the measurement system (σMeas).

While precision may be expressed in terms of standard deviation, coefficient of variation, or confidence interval, this discussion will focus on the use of standard deviations. However, the concepts described here for standard deviation could be extrapolated to the other methods of expression used for precision.

The standard deviation is the square root of the variance. Standard deviation and variance are both determined by using the mean of the group of numbers in question. The mean is the average of a group of numbers, and the variance measures the average degree to which each number differs from the mean. Standard deviation is a statistic that looks at how far from the mean a group of numbers is, by using the square root of the variance.6

Some companies use the sample standard deviation to calculate a confidence interval for their TMVs—often using the total repeatability instead of just the measurement system repeatability. Your company may even (improperly) use some predefined formula for calculating the precision of your methods, for example:

Repeatability % Tolerance = (5.15 × S) / (USL − LSL)




Where: S is the sample standard deviation, USL is the upper specification limit, and LSL is the lower specification limit.

You may also apply a predefined threshold (for example, Repeatability % Tolerance ≤ 30%) required by your company’s SOPs. However, if you do, it’s likely that you either pulled these from some example or just pulled them “out of thin air.” If you aren’t adjusting these criteria based on the associated risks, then your TMV isn’t risk based. Also, if you aren’t adjusting the available tolerance range to compensate for the amount of product variability and the centeredness/bias of your process and measurement system, then you’re also missing the point. (What good is “5.15 × S” if your mean result is right at the specification—half of your results would still fail.)

Let’s start by addressing the available tolerance range. Chapter 13 discussed how multiple sources of variance can be confounded (for example, lots/samples/tests) and the individual sources of variance are used to calculate the total process variance using the following equation:

σ²Tot = σ²Lot + σ²Proc + σ²Meas
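As a quick sketch, the Repeatability % Tolerance formula can be computed directly. The function name and the example values below are illustrative, not from the text:

```python
def repeatability_pct_tolerance(s: float, usl: float, lsl: float) -> float:
    """Repeatability as a percentage of the tolerance range.

    5.15 standard deviations span roughly 99% of a normal distribution
    (about 2.575 sigma on either side of the mean).
    """
    return 5.15 * s / (usl - lsl) * 100

# Illustrative values only: a measured S of 0.5 against a 90-100 tolerance.
print(repeatability_pct_tolerance(0.5, 100.0, 90.0))  # 25.75, under a 30% threshold
```

As the chapter argues, passing such a fixed threshold says nothing about risk unless the threshold itself is derived from the risk documents.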

For TMV precision, we are really interested in σMeas (the standard deviation from the measurement system). It describes the amount of variability we get from our measurement system, which could be made up of several sources of variation (for example, repeatability by the same operator, intermediate precision by different analysts/equipment/etc., reproducibility between laboratories, and possibly including ruggedness/robustness over time). The terminology you use may be different. If necessary, you may need to identify and quantify the sources of variability within the measurement system (for example, σ²Meas = σ²Repeatability + σ²Analysts + σ²Equipment + etc.); however, this chapter will just discuss the evaluation of the measurement system collectively.

Note: The UNODC guidance indicates that “individual component uncertainties that are less than 20% of the highest component uncertainty




have little impact on the overall uncertainty and can be omitted from the calculation.”

The contributions to the total variance from the various sources could be displayed graphically, as in Figure 15.1.

Figure 15.1  Cumulative confounded variances (“Joe’s mustache curve”). [Lot, Process, and Measurement curves plotted cumulatively over the 90–100 tolerance range.]

As with the distribution of lots/samples/tests described in Chapter 13, this analysis requires some type of variance study (such as ANOVA, DOE, etc.) to estimate the mean and quantify the variance coming from the sources of variation.

The graph in Figure 15.1 demonstrates how confounded sources of variance with equal standard deviations (0.9 in this example) would add to the variability around a mean of 95 with limits of 90 ≤ x ≤ 100. (For these examples, we will assume the units of measure are a percent (%) of a target value.) In this example, if we assume that the worst-case situation has occurred (for example, our lot, process, and measurement system are all off in the same direction by 3σ—either to the positive direction or to the negative), we still fall within our tolerance. However, if our mean value is closer to either of our limits (instead of perfectly centered) or if the lot-to-lot or within-lot (process) variance takes up more of the tolerance, we would need to adjust our expectations for the measurement system.

Note: This example uses +/− three standard deviations; however, as discussed in Chapter 12, the number of standard deviations you would use for the lot and process variation should come directly from the occurrence rate that was set in the risk documents (converted from a percent to a standard deviation (σ) using a normal distribution curve (assuming your data are normally distributed)).
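A minimal sketch of the arithmetic behind this example, assuming three confounded sources with equal standard deviations of 0.9, a mean of 95, and limits of 90 and 100. The key point is that independent variances add, while standard deviations do not:

```python
import math

sigma_lot = sigma_proc = sigma_meas = 0.9
mean, lsl, usl = 95.0, 90.0, 100.0

# Independent (confounded) variances sum to the total variance.
sigma_tot = math.sqrt(sigma_lot**2 + sigma_proc**2 + sigma_meas**2)  # ~1.56

# Worst case: everything off in the same direction by 3 standard
# deviations of the combined distribution.
worst_case = 3 * sigma_tot  # ~4.68, still inside the +/-5 half-tolerance
assert mean + worst_case <= usl and mean - worst_case >= lsl
```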




It is at this point that we need to distinguish the sample from the population (that is, the test results from the actual product). Some test method characteristics, such as accuracy (bias), may cause good samples to appear to be bad, but they can also result in bad samples appearing to be good. Additionally, due to limitations from sampling, the samples may not perfectly represent the overall population (for example, you might happen to grab a bunch of samples from one end of the range).

Stop and Ponder

The average medical patient would be shocked to know how often I’ve heard people say that their test methods are validated because they produced passing results.

What really matters when it comes to our validations? Is it more important that our products appear to be good or that they actually are good? When it comes to the probability of occurrence of harm (or the occurrence of the nonconformance that might lead to harm), it’s more important that the products are actually good. As such, the occurrence thresholds in the risk documents apply directly to the variation coming from the lot-to-lot and within-lot (process). (As described in Chapters 11 and 12, various formulas can be used to convert percentages and decimals into standard deviations (sigmas), based on normal distribution (bell curve) probability theory.)

Recall that back in Chapter 10 we described confidence as the measure of certainty that the testing represents the actual population. In other words, it’s a measure of how close we need our testing (measurements) to be to the true value of the population. Confidence intervals are generally described in terms of the estimates of the population mean ± a margin of error. Similarly, confidence levels describe the frequency of tests (with a given confidence interval) that contain the true value. That margin of error in the confidence interval is some constant (for example, c) multiplied by the standard deviation.
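For illustration, the percent-to-sigma conversion mentioned here can be done with the standard normal inverse CDF. The function name is mine; Python's standard-library statistics module is assumed:

```python
from statistics import NormalDist

def occurrence_rate_to_sigma(rate: float) -> float:
    """Convert a tolerable one-sided defect rate into the number of
    standard deviations the mean must sit inside the limit,
    assuming normally distributed data."""
    return NormalDist().inv_cdf(1.0 - rate)

# A tolerable rate of 0.135% corresponds to roughly 3 sigma.
print(round(occurrence_rate_to_sigma(0.00135), 2))
```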




In other words, the mean values of our actual product (with the variability from the lot-to-lot and within-lot (process)) need to be at least some number of standard deviations away from the limits, based on the occurrence rate. Additionally, the mean value of our test data needs to be within some number of standard deviations of the mean value of the actual product, based on the confidence level we want. Therefore, the total acceptable variance can be expressed as:

σ²TotAcc = (cX σTot)² = (cO σLot)² + (cO σProc)² + (cS σMeas)²

Where: σ²TotAcc is the total acceptable variance, cX is a constant, cO is the threshold number of standard deviations (based on the occurrence rate in the risk documents), and cS is the threshold number of standard deviations (possibly based on the severity rating in the risk documents).

To describe this in more detail, let’s start by removing the measurement system (dotted) line from the example above to just look at the variability coming from the lot-to-lot and the within-lot (process) (Figure 15.2). From this we can see that in this example +/− three standard deviations from the lot and process would still allow approximately 2.4% (the total available amount of the tolerance range left over) for the measurement system (that is, the test method).

Figure 15.2  Cumulative confounded variances—two sources. [Lot and Process curves over the 90–100 tolerance range.]




In this example, it is apparent that part of the tolerance (above and below the lot and process distribution) is available for the measurement system (test method) variability. We can actually calculate the standard deviation that our test method would need in order to still fit within the available range, using the confidence level required by our risk documents. This calculation is done by first understanding that the cX σTot from the equation on p. 90 could use up the entire available range:

cX σTot ≤ MIN(USL − µ, µ − LSL); therefore, (cX σTot)² ≤ [MIN(USL − µ, µ − LSL)]²

Where: USL is the upper specification limit, LSL is the lower specification limit, µ is the mean of the population, and cX σTot, USL − µ, and µ − LSL are positive numbers (this is important—as you’ll see later).

Notice that this is surprisingly similar to the equation on p. 86 (“5.15 × S” is roughly the number of standard deviations on both sides of the mean for 99% confidence and reliability, or cX = 2.575). If you are a statistician, you may also recognize how similar this is to a Ppk equation. However, the two equations on pp. 90 and 91 instead use your occurrence and severity to adjust the values based on the associated risks rather than just using a default constant.

The equations on pp. 90 and 91 can then be combined and solved for σMeas. This will provide the maximum test method standard deviation (σMeas):

σMeas ≤ √([MIN(USL − µ, µ − LSL)]² − [(cO σLot)² + (cO σProc)²]) / cS

In the aforementioned example, using a 99% confidence level, the calculated maximum for σMeas is approximately 1.25%. Using a 95% confidence level instead, the calculated maximum for σMeas would be approximately 1.65%.
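The combined equation can be checked numerically. This sketch (function and variable names are mine) reproduces the chapter's example with cO = 3 and cS = 2.575 for 99% confidence or 1.96 for 95%:

```python
import math

def max_sigma_meas(usl, lsl, mu, sigma_lot, sigma_proc, c_o, c_s):
    """Maximum measurement-system standard deviation that still fits
    within the available tolerance."""
    half = min(usl - mu, mu - lsl)  # distance to the nearest limit
    avail = half**2 - (c_o * sigma_lot)**2 - (c_o * sigma_proc)**2
    if avail <= 0:
        return 0.0  # lot and process variation already consume the tolerance
    return math.sqrt(avail) / c_s

# Centered example: mean 95, limits 90-100, sigma_lot = sigma_proc = 0.9.
print(round(max_sigma_meas(100, 90, 95, 0.9, 0.9, 3, 2.575), 2))  # ~1.25
print(round(max_sigma_meas(100, 90, 95, 0.9, 0.9, 3, 1.96), 2))   # ~1.65
# Offset example: mean 96 instead.
print(round(max_sigma_meas(100, 90, 96, 0.9, 0.9, 3, 2.575), 2))  # ~0.46
```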




This is a nice example in which the data are perfectly centered; however, we all know that never happens in real life. So, here’s a similar example (Figure 15.3) with the data offset to have a mean value of 96 instead:

Figure 15.3  Cumulative confounded variances—two sources, offset. [Lot and Process curves with the mean shifted to 96, over the 90–100 tolerance range.]

In this case, because the data are not centered, the test method has to be significantly better. The total amount of the tolerance range remaining (2.4%) is the same (we just shifted the curves; we didn’t change their size); however, we now only have about 0.18% left between the within-lot (process) range and the upper limit. As a result, we have much less “available” for the test method.

Figure 15.4  Cumulative confounded variances with measurement system threshold. [Lot, Process, and Measurement curves with the mean shifted to 96, over the 90–100 tolerance range.]

In Figure 15.4, using a 99% confidence level, the calculated maximum for σMeas is only approximately 0.46% (compared to 1.25% on the earlier graph). Requiring only a 95% confidence would expand the calculated maximum for σMeas to approximately 0.61% (compared to 1.65% on the earlier graph).




As you can see from these examples, the precision required for the test method depends on how much of the tolerance range is already being taken up by the lot-to-lot and within-lot (process) variability. It also depends on how close the process is running to the limits.

By using the occurrence rate from the risk document to determine the number of standard deviations the mean needs to be from the limit, we are effectively only using half of the definition of “risk.” By linking the severity rating from the risk document to the confidence level, it can be used to determine the confidence interval to ensure our testing represents the actual population. Then, by using both the occurrence and the severity, your TMVs will truly be “risk based.”

To make your TMVs additionally “risk based,” the severity and occurrence would also be used to determine the number of tests required for each level of precision (repeatability, intermediate precision, and reproducibility) using a distribution based on the sources of variance, similar to what was described in Chapter 13. Additionally, some tests, such as limit of detection, may use “confidence” as part of the evaluation (for example, ORA-LAB.5.4.5 describes, “method detection limit (MDL) is the minimum concentration of a substance that can be measured and reported with 99% confidence that the analyte concentration is greater than zero”). For these tests, the confidence could be scaled according to the severity of the associated risk, as described in Chapter 11.

Accuracy (Bias)

The total available range (from the population mean to the nearest specification limit) needs to accommodate any measurement inaccuracy (bias) as well as the variability from lot-to-lot, within-lot (process), and the measurement system.




Before we dig into this topic, it’s important to define the terms. According to NIST:

Accuracy is a qualitative term referring to whether there is agreement between a measurement made on an object and its true (target or reference) value.

Bias is a quantitative term describing the difference between the average of measurements made on the same object and its true value.

Based on these definitions, the distinction appears to be that accuracy describes the difference for a single value, while bias describes the difference for the average value. For the purposes of this discussion, the terms will be used interchangeably or with the combined phrases “accuracy (bias)” or “inaccuracy (bias)” to describe how far either a single value or the mean (average) may be skewed.

We have discussed previously how offsetting the data impacts the “available” amount of the tolerance range (see Figure 15.3). This offset can be the result of the process simply running closer to one end of the range, but it may also be the result of inaccuracy (bias) in our measurements.

As shown in Figure 15.3, the amount of “available” tolerance for your measurement system is affected by the amount of offset of the process; however, the tolerable amount of offset (from the process being offset and/or from inaccuracy (bias) of the measurement system) may also depend on the variability of the measurement system. In other words, the accuracy of your measurement system should also equate to the confidence you need (potentially from the severity of the associated risks, per Chapter 11), which creates another Catch-22: the amount of accuracy (bias) offset that a measurement system can have depends on how much of the tolerance is left over after the precision; however, the threshold for the precision depends on how offset the data are.




Using Accuracy to Determine the Precision Threshold

It is important to note that inaccuracy (bias) of the measurement system can result in good samples appearing to be bad, but it can also result in bad samples appearing to be good. This will depend on where the process is actually running and on the direction and extent of the inaccuracy (bias). Just because your testing had results that passed doesn’t mean your product is good. If your measurement system (test method) produces results that are always 10% off (that is, a 10% bias), your actual product could be well outside of your limits and still appear to be good. In Figure 15.5, you would be confident in getting a passing result, but you would not be confident that the interval contains the true value. This is what can happen if you don’t confirm that USL − μ and μ − LSL are positive numbers. The easiest way to deal with this is to ensure your product stays within the range of your data, which we will discuss in “Using Precision to Determine the Accuracy Threshold.”

Figure 15.5  Cumulative confounded variances with measurement system bias. [Lot, Process, and Measurement curves over an 80–100 range, with the Measurement curve shifted well past the limits by the bias.]

If your accuracy is known and is consistent (that is, your process has a bias in a specific direction), you can possibly use this to your advantage. As discussed earlier, if your process is already offset toward one of your specification limits, and your bias moves your testing even closer to that limit, you will likely need tight precision from your method (because there won’t be much available tolerance left). However, if your process is offset toward one of your specification limits, but your bias brings your testing back toward the center of the range, you effectively get a larger available tolerance range for your measurement system.




If you know the accuracy (bias) of your measurement system, you can adjust the formula for the calculation of the threshold of the precision (Figure 15.6):

σMeas ≤ √([MIN(USL − (µ + Bias), (µ − Bias) − LSL)]² − [(cO σLot)² + (cO σProc)²]) / cS

Figure 15.6  Cumulative confounded variances with measurement system bias in limits. [Lot, Process, Measurement − Bias, and Measurement + Bias curves over the 90–100 range.]

This will ensure that your results are within your thresholds, but as noted, it may not ensure your product is good, especially if your process drifts over time. If your bias causes your lot and process variation to extend outside of your measurements (the dashed line extends outside the dotted line), you still need to confirm that μ (the mean of the population) +/– the lot-to-lot and within-lot (process) variability (the dashed and solid lines) are actually producing product that is within your thresholds.

Using Precision to Determine the Accuracy Threshold

As discussed previously, the threshold for the precision depends on how offset the data are; however, the amount of accuracy (bias) offset that a measurement system can have also depends on how much of the tolerance is left over after the precision.

This “amount of accuracy (bias) offset that a measurement system can have” effectively describes the tolerable error in the estimate of the mean (the “E” in the example equations in Figure 10.1), often described as the difference between the sample mean (X) and the population




mean (μ), calculated as X − μ. The threshold for this value can be calculated as the distance of the sample mean (X) value from a pre-validation study (for example, ANOVA, DOE, etc.) to the minimum required distance of the population mean (μ) from the tolerance limit; that is, E = X − (LSL + cX σTot) or E = (USL − cX σTot) − X, depending on the direction of the bias. In other words, if your occurrence rate requires your population mean to be 3σ inside your limits, then measure from that/those point(s) to your sample mean (X). That’s how far your method could be skewed and still ensure you will get passing results, and that the product is actually within your 3σ buffer.

There is a problem with this strategy, though. Since accuracy (bias) is typically only in a single direction, if your data generated either of the dotted-line curves, there’s still a chance that your product could actually be outside of that curve (note that the dashed curve extends outside of both dotted-line curves) (Figure 15.7). Then, if your process were to drift, you could end up with passing results for bad product. To account for the possibility that your product could actually be outside of the tolerances even though your data are within, you would still need to also calculate and check the population mean and distribution.
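A sketch of the E calculation under these definitions, assuming σTot here is the occurrence-based lot-plus-process variation and continuing the centered 90–100 example (names and example numbers are mine):

```python
import math

def tolerable_error(x_bar, usl, lsl, c_x, sigma_tot):
    """Tolerable error in the estimate of the mean, toward each limit."""
    e_low = x_bar - (lsl + c_x * sigma_tot)    # bias toward the LSL
    e_high = (usl - c_x * sigma_tot) - x_bar   # bias toward the USL
    return e_low, e_high

# Centered example: lot and process sigma of 0.9 each, c_x = 3.
sigma_tot = math.sqrt(0.9**2 + 0.9**2)
e_low, e_high = tolerable_error(95.0, 100.0, 90.0, 3, sigma_tot)
print(round(e_low, 2), round(e_high, 2))  # symmetric because the mean is centered
```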

Figure 15.7  Cumulative confounded variances with bias (+/−) thresholds. [Lot, Process, Measurement, Measurement − Bias, and Measurement + Bias curves over the 90–100 range.]

However, rather than checking both your measured data (the dotted line) and your actual product (the dashed line), there is an alternative. You can define the limit of your accuracy (bias) to ensure your product variation remains within the bounds of your measured variation (that is, the ends of the dashed curve are always within the dotted curve). This will ensure that your passing results represent good product.




This means your accuracy (bias) would have to be less than or equal to the maximum amount your measurement system variance contributes to the overall variation. In these examples, this can be calculated by effectively determining the difference between the point at the end of the dotted curve and the point at the end of the dashed curve. If your accuracy (bias) remains less than or equal to that amount, then the variation of your population from lot-to-lot and within-lot (process) variations won’t fall outside of your measurement range (Figure 15.8). (Note that this assumes a stable measurement system.)

Figure 15.8  Cumulative confounded variances with bias (+/−) thresholds within measurement variance. [Lot, Process, Measurement, Process − Bias, and Process + Bias curves over the 90–100 range.]

Once you have confirmed the variances (including the measurement system variance), you can use formulas to calculate the accuracy (bias) threshold accordingly:

Accuracy (Bias) ≤ √((cO σLot)² + (cO σProc)² + (cS σMeas)²) − √((cO σLot)² + (cO σProc)²)

If cO is linked to the occurrence rate and the confidence level you use for cS is linked to the severity of the associated risks, this will ensure that the accuracy of your measurement system (test method) is also risk based. If you have already tested your accuracy and precision (for example, for a legacy product), you can use your TMV data and these strategies to determine if the accuracy and precision of your measurement system are applicable for your process and specification limits (tolerance range).
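The accuracy (bias) threshold can be checked numerically under the chapter's running assumptions (equal sigmas of 0.9, cO = 3, cS = 2.575; the function name is mine):

```python
import math

def max_accuracy_bias(sigma_lot, sigma_proc, sigma_meas, c_o, c_s):
    """Largest accuracy (bias) offset that keeps the product variation
    inside the measured variation."""
    with_meas = math.sqrt((c_o * sigma_lot)**2 + (c_o * sigma_proc)**2
                          + (c_s * sigma_meas)**2)
    without_meas = math.sqrt((c_o * sigma_lot)**2 + (c_o * sigma_proc)**2)
    return with_meas - without_meas

print(round(max_accuracy_bias(0.9, 0.9, 0.9, 3, 2.575), 2))  # ~0.65
```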




Simply use the equation on p. 96 to determine if your precision is appropriate and use the formulas in the equation on p. 98 to determine if your accuracy (bias) is appropriate.

Key Takeaways

• There are two main types of test methods for most medical devices: attribute and variable.
• The characteristics for your TMV depend on the type of test method.
• The characteristics may be tested in other processes, but evaluation of all the data should be documented in the TMV report.
• Precision testing should link to the occurrence thresholds and account for the other sources of variation.
• Accuracy (bias) and precision are linked.

Endnotes

1. National Institute of Standards and Technology (NIST), Engineering Statistics Handbook (Gaithersburg, MD: NIST, 2012), http://www.itl.nist.gov/div898/handbook/.
2. U.S. Food and Drug Administration (FDA), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/q2r1-validation-analytical-procedures-text-and-methodology-guidance-industry.
3. European Medicines Agency (EMA), ICH Q2, https://www.ema.europa.eu/en/ich-q2-r1-validation-analytical-procedures-text-methodology.
4. United Nations Office on Drugs and Crime (UNODC), Guidance for the Validation of Analytical Methodology and Calibration of Equipment used for Testing of Illicit Drugs in Seized Materials and Biological Specimens (New York: UNODC, 2009), https://www.unodc.org/unodc/en/scientists/guidance-for-the-validation-of-analytical-methodology-and-calibration-of-equipment.html.




5. U.S. Food and Drug Administration (FDA), Field Science—Laboratory Manual (FDA, 2021), https://www.fda.gov/science-research/field-science-and-laboratories/field-science-laboratory-manual.
6. Investopedia.com.

Section 3
Using Risk
●  ●  ●

Chapter 16  What is the Requirement?
Chapter 17  Risk-Based Decisions Regarding the Need for an Investigation
Chapter 18  Quality System Nonconformities


Chapter 16
What is the Requirement?
●  ●  ●

As discussed in Chapter 1, many people still think that risk documents exist just to satisfy an auditor and that they have no real place in a company’s everyday activities. Companies will often, improperly, substitute SOP-driven limits for their everyday work with complaints and nonconformances, including thresholds used for batch yields, rework, scrap, etc.

I worked at a pharmaceutical company many years ago, and we would say that a single event was an anomaly and two events were just a coincidence. Once there were three events, we would call it a trend and start an investigation. While this strategy was humorous and easy to remember, it wasn’t at all defendable and wasn’t based on risk. Sadly, I’ve seen similar decisions being used at numerous companies over the intervening years and still continue to see them. Even when the decision-making processes use thresholds other than three, the thresholds are often arbitrarily set by management or some engineer, and the limits are often not recorded in the risk documents. For example, one company used a $250 per part limit to determine if it would investigate; one company used “25 complaints” as a standard threshold for investigation; and another company required 95% yield rates for all manufacturing processes.

Annex I, Paragraph 3(e–f) of the EU MDR/IVDR states that manufacturers are to “evaluate…information from the production phase…[and]…the post-market surveillance system, on hazards and the




frequency of occurrence thereof, on estimates of their associated risks…” and that “based on the evaluation…amend control measures” (emphasis added). This sentence (and particularly the word “their”) may be the most overlooked part of the entire regulation. In other words, manufacturers are required to compare the severity and occurrence rates of their nonconformances (including yield rates, rework, scrap, etc.) from their manufacturing/production lines and their complaints against the severity and occurrences set in the risk documents. (See the “Risk Evaluation vs. Benefit-Risk and Acceptability” section in Chapter 23 for additional discussion of the term evaluate.)

EN ISO 13485:2016, Section 8.3 Control of nonconforming product

8.3.2 Actions in response to nonconforming product detected before delivery
a) Take action to eliminate the detected nonconformity [e.g., rework]
b) Take action to preclude its original intended use or application [e.g., scrap, discard, deface, convert to an engineering or sales/marketing sample, etc.]
c) Authorize its use, release, or acceptance under concession [e.g., use-as-is (UAI),* concession]

8.3.3 Actions in response to nonconforming product detected after delivery [e.g., field actions, recalls, advisory notices]

* One client I worked with had a great approach to UAIs. He contended that if you were going to say nonconforming product was acceptable to use, then your specification wasn’t characterized properly—and you had to change the specification.




21 CFR 820.90 is specific to nonconforming “product.” It delineates the activities that are required by the manufacturer to deal with the pile of bad product, specifically, the “identification, documentation, evaluation, segregation, and disposition.” However, beyond dealing with the pile of product, the regulation includes an additional requirement: “The evaluation of nonconformance shall include a determination of the need for an investigation.” Notice that nowhere in this section of the regulation does it describe performing an investigation. Per the regulation, you only determine “the need” for one. It is recommended that you use the severity and occurrence thresholds in your risk documents for this determination. (Then you could say your processes really are “risk based.”) Once you have determined that you need an investigation, 21 CFR 820.100 states where the investigation is required to happen—in a CAPA. (See Chapter 27 for additional discussion regarding where to perform an investigation.)

Key Takeaways

• NC (production phase) and complaint (post-market) data are required to be evaluated against the severity and occurrence of their risks (as set in the risk documents), not some arbitrary threshold set by management in an SOP.
• The “need for an investigation” (that is, CAPA) should be based on risk.

Chapter 17
Risk-Based Decisions Regarding the Need for an Investigation
●  ●  ●

As discussed earlier, per ISO 14971, risk is the “combination of the probability of occurrence of harm and the severity of that harm.” Therefore, if your decision is to be truly “risk based,” you need to evaluate both the severity and the occurrence rate of the events (Figure 17.1). You should have identified the sequences of events and the associated harms for the product during clinical/performance trials (or from other sources during development), and you should have documented in the risk documents everything that has happened and those things you think may happen. If you organized your risk documents (for example, FMEAs) to align with the steps in ISO 14971, then you have identified “initial” and “residual” risks.

[Figure 17.1 Risk-based decision tree: once an NC is confirmed, check whether the risk threshold is exceeded. If yes, open a CAPA/SCAR. If no, decide whether to investigate anyway; if so, open a CAPA/SCAR; otherwise, track/trend.]



The initial risk estimations (severity and occurrence ratings assigned) represent the thresholds that you have encountered previously, validated to, and are willing to accept. These initial risk estimates represent the rates prior to the implementation of your risk controls. Production events (for example, some in-process nonconformances) and pre-production events (for example, incoming inspection nonconformances) generally occur prior to the application of risk controls and should be compared against the initial risk ratings.

The residual risk estimations (severity and occurrence ratings assigned) represent the thresholds that you have validated your risk controls to and are willing to allow in the market. Post-production events (for example, final release nonconformances, complaints) occur after the application of risk controls and should be compared against the residual risk ratings.

Pet Peeve: If someone on your team or in your management is still holding onto the outdated concept of "zero defect," their arguments on the topic are not based in reality. Yes, Virginia, there is no Santa Claus in risk management. Bad things are going to happen. It's everyone's job to make sure they happen as infrequently as possible, but bad things still might happen. It's the purpose of the risk documents to identify, realistically, how often we can expect and will allow those bad things to happen.

Per the ISO 14971 standard, the severity of harm listed in a risk document represents a “measure of the possible consequences of a hazard.”




When evaluating an event, the actual severity of the event is compared against the "possible consequences" (severity rating) assessed in the risk document. There are three possible outcomes of this comparison:

1. The event was not previously assessed in the risk document (for example, a new nonconformance/sequence/harm). If this is the case, why didn't we see this event in our clinical/performance studies, and why didn't we expect this event to happen? We need to investigate.

2. The actual severity is worse than (higher than) the severity rating assessed in the risk document. If this is the case, why is this event causing worse harm than we saw in the clinical/performance studies? Why didn't we know or expect that it could be this bad? What has changed? We need to investigate.

3. The actual severity is less than or equal to the severity rating assessed in the risk document. It's right where we expected it to be, so let's check the occurrence rate.

The occurrence rate set in the risk documents represents how much the company is willing to tolerate for the issue. This can be alternately stated as the tolerable percent defective or the lot tolerance percent defective (LTPD). (See Chapter 12 for additional discussion of occurrence rates and LTPD.)

When evaluating an event, the actual rate of occurrence is compared against the tolerable occurrence rating assessed in the risk document. Since we already identified whether the event was in the risk document while evaluating the severity, there are only two possible outcomes of the occurrence rate comparison:

1. The actual rate is higher than the occurrence rating assessed in the risk document. If this is the case, why is it happening more often than we expected (for example, more often than it did during the clinical/performance studies)? We need to investigate.




2. The actual rate is less than or equal to the occurrence rating assessed in the risk document. It's no worse than we expected, and it's happening at a rate we expected. Dump it into a bucket and track/trend.

If your post-market surveillance (PMS) process identifies new harms or higher occurrence rates than what was seen during the clinical/performance studies (your studies or those found in the literature), part of your investigation could include additional clinical/performance studies. These changes may indicate something has changed in the way people are using the product, risks to a special population, and so on that may not have been previously studied.

One additional note: A supplier corrective action request (SCAR) may go by various names; but remember, it's just a CAPA that may involve someone outside your organization helping you with the investigation and being responsible for implementing the corrective action. As the manufacturer, however, you're still accountable to ensure it's done right and meets the regulatory requirements of a CAPA.
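The comparison logic in this chapter can be sketched as a small decision function. This is a minimal illustration only, not a prescription from the book; the rating scale, field names, and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """A line item from a risk document (e.g., an FMEA row)."""
    severity: int           # documented severity rating (e.g., on a 1-5 scale)
    occurrence_rate: float  # tolerable occurrence rate (e.g., LTPD as a fraction)

def needs_investigation(entry, actual_severity, actual_rate):
    """Compare an event against its documented thresholds; entry is None
    when the event was never assessed in the risk document."""
    if entry is None:
        return True, "event not previously assessed in the risk document"
    if actual_severity > entry.severity:
        return True, "actual severity exceeds the documented rating"
    if actual_rate > entry.occurrence_rate:
        return True, "actual occurrence rate exceeds the tolerable rate"
    return False, "within thresholds; track/trend"

# Within thresholds: no investigation, just track/trend.
print(needs_investigation(RiskEntry(severity=3, occurrence_rate=0.01), 3, 0.004))
# A never-assessed event always triggers an investigation.
print(needs_investigation(None, 2, 0.001))
```

In practice, the thresholds would be read from the product's risk file rather than hard-coded, but the decision branches mirror the three severity outcomes and two occurrence outcomes described above.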

Key Takeaways
• The thresholds you set in your risk documents are what determine when you take action.
• Yes, it's really that simple.

Chapter 18 Quality System Nonconformities ●  ●  ●

As discussed at the end of Chapter 16, 21 CFR 820.90 is specific to nonconforming "product." However, the next section in the regulation (21 CFR 820.100 Corrective and preventive action) states that CAPAs are for "investigating the cause of nonconformities relating to product, processes, and the quality system." When viewed together, it becomes apparent that your NCR process needs to address more than just nonconforming "product"; it also needs to address nonconforming processes and nonconforming quality system elements.

The severity criteria examples provided in Chapters 9 and 11 include criteria for "persons" (patient/process operator), "property" (business/process), and "the environment" in order to align with EN ISO 14971; however, those examples do not include any criteria for quality system (compliance) risks. Some people really struggle with comparing compliance risks with the other types of risks; however, all four pillars of risk apply to your products (Figure 18.1).

You may ask, "How can you compare 'death' to a compliance risk?" From an emotional perspective, we can all relate to this concern; however, from a pragmatic business perspective, this is one of those hard decisions that sometimes has to be made. The correlation of business/process, environment, and compliance risks to something as catastrophic as death may also depend on the industry or type of product you manufacture. For example, if you make automated external defibrillators, death may not be unexpected for patients and wouldn't be viewed the same as death caused by a tongue depressor. While the life of every patient is important and valuable, the inherent conditions of patients in some industries may make it easier to correlate to other types of risks. If this correlation is difficult or doesn't make sense for your product, an alternative could be to put "death" in its own severity rating and simply mark the comparable descriptions for the other pillars as "N/A."

[Figure 18.1 The four pillars of risk: Patient/User, Business/Process, and Environment (ISO 14971), plus Compliance (ISO 13485, 21 CFR 820/809, and EU MDR/IVDR).]

The EU MDR and IVDR describe requirements for products to comply with applicable "harmonised standards, CS (common specifications) or other solutions" (for example, Articles 8-9 and Annex II Paragraphs 4(c-d)). Even ISO 13485 describes requirements for the quality management system processes and "medical device type or medical device famil[ies]" to comply "with applicable regulatory requirements" (ISO 13485 Section 4.1.3(e) and Section 4.2.3). As such, if we are required to comply with regulations, standards, and other requirements, we should assess and evaluate the risks associated with noncompliance.

Some companies generate a separate table just for the evaluation of compliance risks; however, by expanding the example severity criteria table to include "compliance" risks, a single set of aligned criteria can be used. Putting all of your criteria into a single, aligned table will allow you to have apples-to-apples discussions when trying to evaluate multiple events (for example, to prioritize CAPAs or remediation activities) and when trying to compare different types of risks. This will allow you to include the evaluation and control of compliance risks (for example, if your product doesn't align with a CS) in your product risk documents.

Chapter 17 explained how the NCR process should evaluate the severity and occurrence of an issue against the threshold delineated in the associated risk document. By classifying each NCR according to the types described by the Code of Federal Regulations (CFR) (that is, "product, process, [or] the quality system"), deciding what risk document to use becomes intuitive. For product nonconformances, the dFMEA would typically apply. For process nonconformances, the pFMEA would typically apply. By using a combined set of criteria in your NCR process, you can then use your qualitative occurrence criteria and your default evaluation criteria to determine whether your quality system risks are acceptable or unacceptable (that is, if you need to investigate). Once the NCR process matures using these concepts, some companies develop (and document) quality system risk thresholds for recurring issues/categories of issues.

Some manufacturers may be concerned about the possibility of misuse of the "compliance" criteria when evaluating product risks. For those who are concerned, the single, aligned table could be applied to their business, project, or NC/CAPA risk process (Table 18.1). Then the criteria could be adjusted in the risk management plan for specific products/product-families.

Note: If the people/process/environment criteria are used in multiple processes (for example, risk management, complaint handling, NCR, and CAPA), it is recommended that the default severity criteria align across the systems.
The criteria generally aren’t adjusted in the complaint, NCR, and CAPA processes but can be adjusted for specific products/product-families in the risk files. By ensuring the default severity criteria align in your systems, you give everyone the same starting point on which to build. (Reminder: As described in Chapter 5, the quantitative occurrence criteria should be product/product-family specific, but the qualitative descriptions might use default language.)
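The routing of an NCR to the risk document that supplies its thresholds can be sketched as a simple lookup. The dFMEA/pFMEA mapping follows the text; representing the quality-system branch as the combined "default criteria" table is an assumption for illustration.

```python
# Hypothetical routing of an NCR to the risk document whose severity/occurrence
# thresholds apply, keyed by the CFR nonconformity types.
NCR_RISK_DOCUMENT = {
    "product": "dFMEA",                    # design FMEA thresholds
    "process": "pFMEA",                    # process FMEA thresholds
    "quality system": "default criteria",  # single, aligned criteria table
}

def risk_document_for(ncr_type: str) -> str:
    """Return which risk document supplies the thresholds for this NCR type."""
    return NCR_RISK_DOCUMENT[ncr_type.lower()]

print(risk_document_for("Product"))  # dFMEA
```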

Table 18.1 A comprehensive example of severity criteria.

Severity 5, Catastrophic (confidence required for validation: 99%)
• Patient/Process Operator: Death/life-threatening
• Business/Process: Process is inoperable or process is unable to produce product within specification
• Environment: Extensive, irreparable environment, facility, or equipment damage (e.g., destruction of building or room)
• Compliance: Noncompliance with government regulations—capable of jeopardizing site, facility, or corporation regulatory approvals or leading to regulatory warnings, certification revocation, or other undesirable actions

Severity 4, Serious (confidence required for validation: 97.5%)
• Patient/Process Operator: Serious injury (permanent impairment of a body function or permanent damage to a body structure or necessitates intervention to preclude permanent impairment/damage)
• Business/Process: Process is disrupted, adjustment or temporary process is needed to produce product or tool/equipment repair required; components or materials all scrapped
• Environment: Irreparable damage to the environment, facility, or equipment (e.g., destruction of equipment)
• Compliance: Noncompliance to a specific quality system element from governing regulations or standards (e.g., process validation system not compliant with regulations)

Severity 3, Moderate (confidence required for validation: 95%)
• Patient/Process Operator: Temporary or reversible illness, injury, or impairment. May require intervention; however, intervention is not to preclude permanent impairment/damage.
• Business/Process: Process performance degraded and subsequent processes or process steps are affected or tool/equipment repair required; components or materials sorted (not at work station) with a portion being scrapped
• Environment: Environmental, facility, or equipment damage that is reversible only with professional remediation (e.g., damage to equipment requiring engineering repair)
• Compliance: Noncompliance with external submissions or filings (documentation) that are isolated to a specific product or product family (e.g., test method failure)

Severity 2, Minor (confidence required for validation: 90%)
• Patient/Process Operator: Temporary, reversible, or non-serious illness, injury not involving intervention, or moderate discomfort/stress
• Business/Process: Process performance degraded, but is operable and able to produce product without effect to subsequent processes or process steps; components or materials reworked with little to no associated scrap at the station
• Environment: Environmental, facility, or equipment damage that is reversible without remediation (e.g., damage to equipment repairable by operator or destruction of other supplies)
• Compliance: Noncompliance with internal requirements not associated with governing regulations and standards (e.g., process steps within a work instruction or minor internal audit finding)

Severity 1, Negligible/cosmetic (confidence required for validation: 80%)
• Patient/Process Operator: Inconvenience, temporary discomfort, nuisance or cosmetic impact, or minor discomfort/stress
• Business/Process: Negligible/slight disruption to process
• Environment: Virtually no negative effect. Cosmetic damage to the environment, facility, or equipment.
• Compliance: Presents a risk of negatively impacting business performance factors (e.g., scrap rate, rejection rate)
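The confidence column of Table 18.1 pairs each severity rating with a required validation confidence. Under a zero-failure ("success-run") sampling plan, a common way to turn confidence C and reliability R into a sample size is n = ln(1 − C)/ln(R). The sketch below applies that formula to the table's confidence values; the formula and the reliability value are illustrative assumptions consistent with the confidence/reliability discussion in Chapters 11 and 12, not a prescription from the table.

```python
import math

# Confidence required for validation, by severity rating (from Table 18.1):
CONFIDENCE_BY_SEVERITY = {1: 0.80, 2: 0.90, 3: 0.95, 4: 0.975, 5: 0.99}

def success_run_sample_size(severity: int, reliability: float) -> int:
    """Zero-failure ("success-run") sample size: testing n units with no
    failures demonstrates the given reliability at the confidence level
    tied to the severity rating, where n = ln(1 - C) / ln(R)."""
    confidence = CONFIDENCE_BY_SEVERITY[severity]
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# A severity-5 risk (99% confidence) demonstrated at 95% reliability:
print(success_run_sample_size(5, 0.95))  # 90 samples
```

The same reliability target thus demands larger samples as severity, and therefore required confidence, increases.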






Key Takeaway
• A single, aligned set of criteria will help you address the four pillars of risk.


Section 4 Information for Users/Patients ●  ●  ●

Chapter 19 Two Types of Information
Chapter 20 Warnings, Precautions, Contraindications, and Adverse Reactions/Events
Chapter 21 Information for Safety and Training
Chapter 22 Residual Risk


Chapter 19 Two Types of Information ●  ●  ●

Information for Users/Patients may be communicated in different ways including, but not limited to, the following:
• Labels and labeling
• Instructions for use (IFU)
• Directions for use (DFU)
• Package inserts
• Post-operative care instructions
• Implant card
• Installation manual
• Operator's/owner's manual
• Operating instructions
• Quick reference guide
• Software displays
• Training materials
• Marketing materials
• Advertising
• Patient brochure
• Safety data sheet (SDS)

“The distinction between labeling and advertising, both of which draw attention to the article to be sold, is often superficial or nebulous. Both are used for a similar purpose, i.e., to provide information about the product. Thus, according to an appellate court decision: ‘Most, if not all, advertising is labeling. The term ‘labeling’ is defined in the FD&C Act as including all printed matter accompanying any article. Congress did not, and we cannot, exclude from the definition printed matter which constitutes advertising.’” (FDA 89-4203) Note that all three branches of the government are aligned: advertising is labeling.




At this point, you have designed your product to control the associated risks, built your process to control the design, validated your design and process, and you use risk as part of your continuous monitoring. Beyond the design of the product itself, you also need to consider the design of your labeling and other communications. The information you provide to users and patients can help control risks, but it can also communicate those risks.

In Chapter 7, we discussed how to link your IFU for the product or for a similar product to the use-risk assessment, but we didn't address how to build the IFU in the first place. The instructions for the users and patients must be written so as to help the user safely use the product; however, this creates a Catch-22: we're supposed to write the steps to control risks, but we can't assess the risks of a user not following the steps in the IFU until we've written the IFU. Because of this Catch-22, the process must be iterative.

If you have instructions for a similar product, you can draft instructions, assess the risks, and then rewrite the instructions and reassess the risks until you're satisfied they have reduced the risks as far as possible (AFAP). If you don't have instructions from a similar product (for example, a new technology), then you may assess the use risks from the user needs and develop the design and process accordingly. Then, using the identified risks, you can build the instructions and feed them back into the risk process.

Note that per EN ISO 14971:2019, the necessary information shall be included "in the accompanying documentation." The standard defines "accompanying documentation" to include "materials…for the installation, use, maintenance, decommissioning and disposal" of the device.
It also extends the definition beyond written/printed documents to include "auditory, visual, or tactile materials and multiple media types."

As you develop the information for your users and patients, you need to keep in mind that there are two types of information that you need to communicate: instructions and information. As described in the EU MDR and IVDR (Annex I, Paragraph 4) and EN ISO 14971:2019 (Sections 7.1 and 8), the third type of risk control measure is to "provide information for safety (warnings/precautions/contra-indications) and, where appropriate, training to users." They go on to require that "Manufacturers shall inform users of any residual risks."

• Information for safety and training "is instructive and gives the user clear instructions of what actions to take or to avoid, in order to prevent a hazardous situation or harm from occurring" and may be considered a risk control measure. (ISO/TR 24971 Annex D)

• The disclosure of residual risks is informative but not instructional (for example, "…known carcinogen"), and should not be used to reduce the probability of harm.

It is important to understand the distinction between these two, and the language used in the regulations and standards adds to the confusion. It's unfortunate that the regulations and ISO 14971 chose to describe the third type of risk control as "information for safety." While it is certainly information, it is a specific type; specifically, in order to affect safety, it needs to be instructional. Since the reduction of risk requires actionable instructions, it would reduce confusion if it had been called "instructions for safety."

For the information to help control a risk, it needs to make people change their behavior in some way. The way we change people's behavior is to provide them instructions, not just information. As described in ISO/TR 24971, the instructions have to tell people "what actions to take or to avoid, in order to prevent a hazardous situation or harm from occurring." If we simply inform users/patients that a product can cause cancer, we're not telling them what they can do to reduce this risk. Knowing that a product can cause cancer may help them decide not to use it, but it doesn't change the risk once they decide to use it.

Key Takeaways
• There are two types of information for users/patients.
• The wording for your labeling, training, and marketing materials should come from your risk documents.

Chapter 20 Warnings, Precautions, Contraindications, and Adverse Reactions/Events ●  ●  ●

The IFUs for most products include separate sections for warnings, precautions, and contraindications; however, some companies may not know that they need to include sections for adverse reactions/events and potential adverse reactions/events. Additionally, many companies struggle to determine what information to put into each section, and how to arrange and organize the sections. The procedures at these companies may not clearly delineate which of these sections are supposed to include instructions and which are just informational.

The regulations (for example, 21 CFR 801, 21 CFR 809.10, EU MDR/IVDR) require that manufacturers provide "information for safety (warnings/precautions/contra-indications)," but they don't provide specific definitions or explanations of these terms. Fortunately, the FDA provides some old, but still applicable, guidance¹:

• Labeling—Regulatory Requirements for Medical Devices (FDA 89-4203), August 1989
• Device Labeling Guidance #G91-1 (Blue Book Memo), March 1991

The European Commission also provides A Guideline on Summary of Product Characteristics (SmPC). While this guideline specifically addresses requirements for the application process of "medicinal products," it provides some good insight for medical device companies as well.


Not only do these guidance documents provide explanations of the terms, but they also describe how to arrange the sections, and they clarify that not all the sections may be necessary. For example, per G91-1, "the 'Contraindications' section shall immediately follow the 'Indications for Use' section of the labeling. If no contraindications are known, this section of the labeling should state 'None known.'" G91-1 does not include any similar requirements to include the other sections; however, as mentioned in the prior chapter, "manufacturers shall inform users of any residual risks." Based on G91-1 and the regulations/standards, the contraindications and adverse reactions sections are required, but the warnings, precautions, etc. would only be included as needed.

It is also interesting to note that while G91-1 only describes arranging the adverse reactions according to their risk ("clinical significance as determined by their severity and frequency"), the sections of the guidance document itself are arranged according to their significance. For example, contraindications are before warnings, which are before precautions, which are before adverse reactions. (Per G91-1, special patient population information is to be described in the other sections, as appropriate.)

While FDA 89-4203 broadly describes requirements regarding the format and order of information for the various types of products, it doesn't address warnings, precautions, contraindications, etc. with the same level of detail. The European Commission (EC) guideline (SmPC) provides additional discussion of special populations (for example, elderly, renal impairment, particular genotype, pediatric, etc.).
It also includes some good reminders to consider the environmental risks and any special precautions for disposal, and clarifies that the occurrence rates should use the “crude incidence rates (not differences or relative risks calculated against placebo or other comparator).” The EC guideline (SmPC) reinforces the need to order the sections “by importance” and states, “the order of warnings and precautions should in principle be determined by the importance of the safety information provided.”


By reviewing these guidance documents, we can determine what sections would contain information for safety and training, and what sections would contain residual risks:

• Information for Safety and Training
  – Contraindications
  – Warnings
  – Precautions
  – Special Patient Populations

• Residual Risk(s)
  – Adverse Reactions/Events
  – Potential Adverse Reactions/Events

While not identified in the guidance documents as a separate section, directions for use (DFU) may include the step-by-step and special instructions to "use the device safely and for the purposes for which it is intended" (G91-1). Since the DFUs are instructional, they would also be part of the information for safety and training. As risk controls are being considered, some may point to specific steps within the directions and/or the associated contraindication, warning, etc.

As a risk control option, the information for safety and training may be documented for the control of individual risks and/or nonconformances within the applicable risk document (for example, FMEA). In other words, you may put information on your labeling to address a single line item in your FMEA or the information may apply to several lines—just be sure to document it in your risk documents.

The information for users/patients should be documented in the RMR to delineate the order of all the risk controls and summarize the residual risks. The review of the information for users/patients should be performed prior to the evaluation of overall residual risk. One of the added benefits of recording the information in a separate section within your RMR is that it will give your labeling, training, marketing/advertising, and other groups a single location to go to for the information.




Key Takeaway
• Some of the sections in your labeling are instructional and some are just informative.

Endnote

1. U.S. Food and Drug Administration (FDA), Device Labeling (FDA 2020), https://www.fda.gov/medical-devices/overview-device-regulation/device-labeling.

Chapter 21 Information for Safety and Training ●  ●  ●

As indicated in Annex I, Paragraph 2 of the EU MDR/IVDR, risks must be reduced "as far as possible without adversely affecting the benefit-risk ratio." Since, per the EU MDR/IVDR, the third option is information for safety and training, this means that instructions for what "action(s) to take or not to take" need to be considered for all risks.

FDA 89-4203 delineates "the FORMAT and ORDER" of the information for labeling, and G91-1 provides guidance on determining in which section the risk control should be placed, for example:

• Contraindication—"risk of use clearly outweighs any possible benefit." Note: "The 'Contraindications' section shall immediately follow the 'Indications for Use' section of the labeling. If no contraindications are known, this section of the labeling shall state 'None known.'"

• Warnings—"serious adverse reactions and potential safety hazards…and steps that should be taken." Note: The warnings, precautions, and special patient populations sections would only be included as needed.

While the criteria for a contraindication are fairly clear-cut, deciding which risks are associated with warnings and which are just precautions is more of a gray area. Because warnings are associated with "serious" issues, this is a rare potential use for critical to quality characteristics (CTQs). If you have defined the risks in your top two severity categories (for example, death and serious injury) as CTQs, then this could be used as a tool for grouping the associated risk controls into warnings (for CTQs) or precautions (for non-CTQs). If you didn't waste your time defining risks as CTQs, you can just use the associated severities to do the same thing.

If possible, representatives from the product design/development, complaint handling/post-market surveillance, risk management, and technical writing groups should work with a medical/clinical representative to determine the phrasing of the information for safety. The order of the information within each section may be based on the severity of the associated risk; however, the order may be adjusted by the medical/clinical representative. The medical/clinical representatives may even move information into higher-risk sections if they deem it necessary; this may indicate that the severity ratings for the associated risk need to be reviewed.

Note: G91-1 describes ordering the information according to the "clinical significance as determined by their severity and frequency," and the EC guideline (SmPC) describes grouping the adverse reactions by frequency, and then within each frequency grouping "in the order of decreasing seriousness." Each medical device manufacturer should determine which sequence ("severity then frequency" or "frequency then severity") is more applicable for organizing its products' risk information.

Reminder: The medical/clinical representative should be someone specifically familiar with the actual clinical use of the product (a doctor who has performed the surgery, a nurse who has used the product to treat patients, etc.) and/or treatment of the applicable condition. Regulatory and legal representatives may also need to be consulted to ensure the content and phrasing of the information for safety is appropriate.
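The severity-based grouping rule can be sketched in a few lines. The 1-5 scale, the threshold of 4 for the "top two" categories, and the example controls are assumptions for illustration, not from the book.

```python
# Sketch of the grouping rule: risk controls tied to the top two severity
# categories become "Warnings"; the rest become "Precautions".
def labeling_section(severity: int, top_two_threshold: int = 4) -> str:
    return "Warnings" if severity >= top_two_threshold else "Precautions"

# Hypothetical risk controls with their associated severity ratings:
controls = {"device may overheat during extended use": 5,
            "clean the device before reuse": 3}
for text, severity in controls.items():
    print(f"{labeling_section(severity)}: {text}")
```

The medical/clinical representative would still review and, if necessary, override this default grouping, as discussed above.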

Key Takeaway
• Information for safety and training must be considered for all risks and should be arranged according to severity but may be adjusted to the audience.

Chapter 22 Residual Risk ●  ●  ●

If you are an experienced risk management professional, then you likely already know that ISO 14971 doesn't always align with the EU MDR/IVDR (see Chapter 23 for additional discussion of some of the differences). One of the ways in which the standard differs from the regulations is in the handling of residual risks. While ISO 14971:2019 indicates that "significant" residual risks need to be disclosed, the EU MDR/IVDR do not limit the requirement to only "significant" risks and instead require that "any residual risks" be disclosed (emphasis added). Annex C of ISO/TR 24971 even warns against "communicating too many residual risks so that the user has difficulty understanding which ones are really important." However, Article 7 (c) of the EU MDR/IVDR states that it is "prohibited to…mislead the user or the patient…by failing to inform the user or patient of a likely risk."

The authors of the standard appear to have missed the point. While the manufacturer may consider the harms as insignificant, the patients who may be affected by them probably won't think they're insignificant—especially if they weren't informed of the possibility. Also, the list of harms (and how they are presented) doesn't have to be long and complicated. If a master harms list and/or harms summary table from an FMEA is available, all the identified harms for the product should be listed in the sections for residual risks. By summarizing the harms into a single list or table, it becomes a more manageable amount of information. Even for the trans-vaginal mesh products (with all of their issues), all the harms associated with the procedure and the product fit into a few short paragraphs.

As with the information for safety and training, a multidisciplinary team including a medical/clinical representative is recommended to determine the phrasing used to disclose the risks; however, per G91-1, "adverse reactions should be listed in descending order according to their clinical significance as determined by their severity and frequency." It is therefore recommended that the information be sequenced in descending order of severity (as delineated in the applicable master harms list, if available, or other applicable risk estimation tool) for the applicable harm, then in descending order by the sum of the projected probability of occurrence of the applicable harm. Where multiple risks have been assigned equivalent projected probabilities, the order may be adjusted as actual data are obtained during post-market surveillance.

• Risks that have actually occurred (for example, identified during pre-clinical/performance studies, clinical/performance trials, and/or during actual use) would be categorized as adverse reactions/events.

• Risks that have been identified in the associated risk documents as potentially resulting as an effect of the device but have not been seen in the studies, trials, or actual use would be categorized as potential adverse reactions/events.
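The recommended sequencing can be expressed as a simple sort: descending severity first, then descending projected probability of occurrence. The harms, ratings, and probabilities below are invented for illustration only.

```python
# Hypothetical harms list, ordered per the sequencing recommended above.
harms = [
    {"harm": "minor irritation", "severity": 2, "probability": 0.02},
    {"harm": "infection",        "severity": 4, "probability": 0.001},
    {"harm": "bruising",         "severity": 2, "probability": 0.05},
]
# Sort key: higher severity first; within equal severity, higher probability first.
ordered = sorted(harms, key=lambda h: (-h["severity"], -h["probability"]))
print([h["harm"] for h in ordered])  # ['infection', 'bruising', 'minor irritation']
```

Note that severity dominates: the rarer but more severe "infection" is listed before the more frequent but less severe "bruising."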

Key Takeaway
• All risks must be disclosed.

Section 5 Other Information ●  ●  ●

Chapter 23 More Bad Guidance in ISO 14971:2019
Chapter 24 Linking Your FMEAs (NCEAs)
Chapter 25 Overlapping Definitions
Chapter 26 Quality Data for Post-Market Surveillance
Chapter 27 Why Investigations Are Illegal in a Nonconformance Report (NCR)
Chapter 28 Don't Blame the People
Chapter 29 Flowcharts


Chapter 23 More Bad Guidance in ISO 14971:2019 ●  ●  ●

As discussed throughout this book, the examples in the ISO 14971 standard have led the manufacturers of medical devices astray for years. The understandable focus of the standard on hazards, hazardous situations, and harms, and the examples of P1 and P2, are not practical or usable for most manufacturers. However, as discussed, manufacturers have tried for years to fit the square peg of their data into the round hole of these examples. Beyond the cost of the time and effort wasted in the futile attempt to align with these examples, the attempt to follow them has kept manufacturers from properly building risk management processes that would efficiently identify and drive the corrective actions necessary to improve the safety of products for users and patients. Beyond this fundamentally bad guidance, the recent updates from ISO 14971:2007 and EN ISO 14971:2012 to EN ISO 14971:2019 have also created, or failed to correct, content deviations between the standard and the associated EU regulations (for example, the EU MDR and EU IVDR). This chapter will identify several of these deviations that have not been addressed in the 2019 revision; however, before getting into the deviations, we'll first review how harmonized standards are supposed to link to the regulations.

Link between EU MDR/IVDR and "harmonised standards"

If you work in a regulatory department, you probably know what a "harmonised standard" is. However, if you ask most people what the difference between an ISO standard, an EN ISO standard, and a BS EN ISO standard is, they may timidly admit that they don't know. Since there is so much focus on ISO 14971 when discussing risk management, it's important to understand the connections between the groups involved and which ones can be used to demonstrate compliance. The International Organization for Standardization (ISO) develops and publishes international standards. A European Standard (Europäische Norm – EN) is a document that has been ratified by one of the three European standards organizations (CEN, CENELEC, or ETSI). A British Standard (BS) is the UK implementation of a standard.

The link between the EU medical device regulations and the standards starts with Article 8 of the EU MDR and IVDR. In this article, the regulations delineate that a standard is considered "harmonised" when it has "been published in the Official Journal of the European Union." While there is a subtle difference between a standard being approved and being published, for the typical person working at a manufacturer, a quick way to determine if a standard is harmonized is to confirm the standard has "EN" at the beginning of the title (for example, "EN ISO 14971" instead of just "ISO 14971"). To ensure it's not just approved but also published, consult the website for the journal.

There are two key points to remember when attempting to use a standard to demonstrate compliance of your product to the regulations. First, Article 8 of the EU MDR and IVDR states that if your device is "in conformity with the relevant harmonised standard, or the relevant parts of those standards," it "shall be presumed to be in conformity with the requirements of this Regulation…." Most readers stop there; however, the last part of that sentence ("…covered by those standards or parts thereof.") is pretty important. In other words, if your product complies with a standard and that standard doesn't cover everything required in the regulation, then you still need to prove your product complies with the rest of the regulation. As you will see in this chapter, there are several areas where the EN ISO 14971:2019 standard does not cover everything required in the EU MDR and IVDR.

Second, just because your QMS complies with EN ISO 13485 and your product's risk documents comply with EN ISO 14971 doesn't mean your product necessarily complies with the regulation. As discussed in the first point, standards only apply to the applicable parts of the regulation. There may also be other requirements beyond the quality system and risk management that apply to your product (for example, product-specific standards and common specifications).

Note that the FDA does not use harmonized standards but instead uses "recognized consensus standards." In general, these reference the original standard from the organization that created it (for example, ANSI, AAMI, ISO 14971:2019), not the EU harmonized (EN) version. According to the FDA's Standards and Conformity Assessment Program (S-CAP), "While manufacturers are encouraged to use FDA-recognized consensus standards in their premarket submissions, conformance is voluntary, unless a standard is incorporated by reference into regulation."

Practicability

In Section 4.2, Note 1, the EN ISO 14971:2019 standard states that manufacturers "can define the approaches to risk control: reducing risk as low as reasonably practicable, reducing risk as low as reasonably achievable, or reducing risk as far as possible without adversely affecting the benefit-risk ratio." This note appears to indicate that the concepts of as low as reasonably practicable (ALARP) or as low as reasonably achievable are still acceptable; however, Annex I, Paragraph 2 of the EU MDR and IVDR both state that "The requirement in this Annex to reduce risks as far as possible means the reduction of risks as far as possible [AFAP] without adversely affecting the benefit-risk ratio." In addition, the removal of the "content deviations" from the EN ISO 14971:2012 Annex Z's in EN ISO 14971:2019 has now removed the guidance that the reduction of risks as far as possible is to be done "without there being room for economic considerations." By removing this content deviation and including a note that allows the concept of ALARP again, the EN ISO 14971:2019 standard is providing guidance that could be detrimental to patient safety if manufacturers use economic considerations to decide against reducing risks because they don't deem it "economically practical." This guidance also creates the potential for a compliance risk if auditors identify the use of ALARP instead of the manufacturer reducing risks AFAP.

The concept of "practicability" is repeated in several places within the EN ISO 14971:2019 standard, including in Section 7.1, Note 4. This note states that the manufacturer "shall conduct a benefit-risk analysis of the residual risk" if it determines "that risk reduction is not practicable." This is not only bad guidance regarding the concept of reducing risks AFAP, but it also appears to indicate that manufacturers wouldn't need a benefit-risk analysis if they have reduced risks AFAP (or even ALARP). Section 7.4 continues this by stating that the benefit-risk analysis may be performed when "further risk control is not practicable." Since the EU MDR and IVDR do not include similar exemptions for the benefit-risk analysis, following the EN ISO 14971:2019 standard (by not performing a benefit-risk analysis) may result in a compliance risk for manufacturers.
EN ISO 14971:2019 Annex A is intended to be "informative" in order "to make this document more useful to manufacturers." However, once again the standard repeats the guidance (in A.2.7.1) that safe design and manufacture of a product is required only "if practicable." Additionally, in Annex A.2.7.3, the EN ISO 14971:2019 standard states that risk reduction is to continue "until further risk control is not practicable." However, Annex I, Paragraph 4 of the EU MDR and IVDR both state that "…manufacturers shall, in the following order of priority: (a) eliminate or reduce risks as far as possible through safe design and manufacture." Since the regulation requires that safe design and manufacture risk controls "shall" be implemented, there is no room for practicability.

An early draft of ISO/FDIS 14971:2019(E) included Annex Z's to address the relationship between the standard and various EU regulations. These annexes tried to clarify that the risk management process needs to comply with the applicable regulations; however, the annexes were removed from the final release of EN ISO 14971:2019.

There is one additional point regarding "practicability" for the EU IVDR. As noted, Annex I, Paragraph 2 of the EU IVDR requires risks to be reduced AFAP, without the option to consider practicability; however, several subsequent sections of Annex I of the EU IVDR do allow for practicability (for example, 10.3, 20.1(b), and 20.2(t)). In this regard, the EU IVDR itself is providing discrepant requirements. As such, it is recommended that manufacturers comply with the more stringent AFAP concept to better protect users and patients.

Acceptability

In Section 6, the EN ISO 14971:2019 standard states "if the risk is acceptable, it is not required to apply the requirements" for risk control, and the applicable risk control steps may be skipped. It goes on to state that "if the risk is not acceptable, then the manufacturer shall perform risk control activities." In Section 7.3, the EN ISO 14971:2019 standard states "if a residual risk is not judged acceptable…further risk control measures shall be considered." Annex I, Paragraph 2 of the EU MDR and IVDR both state that "The requirement in this Annex to reduce risks as far as possible means the reduction of risks as far as possible [AFAP] without adversely affecting the benefit-risk ratio." Additionally, the "content deviations" from EN ISO 14971:2012 Annex Z described that "the manufacturer must apply all the 'control options' and may not stop his endeavours if the first or the second control option has reduced the risk to an 'acceptable level' (unless the additional control option(s) do(es) not improve the safety)."

With the removal of this clarification regarding how to implement controls to reduce risks AFAP, the 2019 revision of the standard appears to indicate that manufacturers can stop reducing risks once they are "judged acceptable." By stopping their "endeavours" to reduce the risk further once it's "judged acceptable," manufacturers will not be reducing risks AFAP. This will result in higher risk for the users and patients, and in a potential compliance risk for the manufacturer. It could also result in additional legal liability.


Section 7.4 of the EN ISO 14971:2019 standard continues this by stating that the benefit-risk analysis may be performed when the “residual risk is not judged acceptable.” As discussed earlier in this chapter, the EU MDR and IVDR do not include similar exemptions for the benefit-risk analysis. As a result, following the EN ISO 14971:2019 standard (by not performing a benefit-risk analysis) may result in a compliance risk for manufacturers. In Section 8, the EN ISO 14971:2019 standard states “if the overall residual risk is not judged acceptable…the manufacturer may consider implementing additional risk control measures….” As discussed, per the regulations (EU MDR and IVDR), risks are to be reduced AFAP, not just when “not judged acceptable.” The “content deviations” from EN ISO 14971:2012 Annex Z described that “all risks have to be reduced as far as possible…regardless of any ‘acceptability’ assessment.”

Risk Evaluation vs. Benefit-Risk and Acceptability

Per the EU MDR and IVDR, "'benefit-risk determination' means the analysis of all assessments of benefit and risk of possible relevance for the use of the device for the intended purpose, when used in accordance with the intended purpose given by the manufacturer." Per Annex I, Paragraph 1, "…any risks which may be associated with their use constitute acceptable risks when weighed against the benefits to the patient…." EN ISO 14971:2019 (and prior) and the examples therein delineate the determination of acceptable or unacceptable risks at a step prior to the benefit-risk determination; however, the EU MDR/IVDR require that acceptability be determined by weighing the risks "against the benefits to the patient." In sections 7.3 Residual risk evaluation and 7.4 Benefit-risk analysis, the standard uses the term acceptable to describe the evaluation of the risk (occurrence and severity) of a manufacturer's product against the state of the art, and then describes the benefit-risk analysis as a separate, subsequent activity.

E·val·u·ate (verb): "to determine or fix the value of." (Merriam-Webster)

Val·ue (noun): "a numerical quantity that is assigned or is determined by calculation or measurement." (Merriam-Webster)

At this point, it is important to understand the word "evaluate." The use of this term has appeared to be quite deliberate in most regulations; however, many people confuse "evaluate" with "investigate." While there are some similarities, the root of evaluate is "value," and the word is used for the determination of the "numerical quantity." When viewed in this context, the term can be used to describe the process of determining the scope of a problem (for example, how many items were nonconforming) or the process of comparing a number against a limit/threshold/specification (for example, does the severity or occurrence rank of an event exceed the threshold set in the risk document?). It is this latter definition that is intended for the "risk evaluation," and the limit/threshold/specification is to be based on the "state of the art."

EN ISO 14971:2019 defines the term "risk evaluation" as the "process of comparing the estimated risk against given risk criteria to determine the acceptability of the risk." The EU MDR and IVDR repeatedly discuss how products (and specifically risk control measures) have to take into account "the generally acknowledged state of the art," and EN ISO 14971:2019 Section 4.2 Management responsibilities delineates that the "criteria for risk acceptability" be based on several sources of information, including the state of the art. In other words, we are to compare the "numerical quantity" of our product's risk (severity and occurrence) against the "numerical quantity" of the state of the art's risk (severity and occurrence).
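As a rough sketch of this comparison, the "risk evaluation" can be reduced to checking a product's estimated severity and occurrence ranks against criteria derived from the state of the art. The rank scales and threshold table below are hypothetical assumptions for illustration only:

```python
# Sketch of "risk evaluation" as comparison against the state of the art.
# Severity/occurrence ranks (1-5) and the criteria table are hypothetical.

def aligns_with_state_of_the_art(severity, occurrence, criteria):
    """True if the estimated occurrence rank does not exceed the maximum
    occurrence rank observed for this severity in comparable devices."""
    return occurrence <= criteria[severity]

# Hypothetical criteria: for each severity rank, the highest occurrence
# rank seen in state-of-the-art devices.
soa_criteria = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}

print(aligns_with_state_of_the_art(4, 2, soa_criteria))  # True
print(aligns_with_state_of_the_art(5, 3, soa_criteria))  # False
```

Per the discussion above, the output of such a check is better labeled "aligns/doesn't align with the state of the art" than "acceptable/unacceptable," since acceptability also requires weighing the benefits to the patient.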


While the risk evaluation (of a product's risks against the risks associated with the state of the art) is part of the benefit-risk determination, it is not all that goes into determining whether the risk is acceptable. Per Annex I, Paragraph 8 of the EU MDR and IVDR, risks "shall be minimized and be acceptable when weighed against the evaluated benefits to the patient and/or user…" Based on the regulation, acceptability is to include the evaluation of the benefits, not just the comparison of the product's risks to the state of the art. Once again, the EN ISO 14971:2019 standard and its examples have contributed to the propagation of the idea that risk acceptability would somehow be separate from or prior to the benefit-risk determination. The examples in ISO/TR 24971:2020 included risk evaluation matrices with "acceptable risk" and "unacceptable risk" being based only on the risk, without consideration of the benefits. While this type of risk evaluation works well to compare the product's risks to the risks of the state of the art (for example, the risks of the state of the art would define the boundary between the regions), it would significantly reduce the confusion of manufacturers if the regions were simply titled "Aligns/Doesn't Align with the State of the Art" or similar language. This way, the comparison of the risks would be a separate activity from the determination of acceptability. By separating these activities and using more appropriate language, the determination of acceptability could be realigned with the regulations.

Unacceptable Risks and "Significant" Residual Risks

In Section 8, the EN ISO 14971:2019 standard states that "if the overall residual risk is judged acceptable, the manufacturer shall inform users of significant residual risks." There are two points to discuss regarding the quoted section.

First, the standard goes on to state that if the risks can't be controlled to a point where the residual risk is acceptable, the manufacturer can modify the device or its intended use. "Otherwise, the overall residual risk remains unacceptable." The standard does not clearly delineate that if the risk is unacceptable, you shouldn't put the product on the market (and thereby wouldn't need to worry about informing users of the risks). By not clearly delineating what should or should not happen when the risks for a product are found to be unacceptable, the standard leaves room for a manufacturer to improperly try to justify putting an unacceptable product into the market. If/when that were to happen, it would be a risk to the users/patients and both a compliance and a legal risk for the company.

Second, Annex I, Paragraph 4 of the EU MDR and IVDR both state "Manufacturers shall inform users of any residual risks." In other words, per the regulations, all the risks need to be communicated, not just the "significant" ones. Communication of only some risks (for example, what the manufacturer deems "significant") would not give users and patients all the relevant information necessary for them to proactively decide whether to use the product and would potentially expose users and patients to risks they may have otherwise avoided. Once again, following the guidance from the standard will result in higher risk for users and patients, will cause a potential compliance risk for the manufacturer, and could lead to additional legal liability.

By implementing a master harms list as described in the earlier chapters, a manufacturer would make the communication of the residual risks more manageable. The master harms list drives consistent wording, so you're not describing the same issue using multiple different terms, which in turn reduces the size of the list. In the end, there are two categories of residual risks that need to be communicated to the users and patients (adverse reactions/events and potential adverse reactions/events): those that have happened and those that might happen.
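The two-category split described above can be sketched as a simple partition of a master harms list. The records and the "observed" flag below are hypothetical illustrations:

```python
# Sketch of the two disclosure categories: harms that have actually
# occurred vs. harms identified only in the risk documents. The records
# and the "observed" flag are hypothetical.

harms = [
    {"harm": "Infection", "observed": True},           # seen in trials/field use
    {"harm": "Allergic reaction", "observed": False},  # risk documents only
    {"harm": "Bruising", "observed": True},
]

adverse_events = [h["harm"] for h in harms if h["observed"]]
potential_adverse_events = [h["harm"] for h in harms if not h["observed"]]

print(adverse_events)            # ['Infection', 'Bruising']
print(potential_adverse_events)  # ['Allergic reaction']
```

As post-market data accumulate, a harm that actually occurs would simply move from the "potential" category to the adverse reactions/events category.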

Stakeholders

As discussed previously in this chapter, the "criteria for risk acceptability" described in Section 4.2 of the EN ISO 14971:2019 standard are to be based on several sources of information, including the state of the art. One of the other sources of information is identified as "known stakeholder concerns." The introduction of the standard describes several examples of stakeholders and describes the manufacturer as one of the stakeholders. While the standard is careful in this section not to include any discussion of the manufacturer's economic considerations/concerns, this is a slippery slope.


Annex A.2.7.4 of the EN ISO 14971:2019 standard does clarify that the benefit-risk analysis “cannot be used to weigh residual risks against economic advantages or business advantages.” From this, it is clear that economics is not to be factored into the benefit-risk analysis; however, the standard could be interpreted by manufacturers to mean that their economics could still be used to set the criteria for acceptability.

Chapter 24 Linking Your FMEAs (NCEAs) ●  ●  ●

In Chapter 7 we discussed how to identify use, design, and process risks and develop the design and process to control those risks. The traceability matrix links the requirements/specifications and associated risks across all of these documents. In addition to linking the requirements/specifications and risks, you can gain further consistency by linking some of the sections within the risk documents (for example, master harms list, uFMEA, dFMEA, and pFMEA) (Figure 24.1).

The harms and accompanying severities identified in the master harms list should feed into the list of harms and severities used in the other risk documents for the product/product family. If clinical/performance studies or other sources of data (for example, similar products) are available, you can use these to ensure all applicable harms are included in the list. This will help ensure that the associated sequences of events are considered and documented for all the known harms for a product.

It is during the development of the uFMEA that you identify the sequences of events from the use nonconformances to the associated harms. This is also when you identify the controls for the nonconformances and harms. As described in Chapter 7, you can control how people use your products by the way you design them. It is because of this connection between the design and the use that you should be able to connect the risk controls identified in the uFMEA with the designs that feed into the dFMEA.

Figure 24.1 Linking FMEAs (NCEAs).

[Figure 24.1 shows three linked documents (the Use NCEA, Design NCEA, and Process NCEA), each flowing from Input, to Nonconformance (the means by which the product/process fails), to Hazard (the potential source of harm from intended use or foreseeable misuse), to Harm/Effect (physical injury or damage to the health of people), to Output (control points). Harms from the master harms list, FTA, and clinical events feed the Use NCEA's inputs (intended use, instructions for use); the Use NCEA's outputs (defined harms, severities, causes of nonconformance, and environment) feed the Design NCEA's inputs; the Design NCEA's outputs (specifications/drawings covering manufacturing environment, material selection, required testing, packaging, sterility requirements, instructions for use, and labeling) feed the Process NCEA's inputs (design/manufacturing environment and process, BOM, validation, preliminary process control plans, packaging, and sterilization processes); and the Process NCEA's output is a compliant process and safe devices.]

For example, a user’s inability to properly operate the handle on your device may be the result of a design nonconformance. The handle has a specified shape (a design specification or requirement), but when the handle doesn’t meet that shape (the nonfulfillment of the specified requirement) it inhibits the proper use of the device (the hazard) and may lead to harm. This sequence of events would typically be identified in the dFMEA; however, the proper/improper use of the device and subsequent events are typically identified in the uFMEA. In this example, while the improper use of the device is a use nonconformance, it is also linked to the hazard generated by the design nonconformance. Similarly, the sequence of events from the design nonconformances to the associated harms and risk controls feed into the pFMEA. Since you can control the design of your products by controlling the process used to make them, the risk controls identified in the dFMEA would link to the inputs (for example, process controls/specifications) for the pFMEA. Additionally, the design nonconformances would typically be the hazards generated by process nonconformances. By linking the information from one document to the next, the development of the risk documents becomes faster and easier, and involves far less bantering over wild ideas and opinions. Once the existing information from the prior documents are linked and fleshed out, additional brainstorming may identify additional risks that hadn’t been previously considered. This iterative process will help drive improvements back to the design and information for safety (for example, instructions for use). 
Once complete, you can check the logic between your risk documents by ensuring the hazards (system effects) that result from process failures align with the design nonconformances (for example, if a process is out of specification, it will generate products that are out of specification) and that the hazards that result from design failures align with the use nonconformances (for example, if a product is out of specification, it will lead to improper use/conditions of the device).
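The logic check described above can be sketched as a simple set comparison between documents: every process-level hazard should reappear as a design nonconformance, and every design-level hazard should reappear as a use nonconformance. All of the document entries below are hypothetical examples:

```python
# Sketch of the cross-document logic check: process-level hazards should
# align with design nonconformances, and design-level hazards with use
# nonconformances. All entries are hypothetical examples.

pfmea_hazards = {"tube diameter out of specification",
                 "weld strength below specification"}
dfmea_nonconformances = {"tube diameter out of specification",
                         "weld strength below specification"}

dfmea_hazards = {"handle shape inhibits proper grip"}
ufmea_nonconformances = {"handle shape inhibits proper grip",
                         "device used without reading the IFU"}

# Any entries left over indicate a gap in the linkage between documents.
unlinked_process = pfmea_hazards - dfmea_nonconformances
unlinked_design = dfmea_hazards - ufmea_nonconformances

print(sorted(unlinked_process))  # [] -> every process hazard is traced
print(sorted(unlinked_design))   # [] -> every design hazard is traced
```

A non-empty result on either check flags a hazard with no downstream nonconformance to receive it, i.e., a break in the traceability chain worth investigating.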

Chapter 25 Overlapping Definitions ●  ●  ●

When a doctor throws a surgical tool across the room, is that considered reasonably foreseeable misuse, abnormal use, or abuse? While most of us would describe that as abuse, ISO 14971 doesn't use the word "abuse"—nor does EN 62366-1 or ISO/IEC Guide 63. If someone tries but doesn't follow the intended use, how is that different from someone who tries not to follow the intended use but ends up following it anyway? And does the end result make a difference? If the user ends up with the same or better result, is there still a risk to consider? How do all of these affect the actions that manufacturers are required to take to address these situations? If we look at all the definitions in the EN and ISO standards and in the regulations, it can quickly make our heads swim.

In trying to sort out all the intertwining definitions, there are three fundamental questions that can help:

• Did the user's actions follow the manufacturer's intended use?

• Did the user intend to follow the manufacturer's intended use?

• Was the result the same, better, or worse than intended by the manufacturer or expected by the user?

From these questions we end up with a dozen different combinations. Figure 25.1 and Table 25.1 offer some descriptions and examples that may assist with understanding the various combinations.

Figure 25.1 Venn diagram of overlapping definitions.

[Figure 25.1 shows two overlapping circles. Analysis, evaluation, and control of risks associated with reasonably foreseeable misuse is required. Analysis, evaluation, and control of risks associated with abnormal uses that are not reasonably foreseeable is NOT required. Some abnormal uses fall under reasonably foreseeable misuse; analysis, evaluation, and control of risks in this overlapping segment is required, but user-interface controls may not be effective, and the usability engineering process is not required to assess or mitigate those risks (per EN 62366-1).]

Table 25.1 Questions for understanding definitions.

Did they follow the intended use? | Was there intent to follow the intended use? | What was the result? | Description
Y | Y | Same   | The user tried to follow the instructions and did follow them.
Y | Y | Better | Example: Drinking grapefruit juice can make some medicines more potent (e.g., creating a new risk by dropping blood pressure too low). This may result in better device effectiveness, but a worse patient outcome.
Y | Y | Worse  | The user tried to follow the instructions, did follow them—and got a worse result.
Y | N | Same   | The user tried not to follow the instructions but ended up following them anyway.
Y | N | Better | The user tried not to follow the instructions, followed them anyway, and got a different result.
Y | N | Worse  | The user tried not to follow the instructions, followed them anyway, and got a different result.
N | Y | Same   | The user tried to follow the instructions but didn't—and got the same result anyway.
N | Y | Better | The user tried to follow the instructions but didn't—and got a different result.
N | Y | Worse  | The user tried to follow the instructions but didn't—and got a different result.
N | N | Same   | The user tried not to follow the instructions and didn't follow them—and got the same result.
N | N | Better | The user tried not to follow the instructions and didn't follow them—and got a different result.
N | N | Worse  | The user tried not to follow the instructions and didn't follow them—and got a different result.
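As a quick sanity check on the combination count, the three questions can be enumerated programmatically; a minimal sketch:

```python
# The three questions yield 2 (followed?) x 2 (intended?) x 3 (result)
# = 12 combinations.
from itertools import product

combos = list(product(["followed", "did not follow"],
                      ["intended to follow", "did not intend to follow"],
                      ["same", "better", "worse"]))

print(len(combos))  # 12
```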




Abnormal Use, Normal Use, Correct Use, and Use Error

The definition of "normal use" in EN 62366-1 limits the scope to uses "according to the instructions." Since there is no evaluation of user intent, even if the user meant to do something else, if they ended up following the intended use, would it still be considered "normal use"? The definition of "abnormal use" in EN 62366-1 limits the scope to "intentional" actions or omissions. Figure 1 in EN 62366-1 shows all medical device uses falling into either "normal use" or "abnormal use"; however, neither of these definitions accounts for a person intending to follow the instructions but not actually succeeding in following them.

Within the scope of normal use, Figure 1 in EN 62366-1 shows there being two options: correct use and use error. The definition of "use error" in EN 62366-1 limits the scope to uses that lead "to a different result than that intended," and "correct use" is just defined as any normal use that isn't a use error. The standard creates a conflict in the notes for the definition of "correct use" by stating that "deviation from instructions" is considered use error. Since use error is part of normal use in EN 62366-1, and normal use is limited to uses "according to the instructions," it wouldn't be possible for use errors to "deviate from instructions." This conflict does not apply to the definitions in ISO 14971, though. Since ISO 14971 references IEC 62366-1 for the definition of "use error" but does not include definitions of "normal use," the concept of use error can be applied to all situations in which a different result is achieved, regardless of whether the actions deviated from the instructions.

Intended Use and Reasonably Foreseeable Misuse

The definitions of "intended use" and "reasonably foreseeable misuse" in ISO 14971 (as taken from ISO/IEC Guide 63) are fairly straightforward. "Intended use" applies when the product, process, or service is used as intended, and "reasonably foreseeable misuse" applies when it is used "in a way not intended." The definitions do not include any criteria regarding the intent of the user or the outcome and specifically state in the notes that they apply whether the actions are "intentional or unintentional."

Intended Use, Reasonably Foreseeable Misuse, and Abnormal Use

As shown in Figure 1 of ISO 14971, the risk management process includes several steps, starting with the analysis, evaluation, and control of the risks. However, given the dozen different scenarios described previously, which do we need to consider when trying to predict what could go wrong? In ISO 14971 Section 5.4, the identification of hazards and hazardous situations is to be "based upon the intended use, reasonably foreseeable misuse and the characteristics related to safety in both normal and fault conditions." However, this assumes that the hazards and hazardous situations lead to harm.

If we assume that when the use of the product resulted in the intended/expected result there was no harm (hopefully a valid assumption for your product), then we can eliminate any scenarios that resulted in the intended/expected result. Based on the definitions of "intended use" and "reasonably foreseeable misuse," combined with the definition of "risk" only applying to harms, if the result was the same as intended by the manufacturer, there is no risk, and risk management is not required. Risk management activities, including analysis, evaluation, and control of the risks, would be required for all the scenarios in which the result is different from the intended result (whether better or worse). (Note: As described in Tables 25.1 and 25.2, some "better" results may still involve risks to the patient.)

The definition of "abnormal use" in EN 62366-1 does change the activities slightly when it applies, though. Per the definition, abnormal use is "beyond any further reasonable means of user interface-related risk control by the manufacturer." EN 62366-1 Annex A.2 confirms that while the process can be used to identify abnormal uses, it doesn't require the process to be used to assess or mitigate those risks.

Table 25.2  Questions for understanding definitions with actions.

Key: Intent = Was there intent to follow the intended use? Action = Did they follow the intended use? Result = What was the result?

Intent | Action | Result | EN 62366-1             | EN ISO 14971                  | ISO 14971/ISO/IEC Guide 63 | Action Required
Y      | Y      | Same   | Normal use—correct use | Intended use                  | None                       | None—no risk
Y      | Y      | Better | Normal use—use error   | Intended use                  | Use error                  | Analyze/evaluate/control
Y      | Y      | Worse  | Normal use—use error   | Intended use                  | Use error                  | Analyze/evaluate/control
Y      | N      | Same   | None                   | Reasonably foreseeable misuse | None                       | None—no risk
Y      | N      | Better | Normal use—use error   | Reasonably foreseeable misuse | Use error                  | Analyze/evaluate/control
Y      | N      | Worse  | Normal use—use error   | Reasonably foreseeable misuse | Use error                  | Analyze/evaluate/control
N      | N      | Same   | Abnormal use           | Reasonably foreseeable misuse | None                       | None—no risk
N      | N      | Better | Abnormal use           | Reasonably foreseeable misuse | Use error                  | Analyze/evaluate/control*
N      | N      | Worse  | Abnormal use           | Reasonably foreseeable misuse | Use error                  | Analyze/evaluate/control*
N      | Y      | Same   | Normal use—correct use | Intended use                  | None                       | None—no risk
N      | Y      | Better | None                   | Intended use                  | Use error                  | Analyze/evaluate/control
N      | Y      | Worse  | None                   | Intended use                  | Use error                  | Analyze/evaluate/control

*If reasonably foreseeable misuse, analyze/evaluate/control, but user-interface controls may not be effective and the usability engineering process is not required to assess or mitigate risks (per EN 62366-1).

Row descriptions:
• Intent Y, Action Y: The user tried to follow the instructions and did follow them. Example for the "Worse" row: Drinking grapefruit juice can make some medicines more potent (e.g., creating a new risk by dropping blood pressure too low). This may result in better device effectiveness, but a worse patient outcome.
• Intent Y, Action N: The user tried to follow the instructions but didn't—and got the same result anyway, or a different result.
• Intent N, Action N: The user tried not to follow the instructions and didn't follow them—and got the same or a different result.
• Intent N, Action Y: The user tried not to follow the instructions but ended up following them anyway, getting the same or a different result.

In other words, if a risk is reasonably foreseeable (didn’t follow the intended use) but is also abnormal use (user intentionally didn’t follow the intended use), as the manufacturer you might not be able to stop the user from bypassing the intended use through user-interface controls (see Figure 25.1 and Table 25.2). If you are (or become) aware of users bypassing your intended use, you may need to redesign the product in a way that precludes users from controlling the device.

Chapter 26
Quality Data for Post-Market Surveillance

As discussed in Chapter 1, there are some processes (such as PMS) that are entirely within the scope of risk management. In PMS, we monitor and evaluate the real-life data (for example, NCRs, complaints, etc.) against the risk documents to see if our products are as safe as we believe them to be (see Figure 26.1).

Figure 26.1  Quality data for post-market surveillance.
• Feedback
  – Production trending
    ° Process/product trends
  – Post-production
    ° Complaints
    ° MDRs/MDVs/field actions
    ° Trending*
• Conformity to product requirements
  – NCRs
  – Yield/scrap/rework
• Audits
• Design/process changes
• Literature search (from PMCF/PMPF, CER/PER)
• Review of similar products (from CER/PER)
• External standards
• Labeling
• Benefit-risk determination (for PSUR)
• Sales volume (for PSUR)

Experienced risk management professionals know that PMS has been a required activity all along, and that it involves more than just complaint handling. The lack of understanding of PMS by many companies


has provided years of consulting work, especially as these companies realize the new EU MDR and IVDR have increased focus on PMS and on properly feeding the results of PMS back to drive improvements.

Even for seasoned risk management professionals, it can be difficult to determine all the various types of data that should be evaluated in PMS. Part of this difficulty is because the requirements/recommendations come from a variety of sources, including the CFR, MDR/IVDR, and ISO standards. When viewed individually, it would be difficult to understand what data are needed; however, when viewed collectively, the picture becomes clear.

The EU MDR/IVDR define "post-market surveillance" as requiring manufacturers to "proactively collect and review experience gained from devices they place on the market, make available on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions." Notice that the definition requires the activities to be "proactive" and to drive improvements.

In Annex III, the EU MDR/IVDR identify some of the types of data required for PMS analysis, specifically:
• Field safety corrective actions
• Non-serious incidents/undesirable side effects
• Trend reporting (per EU MDR 2017/745 Article 88 and EU IVDR 2017/746 Article 83)
• Specialist or technical literature, databases, and/or registers
• Feedback
• Complaints
• Information about similar medical devices

If your products are Class IIa, IIb, or III (MD) or Class C or D (IVD), the EU MDR/IVDR also require that your PMS include:
• Conclusions of the benefit-risk determination
• Main findings of the PMCF/PMPF
• Sales volumes and population size/characteristics/usage frequency




While this is a good starting point, this is not a comprehensive list of all the data that should feed into the PMS process, and it doesn't explain what each item includes.

21 CFR 820 includes requirements to monitor and review data for several processes, including, but not limited to, the following sections:
• 820.30(i) Design changes
• 820.70(a)(2) Process parameters and component and device characteristics during production
• 820.70(b) Production and process changes
• 820.75 Process validation (see discussion in Chapter 14 regarding continued process verification)
• 820.80 Acceptance activities
• 820.250 Statistical techniques

ISO 14971 provides its own list, which includes information from the following:
• Monitoring of the production process
• The user
• Those accountable for installation, use, and maintenance
• The supply chain
• The public
• The state of the art

Since the first part of the PMS requirement is the "collect[ion] and review [of] experience," Section 8.2 Monitoring and measurement in ISO 13485 provides some additional examples and a more detailed description of data to consider. The standard delineates six types of data to analyze in order to drive improvements:
• Feedback
• Complaint handling
• Reporting to regulatory authorities
• Internal audit
• Monitoring and measurement of processes
• Monitoring and measurement of product




There are some obvious overlaps among the data identified from these sources. By compiling these lists and evaluating what is intended for each of these types of data, we can get a clearer picture of the entire scope of PMS. While the data can be analyzed in any order you choose, they are presented here in an order that has worked well at several companies. The order follows fairly closely the order of sections 5.6.2 Review input and 8.4 Analysis of data in ISO 13485.

Feedback

While "feedback" is identified in the EU MDR/IVDR, ISO 13485 provides some additional detail regarding what is intended. The feedback process is defined as including data "from production as well as post-production." This section provides a review of production trending (for example, monitoring and measurement of the process and/or of the product) and a review of post-production trending (for example, customer complaints, including those from serviced devices, reports to regulatory authorities, and any Eudamed trend reporting), and is aimed at discovering:
• whether any previously recognized risks have increased, or
• any new risk that was not previously recognized or evaluated in the risk management activities.

Production data can be divided into the monitoring and measurement of the processes (the inputs) and the monitoring and measurement of the products (the outputs). The monitoring and measurement of the process involves any data regarding the process parameters used (that is, do you track/trend what settings you use?). These are the inputs/control points that you are setting while making your products and may already be tracked in many companies using statistical process control (SPC) charts.

Test-result run charts are a common example of the monitoring and measurement of the products. The product data are generally an evaluation of the product output from your process, and the ISO 13485 standard describes these data as verification that product requirements have been met. For those familiar with 21 CFR 820, this sounds a lot




like the terminology used in Sec. 820.80 for the receiving, in-process, and finished device acceptance. Additionally, since the device master record (DMR) is required to include information regarding "installation, maintenance, and servicing" (21 CFR 820.181), data from these activities would also be considered part of "production." The process data (inputs) alone may be difficult to connect to problems, but when evaluated during PMS along with product data (outputs), they can quickly help you identify when a process setting might be causing problems.

Post-market/post-launch generally refers to the period after a company receives approval to put a product/product-family on the market. Post-production generally refers to the period after an individual device is manufactured (produced, including the shipping and installation). Post-production activities involve those processes that may apply to a device once it's in the field. These include the activities most people have historically considered to be part of PMS, that is, complaints, reportable events, field actions, etc.

However, post-production activities also include a new requirement in the EU MDR/IVDR for "trend reporting." This requirement is different from the typical complaint trending done at most companies. It involves a specific requirement to evaluate "any statistically significant increase in the frequency or severity of incidents that are not serious" but that "could have a significant impact on the benefit-risk analysis…and which have led or may lead to unacceptable risks." If you have properly set up the risk documents and the complaint handling process accurately links to the thresholds set in those risk documents, then you are already evaluating your complaints' severity and occurrence on an ongoing basis. The PMS activities then become a double-check to ensure nothing was missed and to evaluate the data holistically with other data.
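For the process-parameter trending described above, a minimal SPC individuals-chart check might look like the following sketch. The measurements are made up for illustration; the moving-range constant (d2 = 1.128 for subgroups of size 2) and the 3-sigma rule are standard SPC conventions:

```python
# Minimal individuals (I) control chart for production trending: flag points
# outside mean +/- 3 sigma, with sigma estimated from the average moving
# range (d2 = 1.128 for subgroups of size 2). Illustrative only; real SPC
# software also applies run rules (trends, shifts, etc.).

def control_limits(samples):
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(samples):
    lcl, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# A process setting that drifts on the last run gets flagged:
print(out_of_control([10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 13.5]))  # [7]
```

This is the kind of check that makes a drifting process setting visible before the product data (outputs) show a problem.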
The trend reporting requirement takes the threshold-based decision process one step further by looking at the data proactively to determine if issues “may lead” to problems.
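The regulation does not prescribe a particular statistical method for the "statistically significant increase" evaluation. One possible sketch, assuming a simple one-sided two-proportion z-test on non-serious incident rates between a baseline period and the current period (the counts below are illustrative), is:

```python
import math

def rate_increase_significant(base_events, base_units, curr_events, curr_units,
                              alpha=0.05):
    """One-sided two-proportion z-test: did the non-serious incident rate
    rise significantly from the baseline period to the current period?
    Returns (significant?, z-statistic). Illustrative method choice only."""
    p_base = base_events / base_units
    p_curr = curr_events / curr_units
    pooled = (base_events + curr_events) / (base_units + curr_units)
    se = math.sqrt(pooled * (1 - pooled) * (1 / base_units + 1 / curr_units))
    z = (p_curr - p_base) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal p-value
    return p_value < alpha, z

# 12 incidents per 10,000 units rising to 35 per 10,000 is significant:
significant, z = rate_increase_significant(12, 10_000, 35, 10_000)
```

A flagged increase would then be evaluated against the benefit-risk analysis to decide whether it "may lead" to unacceptable risks.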




Complaint data may be analyzed by determining the complaint rates (for example, number of complaints/number of opportunities, where the number of opportunities may be the number of units sold for a single-use product or the number of uses by all units within the period for capital products) associated with each harm. These rates are then compared to the associated thresholds identified in the risk files for each harm. This evaluation can be done for a defined rolling period (for example, the past five years) but may also include an evaluation of the rate over the entire life of the product since its initial market launch.

It is also recommended that the evaluation of complaints include discussion of those that alleged "death or serious deterioration of health." The complaints alleging death or serious injury should align with those submitted as reportable events. It is important to be aware that your evaluation of reportable events (for example, medical device reports (MDRs) for the FDA, medical device vigilance (MDV) reports for the EU, etc.) should not be limited to just those that your company has submitted for your product/product-family. For PMS, the evaluation should include data from the applicable public database (for example, MAUDE) in order to identify reports that may have been submitted for your product by people outside of your organization (for example, some distributors and healthcare providers are required to also report issues).

In addition to a complaint process, some companies track other types of feedback separately (for example, from surveys, during sales calls, recommendations, positive feedback, etc.). It is recommended that however your organization captures other types of user/patient feedback, you should include an evaluation of that feedback in your PMS.
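The per-harm rate comparison can be sketched as follows; the harm names, thresholds, and counts are hypothetical placeholders, not values from any actual risk file:

```python
# Compare per-harm complaint rates against the occurrence thresholds set in
# the risk file. Harm names, thresholds, and counts are hypothetical.

THRESHOLDS = {          # maximum acceptable rate per harm (from the risk file)
    "infection": 1e-4,
    "burn": 5e-5,
}

def harms_over_threshold(complaints_by_harm, opportunities):
    """complaints_by_harm maps harm -> complaint count; opportunities is the
    number of units sold (single-use) or total uses in the period (capital)."""
    flagged = {}
    for harm, count in complaints_by_harm.items():
        rate = count / opportunities
        if rate > THRESHOLDS[harm]:
            flagged[harm] = rate
    return flagged

# 3 infections in 100,000 opportunities is under its threshold; 12 burns is not:
print(harms_over_threshold({"infection": 3, "burn": 12}, 100_000))
```

The same calculation can be run over a rolling window and over the entire life of the product for comparison.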

Conformity to Product Requirements

Since the production feedback already includes monitoring and measurement of products ("verification that product requirements have been met"), it makes sense that there should conversely be an evaluation of when the products/processes don't meet requirements. Per 21 CFR 820.3(q), "Nonconformity means the nonfulfillment of a specified requirement." As such, nonconformance data should be included in the PMS reviews.




If we look into the "actions in response to nonconforming product detected before delivery" as described in ISO 13485, we see that there are only three dispositions for an NCR:
• Eliminate the nonconformity (for example, rework)
• Preclude its use or application (for example, scrap)
• Use/release under concession (for example, use-as-is)

Some companies include a plethora of different possible dispositions; however, most are either subsets of these three or aren't really dispositions. For example, re-sorting/inspection of product could be viewed as rework (if you are simply re-executing a routine, predefined inspection step) but is more likely just part of the evaluation of the problem scope. After sorting, you still must decide what to do with any nonconforming product. Also, "return-to-vendor" is not the final disposition—it's typically just a step in the process. You still need to decide if the vendor is to rework and send it back to you or scrap the product/sell it to someone else to preclude it from being used in your process.

Data from the NCR process, including data from any rework, scrap, and yield analyses, should be included in the PMS reviews. Note that any "pseudo" NCR processes your company may have (for example, some companies track laboratory OOS separately) should be combined back into the NCR data for a comprehensive analysis for PMS as well as for your management review process.

Many CAPAs are initiated due to a decision in an NCR regarding the need for an investigation (see Chapters 17 and 27). However, some CAPAs—especially preventive actions—are not spawned from NCRs. It is because of the inherent link between the NCR and CAPA processes and the additional data that can be captured in the CAPA process that it is recommended that CAPAs be evaluated in the same section of the PMS reviews.




Audits

While ISO 13485 includes an evaluation of internal audits, it is recommended that the PMS review include an evaluation of other types of audits (such as FDA, notified body [NB], third party, etc.).

If your CAPA process is properly designed, you don't just copy/paste your audit findings into your CAPAs; instead, you use the audit finding simply as part of the background and describe each systemic issue involved in its own CAPA. A proper problem description in an NCR and a CAPA ensures all the "interrogatives" (aka the 5W2Hs) are answered. These are the same questions that a roving reporter from the Daily Bugle uses when trying to get the scoop. An NCR description answers these for the specific event, and the CAPA description answers these for the systemic issue involved. (NC and CAPA descriptions are not copy/paste of each other.)

Who?       Who was involved in the occurrence and/or detection of the issue?
What?      What product/component/process/quality system was impacted?
When?      When (date/time) did the defect/issue occur? When was it detected?
Where?     Where did the defect/issue occur (for example, in the process/procedure, geographically, physically on the object)?
Why?       Why is it a defect/issue? Describe the actual and required/expected condition/specification (NOT why did it happen).
How?       How was the defect/issue found?
How much?  Scope of the problem—number of products/components/lots/processes involved.




If you have properly linked your audit and NCR/CAPA processes, then just like the complaint process, you should be capturing all the issues on an ongoing basis. In that case, the PMS activities then become a double-check to ensure you didn’t miss anything and to evaluate the data holistically with other data.

Design/Process Changes

By evaluating design and process changes along with the other data, it becomes easier to identify when any design or process changes may have resulted in a shift to the other data. Since some data (such as complaints or end-of-life failures) may lag behind a change—in some cases appearing after significant time has passed—it is important to include an evaluation of changes to the design and process in PMS.

Literature Search

The EU MDR/IVDR and ISO 14971 require that publicly available information (for example, literature, database/registry data, etc.) be evaluated as part of PMS. Since there are already requirements for manufacturers to evaluate these types of data in their post-market clinical/performance follow-up studies (PMCF/PMPF) and summarize the findings in their clinical/performance evaluation reports (CER/PER), the work done for those processes can simply be summarized in the PMS process.

Because the CER/PER information feeds into the PMS process, it is recommended that those reports be included in the same schedule you develop for your PMS reviews—the CER/PERs would just be performed ahead of the PMS. Since PMCF/PMPF studies are only performed as needed (or justified in the PMS plan if "not applicable"), their main findings can be included in this same section of the PMS review.

Review of Similar Products

The term "state of the art" is defined in ISO 14971 as the "developed stage of technical capability at a given time as regards products, processes and services…" Most people interpret these fancy words to mean the




standard practice in the market/alternative therapies (for example, a manual process that your product automates) and competitors' products/alternative medical devices (the "developed stage" of other technology being used). Annex A.2.10 of ISO 14971 includes some discussion that supports this interpretation. Since the literature reviewed in the prior section frequently includes information regarding other products and therapies on the market, these two sections are often closely tied together.

If your organization has performed any comparative studies, you would summarize those here. In addition, you should include a review and summary of similar products within your company's portfolio and any new risks that have been identified (for example, in the PMS for those products) that may also be a concern for the product/product-family. This once again serves as a double-check to ensure all identified risks and any applicable corrective/preventive actions were properly applied to all applicable products/product-families.

External Standards

There are several references in ISO 14971 (for example, Section 10.2, Annex A.2.10) that clearly state that the information related to the "state of the art" includes "new or revised standards." Unfortunately, many companies have historically put the burden of evaluating external standards on their regulatory departments, which in turn often just subscribe to a service to notify them when a standard is changed. While this is fine for identifying when one of the standards on your list changes (and possibly providing some details of the changes), it doesn't identify when "new" standards are added that may apply to your product/product-family. (Additionally, the regulatory department may not have the technical knowledge of the products that would be necessary to determine if a new standard is applicable.)

As described in Chapter 7, the regulatory and external standard requirements are part of the user requirements that feed into the risk analysis. These regulatory and standard requirements should be documented as the source of the applicable design and process specifications. These regulations and standards are then often listed




on the essential requirements checklist (ERC)/general safety and performance requirements (GSPR) checklist for the product/product-family. In addition to these, some companies keep an "external standards list" that lists all the applicable standards for the company's QMS and products. These lists can then identify which regulations and standards are applied for each product/product-family.

The PMS review for external standards would compare the standards listed in the applicable specifications to the list of standards you tell the regulators apply (that is, the ERC/GSPR). If new or revised standards apply to the product/product-family but haven't yet been evaluated for compliance, the PMS process can help drive that review as well. In addition to the review of previously known/applied standards, it is recommended that the PMS reviewers check with the standards organizations to see if there are any new standards that may apply to their product/product-family. For ISO standards, the organization provides handy catalogs of standards that are grouped by the applicable industry/field.1
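The comparison of specification-cited standards against the ERC/GSPR list can be sketched as a simple set difference; the standard numbers below are placeholders, not an applicability determination for any real device:

```python
# Cross-check the standards cited in product/process specifications against
# the standards declared on the ERC/GSPR checklist. Standard numbers here
# are placeholders for illustration.

spec_standards = {"ISO 10993-1", "IEC 60601-1", "ISO 11135", "ISO 80369-7"}
gspr_standards = {"ISO 10993-1", "IEC 60601-1", "ISO 11135"}

# Cited in the specifications but never declared to the regulators:
undeclared = sorted(spec_standards - gspr_standards)

# Declared to the regulators but not traceable to any specification:
untraced = sorted(gspr_standards - spec_standards)

print(undeclared, untraced)  # ['ISO 80369-7'] []
```

Either mismatch direction is a finding worth documenting in the PMS review.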

Labeling (aka Information for User(s) and Patient(s))

A review of the product/product-family labeling is not required by any of the regulations or standards, but it makes sense to include it in the PMS reviews. (See Section 4 of this book for additional discussion of information for users and patients.)

Annex I, Paragraph 4 of the EU MDR/IVDR states that manufacturers shall "provide information for safety (warnings/precautions/contraindications) and, where appropriate, training to users." It goes on to require that "manufacturers shall inform users of any residual risks." ISO 14971 Section 7.1 reiterates the need to provide information for safety and training, and Section 8 reiterates the need to disclose residual risks. (Note: ISO/TR 24971 Annex D provides some guidance on how information for safety and disclosure of residual risks are different.)

So far, all the PMS activities have evaluated whether the real-life data (for example, NCRs, complaints, etc.) align with what was




documented in the risk documents to see if the products are as safe as we thought they would be. Since information for safety can be used to help control risks, these PMS activities should tell us whether the information for safety is working properly. However, since the disclosure of residual risks does not control risks, the PMS activities don't evaluate whether we have properly disclosed the residual risks. By performing an evaluation of the labeling as part of PMS, the residual risks can also be confirmed.

During PMS, this evaluation can review the findings from company-sponsored trials, literature reviews, customer complaints, and so on to determine if the information provided to users/patients (for example, labels and labeling, IFU/DFU, post-operative care instructions, implant cards, installation manuals, operator/owner's manuals, software displays, training materials, marketing materials, advertising, etc.) is adequate to address the identified risks. The review could conclude that the existing information is appropriate and adequate or that the information needs to be updated.

It is recommended that someone specifically familiar with the actual clinical use of the product (such as a doctor who has performed the surgery or diagnosed the disease/condition, a nurse who used the product to treat patients, etc.) be responsible for evaluating the labeling information. If the medical/clinical representative is involved, any recommendations to update the labeling could include the proposed new language/wording for the change. Note that recommendations documented in the PMS process would only be proposed changes. New risks and risk controls would still need to be evaluated through the rest of the risk management process prior to implementation of any new controls or residual risk disclosure.

Benefit-Risk Determination

As noted earlier in this chapter, if your products are Class IIa, IIb, or III (MD) or Class C or D (IVD), the EU MDR/IVDR require you to include some additional information, including the "conclusions of the benefit-risk determination." Your PMS report may also go by a different name, that is, "periodic safety update report" (PSUR) instead of "post-market surveillance report."




Since your PMS results are required per EU MDR/IVDR Annex I, Paragraph 3(e) to be evaluated for their impact on "the overall risk, the benefit-risk ratio and risk acceptability," this evaluation really applies to all classes of products. The difference is that for lower risk-class products, the re-evaluation of the benefit-risk can be handled through the risk management process. For the higher risk-class products, the summary of that evaluation should be included in the PMS report.

This creates a Catch-22 for manufacturers: you need the results of the PMS in order to re-evaluate the benefit-risk, but you need to include the conclusion in the PMS. Until the EU regulators and/or notified bodies provide some guidance regarding how they expect manufacturers to handle this Catch-22, it is recommended that the prior conclusions (for example, in the CER/PER and/or RMR) be referenced/discussed during PMS. This review of the prior conclusions could include an evaluation by the medical/clinical representative to determine if any of the new PMS information indicates the need for a re-evaluation of the benefit-risk ratio or additional studies (for example, PMCF/PMPF for new or higher risks identified since the CER/PER and/or RMR were written).

Sales Volume

It is rather humorous that the EU MDR/IVDR include requirements to review "the volume of sales" for the higher risk-class products. Since complaint data may be evaluated as a "rate" for comparison to the associated thresholds identified in the risk files, the sales volume would likely already be part of the evaluation for all products. Since it's a regulatory requirement, it's recommended that these data be included (possibly as an appendix) and some discussion be documented. The evaluation may include a comparison of sales by region or by population, which may highlight some vulnerable populations or regions.

The remainder of the regulation appears to include some potentially more insightful requirements. Specifically, the EU MDR/IVDR go on to require that the evaluation include the "estimate of the size and other characteristics of the population using the device" and "where practicable, the usage frequency of the device."




This is the section where you would discuss any vulnerable populations (such as pregnant women, the elderly, growing children/adolescents, etc.) for your product/product-family. This section can also be used to describe how you determined the "opportunities for something to happen" that you used for your complaint rate calculations. Since this could include an assessment of the entire target population as well as the population using the device, the discussion could be linked to the intended use/indications for use descriptions in your PMS reports.
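A by-region comparison of complaint rates, one way sales data can surface vulnerable populations or regions, could be sketched like this (all numbers are illustrative, not real device data):

```python
# Complaint rate by region (illustrative data): a region whose rate is far
# above the others may point to a vulnerable population, a training gap,
# or different usage patterns worth a closer look in PMS.

def rates_by_region(complaints, sales):
    return {region: complaints[region] / sales[region] for region in sales}

sales = {"EU": 40_000, "US": 55_000, "APAC": 5_000}
complaints = {"EU": 8, "US": 11, "APAC": 9}

rates = rates_by_region(complaints, sales)
highest = max(rates, key=rates.get)
print(highest, rates[highest])  # APAC 0.0018
```

Here a small-volume region with a disproportionate complaint count stands out immediately once the counts are normalized by sales.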

Clinical/Performance Evaluation Options

A company can evaluate the clinical performance of its products in a variety of ways. Most medical device companies will perform clinical trials prior to placing their products on the market (especially if the product has a high risk classification). Similarly, in-vitro diagnostic medical device companies generally perform some performance studies (for example, analytical performance) prior to placing products on the market. Some of these companies may even actively monitor feedback (such as surveys) for some initial period after the product is launched.

However, as products become more established in the market, their focus often drifts to the newer, shinier products in their pipelines. Eventually, complaint monitoring is often all that remains. Not surprisingly, few companies delineate the process for switching from active monitoring to passive monitoring, and fewer include activities beyond complaint monitoring and PMS.

ISO 14971 Section 4.4(g) indicates that the RMP must include "activities related to collection and review of relevant production and post-production information." Many people take it for granted that their complaint handling processes will cover this activity. More seasoned risk professionals may even include all the quality data described previously in their PMS and may delineate these activities properly in product-specific RMPs. However, even beyond the quality data, manufacturers should consider (and document in the RMP) what additional clinical and/or performance data they want to gather (see Table 26.1).

Table 26.1  Example strategy for post-market clinical evaluation options.

Product age and selection rules:
• 0–2 years post-launch: select 1 Level 1 and 1 Level 2 option.
• 2–5 years post-launch: select (1 Level 1) or (2 Level 2) or (1 Level 2 + 1 Level 3).
• Greater than 5 years post-launch: select 2 options from any level.

Level 1 (highest level of study management):
• Company-sponsored prospective clinical trial: followed per GCP.
• Company-supported grant: study per protocol; specific device, intended use, or patient population; limited to 1–2 centers; investigator conducts study and monitoring; results captured in a report.
• Level 1 patient registry study: patient registration; study per protocol; prospective or retrospective; device usage; intent is to obtain safety information.
• Active surveying: clinical group surveys selected physicians and/or patients on device performance; study per protocol; evaluate device use; follow-up monitoring as appropriate; intent is to confirm device performance.
• Center of excellence (COE) review: data collection guidance provided to site; select key hospitals as COEs; define criteria for reporting device use and adverse events on a periodic basis; company reviews and documents data to identify trends.

Level 2:
• Level 2 patient registry study: all patients enrolled; study per protocol; all hospitals; device usage; retrospective; report to document the device usage and performance.
• PIF study: study per protocol; defined endpoint and predefined criteria; statistical analysis and report.
• Voluntary surveys: survey distributed to a group of physicians and/or patients; survey designed to obtain specific information; also capture physician comments on device risk/benefit.
• Literature and publication trending: protocol for search criteria and trending methods; includes devices with similar design, materials, intended use; competitor marketing materials; for a company device with limited data, thus relying on a predicate comparative device.

Level 3 (lowest level of study management):
• Literature and publication review: protocol for search criteria and trending methods; includes devices with similar design, materials, intended use; competitor marketing materials; for a company device with enough data, thus relying on a predicate device may not be necessary.
• Adverse event review using public sources: use of public sources—FDA, Medicare, payee, or other databases; review adverse events for a given device or type of similar devices; protocol/report with established search and acceptance criteria.
• Active complaint investigation: detailed review (which may involve/require obtaining follow-up information from the site) conducted for predefined complaint categories; returned component analysis; causality assessment.



The EU MDR and IVDR each describe requirements to update the clinical/performance documentation "throughout the life cycle of the device concerned" (Article 61, Paragraph 11 and Article 56, Paragraph 6, respectively). Manufacturers should also be interested in proactively determining new applications and benefits for their devices, continuously learning what the marketplace wants and needs for improvements, new features, and new products. They should also be continuously monitoring the marketplace for changes to the state of the art. This monitoring should be documented, perhaps in clinical/performance studies.

It is because of the EU requirement and the continuous learning opportunities that plans should be documented in the product RMP for gathering additional clinical/performance data. These studies may be reactive post-market clinical/performance studies or may be proactive means for companies to stay ahead of safety and quality issues. Companies may also choose to use a combination of proactive studies and reactive reviews.

The options may include expanding some of the quality data analyses described previously. For example, typical reviews of reportable adverse events may only look at the FDA and other NB sources; however, manufacturers may choose to expand the evaluation to other public sources such as Medicare, payee, or other databases. Similarly, complaint investigations are required by 21 CFR 820.198 and Annex III in the EU MDR/IVDR; however, manufacturers may implement a more proactive or thorough investigation strategy for some products.

It is recommended that manufacturers delineate the clinical/performance study options available within their organizations. These options may be documented as a list or a table (e.g., Table 26.1) in their applicable PMS procedure. Then they can simply select and document the applicable options for each product within the applicable RMPs and update the options selected as the product ages in the market.
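The selection logic in Table 26.1 (for a product 2-5 years post launch: one Level 1 study, or two Level 2 studies, or one Level 2 plus one Level 3 study) can be sketched as a simple completeness check. This is a hypothetical sketch; the function name and data structure are illustrative, not from any regulation or procedure:

```python
from collections import Counter

# Allowed combinations of study levels for a product 2-5 years post
# launch, per the example strategy in Table 26.1 (illustrative only).
ALLOWED_COMBINATIONS = [
    Counter({1: 1}),        # one Level 1 study
    Counter({2: 2}),        # two Level 2 studies
    Counter({2: 1, 3: 1}),  # one Level 2 plus one Level 3 study
]

def meets_pms_strategy(selected_levels):
    """Check whether the selected study levels satisfy at least one
    allowed combination (studies beyond a combination are fine)."""
    selected = Counter(selected_levels)
    return any(
        all(selected[level] >= n for level, n in combo.items())
        for combo in ALLOWED_COMBINATIONS
    )

# A Level 2 patient registry plus a Level 3 literature review qualifies:
print(meets_pms_strategy([2, 3]))  # True
# A single Level 3 literature review alone does not:
print(meets_pms_strategy([3]))     # False
```

A check like this could sit behind the PMS procedure's option table, so the RMP records both the options selected and the evidence that the combination satisfies the strategy.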

Endnote
1. International Organization for Standardization (ISO), ICS standards catalog, https://www.iso.org/standards-catalogue/browse-by-ics.html.

Chapter 27 Why Investigations Are Illegal in a Nonconformance Report (NCR) ●  ●  ●

Wow! Is it really illegal? Yes, the same way speeding is illegal. Yet most of us have sped right past a police officer’s speed trap without getting a ticket. Sometimes the police even drive right past us at an even higher speed. Similarly, many auditors will look right at a company’s NC process and not write a finding for performing investigations in the NCR. Auditors, like the police, are generally looking for the big offenders. (Additionally, some auditors may not know the distinction, either. Others may write you up even if they overlooked it 57 times before.)

Be careful how you read the title of this chapter. It is not saying that you don't investigate nonconformances; it is saying that you just don't perform and document the investigation in the NC report.

Before we jump into this, it's important to understand some of the fundamental differences between an NCR and a CAPA. 21 CFR 820.90 describes NCRs addressing "the identification, documentation, evaluation, segregation, and disposition of nonconforming product." 21 CFR 820.100 describes CAPAs including identifying the systemic issue, investigating, planning, implementing the actions, and verifying they work.


While there are similarities between these (for example, identify, plan, act), there are some fundamental differences (such as dealing with the pile of bad stuff vs. finding the cause of the pile, isolation vs. investigation, and different responsibilities). Many companies tend to overlap these processes; however, it's important to separate the activities according to these fundamental differences—specifically, dealing with the pile itself versus finding the cause. (Yes, the euphemism of a "pile of bad stuff" is an oversimplification, but it does provide a good visual image to help most people differentiate the concepts.)

So, what do I mean when I say it's illegal? It says right in 21 CFR 820.90 and in ISO 13485 that we are to "evaluate" nonconforming product, right? Yes, but there's a big difference between "evaluation" and "investigation." Many people think of these terms as being equivalent, but the language used in the CFR and the ISO standard appears to have been intentionally chosen.

As discussed in Chapter 23, the word evaluate means "to determine or fix the value of." Per Merriam-Webster online, investigate means "to observe or study by close examination and systematic inquiry." While there are some similarities, the root of "evaluate" is "value," and the word is used for determining the "numerical quantity." So, when the CFR and ISO standard describe product NCRs "evaluating" the product, they're talking about determining the scope of the problem. The CFR and ISO standard are not in any way indicating that you are to investigate the cause at this point. To be clear, both the CFR and ISO standard specifically indicate that the evaluation "shall include a determination of the need for an investigation" (emphasis added). That is only the "need," not the investigation.
To be extra clear, the next section of the CFR (21 CFR 820.100 Corrective and preventive action) specifically states that “(a) Each manufacturer shall establish and maintain procedures for implementing corrective and preventive action. The procedures shall include requirements for…(a)(2) Investigating the cause of nonconformities relating to product, process and the quality system” (emphasis added).




When we break down the sections of the regulation and the specific wording used, it becomes apparent that an NCR is not (by regulation) to include the investigation of the cause of the nonconformity, and a CAPA is (by regulation) to be the place where the investigation of nonconformities is to take place and be documented. However, as discussed, many auditors either don't understand the details or are just looking for bigger issues (such as whether your process failed to fix a problem).

Case Study—That won't happen to me!

On April 12, 2017, the FDA issued a warning letter to Abbott (St. Jude Medical Inc.) that included a finding regarding (not) following CAPA procedure(s).
• Your firm failed to follow its CAPA procedures when evaluating a third-party report.
• Your firm conducted a risk assessment and a corrective action outside of your CAPA system (emphasis added). Your firm did not:

– confirm all required corrective and preventive actions were completed, including a full root-cause investigation and the identification of actions to correct and prevent recurrence



– confirm that verification or validation activities for the corrective actions had been completed, to ensure the corrective actions were effective and did not adversely affect the finished device

While the details of the event aren't published by the FDA, it's recommended that you consider how the FDA and other NBs may respond to your audit, NC, and CAPA processes. As noted, the CFR and ISO 13485 describe how manufacturers must handle the pile of bad stuff. Specifically, manufacturers are to identify and document the problem, and then evaluate the scope of
the issue and segregate ("to separate or set apart from others or from the general mass: isolate"—Merriam-Webster) the potentially affected product. Once you have control of the pile, you must develop the plan (disposition) for what to do with the pile. Note that the regulation and standard don't require you to execute the plan (the correction), but they do require you to document it properly if/when you do execute it. (See Chapters 16 and 26 for additional discussion of the types of dispositions.)

It's important to understand that the evaluation and segregation activities are also very iterative throughout the NC and CAPA processes. Typically, there are several steps where the scope of the issue should be evaluated:

• The initial containment/isolation—This is usually done by the person who found the issue and may involve that person literally or figuratively putting their arms around as much of the potentially affected product as possible. This should isolate anything suspected of possibly being affected and should be done before sorting the pile to determine which individual units actually are nonconforming. The isolation of "potentially" involved material helps mitigate several risks, including establishing control early, minimizing the impact of the problem, ensuring material/product is not processed further, and ensuring product doesn't leave the manufacturer's control.

• An interim containment/isolation—This is typically performed by a supervisor, engineer, or quality representative. It may involve the expansion to other affected lots (for example, prior/subsequent lots) but may also involve the sorting (that is, evaluation) of the pile to determine an accurate scope of the issue. (Notice that this does not include any investigation or identification of the cause; it's just sorting those that are confirmed to be nonconforming.)

• Subsequent containment/isolation—This is usually done after the CAPA investigation is completed. The scope should be re-evaluated to determine whether more product needs to be
segregated. After the investigation is completed in the CAPA, the CAPA could document the assessment of whether the scope of the NC was adequate or needs to be re-evaluated. However, if it is determined that the scope needs to be re-evaluated, either the existing NCR should be updated/reopened or a new NCR should be initiated to properly manage the activities in the process that are specifically designed for evaluation and segregation of the pile. The more you evaluate, the more or less you may decide to segregate. As a result, your isolation quantities and the product/lots isolated may change several times throughout the process. (Note: The terms initial, interim, and subsequent were created for the purpose of this discussion. They are not defined in any regulation or standard; however, the 8D problem-solving model also uses the term interim containment.)

However, beyond just dealing with the pile of bad stuff, the FDA and ISO slip in a couple of additional requirements for the NC process:
• determination of the need for an investigation, and
• notification of the persons or organizations responsible for the nonconformance.

In Chapter 17, we discussed how to use the risk thresholds from the risk documents to determine when we need an investigation (CAPA). As discussed in Chapter 23, this aligns with the use of the term evaluate to describe the process of comparing a number against a limit/threshold/specification. We will discuss in Chapter 28 how the notification of the persons or organizations responsible is not a corrective action.
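As a sketch of that "evaluate" step, the decision is simply a comparison of an observed value against a limit; the numbers and names below are hypothetical, and your actual thresholds come from your own risk documents (Chapter 17):

```python
def investigation_needed(nc_count, units_produced, occurrence_limit):
    """Evaluate (compare a value against a limit) whether the observed
    nonconformance rate exceeds the occurrence threshold from the risk
    documents, which would trigger a CAPA investigation.
    No cause analysis happens here; that belongs in the CAPA."""
    observed_rate = nc_count / units_produced
    return observed_rate > occurrence_limit

# Hypothetical example: 12 nonconforming units out of 50,000 produced,
# against a risk-file occurrence limit of 1 in 10,000 (0.0001).
print(investigation_needed(12, 50_000, 0.0001))  # True: 0.00024 > 0.0001
```

Note how nothing in this "evaluation" touches the cause of the nonconformity; it only determines the need for an investigation, which is exactly the split the regulation draws between 820.90 and 820.100.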

Case Study—Corrective Action vs. Preventive Action

On August 7, 2015, the FDA issued a warning letter to Cardiac Designs Inc. that included a finding regarding (not) following CAPA procedure(s).
• b. CAPA (b)(4) dated October 10, 2014, was initiated after a third-party audit identified your CAPAs were not being fully documented. The CAPA states the corrective action was to correct the deficient CAPA and provide additional training. The CAPA does not contain any record showing this training was conducted. In addition, the CAPA states correction will be verified during internal audits; however, there is no record of these verification activities being conducted. In addition, this CAPA was incorrectly identified as a "preventive action" although the nonconformance had already occurred (emphasis added). Further, the CAPA was ineffective as our investigation identified your corrective and preventive actions are not being fully documented.

When you expand your CAPA actions to other products and processes, some people may want to claim it's corrective for the initial product/process and preventive for everything else. I recommend that you don't worry about having this discussion with an auditor and just identify all actions as "corrective" for a CAPA that was opened in response to an NC.

Chapter 28 Don’t Blame the People ●  ●  ●

Managers and executives often blame their people. We don't ever want to take the blame ourselves and need to be able to point the finger at someone else. While that's certainly a cynical view, years of reviewing CAPA root cause investigations at multiple companies have shown that finger-pointing is surprisingly common. It also seems to provide a quick resolution to a CAPA if we can blame the people and just retrain them. But is that really fixing the problem?

The 5 Whys is a common root cause analysis (RCA) tool with which many people are familiar. It's so simple and intuitive that most toddlers use it to drive their parents crazy (that is, to learn about the world). CAPA investigators should take the example from their toddlers and keep asking this question. Unfortunately, many investigators stop asking "Why?" once they identify that their people did something wrong.

If you're using a 5 Whys tool and reach the point where you blame the people—don't stop. Keep asking "Why?" Chances are there's a reason the people did what they did. Maybe they are overworked and understaffed and didn't have adequate time to focus on the work (a resourcing issue). Maybe they were inadequately trained in the first place (a training issue that might not be fixed if you just retrain them using the same process). Or maybe they just don't think rules apply to them (an enforcement issue). There are a variety of possible root causes behind people's actions. We shouldn't just blame the people; we need to find out the real root cause.
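As an illustration, a 5 Whys record can be captured as a simple chain, with a check that flags chains that stop at a person instead of a process. Everything below (the example chain, the keyword list) is hypothetical:

```python
# Hypothetical 5 Whys chain for a mislabeling nonconformance.
five_whys = [
    "Why was the device mislabeled? The operator picked the wrong label.",
    "Why did the operator pick the wrong label? Two label rolls were staged at the station.",
    "Why were two rolls staged? The changeover procedure doesn't require removing the old roll.",
    "Why doesn't the procedure require it? It was written before multi-product lines existed.",
    "Why wasn't it updated? No trigger exists to review procedures when lines are repurposed.",
]

# Words suggesting the chain still points at a person, not a process.
PEOPLE_WORDS = ("operator", "technician", "employee")

def stops_at_blame(chain):
    """Flag a chain whose final 'why' still points at a person,
    a sign the investigator stopped asking 'Why?' too early."""
    last_answer = chain[-1].lower()
    return any(word in last_answer for word in PEOPLE_WORDS)

print(stops_at_blame(five_whys))      # False: ends at a process gap
print(stops_at_blame(five_whys[:1]))  # True: stops at the operator
```

A keyword check is obviously crude, but it makes the point: if the last "why" names a person, the chain isn't finished.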




With risk tools (for example, FMEAs), we analyze the sequences of events that lead to harms. It's surprising how few CAPA root cause investigators use the work that has already been done as a starting place for their investigations. Someone in your organization has already spent a lot of time compiling a list of what can go wrong with your product and what the known/likely cause(s) are. If your risk documents were properly developed to link nonconformances to the harms, you may also have captured the known/likely cause(s) of those nonconformances and probably aren't just blaming your people. Additionally, if the issue is in your risk documents, then you have also identified risk control measures and can focus on why those may not be adequate.

Years ago, while searching for a way to help companies quit blaming their people, I stumbled on several examples of flowcharts that took the RCA further. The UK's Health and Safety Executive (the UK equivalent to OSHA) had a very helpful example.1 Figure 28.1 reorganizes the information from the UK toolkit to highlight the spectrum of root causes for human failures. Notice that some issues are the result of inadequate rules/procedures (retraining or revising the training may be appropriate), some are the result of inadequate resourcing (additional training would not be valid—and you might have to blame your management), some are the result of people using their best judgment in exceptional situations, and sometimes it really is the person's fault.

Figure 28.1 Human failure types.

Human failures divide into inadvertent errors and deliberate non-compliance:

Inadvertent—Error
• Thinking errors (action—as planned):
  – Rule-based Mistake: behavior based on remembered rules and procedures; misapplication of a good rule or application of a bad rule.
  – Knowledge-based Mistake: no rules or routines available to handle an unusual situation; resorts to first principles and experience.
  Corrective focus (Rules/Procedures): create/update procedures; improve feedback (e.g., displays, shift handover, etc.); improve tools (e.g., flowcharts, schematics, job-aids, etc.); competency training.
• Action errors (action—not as planned):
  – Action-based Slip (Commission): a simple, frequently performed action goes wrong.
  – Memory-based Lapse (Omission): short-term memory lapse; omit to perform a required action.
  Corrective focus (Resourcing; additional training not valid): provide sufficient resources/time for the task; remove distractions/interruptions; checklists and reminders; independent cross-check; warnings and alarms.

Deliberate—Non-compliance (Violations)
• Routine: non-compliance is the "norm"; rules no longer apply.
• Situational: situation-specific factors (e.g., time pressure, workload, etc.).
• Exceptional: calculated risk taken in highly unusual circumstance.
  Corrective focus (Enforcement/Culture): improve risk perception (raise awareness of the "whys" and consequences—up to termination); increase the likelihood of getting caught; increase supervision; eliminate reasons to cut corners (e.g., unrealistic workload); encourage reporting of violations—make non-compliance "socially" unacceptable.
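The spectrum in Figure 28.1 can also be treated as a lookup during CAPA triage, so that the corrective strategy follows the failure type rather than defaulting to retraining. This is an illustrative sketch of that mapping, not a tool from the HSE toolkit:

```python
# Map human-failure types (per Figure 28.1) to the class of corrective
# strategies the figure associates with them (illustrative sketch).
FAILURE_STRATEGY = {
    "rule-based mistake":      "rules/procedures",
    "knowledge-based mistake": "rules/procedures",
    "action-based slip":       "resourcing",
    "memory-based lapse":      "resourcing",
    "routine violation":       "enforcement/culture",
    "situational violation":   "enforcement/culture",
    "exceptional violation":   "enforcement/culture",
}

def corrective_strategy(failure_type):
    """Return the strategy class for a failure type. Note that for
    slips and lapses, additional training is not a valid correction."""
    return FAILURE_STRATEGY[failure_type.lower()]

print(corrective_strategy("Memory-based lapse"))  # resourcing
```

The design point is the same one the figure makes: retraining is a valid response to only one branch of the tree.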




21 CFR 820.90(a): "The evaluation of nonconformance shall include…notification of the persons or organizations responsible for the nonconformance."

ISO 13485, 8.3.1: "The evaluation of nonconformity shall include…notification of any external party responsible for the nonconformity."

Note that simply notifying people that they were responsible for a nonconformity is not a corrective action, in that it does not provide any instructions regarding how their behavior or actions are to change. Similarly, retraining to the same old procedures won't help if you have poor procedures or an inadequate training/retraining process.

Endnote
1. United Kingdom Health and Safety Executive, "Leadership and worker involvement toolkit," https://www.hse.gov.uk/construction/lwit/assets/downloads/human-failure.pdf.

Chapter 29 Flowcharts ●  ●  ●

I have developed the following flowcharts to help readers understand some of the required procedural relationships and the flow of information into, within, and out of the risk management process. Some of the terminology comes from the regulations and standards (for example, risk management report, clinical evaluation report, etc.); however, some terminology is generic and may be described differently at your company (for example, customer requirements document, regulatory plan, etc.).

Accompanying each flowchart is a brief description of what the flowchart represents and how the inputs, outputs, and flow were determined. These are examples, and your company's processes may differ from these examples. You may want to create similar flowcharts of your company's processes to help you and your peers better see the current structure and potential improvements that may be needed for your processes. Additionally, flowcharts of your company's processes can assist in efficiently and effectively explaining your risk management system (and how it links to the rest of your QMS) to auditors and new employees.

Legend for Flowcharts (Figure 29.1)

As discussed in Chapter 1, many of the activities performed for some processes are part of the risk management activities. While these

activities are (or feed into) risk management activities, they are typically performed by different groups/departments within larger organizations and have separate procedures/processes even within smaller organizations. Within these flowcharts, patterns of the shapes are intended to represent the typical group/department or procedure/process activity (Figure 29.1). As noted previously, these are examples, and your organization may organize the activities differently.

Activities that are colored gray represent the activities people have traditionally associated with risk management (for example, the risk management plan, risk tools, and the risk management report described in ISO 14971). Activities with diagonal lines (lower left to upper right) represent the activities associated with post-market surveillance.

Depending on whether your company manufactures and sells medical devices or in vitro diagnostic medical devices in Europe, you will follow either the EU MDR or the EU IVDR. While these regulations have many similarities, there are some differences, including how they reference the traditional "clinical" activities. While the EU MDR refers to many of these activities using the term "clinical," the EU IVDR often describes activities as "performance" instead (for example, clinical evaluation reports vs performance evaluation reports). For those who work in these areas, the difference may be important; however, for the purpose of most of these flowcharts, I have combined them and referred to them by the collective title "clinical/performance." Activities that fall into these categories have a horizontal line pattern.

Most companies assign a regulatory department the responsibility to oversee the gathering and submission of the required documents for any governments or other agencies controlling the markets where their products are sold. The flowcharts include activities with diagonal lines (upper left to lower right) that would likely be owned by or fed into the regulatory department.

Since many companies are currently focused on complying with the EU regulations, the flowcharts include a small "TF" icon in the corners of the documents that require technical documentation per Annexes II and III of the EU MDR and IVDR. Additionally, a small "database" shape titled "Eudamed" is included for documents and processes that are required to feed into the new

European database on medical devices (Eudamed). (Note that there are other processes outside of risk management that are also required to feed into Eudamed.) The remaining activities may not align cleanly with any of these traditional roles; they are uncolored and are included in the flowcharts because they are definitely part of the risk management process (or at least feed into it). For example, the production and post-production quality data monitoring processes are absolutely required by ISO 14971; however, they may not be covered under the risk management policy and procedures at many companies.

Figure 29.1 Legend for flowcharts.

Patterns/Borders/Inset Images: Risk (gray); Post-market surveillance (diagonal lines, lower left to upper right); Clinical/performance (horizontal lines); Regulatory (diagonal lines, upper left to lower right); Non-risk process (uncolored); Output of risk management; Technical file documentation ("TF" icon); Required to be filed in Eudamed ("Eudamed" database shape).

Shapes: Process; Decision; Sub-process; Start/end; Document; External data; Off-page reference; Database.




Risk Management Process (Figure 29.2)

The risk management process presented here follows closely with the steps laid out in ISO 14971. The steps feed into each other and rely on various inputs along the way. This flowchart lays out the path and some possible inputs at each step. While executing the activities at each step, it is recommended that you consider the goals specific to that step and where you will be documenting your work.

The flowchart includes a recommended step that is not specifically identified in ISO 14971. Section 7.6 of the standard describes performing the review of the risk control activities to ensure they are complete. When reading this, most engineers will automatically gravitate toward the activities they have done to address the first two risk control options (that is, safe design/manufacture and protective measures). Because labeling is often the responsibility of employees in a separate department, the reviews of risk control activities often gloss over the information for safety and training. Additionally, it's important to ensure that the communication of the residual risks is addressed at the same time, prior to the final evaluation, per Section 8. To ensure that the communication of information for safety/training and the residual risks is complete prior to the final evaluation, it is recommended that a separate, thorough review be performed of the applicable labeling, training materials, advertising, etc. Additionally, if the detailed analysis of the information for users and patients includes a discussion of the specific residual risks, warnings, and precautions, this information can then easily feed into the SSCP/SSP as well.

The inputs for some steps are described within ISO 14971 (for example, Section 6 describes the risk evaluation activity "using the criteria…defined in the risk management plan"). Some inputs are described within the EU MDR/IVDR (for example, Annex III, Paragraph 1.1(a) requires the "collection and utilization" of data (such as from technical literature and for similar medical devices) for post-market surveillance, which would typically be captured in the CER/PER). And some inputs align with industry best practice (for example, your clinical or performance studies are generally designed to capture all the harms people experience while using your products).

Figure 29.2 Risk management process. The flowchart pairs each step with its possible inputs, output goals, and example tools (NCEA, RMP, RMR, etc.):

1. Identification of intended use, misuse, and safety characteristics. Possible inputs: IUS, customer requirements. Output goals: intended use; reasonably foreseeable incorrect or improper use; characteristics that could affect the safety of the device; limits of characteristics that could affect the safety of the device. Example tool: RMP.

2. Identification of hazards and hazardous situations. Possible inputs: customer/design/process requirements and specifications; quality data (product and similar products); CER/PER, PMSR/PSUR. Output goals: sequence of events that can transform a nonconformance into a hazard, then into a hazardous situation with resulting harm; list of hazards; list of hazardous situations. Example tool: NCEA.

3. Risk estimation. Possible inputs: quality data; CER/PER, PMSR/PSUR. Output goals: categorization of the severity of harms; categorization of the probability of occurrence of each harm; categorization of the probability of occurrence of each nonconformance. Example tool: NCEA.

4. Risk evaluation. Possible input: RMP. Output goal: risk acceptability. Example tool: NCEA.

5. Risk control. Output goal: risk control measures. Example tool: NCEA.

6. Implementation and verification of risk control. Possible inputs: requirements/specifications, DV&V, PV, labels. Output goals: verification of implementation for risk control measures; verification of effectiveness for risk control measures. Example tool: NCEA.

7. Residual risk evaluation. Possible input: RMP. Output goal: residual risk acceptability. Example tool: NCEA.

8. Benefit-risk analysis. Possible input: medical benefits from the CER/PER. Output goals: assessment of medical benefits and residual risks; justification and approval of acceptability/unacceptability; information for safety that is necessary to disclose the residual risk. Example tools: individual benefit-risk analysis, RMR.

9. Risk arising from control measures. Output goals: changes necessary as a result of control measures; list of hazards or hazardous situations introduced as a result of implementing risk control; list of previously estimated risks affected by risk control measures. Example tool: NCEA.

10. Completeness of risk evaluation. Output goal: assessment of whether all identified hazardous situations have been evaluated and controls are complete/implemented. Example tool: RMR.

11. Information for user/patient. Possible inputs: IFU, labeling, user training materials, patient brochures, advertising. Output goals: review of product labeling to ensure any potential hazards identified were assessed and included; information to aid in the development and/or modification of information for safety and training for users and patients; list of information for safety (contraindications, warnings, precautions, and special instructions); list of disclosed residual risks (adverse events, potential AEs). Example tool: RMR.

12. Evaluation of overall residual risk. Possible input: medical benefits from the CER/PER. Output goals: evaluation of the medical benefits for the device(s) vs the overall risk; medical judgment (conclusion) of acceptability/unacceptability of the medical benefits for the device(s) vs the overall risk. Example tool: RMR.

13. Risk management review. Output goals: results of the risk management process; demonstration that the RMP has been properly implemented; demonstration that the overall risk is acceptable; demonstration that appropriate systems are in place to obtain production and post-production information. Example tool: RMR.

14. Production and post-production activities. Possible input: quality data. Output goals: list of nonconformances, hazards, and harms that were not previously analyzed; list of increased/unacceptable risks. Example tools: NCEA, PMSR/PSUR.

Similarly, the output goals may be described within ISO 14971 (for example, risk evaluations per Sections 6 and 7.3 determine if the risk is acceptable/unacceptable), the regulations (for example, Annex I, Paragraph 4 requires manufacturers to "inform users of any residual risks"), or best practice (for example, PMS is for "the identification of needs for preventive, corrective or field safety corrective action," which as discussed in this book is based on new or increased/unacceptable risks).

Where you choose to document your work is typically up to you; however, the EU MDR/IVDR do require you to have an RMP (Annex I, Paragraph 3(a)). ISO 14971 describes putting certain information into the various documents but often leaves it up to the manufacturer by simply requiring that the information be "maintained in the risk management file" (somewhere).

Risk Management Plan (RMP) (Figure 29.3)

Some companies have tried to rationalize their historic lack of product-specific or product-family-specific RMPs by arguing that their risk policy and procedures describe the expectations for risk management for every product. With the increased emphasis on "state of the art" in the EU MDR/IVDR and ISO 14971, it is critical for any company with a portfolio of products to evaluate the risks according to a plan that is specific to that product/product family. Additionally, as discussed in Chapter 5, a single set of criteria likely won't give you the resolution you need to identify issues and drive actions, especially if you have significant variability in your sales across your portfolio.

Section 4.4 of ISO 14971 describes the parts of the RMP, and this flowchart aligns the activities with those delineated in the standard. While some information would feed into the RMP (for example, requirements that describe the device), there are some activities that would feed into and draw information from the RMP. For example, ISO 14971 requires that the RMP delineate the "requirements for review of risk management activities." Since this activity would likely overlap with the ISO 13485 requirements for management review "to ensure [the QMS] continuing suitability, adequacy and effectiveness," these two activities should be closely aligned and feed into each other.
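To see why a single set of criteria breaks down across a portfolio, consider the same absolute complaint count at two very different sales volumes (the numbers are hypothetical):

```python
def occurrence_rate(events, annual_sales):
    """Occurrence expressed as a rate, the form in which per-product
    RMP acceptability criteria are usually stated (see Chapter 5)."""
    return events / annual_sales

# Ten complaints mean very different things across a portfolio:
low_volume = occurrence_rate(10, 1_000)       # 0.01 (1 in 100)
high_volume = occurrence_rate(10, 1_000_000)  # 0.00001 (1 in 100,000)
print(low_volume, high_volume)
```

A fixed count-based threshold shared across both products would either drown the low-volume device in false alarms or never fire for it at all, which is why criteria belong in each product's RMP, scaled to its expected sales volume.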

Figure 29.3 Risk management plan (RMP).

The RMP activities shown in the flowchart: identify and describe the device; identify and describe the life-cycle phases; assign responsibilities and authorities; plan requirements for review of RM activities; define criteria for risk acceptability; plan verification activities; plan data collection and review activities.

Inputs and related plans feeding into (or drawing from) the RMP: intended use; customer requirements document; requirements management plan; regulatory plan; master validation plan; usability engineering plan; supply chain plan; state-of-the-art determination; quality data; external standards; expected sales volume; clinical/performance evaluation plan (CEP/PEP); post-market surveillance plan (PMSP); production and post-production quality data; management review.

Notes from the figure:
• Per ISO 14971:2019, "The criteria for risk acceptability are essential for the ultimate effectiveness of the risk management process. For each risk management plan the manufacturer needs to establish risk acceptability criteria that are appropriate for the particular medical device." (emphasis added)
• Per ISO 14971:2019, the "criteria are based upon applicable national or regional regulations and relevant International Standards and take into account available information such as the generally accepted state-of-the-art…"
• Per ISO/TR 24971:2020, the plan covers "activities for the collection and review of production and post-production information, and how this information is used to determine if the risks associated with the medical device are acceptable," including the "method for collecting" and "the frequency of review" of the information.

Similarly, the production and post-production data collection activities will depend on what data are currently available and will likely identify additional information that may need to be collected. As data collection systems change within your organization, you should update your RMP (and PMSP) accordingly.

Risk Tools (e.g., FTA, NCEA) (Figure 29.4)

The term “risk management file” is used throughout ISO 14971 to describe all the various records (“and other documents”) that are generated by risk management. The file would include your RMP, RMR, and a host of other possible records/documents. The term “risk tool” is used here to describe the various documents you would use between the RMP and RMR for the analysis, evaluation, and documentation of the risk controls, and for the residual risk evaluation. Since each company seems to have its own favorite tool for the task, they are referred to collectively here as “risk tools” or “risk management tools.” Most companies use some type of FMEA document. Some think that by adding “criticality” into the acronym they will sound smarter (for example, FMECA), but as discussed in Chapters 3 and 4, the only things that really matter are the NC rates and the harm rates (P) for the bad things that will actually happen. What tool you use to get there is up to you.

The flow of activities within the risk tools, as presented in this flowchart, follows the activities in the first four boxes of ISO 14971 Figure 1, which cover Sections 5–8 (with some exceptions). Note that the benefit-risk analysis and the reviews of the risks that arise from risk control measures and of the completeness of risk controls may be performed in the risk tools but are often documented in the RMR. As discussed, you may not have a CTQ scorecard.

As with the RMP flowchart, some information feeds into the risk tools, and some activities feed into and draw information from the risk tools. For example, the design and process verifications and validations should be based on the associated risks, as described in Section 2 of this book, but the objective evidence for the results needs to be documented in the risk tool to support that the control has been implemented and is effective.
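The point about NC rates and harm rates reduces to a line of arithmetic, sketched here with hypothetical numbers (this simply mirrors the sequence-of-events math from Chapters 3 and 4):

```python
# Hedged sketch of the arithmetic behind that claim; the defect rate and
# conditional probability below are hypothetical examples.

def harm_rate(nc_rate: float, p_harm_given_nc: float) -> float:
    """Harm rate = (rate of the nonconformance) x (probability that the
    NC actually leads to the harm)."""
    return nc_rate * p_harm_given_nc

# A seal defect occurring in 1 in 10,000 units that leads to an infection
# in 1 in 100 exposures yields a harm rate of about 1e-6 per unit:
print(f"{harm_rate(1e-4, 1e-2):.1e}")  # prints "1.0e-06"
```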

Figure 29.4 Risk tools (e.g., FTA, NCEA). [Flowchart: the risk management plan (RMP), intended use/safety characteristics, customer/design/process requirements and specifications, and quality data (product and similar products; production and post-production data; literature search; review of similar products; external standards; expected sales volume) feed risk analysis (identification of hazards, estimation of risk(s), CTQ scorecard), risk evaluation, risk control (risk control option analysis, implementation of risk control measures, risk reduction (i.e., AFAP)), and residual risk evaluation. Linked PDP records include design/software/packaging/stability protocols and reports, process validation protocols and reports, usability protocols and reports, clinical/performance plans and reports, traceability matrix(ces), and information for the user/patient (e.g., IFU, labeling, training).]

Annotations in the figure: A master harms list makes the identification and estimation of the severity of harms more consistent. Some data and testing may not be available when initiating the risk analysis activities, and the analysis may need to be updated as new information becomes available through the development process.

Note: The PDP activities identified on this flowchart are not all-inclusive of activities that may need to link to risk management. Specific interface controls, use cases, reliability tests, test method validations, supplier controls, etc., may require input from or output to risk.

Risk Management Report (RMR) (Figure 29.5)

When ISO 14971 states that the risk management review must ensure that “the risk management plan has been appropriately implemented,” does that just mean you need to ensure the RMP was approved in a document control system? You should interpret the standard to mean that you need to review all the activities you have done for risk management to ensure you did everything you planned. The RMR flowchart is designed with the expectation that you are going to review the activities you performed in your risk tools and that you have the necessary PMS processes in place.

As shown in the aforementioned risk management process flowcharts, a step is added in this flow to include a review of the information for users and patients. While this activity could be combined with the review for the completeness of risk controls, it is recommended that it be performed separately and thoroughly. If the detailed analysis of the information for users and patients includes discussion of the specific residual risks, warnings, and precautions, this information can then easily feed into the SSCP/SSP as well.

The CER/PER are listed as being “interdependent” with the individual benefit-risk analysis and the overall residual risk acceptability steps. As discussed for the risk tools flowchart, the benefit-risk analysis is often documented in the RMR rather than in the individual tools. Generally, this is because it’s necessary to have information regarding the “benefits” to make a benefit-risk analysis. Since the benefit information usually comes from the CER/PER (per Articles 61 and 56 of the EU MDR/IVDR, respectively, the benefit-risk acceptability must be based on “sufficient clinical evidence”), these analyses are typically performed at the end of the clinical and risk processes.

Since the benefit-risk analysis relies on both the benefits (from the CER/PER) and the risks (from the tools, summarized in the RMR), and since the determination of acceptability should be performed by someone with relevant medical/clinical experience, the decision is often documented in the CER/PER but could be documented in the RMR instead. Either way, the benefit-risk analysis is dependent on data from both processes.

Figure 29.5 Risk management report (RMR). [Flowchart: drawing on the risk tools and the risk management plan (RMP), the RMR compiles the risk analysis summary, risk evaluation summary, risk control summary (risk control option analysis, risk control implementation and verification, risk reduction), residual risk evaluation summary, risks arising from risk control measures, and completeness of risk control, supported by traceability matrix(ces), the master validation plan, the intended use, and the information for the user/patient (e.g., IFU, labeling, training). The individual benefit-risk analysis and the overall residual risk acceptability are marked as interdependent with the clinical/performance evaluation report (CER/PER). Production and post-production activities link the RMR to the clinical/performance evaluation plan (CEP/PEP), the post-market clinical/performance follow-up report (PMCF/PMPF), the post-market surveillance plan and report (PMSP/PMSR), the periodic safety update report (PSUR), and production and post-production quality data.]

Risk Management Post Market (EU MDR/IVDR Combined) (Figure 29.6)

This flowchart is a composite view of the flow of activities described in both the EU MDR and the EU IVDR collectively. Subsequent flowcharts show the flow of activities specific to each of these regulations individually. This flowchart of risk management activities performed after the product is launched to the market (that is, post market) includes several activities/documents that would likely be initially performed or developed prior to launch. For example, the RMP would be drafted early in the development process (as described in Chapter 7). The RMP and the other plans are shown in this flowchart for two reasons:

1. To highlight that if you haven’t developed the plans prior to launch, you need to develop them.

2. Since the PMS activities are iterative, the plans may be updated as you repeat the PMS cycle.

As mentioned previously, the flow of activities is based on the activities described in the EU MDR and IVDR collectively. Note that there are differences in the device classification schemes, which change the PMS reports and PMS schedules accordingly. Additionally, the regulations delineate different clinical/performance activities and describe the summary document submitted to Eudamed differently. Some of the flow of activities is based on ISO 14971 (for example, the RMP, risk tools, and RMR and their connection to design/process controls and validations). Some of the flow is based on the regulations (for example, the PMSR and PSUR articles in the EU MDR/IVDR delineate the classification of products used for each). Additionally, some of the flow of activities is based on industry best practice and common sense (for example, the EU MDR/IVDR do not require that data from the PMSR/PSUR feed into the trend reporting; however, since the trend report includes an evaluation of frequency and severity that is similar to that performed during the drafting of the reports, it makes sense to link the two). The list of quality data that feeds into the PMS process is described in Chapter 26 and comes from a variety of sources.
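One way to operationalize the trend link is sketched below. EU MDR Article 88 and EU IVDR Article 83 ask for detection of a statistically significant increase in frequency or severity; the 3-sigma Poisson limit used here is a common industry choice, not a regulatory requirement, and the rates and counts are hypothetical:

```python
import math

# Hedged sketch of a trend check in the spirit of EU MDR Article 88 /
# EU IVDR Article 83. The 3-sigma Poisson limit is one common choice,
# not a regulatory mandate; baseline rate and counts are hypothetical.

def trend_signal(observed_events: int, units: int, baseline_rate: float) -> bool:
    """Flag a trend when the observed event count exceeds a 3-sigma
    Poisson limit around the count expected from the baseline rate."""
    expected = baseline_rate * units
    upper_limit = expected + 3 * math.sqrt(expected)
    return observed_events > upper_limit

# 50,000 units shipped this period against a historic complaint rate of
# 2e-4 (expected ~10 events; limit ~19.5):
print(trend_signal(observed_events=12, units=50_000, baseline_rate=2e-4))  # False
print(trend_signal(observed_events=25, units=50_000, baseline_rate=2e-4))  # True
```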

Figure 29.6 Risk management post market (EU MDR/IVDR combined). [Flowchart: starting from RM during the PDP, quality data (see inset) feed the risk management plan (RMP), risk tools (e.g., NCEA, master harms list), risk analysis, risk evaluation, residual risk evaluation, and risk control, with requirements traceability to design inputs/process controls and verification/validation. Clinical inputs include the clinical/performance evaluation plan (CEP/PEP), exploratory investigations, confirmatory/clinical investigations, the clinical performance study plan and report (CPSP/CPSR, submitted to Eudamed as needed), scientific validity, and the analytical and clinical performance reports; these feed the clinical/performance evaluation report (CER/PER), which needs to be interdependent with the risk management report (RMR). PMS outputs include the post-market surveillance plan (PMSP) and post-market surveillance report (PMSR; MDR Class I only, IVDR Class A & B only), the periodic safety update report (PSUR; MDR Class II & III only, IVDR Class C & D only), the post-market clinical/performance follow-up plan and report (PMCF/PMPF, with its main findings submitted to Eudamed), the summary of safety and (clinical) performance (SSCP/SSP; MDR implantable and Class III only, IVDR Class C & D only; submitted to Eudamed), trend reporting (submitted to Eudamed), field actions, and CAPA, with records kept in the technical file (TF) and checked against the GSPR before the PMS cycle repeats. Report update frequency: MDR Class I when necessary, Class IIa at least every 2 years, Class IIb & III at least annually; IVDR Class A & B when necessary, Class C & D at least annually.]

Quality data inset: feedback (production trending of process/product trends; post-production complaints, MDRs/MDVs/field actions, and trending*); conformity to product requirements (NCRs, yield/scrap/rework, CAPAs/SCARs); audits; design/process changes; literature search (from PMCF/PMPF, CER/PER); review of similar products (from CER/PER); external standards; labeling; benefit-risk determination (for PSUR); and sales volume (for PSUR).

Note: Some documents initially generated during PDP may need to be re-evaluated/updated post-market (e.g., risk tools, performance evaluation plan, regulatory plan, tech file, etc.). *Trending: Per EU MDR 2017-745 Article 88 and EU IVDR 2017-746 Article 83.

The CER/PER is shown as being “interdependent” with the RMR. As discussed in the flowchart for the RMR, this is to demonstrate the dependency of the benefit-risk analysis on the benefits identified in the CER/PER and the risks identified in the risk tools and summarized in the RMR. Since the determination of acceptability should be performed by someone with relevant medical/clinical experience, the decision is often documented in the CER/PER but could be documented in the RMR instead. Either way, the benefit-risk analysis is dependent on data from both processes.

Risk Management Post Market (EU MDR) (Figure 29.7)

The risk management post-market (EU MDR) flowchart is similar to the prior “combined” flowchart but has removed the IVDR activities to present just the typical flow used for MDR PMS. See the comments for the combined flowchart above for additional details.

Figure 29.7 Risk management post market (EU MDR). [Flowchart: the MDR-specific view of the combined flow. Starting from RM during the PDP, quality data (inset as in Figure 29.6, drawing on the PMCF and CER) feed the risk management plan (RMP), risk tools (e.g., NCEA, master harms list), risk analysis, risk evaluation, residual risk evaluation, and risk control, with requirements traceability to design inputs/process controls and verification/validation. The clinical evaluation plan (CEP), exploratory investigations, and confirmatory/clinical investigations feed the clinical evaluation report (CER), which needs to be interdependent with the risk management report (RMR). PMS outputs include the post-market surveillance plan (PMSP) and post-market surveillance report (PMSR; Class I only), the periodic safety update report (PSUR; Class IIa, IIb, & III only: Class I when necessary, Class IIa at least every 2 years, Class IIb & III at least annually), the post-market clinical follow-up plan and report (PMCF, with its main findings submitted to Eudamed), the summary of safety and clinical performance (SSCP; implantable and Class III only, submitted to Eudamed), trend reporting (per EU MDR 2017-745 Article 88, submitted to Eudamed), field actions, and CAPA, with records kept in the technical file (TF) and checked against the GSPR before the PMS cycle repeats.]

Note: Some documents initially generated during PDP may need to be re-evaluated/updated post-market (e.g., risk tools, clinical evaluation plan, regulatory plan, tech file, etc.).

Risk Management Post Market (EU IVDR) (Figure 29.8)

The risk management post-market (EU IVDR) flowchart is similar to the prior “combined” flowchart but has removed the MDR activities to present just the typical flow used for IVDR PMS. See the comments for the combined flowchart above for additional details.

Figure 29.8 Risk management post market (EU IVDR). [Flowchart: the IVDR-specific view of the combined flow. Starting from RM during the PDP, quality data (inset as in Figure 29.6, drawing on the PMPF and PER) feed the risk management plan (RMP), risk tools (e.g., NCEA, master harms list), risk analysis, risk evaluation, residual risk evaluation, and risk control, with requirements traceability to design inputs/process controls and verification/validation. The performance evaluation plan (PEP), scientific validity, the clinical performance study plan and report (CPSP/CPSR, submitted to Eudamed as needed), and the analytical and clinical performance reports feed the performance evaluation report (PER), which needs to be interdependent with the risk management report (RMR). PMS outputs include the post-market surveillance plan (PMSP) and post-market surveillance report (PMSR; Class A & B only), the periodic safety update report (PSUR; Class A & B when necessary, Class C & D at least annually), the post-market performance follow-up plan and report (PMPF; Class C & D only, with its main findings submitted to Eudamed), the summary of safety and performance (SSP; Class C & D only, submitted to Eudamed), trend reporting (per EU IVDR 2017-746 Article 83, submitted to Eudamed), field actions, and CAPA, with records kept in the technical file (TF) and checked against the GSPR before the PMS cycle repeats.]

Note: Some documents initially generated during PDP may need to be re-evaluated/updated post-market (e.g., risk tools, performance evaluation plan, regulatory plan, tech file, etc.).

Risk Management Post Market Summary (Figure 29.9)

The risk management process uses a variety of quality data (e.g., complaints, NCRs, CAPAs, design/process changes, etc.) to determine if changes are needed for the design or the process—or if information needs to be provided to the patient/user (e.g., IFU, training) or to the EU (e.g., trending, SSP).

Executives often just want the short answer. This flowchart is the scaled-down version of the prior risk management post-market flowcharts for presentation to executives and other people who may not be familiar with the details of risk management (or need them). See the comments for the combined flowchart above for additional details.

The flowchart highlights how risk management feeds into the risk controls as required by Annex I of the EU MDR/IVDR:

• Safe design and manufacture
• Protection measures, including alarms
• Information for safety (contra-indications/warnings/precautions) and training

It also highlights the information that needs to be provided to the patient/user (for example, IFU, training) and to the regulators (for example, trending, SSCP/SSP, technical file) as required by the EU MDR/IVDR.

Figure 29.9 Risk management post-market summary. [Flowchart: quality data feed post-market surveillance, the risk management plan/tools/report, and the clinical/performance and regulatory processes (alongside RM during the PDP). Outputs flow to design inputs/process controls, information for the user/patient (e.g., IFU, labeling, training), trend reporting and the summary of safety and (clinical) performance (SSCP/SSP) submitted to Eudamed, and the technical file (TF) checked against the GSPR.]

Product Development Process—Clinical (EU MDR) (Figure 29.10)

Chapter 1 described the scope of risk management, which includes significant overlap with the clinical/performance processes. The rest of the book has only tangentially discussed these interactions. Because the clinical departments at many companies are involved in studies that are outside the walls of the manufacturing facility, these groups often end up working in their own silo. As a result, the risk management projects often have minimal interaction with these groups. However, as can be seen by this flowchart and the risk management post-market flowcharts, the clinical/performance activities need to be closely linked to the rest of the risk management process—especially the connections to the clinical/performance plans, post-market surveillance, and benefit-risk analysis.




The PDP-clinical flowchart presented here for the EU MDR activities was built to align the activities with those delineated in the regulation. For example, Annex XIV, Paragraph 1(a) of the EU MDR delineates the contents of the clinical evaluation plan to include “a specification of the intended purpose of the device”; Annex IX, Paragraph 2.1 describes a requirement to keep the clinical evaluation plan (CEP) up to date “taking into account the state of the art”; and Article 61 states that for Class IIb and III devices “the manufacturer shall give due consideration to the views expressed by the expert panel.” Annex XIV, Paragraph 1(a) also delineates the flow “from exploratory investigations, such as first-in-man studies, feasibility and pilot studies, to confirmatory investigations, such as pivotal clinical investigations, and a PMCF…” Similarly, Annex II, Paragraphs 6.1(a-b) describe several types of preclinical and clinical data, including the results of “engineering, laboratory, simulated use and animal tests” and some of the “detailed information” required.

While the EU MDR does not specifically require that the SSCP be an output from the CER, the required content of the SSCP includes information that would be found in the CER (for example, “the intended purpose,” “harmonised standards and CS applied,” and “the summary of clinical evaluation”) per Article 32 of the regulation. The SSCP also requires content that would likely be found in the RMR (for example, identification of the device, description of the device and previous generation(s) or variants, and “information on any residual risks and any undesirable effects, warnings, and precautions”). Since the CER and RMR are interdependent, this information could flow from both sources at the same time the benefit-risk analysis is performed.

The CER is shown as being interdependent with the RMR. As discussed in the flowchart for the RMR, this is to demonstrate the dependency of the benefit-risk analysis on the benefits identified in the CER and the risks identified in the risk tools and summarized in the RMR. Since the determination of acceptability should be performed by someone with relevant medical/clinical experience, the decision is often documented in the CER but could be documented in the RMR instead. Either way, the benefit-risk analysis is dependent on data from both processes.

Figure 29.10 Product development process—clinical (EU MDR). [Flowchart: the intended use/purpose, customer requirements document, product requirements document, technical requirements specification, regulatory plan, quality data (external standards, expected sales volume), state-of-the-art determination, and risk management tools (e.g., NCEA) feed the clinical evaluation plan (CEP), whose contents include identification of the GSPR, the intended purpose, target groups (indications and contraindications), intended clinical benefits, methods and parameters to be used, the benefit-risk strategy, and the clinical development plan (CDP). Clinical activities flow from the scientific (peer-reviewed) literature search and equivalence study through exploratory investigations (first-in-man, feasibility, and pilot studies) to confirmatory/clinical investigations, supported by preclinical engineering, laboratory, simulated use, and animal tests covering biocompatibility; physical, chemical, and microbiological characterization; electrical safety and electromagnetic compatibility; software verification/validation; stability, including shelf life; and performance and safety. Outputs include the clinical evaluation report (CER), interdependent with the risk management report (RMR) and feeding RM post market; the post-market clinical follow-up plan (PMCF plan); and, for implantable and Class III devices only, the summary of safety and clinical performance (SSCP) submitted to Eudamed. An expert panel is consulted for Class IIb and III devices. Per the EU MDR (2017-745), “in the case of implantable devices and class III devices, clinical investigations shall be performed…” (with some exceptions).]

Product Development Process—Clinical (EU IVDR) (Figure 29.11)

The product development process (PDP)-clinical (EU IVDR) flowchart is similar to the prior EU MDR flowchart, but it is tailored for the IVDR requirements. See the comments for the PDP-clinical (EU MDR) flowchart for additional details.

The PDP-clinical flowchart presented here for the EU IVDR activities was built to align the activities with those delineated in the regulation. For example, Annex XIII, Paragraph 1.1 of the EU IVDR delineates the contents of the performance evaluation plan to include “specification of the intended use of the device,” and Annex IX, Paragraph 2.1 describes a requirement to keep the PEP up to date “taking into account the state of the art.” Article 56, Paragraph 3 of the EU IVDR requires the performance evaluation to “follow a defined and methodologically sound procedure for the demonstration of the following…(a) scientific validity; (b) analytical performance; (c) clinical performance.”

While the EU IVDR does not specifically require that the SSP be an output from the PER, the required content of the SSP includes information that would be found in the PER (for example, “the intended purpose,” “harmonised standards and CS applied,” and “the summary of the performance evaluation”) per Article 29 of the regulation. The SSP also requires content that would likely be found in the RMR (for example, identification of the device, description of the device and previous generation(s) or variants, and “information on any residual risks and any undesirable effects, warnings and precautions”). Since the PER and RMR are interdependent, this information could flow from both sources at the same time the benefit-risk analysis is performed.

The PER is shown as being interdependent with the RMR. As discussed in the flowchart for the RMR, this is to demonstrate the dependency of the benefit-risk analysis on the benefits identified in the PER and the risks identified in the risk tools and summarized in the RMR. Since the determination of acceptability should be performed by someone with relevant medical/clinical experience, the decision is often documented in the PER, but it could be documented in the RMR instead. Either way, the benefit-risk analysis is dependent on data from both processes.

Figure 29.11 Product development process—clinical (EU IVDR). [Flowchart: the intended use/purpose, customer requirements document, product requirements document, technical requirements specification, regulatory plan, quality data (external standards, expected sales volume), state-of-the-art determination, and risk management tools (e.g., NCEA) feed the performance evaluation plan (PEP). The evaluation demonstrates scientific validity (scientific (peer-reviewed) literature search, consensus expert opinions, proof of concept studies, and published experience, yielding the scientific validity report), analytical performance (analytical performance study, yielding the analytical performance report), and clinical performance (clinical performance study plan and report (CPSP/CPSR), submitted to Eudamed as needed, yielding the clinical performance report). Outputs include the performance evaluation report (PER), interdependent with the risk management report (RMR) and feeding RM post market, and, for Class C and D devices only, the summary of safety and performance (SSP) submitted to Eudamed.]

Risk Assessment (The Closed-Loop Detailed View) (Figure 29.12)

This is the detailed version of the “closed loop” shown in the Preface. It is important to note that there are effectively two loops for a proper closed-loop risk management process (shown clearly in Figure 0.1). The primary loop flows from the development and validation of the design and process through the monitoring and back to redesign when CAPA actions are deemed necessary. The smaller loop is within the continuous monitoring. Specifically, during routine monitoring/testing of the product, we need to evaluate the quality data to determine if they exceed the risk (severity or occurrence) thresholds. If they do, we move to a CAPA and potentially provide some additional information to the users and/or patients. However, if the data indicate we are within the risk thresholds, we loop back and continue to monitor/test the product.

This image contains a lot of details, and it’s difficult to clearly show all the information in the pages of a book. The image works much better as a banner hung on the wall of your company’s office.
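The smaller monitoring loop can be sketched as a simple threshold check; the function, the numeric values, and the severity scale are hypothetical, and real criteria come from your RMP:

```python
# Minimal sketch of the inner monitoring loop: compare observed quality
# data against the estimates planned in the risk file and decide whether
# to escalate. The function, rates, and severity scale are hypothetical;
# real thresholds come from your RMP.

def monitoring_decision(observed_rate: float, planned_rate: float,
                        observed_severity: int, planned_severity: int) -> str:
    """Escalate when the actual occurrence or the actual severity is
    worse than the estimate documented in the risk document(s)."""
    if observed_rate > planned_rate or observed_severity > planned_severity:
        return "open CAPA; review information for users/patients"
    return "continue routine monitoring"

# Occurrence and severity both within plan -> keep monitoring:
print(monitoring_decision(3e-6, 1e-5, 2, 2))
# Actual severity worse than the risk document estimate -> escalate:
print(monitoring_decision(3e-6, 1e-5, 4, 2))
```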

ISO 14971:2019

3 Terms and definitions 7.2 Implementation of Risk Control(s)

EN ISO 14971:2012

Controls

uFMEA & trace matrix

User requirements

= Risk

Regulations & standards 21CFR809 21CFR820 MDD/IVDD MDR/IVDR EN ISO IEC

2 Terms and definitions 6.3 Implementation of Risk Control(s) Annex H Guidance for IVD

Controls

Transfer and launch

pFMEA & trace matrix

Process specifications

Design Verification and freeze validation

dFMEA & trace matrix

Design specifications

1

Design Design and review development

Process design

Definition, Concept Design planning, research goals and feasibility Monitoring/ surveillance

*Trending: Per EU MDR 2017-745 Article 88 and EU IVDR 2017-746 Article 83

Quality data • Feedback – Production trending ° Process/product trends – Post-production ° Complaints ° MDRs/MDVs/field actions ° Trending* • Conformity to product requirements – NCRs – Yield/scrap/rework – CAPAs/SCARs • Audits • Design/process changes • Literature search (from PMPF/PER) • Review of similar products (from PER) • External standards • Labeling • Benefit-risk determination (for PSUR) • Sales volume (for PSUR)

FDA Guidance for Industry Process validation (2011)

EN ISO 14971:2012 ISO 14971:2019 9/10 Production and post-production information/activities

EN ISO 13485:2016 5.6 Management review 8.2 Monitoring and measurement 8.4 Analysis of data

MDR/IVDR Annex I.3 General requirements Annex III (Post-market surveillance)

21CFR820 30 Design controls 70 Production and process controls 75 Process valiation 250 Statistical techniques

Manufacturing and commercialization

Process qualification

21CFR820 90 Nonconforming product 100 Corrective and preventive action EN ISO 13485:2016 8.3 Control of nonconforming product

NCs

MDR/IVDR Article 2 Definition of ‘post-market surveillance’ Annex I.3 (e-f)

Per 21CFR820.3(q), “Nonconformity means the nonfulfillment of a specified requirement.”

Risk Assessment

Risk is the threshold(s) for initiating a CAPA, e.g.:
• Actual severity or occurrence worse than the risk document estimate
• Not assessed in risk document(s)

Risk severity and occurrence

Table 5.1 Example occurrence criteria.

Qualitative Term | Occurrence Numerical Rating | Decimal Harm/Reliability Rates | Qualitative Definition
Frequent | 5 | 10⁻³ ≤ x | The event is expected to occur at a high rate or multiple times during the life of the product.
Probable | 4 | 10⁻⁴ ≤ x < 10⁻³ | The event is expected to occur sometimes during the life of the product.
Occasional | 3 | 10⁻⁵ ≤ x < 10⁻⁴ | […] during the life of the product.
Remote | 2 | 10⁻⁶ ≤ x < 10⁻⁵ | The event is not expected to occur but could occur in rare situations.
Improbable | 1 | x < 10⁻⁶ | The event is extremely unlikely to occur during the life of the product.

Table 18.1 A comprehensive example of severity criteria.

Catastrophic (rating 5; confidence required for validation: 99%)
• Patient/Process Operator: Death/life-threatening.
• Business/Process: Process is inoperable or process is unable to produce product within specification.
• Environment: Extensive, irreparable environment, facility, or equipment damage (e.g., destruction of building or room).
• Compliance: Noncompliance with government regulations, capable of jeopardizing site, facility, or corporation regulatory approvals or leading to regulatory warnings, certification revocation, or other undesirable actions.

Serious (rating 4; confidence required for validation: 97.5%)
• Patient/Process Operator: Serious injury (permanent impairment of a body function or permanent damage to a body structure, or necessitates intervention to preclude permanent impairment/damage).
• Business/Process: Process is disrupted; adjustment or temporary process is needed to produce product, or tool/equipment repair required; components or materials all scrapped.
• Environment: Irreparable damage to the environment, facility, or equipment (e.g., destruction of equipment).
• Compliance: Noncompliance to a specific Quality System element from governing regulations or standards (e.g., process validation system not compliant with regulations).

Moderate (rating 3; confidence required for validation: 95%)
• Patient/Process Operator: Temporary or reversible illness, injury, or impairment. May require intervention; however, intervention is not to preclude permanent impairment/damage.
• Business/Process: Process performance degraded and subsequent processes or process steps required; components or materials sorted (not at work station) with a portion being scrapped.
• Environment: Environmental, facility, or equipment damage that is reversible only with professional remediation (e.g., damage to equipment requiring engineering repair).
• Compliance: Noncompliance with external submissions or filings (documentation) that are isolated to a specific product or product family (e.g., test method failure).

Minor (rating 2; confidence required for validation: 90%)
• Patient/Process Operator: Temporary, reversible, or non-serious illness, injury not involving intervention, or moderate discomfort/stress.
• Business/Process: Process performance degraded but is operable and able to produce product.
• Environment: Environmental, facility, or equipment damage that is reversible without remediation (e.g., damage to equipment repairable by operator or destruction of other supplies).
• Compliance: Noncompliance with internal requirements not associated with governing regulations and standards (e.g., process steps within a Work Instruction or minor internal audit finding).

Negligible/cosmetic (rating 1; confidence required for validation: 80%)
• Patient/Process Operator: Inconvenience, temporary discomfort, nuisance or cosmetic impact, or minor discomfort/stress.
• Business/Process: Negligible/slight disruption to process or process steps; components or materials reworked with little to no associated scrap at the station.
• Environment: Cosmetic damage to the environment, facility, or equipment.
• Compliance: Presents a risk of negatively impacting business performance factors (e.g., scrap rate, rejection rate).

Post-market surveillance is “for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;” and Manufacturers are to “evaluate…information from the production phase…[and]…the post-market surveillance system, on hazards and the frequency of occurrence thereof, on estimates of their associated risks…” and that “based on the evaluation…amend control measures.”

“evaluation of nonconformance shall include a determination of the need for an investigation”

EN ISO 13485:2016 8.4 Analysis of data

The path for verification/validation and to initiate a CAPA for a “closed-loop” risk management process.

Cont. Ver.

Chapter 29

CAPA

EN ISO 14971:2019 10.4 Actions


The occurrence threshold (e.g., p or σ²) IS the reliability (aka the lot tolerance percent defective, LTPD) that we should use to determine the validation sample size.

The confidence value (e.g., 1−α or Z) can be scaled according to the severity rating (i.e., the worse the harm, the more confident we want to be with our controls).

We validate the controls for the risks.
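Putting those two takeaways together, the zero-failure (c = 0) success-run relationship converts a confidence level and a reliability (LTPD) into an attribute sample size. A minimal sketch; the confidence/reliability pairing below is illustrative, not taken from the book's tables:

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Smallest n such that a lot running at the reliability limit (LTPD)
    passes a zero-failure (c = 0) plan with probability <= 1 - confidence."""
    # Solve reliability**n <= 1 - confidence for n.
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# 95% confidence that the process meets 99% reliability (1% occurrence threshold).
print(success_run_sample_size(0.95, 0.99))
```

In this framework, the severity rating would supply the confidence (per the example severity table) and the occurrence threshold from the risk document would supply the reliability.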

Welch-Satterthwaite:

\nu \approx \dfrac{\left(\sum_{i=1}^{n} k_i s_i^2\right)^{2}}{\sum_{i=1}^{n} \dfrac{\left(k_i s_i^2\right)^{2}}{\nu_i}}

The Welch-Satterthwaite equation can be used to determine how to distribute the sample size across multiple lots, samples/lot, and tests/sample (if the testing is non-destructive); however, you will need to know the sources of variation within your product/process first (e.g., from ANOVA, DOE, etc.).
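As a sketch of the arithmetic, the equation is just the squared pooled variance divided by the dof-weighted component variances. The sensitivity coefficients, variance components, and degrees of freedom below are invented for illustration; in practice they would come from your ANOVA/DOE:

```python
def welch_satterthwaite_dof(k, s2, dof):
    """Effective degrees of freedom for a combined variance estimate
    sum(k_i * s_i^2), per the Welch-Satterthwaite approximation."""
    combined = sum(ki * s2i for ki, s2i in zip(k, s2))
    denom = sum((ki * s2i) ** 2 / vi for ki, s2i, vi in zip(k, s2, dof))
    return combined ** 2 / denom

# Illustrative variance components: lot-to-lot, sample-to-sample, test-to-test.
k = [1.0, 1.0, 1.0]        # sensitivity coefficients (assumed)
s2 = [0.04, 0.01, 0.0025]  # variance estimates from a variance study (assumed)
dof = [2, 9, 27]           # degrees of freedom behind each estimate (assumed)
print(welch_satterthwaite_dof(k, s2, dof))
```

A low effective dof signals that the dominant variance component (here, lot-to-lot) is under-sampled, which is exactly the argument for distributing more of the sample size across lots.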

Welch-Satterthwaite equation from: NIST

Example variable equation from: “CQE Primer” by the Quality Council of Indiana (pp. X1-10)

Example attribute equation from: The Medical Device Validation Handbook

Variable:

n = \dfrac{Z_{\alpha/2}^{2}\,\sigma^{2}}{E^{2}}

Attribute:

P_a = \sum_{i=0}^{c} \binom{n}{i}\, p^{i}\,(1-p)^{n-i}, \quad p = \text{theoretical defect rate (fraction defective) of the lot}

[OC curve: probability of accepting a lot plotted against the theoretical defect rate of the lot (0.00 to 0.05), with the AQL and LTPD marked.]
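The OC curve itself is just the binomial acceptance probability evaluated across defect rates. A sketch; the plan parameters are illustrative:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Binomial OC curve point: probability that a lot with fraction
    defective p passes a plan allowing at most c defects in n samples."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(c + 1))

# A c = 0, n = 59 plan: acceptance probability at 5% defective is ~0.05,
# i.e., roughly 95% confidence of rejecting a lot at the 95%-reliability limit.
print(prob_accept(59, 0, 0.05))
```

Evaluating `prob_accept` over a range of p values traces the full curve, from which the AQL (high acceptance probability) and LTPD (low acceptance probability) points can be read off.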

If you haven’t defined an OC Curve for your risk, your initial validation can be used to calculate the Ppk of your process. This will give you an estimate of where your process is actually running and what your overall defect level is likely to be. Then by using the confidence (%) based on the severity of the risk and the “n” from an AQL table (based on the nominal lot/batch size and the general inspection level), you can calculate the AQL (p1) level that you should use.

Example AQL equation from: The Medical Device Validation Handbook
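That back-calculation can be sketched as follows, assuming the zero-acceptance form p1 = 1 − (1 − confidence)^(1/n) suggested by the formula fragments in this chapter; the n and confidence values are illustrative:

```python
def aql_p1(confidence: float, n: int) -> float:
    """Defect level p1 at which a zero-acceptance (c = 0) plan of size n
    accepts lots with probability 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# Illustrative: 95% confidence with n = 50 pulled from an AQL table.
print(aql_p1(0.95, 50))
```

By construction, a lot running exactly at p1 would pass the n-sample, zero-defect check only 5% of the time, which is what links the AQL level back to the confidence set by the severity of the risk.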

Type I (producer's risk) α = 0.05; Type II (consumer's risk) β = 0.10

P_a(\text{AQL}) = \sum_{x=0}^{c} \dfrac{n!}{x!\,(n-x)!}\,\text{AQL}^{x}\,(1-\text{AQL})^{n-x}

p_1 = 1 - (1 - \text{Confidence})^{1/n}


Continuous monitoring

Note: The regulations, standards and guidance documents referenced are not intended to be a comprehensive list, but are included as examples to support the activities and the flow of information.

If a single table were to be used to identify the harms associated with "product, process, and the quality system" (21CFR820.100(a)(2)) nonconformances, then the harm to patient/user, process/property, and the environment (ISO 14971:2019) could be assessed consistently, along with compliance risks. Product and process risks would then be evaluated against the applicable risk document(s) (e.g., for complaint and NC trending) and QS risks would be evaluated against a standard, aligned set of criteria.

TAKE ACTION: When the action involves a design or process change, the process starts again, creating the "closed-loop" for the risk management process.

Figure 29.12 Product development process—clinical (EU IVDR).

SDS

Marketing

ISO 14971:2019 7.2 Implementation of Risk Control(s) c.f. A.2.7.2

MDR / IVDR Annex II Paragraph 4 Annex II Paragraph 6

21CFR820 30 (f) Design Verification 30 (g) Design Validation 75 Process Validation 250 Statistical Techniques

Initial Validation

Attribute:

\text{Confidence} \approx 1 - \sum_{i=0}^{c} \binom{n}{i}\, p^{i}\,(1-p)^{n-i}, \qquad p = \text{Reliability (LTPD)}


Labeling

Info for User/Patient

MDR/IVDR: Annex I Paragraph 4; Annex I Paragraph 23.1 (g) / 20.1 (g). EN ISO 14971:2012: 6.5 Risk/Benefit Analysis; Annex J Information for Safety and Residual Risk. ISO 14971:2019: 7.1 Risk Control Option Analysis; 8 Evaluation of Overall Residual Risk.

IFU


Risk Controls: Supplier, Design, Process, Service, CSV, Inspections, Training


Summary of Key Takeaways ●  ●  ●

Chapter 1

Some processes are entirely within the scope of risk management, and others should link with the risk management process for nearly all of their activities.

Chapter 2

The examples in the standards are not geared toward manufacturers.

Chapter 3

Manufacturers typically have NC data, not P1 or P2 data.



Risk documents are the input for validation and the threshold for NCs and complaints.

Chapter 4

Measure what you can control, control what you can measure.

Chapter 5

Risk tables need to be realistic for your product/product family and documented in the RMP, not an SOP.

Chapter 6

Having medical/clinical representatives predefine ratings for the harms increases accuracy, efficiency, and consistency.

Chapter 7

Don’t pull risks out of thin air; product development is a methodical process that involves risk assessment and control.


Chapter 8

Sum up how often a harm (risk) can happen from all the various causes (P-Total).



The NCi and NCr totals define how often an NC can occur and how often a complaint can occur.

Chapter 9

Don’t waste your time defining CTQs. But if you do, tie them to the severity.

Chapter 10

Initial validation demonstrates it’s safe for the consumer using “confidence” and “reliability.”

Chapter 11

The “confidence” for your sample size calculation can be tied directly to the severity of the associated risk.

Chapter 12

The “reliability” for your sample size calculation is the occurrence rate you set in your risk document.

Chapter 13

Test more where there’s more variability, and test less where there’s less variability.

Chapter 14

Your routine monitoring should also be tied to your risk documents.



Your routine monitoring should demonstrate that your processes are capable and stable.

Chapter 15

There are two main types of test methods for most medical devices: attribute and variable.



The characteristics for your TMV depend on the type of test method.



The characteristics may be tested in other processes, but evaluation of all the data should be documented in the TMV report.



Precision testing should link to the occurrence thresholds and account for the other sources of variation.



Accuracy (bias) and precision are linked.




Chapter 16

NC (production phase) and complaint (post market) data are required to be evaluated against the severity and occurrence of their risks (as set in the risk documents), not some arbitrary threshold set by management in an SOP.



The “need for an investigation” (that is, CAPA) should be based on risk.

Chapter 17

The thresholds you set in your risk documents are what determine when you take action.



Yes, it’s really that simple.

Chapter 18

A single, aligned set of criteria will help you address the four pillars of risk.

Chapter 19

There are two types of information for users/patients.



The wording for your labeling, training, and marketing materials should come from your risk documents.

Chapter 20

Some of the sections in your labeling are instructional and some are just informative.

Chapter 21

Information for safety and training must be considered for all risks and should be arranged according to severity but may be adjusted to the audience.

Chapter 22

All risks must be disclosed.

Checklist of Questions ●  ●  ●

The following list of questions and references to some of the relevant chapters in this book may help you determine if your risk management process is sufficient for your needs. This list is not intended to be comprehensive, and it is recommended that any checklist be tailored for your specific needs and organization.

✔ Does our quality system include all the processes described in Figure 1.1? (Note: Your company may use different names and/or combine some activities, but it should have written procedures/processes for all the described activities.) (Chapter 1)
✔ Do the quality system elements clearly instruct readers when to interact with other elements to ensure proper procedural relationships? (Chapters 1, 29)
✔ Does our risk management process define risk according to the definitions in the regulations and standards? (Chapter 2)
✔ Does our risk management process include extraneous criteria, such as detectability? (Chapters 2, 3)
✔ Do our risk documents include intermediate probability ratings for events that we don't measure (for example, hazard, hazardous situation, etc.)? (Chapter 3)
✔ Do our quality metrics (for example, nonconformances, complaints) link to the rates in the risk documents? (Chapters 3, 4, 5, 8, 10, 14, 16, 17)

✔ Are our validations, including continuous process monitoring, based statistically on the risk (and not just using a predefined sample size/sampling plan)? (Chapters 3, 4, 7, 9, 10, 11, 12, 13, 14)
✔ Do our risk documents use terminology that aligns with the data used in other quality system elements (for example, "nonconformance")? (Chapters 3, 4)
✔ Do our risk documents use occurrence rates that align with the rates used in other quality system elements (for example, complaint rates, manufacturing defect/yield rates, nonconformance rates, etc.)? Do they use the same units of measure? (Chapter 5)
✔ Do our risk documents include extraneous events (for example, rows of events that have never occurred)? (Chapter 8)
✔ Are severity and occurrence tables defined for each product/product family or are they predefined in a procedure? (Chapters 5, 6, 8)
✔ Do the occurrence tables provide adequate resolution based on the sales and use of the products? (Chapter 5)
✔ Do our occurrence tables include rates that are not realistic? (Chapter 5)
✔ Do our risk documents include evaluation of the total rate of harms from all the various causes (for example, P-Total)? (Chapter 8)
✔ Do our risk documents include sums of the initial and residual nonconformance rates (for example, NCi-Total, NCr-Total)? (Chapter 8)
✔ Do our severity tables include extraneous harms (that have never happened for a given product)? (Chapters 6, 8)
✔ Do we define severities according to the "four pillars of risk" to align with the regulations and applicable standards? (Chapter 18)



● 

Checklist of Questions 



213

✔ Are the ratings assigned to severities defined by the engineers or are they defined by our medical/clinical teams?. . . Chapter 6 ✔ Do we have a master harms list (or similar document) to ensure consistency of the terminology and ratings used for harms across product lines? . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 6 ✔ Do our risk controls identify and evaluate those we control as well as those we don’t control?. . . . . . . . . . . . . . . . . . . Chapter 4 ✔ Does our product development use a methodical process that involves risk assessment and control?. . . . . . . . . . . . . . Chapter 7 ✔ Does our product development program develop user requirements (and their associated risks) prior to developing the design?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 7 ✔ Are our products designed based on mitigating the use risks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 7 ✔ Are our manufacturing processes designed based on mitigating the design risks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 7 ✔ Do we have traceability of all process risks and specifications through the design risks and specifications all the way back to the use risks and requirements?. . . . . . . . . . . . . . . . . . Chapter 7 ✔ Do our risk documents link and include traceability to each other?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 24 ✔ Do we waste time defining CTQ attributes or risks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 9 ✔ If we define CTQs, do we do so based on the associated severity?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 9 ✔ Do we create OC curves to define the AQL and LTPD for each product and risk? . . . . . . . . . . . . . . . . . . . . . . . Chapter 10 ✔ Are our initial validation sample sizes based on the LTPD?. . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 10 ✔ Are our continuous process monitoring sampling sizes based on the AQL? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 10

✔ Are our AQL and LTPD sample sizes calculated using confidence and reliability values that are statistically linked to the severity and occurrence rates for the associated risks? (Chapters 10, 11, 12)
✔ Are the effective sample sizes calculated by the AQL and LTPD distributed across the sources of variation using the Welch-Satterthwaite equation (or similar statistical justification)? (Chapter 13)
✔ Does our routine monitoring demonstrate our processes are capable and stable? (Chapter 14)
✔ Does our TMV procedure require validation of all the characteristics identified in Table 15.1? (Note: Your company may use different names for some characteristics.) (Chapter 15)
✔ Does our TMV procedure define different requirements for attribute, variable, and other types of test methods? (Chapter 15)
✔ Does our TMV procedure define what other quality system elements may house data necessary for demonstrating method validation? (Chapter 15)
✔ Are variance studies (for example, ANOVA, DOE, etc.) used to determine the amount of the tolerance range available for the method precision? (Chapter 15)
✔ Does our method precision for each of our test methods link to the applicable occurrence thresholds? (Chapter 15)
✔ Are the requirements for accuracy (bias) and precision linked for each of our test methods? (Chapter 15)
✔ Do the elements of our quality system require that we evaluate information from the production phase and the post-market surveillance system, using the estimates of their associated risks from the risk documents? (Chapters 16, 17)



✔ Do our metrics (for example, nonconformance, complaint, yield, etc.) use the relevant severity and/or occurrence thresholds set in our risk documents to determine when to take action? (Chapter 17)
✔ Does our risk management process and other associated quality management elements require that the wording used for all labeling, training, marketing materials, advertising, and safety data sheets align with (and come directly from) the risks identified in the risk management process? (Chapters 19, 20, 21, 22)
✔ Do our processes differentiate information for safety and training from the disclosure of residual risks? (Chapters 19, 21, 22)
✔ On our labeling (for example, IFU/DFU), does the "contraindications" section immediately follow the "indications for use" section? If no contraindications are known, does this section of the labeling state "None known"? (Chapters 20, 21)
✔ Are the warnings, precautions, contraindications, and adverse reactions/events sections of our labeling arranged according to their risk ("clinical significance as determined by their severity and frequency")? (Chapters 20, 21, 22)
✔ Do our labeling procedures reference the applicable regulatory requirements and guidances (for example, FDA 89-4203, G91-1) to delineate "the FORMAT and ORDER" of the information for labeling and provide guidance on how to determine to which section the risk control should be placed? (Chapter 21)
✔ Do we disclose all residual risks for our products? Or do we only disclose some risks (for example, only the "significant" ones)? (Chapters 22, 23)
✔ Does our risk management process require that we reduce risks AFAP or just reduce them ALARP, as low as reasonably achievable, or some other euphemism? (Chapters 19, 23)

✔ Do we use (or have we ever used) economic considerations/concerns to justify not implementing a risk control? (Chapter 23)
✔ Does our risk management process require that we evaluate all risk controls to ensure they don't adversely affect the benefit-risk ratio? (Chapter 23)
✔ Does our risk management process require that we perform a benefit-risk analysis for all risks, not just when the "residual risk is not judged acceptable"? (Chapter 23)
✔ Does our risk management process allow us to stop adding or improving risk controls once the overall risk is deemed "acceptable" or are we required to reduce risks AFAP regardless of the acceptability? (Chapter 23)
✔ Does our risk management process require us to determine the acceptability of risks based on the risk evaluation or are we required to determine the acceptability of risks based on the benefit-risk analysis? (Chapter 23)
✔ Do we have a process that requires us to determine and evaluate the benefits and risks associated with the "state of the art" for each product and delineates how to perform the evaluations? (Chapter 23)
✔ Does our risk management process require that we compare the "numerical quantity" of our product's risk (severity and occurrence) against the "numerical quantity" of the state of the art's risk (severity and occurrence)? (Chapter 23)
✔ Does our risk management process define abnormal use, normal use, correct use, use error, intended use, reasonably foreseeable misuse, and other terms according to the applicable regulations and standards? Does our risk management process provide instruction on how to determine when each term is applicable and any actions required? (Chapter 25)

✔ Does our PMS process require the analysis of all the applicable data described in Figure 26.1 and delineate where to find and how to analyze all the data? (Chapter 26)
✔ Do our NCR and CAPA processes require users to answer all the interrogatives (5W2Hs: who, what, when, where, why, how, and how much) in the problem descriptions? (Chapter 26)
✔ Do our NCR and CAPA processes clearly instruct users that they are not to include discussion of "why it happened" (aka "the root cause") in the problem description? (Chapter 26)
✔ Does our PMS process delineate various types of clinical/performance evaluation options that can be used to gather clinical/performance data? (Chapter 26)
✔ Do our RMPs describe the process for switching from active monitoring to passive monitoring for each product? (Chapter 26)
✔ Do we perform and document investigations in our NCR process/system, or do we only determine the need for an investigation in our NCR process/system, and then perform and document that investigation in the CAPA process/system? (Chapter 27)
✔ Does our CAPA process delineate that all actions resulting from a nonconformance are to be identified as "corrective actions"? (Chapter 27)
✔ Do our investigations include a review of the risk documents to help determine potential root causes for events? (Chapter 28)
✔ Do our investigations blame the people involved, or do we require investigators to determine why the people acted as they did? (Chapter 28)
✔ Do we use retraining as a corrective action without changing the training material? (Chapter 28)
✔ Do we have flowcharts to graphically depict the procedural relationships within our QMS? (Chapter 29)

Bibliography ●  ●  ●

21 CFR 801 Labeling.
21 CFR 803 Medical Device Reporting.
21 CFR 809 In Vitro Diagnostic Products for Human Use.
21 CFR 820 Quality System Regulation.
American Society for Quality (ASQ). Selecting Statistically Valid Sampling Plans, http://asq.org/qic/display-item/index.html?item=10477.
ASQ/ANSI Z1.4-2003 (R2018): Sampling Procedures and Tables for Inspection by Attributes.
ASQ/ANSI Z1.9-2003 (R2018): Sampling Procedures and Tables for Inspection by Variables for Percent Nonconforming.
CQE Primer, Quality Council of Indiana.
Device Labeling Guidance #G91-1 (Blue Book Memo), March 1991.
EN 62366-1:2015 Medical devices. Application of usability engineering to medical devices.
EN ISO 13485:2016 Medical devices. Quality management systems. Requirements for regulatory purposes.
EN ISO 14971:2012 Medical devices—Application of risk management to medical devices (ISO 14971:2007, Corrected version 2007-10-01).


EN ISO 14971:2019 Medical devices. Application of risk management to medical devices.
EU MDR 2017/745 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC.
EU IVDR 2017/746 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU.
European Commission. A Guideline on Summary of Product Characteristics (SmPC), September 2009, https://ec.europa.eu/health/sites/health/files/files/eudralex/vol-2/c/smpc_guideline_rev2_en.pdf.
European Medicines Agency. ICH Q2 (R1) Validation of analytical procedures: text and methodology, https://www.ema.europa.eu/en/ich-q2-r1-validation-analytical-procedures-text-methodology.
European Medicines Agency. ICH Q9 Quality risk management, https://www.ema.europa.eu/en/ich-q9-quality-risk-management#current-effective-version-section.
European Medicines Agency. ICH Q10 Pharmaceutical quality system, https://www.ema.europa.eu/en/ich-q10-pharmaceutical-quality-system.
The Global Harmonization Task Force. Quality Management Systems – Process Validation Guidance, 2004, http://www.imdrf.org/docs/ghtf/final/sg3/technical-docs/ghtf-sg3-n99-10-2004-qms-process-guidance-04010.pdf.
IEC 60601-1:2005 Medical electrical equipment—Part 1: General requirements for basic safety and essential performance.
Investopedia, Standard Deviation vs. Variance: What's the Difference? https://www.investopedia.com/ask/answers/021215/what-difference-between-standard-deviation-and-variance.asp.




ISO 2859-1:1999 Sampling procedures for inspection by attributes—Part 1: Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot inspection.
ISO 3951-1:2013 Sampling procedures for inspection by variables. Specification for single sampling plans indexed by acceptance quality limit (AQL) for lot-by-lot inspection for a single quality characteristic and a single AQL.
ISO 14971:2007 Medical devices—Application of risk management to medical devices.
ISO/IEC Guide 63:2019 Guide to the development and inclusion of aspects of safety in International Standards for medical devices.
ISO/FDIS 14971:2019(E) Medical devices—Application of risk management to medical devices.
ISO/TR 24971:2020 Medical devices—Guidance on the application of ISO 14971.
Medical Device Validation Handbook, Regulatory Affairs Professionals Society (RAPS), 2018.
Minitab, "Understanding Hypothesis Tests: Confidence Intervals and Confidence Levels," April 2, 2015, https://blog.minitab.com/en/adventures-in-statistics-2/understanding-hypothesis-tests-confidence-intervals-and-confidence-levels.
National Library of Medicine, Standard Deviation, accessed December 3, 2021, https://www.nlm.nih.gov/nichsr/stats_tutorial/section2/mod8_sd.html.
NIST, Welch Satterthwaite, July 20, 2017, https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/welchsat.htm.
Squeglia, N. L. Zero Acceptance Number Sampling Plans, 4th edition.
U.K. Health and Safety Executive, Leadership and worker involvement toolkit – Understanding human failure, https://www.hse.gov.uk/construction/lwit/assets/downloads/human-failure.pdf.




United Nations Office on Drugs and Crime (UNODC). Guidance for the Validation of Analytical Methodology and Calibration of Equipment used for Testing of Illicit Drugs in Seized Materials and Biological Specimens, 2009.
U.S. Department of Commerce. NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, January 2020.
U.S. Department of Commerce. NIST/SEMATECH e-Handbook of Statistical Methods, What kinds of Lot Acceptance Sampling Plans (LASPs) are there? https://www.itl.nist.gov/div898/handbook/pmc/section2/pmc22.htm, January 2020.
U.S. Department of Health and Human Services, Food and Drug Administration. Guidance for Industry—Process Validation: General Principles and Practices, January 2011.
U.S. Department of Health and Human Services, Food and Drug Administration. Labeling—Regulatory Requirements for Medical Devices (FDA 89-4203), August 1989.
U.S. Department of Health and Human Services, Food and Drug Administration. MAUDE—Manufacturer and User Facility Device Experience, https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/search.cfm.
U.S. Department of Health and Human Services, Food and Drug Administration. Medical Devices; Current Good Manufacturing Practice (CGMP) Final Rule; Quality System Regulation, 1996, https://www.fda.gov/medical-devices/quality-system-qs-regulationmedical-device-good-manufacturing-practices/medical-devices-current-good-manufacturing-practice-cgmp-final-rule-quality-system-regulation.
U.S. Department of Health and Human Services, Food and Drug Administration. Q2 (R1) Validation of Analytical Procedures: Text and Methodology, March 1995, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/q2-r1-validation-analytical-procedures-text-and-methodology.




U.S. Department of Health and Human Services, Food and Drug Administration. Standards and Conformity Assessment Program, October 28, 2019, https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/standards-and-conformity-assessment-program.
U.S. Department of Health and Human Services, Food and Drug Administration. Warning Letter: Abbott (St Jude Medical Inc.), April 12, 2017, https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/warning-letters/abbott-st-jude-medical-inc-519686-04122017.
U.S. Department of Health and Human Services, Food and Drug Administration. Warning Letter: Cardiac Designs Inc., August 7, 2015, https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/warning-letters/cardiac-designs-inc-08072015.
U.S. Department of Health and Human Services, Food and Drug Administration. Warning Letter: ONBO Electronics Company Ltd., August 19, 2008, http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/2008/ucm1048173.htm.
U.S. Department of Health and Human Services, Food and Drug Administration, Office of Regulatory Affairs (ORA). Field Science—Laboratory Manual, https://www.fda.gov/science-research/field-science-and-laboratories/field-science-laboratory-manual.

Index

Note: Page numbers followed by f indicate figures; page numbers followed by t indicate tables.

A
Abbott (St. Jude Medical) case study, 171
abnormal use, 147–152
acceptable risk, 41, 138, 140–141
acceptance quality limit (AQL) sampling, 53–56
accompanying documentation, defined, 120
accuracy (bias), 93–94, 98
accuracy threshold, 96–99
adverse reactions/events, 123–126, 130
advertising, as labeling, 119–121
A Guideline on Summary of Product Characteristics (EC), 123–124
American Society for Quality (ASQ), 54
analytical procedures, most common, 80
Annex C of ISO/TR 24971, 129
Annex H of EN ISO 14971 (2012), 13
AQL sampling, 53–56, 73–76
attribute, defined, 81
attribute sample size, 65
audits, PMS and, 160–161

B
bad guidance
  acceptability and, 137–140
  benefit–risk evaluation, 138–140
  EU MDR/IVDR, 133–135
  harmonized standards, 133–135
  overview, 133
  practicability, 135–137
  significant residual risks, 140–141
  stakeholders and, 141–142
  unacceptable risks, 140–141
benefit–risk determination, 138–140, 164–165
benefit–risk ratio, 127, 135, 137, 165
bias, defined, 94
blame and finger-pointing, 175–178

C
CAPA procedures, case studies, 171, 174
CAPAs, NCRs and, 159
Cardiac Designs case study, 174
case studies, 171, 174
checklist of questions, 211–218
clinical/performance evaluation options, 166–168, 167t


Code of Federal Regulations (CFR), 113
complaint data, 158, 165
complaint handling processes, 166
compliance risk, 65, 74, 111–113, 136–138, 141
confidence, risk management plans and, 54, 55, 57–59, 58t
confidence intervals, 57, 59, 86, 89, 93
conformity to product requirements, 158–159
confounded variance, 65, 88f, 90f, 92f, 95f, 96f, 97f, 98f
continued process variation, 55, 73–77
contraindications, 123–126, 127, 163
correct use, 35–36, 147–152
critical to quality (CTQ), 49–51, 127

D
death, as a compliance risk, 111–112
decision tree, risk-based, 107f
default severity criteria, 113
defect rates, 20, 55f, 63–64, 73–74
definitions, overlapping, 147–152, 148f, 148t, 151t
design inputs, process controls and, 33–39
design nonconformance, 145
design/process changes, PMS and, 161
design specifications, use risks and, 37–39, 38f, 39f
detectability, risk and, 8, 14
deviation from instructions, 149

E
effective sample size spreadsheet example, 69t
EN 1441 standard, 11
EN 14971 standard, 11–13
EN 62366-1, 35
Engineering Statistics Handbook (NIST), 80
European Commission (EC) labeling guidance, 123–124
European Union requirements, 168



evaluate, defined, 139
evaluate–investigate distinction, 170–171
Excel spreadsheets, use of, 42
external standards, PMS and, 162–163

F
failure mode and effects analysis (FMEA), 11–12, 17–18, 18f, 34–35
false negatives, 8
FDA labeling guidance, 123–126
feedback process, PMS and, 156–158
finger-pointing, blame and, 175–178
5 Whys tool, 175–178
flowcharts
  legend for, 179–181, 181f
  overview, 179
  product development process, 198–199, 200f, 201, 202f
  risk assessment (closed-loop), 203, 204f, 205f
  risk management plan, 185, 186f, 187
  risk management post market, 191, 192f, 193–195, 196f
  risk management post market summary, 197, 198f
  risk management process, 182, 183f, 184f, 185
  risk management report (RMR), 189, 190f
  risk tools, 187, 188f
FMEA (NCEA), atypical, 18f
FMEA (NCEA), linking, 143–145, 144f
foreseeable sequence of events, 11, 11f, 12f

G
gage repeatability and reproducibility (GR&R), 79
Global Harmonization Task Force, 75
Guidance for Industry—Process Validation (FDA), 73, 75




H
harms
  defined, 7–9
  evaluation of, 29–31, 30f
  input and summary table, 41–45, 42f
  probability of occurrence, 12–13
  summary table, 129–130
hazardous situations, 11–12, 14, 19–20, 121, 133, 150
human failure types, 177f

I
ICH Q2(R1), 81–83, 86
ICH Q2A and Q2B, 80–83
ICH Q9, 14
ICH Q10, 75
illegal investigations, in NCR, 169–174
information
  for safety and training, 127–129
  for users and patients, 119–121, 163–164
initial probability sequence of events, 19–20, 19f
initial rates, 44
initial validation testing, 54–56, 73–76
intended use, 104, 140, 147–152
International Conference on Harmonization (ICH), 14
International Council for Harmonization (ICH), 80
invalid assumption, 65–71
investigate–evaluate distinction, 170–171
investigation, need for, 107–110
ISO 14971, 129–130
ISO 14971:2019
  acceptability, 137–138
  acceptability and, 138–140
  benefit–risk and, 138–140
  EU MDR/IVDR, 133–135
  harmonized standards, 133–135
  overview, 133
  practicability, 135–137
  risk evaluation, 138–140

J
Joe's mustache curve, 88f, 90f, 92f, 96f

K
key takeaways, summary of, 207–209
known stakeholder concerns, 141–142

L
labeling, PMS and, 163–164
labeling–advertising distinction, 119
language, of risk documents, 22, 35
lawsuits, 35
legacy products, risk and, 43
literature search, PMS and, 161
lot tolerance percent defective (LTPD) sampling, 53–56, 61–63, 65–71, 109
lot-to-lot variance, 55, 65–68, 70, 88–90, 93, 96–98

M
master harms list, 29–31, 30f, 33–34, 129–130, 141, 143
measurement variance, 66–68, 98f
The Medical Device Validation Handbook (RAPS), 65
Methods, Method Verification and Validation (FDA), 81–82

N
National Institute of Standards and Technology (NIST), 54
NCEA—FMEA terminology, 22
NC rates, 14–15, 15f, 19–20, 22
NCR–CAPA differences, 169–171
NCR processes, "pseudo," 159
nonconformance effect analyses (NCEAs), 22, 143–145
nonconformance input and summary table, 44f
nonconformance report (NCR), illegal investigations in, 169–174
nonconforming product, 104, 158–159


nonconformities, quality system, 111–115
nonconformity, FDA definition of, 13
normal use, 147–152

O
occurrence rate comparisons, 109
occurrence criteria, 25–27, 25t
Office of Regulatory Affairs (ORA), 81–82
ONBO Electronics Company Ltd., 53–56
operating characteristic (OC) curve, 54, 55f
overlapping definitions, 147–152, 148f, 148t, 151t

P
people/process/environment criteria, 113
pet peeves
  detectability and risk, 8
  zero defects, 108
pharmaceutical industry guidelines, 14
"pile of bad stuff," 170–173
PMCF/PMPF studies, 161
possible consequences, 109
post-market complaint reporting, 34
post-market surveillance (PMS)
  audits, 160–161
  benefit–risk determination, 164–165
  clinical/performance evaluation options, 166–168, 167t
  conformity to product requirements, 158–159
  design/process changes, 161
  external standards, 162–163
  feedback process, 156–158
  labeling, 163–164
  literature search, 161
  overview, 153–156, 153f
  process, 110
  quality data for, 153–168
  review of similar products, 161–162
  sales volume, 165–166




post-production activities, 157
precautions, 123–126
precision, defined, 83, 86
precision threshold, accuracy and, 95–96
pre-validation study data, 62–63, 66
probability of occurrence, 25–27
process controls, design inputs and, 33–39
process nonconformances, 113
process stability, 75–77
Process Validation: General Principles and Practices (FDA), 73, 75
process variation, continued, 73–77
production data, 156
product nonconformances, 113
"pseudo" NCR processes, 159
P values, 12–14, 12f, 19–20

Q
qualitative data, 81
Quality Management Systems—Process Validation Guidance, 75
quality system nonconformities, 111–115
questions, checklist of, 211–218

R
reasonably foreseeable misuse, 147–152
Regulatory Affairs Professionals Society Medical Device Validation Handbook, 65
rejectable quality limit (RQL) sampling, 53–56
reliability threshold rates, 26
reliability values, 61–64
reportable events, 158
requirement, for investigation, 103–105
requirements management software, 36
residual probability sequence of events, 21f
residual rates, 45
residual risk evaluation, 41, 108, 125, 129–130, 141
resistance to change, 14




rework, nonconforming product, 159
risk
  definitions of, 7–9, 26, 107
  FMEA and, 35
  four pillars of, 111, 112f
  harm and, 34
  occurrence and severity, 54
  regulatory parts of, 8
risk acceptability criteria, 35–36
risk analysis and evaluation, 41–45, 42f, 44f
risk assessment, 17–18
risk-based decision making, 107–110
risk control measures, 121
risk controls, priority order of, 20
risk documents, 43–44, 103–105, 113, 157
  language, 22
  linking, 143–145
risk evaluation, defined, 139–140
risk evaluation criteria, 26–27, 63f, 138–140
"risk expert" sequence of events, 12–14, 12f
risk management plans, confidence and, 57–59
risk management process, closed-loop, xviif
risk management scope, 3–5, 4f
risk thresholds, 22–23
risk tools, 175–178
root cause analysis (RCA), 175–178
routine batch release testing, 54

S
safety and training, information for, 127–129
safety information, 121
sales volume, PMS and, 165–166
sample distribution, 65–71
sample size, reliability and, 61–64
sampling plans, statistically valid, 54
scrap, nonconforming product, 159
sequence of events
  control and monitoring of, 17–23
  initial probability, 19–20, 19f
  for manufacturers, 14–15, 15f
  residual probability, 21f


severity criteria, 29–31, 49–50, 51t, 58t, 108–109, 114t
significant residual risks, 140–141
similar products, review of, 161–162
sources of variance, testing limits, 71t
specificity, attribute test methods and, 83
Squeglia, Nicolas, Zero Acceptance Sampling Plans, 65
stakeholder concerns, 141–142
standard deviation, precision and, 86
state of statistical control, 75, 76
supplier corrective action request (SCAR), 110

T
terminology, of risk documents, 22, 29, 44
terminology, TMV characteristics and, 82–83
test method validation (TMV), 79–100
  accuracy (bias), 93–94, 98
  accuracy and precision threshold, 95–96
  amount of tolerance, 86–93
  characteristics of, 82–86, 82t, 85t
  detectability and, 8
  gage repeatability and reproducibility (GR&R), 79
  precision and accuracy threshold, 96–99
  two types of, 80–81
threshold rates, 26–27
thresholds, arbitrary use of, 103
tolerable variance, 62–64
tolerance, available, 94
traceability matrix, 37–39, 143
training information, 121
trans-vaginal mesh products, 129–130
21 CFR 820
  nonconformity and, 13
  post-market surveillance and, 155, 156–157
  quality system nonconformities, 111–115




U
UAI (use as is), 104
uFMEA and trace matrix, 34f, 36
UK Health and Safety Executive toolkit, 176, 177f
unacceptable risk, 41, 138, 140–141
United Nations Office on Drugs and Crime (UNODC), 80, 84
use-as-is, nonconforming product, 159
use error, 147–152
use risk controls, 37f
user requirements, 33–34

V
value, defined, 139
variable data, defined, 81
vulnerable populations, PMS and, 165–166

W
warnings, 123–126, 127
Welch-Satterthwaite equation, 66–68, 74
within-lot (process) variance, 66–68

Z
Zero Acceptance Sampling Plans (Squeglia), 65

About the Author

Growing up in International Falls, Minnesota ("the icebox of the nation"), Joe started his studies at St. Olaf College but transferred to Bemidji State University, where he earned his bachelor of science degree in chemistry (with honors). He later earned a master of business administration (MBA) degree through Jones International University. He has been a member of the American Society for Quality (ASQ) since 2008.

Starting out as a bench chemist performing environmental and nutritional labeling testing, Joe rewrote almost every analytical method he touched and developed a few new ones. It was during this time that he learned the importance of accuracy, linearity, range, and (especially) system suitability (for example, reagent blanks, matrix blanks, duplicate samples, blank-spikes, matrix spikes, check samples, etc.) for a test method. He also performed minimum detectable limit (MDL) studies for a battery of herbicides and pesticides. This work also introduced him to the concepts of good laboratory practice (GLP) and FDA requirements.

He eventually transitioned into pharmaceutical quality assurance and quality control. Working for five years at a pharmaceutical company that was under the FDA's Application Integrity Policy (AIP), he learned the importance of being transparent with your quality problems and your processes. During his time in pharma, he became familiar with AQL sampling tables and started developing quality trending and metrics.




In 2005, Joe started consulting, and his first project was helping a large, multinational medical device manufacturer build process FMEAs for all its products. In the intervening years, he worked with numerous companies to build or improve their complaint analysis, trending, post-market surveillance (PMS), nonconformance (NC), corrective action/preventive action (CAPA), stewardship, and risk management processes. With more than 25 years of experience, Joe recently retired from consulting. As a manager, he was responsible for direct management of the consulting personnel as well as ensuring the competency, quality, experience, project management, and service excellence of the engagement teams. He helped many companies by teaching them how to be more compliant and efficient. Now that he's retired, he gets to enjoy spending more time with his family.
