Structural Decision Diagrams in Digital Test: Theory and Applications (ISBN 3031447336, 9783031447334)

This is the first book that sums up test-related modeling of digital circuits and systems by a new structural-decision-diagram-based approach.

English · 608 pages · 2024

Table of Contents
Preface
The Idea
Acknowledgements
Contents
1 Introduction
2 Overview of Structural Decision Diagrams
2.1 A Short History of Structural Decision Diagrams
2.1.1 Binary Decision Diagrams
2.1.2 Logic-Level Structural Decision Diagrams
2.1.3 High-Level Decision Diagrams
2.2 Binary Decision Diagrams
2.3 Structurally Synthesized Binary Decision Diagrams
2.4 Shared Structurally Synthesized BDDs
2.5 High-Level Decision Diagrams
2.5.1 About the Similarity of Low- and High-Level Structural DDs
2.5.2 RTL Modeling of Digital Modules with HLDDs
2.5.3 RTL Modeling of Control Circuits with HLDDs
2.5.4 Instruction-Level Modeling of Microprocessors
2.6 Structural DDs as a Universal Tool for Digital Test
3 Structurally Synthesized BDDs
3.1 Synthesis of SSBDDs
3.1.1 Definition of SSBDD
3.1.2 Synthesis of the SSBDD Model for a Digital Circuit
3.1.3 Verification of the Correctness of SSBDDs
3.2 Properties and Specific Features of SSBDDs
3.2.1 Basic Operations on SSBDDs
3.2.2 Basic Properties of SSBDD
3.2.3 SSBDDs and Boolean Algebra
3.2.4 SSBDDs and Functional BDDs
3.3 Equivalent Transformations of SSBDDs
3.3.1 Mapping Between SSBDDs and FFRs of Digital Circuits
3.3.2 Restructuring of SSBDDs to Speed-Up Simulation
3.3.3 Optimization of SSBDDs by Reconfiguring the Structure
3.3.4 Restructuring of SSBDDs for Sharing the Sub-graphs
3.4 Shared Structurally Synthesized BDD
3.4.1 Definitions and the Structure of S3BDDs
3.4.2 Synthesis of S3BDDs
3.4.3 The Size of the S3BDD Model
4 Fault Modelling with Structural BDDs
4.1 Fault Modeling with Structural BDDs
4.1.1 Modeling Faults with SSBDDs
4.1.2 Measuring Fault Coverage Using SSBDDs
4.1.3 Modeling Faults with S3BDDs
4.1.4 Mapping of S3BDD Nodes to Signal Paths in Circuits
4.1.5 Making Faults Observable in S3BDDs
4.1.6 Modeling Path and Path Segment Delay Faults in S3BDDs
4.2 Extending the Class of Faults
4.2.1 Generalization of the SAF Model
4.2.2 Modeling Physical Defects by Boolean Differential Equations
4.2.3 Hierarchical Fault Modeling Using Conditional SAF
4.3 Fault Collapsing
4.3.1 Related Work and Definitions
4.3.2 Fault Equivalence and Fault Dominance in the SSBDD Model
4.3.3 Fault Collapsing in SSBDDs
5 Logic-Level Fault Simulation
5.1 Fault-Free Logic Simulation
5.1.1 Fault-Free Simulation with Structural BDDs
5.1.2 Multi-valued Logic Simulation with SSBDDs
5.2 Fault Simulation in Combinational Circuits
5.2.1 Related Work and Overview
5.2.2 Parallel Fault Simulation with SSBDDs
5.2.3 Critical Path Parallel Pattern Fault Backtracing
5.2.4 Fault Simulation for the Extended Class of Faults
5.2.5 Fault Simulation in Multiple Core Environments
5.3 Fault Simulation in Sequential Circuits
5.3.1 Combining Combinational and Sequential Simulation
5.3.2 Design for Testability for Fault Simulation in Sequential Circuits
5.4 Identification of Critical Paths with Logic Simulation
5.4.1 The Concept of True Critical Path Identification
5.4.2 Calculation of the Lengths of Sensitized Paths in Sequential Circuits with SSBDDs
5.4.3 Search for the Longest Critical Paths in Sequential Circuits
6 Test Generation, Testability and Fault Diagnosis
6.1 Test Generation Using Structural BDDs
6.1.1 Path Activation in SSBDDs
6.1.2 Test Pattern Generation with SSBDDs
6.1.3 Test Pattern Generation with S3BDDs
6.1.4 The Problem of Fault Masking
6.2 Test Pattern Generation for Multiple Faults with SSBDDs
6.2.1 Related Work
6.2.2 Contributions of the Method of Test Groups
6.2.3 The Problem of Fault Masking Revisited
6.2.4 SSBDDs and the Topological View on the Fault Masking
6.2.5 The Concept of Test Groups
6.3 Testability Analysis of Digital Circuits with SSBDDs
6.3.1 Related Work and Overview
6.3.2 Calculation of Signal Probabilities in the Logic Circuits
6.3.3 Calculating the Probabilistic Controllability with SSBDDs
6.3.4 Controllability of Signals at Internal Nodes of FFRs
6.3.5 Discussion on Different Methods of Controllability Calculation
6.3.6 Experimental Research Results on Calculating Signal Probabilities
6.3.7 Concluding Remarks
6.4 Fault Diagnosis in the General Case of Multiple Faults
6.4.1 Angel’s Advocate Approach to Fault Diagnosis
6.4.2 Boolean Differential Equations and Fault Diagnosis
6.4.3 A General Equation for Modeling Diagnostic Processes
6.4.4 Solving Boolean Differential Equations with SSBDDs
6.4.5 Test Groups and Hierarchical Fault Diagnosis
7 High-Level Decision Diagrams
7.1 Theoretical Basics of HLDDs
7.1.1 HLDDs as a Diagnostic Model for Digital Systems
7.1.2 Behavioral Level Synthesis of HLDDs
7.1.3 Structural Synthesis of HLDDs
7.1.4 Timed Superposition of DDs
7.2 High-Level DDs and Design Simulation
7.2.1 Simulation of Digital Systems with HLDDs
7.2.2 Vector High-Level Decision Diagrams
7.2.3 Code Coverage Measurement with HLDDs
7.2.4 Hierarchical Fault Simulation Combining SSBDDs and HLDDs
7.3 Hierarchical Multi-level Test Generation in Sequential Circuits with DDs
7.3.1 Related Work in Sequential Circuit Test Generation
7.3.2 Hierarchical Test Generation for Sequential Circuits
7.3.3 DECIDER
7.3.4 Untestability Identification in Sequential Circuits with HLDDs
8 HLDD-Based Test Generation for RISC Processors
8.1 Overview of High-Level Test Generation Concepts for Processors
8.1.1 Fault Modeling of Digital Systems at Different Levels
8.1.2 Related Work on the Microprocessor Test
8.1.3 Research on Software-Based Self-test of Microprocessors
8.1.4 Contributions Beyond the State of the Art
8.2 High-Level Fault Modeling of Microprocessors with HLDDs
8.2.1 Fault Modeling as a Trade-off of Complexity Versus Accuracy
8.2.2 Traditional High-Level Fault Models for Digital Systems
8.2.3 HLDD-Based Functional Fault Model for RISC Processors
8.3 Implementation-Independent Control Fault Model for Processors
8.3.1 Partitioning of a RISC Processor into Modules
8.3.2 Theoretical Foundations of the Implementation-Independent Test
8.3.3 High-Level Control Fault Model of MUTs on HLDDs
8.3.4 Mixed-Level Identification of Fault Redundancy
8.3.5 Optimization of HLDDs and High-Level Fault Modeling Trade-offs
8.4 Extension of Fault Classes for Implementation-Independent Testing
8.4.1 Mapping of Low-Level Structural Faults to the High-Level Constraints-Based Fault Model
8.4.2 Extension of the Fault Classes with Implementation-Independent Testing
8.5 Test Data Generation for the Modules of RISC Processors
8.5.1 Test Data for Testing the Control Part of a MUT
8.5.2 Pseudo-exhaustive Test Generation for the Data Part
8.6 Software-Based Self-Test Synthesis
8.6.1 Organization of Test Programs and the Concept of Test Templates
8.6.2 Conformity and Scanning Tests for a MUT
8.6.3 Test Generation for Delay Faults
8.7 Multiple Fault Testing in Microprocessors Using HLDDs
8.7.1 Topological View on Decision Diagrams for Multiple Fault Reasoning
8.7.2 Merged Cycle-Based HLDD Model for Digital Systems
8.7.3 Test Generation for Microprocessors Avoiding Fault Masking
References

Computer Science Foundations and Applied Logic

Raimund Ubar · Jaan Raik · Maksim Jenihhin · Artur Jutman

Structural Decision Diagrams in Digital Test: Theory and Applications

Computer Science Foundations and Applied Logic

Editors-in-Chief: Vijay Ganesh, University of Waterloo, Waterloo, Canada; Ilias S. Kotsireas, Department of Physics and Computer Science, Wilfrid Laurier University, Waterloo, ON, Canada

Editorial Board: Erika Abraham, Department of Computer Science, RWTH Aachen University, Aachen, Nordrhein-Westfalen, Germany; Olaf Beyersdorff, Friedrich Schiller University Jena, Jena, Thüringen, Germany; Jasmin Blanchette, Garching, Germany; Armin Biere, Informatik, ETH Zentrum, RZH, Computer Systems Institute, Zürich, Switzerland; Sam Buss, Department of Mathematics, University of California, San Diego, La Jolla, CA, USA; Matthew England, Engineering and Computing, Coventry University, Coventry, UK; Jacques Fleuriot, The University of Edinburgh, Scotland, Selkirkshire, UK; Pascal Fontaine, University of Lorraine, Villers Les Nancy Cedex, France; Arie Gurfinkel, Pittsburgh, PA, USA; Marijn Heule, Algorithms, University of Texas, Austin, TX, USA; Reinhard Kahle, Departamento de Matematica, Universidade Nova de Lisboa, Caparica, Portugal; Phokion Kolaitis, University of California, Santa Cruz, CA, USA; Antonina Kolokolova, Department of Computer Science, Memorial University of Newfoundland, St. John’s, NL, Canada; Ralph Matthes, Universität München, München, Germany; Assia Mahboubi, Institut National de Recherche en Informatique et en Automatique, Nantes Cedex 3, France; Jakob Nordström, Stockholm, Sweden; Prakash Panangaden, School of Computer Science, McGill University, Montreal, QC, Canada; Kristin Yvonne Rozier, Champaign, IL, USA; Thomas Studer, Institute of Computer Science and Applied Mathematics, University of Berne, Berne, Switzerland; Cesare Tinelli, The University of Iowa, Iowa City, IA, USA

Computer Science Foundations and Applied Logic is a growing series that focuses on the foundations of computing and their interaction with applied logic, including how science overall is driven by this. Thus, applications of computer science to mathematical logic, as well as applications of mathematical logic to computer science, will yield many topics of interest. Among other areas, it will cover combinations of logical reasoning and machine learning as applied to AI, mathematics, physics, as well as other areas of science and engineering. The series (previously known as Progress in Computer Science and Applied Logic) welcomes proposals for research monographs, textbooks and polished lectures, and professional text/references. The scientific content will be of strong interest to both computer scientists and logicians.

Raimund Ubar · Jaan Raik · Maksim Jenihhin · Artur Jutman

Structural Decision Diagrams in Digital Test: Theory and Applications

Raimund Ubar Department of Computer Systems Tallinn University of Technology Tallinn, Estonia

Jaan Raik Department of Computer Systems Tallinn University of Technology Tallinn, Estonia

Maksim Jenihhin Department of Computer Systems Tallinn University of Technology Tallinn, Estonia

Artur Jutman Department of Computer Systems Tallinn University of Technology Tallinn, Estonia Testonica Lab Ltd. Tallinn, Estonia

ISSN 2731-5754 ISSN 2731-5762 (electronic)
Computer Science Foundations and Applied Logic
ISBN 978-3-031-44733-4 ISBN 978-3-031-44734-1 (eBook)
https://doi.org/10.1007/978-3-031-44734-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface

The Idea

The first ideas of structural Binary Decision Diagrams (BDDs) date back to the 70s. The primary motivation for introducing this model into the test field, initially under the name of Alternative Graphs (AG) [421], was to reduce the complexity of fault reasoning in gate-level digital circuits and to increase the speed of fault simulation. Very soon, it was found that BDDs could serve as an efficient graph-based tool for Boolean function manipulation in a broad sense, and starting from the 80s, BDDs became the state-of-the-art data structure for CAD tools in digital design [60].

The conceptual difference between the traditional BDDs and the AGs lay in the purposes of these models. While the former was intended to represent a Boolean function only, regardless of its actual implementation, the purpose of the AG model was to associate the Boolean function with its specific gate-level circuit implementation. Later, to emphasize this particular difference between the models, the AG was renamed Structurally Synthesized BDD (SSBDD).

What makes SSBDDs particularly attractive for applications in digital test is their ability to simultaneously and explicitly represent all structural faults of a given circuit. The traditional BDD model, on the contrary, represents only a single pure function, either for the golden circuit or for a faulty circuit with one selected fault at a time; fault simulation on BDDs hence requires comparing the calculations on both of them. SSBDDs, in contrast, integrate the function, the essential structural details of the implementation, and the complete fault list in a compact (collapsed) form, all in the same model. This feature makes it possible not only to simulate many faults in parallel but also to reason about multiple simultaneous faults, such as analyzing mutually masking multiple faults.

The advantages of graph-based SSBDDs stimulated the generalization of the logic-level SSBDD to the high-level DD (HLDD) for representing the topological structure and the faults in more complex systems, such as RTL designs and microprocessor modules. The HLDD model opened research avenues for employing fault
simulation and test generation algorithms established for low levels of circuit abstraction to higher-level scalable representations. A notable example is the automation of Software-Based Self-Test for processors. The main motivation of this book is to collect into a single volume the research results in the development of the theory and applications of structural decision diagrams achieved collectively by the co-authors and their students at Tallinn University of Technology in Estonia during the last several decades.

Tallinn, Estonia

Raimund Ubar
Jaan Raik
Maksim Jenihhin
Artur Jutman

Acknowledgements

Many of the ideas presented in this book also originate from the years when Estonia belonged to the Soviet Union. The authors express deep gratitude for the fruitful discussions and cooperation on the topics of the book to P. Parhomenko, A. Zakrevski, E. Sogomonjan, A. Birger, A. Matrossova, R. Šeinauskas, and many other digital test researchers in the former closed Soviet Union.

From the first years after Estonia regained independence, the doors opened to the West, and the authors of the book benefited from many collaborative research projects supported by the EU, such as Tempus, Copernicus, Framework Programs 3 to 7, Horizon 2020, and Horizon Europe. In particular, we highly appreciate the support of M. Glesner (TU Darmstadt), B. Courtois, D. Borrione, G. Saucier (TIMA, Grenoble), P. Prinetto, M. Sonza Reorda (Politecnico di Torino), H.-J. Wunderlich (U Stuttgart), Z. Peng (U. Linköping), G. Elst (Fraunhofer Institute, Dresden), Th. Vierhaus (TU Brandenburg), and H. Fujiwara (NAIST, Japan) for hosting our longer-term research visits to their laboratories and for mutual cooperation.

We had the good fortune to learn from many excellent researchers, to work together with them in European research projects, and to have many useful and interesting discussions. For that, we would especially like to thank B. Bennetts, R. Dorsh, R. Drechsler, F. Fummi, D. Gizopoulos, E. Gramatova, V. Hahanov, J. Hlavička, T. Hollstein, S. Hellebrand, W. Kuzmicz, E. Larsson, R. Leveugle, A. Morawiec, Z. Navabi, O. Novak, A. Pataricza, W. Pleskacz, M. Schölzel, R. Stankovic, F. Vargas, D. Wuttke, Y. Zorian, and many others.

We owe many thanks to our colleagues and former Ph.D. students in Estonia, who were involved in our research and participated in implementing the prototypes needed for our experimental investigations, especially to S. P. Azad, A. Buldas, S. Devadze, P. Ellervee, M. Gorev, A. Jasnetski, G. Jervan, L. Jürimägi, A. Karputkin, H. Krupnova, H. Kruus, M. Kruus, S. Kostin, J. Kõusaar, M. Mandre, E. Orasson, A. Oyeniran, M. Pall, A. Peder, U. Reinsalu, M. Tombak, and A. Tsertov.

We are very grateful for the swift support and sincere encouragement provided by the team of Springer.

Finally, we wish to thank our families and particularly our spouses, Tiiu, Jelena, and Anna, for the time they provided for us to complete this book. Without their understanding and support, this undertaking would not have been possible.

Raimund Ubar
Jaan Raik
Maksim Jenihhin
Artur Jutman

Chapter 1

Introduction

In this book, we present a novel approach to modeling digital circuits and systems by graphs, called structural decision diagrams, which jointly represent structural and functional information about digital circuits and systems. The model allows multi-level and hierarchical representation to cope with the complexity of systems. The primary objective and novelty of the new model is to enable uniform and formal reasoning about the cause-effect diagnostic relations between faults and the behavior of circuit components, supporting digital test and verification solutions.

The complexity of digital systems is constantly growing and contributing to the phenomenon known as the “design gap”. At the same time, nanometer-scale technology advancements impose new challenges on testing systems-on-chip. Shrinking device geometries and increasing transistor counts produce new failure mechanisms, new manufacturing defect types, and reliability issues in electronic products. As a result, verification, test and diagnosis continue to dominate as crucial factors in the time-to-market, dependability, and cost of modern many-core systems-on-chip.

An exciting question in the testing of today’s digital systems is: how can the reasoning about cause-effect relationships between errors, faults, and defects be improved at the continuously growing complexity of systems? Two extreme trends can be observed in the diagnostic modeling of digital systems: defect-oriented modeling on one hand, and high-level fault modeling and test generation on the other. To consider both trends simultaneously, hierarchical cross-level approaches are identified as a promising choice. The difficulties in developing analytical high-level and hierarchical cross-level approaches to test generation and fault analysis are related to the lack of suitable tools and models. Register transfer level models, instruction set architectures, data-flow charts, and hardware description languages (VHDL, Verilog, SystemC) are not well suited for formal cause-effect reasoning and test generation, forcing engineers to settle for heuristic and approximate methods, which, as a rule, do not provide satisfactory solutions.

A promising solution for diagnostic modeling of cause-effect relationships in a uniform way at different levels of abstraction is provided by Decision Diagrams (DD). Logic-level Binary Decision Diagrams have been the state-of-the-art data structure in VLSI CAD for more than three decades. The traditional use of BDDs, introduced in [15, 60], is function-oriented; modeling the structural, fault-related details for testing purposes has never been the direct objective of BDD-based models. Moreover, traditional BDDs target only logic-level modeling.

A special type of DDs was proposed and developed in [188, 421, 439, 481] for modeling the structure of gate- and higher-level implementations of digital circuits. These DDs were initially called Alternative Graphs (AG) and classified into low- and high-level decision diagrams. The low-level AGs were later renamed Structurally Synthesized BDDs (SSBDD) to stress the similarity with traditional BDDs and the fact that the graphs are synthesized directly from the structure of the given gate-level circuit descriptions. SSBDDs provide a one-to-one mapping between the nodes of SSBDDs and signal paths or path segments in the related circuits. Such a mapping allows direct reasoning about gate-level faults in SSBDDs, more compactly than in conventional gate-level representations. Using SSBDDs made it possible to increase the speed of fault simulation. Fault reasoning became easier, allowing the development of efficient test generation solutions that avoid fault masking and reliably test multiple faults.

The main goal of introducing high-level DDs (HLDD) was to generalize the test-related algorithms developed for SSBDDs so that they can be used at higher levels of system abstraction, allowing formal and deterministic high-level test generation and fault reasoning [188, 232, 431, 439, 471]. HLDDs supported the creation of a novel efficient method for the synthesis of deterministic, implementation-independent software-based self-test. Whereas the traditional use of BDDs is based on graph manipulation techniques targeting a single function, the SSBDD-based test-related algorithms, also generalized for high-level DDs, are based on graph-theoretical topology analysis that allows concurrent reasoning about multiple faulty versions of the circuit under test directly in the graph.

SSBDDs represent a novel model where Boolean algebra and graph theory meet, producing synergy for processing Boolean formulas and reasoning about errors in these formulas. In this view, SSBDDs can be of interest not only in the field of gate-level testing but also in other fields related to applications of Boolean algebra. HLDDs, in turn, are a model where graph theory and predicate calculus meet.

This is the first comprehensive book about structural decision diagrams, which serve as a single tool for representing both the functionality and the logic structure of digital circuits and systems at different levels of abstraction, such as the gate, register transfer, and instruction-set architecture levels. The book gives a systematic overview of the development of SSBDDs, shared SSBDDs (S3BDDs) and HLDDs. It covers the research history during the last two
decades, covers the synthesis of the DDs, and investigates the properties of DDs and their applications in fault modeling, fault simulation, test generation and fault diagnosis at diverse levels of abstraction of digital systems.

Chapter 2 presents a short history of the development of decision diagrams and compares the traditional BDDs, referred to here as functional BDDs, with the proposed structural BDDs (SSBDDs, S3BDDs), demonstrating the very close similarity of SSBDDs and HLDDs.

Chapter 3 presents the theoretical basics of the structural BDDs and discusses the methods of synthesis and the main properties of both SSBDDs and S3BDDs. Here, we describe their relationships to Boolean formulas and compare them with traditional BDDs. We also consider the optimization of the models by introducing the theory of equivalent transformations of SSBDDs and by deriving lower bounds on the sizes of the S3BDDs created for representing a given gate-level circuit. The essence of SSBDDs reveals itself through the topological interpretation of the laws of Boolean algebra, which opens new possibilities for manipulating the model with techniques derived from graph theory. This is especially relevant to fault modeling with SSBDDs.

Chapter 4 discusses the methods of fault modeling using SSBDDs and S3BDDs. Groups of stuck-at faults (SAF) along signal paths in a circuit are mapped to single representative faults at the nodes of SSBDDs, resulting in considerable fault collapsing. The traditional SAF model is generalized to the conditional SAF model, which covers a large class of structural faults, such as shorts, opens and various other physical defects. We propose a formal technique to generate conditional SAF models by solving Boolean differential equations describing the physical behavior of defects in transistor-level circuits.

Chapter 5 demonstrates several topology-based advantages of structural BDDs, which allow increasing the speed of parallel simulation methods in combinational and sequential circuits. For example, a novel fault simulation method based on parallel-pattern critical path backtracing achieves a speed-up of several times compared with commercial gate-level fault simulators. We show how the inherent hierarchy of the SSBDD model increases the efficiency of identifying timing-critical paths with hierarchical logic simulation in combinational and sequential circuits.

Chapter 6 introduces novel SSBDD-based methods for test generation, testability analysis and fault diagnosis in logic circuits. We have developed a novel concept of generating test groups for testing multiple faults in combinational circuits, which avoids fault masking and helps to prove the correctness of selected subcircuits in gate networks. Two views on test generation, called the Devil’s and the Angel’s approaches, are discussed and compared. The difference between them is that the Devil’s approach targets single fault cases, ignoring the risk of fault masking, while the Angel’s approach concentrates on the correctness proof without listing the faults, excluding at the same time the risk of fault masking. Fault diagnosis is carried out under the multiple fault assumption by modeling diagnostic experiments with Boolean differential equations and finding the solutions using SSBDDs.

Chapter 7 introduces the high-level decision diagrams for hierarchical test generation and fault simulation. We present the methods of HLDD synthesis for RTL
circuits, procedural descriptions, VHDL processes, and ISA-based models of processors. It is important to stress the topological similarity of SSBDDs and HLDDs, which allows an easy transfer of the algorithms developed for low-level DDs to algorithms for high-level DDs. For parallel high-level fault simulation, we generalize the HLDDs into a vector form (VHLDD) by concatenating high-level variables. The chapter also presents the hierarchical test pattern generator DECIDER, developed for test generation of complex sequential circuits using SSBDDs and HLDDs.

Chapter 8 is devoted to the automation of software-based self-test (SBST) synthesis for RISC processors, using the HLDD model developed from the ISA description. We present a novel implementation-independent method for SBST synthesis, which does not need knowledge of the gate-level descriptions of processors. Nevertheless, we show that the method generates SBST programs that detect a large class of structural faults covered by conditional SAF, as well as functional faults similar to the fault class that March tests cover in testing memory arrays.

Theoreticians and practitioners in the fields of computer science and computer engineering are continually seeking new ideas and improved techniques to make the system design and testing process more efficient, cost-effective, fast and accurate. For a long time, one of the promising and popular approaches to system modeling has been BDDs. In this book, we present novel extended types of decision diagrams that model the structure of circuits and generalize the binary-level ideas of diagnostic modeling to higher levels of system abstraction. We believe this book will serve as a vital source on a new branch of DDs and their applications for researchers and engineers working in the fields of computer science, computer engineering, digital electronics, test and diagnosis, and the dependability of systems.

The ideas and methods described in the book have been applied in PhD and MSc courses at Tallinn Technical University, Technical University of Darmstadt, and Linköping University, and presented in tutorials and short courses at a number of other universities over the years. We believe the book can serve as a reference source for graduate and advanced undergraduate-level courses in computer engineering and electronics at technical universities.
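To give a concrete, if simplified, feel for the node-to-path mapping described above, the following Python sketch shows an illustrative SSBDD-like structure. It is not the authors' formalism from Chapter 3: the node layout, the path labels and the example circuit are hypothetical. The point is only that evaluating such a graph yields both the output value and the sequence of traversed nodes, i.e., exactly the structural information on which path-oriented fault reasoning can operate.

```python
# Illustrative sketch only (assumed structure, not the book's exact SSBDD definition).
from dataclasses import dataclass
from typing import Dict, List, Tuple, Union

Terminal = int  # constant terminals 0 and 1

@dataclass
class Node:
    var: str                       # input variable tested in this node
    inverted: bool                 # True if the tested literal is negated
    path: str                      # hypothetical label of the mapped signal-path segment
    lo: Union["Node", Terminal]    # successor taken when the literal evaluates to 0
    hi: Union["Node", Terminal]    # successor taken when the literal evaluates to 1

def evaluate(root: Union[Node, Terminal],
             pattern: Dict[str, int]) -> Tuple[int, List[str]]:
    """Simulate one input pattern; return the output value and the activated node sequence."""
    activated: List[str] = []
    node = root
    while not isinstance(node, int):
        activated.append(node.path)
        value = pattern[node.var] ^ int(node.inverted)
        node = node.hi if value else node.lo
    return node, activated

# Hypothetical circuit y = (x1 AND x2) OR x3, with made-up path labels.
n3 = Node("x3", False, "x3 -> OR -> y", lo=0, hi=1)
n2 = Node("x2", False, "x2 -> AND -> OR -> y", lo=n3, hi=1)
n1 = Node("x1", False, "x1 -> AND -> OR -> y", lo=n3, hi=n2)

value, path = evaluate(n1, {"x1": 1, "x2": 0, "x3": 1})
print(value, path)   # 1 ['x1 -> AND -> OR -> y', 'x2 -> AND -> OR -> y', 'x3 -> OR -> y']
```

In an actual SSBDD, each such node stands for a signal path or path segment of the gate-level circuit, so the stuck-at faults of that path can be modeled by forcing the node's decision to a constant; Chapter 4 develops this systematically.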

Chapter 2

Overview of Structural Decision Diagrams

This chapter presents a short history of the development of decision diagrams and compares the traditional BDDs, here referred to as functional BDDs, with the proposed new type of structural DDs. We classify the latter as logic-level Structurally Synthesized BDDs (SSBDD) and High-Level structural DDs (HLDD).

2.1 A Short History of Structural Decision Diagrams

In the following, we first present an overview of BDDs, which have become a classic model today, highlighting also the many different types into which this model has developed over the years. We then provide a similar historical overview of the evolution of structural decision diagrams, which has paralleled the evolution of BDDs, covering the models of SSBDDs and HLDDs.

2.1.1 Binary Decision Diagrams

Binary decision diagrams (BDDs) have become the state-of-the-art data structure in the field of Digital Test and Design for the representation and manipulation of Boolean functions and their implementations. Using graph technology for the representation of switching functions was already considered in [244]. At that time, however, the graph concept did not arouse interest among hardware designers; apparently, the digital designs of the day were simple enough for traditional analytical methods. On the other hand, the novel graph concept proposed by Lee attracted, in the 60s, researchers working in the fields of complexity theory and software design. The earliest papers of this kind [112] were devoted to proof theory in predicate calculus. Several applications of graph theory appeared later
in the fields of programming, compiling and switching theory. The research targeted the problems of evaluating Boolean expressions [59] and evaluating the complexity of binary programs [236]. In [326], a theory of atomic digraphs was developed with the goal of unifying certain design techniques in various areas of computer science. In the middle of the 70s, several papers appeared that introduced the decision graph concept also in the field of hardware design [14, 15, 235, 237, 421]. In [235], the graphs were used for representing digital circuits, and in [236], a method was proposed for the synthesis of binary programs representing Boolean functions of digital circuits. Sheldon Akers was the first to introduce the term Binary Decision Diagrams [14]. He developed a BDD-based methodology for generating tests for digital circuits [15]. In [421], a special type of BDDs was introduced and named Alternative Graphs (AG). The specific feature of AGs, compared with traditional BDDs, was the direct mapping between the AGs and the structures of the related circuits. Based on the AGs, a method was proposed for calculating Boolean derivatives and generating tests for gate-level circuits. A more general approach to using BDDs was presented by A. Thayse [409–411], who introduced a lattice of different BDDs and developed algorithms for finding minimal BDDs. He applied his approach in many areas, including computer design, program optimization and artificial intelligence.

The real boom of developments in the field of BDDs started after R.E. Bryant [60] formulated in 1986 the notion of reduced ordered binary decision diagrams (ROBDDs), showing their canonicity and easy manipulability. He developed algorithms for creating ROBDDs and operating upon them. However, in the general case, the size of ROBDDs may grow exponentially. To address this issue, over the next 15 years several extensions of the BDD representation were developed in attempts to obtain more compact representations of Boolean functions, particularly for representing arithmetic circuits. This research has led to different refinements of data structures and improvements in algorithms operating on BDDs [95, 280]. Many different types of BDDs have been proposed and investigated over the decades, such as Edge-Valued BDDs (EVBDD) [281, 405, 239], Shared or Multi-Rooted BDDs (SBDD) [281], Ternary Decision Diagrams (TDD) or, more generally, Multi-Valued Decision Diagrams (MDD) [385], Functional Decision Diagrams (FDD) [227], Multiterminal BDDs (MTBDD) [80, 74], Zero-Suppressed BDDs (ZBDD) [279], Algebraic Decision Diagrams (ADD) [53], Kronecker FDDs (KFBDDs) [106], Extended BDDs (XBDDs) [315], Binary Moment Diagrams (BMDs) and Multiplicative BMDs [106, 40], Free BDDs [40], Hybrid BDDs (HBDD) [57], Residue BDDs [216], Partitioned OBDDs (POBDDs) [293], Fibonacci Decision Diagrams (FibDDs) for signal processing, including image processing [74, 397], abstract BDDs (aBDDs) [183], Indexed BDDs [179], Boolean Expression BDDs (BEDs) for verification of combinational logic circuits and tautology checking [12], Transistor Mapped BDDs (TM-BDDs) for transistor-level synthesis of static CMOS circuits [240], Interval Decision Diagrams (IDDs) and Interval Mapping Diagrams (IMDs) for symbolic verification of process networks [399, 401], Clock Difference Diagrams (CDDs) for symbolic protocol
verification [257], Queue BDDs (QBDDs) for symbolic protocol verification [139], Fuzzy Decision Diagrams (FuDDs) [390], and Determinant Decision Diagrams (DDDs) for symbolic analysis of large analog circuits [400]. Overviews and more detailed information about the different types of BDDs can be found, for example, in [95, 213, 380, 369].

Historically, the traditional application of BDDs has been functional representation, i.e., the objective was to represent and manipulate Boolean functions by BDDs as efficiently as possible. The problem of mapping the structural details of the related circuit directly into the structure of the BDDs has not been a topic of research so far.
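To make the functional-BDD viewpoint above concrete, here is a minimal Python sketch (an illustration, not code from the book) of the two reduction rules behind Bryant's canonical ROBDDs: skipping redundant tests and sharing structurally identical nodes through a unique table. The example function, the variable order and the brute-force Shannon expansion used to build the graph are assumptions made for brevity.

```python
class ROBDD:
    def __init__(self, var_order):
        self.order = list(var_order)
        self.unique = {}                  # (var, lo, hi) -> node id
        self.node = {}                    # node id -> (var, lo, hi); 0 and 1 are terminals
        self.next_id = 2

    def mk(self, var, lo, hi):
        if lo == hi:                      # reduction rule 1: skip redundant tests
            return lo
        key = (var, lo, hi)
        if key not in self.unique:        # reduction rule 2: share identical nodes
            self.unique[key] = self.next_id
            self.node[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def build(self, f, i=0, env=None):
        """Build the ROBDD of a Python predicate f(env) by Shannon expansion."""
        env = env or {}
        if i == len(self.order):
            return int(bool(f(env)))
        var = self.order[i]
        lo = self.build(f, i + 1, {**env, var: 0})
        hi = self.build(f, i + 1, {**env, var: 1})
        return self.mk(var, lo, hi)

# Hypothetical example: f(a, b, c) = (a and b) or c under the order a < b < c.
bdd = ROBDD(["a", "b", "c"])
root = bdd.build(lambda e: (e["a"] and e["b"]) or e["c"])
print(root, len(bdd.node))   # canonical reduced graph with 3 internal nodes
```

The exponential Shannon expansion here is only for illustration; practical BDD packages build graphs with memoized apply/ITE operators. The point relevant to this book is that such a reduced graph captures only the function, with no trace of the gate-level structure, which is exactly the gap the structural DDs of the following subsections address.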

2.1.2 Logic-Level Structural Decision Diagrams

The idea of considering the structural aspect of logic circuits in BDDs was first introduced in [421]. The target was to create a graph model in the form of BDDs that would represent the structure of digital circuits, and especially the possible structural faults in circuits, with the objective of test generation and fault simulation. The graphs were synthesized in the form of decision diagrams directly from the topology of the gate-level network. To accommodate the possibility of branching in more than two directions, this new model was named Alternative Graphs (AG), by analogy to [373]. The main idea of AGs was to establish a one-to-one mapping between the nodes of an AG and the signal path segments in the related gate-level circuit. Such a mapping allowed many problems of digital test to be investigated directly in AGs, such as fault modeling and fault simulation, fault collapsing, test generation, fault masking in multiple fault cases, fault diagnosis, testability analysis, etc. The first ideas and developments on AGs were published in Russian [375, 421], later in German [332, 427–430], and in English [334, 439]. Due to the similarity of AGs to BDDs, AGs were later renamed Structurally Synthesized BDDs (SSBDD) [441]. The name stressed the way they were synthesized, i.e., directly from the gate-level network structures of digital circuits. The mathematical properties of SSBDDs were investigated in [187, 417].

The main motivation for introducing SSBDDs was to develop efficient algorithms for test generation and fault simulation in digital circuits by exploiting the reduced complexity of the model compared to the gate-level representation. A basic overview of the theory of SSBDDs was presented in [375, 429]. Novel test generation algorithms on SSBDDs were proposed for single stuck-at faults (SAF) in [333, 441] and for multiple SAF in [467]. The first ATPG using SSBDDs was developed at the beginning of the 80s, with published experimental data in [435]. In [429], a general fault model composed of SAF and logic conditions was proposed. A similar model, called the conditional SAF model, was later independently proposed and discussed in many papers [55, 81, 212]. Based on this idea, defect-oriented fault simulation and test generation methods were developed using SSBDDs
[359]. In [446], SSBDDs were used for representing Boolean differential equations as a method for modeling diagnostic experiments, where the solutions represented SAF diagnosis statements for digital circuits, taking into account the fault masking possibilities in the presence of multiple SAF. Based on the one-to-one mapping between the nodes in SSBDDs and signal path segments in circuits, efficient fault simulation methods based on deductive simulation [357] and critical path backtracing [448] were developed. A fast fault simulation approach for combinational circuits, based on parallel reasoning of faults in SSBDDs simultaneously for many test patterns, was developed in [450]. The method was generalized for extended fault classes such as the conditional SAF [451] and the X-fault model [452]. The parallel critical path backtracing method was also extended for delay fault coverage calculation in combinational circuits in [228, 229] and for parallel critical-path-backtracing-based simulation of sequential circuits in [218]. In [148], a novel fault simulation method was developed, which exploits two types of parallelism: (a) bit-level parallelism for multiple-pattern reasoning (a simple illustration is sketched at the end of this subsection) and (b) distribution of the fault reasoning process over different cores in a multi-core processor environment. A novel algorithm for multi-valued simulation, based on Boolean differential algebra, was implemented using SSBDDs in [441]. Later, SSBDDs were used to speed up the timing simulation of digital circuits [204] and to quickly identify true critical paths in sequential circuits [465].

SSBDDs have also been used for the optimization of fault location processes in digital circuits [438], design error diagnosis [419, 443], testability evaluation of circuits [454], optimization of SSBDDs for fast evaluation of the quality of test sets for digital systems [486], and for developing fault collapsing algorithms with linear complexity [458, 462]. In [203], algorithms were developed for the calculation of probabilistic testability measures of digital circuits with SSBDDs. A new type of Shared SSBDDs (S3BDDs) with multiple inputs was proposed in [289, 485] for further compaction of structural BDDs, targeting fault collapsing [455, 198] and speeding up logic simulation [461]. A theory of equivalent transformations of SSBDDs for the synthesis of S3BDDs from SSBDDs was developed in [199, 200].

SSBDDs were the basis of several software tools developed for test synthesis and analysis and used in industry. For example, the first BDD-based automated test pattern generator (ATPG) was implemented at the beginning of the 80s and used in the defense and computer industries in the Soviet Union. The diagnostic software package Turbo-Tester, which includes tools for test generation and fault simulation, was licensed to many universities and institutions worldwide [184, 196].
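The bit-level pattern parallelism mentioned above can be illustrated with a small sketch. The netlist and the 64-bit packing below are hypothetical, and this is plain gate-level simulation rather than the SSBDD-based algorithms of [148, 450]; the principle shown is simply that packing one bit per test pattern into a machine word for every signal lets a single pass of bitwise operations evaluate all packed patterns at once.

```python
# Illustrative sketch of parallel-pattern evaluation with machine words.
WIDTH = 64
MASK = (1 << WIDTH) - 1

def pack(bits):
    """Pack up to 64 single-bit values; bit position = pattern index."""
    word = 0
    for i, b in enumerate(bits):
        word |= (b & 1) << i
    return word

def simulate(x1, x2, x3):
    """Hypothetical circuit y = (x1 AND x2) OR (NOT x3), all patterns at once."""
    a = x1 & x2              # AND gate
    b = ~x3 & MASK           # NOT gate (masked to the word width)
    return (a | b) & MASK    # OR gate

x1 = pack([1, 0, 1, 1])
x2 = pack([1, 1, 0, 1])
x3 = pack([0, 1, 1, 1])
y = simulate(x1, x2, x3)
print([(y >> i) & 1 for i in range(4)])   # per-pattern outputs: [1, 0, 0, 1]
```

Fault simulation adds a second dimension on top of this: the packed words can be recomputed for a faulty machine, or, as in the critical path backtracing methods cited above, the fault-free packed values can be backtraced to decide in parallel which faults each pattern detects.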

2.1.3 High-Level Decision Diagrams An important question in testing today’s digital systems is: how to improve the efficiency and quality of test-related methods, such as test generation, fault simulation and diagnosis, testability analysis, reliability assessment and enhancement, etc., at the continuously increasing complexity of systems? Two main opposite trends can be observed in the field of digital test: (a) physical defect orientation to increase the accuracy of handling faults and (b) high-level modeling of systems to speed up system diagnostic analysis. To simultaneously follow both trends, hierarchical approaches are a promising solution. The difficulties in developing the formalism of uniform cross-level and hierarchical approaches to digital test generation and fault simulation lay in the need to use different languages and models for different levels of abstraction. Most frequent examples are logic expressions for combinational circuits at a low logic level, classical automata theory and state transition diagrams to handle functional behavior of finite state machines (FSM), abstract execution graphs and system graphs to handle systems at the register transfer level (RTL), instruction set architecture (ISA) based modeling of controllers and microprocessors, flow-charts, hardware description languages (HDL, VHDL, Verilog, System C etc.) to model the hardware with software technology, Petri nets for system level description, etc. All the approaches referred above need dedicated mathematics and tools for developing test-related algorithms and fault models, making compatibility and joint crosslevel applications to maintain the hierarchy difficult and clumsy. For example, HDLbased modeling methods, which may be well and efficiently adapted for mixed-level fault simulation, lack the capability of analytical fault reasoning for test generation and fault diagnosis purposes. On the other hand, promising opportunities for cross-level and hierarchical diagnostic modeling of digital systems would provide the universal model of decision diagrams (DD) when extended for usage at higher abstract levels of digital systems. The most important impact of the high-level DDs (HLDD) could be expected from the possibility of generalization and extension of the methods developed for logic circuits based on SSBDDs to higher abstraction levels of digital systems on the uniform graph-based formalism. A lot of effort has already been made in this direction. The first attempt to generalize SSBDDs for representing digital systems at the RT level was made by introducing Vector Alternative Graphs (VAG) in [428, 431]. The basic idea of the VAG model was to partition the system into control and data parts. The nodes of VAG were divided respectively as well into two parts: internal nodes, labeled by the control signals, to describe the functionality of the control part in the form of SSBDDs, and terminal nodes, labeled by high-level functions executed in the data part. The logic level control signals represented the states of the FSM describing the control part, and the high-level functions of the data part were described using n-bit word register variables. In this way, the idea of the VAG model was to support test generation for controllers or microprocessors directly at
the microprogram or program level. In [433], a method was proposed for the formal synthesis of test programs with HLDDs for microprocessors. The first overview paper in English about the multi-level diagnostic modeling of digital systems with SSBDDs and HLDDs was published in [439]. HLDDs have been used in different applications of high-level and hierarchical test. New promising algorithms, techniques and prototype tools have been developed for simulating VHDL clock-driven multi-processes by Decision Diagrams [263], RT-level cycle-based simulation [471–473], fast RT-level fault simulation [348], hierarchical fault simulation [449, 478], fast analysis of classical code coverage metric [346], high-level test program automated synthesis [353, 354, 205], RT level testability analysis [341], and high-level design error diagnosis [347]. In [354], a novel Automated Test Pattern Generator (ATPG), called DECIDER, was proposed for RTL sequential circuits. This ATPG implemented a hierarchical top-down approach using mixed-level decision diagrams (HLDDs and SSBDDs). Both data and control parts of the design were handled uniformly. On the RT level, deterministic path activation in HLDDs was combined with simulation-based techniques in SSBDDs, used for random constraint solving. Experiments showed high fault coverages for circuits with complex sequential structures and higher speed in test generation compared with the plain gate-level ATPGs. A number of works have been published on implementing Assignment Decision Diagram (ADD) models introduced in [76]. ADDs have been combined with SAT methods to address test generation for RTL systems [129, 518], targeting the full coverage of modules in testing the RT-level datapaths. In [360, 361], novel HLDD-based high-level fault models were proposed. Combining the three new fault models and separate test generation for the control and data parts allowed higher SAF coverages compared to the state-of-the-art. In addition, the experimental results proved that RTL test methods targeting only the datapath functional units, like in the ADD-based approaches [129, 518], cannot guarantee high fault coverages for the overall design. In [358], the problem of identifying untestable faults in sequential circuits at the RT level was investigated. The goal was to improve the scalability compared to the previous works exploiting the plain logic level. A new specific class of sequentially untestable faults was introduced in [338], called register input logic SAF. A method was developed to identify these faults on the RT level. The paper [344] was the first publication, which considered the general case of sequentially untestable SAF within RTL modules, and provided proof of untestability at the RT level. In [492], a hierarchical method of identifying untestable faults in sequential circuits was proposed. The method is based on deriving, minimizing and solving test path constraints for modules embedded into RTL designs. In [131], a high-level ATPG framework was presented, which exploits two different computational models: HLDD-based DECIDER, combined with Extended Finite State Machine (EFSM)-based engine LAERTE++ from the University of Verona [107]. The HLDD-based engine relied on fault propagation, fault activation and constraint logic programming (CLP), and its particular target was to identify untestable areas in the design. On the contrary, the EFSM-based engine relies on
learning, CLP and backjumping techniques. It addresses design areas not covered by the HLDD exploration, which were not marked as untestable. The integration of the two engines allowed analyzing the state space of the design more efficiently and generating very effective test sequences. The paper [494] introduced a deterministic algorithm that extracts constraints for activating test paths at RTL and subsequently applies a constraint-solving package [109] for assembling the tests. Experiments on ITC99 and HLSynth92/95 benchmarks showed that the proposed deterministic method offers shorter run times and increases fault coverage for hard-to-test designs with respect to earlier, semi-formal approaches listed above. HLDDs were also used for developing tools for the verification of digital designs. The papers [188–190] propose a temporal extension version of HLDDs, named Temporally Extended HLDDs (THLDD), aiming at supporting temporal properties expressed in Property Specification Language (PSL). A novel method was developed for simulation-based checking of PSL assertions. PSL assertions were integrated into fast HLDD-based simulation. It was shown how VHDL checkers could be mapped to HLDD constructs. The contributions of the papers are the methodology for direct conversion of PSL properties to HLDD and a method for modification of the HLDD-based simulator for assertion checking support. In [191], an HLDD-based hardware functional verification framework APRICOT was developed for Assertions checking (monitoring), formal PRoperty checkIng, verification COverage analysis, and Test pattern generation. The paper [164] addresses the functional verification of designs and proposes a new tool for mutation analysis using HLDDs. The tool is integrated into the APRICOT environment. The verification process is based on HLDD simulation and graph perturbation. In [345], a method is proposed for locating design errors at the source level of RTL hardware description language code, using the design representation of HLDD models and correcting them by applying mutation operators. The error location is based on backtracing the mismatched and matched outputs of the design on the HLDDs. In [232, 470], a theory was developed for the formal verification of RT-level designs. For this purpose, HLDDs were represented by characteristic polynomials as a canonical form of HLDDs. The polynomials can be used for proving the equivalence between two HLDDs, which have the same functionalities, yet may have different internal structures. Using the canonical HLDD theory, the methods for probabilistic high-level equivalent checking in [233], and automated correction of design errors in [234] were developed. In the recent decade, the HLDDs have been applied to developing Software-Based Self-Test (SBST) methods for controllers and microprocessors. In [479], a method is presented for modeling microprocessors with HLDDs, and a novel fault model was introduced, which consists of 3 fault classes defined for the nodes of HLDDs. It was shown that the model covers the widely used “classical” high-level fault models, such as the 14 functional fault classes for microprocessors [404] and 9 categories of functional faults for RTL statements [387]. In [193], this model was extended by a novel class of hard-to-test faults, named “unintended actions”. In [477], a method
for reordering nodes in the HLDDs of microprocessors was proposed to minimize the test length. A theory of SBST generation for detecting multiple faults in microprocessors was developed in [186, 484]. The proposed HLDD-based method is a generalization of the test group method developed previously for testing multiple faults in gate-level circuits using SSBDDs [467]. High-level faults of any multiplicity are assumed to be present; however, there is no need to enumerate them as test targets. The method excludes the mutual masking of multiple faults. In [298], a method was proposed for the approximate evaluation of the multiple fault coverage for a given test and for given fault classes. The problem of fault redundancy was discussed in [297], and a method for the identification of redundant faults in microprocessors was proposed. The paper [302] proposed a novel HLDD-based method for implementation-independent SBST generation.

2.2 Binary Decision Diagrams

In this section, we present a brief introduction to the traditional BDDs used to represent and manipulate Boolean functions, with reference to [380]. This is the area where the studies developed in parallel with the structural Decision Diagrams (DD), i.e., the topic of this book. A BDD can be viewed as a data structure for representing Boolean functions. A special case of BDDs is the Binary Decision Tree (BDT), a graphical representation of the Boolean expression f = f(x1, x2, …, xn), which can be derived by recursive application of the Shannon decomposition rule f = ¬xi·f0 ∨ xi·f1, where f is decomposed into the co-factors f0 = f(x1, …, xi = 0, …, xn) and f1 = f(x1, …, xi = 1, …, xn) (Fig. 2.1a). By removing redundant information from the BDT, we can create BDDs of reduced size (Fig. 2.1b).
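To make the decomposition step concrete, the following is a minimal Python sketch (not taken from the book or its tools) that builds a binary decision tree by recursive Shannon decomposition of a Boolean function given as a callable; the function f and the variable ordering used here are illustrative assumptions.

```python
# Minimal sketch: building a binary decision tree by recursive Shannon
# decomposition f = ~x_i*f0 v x_i*f1 of a Boolean function given as a callable.

def shannon_tree(f, variables, assignment=None):
    """Return a nested tuple (var, low, high) or a terminal constant 0/1."""
    assignment = dict(assignment or {})
    if not variables:
        return int(f(assignment))                        # all inputs fixed -> terminal
    x, rest = variables[0], variables[1:]
    low  = shannon_tree(f, rest, {**assignment, x: 0})   # co-factor f0 (x = 0)
    high = shannon_tree(f, rest, {**assignment, x: 1})   # co-factor f1 (x = 1)
    return (x, low, high)

# Example: f = x1 v (x2 & x3), the function of Fig. 2.1, with an assumed ordering
f = lambda a: a["x1"] | (a["x2"] & a["x3"])
print(shannon_tree(f, ["x3", "x2", "x1"]))
```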

Fig. 2.1 A BDD and BDT for the Boolean function x1 ∨ (x2 ∧ x3) [380]

Fig. 2.2 Reduction rules for BDDs [380]

In this way, a BDD is a directed acyclic graph with two terminal nodes, called the 0-terminal and the 1-terminal. Every non-terminal node has a label identifying an input variable of the Boolean function and has two outgoing edges, called the 0-edge and the 1-edge. For minimizing the number of nodes in a BDT, the following reduction rules can be used:

• Node elimination: eliminate all redundant nodes whose two edges point to the same node (Fig. 2.2a).
• Node sharing: share all equivalent subgraphs (Fig. 2.2b).

An Ordered BDD (OBDD) is a BDD such that the input variables appear in a fixed order on all paths of the graph and no variable appears more than once on a path. Reduced Ordered BDDs (ROBDD) can be derived from OBDDs using the node elimination and node sharing rules. ROBDDs give canonical forms for Boolean functions when the order of variables is fixed. This property is used for checking the equivalence of two Boolean functions by checking the isomorphism of their ROBDDs. It is known that a BDD for an n-input function requires an exponential amount of memory in the worst case [251]. However, the size of the BDD varies with the kind of function, and there is still a class of Boolean functions that BDDs can represent with polynomial size, which makes BDDs attractive for practical use [281]. The size of BDDs (and also of canonical ROBDDs) depends strongly on how the variables are ordered, whereas the effect of variable ordering depends on the nature of the functions to be handled. An example of the drastic size difference caused by variable ordering, for an AND-OR two-level circuit with 8 inputs, is illustrated in Fig. 2.3. To reduce the size of the BDD model, a set of multiple functions can be united into a single graph, which consists of all the BDDs sharing sub-graphs with each other, as shown in Fig. 2.4. The graph, called a shared BDD [281], represents 4 functions: F1 = x2 ∧ x1, F2 = x2 ⊕ x1, F3 = x1, F4 = x2 ∨ x1. To extend BDDs to the word level, we need to introduce non-Boolean terminals that allow integers in terminal nodes. The resulting DDs are called Multi-Terminal BDDs (MTBDDs) [80] and Algebraic Decision Diagrams (ADDs) [53].
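The two reduction rules can be enforced on the fly during construction by hash-consing nodes in a "unique table". The following is a compact, hedged sketch of this idea; the class and variable names are my own, and the construction enumerates all input combinations, so it only suits tiny functions.

```python
# Sketch of ROBDD construction with the two reduction rules of Fig. 2.2:
# (a) a node whose 0- and 1-edges coincide is eliminated, and
# (b) structurally identical nodes are shared through a "unique table".

class ROBDD:
    def __init__(self, order):
        self.order = order
        self.unique = {}                     # (var, low, high) -> node id
        self.nodes = {0: None, 1: None}      # ids 0 and 1 are the terminals

    def mk(self, var, low, high):
        if low == high:                      # rule (a): node elimination
            return low
        key = (var, low, high)
        if key not in self.unique:           # rule (b): node sharing
            node_id = len(self.nodes)
            self.nodes[node_id] = key
            self.unique[key] = node_id
        return self.unique[key]

    def build(self, f, assignment=None, level=0):
        assignment = dict(assignment or {})
        if level == len(self.order):
            return int(f(assignment))        # terminal reached
        x = self.order[level]
        low = self.build(f, {**assignment, x: 0}, level + 1)
        high = self.build(f, {**assignment, x: 1}, level + 1)
        return self.mk(x, low, high)

f = lambda a: a["x1"] | (a["x2"] & a["x3"])
bdd = ROBDD(["x1", "x2", "x3"])
root = bdd.build(f)
print(root, bdd.nodes)   # only three shared non-terminal nodes remain
```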

Fig. 2.3 (a) A circuit, (b) the best order, (c) the worst order [280]
Fig. 2.4 Shared BDD [281]

An example of an MTBDD for the function 3x1 + x2 is depicted in Fig. 2.5. Note that the internal nodes are still labeled by Boolean variables. In [385], Ternary Decision Diagrams (TDDs) and, more generally, Multi-Valued Decision Diagrams (MDDs) were proposed. In the case of multi-valued input variables, each decision node has multiple branches; TDDs are the special case of decision diagrams with three branches at each node. As shown in Fig. 2.6, MDDs can be represented using ordinary BDDs by encoding the multi-valued inputs into binary-coded variables, and MDD operations can be translated to BDD operations. MDDs enable the utilization of BDD techniques for various applications in discrete variable problems. Another way to represent multi-valued functions is to use multiple ordinary BDDs. Integer numbers can be encoded into n-bit binary codes, and a given multi-valued

Fig. 2.5 Multi-terminal BDDs [380]

Fig. 2.6 Implementation of MDD [385]

function can be decomposed into a vector of BDDs. In this case, each bit corresponds to a BDD, and the BDDs may share subgraphs, as shown in Fig. 2.7. We have highlighted these few extensions of BDDs, based on graph sharing and on representing multi-valued functions, because similar techniques are used in the following sections on structural BDDs and high-level DDs. In the case of HLDDs, the variables are multi-valued at both the non-terminal and terminal nodes. Introducing HLDDs aims to overcome the increasing logic-level complexity of digital systems. Hence, when processing the functions represented by HLDDs, we have the similarity with MTBDDs [74, 80], and when processing

Fig. 2.7 A BDD vector representing the function 3x1 + x2 [380]

the multi-valued input variables, we have the similarity with multi-valued decision diagrams (MDDs), where the internal nodes of MDDs have multiple branches [385].
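To make the bit-slicing idea behind Fig. 2.7 concrete, here is a small hedged sketch (hard-coded to the two-input example of the figure; the helper name is my own) that decomposes a word-level function into one Boolean truth table per output bit, each of which could then be stored as an ordinary, possibly shared, BDD.

```python
# Hedged sketch of bit-slicing a word-level function (here 3*x1 + x2) into one
# Boolean function per output bit, as in the BDD vector of Fig. 2.7.

def bit_functions(word_func, width):
    """Truth table of each output bit (LSB first) of a 2-input word-level function."""
    tables = [dict() for _ in range(width)]
    for x1 in (0, 1):
        for x2 in (0, 1):
            value = word_func(x1, x2)
            for bit in range(width):
                tables[bit][(x1, x2)] = (value >> bit) & 1
    return tables

for bit, table in enumerate(bit_functions(lambda x1, x2: 3 * x1 + x2, 3)):
    print(f"bit 2^{bit}:", table)     # values 0, 1, 3, 4 sliced into three bit functions
```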

2.3 Structurally Synthesized Binary Decision Diagrams

The traditional use of BDDs has been functional, i.e., the target has been to represent and manipulate Boolean functions with BDDs as efficiently as possible. Less attention has been devoted to representing in BDDs the structural aspects of digital designs. Such a goal was first set up in [421, 439] and realized by introducing the possibility of a one-to-one mapping between the nodes of BDDs and signal paths or path segments in the related digital circuit. A special class of BDDs was introduced, called initially alternative graphs (AGs) and renamed later to structurally synthesized BDDs (SSBDD) [442], in accordance with the way they are synthesized, i.e., directly from the gate-level network structure of digital circuits. The direct mapping between SSBDDs and circuits allows modeling different test-related objectives and relations of gate-level networks, such as signal paths or path segments, faults in gates or connections, delays on paths, and interactions between faults such as fault masking, fault equivalence and dominance, etc. These objectives cannot be modeled and analyzed explicitly on the topology of the traditional BDDs.

Example 2.1 Let us have a gate-level combinational circuit with fan-outs only at the inputs, shown in Fig. 2.8a. Consider the fan-out-free region (FFR) of the circuit with inputs at the fan-out branches or at fan-out-free inputs. This FFR corresponds to a Boolean expression with literals denoting the FFR inputs, as shown in Fig. 2.8a:

y = x11 x21 ∨ x12 x31 x4 ∨ x13 x22 x32.   (2.1)

Fig. 2.8 Representation of a circuit with SSBDD and functional BDD

We now represent the FFR of Fig. 2.8a by the graph in Fig. 2.8b, which we call a Structurally Synthesized BDD (SSBDD). Each node in this graph represents a literal in the Boolean expression (2.1) and, respectively, a path in the FFR. For simplicity, we omit in this BDD the values on the edges as well as the 0-terminal and 1-terminal nodes. Let us agree that the value for the edge leaving a node to the right is 1 and for the edge leaving downward is 0. Similarly, let us agree that a missing edge to the right of a node means entering the 1-terminal, and a missing edge downward means entering the 0-terminal. The literals with two subscript indexes in formula (2.1) and in the SSBDD denote the branches of fan-out stems and represent signal paths in the circuit. The first index denotes the input number, and the second index numbers the branches of the fan-out stem. The node variables in an SSBDD may have inversions: we invert the node variable when the number of inverters on the corresponding signal path in the circuit is odd. Figure 2.8c presents the "traditional" BDD of the Boolean function of the circuit in Fig. 2.8a. This BDD represents only the function of the circuit and does not give any information about the internal structure of the circuit in terms of its fan-out paths. For this reason, let us from now on call the traditional BDDs functional BDDs, to emphasize their difference from structural BDDs in the form of SSBDDs. We see that the BDD in Fig. 2.8c is more compact than the SSBDD in Fig. 2.8b. The reason is that in an SSBDD the number of nodes is always equal to the number of signal paths in the related FFR, whereas functional BDDs do not need to show this relationship, and therefore the number of nodes in these BDDs is always a target for minimization. The explicit relationship between the nodes in the SSBDD and the signal paths in the related FFR allows direct handling of faults in the circuit and, therefore, the development of efficient algorithms for fault reasoning and fault simulation, including test generation and fault diagnosis.
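The traversal conventions just described can be captured in a few lines of code. The following is a hedged sketch for a hypothetical toy SSBDD (deliberately not the one of Fig. 2.8): it models y = x1·x2 ∨ ¬x1·x3, where the nodes x11 and x12 stand for the two branches of the fan-out input x1 and the node x12 carries an inversion; all names and the encoding are illustrative assumptions.

```python
# Toy SSBDD: node -> (input variable, inverted?, successor on 1, successor on 0).
# Convention: a missing 1-edge means entering '#1', a missing 0-edge means '#0'.
SSBDD = {
    "x11": ("x1", False, "x2",  "x12"),
    "x2":  ("x2", False, "#1",  "x12"),
    "x12": ("x1", True,  "x3",  "#0"),
    "x3":  ("x3", False, "#1",  "#0"),
}
ROOT = "x11"

def simulate(pattern):
    """Fault-free simulation: traverse the activated path for one test pattern."""
    node, path = ROOT, []
    while node not in ("#0", "#1"):
        var, inverted, succ1, succ0 = SSBDD[node]
        path.append(node)
        node = succ1 if (pattern[var] ^ inverted) else succ0
    return node, path

print(simulate({"x1": 1, "x2": 1, "x3": 0}))   # ('#1', ['x11', 'x2'])
```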

To illustrate fault simulation, let us assume that we have applied to the inputs of the circuit in Fig. 2.8a a test pattern T(x1, x2, x3, x4) = (1101), which produces the output value y = 1. The task of fault simulation is to determine which faults the given test pattern may detect. Simulation of a test pattern in an SSBDD means traversing the nodes in the graph according to the values of the node variables assigned by the test pattern. Let us call a path traversed in such a way an activated path. To simulate a stuck-at-0 or stuck-at-1 fault of the node variable x means to insert the fixed constant x ≡ 0 or x ≡ 1, respectively, at the variable and to traverse (activate) the graph according to the given test pattern and the inserted faulty value. The fault is detectable (detected) by the pattern if the activated paths reach different terminals with and without fault insertion. The test pattern 1101 in Fig. 2.8 activates the path l1 = (x11, x21, #1) through the nodes x11 and x21 up to the node #1 in the SSBDD in Fig. 2.8b, where #1 is the notation for the 1-terminal in BDDs. From the path l1 it follows that only the nodes x11 and x21 are responsible for the value y = 1 and, hence, are the only suspected variables as possible fault location candidates. Hence, only the faults x11 ≡ 0 and x21 ≡ 0 have to be simulated with the SSBDD to confirm whether the test pattern can detect these faults or not. To simulate the fault x11 ≡ 0, we traverse the graph starting from x11, guided by the test T, under injection of the value x11 = 0. Reaching the 0-terminal through the path l0 = (x11, x12, x31, x13, #0) results in the detection of the fault. Note that when simulating the fault x11 ≡ 0, the values x11 = 0 and x12 = 1 are different. In a similar way, we can simulate faults in the traditional BDD in Fig. 2.8c. However, since the node variables in that BDD are the primary input variables, we can simulate only the faults at the inputs, but not the faults inside the circuit, which belong to the fan-out paths. The reverse task of fault simulation is test pattern generation, where, to detect a given fault, we have to assign suitable values to the node variables so that the activated paths satisfy the fault detection condition described above. Consider test generation in the SSBDD for the fault x31 ≡ 0 in Fig. 2.9. In the first step, we assign x31 = 1 (the opposite of the faulty value 0) and activate a path from the root node x11 through x31 to the 1-terminal. In the second step, we assign the faulty value x31 = 0 and activate a path from x31 to the 0-terminal. As a result, we obtain the test pattern T(x1, x2, x3, x4) = (1011). It is easy to notice that test generation using SSBDDs may proceed not only fault by fault, but also for groups of faults simultaneously, group by group, which helps to increase the productivity of test generation. For example, at the first step of activating the path from x11 to the 1-terminal, targeting the fault x31 ≡ 0, we can identify several candidate faults located along the activated path and detect them with the same test pattern or by extending the pattern suitably. For example, the test created for the fault x31 ≡ 0 also detects the faults x12 ≡ 0 and x4 ≡ 0. The functional BDDs can be used for generating test patterns similarly to SSBDDs. However, the faults can be targeted explicitly only at the primary inputs. To target

Fig. 2.9 Representation of a circuit with SSBDD. Generation of a test for the fault at the node x31 in the SSBDD

other faults, we have to modify the BDD for each fault separately, one by one. This is the extra effort required by functional BDDs. When using SSBDDs for generating tests for circuits containing internal convergent fan-outs, we consider the combinational circuit as a network of modules (see Fig. 2.10), where each module represents a fan-out-free region (FFR) whose inputs may be either primary inputs or fan-out branches from outputs of other FFRs. The primary inputs are not highlighted in Fig. 2.10. This way of modeling the circuit at the FFR level, instead of the gate level, using SSBDDs keeps the complexity of the model (the total number of nodes in all graphs) for the full circuit linear in the number of gates of the circuit. Test generation for such a network of FFRs is performed FFR by FFR. For example, for FFR3 in Fig. 2.10, first the local test pattern is generated using the SSBDD. Then, the patterns (x1 and x2) are justified using the compact functional

Fig. 2.10 A combinational circuit represented as a network of FFRs


Table 2.1 Comparison of SSBDDs with gate-level circuits and ROBDDs for the ISCAS'85 circuits

| Circuit | # of gates | # of gate-level faults | # of nodes (ROBDD) | # of nodes (SSBDD) | # of faults in SSBDDs | SSBDD gain vs # of faults in circuits | SSBDD gain vs # of nodes in ROBDD |
|---------|------------|------------------------|--------------------|--------------------|-----------------------|---------------------------------------|-----------------------------------|
| c432    | 232        | 864                    | 30,200             | 308                | 616                   | 1.4                                   | 98.1                              |
| c880    | 357        | 1760                   | 7655               | 497                | 994                   | 1.8                                   | 15.4                              |
| c1908   | 718        | 3816                   | 12,463             | 866                | 1732                  | 2.2                                   | 14.4                              |
| c3540   | 1446       | 7080                   | 208,947            | 1648               | 3296                  | 2.1                                   | 126.8                             |
| c5315   | 1994       | 10,630                 | 32,193             | 2712               | 5424                  | 2.0                                   | 11.9                              |
| Average | 949        | 4830                   | 58,292             | 1206               | 2412                  | 1.9                                   | 53.3                              |

BDDs for the FFRs involved in justification (FFR1 and FFR2), and finally, the results of (responses to) the local test patterns (x3) are propagated to the primary output y through the related FFRs (in this example, through FFR4), again using the compact functional BDDs. The effect obtained by using SSBDDs is discussed in more detail further in the book. However, loosely, it can be outlined by the following:

• the direct representation of faults in SSBDDs allows processing them simultaneously (in parallel);
• the number of faults is collapsed to the node faults of the SSBDDs, and there is no need to maintain a special list of collapsed faults for modifying the graphs, as in the case of traditional (functional) BDDs;
• SSBDDs are more compact than the gate-level representation of FFRs; and
• SSBDDs provide a universal homogeneous structural model for the FFR modules, considered as "complex gates", regarding diverse tasks of test-related processing.

The gain in compactness of SSBDDs (in the size of the model) versus ROBDDs [58] can be seen by comparing the columns ROBDD and SSBDD in Table 2.1. We see that the size of SSBDDs, measured in the number of node-related faults, grows nearly linearly with the circuit size: it is about two times smaller than the number of gate-related faults in the different circuits of the benchmark family ISCAS'85 [51], whereas the gain in the number of nodes of SSBDDs (in the size of the model) compared to ROBDDs tends to be 1–2 orders of magnitude.
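Pulling the two directions together, the following hedged sketch repeats the hypothetical toy SSBDD used earlier (so that it stays self-contained) and shows both fault simulation of one pattern and single-fault test generation; the exhaustive search over input patterns merely stands in for the deterministic path-activation procedure described above.

```python
from itertools import product

# Toy SSBDD for y = x1*x2 v ~x1*x3 (illustrative, not a book example).
# node -> (input variable, inverted?, successor on 1, successor on 0)
SSBDD = {
    "x11": ("x1", False, "x2",  "x12"),
    "x2":  ("x2", False, "#1",  "x12"),
    "x12": ("x1", True,  "x3",  "#0"),
    "x3":  ("x3", False, "#1",  "#0"),
}
ROOT, INPUTS = "x11", ["x1", "x2", "x3"]

def simulate(pattern, stuck_at=None):
    """Traverse the activated path; stuck_at = (node, value) injects a node fault."""
    node = ROOT
    while node not in ("#0", "#1"):
        var, inverted, succ1, succ0 = SSBDD[node]
        value = pattern[var] ^ inverted
        if stuck_at and stuck_at[0] == node:
            value = stuck_at[1]
        node = succ1 if value else succ0
    return node

def fault_simulate(pattern):
    """Return the node stuck-at faults detected by one test pattern."""
    good = simulate(pattern)
    return [(n, v) for n in SSBDD for v in (0, 1)
            if simulate(pattern, (n, v)) != good]

def generate_test(node, value):
    """Exhaustive search stands in for deterministic path activation."""
    for bits in product((0, 1), repeat=len(INPUTS)):
        pattern = dict(zip(INPUTS, bits))
        if simulate(pattern) != simulate(pattern, (node, value)):
            return pattern
    return None                        # the fault is untestable in this graph

print(fault_simulate({"x1": 1, "x2": 1, "x3": 0}))   # detects x11/0 and x2/0
print(generate_test("x3", 0))                        # {'x1': 0, 'x2': 0, 'x3': 1}
```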

2.4 Shared Structurally Synthesized BDDs

In [334], a new type of multi-functional alternative graph was introduced; an example combining several functions in a single alternative graph is shown in Fig. 2.11, which presents a structural description of a 4-bit adder. Later, we renamed this type of SSBDD to Shared SSBDDs (S3 BDD), in accordance with the developments in the field of Shared or Multi-Rooted BDDs [281].

Fig. 2.11 Multi-functional AG (shared multi-rooted BDD) [334]

The adder circuit in Fig. 2.11, represented by five BDDs, describes 13 Boolean functions (5 output functions and 8 internal functions). Graphs 15, 17, 19 and 21, which represent the XOR components of the adder, can be inserted into the shared BDD at the top of Fig. 2.11 to substitute the nodes with the same numbers, respectively. In this case, a single S3 BDD represents the adder. The goal of introducing S3 BDDs was to further compress the SSBDD model while retaining its advantages related to the direct representation of the structure of gate-level circuits.

Example 2.2 An example of the combinational circuit c17 with its S3 BDD is presented in Fig. 2.12. Circuit c17 belongs to the benchmark family ISCAS'85 [51]. For simplicity, we omit the terminal nodes, as in the case of SSBDDs, and assume by convention that leaving the graph to the right (down) means entering the terminal node

Fig. 2.12 ISCAS circuit c17 [51] and its S3 BDD

#1 (#0). Additionally, if a node is labeled by a variable x, then the edges heading from the node either to the right or downward correspond to the values of x, either 1 or 0, respectively. As we see from Example 2.2, the S3 BDD allows reducing the size of the circuit's model by more than a factor of two compared to the size of the gate-level network: in contrast to the 16 lines to be simulated in the gate-level circuit, the S3 BDD contains only 7 nodes as simulation objects. Unlike SSBDDs, which represent a single FFR and have a single entry into the graph, S3 BDDs have more than one entry, where each entry variable corresponds either to an output or to an internal fan-out node of the original gate-level network. Denote the sub-graph of the S3 BDD with entry variable y as Gy. The graph in Fig. 2.12b contains three sub-graphs Gy1, Gy2, and Gz2, which represent the functions of the circuit at the outputs y1 and y2 and at the internal fan-out stem z2, respectively. The reason why the sub-graph Gz2 represents the inverted function of the circuit at the internal node z2 is explained later, when we discuss the method of synthesis of S3 BDDs. Note that all three sub-graphs Gy1, Gy2, and Gz2 share the sub-graph consisting of the nodes x32 and x4. On the other hand, Gz2 is a sub-graph of Gy1. Denote by Gyi ⊂ Gyj that Gyi is a sub-graph of Gyj. Each sub-graph Gy in the given S3 BDD represents a sub-circuit Sy with output y within the given gate-level network S. If Gyi ⊂ Gyj, then also Syi ⊂ Syj. Each node x in the sub-graph Gyi of the given S3 BDD represents a signal path or a path segment in the sub-circuit Syi and in all other sub-circuits Syj such that Syi ⊂ Syj. For example, the highlighted node x2 in the S3 BDD in Fig. 2.12, with entry y1, represents the full signal path in the circuit, shown in bold, from the primary input x2 to the primary output y1. On the other hand, the same node x2 in the sub-graph Gz2 represents the path segment in the sub-circuit Sz2. Note that the variables in the nodes of an S3 BDD may be either inverted or not inverted: if the number of inverters in the respective path is odd, then the variable of the node which represents the path is inverted; otherwise, if the number of inverters is even, there is no inversion. The nodes x1 and x5 in the S3 BDD in Fig. 2.12 represent the signal paths in the circuit from x1 to y1 and from x5 to y2, respectively. The node x31 represents the path from the lower input of g1 to y1. The nodes x32 and x4 in the S3 BDD belong to the three sub-graphs Gy1, Gy2, and Gz2, and represent the path segments from the related nodes x32 and x4 in the circuit to the outputs y1, y2, and the internal fan-out node z1, respectively. The procedures of fault simulation and test generation are carried out in S3 BDDs in a similar way as in SSBDDs. For example, the bold lines in the S3 BDD in Fig. 2.12 show a path which corresponds to the assignments x1 = 0, x2 = 1, and x3 = 0. The pattern T(x1, x2, x3, x4, x5) = (010--) is a test for the fault x2/0 in the S3 BDD, testing the bold path in the circuit from x2 to y1. Extending the pattern by the assignment x4 = 1, we get the test T(x1, x2, x3, x4, x5) = (0101-) for the fault x32 ≡ 0 in the S3 BDD, testing the path in the circuit from x32 to y1. Extending the pattern by the assignment x5 = 1, we get the test T(x1, x2, x3, x4, x5) = (01010) for the fault x5 ≡ 0 in the S3 BDD, testing the path in the circuit from x5 to y2.
Note that the faults x2 ≡ 0 and x32 ≡ 0, detectable by the test pattern T = (01010), are propagated in the graph Gy1 to the circuit output y1, whereas the fault x5 ≡ 0 is propagated in the graph Gy2 to the circuit output y2. This was an example demonstrating the test generation procedure for several faults simultaneously, by gradually extending the test pattern using the same graph structure. This is a fundamentally different approach compared to the traditional BDD manipulation technology, where each fault requires a modification of the BDD and, therefore, several faults cannot be addressed simultaneously.

Example 2.3 Figure 2.13 illustrates the diagnostic modeling of sequential circuits with S3 BDDs. In Fig. 2.13a, the circuit s27 from the ISCAS'89 benchmark family [39] is depicted, and in Fig. 2.13b, the corresponding S3 BDD model is shown. In sequential circuits, we have to model two types of functions: state transfer functions and output functions. The circuit in Fig. 2.13a has a single output 26 and three flip-flops T1, T2, and T3 to represent the states. The nodes labeled by the present state variables (flip-flops) are denoted by squares. The circuit has 4 fan-out nodes 8, 9, 14 and 21, represented by respective entries in the S3 BDD model. The total number of 10 nodes in the S3 BDD model is noticeably reduced compared to the number of 27 lines in the original gate-level sequential circuit s27.

Fig. 2.13 ISCAS'89 circuit s27 [51] and its S3 BDD

Table 2.2 Structural interpretation of the S3 BDD for s27

| G | N | Signal paths in the circuit | L |
|---|---|-----------------------------|---|
| GT1 | 3 | 3–15–T1 | 3 |
|     | 2 | 2–9–10–15–T1 | 5 |
|     | T1 | T1–7–9–10–15–T1 | 6 |
| GT2 | 8 | 81–12–25–T2 | 4 |
| G26, GT3 | T2 | T2–5–21–23–26 | 5 |
|     | 9 | 92–11–18–20–21–23–26 | 7 |
|     | 14 | 141–17–18–20–21–23–26 | 7 |
|     | 4 | 4–19–20–21–23–26 | 6 |
|     | T3 | T3–6–14–16–19–20–21–23–26 | 9 |
|     | 1 | 1–82–13–142–16–19–20–212–23–26 | 10 |

As a result, the number of faults to be simulated collapses significantly, and the speed of test generation and fault simulation increases. We already mentioned that a one-to-one relationship exists between the nodes in S3 BDDs and signal paths or path segments in the original circuit. Table 2.2 presents the structural interpretation of the nodes of the S3 BDD in Fig. 2.13. Column G shows the sub-graphs of the S3 BDD model (for 3 state-transfer functions and 1 output function), column N lists the nodes of the S3 BDD, and the column "Signal paths in the circuit" shows the paths in the circuit, consisting of consecutive lines (starting from primary inputs or fan-out stems). Column L ("length") shows how many lines the path contains. For example, the single node 1 in the graph G26 represents a signal path through 10 lines. When constructing S3 BDDs, for each fan-out stem of the circuit one of the branches is not represented in the form of a node in the graph; instead, it is represented as an entry into the respective sub-graph. For example, for the two branches of the fan-out stem 9 in Fig. 2.13a, two different path modeling techniques in the S3 BDD model are illustrated in bold in Fig. 2.13b. Node 92 represents one branch of the fan-out and the respective path (92, 11, 18, 20, 21, 23, 26). The second branch is represented by the entry 91 in the graph GT1. Table 2.2 presents the longest paths in the circuit in Fig. 2.13a for each node in Fig. 2.13b. In fact, each node may represent different segments of the longest path, depending on the sub-graphs and the respective entries into them located between the node under consideration and the highest entry into the graph. For example, the longest signal path for the node 1 in the S3 BDD is highlighted in the circuit in Fig. 2.13a by bold lines. Depending on the point at which the signal applied to input 1 is to be calculated and observed, there are different path segments to be simulated in the S3 BDD, determined by the different entries into the graph, such as 82, 142, 21, 26, and highlighted in bold in the last row of Table 2.2, respectively. More precisely, the node 1 represents the path segment (1, 82) in the circuit for the sub-graph G8,2 in the S3 BDD, (1, 142) for G14,2 and (1, 212) for G21,2.


Fig. 2.14 S3 BDD for a microprogram automaton (a Mealy machine) [375]

Example 2.4 Figure 2.14 presents an example of an S3 BDD for a control circuit, represented as a data flow diagram and as a set of Boolean equations designed from the data flow diagram. Here, Z denotes the Start signal, R is a Reset signal, and the variables xi, yi, and qi represent the inputs, outputs, and state variables of the circuit, respectively. The current state of the circuit is denoted by q, and the previous state by q'. The clock signal is omitted in the equations. The system of 7 Boolean equations in Fig. 2.14b contains 18 variables, whereas the S3 BDD model in Fig. 2.14c consists of only 12 nodes, demonstrating the gain in the complexity of the model. The 7 equations of the circuit, to be used for designing the circuit, are represented by 7 sub-graphs embedded in the two S3 BDDs and sharing common sub-graphs. In Table 2.3, we present for the ISCAS'85 [51], ISCAS'89 [39] and ITC'99 [86] benchmark circuits, in columns 2–4, the sizes of the gate-level circuits and the sizes of the SSBDD and S3 BDD models [455]. In columns 5 and 6, we compare the gains in the sizes of the SSBDDs and S3 BDDs, i.e., their numbers of nodes versus the numbers of lines in the gate-level circuits. The reduction in the number of nodes in S3 BDDs compared with SSBDDs allows a speedup in logic simulation. In fault simulation, the gain is even bigger: if the gain in the number of nodes is n, then the gain in fault-by-fault logic simulation is also n times, but the gain in fault simulation may reach up to 2n² times, because each node has a double impact, as a simulated signal and as a site of two faults.


Table 2.3 Comparison of the sizes of SSBDDs and S3 BDDs versus circuit sizes

| Circuit (1) | Lines (gates) (2) | Nodes, SSBDD (3) | Nodes, S3 BDD (4) | BDDs versus gates, SSBDD (5) | BDDs versus gates, S3 BDD (6) |
|---|---|---|---|---|---|
| c432 | 432 | 308 | 284 | 1.40 | 1.74 |
| c880 | 775 | 497 | 442 | 1.56 | 1.96 |
| c1355 | 1097 | 809 | 584 | 1.36 | 1.99 |
| c1908 | 1394 | 866 | 684 | 1.61 | 2.14 |
| c2670 | 2075 | 1313 | 1145 | 1.58 | 2.05 |
| c3540 | 2784 | 1648 | 1362 | 1.69 | 2.12 |
| c5315 | 4319 | 2712 | 2333 | 1.59 | 1.96 |
| c6288 | 4864 | 3872 | 2448 | 1.26 | 2.01 |
| c7552 | 5795 | 3552 | 2793 | 1.63 | 2.13 |
| s13207 | 12,441 | 5228 | 4287 | 2.38 | 3.31 |
| s15850 | 14,841 | 6075 | 4997 | 2.44 | 3.35 |
| s35932 | 32,624 | 19,547 | 15,666 | 1.67 | 2.34 |
| s38417 | 34,831 | 16,160 | 12,640 | 2.16 | 3.10 |
| s38584 | 36,173 | 19,179 | 16,379 | 1.89 | 2.42 |
| b14 | 19,491 | 8248 | 6871 | 2.36 | 2.96 |
| b15 | 18,248 | 15,254 | 12,795 | 1.20 | 1.48 |
| b17 | 64,711 | 46,397 | 38,949 | 1.39 | 1.72 |
| b18 | 2E+05 | 132,122 | 105,495 | 1.51 | 1.96 |
| b19 | 4E+05 | 267,092 | 212,806 | 1.50 | 1.94 |
| Average | 48,837 | 28,994 | 23,314 | 1.68 | 2.25 |

2.5 High-Level Decision Diagrams

2.5.1 About the Similarity of Low- and High-Level Structural DDs

The main idea of the concept of structural BDDs (SSBDDs and S3 BDDs) is the one-to-one mapping of the gate-level structural information of logic circuits into the uniform graphical form of Decision Diagrams (DD). As a result, the heterogeneity of the components of logic circuits is converted into a homogeneous DD model with a uniform atomic structure, with nodes as atoms, and with uniform processing algorithms for the nodes. The importance of the uniformity of the DD model becomes even more apparent when we move from the gate level to the higher FFR level, where the complexity is reduced while the homogeneity is maintained. In this chapter, we consider further possibilities of generalizing DDs from the logic level to higher functional levels by introducing the general term of High-Level


Fig. 2.15 Topological similarities of low- and high-level fault reasoning in SSBDDs and HLDDs

Decision Diagrams (HLDD) with the property of a mapping between the nodes of HLDDs and the high-level structural components of digital systems. The meaning and role of the universality of DDs lie in the possibility of generalizing and extending the methods for test generation, fault simulation and fault diagnosis, developed for logic-level circuits, to higher abstraction levels of digital systems, using a uniform graph-topology-based formalism. For this purpose, we extended the class of variables from Boolean variables to Boolean vectors or integer variables. Another extension applies to the nodes of HLDDs: instead of Boolean variables, they are labeled with references to arbitrary functions (logic, arithmetic, algorithmic), which may, in turn, be represented by lower-level DDs. The uniformity of test generation and fault simulation is explained in Fig. 2.15, where two skeletons of the DD topology at different levels are illustrated: Fig. 2.15a refers to an SSBDD Gy, and Fig. 2.15b refers to an HLDD GY. In the case of SSBDDs, each non-terminal node m has two output edges, and the graph has two terminal nodes, #0 and #1, labeled with the Boolean constants 0 and 1, respectively. HLDDs differ from SSBDDs in having more than two edges from non-terminal nodes and more than two terminal nodes. In the general case, the terminal nodes may have as labels word-level constants, register variables or functional expressions. Both graphs represent a mapping into the structure of the system they describe. In both cases, the faults in the system can be modeled as errors at the node variables (or functions), and for both types of graphs, the conditions of fault detection are similar. When a test pattern is applied, a path lj in the HLDD, shown as an example in bold in Fig. 2.15, is activated, producing an expected value Y = Fj. Any node on the activated path may be a "candidate" for fault detection by the test pattern at the location associated with the node. To identify, using the HLDD, whether a fault is detected by the test pattern, in the general case we have to:
1. inject the fault by changing the value of the node variable on the activated path;
2. fix the expected value of the output variable Y at the given test pattern;
3. simulate the new test pattern, modified by the fault;
4. determine the terminal reached by the simulation of the modified faulty pattern;
5. calculate the value YF of the expression in the terminal node;
6. compare the calculated value YF with the expected value Y;
7. if the values are different, the fault is detected by the pattern; otherwise, it is not detected.
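The check above can be expressed in a few lines of code. The following is a hedged sketch on a hypothetical toy HLDD (the node names, control variables and terminal expressions are illustrative assumptions, not taken from the book); a control fault is injected as a wrong decision value at one node.

```python
# Toy HLDD: non-terminal nodes branch on a control variable,
# terminal nodes hold a word-level expression evaluated on the data state.
HLDD = {
    "y1": {"var": "y1", "edges": {0: "t_add", 1: "y2"}},
    "y2": {"var": "y2", "edges": {0: "t_in", 1: "t_hold"}},
    "t_add":  {"expr": lambda d: d["R1"] + d["R2"]},
    "t_in":   {"expr": lambda d: d["IN"]},
    "t_hold": {"expr": lambda d: d["R2"]},
}
ROOT = "y1"

def evaluate(controls, data, fault=None):
    """Traverse from the root; fault = (node, wrong_value) changes one decision."""
    node = ROOT
    while "expr" not in HLDD[node]:
        value = controls[HLDD[node]["var"]]
        if fault and fault[0] == node:
            value = fault[1]                        # step 1: inject the fault
        node = HLDD[node]["edges"][value]           # steps 3-4: re-simulate
    return HLDD[node]["expr"](data)                 # step 5: value at the terminal

controls, data = {"y1": 0, "y2": 1}, {"R1": 3, "R2": 4, "IN": 9}
expected = evaluate(controls, data)                 # step 2: expected value Y
faulty = evaluate(controls, data, ("y1", 1))        # wrong decision at node y1
print("detected" if faulty != expected else "missed")   # steps 6-7
```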

The test generation task is the opposite one. To generate a test for the fault x(m) ≡ 0 of the node m in the SSBDD, we have to activate in Gy, with the test pattern, the paths lm and l1, and, with the fault injected, the path l0. In the case of test generation for the node m in the HLDD, we have to create a suitable fault model (a set of possible faults R(m) of the component represented by the node m). The fault model identifies for each fault r ∈ R(m) two different edges ej and ek, between which a diversion would occur due to the fault r. Then, to test a fault r(ej, ek) ∈ R(m), we have to activate with the test pattern two paths lj and lk, so that Fj ≠ Fk. For different classes of systems or sub-systems, suitable fault models have to be chosen. This is the only essential difference between SSBDDs and HLDDs; otherwise, the graph topology is handled similarly in both cases. In this sense, SSBDDs can be regarded as a special case of HLDDs.

2.5.2 RTL Modeling of Digital Modules with HLDDs

In control-intensive RTL descriptions, we usually partition the systems or sub-systems into control and data parts. In this case, the non-terminal nodes in HLDDs represent the control part, and their labels are control signals. On the other hand, the terminal nodes in HLDDs correspond to the data part. They are labeled with data words or functions of data words, which may represent buses, registers, or data manipulation sub-circuits or sub-systems.

Example 2.5 In Fig. 2.16, an example of an RTL data part and its cycle-based HLDD is presented. The HLDD describes the behavior of the input logic of the register R2. The result of the process carried out in the data part is saved in register R2 when the clock signal is applied. The integer variables y1, y2, y3, y4 represent the control signals produced by the control part; for the data part, they model the multiplexers M1, M2, M3 and the input logic of the register R2, respectively. The variables R1 and R2 in the terminal nodes represent the current states of the registers, IN denotes the input bus, and the expressions R1 + R2 and R1 * R2 represent the adder and the multiplier, respectively. Each node in the HLDD refers to a structural element or system component. To test a node in the HLDD means to test the bus, component or sub-circuit corresponding to the node. To simulate a clock cycle, we traverse a path

Fig. 2.16 Representing a register-transfer level data part of a digital system by an HLDD

in the HLDD according to the values of the control signals. For example, if y4 = 0, the register R2 is reset to R2 = 0. If y4 = 2, y3 = 3, and y2 = 0, a multiplication takes place: R2 = R1 * R2. The traversed path in the HLDD is highlighted in bold. Without going into details, we may refer here to a simplified fault model for verifying the correctness of the operation modes of the control unit related to the internal nodes. For example, to verify the correctness of the node y2, we have to apply the test pattern T = (y4, y3, y2, y1) = (2, 3, 0, -), highlighted with a bold path in Fig. 2.16b, in a loop for all data patterns which satisfy the condition R1 * R2 ≠ IN * R2 in all bits of the data words. Depending on the fault classes we target, the data constraints may be different and more complex, as discussed further in the book.
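As a hedged illustration of cycle-based simulation with this HLDD, the sketch below encodes only the two decisions quoted above (y4 = 0 resets R2; y4 = 2, y3 = 3, y2 = 0 selects R1 * R2); the remaining branches of Fig. 2.16 are not reproduced and fall through to a "hold" default, which is purely an assumption of the sketch.

```python
# Partial, hedged model of the R2 input-logic HLDD of Fig. 2.16.
def next_R2(ctrl, state):
    """One clock cycle of register R2, modeled as an HLDD traversal."""
    if ctrl["y4"] == 0:                               # node y4, edge 0 -> terminal '0'
        return 0
    if ctrl["y4"] == 2:                               # node y4, edge 2 -> node y3
        if ctrl["y3"] == 3 and ctrl["y2"] == 0:       # ... -> terminal 'R1 * R2'
            return state["R1"] * state["R2"]
    return state["R2"]                                # assumed default: R2 holds its value

state = {"R1": 5, "R2": 7}
print(next_R2({"y4": 2, "y3": 3, "y2": 0, "y1": 0}, state))   # -> 35
print(next_R2({"y4": 0, "y3": 0, "y2": 0, "y1": 0}, state))   # -> 0
```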

2.5.3 RTL Modeling of Control Circuits with HLDDs

For representing the control parts of digital systems at the high level, State Transition Diagrams (STD) are traditionally used. This model is widely used for design purposes and for high-level simulation of control processes. STDs are a suitable tool for a straightforward understanding of the system's functionality. However, for high-level fault reasoning and for solving test-related tasks such as test generation, HLDDs are more suitable than STDs owing to their better expressiveness for reasoning about cause-effect relationships. In [462], alternative graphs with integers in terminal nodes were introduced for representing the control parts of digital systems, with the goal of high-level calculation of testability parameters (controllability and observability). A similar model was proposed in [80] under the name of Multi-Terminal BDDs (MTBDD) to represent functions from binary vectors to integers, and in [53] under the name of Algebraic DDs (ADD) for handling arithmetic operations, such as addition, subtraction and multiplication.


Fig. 2.17 Representing a control part of a system by an HLDD

Example 2.6 An example of the HLDD representing the STD model of a control part of a digital system is depicted in Fig. 2.17. In this HLDD, a new extension is introduced into the HLDD model for processing vector variables: both the entry label of the HLDD and the labels in the terminal nodes represent complex vectors in the form of concatenated components. The components may be either variables or constants, being either Boolean or integer. Let us call the graph in Fig. 2.17b a Vector High-Level Decision Diagram (VHLDD). HLDDs and VHLDDs are shared, multi-terminal and multi-valued in nature, in the traditional ontology of BDDs. The VHLDD in Fig. 2.17b describes the behavior of the vector q.y, where q is the next-state integer-type variable and y is the output Boolean-type variable of the control part. The non-terminal nodes are labeled by the Boolean input variables xi, the reset signal Res and the integer current-state variable q'. The current-state variable q' is a multi-valued variable, and for each of its values the node q' has a respective neighbor node. Terminal nodes are labeled by vector constants. The constant vector of the terminal node m is assigned to the state/output vector q.y, component by component, when a clock signal is applied and a respective path from the root node Res to the terminal node m is activated. For example, at the current state q' = 2 and at the input signals Res = 0, x2 = 0, the path from node 1 to node 9 is activated (highlighted in bold). When the clock signal is applied, the new state is q = 5, and the Boolean output is y = 0. The symbol * in terminal node 13 means that the new state of the control circuit is unknown. Such a case may happen if the control circuit produces an illegal signal due to a possible fault.

Fig. 2.18 Two methods of representing the uncertainty in logic-level sequential circuits

For describing uncertainty situations in sequential circuits and flip-flops by logic-level SSBDDs, a third "unknown" constant was introduced [425] alongside the two terminal nodes with the Boolean constants #0 and #1. In Fig. 2.18, a BDD for the multi-functional flip-flop is shown. If both asynchronous inputs S and R have the active value 1, then the state of the flip-flop is unknown. This situation can be discovered when simulating the BDD in Fig. 2.18 and reaching the terminal node labeled with the symbolic constant U. This extension of BDDs can be considered a special case of multi-terminal BDDs [74, 80].

2.5.4 Instruction-Level Modeling of Microprocessors

In the case of microprocessors, the terminal nodes in the HLDDs are labeled by high-level functions to be processed in the data part of the processor. This construction enables simulating the data part very efficiently when generating test sequences for the control part, which is described by the non-terminal nodes. When generating tests for the data part, the functions at the terminal nodes have to be targeted separately, using for test generation the logic-level structural descriptions of these functions, represented in turn with DDs, this time as SSBDDs or S3 BDDs. In this sense, we can consider the terminal nodes of HLDDs as the border (or interface) between two hierarchical levels of system modeling, enabling easy cross-level operations to cope with the structural complexity of systems.

Example 2.7 Consider a simple hypothetical microprocessor given at the ISA level by the list of 10 instructions in Fig. 2.19a. Based on this information, we can represent the microprocessor by three instruction-based HLDDs: GOUT, GA, and GR, which describe the processor's output behavior and the behavior of the two subsystems, the accumulator A and the register R, respectively. In this example, in Fig. 2.19c, we see that the system of three HLDDs actually represents a network of three high-level modules, each described by its HLDD. In other words, we can say that transforming a processor's instruction set into the HLDD model is equivalent to transforming the behavior of the microprocessor into a structural model. For such a


Fig. 2.19 Instruction set, HLDD and high-level structure of a hypothetical microprocessor. Panel (a), the instruction set: I1: MVI A,D (A ← IN); I2: MOV R,A (R ← A); I3: MOV M,R (OUT ← R); I4: MOV M,A (OUT ← A); I5: MOV R,M (R ← IN); I6: MOV A,M (A ← IN); I7: ADD R (A ← A + R); I8: ORA R (A ← A ∨ R); I9: ANA R (A ← A ∧ R); I10: CMA (A ← NOT A)

model, we can apply common test generation methods typically used for high-level structures, such as the RTL networks considered in Example 2.5. Figure 2.19 illustrates test generation for the node A + R in the HLDD GA. We assume that the test patterns for testing the adder are generated at the logic level using SSBDDs, such as the one represented in Fig. 2.11. To activate path 7 in the HLDD GA, we apply the instruction I7: ADD R (highlighted in bold in GA). Before that, the registers A and R must be loaded with one of the test patterns for the adder. For loading R, we activate path 5 in GR by applying the instruction I5: MOV R,M (highlighted in bold in GR). For loading A, we activate path 6 in GA by applying I6: MOV A,M. For reading out the result of the test (storing it in the memory), we activate path 4 in GOUT by applying I4: MOV M,A. In this way, we have generated a test program T = (I6, I5, I7, I4). This test program must be executed in a cycle for all test patterns generated at the logic level for the adder. In the HLDDs in Fig. 2.19b, we see the similarity with the HLDD in Fig. 2.16b. The difference is in the larger number of internal nodes in the paths from the root node to the terminal nodes in Fig. 2.16b. The same increase in the number of internal nodes also happens in the case of instruction-level modeling of microprocessors if we split the instruction code into fields and represent each field by a separate control variable. As in Fig. 2.16b, also in Fig. 2.19b the terminal nodes enable a hierarchical approach to test generation for microprocessors: for the control part at the high level, targeting the internal nodes of HLDDs, and for the data part at the low level, targeting the respective SSBDD models of the functions labeling the terminal nodes of HLDDs. When generating tests for the control signals at the non-terminal nodes of the HLDD, we need a high-level fault model for these nodes. This problem is discussed further in Chap. 8. The HLDD model serves as a valuable tool to cope with the complexity of the low gate-level modeling of digital systems. The primary motivation for introducing the HLDD model was to integrate the decision principle with hierarchical decomposition and, in this way, to cope with the complexity problems of diagnostic analysis of digital systems.
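For illustration, the following hedged sketch executes the assembled test program T = (I6, I5, I7, I4) on a small behavioral model of the hypothetical microprocessor; the instruction semantics follow the register-transfer list recovered for Fig. 2.19a, and the adder test patterns below are placeholders for those generated at the logic level with SSBDDs.

```python
# Behavioral sketch of running the SBST program T = (I6, I5, I7, I4).
def run(program, memory_in):
    """Execute a list of instruction mnemonics; return the value written out."""
    A = R = out = None
    data = iter(memory_in)                 # operands read from memory, in order
    for instr in program:
        if instr == "MOV A,M":   A = next(data)      # I6: A <- IN
        elif instr == "MOV R,M": R = next(data)      # I5: R <- IN
        elif instr == "ADD R":   A = A + R           # I7: A <- A + R
        elif instr == "MOV M,A": out = A             # I4: OUT <- A
    return out

test_program = ["MOV A,M", "MOV R,M", "ADD R", "MOV M,A"]   # T = (I6, I5, I7, I4)
adder_patterns = [(0, 0), (1, 2), (7, 8)]                   # placeholder patterns
for a, r in adder_patterns:
    response = run(test_program, [a, r])
    print(f"A={a}, R={r} -> OUT={response}, expected {a + r}")
```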


The core principle of HLDDs is partitioning the system's behavior into two independent parts: decision-making and operation. The decision-making is related to the control actions (the control part of the system), whereas the operating part is related to the portfolio of the system's functionality (the set of operation modes or functions of the system). The assumption is that the complexity of the high-level decision-making is manageable at a single higher level, whereas the operational functionality, constituting the most complex part of the system, has to be handled hierarchically. For this reason, the topology of the given HLDD is mainly devoted to the implementation of the decision-making related to the selection and activation of the proper operations. To cope with the complexity of the system's operational part, two steps of decomposition are foreseen in HLDDs. First, the decision-making part splits the system's functionality into a set of independent functions related to the terminal nodes of HLDDs. Second, each terminal node represents a separate function, implemented as a system module or a sub-system, which can, in turn, be the subject of representation and partitioning in the necessary detail at the next lower level of the hierarchy. The implementations of the functions related to the terminal nodes of the HLDDs may be represented either as next-level HLDDs, as networks of SSBDDs, or as single SSBDDs (or S3 BDDs).

2.6 Structural DDs as a Universal Tool for Digital Test

The described structural DDs, in the form of SSBDDs, S3 BDDs and HLDDs, represent a universal homogeneous structural model for the diagnostic modeling of digital systems regarding diverse tasks of test-related analysis. The statement about universality is illustrated in Fig. 2.20. It is based on juxtaposing two methods of modeling a given circuit: gate-level modeling of FFRs as gate networks versus single-SSBDD-based modeling. Assume that the design environment has to serve for performing a variety of m diverse tasks related to design and test problems, for example, the m = 9 diverse tasks listed in the middle column of Fig. 2.20. When using the gate-level reasoning concept, we need, for each of the m tasks and for each of the n different gate types of the given library, a special dedicated model or calculation procedure. As an example, the formulas or functional data for the AND gate are depicted in the left column. In the case of SSBDDs, we need for each task only a single procedure, running on the SSBDD as a parameter, which represents the given type of gate, complex gate, or network composed of logic gates. In the case of the traditional gate-level approach, when a new type of component, e.g., a complex gate, appears in the design library, a new set of m diverse algorithms has to be developed for that new component and introduced into the library. Most of the tasks and notations listed in the left column, and the related algorithms to be used for SSBDDs, are discussed later in the book. We can consider the universality of SSBDDs in horizontal and vertical cross-views. Let us call the discussed cross-task universality the horizontal one.

Fig. 2.20 Horizontal cross-task universality of SSBDDs (named previously as AGs) [437]

The vertical cross-level universality is illustrated in Fig. 2.21, showing the possibility of multi-level use of DDs, combining low-level SSBDDs with high-level DDs that have the same decision concept and graph topology, yet are considered at different hierarchical levels with different complexities. Presenting the design at the different levels depicted in Fig. 2.21 traditionally requires developing different languages and algorithms for each level. Transforming the descriptions into decision diagrams allows the use of a uniform DD formalism for performing diverse simulation and testing tasks at different levels of system abstraction. The design and test tasks can be divided into direct and reverse tasks, such as simulation and effect-cause (or cause-effect) fault reasoning, respectively. The procedural descriptions of designs are efficient for the simulation-based direct tasks, but are not well suited for directly revealing the cause-effect relationships in the reverse tasks. At the same time, the DD-based model is well suited for both: for direct simulation and for reverse diagnostic reasoning of systems. On the other hand, compared to behavioral ISA-level representations in the form of system graphs [404], RTL languages [387], or VHDL descriptions [498], which use many different high-level fault models and need a dedicated test-related

2.6 Structural DDs as a Universal Tool for Digital Test

35

Fig. 2.21 Vertical cross-level universality [437]

procedure for each fault model, the SSBDD/HLDD approach is based on a uniform DD-based fault model, and uniform test related algorithms.

Chapter 3

Structurally Synthesized BDDs

In this Chapter, we present the theoretical basics of structural BDDs, considering first the model of Structurally Synthesized BDDs (SSBDDs) and then the more concise model of Shared SSBDDs (S3BDDs). We present the definitions of the models and the methods for synthesizing them from the descriptions of given gate-level circuits. We discuss the main properties of the graphs, describe their relationship to Boolean algebra, and compare them with traditional BDDs. We also consider the optimization of the models by introducing the theory of equivalent transformations of SSBDDs and by deriving lower bounds on the sizes of S3BDDs created from a given gate-level circuit.

3.1 Synthesis of SSBDDs

We present a method for the synthesis of SSBDDs from gate-level circuits by iterative superposition of the elementary BDDs of logic gates. The method allows representing in SSBDDs the structural specifics of the circuits alongside their functions, by establishing a one-to-one mapping between the nodes of SSBDDs and the signal paths in the respective circuits. Such a mapping creates the possibility of diagnostic reasoning of single and multiple faults directly in the model, which is not possible with traditional BDDs that describe only Boolean functions. We also present a method to verify the correctness of SSBDDs in terms of compliance with certain structural rules, to guarantee the viability of the algorithms of diagnostic fault reasoning.



3.1.1 Definition of SSBDD

The structural BDDs were proposed, under the name of Alternative Graphs (AGs), in [335, 422, 428, 430, 440] for generating test patterns for gate-level digital circuits. Two classes of BDDs were introduced: structural and functional AGs, for diagnostic modeling of circuits, fault simulation and test generation. The functional AGs were similar to the binary decision programs [245], not known at that time to the pioneers of AGs. The structural AGs were later renamed Structurally Synthesized Binary Decision Diagrams (SSBDDs), considering the subsequent wide use of Binary Decision Diagrams (BDDs) after the appearance of the seminal paper by Randal Bryant [60].

Definition 3.1 An SSBDD is a rooted directed acyclic graph G = (M, Γ, X) with a set of nodes M = {m}, where m0 ∈ M is the root node. Γ is a relation on M, where Γ(m) ⊂ M and Γ⁻¹(m) ⊂ M denote the sets of neighboring successors (child nodes) and predecessors (parent nodes) of the node m, respectively. The nodes m ∈ M are labeled with Boolean variables x(m) ∈ X. Each node m has two successor (child) nodes, denoted by me ∈ Γ(m), where e ∈ {0, 1}. The graph has two terminal nodes (leaves), denoted by mT ∈ MT = {#0, #1}, which are labeled by the Boolean constants e(mT) ∈ {0, 1}, respectively. Let us call the terminal nodes #0 and #1 the 0-terminal and the 1-terminal, respectively.

Definition 3.2 Activating edges and paths. The variables x(m) ∈ X can be assigned Boolean values. If there exists an assignment x(m) = e, we say that the edge (m, me) in G is activated. Let us call the edge (m, m1), activated by x(m) = 1, the 1-edge of the node m, and the edge (m, m0), activated by x(m) = 0, the 0-edge of the node m. Activated edges that connect nodes mi and mj form an activated path l(mi, mj) from node mi to node mj. An activated path l(m0, mT) is called a fully activated path. There may be more than one path between the nodes mi and mj; let us denote by L(mi, mj) the set of all paths between the nodes mi and mj.

Definition 3.3 SSBDD represents a function. An SSBDD Gy = (M, Γ, X) represents a Boolean function y = f(X), where X = (x1, x2, …, xn), iff for every vector Xt ∈ {0, 1}n there is a full path l(m0, mT) activated in Gy such that y = f(Xt) = e(mT).

Definition 3.4 SSBDD represents the structure of a digital circuit. An SSBDD Gy = (M, Γ, X) represents a Boolean function y = f(X) in the form of a Boolean parenthesis expression, which describes a gate-level fan-out-free combinational circuit Cy with a set of inputs IN, composed of AND, OR and NOT gates, where |M| = |IN|, |X| ≤ |M|, and there exist a bijection M → IN and a surjection M → X. Each node m in Gy represents a signal path C(x(m), y) ⊂ Cy in the circuit Cy from the input with variable x(m) to the output y.

Definition 3.5 SSBDD nodes represent the locations of faults in a digital circuit. In the SSBDD Gy that models the digital circuit Cy, each node m in Gy represents a compact joint fault location of all faults R(m) related to the signal path C(x(m), y) ⊂ Cy in the circuit from the input x(m) to the output y.


For the class of Stuck-At Faults (SAF), R(m) = {x(m) ≡ 0, x(m) ≡ 1}, where x(m) ≡ 0 and x(m) ≡ 1 denote the faults SAF/0 and SAF/1, respectively, at the fault location related to the node m in Gy, and hence at all lines on the signal path C(x(m), y) ⊂ Cy in the circuit.

Example 3.1 Let us consider in Fig. 3.1a a digital circuit with a Fan-out-Free Region (FFR), highlighted by a dotted rectangle, which implements the Boolean expression

y = f(x1, x2, x3, x4) = x11x21 ∨ x12(x31 ∨ x4) ∨ x13x22x32.   (3.1)

Figure 3.1b shows the SSBDD Gy, which represents the FFR Cy. The graph Gy is synthesized by superposition of the elementary BDDs of the gates in the circuit Cy (see Sect. 3.1.2). It consists of 8 nodes labeled with input variables of the FFR, which correspond to the literals of the expression (3.1). There is a one-to-one mapping between the paths in the FFR of the circuit and the nodes in the graph. For example, the node x11 in the SSBDD represents the signal path C(x11, y) = (x11, a, y) in the circuit. Note that the fan-out stem x1 is not represented in the graph Gy.

Recall the convention made in Sect. 2.3 regarding omitting the labels on the edges and the terminal nodes. Instead of overloading the drawings with too much information, we agree that exiting the node m to the right along the edge (m, m1) corresponds to the assignment x(m) = 1, and exiting the node m downward along the edge (m, m0) corresponds to the assignment x(m) = 0. Similarly, a missing edge to the right means entering the 1-terminal, labeled by the constant #1, and a missing edge downward means entering the 0-terminal, labeled by the constant #0.

As mentioned in Sect. 2.3 (see also Fig. 2.10), every complex combinational circuit can be regarded as a network of FFRs with inputs at fan-out branches or primary inputs. The SSBDD model for such a complex combinational circuit, represented as a network of FFRs, is, in turn, a network of SSBDDs with a one-to-one mapping between FFRs and SSBDDs.

Fig. 3.1 A circuit and its SSBDD [403]


As a result of using SSBDDs, the complexity of the hierarchical two-level network of the given design, measured in the number of fault locations (lines in the plain gate-level FFRs and nodes in the corresponding SSBDDs), is reduced. This phenomenon is called fault collapsing. Such a decrease in the number of faults takes place for each FFR over the full network.

Another positive feature of the SSBDD network is the homogeneity of the model at both levels of the hierarchy. The homogeneity of SSBDDs manifests itself in uniform operating algorithms applied to the SSBDDs representing the FFRs throughout the entire network. At the lower level of the hierarchy, the atomic components of the graphs, the nodes, are all treated by the same algorithms, in contrast to the FFRs as plain gate-level networks, which are heterogeneous, because the algorithms for processing the components differ depending on the types of gates.

The homogeneity of the operating principles of SSBDDs allows more efficient reasoning on the node topology of SSBDDs than on the heterogeneous gate-level structures of FFRs. In particular, this concerns the reasoning about cause-effect relations in fault simulation and about interactions of multiple faults inside the FFRs. As we see later (Chaps. 7 and 8), the same phenomenon of homogeneity repeats at the higher RT-level module networks: whereas different types of modules are traditionally processed by dedicated procedures, in HLDDs, as a generalization of SSBDDs representing the modules in RTL networks, the algorithms for processing the HLDDs are uniform, independent of the types of modules they represent.

3.1.2 Synthesis of the SSBDD Model for a Digital Circuit

SSBDDs are generated by superposition of the elementary BDDs of the gates (or complex gates) of the given circuit, which enables representing both the functionality and the structure of the circuit in the model with proper detail [430, 440]. A similar procedure is also used for creating traditional BDDs for given logic expressions, as a process of iterative logic operations on BDDs starting from the trivial BDDs of single variables [381]. For the synthesis of SSBDDs for a given gate network, a procedure of substituting nodes with graphs is used. If the label x(m) of a node m in the SSBDD Gy is the output of a subnetwork represented by another SSBDD Gx(m), then the node m in Gy can be substituted by the graph Gx(m). In this graph superposition procedure, the following changes to Gy and Gx(m) are made.


Algorithm 3.1 Substitution of the node m in Gy with Gx(m)
1. Remove the node m from Gy.
2. Connect the incoming edges of m in Gy to the root node m0 of Gx(m).
3. Redirect all edges in Gx(m) that enter the terminals mT,e, e ∈ {0, 1}, to the successors me of m in Gy, respectively.
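The following minimal sketch illustrates Algorithm 3.1 in Python. It assumes a simple dictionary-based graph representation that is only hypothetical and not the data structures used in this book: a graph is a dict with a "root" entry and a "nodes" dict mapping each internal node name to a triple (variable, 0-successor, 1-successor), with the terminals encoded as the strings "#0" and "#1".

def substitute(gy, gx, m):
    """Substitute node m of graph gy with graph gx (sketch of Algorithm 3.1)."""
    var, m0, m1 = gy["nodes"].pop(m)                 # step 1: remove node m from Gy

    rename = {n: m + "." + n for n in gx["nodes"]}   # avoid name clashes
    for n, (v, s0, s1) in gx["nodes"].items():
        # step 3: edges of Gx entering a terminal are redirected to the
        # corresponding successor of m in Gy
        s0 = m0 if s0 == "#0" else (m1 if s0 == "#1" else rename[s0])
        s1 = m0 if s1 == "#0" else (m1 if s1 == "#1" else rename[s1])
        gy["nodes"][rename[n]] = (v, s0, s1)

    new_root = rename[gx["root"]]
    # step 2: all incoming edges of m (and the root pointer, if m was the
    # root of Gy) now enter the root node of Gx
    if gy["root"] == m:
        gy["root"] = new_root
    for n, (v, s0, s1) in list(gy["nodes"].items()):
        gy["nodes"][n] = (v,
                          new_root if s0 == m else s0,
                          new_root if s1 == m else s1)
    return gy

Applying this substitution iteratively, starting from the BDD of the output gate of an FFR, yields the compressed SSBDD described in the following paragraphs.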

Assume that the BDDs of the logic gates of the given gate-level circuit are provided as a BDD library. Let us call the components of the library elementary SSBDDs. An example of a library of the elementary gates AND, OR, NAND, and NOR is depicted in Fig. 3.2. According to the agreement above, the 1-edges in SSBDDs go to the right and the 0-edges go down. In this way, AND gates are always represented as horizontal chains of nodes, whereas OR gates are represented as vertical chains of nodes. If there is an inverter at the output of a gate, we apply De Morgan’s law to invert the input variables.

A given gate-level network can be considered as a set (network) of connected elementary BDDs. The SSBDD synthesis represents a transformation of the given network of elementary BDDs, representing an FFR, into a single compressed SSBDD. Starting from the BDD of the output gate of the given FFR and applying Algorithm 3.1 iteratively, we reduce the model at each iteration by one node and one graph. To avoid explosion of the SSBDD model, we generate the SSBDDs only for tree-like subnetworks (FFRs). As a result, we get for a given gate-level circuit a macro-level network, where an SSBDD corresponds to each macro (FFR).

Example 3.2 The process of synthesis of an SSBDD for a given gate-level circuit is illustrated in Fig. 3.3. We start with the output AND gate and its BDD Gy, which consists of two nodes a and b.

Fig. 3.2 An example of a library of elementary logic gates

The input b of the AND gate is simultaneously the output of the OR gate represented by the BDD Gb, which consists of the nodes x22 and x3. First, we substitute the node b in Gy with the graph Gb. Thereafter, the node a in Gy is substituted with the graph Ga, which consists of the nodes x1 and x21. The final graph, which represents the whole circuit except the fan-out stem x2, consists of the nodes x1, x21, x22, and x3.

The full SSBDD model for the given circuit consists of two graphs: the final graph created in Fig. 3.3 and, optionally, the graph consisting of the single node x2. We need to add the fan-out node x2 to the model, as an extra graph Gx2, if we want to explicitly represent the fault location of a single fault at the fan-out input x2. For fault-free simulation, we do not need the graph Gx2. In fault simulation, however, in the absence of the single-node graph Gx2, we would have to convert the simulation of the single fault at x2 into the simulation of the multiple fault at all branches of the node x2. Hence, the absence of the single-node graph Gx2 would complicate the fault simulation algorithm.

In general, the full SSBDD model for a given combinational circuit consists of SSBDDs for all tree-like fan-out-free regions (FFRs) and of (optional) single-node SSBDDs for all primary inputs that have fan-out branches.

In the case of sequential circuits, the library of elementary components has to include also flip-flops (FFs), which can be represented by respective BDDs. Since standard FFs are usually equipped with standard local tests, they no longer need a structural representation by SSBDDs, and they can be modeled with more compact functional BDDs.

Fig. 3.3 Synthesis of SSBDD from the gate-level circuit


Fig. 3.4 Elementary functional BDDs for D, JK and RS Flip-Flops

An example of a subset of elementary functional BDDs for D, JK, and RS flip-flops is depicted in Fig. 3.4. To develop functional elementary BDDs of minimum size for flip-flops, the synthesis of the functional BDD starts with the most important variable. For example, in the case of the D flip-flop (see Fig. 3.4), the “most important” variable is the clock signal. In the case of a JK flip-flop with preset (S) and clear (R) inputs, the S and R signals are the most important. In the latter case, S and R represent asynchronous signals, which means that they have an immediate effect on the output q regardless of the clock signal and of the values of the J and K inputs. If the S and R signals are both active, the state of the flip-flop becomes indeterminate (unknown). To stress this aspect, we have introduced (see Fig. 3.4) a third constant terminal node, labeled by “U” (an unknown value), into the BDD [335]. BDDs of this type are called multi-terminal BDDs [80].

For sequential FFR networks, we synthesize the SSBDD models separately for all FFRs whose outputs are primary outputs of the circuit, fan-out nodes inside the circuit, or flip-flop inputs. An example of a circuit consisting of 3 FFRs is depicted in Fig. 3.5, where the SSBDD model consists of three graphs Gy, Gx8 and GT for the primary output y, the internal fan-out x8, and the D flip-flop T, respectively. Optionally, we may add to the model also a single-node graph Gx3 to represent the fan-out primary input. The graph GT is a hybrid BDD: considered as a whole, it is a functional BDD, which, however, contains the SSBDD Gz as a sub-graph.

Fig. 3.5 SSBDD for a sequential circuit consisting of 3 FFRs

3.1.3 Verification of the Correctness of SSBDDs

SSBDDs are a special type of BDDs, generated by superposition according to the structure of the given gate-level circuit. Efficient algorithms for fault simulation, diagnostic reasoning of faults, and test generation, running on SSBDDs, exploit the specific properties of SSBDDs. Therefore, before using those algorithms, one must be sure that the SSBDDs are correct by definition. We can reduce the problem of “recognizing” a correct SSBDD to the problem of recognizing the skeleton of a superpositional graph (SG), a subclass of binary graphs. The paper [321] presents the theory of SGs and proposes linear-time algorithms for testing whether a BDD is an SG. The algorithms are based on checking whether the necessary and sufficient conditions for a BDD to be an SG are satisfied [331]. In the following, we present the basic definitions and statements characterizing the skeleton of SSBDDs and its properties, which help to check the compliance of a given BDD with the definition of SSBDDs [321].

Definition 3.6 A binary graph G is traceable if there exists a directed path through all internal nodes of G. We call this path a Hamiltonian path. Since a BDD is acyclic, the Hamiltonian path, if it exists, is unique.

Figure 3.6a presents an example of an SSBDD in the standard form, constructed by superposition of elementary BDDs for logic gates, and Fig. 3.6b shows the same SSBDD in the stretched-out linear form, to visualize the existence of the Hamiltonian path in the graph.


Fig. 3.6 Two ways to present an SSBDD

Bold arrows mark the Hamiltonian path.

Definition 3.7 A binary graph G is homogeneous if only one type of edge (i.e., either 1-edges only or 0-edges only) enters every node m ∈ M(G). A path in a binary graph is homogeneous if it consists of edges of only one type.

As an example, in Fig. 3.6a we see that only 0-edges enter the node x12, and only 1-edges enter the 1-terminal.

Definition 3.8 The edges (mk, mp) and (ml, mr) of a binary traceable graph are crossing edges if k < l < p < r.

Consider the two graphs in Fig. 3.7, which both represent the XOR gate. The graph in Fig. 3.7a is a structural SSBDD, whereas the graph in Fig. 3.7b is a traditional functional BDD. The former graph has no crossing edges, whereas the latter has them. Graphs that do not have crossing edges are called planar graphs.

Definition 3.9 We say that a binary traceable graph is strongly planar if there are no crossing 0-edges and no crossing 1-edges.

It is obvious that if a binary graph is strongly planar, then it is also planar. For example, in the stretched-out form of the SSBDD in Fig. 3.6b, there are no 1-edges below the horizontal line and no 0-edges above the horizontal line.

Definition 3.10 We say that a binary traceable graph is 1-cofinal (0-cofinal) if all 1-edges (0-edges) that start between the endpoints of some 0-edge (1-edge) and cross it end in the same node. A binary traceable graph is cofinal if it is both 1-cofinal and 0-cofinal.

Theorem 3.1 A binary graph is superpositional, and hence an SSBDD, iff it is a traceable, strongly planar, and cofinal graph.

The proof can be found in [321].
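As a small illustration of the traceability condition of Definition 3.6, the sketch below checks for a Hamiltonian path in the DAG of internal nodes, reusing the hypothetical dictionary representation introduced earlier. It covers only the traceability part of Theorem 3.1, not the strong planarity and cofinality checks, and is not the linear-time algorithm of [321].

from graphlib import TopologicalSorter

def is_traceable(ssbdd):
    """Return True iff the DAG of internal nodes has a Hamiltonian path."""
    nodes = ssbdd["nodes"]
    succs = {n: {s for s in (s0, s1) if s in nodes}
             for n, (_var, s0, s1) in nodes.items()}
    preds = {n: set() for n in nodes}
    for n, ss in succs.items():
        for s in ss:
            preds[s].add(n)
    order = list(TopologicalSorter(preds).static_order())
    # A DAG has a Hamiltonian path iff consecutive nodes of a topological
    # order are always connected by an edge; that chain is then the path.
    return all(order[i + 1] in succs[order[i]] for i in range(len(order) - 1))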


Fig. 3.7 Two BDDs representing the XOR gate

Example 3.3 Figure 3.8 contains two binary graphs. Both graphs are traceable (they have a Hamiltonian path) and strongly planar. However, the graph in Fig. 3.8a is not 0-cofinal, since the nodes x2 and x3 lie between the endpoints of the 1-edge (x1, x4), but their 0-edges (x2, x6) and (x3, x5) point to different nodes. According to Theorem 3.1, the graph is not superpositional. The graph in Fig. 3.8b is cofinal and, therefore, a superpositional graph.

In [321], linear-time algorithms are given for testing whether a given binary graph is superpositional and also for restoring a sequence of superpositions that generates a given superpositional graph. In this way, it is possible to regenerate from the SSBDD the digital circuit it represents.

Fig. 3.8 a Not-cofinal and b cofinal binary graphs


3.2 Properties and Specific Features of SSBDDs

In this Section, we first present a brief overview of the basic operations on SSBDDs related to different tasks of testing and diagnosis, such as path traversal, path activation, fault simulation, and test generation. Then we list the basic properties of SSBDDs that distinguish them from traditional BDDs and that can be exploited to improve the efficiency of SSBDD-based algorithms. We compare SSBDDs with Boolean algebra by showing how the different Boolean laws are represented as equivalent transformations of SSBDDs. The proposed SSBDDs are compared with traditional BDDs, and a straightforward method is presented for transforming SSBDDs into traditional BDDs; the opposite transformation is not possible. Based on the comparison of SSBDDs with BDDs, we point out useful possibilities of cooperative usage of both models. We also discuss the problem of redundancy of SSBDDs, which manifests itself in two ways: as inherent functional redundancy (compared with traditional optimized BDDs), due to the need to represent the structure of the circuit (not needed for BDDs), and as structural redundancy, reflecting the redundancy inherent in the represented circuit.

3.2.1 Basic Operations on SSBDDs

In the following, we consider four basic operations on the SSBDD model and relate them to the activities on the circuit represented by the SSBDD. Two operations relate to elementary path traversal: (1) walking in the graph, driven by the given input pattern, which is called logic fault-free simulation, and (2) activating a path in the graph, which is called input pattern generation. The other two operations are related to reasoning about the test patterns of the circuit using the SSBDD: (3) fault simulation, as a procedure of double traversal of the graph for the given test pattern, and (4) test pattern generation for the circuit by activating two proper paths in the SSBDD.

3.2.1.1 Path Traversing and Logic Simulation

Logic fault-free simulation of a given test pattern Xt, applied to an FFR y = f(X), is performed in the SSBDD Gy by traversing a path from the root node m0 to one of the terminal nodes, according to the values of the variables in Xt. The result of logic simulation is the value y = e, where e ∈ {0, 1} is the Boolean constant of the terminal node #e reached by the traversal. We call the chain of traversed nodes the path activated by the test pattern Xt, according to Definition 3.2.

Differently from logic simulation in the gate-level circuit, where, in general, all input variables and all logic elements of the circuit are traversed, in the case of SSBDDs it is sufficient to traverse only those nodes that belong to the activated path. Assume that the size of the SSBDD is n nodes and that the length of the shortest path from the root node to one of the terminal nodes is nmin.


Fig. 3.9 Traversing a path in an SSBDD

Then, for the length ntr of the path traversed in logic simulation, we have nmin ≤ ntr ≤ n. There are SSBDDs where ntr = n never happens; this is the case when the longest path is functionally not feasible (see Property 3.4 in Sect. 3.2.2).

Example 3.4 Consider the circuit and its SSBDD in Fig. 3.9. The bold edges in the SSBDD show the path traversed when driven by the pattern Xt = (x1, x2, x3, x4, x5, x6)t = (001100). Only 3 nodes need to be traversed to calculate the value y = 1 by reaching the 1-terminal. The longest path in the SSBDD has the length n = 6. For comparison, simulating the same pattern using the gate-level circuit description (or the related Boolean expression) would require 6 accesses to variables and 4 logic operations.
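A minimal sketch of this path-traversal simulation, under the same hypothetical dictionary representation as in the earlier sketches, could look as follows; the commented call only indicates how Example 3.4 would be invoked for a hypothetical encoding g_y of the SSBDD of Fig. 3.9.

def simulate(ssbdd, pattern):
    """Fault-free logic simulation: walk from the root, taking the 1-edge or
    0-edge according to the pattern, until a terminal is reached."""
    node = ssbdd["root"]
    while node not in ("#0", "#1"):
        var, succ0, succ1 = ssbdd["nodes"][node]
        node = succ1 if pattern[var] == 1 else succ0
    return 1 if node == "#1" else 0

# y = simulate(g_y, {"x1": 0, "x2": 0, "x3": 1, "x4": 1, "x5": 0, "x6": 0})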

3.2.1.2 Path Activation

Path activation is the reverse task of logic simulation. The task of activating a path l(mi, mj) means creating an input pattern Xt by a proper assignment of the input variables in X, which activates a chain of edges from the node mi to the node mj. Since, in general, l(mi, mj) ∈ L(mi, mj), where L(mi, mj) is the full set of all possible paths between the nodes mi and mj, the task of activating a path l(mi, mj) is a search problem. The goal of path activation for the purpose of test generation is usually to find the shortest path l(mi, mj) in the search space L(mi, mj). The shorter the activated path, the fewer variables need to be fixed, and since path activation is a search problem, the fewer fixed variables there are, the fewer constraints there are when solving it.


The practical application of the path activation task is to justify a selected assignment of the type y = e, where e ∈ {0, 1}, by searching for a solution of the equation y = f(X) = e. In this case, a set of suitable assignments to the variables x ∈ X must be found such that a path le = (m0, #e) is activated.

Example 3.5 The shortest paths in the SSBDD in Fig. 3.9 for justification of y = e, e ∈ {0, 1}, are: l1 = (x1, x2, #1) for y = 1, by the assignment x1 = x2 = 1, and l0 = (x1, x3, x6, #0) for y = 0, by the assignment x1 = x3 = x6 = 1.

3.2.1.3 Fault Simulation

Fault simulation is a two-step path traversal task whose goal is to identify whether a given test pattern detects a fault x(m) ≡ e (i.e., x(m) stuck at the value e), where e ∈ {0, 1}. The more powerful variant of fault simulation is to identify simultaneously, in a single run, the full set of faults the given test pattern can detect. Fault simulation proceeds in two steps with two path traversal actions.

Step 1. The goal of the first path traversal is twofold: (1) identifying the path le = (m0, #e) activated by the given test pattern, and (2) calculating the value y = e, where e ∈ {0, 1}.

Step 2. The goal of the second path traversal is to perform the following operations: (1) select a node m ∈ le = (m0, #e), where x(m) = e, for fault simulation, (2) inject the fault x(m) ≡ ē by the new assignment x(m) = ē, and (3) traverse the new path lm* = (m0, #e*). If #e ≠ #e*, the test pattern detects the fault x(m) ≡ ē.

For the more powerful variant of fault simulation, Step 2 is repeated for all nodes m that belong to the activated path and satisfy the condition x(m) = e. The meaning of and the need for the requirement x(m) = e are explained when discussing Property 3.8 of SSBDDs in Sect. 3.2.2.

Example 3.6 Consider the test pattern Xt = (x1, x2, x3, x4, x5, x6) = (001110), applied to the circuit in Fig. 3.9. In Step 1, the path l1 = (x1, x3, x4, #1) is traversed. From this path, we identify y = 1, and the requirement x(m) = 1 is satisfied for two nodes: x3 and x4. Let us apply Step 2 to both of them. After injecting the fault x3 ≡ 0, we traverse the path lx3* = (x1, x3, x6, #0). The terminals reached by l1 and lx3* are different; hence, the test pattern detects the fault x3 ≡ 0. After injecting the fault x4 ≡ 0, we traverse the path lx4* = (x1, x3, x4, x5, #1). The terminals reached by l1 and lx4* coincide; hence, the test pattern is not able to detect the fault x4 ≡ 0. The two steps can easily be merged to speed up fault simulation.
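The two-step procedure can be sketched as follows in Python, again under the hypothetical dictionary representation used in the earlier sketches; the fault is injected only at the selected node, so other nodes labeled with the same variable keep their fault-free values.

def detected_faults(ssbdd, pattern):
    """Return the node faults detected by the pattern as (node, stuck_value)."""

    def traverse(values, faulty_node=None, forced=None):
        node, path = ssbdd["root"], []
        while node not in ("#0", "#1"):
            var, s0, s1 = ssbdd["nodes"][node]
            path.append((node, var))
            v = forced if node == faulty_node else values[var]
            node = s1 if v == 1 else s0
        return node, path

    good_terminal, path = traverse(pattern)               # Step 1
    e = 1 if good_terminal == "#1" else 0
    detected = []
    for node, var in path:
        if pattern[var] != e:                              # Property 3.8 filter
            continue
        bad_terminal, _ = traverse(pattern, node, 1 - e)   # Step 2: inject x(m) ≡ ē
        if bad_terminal != good_terminal:
            detected.append((node, 1 - e))
    return detected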

3.2.1.4 Test Generation

Test generation is the reverse task of fault simulation. Differently from fault simulation, where a test pattern is given, in test generation the goal is to create a test pattern that is able to detect a given fault x(m) ≡ e, e ∈ {0, 1}.


Formally, such a test pattern is a solution of the equation

f(X)|x(m)=0 ⊕ f(X)|x(m)=1 = 1.   (3.2)

From (3.2), it follows that test generation represents a double path activation task, where the two paths activated by the test pattern must pass through the node m and reach the different terminals #0 and #1, depending on the value of x(m). Accordingly, test generation for detecting a fault x(m) ≡ e, e ∈ {0, 1}, is a procedure that consists of two path activation steps.

Step 1. Create a test pattern Xt that activates a path lē = (m0, #ē) such that m ∈ lē and x(m) = ē, i.e., the value applied to x(m) is the opposite of the faulty value e.

Step 2. Extend the pattern Xt so that, in the case of the fault x(m) ≡ e, the pattern activates a path lm* = (m0, #e*), where m ∈ lm* and #e* ≠ #ē.

The two steps can easily be merged to speed up test generation.

Example 3.7 Consider test generation for the fault x4 ≡ 0 in the SSBDD in Fig. 3.9. In the first step, we generate a test pattern Xt = (x1, x2, x3, x4, x5, x6)t = (0–11–) that activates the path lx4 = (x1, x3, x4, #1), where x4 = 1. In the second step, we update the test pattern to Xt = (0–1100), which activates the path lx4* = (x1, x3, x4, x5, x6, #0) for the case that the fault is present. The two paths lx4 and lx4* reach different terminals, #1 and #0, respectively, which means that Eq. (3.2) is satisfied, and the generated test pattern detects the fault x4 ≡ 0.
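For small functions, the condition (3.2) can be illustrated by brute force. The sketch below is only an illustration of the equation, with a hypothetical callable f over the node variables; it is not the path-activation-based test generation procedure of this section.

from itertools import product

def find_test(f, variables, m, e):
    """Search exhaustively for a pattern satisfying Eq. (3.2) for the fault
    x(m) stuck-at e: the value of f must differ when x(m) is forced to 0
    and to 1 under the same values of the other variables."""
    for values in product([0, 1], repeat=len(variables)):
        pattern = dict(zip(variables, values))
        if f({**pattern, m: 0}) != f({**pattern, m: 1}):
            return {**pattern, m: 1 - e}   # apply the opposite of the stuck value
    return None

# Hypothetical toy usage for y = x1*x2 + x3:
# f = lambda v: int((v["x1"] and v["x2"]) or v["x3"])
# find_test(f, ["x1", "x2", "x3"], "x1", 0)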

3.2.2 Basic Properties of SSBDD

In the following, we present the most important properties of SSBDDs, which distinguish them from traditional BDDs and which support improving the efficiency of the algorithms for test pattern simulation, test pattern generation, and fault diagnosis.

Property 3.1 The nodes of the SSBDD and the signal paths in the gate-level circuit represented by the SSBDD have a one-to-one relationship.

This is the decisive and most important property of SSBDDs, and it results from the method of synthesis of SSBDDs. The fact that the nodes in an SSBDD represent signal paths in the original circuit allows the SSBDD to model the faults in the circuit explicitly. With traditional BDDs, which represent only the function of the related circuit, the faults can be modeled only implicitly by providing fault lists and creating, one by one, new faulty BDDs for each fault.

Property 3.2 Gate-level subcircuits of FFRs can be mapped to related subgraphs of SSBDDs. Similarly, full signal paths and the related fault sets along the paths in FFRs can be mapped to single nodes and to the faults at these nodes, respectively, in SSBDDs.


Fig. 3.10 Mapping between subcircuits and sub-graphs in the SSBDD

Such a mapping allows easier reasoning about interactions of signals, such as static and dynamic hazards, and about interactions of multiple faults in different paths. Similarly to SAF analysis, path delay faults or transition delay faults along signal paths in FFRs can be mapped to related delay faults at nodes in SSBDDs. The listed structural characteristics of logic circuits cannot be simulated explicitly with BDDs representing only the functions.

Example 3.8 As an example, a circuit and its SSBDD are presented in Fig. 3.10. The node x31 in the SSBDD represents the full signal path through the 5 lines x31, a, b, c and y in the FFRy. The two node faults x31 ≡ 0 and x31 ≡ 1 in the graph represent in total 10 stuck-at faults on the lines of this signal path in the circuit. A considerable compaction of the list of detectable faults, called fault collapsing, is achieved when using SSBDDs. Figure 3.10 highlights three mappings between the subcircuits M1, M2 and M3 of the FFRy and the related subgraphs with the same names in the SSBDD.

Property 3.3 Each SSBDD is always a BDD. The opposite is not valid. The problem of identifying whether a given BDD is an SSBDD is discussed in Sect. 3.1.3.

Property 3.4 Not all paths in an SSBDD, which represents a function y = f(X), can be traversed and activated.

Definition 3.11 If a path in an SSBDD cannot be activated, we say the path is false (or infeasible). The paths that can be activated are called true (or feasible) paths. We call the existence of false paths in SSBDDs the functional redundancy of SSBDDs.

The reason for the infeasibility of paths in an SSBDD is that the paths may contain the same variable in several nodes, either in inverted or non-inverted form. This means that, when activating a path in the SSBDD and entering during this procedure a node m whose value x(m) = e is already assigned, the continuation of the path through the node m is forced through the edge (m, me) only. The route in the other direction, through the edge (m, mē), is therefore determined to be a false path.


The redundancy in SSBDDs may be structural or functional. The structural redundancy of SSBDDs is a direct reflection of the redundancy existing in the related circuits, whereas the functional redundancy, in terms of false paths, is a property of the SSBDD model itself.

Example 3.9 Figure 3.11a illustrates an SSBDD representing a redundant function, y = (xi ∨ xi xk)xl. We can detect this redundancy in the SSBDD by identifying the false path (xi, xi, xk, xl), which cannot be activated because the assignment xi = 0 activates the path (xi, xi, #0). In the graph in Fig. 3.11b, the redundancy is removed.

In the SSBDD in Fig. 3.12b, which represents the XOR circuit in Fig. 3.12a, there is also a false path (xi1, xj2, xi2, #1), shown in bold. However, there is no structural redundancy in the SSBDD, because the node xi2 represents the signal path (xi, xi2, y) in the circuit. In the case of the functional BDD in Fig. 3.12c for the same circuit, where the structure of the circuit is hidden, the node xi2 is considered redundant and is removed. In the optimized BDD, all paths are true. Here, we have a case of the functional redundancy of SSBDDs: the node xi2 in the SSBDD in Fig. 3.12b is functionally redundant but structurally important.

Fig. 3.11 An example of a redundant SSBDD

Fig. 3.12 Structural SSBDD and functional BDD for XOR circuit


Fig. 3.13 Hamiltonian path of the SSBDD in Fig. 3.10

Property 3.5 Each SSBDD has a single Hamiltonian path through all nodes of the graph. This is the longest path in the SSBDD, and it determines a strong ordering of the nodes.

As we see later, the fact that the nodes in SSBDDs are strongly ordered makes it possible to simulate many test patterns in parallel at the same time (see Sect. 5.1.1.2), which allows a significant speed-up of both fault-free and fault simulation in digital circuits.

Example 3.10 The SSBDD presented in Fig. 3.10b is shown in Fig. 3.13 in the stretched-out form, as a string of nodes along the Hamiltonian path of the graph. The linear form highlights the ordering of the nodes in the graph. The path is true and can be walked through. However, it is not always possible to walk through the Hamiltonian path of an SSBDD. Another example of a Hamiltonian path in an SSBDD is presented in Fig. 3.6b. It is easy to see that, to activate this longest path, the following condition has to be satisfied:

x11 x21 x12 x31 x4 x13 x22 x32 = 1.   (3.3)

However, the condition (3.3) cannot be satisfied due to x11 x13 ≡ 0. From that, it follows that the topologically longest path in this graph is functionally false. The breakpoint on the highlighted longest path in Fig. 3.6b is the node x13: because of the assignment x1 = 1 made at the root node x11, the walk on the path after the node x13 ends in the 0-terminal.

Property 3.6 The two neighbor nodes m1 and m0 of a node m in an SSBDD are always connected with each other, either via a path (m1, m0) or via a path (m0, m1).

Figure 3.14 illustrates this property. The proof of Property 3.6 results from the method of superposition used during the SSBDD synthesis and from the existence of a Hamiltonian path in the SSBDD. Let us prove Property 3.6 by contradiction. Consider the graph in Fig. 3.15a. There is a node xi with neighbors xk and xj. Assume that there is no path from xk to xj. Then, according to Property 3.6, there must be a path from xj to xk. Let us argue by contradiction that the path from xj to xk is also missing.


Fig. 3.14 Property about the connection between neighbors of the node

Fig. 3.15 To the proof of Property 3.6

Then, there must be a path that crosses the path from xk to xm, which contradicts Definition 3.7 and Theorem 3.1 about the planarity of SSBDDs. Consequently, either the graph is not an SSBDD, or there must still be a path from xj to xk. A similar reasoning can be carried out symmetrically for the SSBDD in Fig. 3.15b. Property 3.6 is one of the most important properties of SSBDDs; it allows recognizing in SSBDDs the sub-circuits of the FFR (see Sect. 3.3.1.2).

Property 3.7 Assume that there are two nodes a and b such that a < b in the Hamiltonian path of the SSBDD, and there is no true path that can be activated from b to the 1-terminal (0-terminal). Then there is also no path from the node a that can be activated to the 1-terminal (0-terminal), respectively.

The proof also results from Theorem 3.1, according to which an SSBDD is planar, similarly to the proof of Property 3.6. Property 3.7 is important in that it provides good guidance on how to reduce the search space when activating required paths in SSBDDs.

Example 3.11 Consider the SSBDD model consisting of the three graphs Gy, Gz1, and Gz2 in Fig. 3.16, illustrating Property 3.7. The task is to find assignments of the variables xi that solve the equation y = ¬z1·z2 = 1 in the graph Gy.


Fig. 3.16 SSBDD for illustrating the Property 3.7

By the assignments x9 = 0 and x1 = 0, we have activated the path l(x9, #0) in Gz1 and assigned z1 = 0. The remaining task of activating the path l(z1, #1) = (z1, z2, #1) in Gy is to assign z2 = 1 by activating a path l(x1, #1) in the graph Gz2. The shortest path from x1 to #1 in Gz2 would be the string of nodes (x1, x4, x9, #1). However, we have already assigned x1 = 0 and are hence forced to activate another path, starting from x2, towards #1 (shown in bold). Reaching the node x9, we see that the value x9 = 0 is already fixed, which means that we are directed to #0. It is easy to see that any other attempt to change the values of the node variables x2, …, x8 would always either enter the path (x8, #0) or cross this path, which, according to Theorem 3.1, is not possible in SSBDDs, because they are planar graphs. Therefore, the task of activating a path from the node x1 to the 1-terminal fails already at the point where, during traversal of the path (x1, x2, x3, x4, x5, x9), the node x9 is met. Using Property 3.7, we can immediately leave the graph Gz2 and backtrack the search tree to the point where we fixed the value x9 = 0 in Gz1. There we see that there is another option to assign z1 = 0, by activating the path (x9, x10, x1, #0), which allows us to assign x9 = 1, activate the shortest path (x1, x4, x9, #1) in Gz2, and solve the task y = ¬z1·z2 = 1.

Property 3.7 is useful for improving test generation algorithms by pruning the search space when the assigned values of variables must be justified. In fault simulation and fault diagnosis, another type of problem occurs. Assume a test pattern has activated a path, and one must identify the possible fault candidates on that activated path. In this case, we suggest using the following rule.

Property 3.8 If a test pattern Xt activates in the SSBDD a path l1(m0, #e), e ∈ {0, 1}, then only those nodes m on the path where x(m) = e may be candidate fault locations; the fault simulation continues by checking whether there exists an activated path l2(m0, #ē) for the changed value x(m) = ē.


Fig. 3.17 Identification of fault candidates using Property 3.8

The proof of this property also results from Theorem 3.1, according to which an SSBDD is planar, similarly to Properties 3.6 and 3.7.

Example 3.12 Consider the path through the nodes (x1, x2, x3, x4, x5, x6, x7) in the SSBDD in Fig. 3.17, activated to the 0-terminal. According to Property 3.8, only the nodes x1, x6, and x7 can be candidate fault locations. For the nodes x6 and x7, simulation of the node x8 is needed to check whether the faulty value in the nodes x6 and x7 would lead to the 1-terminal. We can exclude the node x1 from the list of candidates due to Property 3.7. The other nodes are not fault candidates.
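Property 3.8 translates directly into a simple filter over the activated path. The following tiny sketch assumes the activated path is given as a list of (node, value) pairs together with the terminal value e reached; this interface is hypothetical and only for illustration.

def fault_candidates(activated_path, e):
    """Per Property 3.8: only nodes whose variable value equals the reached
    terminal value e are candidate fault locations on the activated path."""
    return [node for node, value in activated_path if value == e]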

3.2.3 SSBDDs and Boolean Algebra

Since the SSBDD is a graphical model for presenting Boolean expressions that describe FFRs of digital circuits, it is also useful to point out its graphical equivalence to basic Boolean postulates and transformations. In the following, we consider a subset of Boolean laws and their equivalents in the SSBDD technique, with corresponding comments.

3.2.3.1 Boolean Laws

Boolean Commutative Law. The Boolean Commutative Law, presented by the formulas (3.4) and (3.5), is illustrated in Fig. 3.18 as swapping of nodes in the SSBDD model. Two nodes xi and xj connected by a 0-edge (1-edge) can be swapped if the 1-edge (0-edge) from both enters the same node.

(xi ∨ xj)xk = (xj ∨ xi)xk   (3.4)

xixj ∨ xk = xjxi ∨ xk   (3.5)


Fig. 3.18 Boolean Commutative Law as Swapping of Nodes in SSBDD

This law is useful when transforming SSBDDs into functional BDDs for optimization purposes. As an example of this case, consider Fig. 3.24, where the nodes x3 and x4 are swapped. As we will see in the following, this law is also useful in the equivalent transformations of SSBDDs (Sect. 3.3) in the process of S3BDD synthesis.

Boolean Idempotent Law. The Boolean Idempotent Law, presented by the formula (3.6), is illustrated in Fig. 3.19 as skipping of nodes in the SSBDD model.

(xi ∨ xi ∨ xj)xk = (xi ∨ xj)xk   (3.6)

Fig. 3.19 Boolean Idempotent Law as Swapping of Nodes in SSBDD


Fig. 3.20 Boolean Distributive Law as Node Skipping in SSBDD

Since SSBDDs are usually redundant in functional terms, due to the repetition of the same variable on the paths of SSBDDs, the idempotent law is useful if reduction of the size of the graph is the target, for example, when the goal is to transform a structural SSBDD into a functional BDD.

Boolean Distributive Law. The Boolean Distributive Law, presented by the formula (3.7), is illustrated in Fig. 3.20 as skipping and removing of nodes from the SSBDD model.

(xixj ∨ xixk)xm ∨ xn = xi(xj ∨ xk)xm ∨ xn   (3.7)

If we traverse the nodes of an SSBDD and encounter the same variable repeatedly on the traversed path, we can skip the respective node and remove it from the graph, if minimization of the BDD is the purpose, e.g., in the case of transforming structural BDDs into optimized functional BDDs (see Sect. 3.2.3.2). On the other hand, we must realize that, having applied this law and removed nodes from the graph, it can no longer be said that the graph describes the structure of the circuit it is modeling.

Boolean Absorptive Law. The Boolean Absorptive Law, presented by the formula (3.8), is illustrated in Example 3.9 and in Fig. 3.11 as a removal of nodes from the SSBDD model. This law enables the reduction of a complex expression to a simpler one by absorbing identical terms.

(xi ∨ xixk)xl = xixl   (3.8)

Similarly to the Boolean commutative, idempotent, and distributive laws, the absorptive law can be used when the transformation of structural SSBDDs into functional BDDs is the goal.

Inverting SSBDDs (De Morgan’s Law). Figure 3.21 illustrates De Morgan’s Law. To invert an SSBDD, for each node m, one must invert its variable x(m) and swap its adjacent nodes m0 and m1. This procedure must be performed iteratively for all nodes of the graph.


Fig. 3.21 Inverting Boolean function (De Morgan’s Law)


Fig. 3.22 Generating dual and inverted dual SSBDDs

Dual and inverted dual SSBDDs. The generation of the dual and the inverted dual SSBDD for a given SSBDD is illustrated in Fig. 3.22. To get the dual form (Fig. 3.22b) of the given SSBDD (Fig. 3.22a), the neighbors of each node must be swapped. To get the inverted dual SSBDD (Fig. 3.22c), additionally all node variables must be inverted.

Recognizing DNF and CNF in SSBDDs. Consider the SSBDD presented in Fig. 3.23. Each path l(m0, #1) in the graph represents a term of the Disjunctive Normal Form (DNF) of the function represented by the SSBDD. The terms can be identified by AND-ing the variables of the nodes with 1-edges (1-nodes) on that path. For example, the SSBDD in Fig. 3.23 has three 1-paths, representing the three terms of the DNF y = x1x2 ∨ x3x4 ∨ x3x5x6. In a similar way, each path l(m0, #0) in the SSBDD represents a term of the Conjunctive Normal Form (CNF) of the function. The terms, represented by the paths from the root node m0 to the 0-terminal, can be identified by OR-ing the variables of the nodes with 0-edges (0-nodes) on that path. For example, the SSBDD in Fig. 3.23 has six 0-paths, representing the six terms of the CNF y = (x1 ∨ x3)(x1 ∨ x4 ∨ x5)(x1 ∨ x4 ∨ x6)(x2 ∨ x3)(x2 ∨ x4 ∨ x5)(x2 ∨ x4 ∨ x6).

From the above, it follows that SSBDDs simultaneously present both the DNF and the CNF of a given Boolean function in a compressed form, from which it is possible to choose the shorter formula to represent either the 1-region or the 0-region of the function.
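A brief sketch of extracting the DNF terms in this way, under the same hypothetical dictionary representation as before (and ignoring inverted node literals for simplicity), could look as follows.

def dnf_terms(ssbdd):
    """Return a list of DNF terms; each term is the list of node variables
    exited through a 1-edge on one root-to-#1 path."""
    terms = []

    def walk(node, term):
        if node == "#1":
            terms.append(term)
            return
        if node == "#0":
            return
        var, succ0, succ1 = ssbdd["nodes"][node]
        walk(succ1, term + [var])   # 1-edge: the variable enters the product term
        walk(succ0, term)           # 0-edge: the variable does not appear in the term

    walk(ssbdd["root"], [])
    return terms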



Fig. 3.23 Recognizing the terms of DNF and CNF in the given SSBDD

To summarize this short overview of the properties of SSBDDs, one should highlight their most important contribution: unlike “traditional” BDDs, SSBDDs directly support test generation and fault simulation for gate-level structural SAF without representing these faults explicitly. An advantage of the SSBDD-based approach is that no library of components is needed for structural path activation, as is the case in traditional gate-level path activation.

3.2.3.2 Transformation of SSBDDs into BDDs

The overview of the main Boolean laws, such as the commutative, idempotent, distributive, and absorptive laws, showed that all these laws, traditionally used for the simplification and minimization of Boolean expressions, can also be used for simplifying SSBDDs and removing from them the redundancy related to the topology. On the other hand, this redundancy is an intentional burden on the model, needed to reflect the structural details of the circuit. Removing the topological redundancy from SSBDDs is a way of transforming them into functional BDDs. As we see in the next chapter, there are several situations where it is effective to use both models, structural SSBDDs and functional BDDs, together, to complement each other. In these cases, we need a procedure to convert SSBDDs into BDDs.


Traditional BDDs are easy to synthesize from a given SSBDD by tracing all consistent paths in the graph and removing the topologically redundant paths and functionally redundant nodes. For this redundancy removal, the rules of graphical conversion equivalent to the application of Boolean laws are used. When tracing the paths in the SSBDD, a repeated occurrence of the same variable is a sign that one of the Boolean laws, such as the idempotent, distributive, or absorptive rule, can be applied to remove a node and optimize the BDD.

Figure 3.24 illustrates the full process of transforming the SSBDD in Fig. 3.24a into the optimized traditional BDD in Fig. 3.24c. All the paths in the original SSBDD are iteratively traversed (towards the 1-terminal), as illustrated in Fig. 3.24b. The nodes whose variable is encountered on the path for the first time are marked with a circle and put onto a stack. The nodes whose variable has already occurred are skipped and removed from the model; the removed nodes are marked with variables without circles in Fig. 3.24b. When a terminal is reached, we go back, take a node from the stack, and start to traverse a new path from this node in the other direction (towards the 0-terminal). In the example in Fig. 3.24b, first the 1-path is traversed and then the 0-path. The procedure continues until the stack is empty; the BDD is then ready. In this process of creating the BDD, the second index of the variables disappears, because in the functional model, distinguishing the fan-out branches of the circuit no longer has a meaning. For optimization purposes, the nodes from which the same terminal is reached in both directions are removed iteratively. In the final BDD, additional optimization is possible by converging different paths into the same node. For example, in Fig. 3.24c, for this purpose we use the Boolean commutative law to swap the nodes x3 and x4, to make it possible to share the node x3 between two paths of the BDD.
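The core idea of the conversion (skipping nodes whose variable has already been assigned on the current path) can be condensed into the following sketch, again with the hypothetical dictionary representation; it returns an unordered, tree-like if-then-else structure rather than the shared and reordered BDD of Fig. 3.24c.

def ssbdd_to_bdd(ssbdd, node=None, assignment=None):
    """Expand the SSBDD into a decision structure in which each variable
    occurs at most once per path (idempotent/absorptive laws applied)."""
    node = ssbdd["root"] if node is None else node
    assignment = {} if assignment is None else assignment
    if node in ("#0", "#1"):
        return node
    var, succ0, succ1 = ssbdd["nodes"][node]
    if var in assignment:                       # repeated variable: skip the node
        forced = succ1 if assignment[var] == 1 else succ0
        return ssbdd_to_bdd(ssbdd, forced, assignment)
    high = ssbdd_to_bdd(ssbdd, succ1, {**assignment, var: 1})
    low = ssbdd_to_bdd(ssbdd, succ0, {**assignment, var: 0})
    # drop nodes whose both branches lead to the same result
    return high if high == low else ("ite", var, high, low)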

Fig. 3.24 Converting a structural SSBDD into a functional BDD: (a) original SSBDD, (b) SSBDD under traversal, (c) optimized BDD


3.2.4 SSBDDs and Functional BDDs

BDDs and SSBDDs both belong to the large family of single-rooted binary acyclic graphs, including trees, used in mathematics for representing discrete sets and mappings among them. Binary means that there are two outgoing edges from each node. There are, however, significant differences between traditional BDDs and SSBDDs. By their nature, traditional BDDs are function-based, whereas SSBDDs present the structure along with the functions. The difference results from how the synthesis of the graphs proceeds. While SSBDDs are synthesized according to the topology of gate-level circuits, BDDs are constructed by recursive application of the Shannon expansion rule to all argument variables of the represented function. In the same way, various other decision diagrams are defined by using different expansion rules, for instance, the positive and negative Davio expansion rules [381, 368].

3.2.4.1 BDDs as a Network of MUXes

In BDDs, the nodes correspond to MUX (2 × 1) modules, viewed as hardware realizations of the Shannon expansion rule. The decision variables assigned to the nodes serve as the control variables of the multiplexers. Otherwise, the nodes in BDDs have only a functional meaning, as points where decisions of the type if-then-else apply. On the other hand, this interpretation shows the way to synthesize from a BDD a circuit using only MUX components.

Example 3.13 In Fig. 3.25, a BDD and a network consisting of MUX components that implements the function y = x1x2 ∨ x1x3 are shown. The MUX network is synthesized by copying the topology of the given BDD, with a one-to-one mapping of the BDD nodes to the MUXes of the network.
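As a small illustration of this interpretation, the sketch below emits a textual MUX netlist from a BDD in the hypothetical dictionary representation used earlier; the output format is invented for the example and is not a standard netlist language.

def bdd_to_mux_netlist(bdd):
    """Emit one 2-to-1 MUX per BDD node; the node variable drives the select
    input, and the data inputs come from the successor MUXes or constants."""
    netlist = []
    for n, (var, succ0, succ1) in bdd["nodes"].items():
        d0 = "0" if succ0 == "#0" else ("1" if succ0 == "#1" else "w_" + succ0)
        d1 = "0" if succ1 == "#0" else ("1" if succ1 == "#1" else "w_" + succ1)
        netlist.append("w_%s = MUX(sel=%s, in0=%s, in1=%s)" % (n, var, d0, d1))
    netlist.append("y = w_" + bdd["root"])
    return netlist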

Fig. 3.25 A network derived from the BDD to realize the function y = x1x2 ∨ x1x3 [403]

3.2.4.2 SSBDDs as a Double Topology

In contrast to functional BDDs, which represent single-level networks of decisions, SSBDDs represent hierarchical networks as double topologies. At the SSBDD level, we talk about the paths through the network of nodes, where each node, in turn, represents a signal path through the FFR represented by the SSBDD. This gives a unique possibility to analyze the interactions of signal paths as elementary entities in digital circuits. The mapping between the SSBDD nodes and the signal paths in FFRs is the decisive and most important feature of SSBDDs. Moreover, the mapping from gate-level circuits to SSBDDs is not restricted to the Boolean AND, OR, NOT basis only. At the system level, we can create binary (or high-level) decision diagrams whose nodes can be mapped to complex logic circuits or higher-level units. We discuss High-Level DDs in Chaps. 7 and 8.

3.2.4.3 Mapping the Circuit’s Structure to BDDs

The fact that the nodes in SSBDDs represent signal paths in the original circuit allows us to model with SSBDDs explicitly both the structural architecture and the faults in the circuit structure. Consider the circuit and its SSBDD in Fig. 3.26a, b, respectively. The SSBDD is constructed for the FFR of the circuit, highlighted with a dotted rectangle. The circuit contains 7 paths, and each of them is mapped to the related node of the SSBDD. For example, the full path (x31, a, d, e, y) in the FFR is mapped into the single node x31 of the SSBDD. Along with the mapping of signal paths to the nodes of SSBDDs, there also exists a one-to-one mapping between components or sub-circuits of the FFR and the subgraphs of the related SSBDD. As an example, one such mapping is illustrated in Fig. 3.26a, b for the module with output d, extracted and shown in grey in both the FFR of the circuit and the SSBDD, respectively. We discuss the problem of identifying the mappings between sub-circuits and sub-graphs of SSBDDs in Sect. 3.3.1. The possibility of mapping modules and sub-circuits into sub-graphs of SSBDDs enables carrying out test generation targeting the extracted modules one by one and using, for the modules, more specific and dedicated fault models than the simple SAF model, e.g., conditional SAF or physical defect-related fault models (see Sect. 4.2).

3.2.4.4 Mapping the Faults to BDDs

A mapping similar to that between the paths in the FFR and the nodes in the SSBDD also exists between the stuck-at faults (SAF) in a circuit and the SAF in the related SSBDD. For example, due to the mapping of the signal path l(x31, y) of the circuit in Fig. 3.26a to the node x31 of the SSBDD in Fig. 3.26b, all SAF ≡ 0 and SAF ≡ 1 faults (10 in total) of the path l(x31, y) are represented by only the two faults x31 ≡ 0 and x31 ≡ 1 in the SSBDD.



Fig. 3.26 Mapping of nodes and a group of nodes from SSBDD back into the circuit

This allows generating test patterns with the more concise structural SSBDD and then mapping the detected high-level faults to the real faults in the respective circuit. In Sect. 3.2.1.4, we introduced the basic operation of test generation in SSBDDs. The test pattern T1(x1, x2, x3, x4, x5) = (1110-), generated for the fault x31 ≡ 0 using the SSBDD in Fig. 3.26b, activates the path l(x42, #1) = (x42, x1, x2, x31, x32, x41, #1), highlighted in bold. From that, in turn, it follows that testing the fault x31 ≡ 0 in the SSBDD is equivalent to testing the whole subset of faults on the path l(x31, y) of the FFR in Fig. 3.26a, as follows:

{x31 ≡ 0}SSBDD → {x31 ≡ 0, a ≡ 1, d ≡ 0, e ≡ 0, y ≡ 0}circuit.

The mapping between the faults on signal paths of gate-level circuits and the faults in SSBDDs allows double-topology-based test generation and fault modeling by processing two paths in two models that have different meanings. The sensitized path l(x31, y) in the FFR in Fig. 3.26a has the role of gate-level fault modeling and fault propagation by the given test pattern in the circuit.


On the other hand, the path l(x42, #1), activated in the SSBDD, selects a subset of high-level faults as candidates for detection by the test pattern. By fault simulation in the SSBDD, we can identify that the test pattern generated for the fault x31 ≡ 0 additionally detects the high-level faults x2 ≡ 0 and x41 ≡ 0. By mapping the nodes x2 and x41 of the SSBDD to the signal paths l(x2, y) and l(x41, y) in the circuit, respectively, we can also identify the additional faults detected by the given test pattern in the circuit as follows:

{x2 ≡ 0}SSBDD → {x2 ≡ 0, a ≡ 1, d ≡ 0, e ≡ 0, y ≡ 0}circuit
{x41 ≡ 0}SSBDD → {x41 ≡ 1, b ≡ 0, e ≡ 0, y ≡ 0}circuit

Figure 3.26c depicts the traditional BDD, which represents the Boolean function of the circuit in Fig. 3.26a; it is an optimized BDD with a minimum number of nodes. The functional BDD is more concise than the structural SSBDD in Fig. 3.26b (it has one node less), but it does not represent the internal structure of the circuit. This is the reason why we cannot target, in test generation, the faults inside the FFR when using the functional BDD of Fig. 3.26c. However, there exists a one-to-one mapping between the nodes of the BDD in Fig. 3.26c and the primary inputs of the circuit in Fig. 3.26a, which allows generating test patterns with the functional BDD for the faults at the primary inputs of the related circuit.

3.2.4.5 Comparison of Test Generation in Structural and Functional BDDs

There are several pros and cons of using structural SSBDDs versus functional BDDs. Let us consider some of the problems. Example 3.14 Let us compare test generation for a few faults in the circuit in Fig. 3.26a using both the structural SSBDD in Fig. 3.26b and the functional BDD in Fig. 3.26c. To generate a test pattern for the fault x31 ≡ 0 in the SSBDD, we have to activate, as we saw in Sect. 3.2.4.4, the path l(x42, #1) through the node x31 to the terminal #1. The test T1(x1, x2, x3, x4, x5) = (1110-) activates in the SSBDD the path l(x42, #1) = (x42, x1, x2, x31, x32, x41, #1). For testing the fault x31 ≡ 1, we activate in the SSBDD the path l(x42, #0) = (x42, x1, x2, x31, #0) through the node x31, and the second path l(x32, #1) = (x32, x41, #1) for detecting the fault at the node x31. The respective test pattern is T2(x1, x2, x3, x4, x5) = (1100-), where only the variable x3 has changed its value compared to the test T1. We cannot use the traditional functional BDD in Fig. 3.26c for generating tests for the faults x31 ≡ 0 or x31 ≡ 1, because the BDD has no node x31 to represent the fan-out branches of the primary input x3 of the circuit. However, we can generate with the BDD test patterns for the faults x3 ≡ 0 and x3 ≡ 1, which are located at the stem of this fan-out input x3. The test generation technique itself is the same for structural SSBDDs and functional BDDs.


In Fig. 3.26c, we can reason that for testing the fault x3 ≡ 0 we must assign x3 = 1 and activate a 1-path, for example l(x3, #1) = (x3, x4, x2, #1), for the correct behavior of the circuit, and also activate a 0-path, such as l(x1, #0) = (x1, x4, #0), for the case when the fault takes place. The respective test pattern is T3(x1, x2, x3, x4, x5) = (1110-), which is the same as T1. For testing the fault x3 ≡ 1, it is enough to change the value of x3 in the test pattern T3. The problem with testing the faults of fan-out primary inputs using functional BDDs is that it remains hidden along which branch the fault propagates. To answer this question, we can always use post-simulation, but this needs a gate-level simulation that we want to avoid. For example, by simulating the test T3 in the circuit, we see that it tests the faults x3 ≡ 0 and x31 ≡ 0, but it does not test x32 ≡ 0. For targeting the fault x32 ≡ 0, we have to use the SSBDD, or we have to work directly with the gate-level circuit, which we want to avoid for complexity reasons. When using traditional BDDs for test generation, which do not represent the internal structure of the given circuit, there will always be a subset of internal faults for which it remains open whether they are tested or not. For example, the circuit in Fig. 3.26a contains 10 stuck-at faults, located at the nodes x31, x32, x41, x42, and b5, for which we cannot generate test patterns unambiguously using the BDD in Fig. 3.26c, but which we can target with the SSBDD in Fig. 3.26b. However, the problem of testing the fan-out stems remains for SSBDDs, because these nodes are not represented in SSBDDs. For example, the SSBDD in Fig. 3.26b does not contain the node x3. To solve the problem, we must map the fault x3 ≡ 0 in the circuit from the fan-out input x3 to both branches x31 and x32, and generate a test for the multiple fault x31 ≡ 0 and x32 ≡ 0 in the SSBDD in Fig. 3.26b. Test generation for multiple faults needs another dedicated technique. The general case of multiple-fault testing, where there is more than one fault origin, is discussed in Sect. 6.2. In the next section, we show that the case of multiple faults due to faults at fan-out stems in FFR networks can even be ignored.

3.2.4.6 Combined Use of Structural and Functional BDDs at the System Level

At the system level, we consider digital circuits as networks of FFRs. At this level, we have to propagate the local test responses from the output of the FFR under test to the primary outputs (PO) of the system and to justify the signals of local test patterns of the FFR at the primary inputs (PI) of the system. Since, for solving these tasks, we need to process only the functions of the rest of the system, the functional BDDs may be preferred at the system level over the SSBDDs due to the greater compactness. It follows from above that the best strategy for test generation for FFR networks is a combined use of both types of BDDs—structural SSBDDs for generating local test patterns for all FFRs separately, followed by propagation of test responses to primary outputs and justification of local test patterns using the traditional BDDs and SAT solvers.


Fig. 3.27 Sharing both structural and functional BDDs to perform test generation: (a) local test patterns for FFR0 are generated using structural SSBDDs, while fault propagation for x0 and signal justification for x1, …, xn are carried out using functional BDDs; (b) two FFRs connected by a fan-out line xi and its branch xi,k

Such a scheme is illustrated in Fig. 3.27a, where local test patterns are generated for the FFR0 represented by an SSBDD, fault propagation from the output x 0 to the primary output y is processed using BDD0 , and the signals of the local test patterns (x 1 ,…, x n ) are justified using the related BDDs. In general, the inputs and outputs of FFRs belong to the fan-out nets. Assume there are two fan-out free regions FFRi and FFRj, connected with each other by the fan-out line x i and one of its branches x i,k , as shown in Fig. 3.27b. When FFRj is tested then the faults x i ≡ 0 and x i ≡ 1 cannot be the targets of test generation for FFRj . On the other hand, when FFRi is under test, then all the faults of FFRi together with the faults x i ≡ 0 and x i ≡ 1 are propagated to the primary output of the network via the fan-out node x i and any of its branches, such as x i,k . Hence, the faults x i ≡ 0 and x i ≡ 1 can be collapsed from the full set of faults, and they do not need to be the target in the test generation process. The faults of the primary fan-out inputs of the network of FFRs, constitute a special case, and can be treated as FFRs with a single node.

3.2.4.7 Fault Redundancy and BDDs

Logic redundancy in a digital circuit may be either undesirable or intended. Hidden undesirable redundancy may remain undetected in the design phase. On the other hand, functional redundancy may be inserted into a circuit intentionally to improve some non-functional parameters, such as quality, fault tolerance, or hazard avoidance. Test generation is one of the ways of detecting redundancies in circuits, by identifying non-testable faults. If a non-testable fault is identified, the circuit can be updated by removing the non-intended redundancy. If a non-testable fault is related

Fig. 3.28 Hazard control circuitry with untestable highlighted vertices (AND gate G): (a) the circuit, where the gate G is not testable, (b) the SSBDD, where no test exists for x12 ≡ 0 and x32 ≡ 0, (c) the functional BDD

to intended redundancy, a new task may arise: to make the functionally redundant parts of the circuit testable. We call this task design for testability. Both types of BDDs, the functional and the structural ones, can contribute to detecting redundancy in the circuit, because they target different fault types, which we classified here as internal FFR faults and fan-out stem faults. Example 3.15 Consider in Fig. 3.28 a hazard control circuitry, where the highlighted gate is included to prevent unwanted transitions at the output of the circuit, called "glitches" (or hazards). If the input signal x2 changes 0 → 1, and of the two paths controlled by the signals x1 = x3 = 1 one has a longer delay than the other, the increased delay on one path can cause a glitch at the output gate. To prevent the hazard, we may insert the additional gate G, which is functionally redundant, so that the stuck-at-0 faults at the inputs and the output of the gate are non-testable. There exists no path l(x11, x32) through the SSBDD in Fig. 3.28b to the terminal #1 that could be activated for testing the faults x12 ≡ 0 and x32 ≡ 0. Figure 3.28c presents the functional BDD for the circuit in Fig. 3.28a. Because the BDD describes only the functionality of the circuit, the role and place of the gate G remain hidden, and testability analysis is not possible using this BDD. Figure 3.29a presents a circuit functionally the same as in Fig. 3.28a, but made testable by simply adding a testability control input T. If T = 1, the circuit is switched into the hazard-free functional mode, and if T = 0, the faults x12 ≡ 0 and x32 ≡ 0 are testable by the test pattern Xt = (T, x1, x2, x3)t = (0101). The pattern Xt activates the path (x11, x21, T, x12, x32, #1) in the SSBDD in Fig. 3.29b. If there is a fault in the circuit, either x12 ≡ 0 or x32 ≡ 0, the output will turn to y = 0. This example shows how useful SSBDDs may be for locating the causes of testability loss by identifying the missing paths in the model, and at the same time for removing this loss, by showing how and where to insert the missing test paths.

Fig. 3.29 A testable hazard control circuitry: (a) the circuit, where the gate G is made testable by the added control input T, (b) the SSBDD, where the inserted node T improves the testability and provides a test for x12 ≡ 0 and x32 ≡ 0

3.2.4.8 The Most Attractive Features of SSBDDs

The SSBDD model has several specific features that make it attractive compared to other commonly used models, such as conventional BDDs (including state-of-the-art modifications) or gate-level netlists. The time cost of synthesis of the SSBDD model from the gate-level netlist is linear with respect to the number of logic gates (due to circuit partitioning), while it is exponential for conventional BDDs [252]. SSBDDs for a given gate-level digital circuit can therefore be generated very rapidly using only a small amount of computer memory. The size of the SSBDD model is linear with respect to the circuit size, whereas the size of conventional BDDs tends to be exponential. The SSBDD model implicitly preserves structural information about the circuit, while the conventional BDD models do not. Therefore, unlike traditional BDDs, SSBDDs support structural test generation and fault simulation for gate-level faults in terms of groups of faults along signal paths in circuits, without the need to represent the faults explicitly by modifying the model. The use of SSBDDs also makes the circuit model homogeneous with respect to the operating algorithms: in gate-level representations, different types of gates such as AND, OR, EXOR and arbitrary complex gates must be treated by different, gate-specific algorithms. This property makes it more efficient and flexible to partition circuits for introducing hierarchical algorithms of modeling and simulation. Instead of considering each gate and the gate-related faults separately, SSBDDs allow integrating and handling groups of gates as graph-based entities. There is a one-to-one mapping between such groups and the nodes of SSBDDs, without losing the structural information essential for testing. This property provides additional fault collapsing possibilities, making fault modeling and simulation, as well as test generation, more efficient.


3.3 Equivalent Transformations of SSBDDs

Simulation is the most critical tool in the design of digital circuits for verification and dependability evaluation purposes. It is especially critical when it comes to fault coverage analysis and fault simulation in digital systems. The speed of fault simulation highly depends on the data structures used in the simulation algorithms and tools. The efficiency of using SSBDDs in simulation depends, in turn, on the distribution of the probabilities of input signals and on how the nodes are ordered in the graph. In the following, we propose several methods of equivalent transformations of SSBDDs targeting restructuring and optimization of the order of nodes in SSBDDs, taking into account the given distribution of probabilities of input signal values applied to the inputs of circuits. A method and algorithms are proposed for the optimization of SSBDDs to speed up the algorithms of logic simulation on SSBDDs. We also propose a method for restructuring SSBDDs into a form that makes it possible to share a common SSBDD among several other SSBDDs [200, 201]. It is a preparation step for generating a new type of structural BDD, called Shared SSBDDs (S3BDDs), for the given gate-level circuit.

3.3.1 Mapping Between SSBDDs and FFRs of Digital Circuits

Many studies have been conducted on equivalent transformations of classical BDDs. It has been shown that variable ordering in BDDs largely influences the size of the BDD, varying from linear to exponential [34]. In [273], the node count and the path length have been the optimization criteria. Minimization of the size of BDDs [71] and variable ordering [97] have been used for saving area and power in circuits. Finding an optimal order of variables in BDD synthesis has been targeted for improving path delay fault testability [124] and the efficiency of test generation [413, 309]. Due to the difficulty of finding exact solutions to the BDD optimization task, mostly genetic algorithms have been developed to find nearly optimal solutions [34, 71]. In [328, 61], it was shown that minimizing the average path length in BDDs can reduce the evaluation time of Boolean functions represented by BDDs, and a method was proposed for variable re-ordering by giving priority to variables that produce the minimum cumulative complexity of the sub-functions. Unlike the optimization of traditional BDDs, where the target is the minimization of the number of nodes, in the case of SSBDDs the number of nodes must remain invariant to preserve the exact mapping between SSBDDs and the related gate-level circuit. The goal of equivalent transformations of SSBDDs is to find optimized orders of nodes in the graphs to increase the speed of path traversal in the graphs and thereby speed up the simulation procedures on the SSBDDs, targeting fault simulation, test generation and fault diagnosis.

3.3.1.1 Recognition of Logic Gates in SSBDDs

Let us introduce the following definitions for systematic exploration of structural details in SSBDDs. In Definition 3.7, we defined the meaning of a homogeneous graph. Continuing this definition, we can specify the types of paths in SSBDDs as follows. Definition 3.12 We call a path l(mi, mj) between the nodes mi and mj homogeneous if it contains only 1-edges or only 0-edges. Definition 3.13 We call a homogeneous path in an SSBDD vertical if it consists only of 0-edges, and horizontal if it consists only of 1-edges. From Definitions 3.1–3.3 and 3.13, it is easy to derive and prove the following Lemma. Lemma 3.1 In an SSBDD, a horizontal path lh(m1, mk) through k nodes represents an AND gate with k inputs if the 0-edges of all the nodes of lh are connected to the same node. Similarly, a vertical path lv(m1, mk) through k nodes represents an OR gate if the 1-edges of all the nodes of lv are connected to the same node. Proof Consider the case of the path lh(m1, mk). Let us activate in the SSBDD Gy, where y = f(X), three paths: l(m0, m1), a path from the 1-neighbor of mk to #1, and a path from the 0-neighbor of mk to #0. After this assignment, the value of y, according to Definition 3.3, will depend on the values of the variables x(m), where m ∈ lh(m1, mk). Hence, we can construct from Gy a new SSBDD with the root node in m1, replacing the 1-neighbor of mk with #1 and the 0-neighbor of mk with #0. Since all nodes m ∈ lh(m1, mk) have the same 0-neighbor as mk, and the path lh(m1, mk) is a horizontal one ending in the terminal #1, the new SSBDD represents an elementary AND gate. Similarly, we can show that a vertical path lv(m1, mk) represents an elementary OR gate. ◼ As an example, in Fig. 3.1, the horizontal sub-graphs (x11, x21) and (x13, x22, x32) represent AND gates, and the sub-graph (x31, x4) represents an OR gate.
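As a small illustration of Lemma 3.1, the sketch below scans an SSBDD, given as a plain successor table, for horizontal chains whose 0-edges all enter the same node; by the Lemma, such chains correspond to AND gates (the dual scan over 0-edges would find OR gates). This is only a simplified sketch under assumed conventions: nodes are stored in ranking order, terminals are named '#0' and '#1', and the data structure is invented for this example.

```python
def and_groups(graph):
    """Find horizontal (1-edge) chains whose 0-edges all enter the same node.
    By Lemma 3.1 such a chain of length k represents a k-input AND gate."""
    groups, used = [], set()
    for start in graph:                      # assumes ranking (insertion) order
        if start in used:
            continue
        chain, node = [start], start
        common_zero = graph[start][0]        # the shared 0-successor
        while True:
            nxt = graph[node][1]             # follow the 1-edge
            if nxt in graph and graph[nxt][0] == common_zero:
                chain.append(nxt)
                node = nxt
            else:
                break
        if len(chain) > 1:
            groups.append((chain, common_zero))
            used.update(chain)
    return groups

# SSBDD of y = x1*x2 + x3, nodes mapped to (0-successor, 1-successor):
g = {"x1": ("x3", "x2"), "x2": ("x3", "#1"), "x3": ("#0", "#1")}
print(and_groups(g))   # [(['x1', 'x2'], 'x3')] -- the 2-input AND gate x1*x2
```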

3.3.1.2 Recognition of Sub-circuits in SSBDDs

Definition 3.14 Let us call a homogeneous path b(m) in an SSBDD a functional borderline controlled by the node m if it starts with m and traverses backward through nodes m' ∈ b(m), so that from the neighbor nodes of each m' the node with the smallest number is included into b(m). We say the borderline is horizontal if it consists only of 1-edges, and vertical if it consists only of 0-edges. We call the node where two (vertical and horizontal) borderlines b(mi) and b(mj) cross a crossroad node cr(mi, mj). The crossroad node cr(mi, mj) is the intersection of two sets: cr(mi, mj) = b(mi) ∩ b(mj). The meaning of functional borderlines in SSBDDs is to form the bounds of a sub-graph in the SSBDD representing a sub-circuit of the FFR. The horizontal borderline serves as the top boundary, and the vertical borderline serves as the left boundary. The other boundaries are the terminal nodes #1 and #0. The crossroads of borderlines have the meaning of being the root nodes of different subgraphs of the SSBDD.


Fig. 3.30 Recognizing true sub-graphs in SSBDD [200]

Corollary 3.1 The crossroad node of the borderlines b(#1) and b(#0) in an SSBDD, controlled by the terminals #1 and #0, respectively, is the root node m0 of the SSBDD. Corollary 3.2 From the above, it follows that the borderlines controlled by the terminal nodes can be understood as a set of nodes which forms a topological bound for the full SSBDD. Example 3.16 Consider the SSBDD in Fig. 3.30, which represents an FFR implementing the Boolean expression

y = x1x2 ∨ (x3 ∨ x4x5)(x6x7 ∨ x8x9)(x10 ∨ x11) ∨ x12x13.    (3.9)

As an example, for the node x 10 there are three horizontal borderlines: to x 3 , x 4 , and x 8 . For the node x 12 there are also three vertical borderlines: to x 1 , x 6 , and x 10 . Note, the paths from x 12 to x 5 and to x 9 are not borderlines, according to Definition 3.14, because x 5 has the neighbor x 4 with a smaller number, and x 9 has the neighbor x 8 also with a smaller number. In this example, two borderlines are highlighted in Fig. 3.30: horizontal borderlines b(x 10 ) = (x 10 , x 7 , x 6 , x 3 ), and vertical borderline b(x 12 ) = (x 12 , x 8 , x 6 ). The highlighted crossroad cr(x 10 , x 12 ) of these borderlines is the intersection of two paths b(x 10 ) and b(x 12 ), the node x 6 . Lemma 3.2 The crossroad cr(mi , mj ) of the borderlines b(mi ) and b(mj ) in SSBDD Gy , is always the root node of a sub-graph G* ⊆ Gy , representing AND/OR sub-circuit C* of the FFR C y represented by Gy . Proof Any node m in SSBDD Gy can be taken as a root of a subgraph G* ⊆ Gy that always represents a Boolean function. However, if a node belongs to a homogeneous


path l(mk , ml ) ⊆ G*, recognized as a gate in C y , then only the node with the smallest number mk ∈ l(mk , ml ) can be chosen as the root node of G* ⊆ Gy , satisfying the Lemma. Any other choice of m ∈ l(mk , ml ), as a root node for G*, so that mk < m ≤ ml , would break the path l(mk , ml ), and therefore also the respective gate in C y . Consider the path l(mk , ml ) as a borderline b(mj ). The neighbor m of mj , so that mj < m, can form another borderline b(m) crossing the node mk ∈ l(mk , ml ), according to Lemma 3.1. Since the borderlines b(m) and b(mj ) have the cross-road in mk , the condition of Lemma is satisfied. This ensures that the related string l(mk , ml ) will belong to G*. In other words, the sub-circuit C* represented by G*, will be a ◼ sub-circuit of the original FFR C y with the accuracy of AND/OR gates. Let us generalize now the concept of the topological bound for defining a topological location of any “meaningful” sub-graph of the given SSBDD, which represents a Boolean expression as a disjunctive or conjunctive part of the full expression represented by the SSBDD and implemented by FFR, with accuracy of the AND/OR gates. Definition 3.15 Let Gy (mR , mE ) be a subgraph of the SSBDD Gy , y = f (X), so that Gy (mR , mE ) ⊆ Gy , has the root mR , and end-node mE with neighbors m 0E and m 1E . Let us call Gy (mR , mE ) a true sub-graph if it represents a sub-circuit of the FFR implementing the function y = f (X) with accuracy of AND/OR gates. Lemma 3.3 A sub-graph Gy (mR , mE ) ⊆ Gy is true if the node mR is the crossroad node of two borderlines b(m0 E ) and b(m1 E ). Proof In accordance with Theorem 3.1, no other node in Gy (mR , mE ), except mR , has entries from nodes m < mR , and no other node m > mE , except m 0E and m 1E , has entries from Gy (mR , mE ) due to the planarity feature of SSBDDs. Hence, we can claim that Gy (mR , mE ) is an independent sub-graph with connections to other parts of Gy only through the nodes mR , m 0E , and m 1E . It allows extracting Gy (mR , mE ) from Gy , together with m 0E and m 1E , and replacing m 0E and m 1E with (local) terminals #0 and #1, respectively. As a result, the sub-graph Gy (mR , mE ) is an SSBDD with root mR , and two terminals #0 and #1. After the replacements above, and according to Corollary 3.1, we get mR = b(#0) ∩ b(#1). On the other hand, from this condition, and taking into account Lemma 3.2, and Definition 3.15, we can conclude that Gy (mR , ◼ mE ) ⊆ Gy is a true sub-graph of Gy . Example 3.17 Consider as an example in Fig. 3.30, the SSBDD Gy and the sub-graph Gy (mR , mE ) ⊆ Gy where mR = x 3 , mE = x 9 , m 0E = x 12 , m 1E = x 10 . The borderlines b(x 10 ) = (x 10 , x 7 , x 6 , x 3 ), and b(x 12 ) = (x 12 , x 4 , x 3 ), which determine the crossroad node x 3 , and hence, the root x 3 of the sub-graph Gy (x 3 , x 9 ), can be considered as the topological bound for the subgraph Gy (x 3 , x 9 ), highlighted in Fig. 3.30. Removing from Gy the nodes x 1 , x 2 , x 10 , x 11 , x 12 , x 13 , outside Gy (mR , mE ) (i.e., across the topological bound of the subgraph), and replacing the nodes x 10 and x 12 by terminals #0 and #1, respectively, we get a new independent Gy (x 3 , x 9 ) ⊆ Gy , which represents the Boolean expression y ' = (x3 ∨x 4 x5 )(x6 x7 ∨ x8 x9 ) as a sub-function of (3.9) and the related sub-circuit in the original FFR.


Another example with the same borderlines b(x10) and b(x12), but with another crossroad node mR* = x6, determines the root x6 of another sub-graph Gy(x6, x9) ⊆ Gy, also highlighted in Fig. 3.30.

3.3.1.3 Transformation of SSBDDs

Equivalent transformations of SSBDDs are based on operations such as moving nodes from one place to another, swapping node locations, or swapping subgraphs. To ensure that the functionality of the graph remains unchanged by these transformations, we first have to prove the following statements. Lemma 3.4 An SSBDD which contains a homogeneous path l(mi, mj), where all edges outgoing from this path enter the same node, remains equivalent if any two nodes of this path exchange their locations. Proof The proof of Lemma 3.4 results from Lemma 3.1: a homogeneous path l(mi, mj), where all edges outgoing from the path enter the same node, represents an OR/AND logic function, and these functions do not depend on the order of their arguments. ◼ Definition 3.16 We say that two subgraphs Gy(mR, mE) and Gy(mR*, mE*) in an SSBDD are connected homogeneously in the horizontal (vertical) direction if the edges entering the root node mR* of Gy(mR*, mE*) originate from Gy(mR, mE), and the nodes mE and mE* have a common neighbor in the vertical (horizontal) direction. Let us call the subgraphs Gy* and Gy**, connected homogeneously, neighboring subgraphs. Definition 3.16 is in accordance with Theorem 3.1 regarding the homogeneity property of SSBDDs. As an example, in the SSBDD depicted in Fig. 3.30, the sub-graphs Gy(x3, x9) and Gy(x10, x11) are connected homogeneously in the horizontal direction. Definition 3.17 We say that two neighboring sub-graphs Gy* and Gy** are swapped if the incoming edges of Gy* become incoming edges for Gy** and the outgoing edges of Gy* become outgoing edges for Gy**, whereas the common neighbor node of both graphs remains the same. As an example, Fig. 3.31 illustrates an exchange of the sub-graphs Gy(mR, mE) and Gy(mR*, mE*) according to Definition 3.17 [200]. Theorem 3.2 An SSBDD Gy, representing the function y = f(X), with two neighboring sub-graphs Gy(mR, mE) and Gy(mR*, mE*), which represent the sub-functions yR = fR(X) and yR* = fR*(X), respectively, remains equivalent if Gy(mR, mE) and Gy(mR*, mE*) swap their positions. Proof The proof results from Lemma 3.4. Consider in Gy the subgraphs Gy(mR, mE) and Gy(mR*, mE*), which represent the functions yR = fR(X) and yR* = fR*(X),



Fig. 3.31 Exchange the position of two neighboring sub-graphs in SSBDD [200]

respectively. Let us replace the subgraphs Gy(mR, mE) and Gy(mR*, mE*) with the nodes m and m*, and label the nodes with the functions x(m) = fR(X) and x(m*) = fR*(X), respectively. Create a new graph Gy^NEW by swapping the nodes m and m*. According to Lemma 3.4, the graph Gy^NEW will be equivalent to Gy. Replace in Gy^NEW the nodes m and m* back with the sub-graphs Gy(mR, mE) and Gy(mR*, mE*), respectively, according to the rules of superposition of SSBDDs, using Algorithm 3.1. As a result, the original SSBDD Gy and the new SSBDD Gy^NEW will be equivalent. ◼

3.3.2 Restructuring of SSBDDs to Speed-Up Simulation

Let us discuss some applications of transformations of SSBDDs for optimization of the structures of SSBDDs to increase the simulation speed.

3.3.2.1 The Average Length of Simulated Paths in SSBDD

The speed of logic simulation with SSBDDs will increase if we are able to reduce the number of nodes traversed during simulation. Suppose we have an SSBDD that represents an FFR with n inputs. Each combination of input signals activates a path in the SSBDD. Each path can be characterized by its length Lj (the number of nodes traversed during simulation) and by the probabilities of the node variables to take the values 1 or 0, depending on the distribution of probabilities of the input signals of the circuit. We can calculate the average length of the paths traversed during simulation by the following general formula:

Laverage = Σ_{j=1}^{2^n} Lj Pj,    (3.10)

where Lj is the length of the path activated by the j-th input pattern, Pj is the probability of activating this path, and 2^n is the number of all possible input patterns. There are several problems with the practical use of this formula. The first one is that the formula is not scalable: if the number of nodes in the SSBDD increases, the number of different paths tends to grow exponentially. The second problem relates to the correlation of probabilities, and, finally, identifying and excluding from formula (3.10) the false (infeasible) paths is also not an easy task. In fact, there is no need to calculate the average lengths of the paths in the full SSBDD model. The problem we formulate for speeding up simulation on SSBDDs is to find how to reorder the nodes so that the statistical average of the lengths of the simulated paths becomes minimal. For that purpose, let us start solving the problem of ordering nodes with an analysis of the average lengths of simulated paths, starting from single gates and small modules. We will learn to estimate the average path lengths by comparing different solutions of reordering the nodes or subgraphs in SSBDDs, starting from trivial circuits and the related SSBDDs.
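For small examples, formula (3.10) can be evaluated directly by enumerating all input patterns and weighting the length of the path each pattern activates by the pattern's probability. The Python sketch below does exactly that; the successor-table representation and the terminal names '#0'/'#1' are assumptions made for this illustration, and the approach is, of course, only feasible for the small sub-graphs discussed next.

```python
from itertools import product

def average_path_length(graph, root, probs):
    """Formula (3.10): sum over all input patterns of (path length) * (pattern probability)."""
    variables = list(graph)
    total = 0.0
    for values in product((0, 1), repeat=len(variables)):
        pattern = dict(zip(variables, values))
        prob = 1.0
        for v in variables:
            prob *= probs[v] if pattern[v] else 1.0 - probs[v]
        node, length = root, 0
        while not node.startswith("#"):          # '#0' and '#1' are terminals
            length += 1
            node = graph[node][pattern[node]]     # (0-successor, 1-successor)
        total += prob * length
    return total

# Toy SSBDD of a 2-input AND gate: x1 -> x2 -> #1, with 0-edges to #0.
g = {"x1": ("#0", "x2"), "x2": ("#0", "#1")}
print(average_path_length(g, "x1", {"x1": 0.3, "x2": 0.7}))   # 1.3 = 1 + p(x1)
```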

3.3.2.2 Ordering of Gate Inputs to Speed up Simulation

First, let us start from the gates and find out how the nodes in the SSBDD of a gate must be ordered so that the simulation of the gate takes as few steps as possible when traversing the path in the subgraph of the SSBDD representing the gate. Consider in Fig. 3.32 an SSBDD of a two-input AND gate with its three paths highlighted. Each node variable xj is assigned a probability pj that its value is xj = 1. For each path, a formula is shown for calculating the statistical average length Li contributed by the path when it is traversed during simulation. The question is how the two nodes x1 and x2 must be ordered to keep the statistical average length of the path minimal. For simplicity, instead of showing the node numbers m in Fig. 3.32 (and sometimes also later, for convenience), we label the nodes directly with the node variables x(m). Lemma 3.5 To achieve the maximum speed of simulation of AND gate subgraphs in SSBDDs, the nodes m in the subgraph must be ordered in ascending order of the probabilities p(x(m) = 1).

Fig. 3.32 Calculation of the average length of the elementary path in SSBDD: Lav,1 = 2p1p2 for the path (x1, x2, #1), Lav,2 = 1 − p1 for the path (x1, #0), and Lav,3 = 2p1(1 − p2) for the path (x1, x2, #0)

Proof Consider in Fig. 3.32 the AND gate subgraph with two nodes x1 and x2, with probabilities p1(x1 = 1) and p2(x2 = 1), respectively, that the variables will have the value 1 during simulation. Assume that p1 < p2 and that the ordering of the nodes is consistent with the rule of the Lemma. The SSBDD of the AND gate consists of two nodes, traversed through the three possible paths l1 = (x1, x2, #1), l2 = (x1, #0), and l3 = (x1, x2, #0), as shown in Fig. 3.32, where also the formulas for calculating the average path lengths are highlighted. The average length of the paths traversed during simulation is calculated using formula (3.10) as follows:

Laver = L1 + L2 + L3 = 2p1p2 + (1 − p1) + 2p1(1 − p2) = 1 + p1.    (3.11a)

Let us now prove by contradiction that the Lemma is correct. Assume the opposite, that reordering the nodes x1, x2 in the path l1 = (x1, x2, #1) into the opposite order, l'1 = (x2, x1, #1), will produce a smaller average length than (3.11a). Calculating again the average length for the modified order, using formula (3.10), we get

L'aver = L'1 + L'2 + L'3 = 2p2p1 + (1 − p2) + 2p2(1 − p1) = 1 + p2.    (3.11b)

Since p1 < p2, the new result L'aver = 1 + p2 is bigger than Laver = 1 + p1. That contradicts the assumption and proves the Lemma. ◼ In an analogous way, we can prove a similar Lemma for the OR gate subgraphs in SSBDDs, which are vertical homogeneous paths. Lemma 3.6 To achieve the maximum speed of simulation of OR gate subgraphs in SSBDDs, the nodes m in the subgraph must be ordered in descending order of the probabilities p(x(m) = 1). If the AND/OR gates have more than two inputs, then Lemmas 3.5 and 3.6 can easily be extended to the multi-input gates. Note that if the probabilities pi of the input


Fig. 3.33 Reordering of nodes in the BDDs for the 3-input AND gate

variables xi of embedded gates are not available explicitly, we can easily calculate these probabilities by extensive simulation, either with random or with application-specific sequences, counting how many times the signal value 1 appears. Example 3.18 Consider an SSBDD of the 3-input AND gate with input variables x1, x2 and x3 in Fig. 3.33. Assume the input variables have the following probabilities of the value 1: p1 = 0.9, p2 = 0.4, p3 = 0.1. By simulating the gate with all possible input combinations, which produce different path lengths in the graph, we get the following results (for 2^3 = 8 patterns): 4 times the path l1 through the node x1 to #0 with path length L1 = 1 and probability P1 = (1 − p1) = 0.1; twice the path l2 through the nodes x1 and x2 to #0 with L2 = 2 and P2 = p1(1 − p2) = 0.54; once the full path l3 through all nodes x1, x2 and x3 to #0 with L3 = 3 and P3 = p1p2(1 − p3) = 0.324; and once the full path l4 through all nodes x1, x2 and x3 to #1 with L4 = 3 and P4 = p1p2p3 = 0.036. The average length, according to (3.10), of the paths traversed in the BDD of the AND gate in Fig. 3.33a is equal to 2.26 traversed nodes. By changing the order of the nodes in the BDD as shown in Fig. 3.33b, we calculate the average length of the paths as 1.14 nodes, i.e., about half of that for the order in Fig. 3.33a. In other words, at the given signal probabilities, the speed of simulation on the BDD in Fig. 3.33b will be about two times higher than when using the BDD in Fig. 3.33a. The example shows how the speed of simulation depends on how the nodes in the graph are ordered. In the case of equal probabilities, of course, the speed of simulation of gates does not depend on the order in which the variables are simulated. The average length of traversing all paths in both graphs in Fig. 3.33, at equal probabilities, is Laver = 1 + p1 + p1p2 = 1 + p3 + p3p2 = 1.75, representing the middle case between the two extreme cases calculated above: 1.14 < 1.75 < 2.26. Let us now extend the problem of ordering nodes on given paths in SSBDDs (AND/OR gate models) to the problem of ordering subgraphs of SSBDDs (sub-circuit models).
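The averages quoted in Example 3.18 are easy to reproduce: in an AND-gate chain, a node is visited only if all preceding inputs are 1, so the average path length is 1 + p1 + p1p2 + ··· for the chosen order. A small check with the probabilities of the example might look as follows (the helper name is invented for this sketch):

```python
def and_gate_avg_len(probs):
    """Average number of SSBDD nodes traversed for an AND-gate chain whose
    inputs are simulated in the given order (probs = p(x = 1) per input)."""
    avg, reach = 0.0, 1.0        # reach = probability that the node is visited
    for p in probs:
        avg += reach             # every visited node adds 1 to the path length
        reach *= p               # the next node is reached only if this input is 1
    return avg

print(and_gate_avg_len([0.9, 0.4, 0.1]))   # order of Fig. 3.33a -> 2.26
print(and_gate_avg_len([0.1, 0.4, 0.9]))   # order of Fig. 3.33b -> 1.14
print(and_gate_avg_len([0.5, 0.5, 0.5]))   # equal probabilities  -> 1.75
```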

3.3.2.3 Swapping of Nodes in SSBDDs to Speed Up Simulation

Consider in Fig. 3.34 two functionally equivalent SSBDDs, consisting of a node x1 and a path l23 = (x2, x3) representing a 2-input AND gate. The node and the path

Fig. 3.34 Interchange between a node and a gate in SSBDD: (a) y = x1 ∨ x2x3, (b) y = x2x3 ∨ x1; the average length contribution of each path is annotated in the figure

have swapped their places in two ways. The Boolean functions of the SSBDDs are y = x1 ∨ x2x3 in Fig. 3.34a and y = x2x3 ∨ x1 in Fig. 3.34b. The question is which of these two versions of the SSBDD supports faster simulation, i.e., a statistically shorter average simulated path. Let us calculate and compare the averages of the simulated paths for both SSBDDs. To make the comparison more transparent, assume that the probabilities pi = p are equal for all nodes. According to formula (3.10), we get for the cases (a) and (b) in Fig. 3.34 the following average path lengths:

Lav(a) = p1 + 3(1 − p1)p2 + 2(1 − p1)(1 − p2) = 2 − p²,    (3.12)

Lav(b) = 2p2p3 + 3p2(1 − p3) + 2(1 − p2) = 2 − p² + p.    (3.13)

The comparison Lav(a) < Lav(b) shows that the SSBDD in Fig. 3.34a has a shorter average path length, which also enables a faster simulation procedure. The difference in the lengths, and hence the gain in the speed of simulation, is Δ = p.
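Formulas (3.12) and (3.13) can be checked by summing the per-path contributions annotated in Fig. 3.34. The small sketch below does this for a few values of p; the function names are chosen only for this illustration.

```python
def L_a(p1, p2):     # Fig. 3.34a: node x1 followed by the AND path (x2, x3)
    return p1 + 3 * (1 - p1) * p2 + 2 * (1 - p1) * (1 - p2)

def L_b(p2, p3):     # Fig. 3.34b: the AND path (x2, x3) followed by node x1
    return 2 * p2 * p3 + 3 * p2 * (1 - p3) + 2 * (1 - p2)

for p in (0.3, 0.5, 0.7):
    assert abs(L_a(p, p) - (2 - p * p)) < 1e-12        # formula (3.12)
    assert abs(L_b(p, p) - (2 - p * p + p)) < 1e-12    # formula (3.13)

print(L_a(0.5, 0.5), L_b(0.5, 0.5))   # 1.75 2.25, difference = p = 0.5
```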

Example 3.19 Let us calculate the values of the average lengths of the simulated paths to understand the meaning of the abstract estimate of the difference p, assuming p = 0.5. From (3.12) we find Lav(a) = 1.75, and from (3.13) we get Lav(b) = 2.25. The gain in terms of shortening the average simulated path length is 22%. Let us generalize the observation above to the general case where a horizontal path in an SSBDD has a single neighbor node. It corresponds to the circuit where one input of the OR gate is connected to an AND gate, corresponding to the horizontal path in the SSBDD, and the other input is connected directly to a primary input of the circuit, as shown in Fig. 3.35. There are two possibilities of representing the circuit in Fig. 3.35a by SSBDDs, shown in Fig. 3.35b and in Fig. 3.35c. The question is which of the two models supports faster simulation, and how large the speed-up of simulation is.


Fig. 3.35 A path in SSBDD with a neighbor node

Lemma 3.7 The average length of the simulated paths in a subgraph of an SSBDD, consisting of a horizontal path with a single neighbor node, is minimal if the node is swapped above the path in the vertical direction. The gain in terms of the average length of the simulated paths, compared to the opposite ordering of the node and the horizontal path representing an n-input AND gate, is

Δ = Σ_{k=1}^{n−1} p^k,

where n is the number of nodes in the horizontal path, which is equal to the number of inputs of the AND gate it represents. Proof Consider the two versions of the SSBDD, Gy(x0, xn) and Gy(x1, x0), in Fig. 3.35b, c, respectively. Both SSBDDs represent the circuit in Fig. 3.35a. Calculating the average length of the simulated paths in Gy(x0, xn) according to (3.10), by summing the probability-weighted lengths of all activated paths, we get

Lav(Gy(x0, xn)) = p + (n + 1)p^n + np^{n−1} + (n − 1)p^{n−2} + ··· + 2p².

Similarly, we calculate the average length of the paths in Gy(x1, x0):

Lav(Gy(x1, x0)) = (n + 1)p^n + np^{n−1} + np^{n−2} + ··· + 3p² + 2p.


Comparing the two sums, we see that the difference can be calculated for any value of n ≥ 2 as

Δ = Lav(Gy(x1, x0)) − Lav(Gy(x0, xn)) = p^{n−2} + p^{n−3} + ··· + p² + p,

where n is the number of nodes in the full set of nodes {x0, x1, …, xn}, which is one less than the length of the horizontal path, equal to the number of inputs of the related AND gate. ◼ Note that the special case of n = 2 was discussed above using Fig. 3.34. Example 3.20 Let us calculate the average lengths of the simulated paths and the gain in simulation speed for the case when the number of nodes of the horizontal path in the SSBDD in Fig. 3.35 is n = 4, i.e., the case of the AND gate with 5 inputs. For the case of Fig. 3.35a, we have Lav(a) = 1.94, and for Fig. 3.35b, we have Lav(b) = 1.68. The gain in simulation speed resulting from this difference is 12.5%. Compared with the result of Example 3.19, we see that the gain has decreased from 22 to 12.5%. In a similar way as Lemma 3.7 considered the case of simulating AND gates connected to an OR gate, we can also prove the following statement regarding OR gates connected to an AND gate. Lemma 3.8 The average length of the simulated paths in a subgraph of an SSBDD, consisting of a vertical path with a single neighbor node, is minimal if the node is swapped to the left of the path in the horizontal direction. The gain in the average length of the simulated paths, compared to the opposite ordering of the node and the vertical path representing an n-input OR gate, is

Δ = Σ_{k=1}^{n−1} p^k,

where n is the number of nodes in the vertical path, which is equal to the number of inputs of the OR gate it represents.

3.3.2.4 Swapping of Paths in SSBDDs to Speed up Simulation

Let us consider the case of two AND gates with different numbers of inputs, connected to the OR gate. Figure 3.36 presents the general case of the circuit and its two swapped versions of SSBDDs. Recalling the method of SSBDD synthesis, where we used the superposition procedure for substituting the nodes by BDDs, we can apply here the opposite action and substitute the paths, denoted as G1 and G2 in Fig. 3.36, with two nodes m1 and m2 , respectively. Then, we would have, instead of the given circuit in Fig. 3.36a, an OR gate with two inputs. According to Lemma 3.6, if p(x(m1 )) > p( x(m2 )), then the node m1 must have the first place in ordering the input variables of the OR gate. This would correspond to the case of Fig. 3.36b.



Fig. 3.36 Swapping of two gates represented by horizontal parallel paths

By the reasoning above, we can set up the hypothesis that the rule of Lemma 3.6 will also hold after the superposition of variables with functions. For generalization purposes, let us assume that the probabilities of the value 1 are equal to p = 0.5 for all variables, and let us prove the following Lemma for the case where the numbers of inputs of the two gates differ by one: k = n − 1. Lemma 3.9 The average length of the simulated paths in a subgraph of an SSBDD, consisting of two horizontal neighboring paths of different lengths, is minimal if the shorter path is located above the longer one in the vertical direction. If the paths differ in the number of nodes by one, the gain from the described order of the paths, and the respective increase of the simulation speed, is p^k. Proof Let us prove the Lemma by mathematical induction. The special and simplest case, serving as the base, with n = 2 and k = 1, was already analyzed in Sect. 3.3.2.3. It was shown there that, according to Lemma 3.7, the shorter path with length k = 1 (the number of nodes in the path) must be swapped above the longer path with length n = 2, due to the resulting shorter average length of the simulated paths. It was also shown that the difference in the average path lengths is p in favor of the order with the shorter path on top, where k < n. Let us show now that if these statements hold for the case k = 1, they also hold for the next case, k = 2. Let us adjust the SSBDDs in Fig. 3.36 to the case where k = 2 and n = 3, as shown in Fig. 3.37. The calculated average lengths of each traversed path are added at the path endpoints.


Fig. 3.37 To proof of Lemma 3.9

For better understanding, each data row below the graphs shows the node variable that was the last one visited in the upper path. Comparing the components of the sums of the statistical average path lengths below the graphs, we find that two components, 2p² and 3p², do not have counterparts among the other components. From that, we see that the SSBDD in Fig. 3.37a has a shorter average path length, with the difference 3p² − 2p² = p². This is in accordance with the statement of the Lemma and with the base case k = 1. We also checked the case of k = 3, where the difference is p³, which is also in accordance with the Lemma. ◼ A statement similar to Lemma 3.9 can also be proved for two parallel vertical neighboring paths, which represent a circuit with two OR gates connected to an AND gate, where the numbers of inputs differ by one. Let us now investigate how the difference in the average path lengths of a subgraph of an SSBDD, consisting of two parallel horizontal paths, changes if the difference in the number of nodes of these paths increases. It is easy to prove, again by induction, the following statement. Lemma 3.10 Let us have a subgraph of an SSBDD consisting of two horizontal (vertical) neighboring paths with different lengths n1 and n2, where n2 > n1, representing two AND gates connected to an OR gate (two OR gates connected to an AND gate). If the shorter path is located above the longer one in the vertical direction (to the left of the longer one in the horizontal direction), the gain in terms of the average length of the simulated paths in this subgraph, and the respective increase of the simulation speed of the related sub-circuit, compared with the opposite swapping of the paths, is equal to

Δ = Σ_{k=n1}^{n2−1} p^k.



Fig. 3.38 The proof for Lemma 3.10

Proof Let us prove the Lemma by mathematical induction, showing how the gain in the average simulated path length depends on the difference between the path lengths. Let us take as the base the case k = 1 at n = 4, which was analyzed in Sect. 3.3.2.3, where it was shown that Lemma 3.7 holds for the gain. Let us now show that the statement also holds for k = 2 and n = 4. Let us adjust the SSBDDs in Fig. 3.36 to the case where k = 2 and n = 4; the new SSBDDs are presented in Fig. 3.38. The calculated average path lengths for each traversed path in the SSBDDs are added at the path endpoints. For better understanding, each data row below the graphs shows the node variable that was the last one visited in the upper path. Comparing the components of the sums of the average path lengths below the graphs, we find that the components 2p², 3p³ for (a), and 3p², 4p³ for (b) do not have counterparts among the other components. From that, we conclude that the SSBDD in Fig. 3.38a has a shorter average path length, with the difference Δ = (3p² + 4p³) − (2p² + 3p³) = p² + p³. This is in accordance with the statement of the Lemma and with the base case k = 1. We also checked the case of k = 3, where the difference is p³, which is also in accordance with the Lemma. ◼ Theorem 3.3 The statistical average length of the simulated paths in an SSBDD subgraph which consists of two horizontal (vertical) neighboring paths, and which represents two AND gates connected to an OR gate (two OR gates connected to an AND gate), is minimal if the path with fewer nodes is swapped vertically above the longer one (horizontally to the left of the longer one). Proof The proof results from Lemmas 3.7, 3.9, and 3.10. ◼



A summary of the gains discussed in the Lemmas, for different combinations of the path lengths n and k < n of the two paths in the SSBDD, which correspond to the numbers of gate inputs, is shown in Table 3.1. We denote by Δ the general formula of the difference in the average length of simulated paths for the two alternatives of swapping


Table 3.1 The gains achieved by optimization of SSBDDs according to Theorem 3.3

n/k   k = 1: # paths, Δ, Gain %        k = 2: # paths, Δ, Gain %      k = 3: # paths, Δ, Gain %
2     4/5,   p,              22.2      –                              –
3     5/7,   p² + p,         28.6      9/10,  p²,          8.2        –
4     6/9,   p³ + p² + p,    31.0      11/13, p³ + p²,     11.4       16/17, p³,  4.1

the paths in the SSBDD, and we show the gain in percent as well. The column "# paths" shows the total number of paths in the two competing SSBDDs. From Table 3.1, we draw the following conclusions: (1) all columns show that the larger the difference between n and k, the more significant the gain; (2) along with the increase of the difference between n and k (Column 1), the increment of the gain is decreasing; (3) if the lengths of both paths n and k increase, the gain reduces (rows and diagonal). A brute-force check of these gains is sketched below.
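The gains of Table 3.1 can be reproduced by brute force: build the two competing SSBDD orderings for a k-input and an n-input AND path feeding the same OR gate, enumerate all input patterns at p = 0.5, and compare the average traversal lengths. The sketch below is a self-contained illustration under assumed naming conventions (terminals '#0'/'#1', generated node names) and reproduces the table entries up to rounding.

```python
from itertools import product

def two_and_paths(k, n, short_first=True):
    """SSBDD of y = (a1*...*ak) OR (b1*...*bn) built as two horizontal paths."""
    a = [f"a{i}" for i in range(1, k + 1)]
    b = [f"b{j}" for j in range(1, n + 1)]
    first, second = (a, b) if short_first else (b, a)
    g = {}
    for i, m in enumerate(first):
        g[m] = (second[0], first[i + 1] if i + 1 < len(first) else "#1")
    for j, m in enumerate(second):
        g[m] = ("#0", second[j + 1] if j + 1 < len(second) else "#1")
    return first[0], g

def avg_len(root, g, p=0.5):
    """Average traversed path length (3.10) at equal signal probabilities p."""
    nodes = list(g)
    total = 0.0
    for values in product((0, 1), repeat=len(nodes)):
        val = dict(zip(nodes, values))
        prob = 1.0
        for x in nodes:
            prob *= p if val[x] else 1 - p
        m, length = root, 0
        while not m.startswith("#"):
            length += 1
            m = g[m][val[m]]
        total += prob * length
    return total

for n, k in [(2, 1), (3, 1), (4, 1), (3, 2)]:
    good = avg_len(*two_and_paths(k, n, True))     # shorter path simulated first
    bad = avg_len(*two_and_paths(k, n, False))     # longer path simulated first
    print(n, k, round(100 * (bad - good) / bad, 1))  # 22.2, 28.6, 31.1, 8.2
```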

3.3.2.5 Swapping of Sub-graphs in SSBDDs

Let us generalize the approach of mutual swapping of homogeneous neighboring paths, considered in previous sections, to swapping arbitrary modules. The concept is based on the superposition of nodes of SSBDDs with other SSBDDs used in the synthesis of the graphs in Sect. 3.1.2. Now, we are looking for the possibility of applying the superposition of nodes also for optimization purposes of SSBDDs. Example 3.21 Let us have an AND gate y = ab in Fig. 3.39, whose inputs are connected to the outputs of AND, and OR gates. The SSBDD for the AND gate is a horizontal string of two nodes a, and b. According to Lemma 3.5, we can order the nodes a, and b according to the ascending values of probabilities p(a), and p(b). Assume that the probabilities are equal p(a) = p(b) = 0.5. In this case, the order of the nodes in the SSBDD Gy in Fig. 3.39a is arbitrary. Let us connect now in Fig. 3.39a, the AND, and OR gates to the inputs a, and b of the AND gate y, respectively. Extend also the SSBDD of the gate y in Fig. 3.39a by superposition of its nodes a, and b with SSBDDs of the OR and AND gates, respectively. Figure 3.39b shows the new SSBDD Gy,1 . The average length of simulated paths in the SSBDD Gy,1 , according to (3.10), is L av (Gy,1 ) = 2.63. Initially, we assumed for the input probabilities for the AND gate y, that p(a) = p(b) = 0.5. After connecting the OR, and AND gates to the inputs of the AND gate in Fig. 3.39a, it is not anymore the case. If we assume that the probabilities of 1 for all primary input variables are equal x i = 0.5, then the probabilities for a, and b


Fig. 3.39 Swapping gates to minimize the average length of simulated paths

will change and will no longer be equal. In this new situation, we have p(a) = 0.75 for the OR gate and p(b) = 0.25 for the AND gate. According to Lemma 3.5, we have to reorder the nodes a and b in Gy,1, which leads to the need of swapping the subgraphs Ga and Gb, as shown in Fig. 3.39c. The updated average length of the simulated paths in the new SSBDD Gy,2 is Lav(Gy,2) = 1.88. The difference is Δ = Lav(Gy,1) − Lav(Gy,2) = 2.63 − 1.88 = 0.75, or 28.5% with respect to the baseline (the worse choice of swapping the subgraphs). It is also important to find out whether the impact of such SSBDD transformations grows when the number of gate inputs increases. In Fig. 3.40, the same configuration of gates as in Fig. 3.39 is shown, where the number of inputs of the gates is increased by one. The average lengths of the simulated paths in the SSBDDs Gy,1 and Gy,2, and the difference Δ of these average lengths, are calculated as Δ = Lav(Gy,1) − Lav(Gy,2) = 3.28 − 1.96 = 1.32, which gives a gain of 40.2% compared to the "wrong" swapping of the subgraphs. For digital design practice, this improvement would mean about a 40% increase in the speed of simulation.


Fig. 3.40 The effect of swapping modules is greater as the number of gate inputs increases
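The averages quoted in Example 3.21 and for Fig. 3.40 can be reproduced by enumerating all input patterns over the corresponding SSBDD orderings. The sketch below assumes equal input probabilities of 0.5 and a successor-table encoding with generic node names invented for this illustration (they need not match the figure labels); it yields 2.625 vs. 1.875 for the two-input case (rounded to 2.63 and 1.88 in the text) and about 3.28 vs. 1.97 for the three-input case of Fig. 3.40 (quoted as 3.28 and 1.96 in the text).

```python
from itertools import product

def avg_len(graph, root, p=0.5):
    """Average traversed path length (3.10) at equal signal probabilities p."""
    xs = list(graph)
    total = 0.0
    for values in product((0, 1), repeat=len(xs)):
        val = dict(zip(xs, values))
        prob = 1.0
        for x in xs:
            prob *= p if val[x] else 1 - p
        m, length = root, 0
        while not m.startswith("#"):
            length += 1
            m = graph[m][val[m]]
        total += prob * length
    return total

# Example 3.21: a 2-input OR and a 2-input AND feeding a 2-input AND gate y.
or_first  = {"o1": ("o2", "a1"), "o2": ("#0", "a1"),
             "a1": ("#0", "a2"), "a2": ("#0", "#1")}      # OR sub-graph simulated first
and_first = {"a1": ("#0", "a2"), "a2": ("#0", "o1"),
             "o1": ("o2", "#1"), "o2": ("#0", "#1")}      # AND sub-graph simulated first
print(avg_len(or_first, "o1"), avg_len(and_first, "a1"))  # 2.625 1.875

# Fig. 3.40: the same configuration with 3-input gates.
or3  = {"o1": ("o2", "a1"), "o2": ("o3", "a1"), "o3": ("#0", "a1"),
        "a1": ("#0", "a2"), "a2": ("#0", "a3"), "a3": ("#0", "#1")}
and3 = {"a1": ("#0", "a2"), "a2": ("#0", "a3"), "a3": ("#0", "o1"),
        "o1": ("o2", "#1"), "o2": ("o3", "#1"), "o3": ("#0", "#1")}
print(avg_len(or3, "o1"), avg_len(and3, "a1"))            # 3.28125 1.96875
```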


Example 3.22 Let us consider another example. In Fig. 3.41, we present a case where the theory seems "not to work". The two versions Gy,1 and Gy,2 of the SSBDD with the same function differ in that the sub-graphs G1 and G2 are swapped. The horizontal neighborhood of the sub-graphs G1 and G2 corresponds to the AND connection of two modules. Hence, according to Lemma 3.5, and due to the calculated probabilities p(G1) = 0.625 and p(G2) = 0.75, the subgraph G1 with the lower probability, p(G1) < p(G2), must be swapped to the left of G2. For that reason, we expect that Lav(Gy,1) < Lav(Gy,2), and that the SSBDD Gy,1 will provide faster simulation than Gy,2. The expectation is confirmed; we get the following result: Δ = Lav(Gy,2) − Lav(Gy,1) = 3.64 − 3.04 = 0.6. From this it follows that the configuration Gy,1 is a better solution than Gy,2. However, there is another SSBDD Gy,3 in Fig. 3.42, which is functionally equivalent to Gy,1 and Gy,2 in Fig. 3.41, but provides an even shorter average length of simulated paths, Lav(Gy,3) = 2.53. The reason is that the SSBDD G1, used in the comparison in Fig. 3.41, is not constructed according to the rule of Lemma 3.6. In general, Lemmas 3.5 and 3.6 may fail to order sub-graphs that have not themselves been constructed according to these Lemmas. If we swap the graphs G3 and G4 in G2, we get the new graph Gy,3 (AND connection) in Fig. 3.42 with the best solution. From the above, it follows that the search for the SSBDD with the minimal average length of simulated paths for a given circuit must begin from the front-end, from


Fig. 3.41 Two competing equivalent SSBDDs with hidden deviation


Fig. 3.42 The best of three versions of equivalent SSBDDs



the inputs of the circuit, and continue iteratively towards the back-end, i.e., towards the outputs of the circuit.

3.3.3 Optimization of SSBDDs by Reconfiguring the Structure

Considering the conclusion of the previous section, we now develop a method and algorithms for the minimization of the average length of simulated paths in a given SSBDD, constructed by the method of superposition of the elementary BDDs of the gates in the given FFR. We propose a recursive application of the rules of Lemmas 3.5 and 3.6, starting from the homogeneous neighboring paths in the SSBDD, which represent the gates, and continuing the procedure by applying Lemmas 3.5 and 3.6 to continuously growing adjacent sub-graphs. The method is based on the idea of superposition, where any subgraph G of the given SSBDD can be reduced to a node m, so that x(m) represents the function of G. This justifies that Lemmas 3.5 and 3.6, targeting node swapping, are applicable also to sub-graph swapping.

3.3.3.1 Reordering Nodes in SSBDDs for Optimization

Let us first describe the method using an example, and then generalize the whole procedure as two algorithms. The goal is to minimize the statistical average length of the simulated paths in the given SSBDD using the probabilities of the signals in the circuit. We assume that the probabilities of the primary input signals are given; if their values are not available, they should be taken as equal for the values 0 and 1. Even then, the probabilities of the internal signals will in general not remain equal, and this is the decisive point in the optimization procedure. We can calculate the probabilities of all signals in the circuit by extensive simulation with random input patterns or with application-specific sequences of signals. Example 3.23 Consider a circuit and its SSBDD Gy in Fig. 3.43. The graph represents the Boolean formula y = [x1x2 ∨ x3(x4 ∨ x5)]x6. Assume that we are given the following probabilities of the value 1 for the input variables: p(x1) = 0.8, p(x2) = 0.2, p(x3) = 0.9, p(x4) = 0.1, p(x5) = 0.6, p(x6) = 0.1. We carry out the procedure by traversing the nodes of the SSBDD in the order of their ranking (following the Hamiltonian path), given by the numbers next to the nodes. Moving along the path, we look for homogeneous path segments where all nodes have a common neighbor. Such a path segment represents a gate, for which we have to decide whether there is a need to swap the nodes according to Lemmas 3.5 and 3.6 or not.

3.3 Equivalent Transformations of SSBDDs

C12 C45 x4 x5

1

x1 x2 x3

C16

& &

89

1

(a)

C15

x1 x3

& C35

y

x6

y

G35

1

G12

x2 x4

3

2

G6

x6

6

4

G45 x5

5

(b)

Fig. 3.43 To Example 3.23—minimization of the average length of simulated paths

The first path segment we find is the subgraph G12(x1, x2) with the common neighbor x3. The path segment represents an AND gate, and according to Lemma 3.5, the nodes x1 and x2 should be swapped, since p(x1) > p(x2). We get a new graph G21(x2, x1), but we will handle it later as a node m21 with the node variable x(m21) representing G21(x2, x1) (to be substituted in the final phase back again with the improved G21(x2, x1)). We calculate the probability p(x(m21)) = p(x2) p(x1) = 0.16. We leave the graph G21 waiting until the traversal of the graph reaches the node x6 (the neighbor of G21), which will be the common node for G21 and for another subgraph G*, for possible swapping. As we will see later, this subgraph G* will be G35. Let us call the node x6 a meeting point. Reaching this point of the procedure, we will decide whether the two subgraphs G21 and G35 (possibly modified) need swapping. The next node x3 on the track does not have a common neighbor with x4. We define x3 as a segment of length 1, and leave it as well waiting for another embedded sub-graph with which it has a common node, the terminal #0 (not shown in Fig. 3.43). The next segment on the track is the subgraph G45(x4, x5), which is the OR gate. We swap the nodes x4 and x5, according to Lemma 3.6, because p(x5) > p(x4). We define a new graph G54(x5, x4), to be handled later as a node m54 with the node variable x(m54), and with the calculated probability p(x(m54)) = 1 − (1 − p(x5))(1 − p(x4)) = 0.64. Now we discover that there is a node x3 waiting at the meeting point #0, which is a common node also for G54. The node x3 and G54, alias the node m54, form an AND gate. Since p(x(m54)) < p(x3), we must swap the node x3 and G54 according to Lemma 3.5. We define a new subgraph G53(G54, x3), alias a node m53, and calculate the probability p(x(m53)) = p(x(m54)) p(x3) = 0.58. We have now arrived at the meeting point x6, where G21 is waiting for a partner to form a new subgraph. The two meeting graphs G21, alias m21, and G53, alias m53, form an OR gate. Since p(x(m21)) < p(x(m53)), then according to Lemma 3.6 the graphs G21 and G53 have to be swapped, and they form a new graph G51(G53, G21), alias m51, with the probability p(x(m51)) = 1 − (1 − p(x(m53)))(1 − p(x(m21))) = 0.65. Finally, since the new graph G51 and the node x6 have the common neighbor #0, they form an AND gate, and since p(x(m51)) > p(x6), they must be swapped, according


Fig. 3.44 To Example 3.23: the results of equivalent transformations of subgraphs

to Lemma 3.5. We finish the SSBDD optimization procedure with the last equivalent transformation of the given SSBDD in Fig. 3.43, forming a new SSBDD G61(x6, G51). Figure 3.44 illustrates the described procedure of minimizing the statistical average length of the simulated paths in the circuit of Fig. 3.43a by transforming the initial SSBDD Gy in Fig. 3.43b into the form G51(G53, G21) in Fig. 3.44a and b. The last conversion swaps the graphs G6 and G51 to form the final optimized graph G61 in Fig. 3.44c. The average length of the simulation path for the SSBDD in Fig. 3.43b is 4.72, whereas for the final SSBDD in Fig. 3.44c it is 1.25. The gain in simulation speed is 3.8 times.

Figure 3.45 shows two equivalent SSBDDs for the circuit in Fig. 3.43a, with the worst ordering of nodes (Fig. 3.45a) and with the best ordering of nodes (Fig. 3.45b), for the case of an equal distribution of input signal probabilities. The average length of the simulated paths is 3.72 for the first case and 2.34 for the second case; the gain in simulation speed is 1.59 times. These two examples show that the simulation speed depends strongly, first, on the distribution of signal probabilities and, second, on the ordering of nodes in SSBDDs.
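The signal probabilities used throughout Example 3.23 can also be obtained by the random simulation mentioned at the beginning of this subsection. The following Python sketch (ours, not from the book; the internal signal names a, b, c are introduced only for illustration) estimates the probabilities of 1 for the gate outputs of the circuit y = [x1 x2 ∨ x3 (x4 ∨ x5)] x6 by Monte Carlo simulation:

import random

# Input probabilities of value 1, as given in Example 3.23
P = {'x1': 0.8, 'x2': 0.2, 'x3': 0.9, 'x4': 0.1, 'x5': 0.6, 'x6': 0.1}

def estimate_signal_probabilities(n_patterns=100000, seed=1):
    """Estimate probabilities of 1 for the internal signals of the example circuit
    by simulating independent random input patterns."""
    random.seed(seed)
    counts = {'a': 0, 'b': 0, 'c': 0, 'y': 0}
    for _ in range(n_patterns):
        x = {name: random.random() < p for name, p in P.items()}
        a = x['x1'] and x['x2']        # AND gate x1 x2
        b = x['x4'] or x['x5']         # OR gate x4 + x5
        c = a or (x['x3'] and b)       # OR of the two product terms
        y = c and x['x6']              # final AND with x6
        for name, v in zip(('a', 'b', 'c', 'y'), (a, b, c, y)):
            counts[name] += v
    return {name: cnt / n_patterns for name, cnt in counts.items()}

print(estimate_signal_probabilities())
# roughly: p(a) ≈ 0.16, p(b) ≈ 0.64, p(c) ≈ 0.64, p(y) ≈ 0.06

The estimates agree with the probabilities p(x(m21)) = 0.16 and p(x(m54)) = 0.64 computed analytically in the example.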

3.3.3.2 Algorithm for Reordering of Nodes

In the following, we present two algorithms for the optimization of SSBDDs to increase the speed of simulating the input patterns applied to the related circuit. The algorithms implement a search for the best order of nodes in the SSBDD model, enabling the simulator to traverse, on average, a minimum number of nodes during fault-free or fault simulation. The first algorithm (Algorithm 3.2, SWAP) implements the basic operation of the optimization procedure: it analyzes whether two adjacent subgraphs G* and G in the SSBDD must be swapped in their positions or not. The criteria for swapping for AND- and OR-type connections in the SSBDD are given in Lemmas 3.5 and 3.6,

Fig. 3.45 Comparison of the worst and the best ordering of nodes in two equivalent SSBDDs at equal probabilities

respectively. When swapping is needed, the algorithm performs the respective modifications in the SSBDD and calculates the probability p(G*) of producing the signal 1 for the new joint subgraph G*(G*, G), consisting of the previous G* and G.

Algorithm 3.2 SWAP two subgraphs in SSBDD
1. Given: G*, G, p(G*), p(G)
2. IF G*, G are connected horizontally (AND gate) *** Check criterion of Lemma 3.5
3.   THEN IF p(G*) > p(G)
4.     THEN (G*, G) → G*(G, G*) *** Subgraphs G* and G are swapped
5.     ELSE (G*, G) → G*(G*, G) *** No modification in SSBDD
6. IF G*, G are connected vertically (OR gate) *** Check criterion of Lemma 3.6
7.   THEN IF p(G*) > p(G)
8.     THEN (G*, G) → G*(G*, G) *** No modification in SSBDD
9.     ELSE (G*, G) → G*(G, G*) *** Subgraphs G* and G are swapped
10. IF G* is AND type, p(G*) = p(G*) p(G)
11. IF G* is OR type, p(G*) = 1 − (1 − p(G*))(1 − p(G))
12. Return {G*, p(G*)}
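As an illustration only (not part of the book's algorithms), the decision logic of Algorithm 3.2 can be sketched in Python as follows, assuming that each subgraph is summarized by its probability of producing 1 and that the swap criteria follow Lemmas 3.5 and 3.6 as they are applied in Example 3.23:

def swap_subgraphs(p_g_star, p_g, connection):
    """Sketch of Algorithm 3.2: decide whether the adjacent subgraphs G* and G
    must be swapped and compute the probability of the joint subgraph.
    connection is 'AND' for a horizontal link and 'OR' for a vertical link."""
    if connection == 'AND':
        # AND connection (Lemma 3.5): the less probable subgraph should come first
        swapped = p_g_star > p_g
        p_joint = p_g_star * p_g
    elif connection == 'OR':
        # OR connection (Lemma 3.6): the more probable subgraph should come first
        swapped = p_g_star < p_g
        p_joint = 1.0 - (1.0 - p_g_star) * (1.0 - p_g)
    else:
        raise ValueError("connection must be 'AND' or 'OR'")
    return swapped, p_joint

# Merging m21 (p = 0.16) and m53 (p = 0.58) of Example 3.23 as an OR connection:
print(swap_subgraphs(0.16, 0.58, 'OR'))   # (True, 0.6472)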

The second algorithm (Algorithm 3.3, Optimize SSBDD) organizes the traversal through all nodes of the SSBDD along the Hamiltonian path to recognize recursively the pairs of subgraphs (G*, G(m)) on that path that have a common neighbor as candidates for swapping. The subgraphs may be single nodes, homogeneous paths, or arbitrarily complex subgraphs. To organize the recursive operations on graphs embedded in other graphs, a stack is introduced. To each entry in the stack, an end-node pointer (meeting point) of the related cycle of the recursive operation is attributed. The stack holds all subgraphs with their related data in a recursive queue, waiting to be paired with the next subgraph recognized during the traversal along the Hamiltonian path of the SSBDD.


Algorithm 3.3 Optimize SSBDD
1. Given: G, |M|, {p(m)}
2. SET m = 0
3. WHILE m < |M|
4. BEGIN
5.   m = m + 1
6.   Recognize G(m)
7.   READ STACK {G*(G*, G), p(G*), Pointer*} *** End of the recursive cycle
8.   SWAP (G*, G(m)) *** Call of Algorithm 3.2 SWAP
9.   WRITE STACK: G*(m), p(G*), Pointer(m) *** Start of a new recursive cycle
10. END
11. Return: Modified G*

The step "Recognize G(m)" is a procedure that is not described in detail here; it is explained in Sect. 3.3.1. The goal of the procedure is to identify either a string of nodes with a common neighbor, to create an elementary subgraph, or two subgraphs (G(m), G*) with a common neighbor, to create a new joint graph G* = {G(m), G*}. For a better understanding of Algorithms 3.2 and 3.3, it is useful to follow the example presented in Sect. 3.3.1.
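To convey the combined effect of Algorithms 3.2 and 3.3 without modeling the SSBDD data structure itself, the following Python sketch (an illustration under simplifying assumptions, not the book's implementation) reorders a nested AND/OR expression so that, inside every AND group, the operand with the lower probability of 1 comes first and, inside every OR group, the operand with the higher probability comes first:

def reorder(expr, p_inputs):
    """expr is either an input name or a tuple (op, left, right) with op in {'AND', 'OR'}.
    Returns (reordered_expr, probability_of_1)."""
    if isinstance(expr, str):                     # a leaf: a primary input
        return expr, p_inputs[expr]
    op, left, right = expr
    left, p_left = reorder(left, p_inputs)
    right, p_right = reorder(right, p_inputs)
    if op == 'AND':
        p = p_left * p_right
        if p_left > p_right:                      # less probable operand first
            left, right = right, left
    else:                                         # 'OR'
        p = 1.0 - (1.0 - p_left) * (1.0 - p_right)
        if p_left < p_right:                      # more probable operand first
            left, right = right, left
    return (op, left, right), p

# Example 3.23: y = ((x1 AND x2) OR (x3 AND (x4 OR x5))) AND x6
y = ('AND', ('OR', ('AND', 'x1', 'x2'), ('AND', 'x3', ('OR', 'x4', 'x5'))), 'x6')
p = {'x1': 0.8, 'x2': 0.2, 'x3': 0.9, 'x4': 0.1, 'x5': 0.6, 'x6': 0.1}
print(reorder(y, p))

With the probabilities of Example 3.23, the reordered expression mirrors the final ordering G61(x6, G51(G53, G21)) obtained above.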

3.3.4 Restructuring of SSBDDs for Sharing the Sub-graphs

3.3.4.1 Reconfiguration of SSBDDs to Support S3BDD Synthesis

The one-to-one correspondence between SSBDD nodes and FFR paths allows compact modeling of gate-level FFRs without representing explicitly the internal connections between the gates of the FFRs. As a result, the SSBDDs represent the faults in FFRs in collapsed form, reducing the cost of fault simulation and test generation. The last two properties, the compaction of the model and the collapsed fault representation inherent to the SSBDD model with direct access to the representatives of collapsed faults (without external fault lists outside the model), deserve further strengthening. A promising direction is to extend the signal paths represented by the nodes of SSBDDs beyond the borders of tree-like FFRs, to achieve in this way further compaction in the modeling of signal paths and faults. The compaction of the model, in turn, brings with it the problem of sharing resources jointly between several parts of the model. These developments are another application of the theory of equivalent transformations of SSBDDs, presented in Sect. 3.3.3 for the optimization of the SSBDD model to increase the speed of simulation.

Restricting the generation of SSBDDs to tree-like FFRs was the principal constraint in the generation of SSBDDs. The reason was to avoid the explosion of

Fig. 3.46 Presentation of a network of three FFRs with three SSBDDs and a single S3BDD [200]

the model size due to the repeated superposition of the fan-out nodes with the same sub-graph created for the common fan-out stem. An attempt to overcome this limitation of SSBDDs was the introduction of a new type of structural BDDs called Shared Structurally Synthesized BDDs (S3BDDs) [335, 462]. The S3BDD model combines the extension of superposition beyond the fan-outs of the FFR network with the concept of sharing the resources of the model.

Figure 3.46 illustrates a network of three FFRs (Fig. 3.46a), represented by a set of three SSBDDs Gy1, Gy2, and Gx3 (Fig. 3.46b). Figure 3.46c illustrates how we can join these three separate graphs into a single shared SSBDD model that has three root nodes with entry arrows y1, y2, and x3 into the graphs Gy1, Gy2 and Gx3, respectively. As we can see in Fig. 3.46c, the graphs Gy1 and Gy2 share the embedded subgraph Gx3, in a similar way as the two output functions y1 = f(X) and y2 = f(X) of the circuit in Fig. 3.46a share the sub-function x3 = f(X).

We can notice here two essential actions that are needed when performing the synthesis of the S3BDD. First, the superposition process extends beyond the fan-out net x3. Both graphs Gy1 and Gy2 of the SSBDD model in Fig. 3.46b contain the node x3 to be replaced with the graph Gx3 when creating the S3BDD. However, we see that only a single superposition has taken place instead of two. To avoid the second superposition, the graphs Gy1 and Gy2 (called supergraphs) share the subgraph Gx3. In a similar way, if there were n branches in a fan-out net, we would always like to have only a single superposition, so that the other n − 1 graphs would share the same joint subgraph related to the fan-out stem. The second action concerns graph conversion. To connect the graphs Gy1 and Gx3 by replacing the node x3 in Gy1 with the graph Gx3, we must swap the nodes x3 and x2 in Gy1 to move the node x3 into the last position in the graph. Why?


Consider each graph (sub-graph) as a binary program with nodes as "if-then-else" instructions. If a node refers to another sub-graph, this is like a "call for a subroutine". The superposition procedure in constructing S3BDDs means replacing a call node in a main program by a subroutine. The problem of minimizing the number of nodes in S3BDDs is connected with the constraint that only the sub-graphs at the ends of supergraphs can be treated as subroutines that we can share to merge different supergraphs. The reason for this constraint follows from the general concept of SSBDDs (and of all BDDs): there is no return mechanism for the jumps into subroutines. This is the reason why we need to reorder the nodes x3 and x2 in Gy1 before merging the graphs Gy1 and Gx3 in Fig. 3.46c.

The concept of constructing S3BDDs in this example lies in replacing the end-nodes of the joining graphs Gy1 and Gy2 with Gx3. The basis of the concept is the assumption that in all joining SSBDDs, the node with the required variable x3 is the end node of the Hamiltonian path of the joining graph. In general, such a coincidence, where the desired node happens to be at the end of the Hamiltonian path, is purely accidental unless specially targeted. More commonly, the joining SSBDDs need to go through suitable equivalent transformations to shift the desired nodes to the end of the related Hamiltonian paths.
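The subroutine analogy above can be made concrete with a small Python sketch (the functions and their Boolean contents are invented for illustration; only the sharing structure mirrors Fig. 3.46): the two "supergraph programs" for y1 and y2 both call the shared "subroutine" for x3, which exists only once, just as the single subgraph Gx3 serves both Gy1 and Gy2.

def g_x3(x4, x5, x6):
    # shared subroutine: plays the role of the subgraph Gx3 (function chosen arbitrarily)
    return x4 and (x5 or x6)

def g_y1(x1, x2, x4, x5, x6):
    # supergraph Gy1: calls the shared subroutine instead of duplicating it
    return (x1 or x2) and g_x3(x4, x5, x6)

def g_y2(x7, x8, x4, x5, x6):
    # supergraph Gy2: reuses the very same subroutine
    return (x7 and x8) or g_x3(x4, x5, x6)

print(g_y1(1, 0, 1, 0, 1), g_y2(0, 1, 1, 1, 0))   # 1 1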

3.3.4.2 Shifting the Nodes in SSBDDs Along Hamiltonian Paths

Based on the discussion above, we formulate the following task: shift a given node of an SSBDD to the end of its Hamiltonian path using as few iterative equivalent transformations of the SSBDD as possible. We illustrate the task in Fig. 3.47, where we must shift the node x8 to the position of the last node in the SSBDD Gy. Figure 3.47 shows the graph in the standard form (a) and in the stretched-out form that highlights the Hamiltonian path (b). The main operational step of the shifting process is to identify a true sub-graph (in terms of Definition 3.15) with the target node in the last position, such that the sub-graph has a true neighbor sub-graph, and then to swap both sub-graphs, helping the target node to make a jump towards the end of the Hamiltonian path. In Fig. 3.47, two sub-graphs G7,8 and G9, belonging to the path segment G79 that can be identified as a 3-input AND gate, are highlighted in grey. Swapping G7,8 and G9 moves x8 a step further. For solving the task of shifting a node to the end of the SSBDD, we propose the following iterative procedure in the form of Algorithm 3.4, where each iterative step swaps two neighboring true subgraphs of the SSBDD according to Theorem 3.2.

Fig. 3.47 The problem of shifting a node in SSBDD to the end of its Hamiltonian path [201]

Algorithm 3.4 Shifting a node to the end of SSBDD
1. Given: Gy; m is the node to be shifted in Gy
2. WHILE the node m is not yet the end-node in Gy
3. BEGIN
4.   Determine a true sub-graph Gy(mR, mE) ⊆ Gy for m, so that mE = m, and mR is found as the crossroad of two borderlines, mR = b(m0E) ∩ b(m1E), according to Lemma 3.3
5.   Find the neighboring sub-graph Gy(mR*, mE*), so that mR* = mE + 1, and mE and mE* both share either m0E for horizontal neighbours or m1E for vertical neighbours
6.   Swap the subgraphs Gy(mR, mE) and Gy(mR*, mE*) to move the node m towards the end of the SSBDD
7. END

Let us explain the main steps 4 and 5 of Algorithm 3.4 using Fig. 3.48. We pick the node x8 to be shifted and build around it a true sub-graph Gy(mR, mE) of Gy. The end-node mE of this sub-graph will be x8. From the two successor nodes of x8, m0E and m1E, we draw backward the horizontal and vertical homogeneous paths, called the borderlines b(m0E) and b(m1E), respectively. The crosspoint x2 of the borderlines is called a crossroad in Definition 3.14. The crossroad x2 will be the root node mR of the sub-graph Gy(mR, mE). According to Lemma 3.3, the sub-graph Gy(x2, x8) is true. Then we find the horizontal neighbor of Gy(x2, x8), the sub-graph Gy(mR*, mE*), as large as possible. The neighbor will be Gy(x10, x11). The

Fig. 3.48 Typical step of Algorithm 3.4 (moving the node x8 towards the end of the graph) [201]

node x10 is the next node after x8 on the Hamiltonian path, and x11 is the farthest node on the Hamiltonian path that has the common neighbor x12 with x8.

3.4 Shared Structurally Synthesized BDD

In this chapter, we present a new type of SSBDDs called Shared SSBDDs (S3BDDs), which enable further compaction of the SSBDD-based models of given FFR networks. We have learned that SSBDDs help to reduce the model complexity compared to the gate-level representation, since instead of gates they represent sub-networks of gates, and the algorithms running on SSBDDs need no separate and diverse treatment of gates of different types. Each single node of an SSBDD represents a signal path through the network of gates, and all stuck-at faults (SAF) along such a signal path collapse into only two SAF of the SSBDD node. As a result, SSBDDs provide a double impact on the speed-up of fault simulation: they allow both compressing the simulation space and reducing the number of faults to simulate.

In [475], Shared Structurally Synthesized BDDs (S3BDDs) were introduced under the name of "Structurally Synthesized BDDs with Multiple Inputs" (SSMIBDDs) as an extension of the SSBDD model. In SSBDDs, the synthesis of the model follows the superposition procedure according to Algorithm 3.1, which stops at the fan-out branches of the circuit. This means that a separate graph is constructed for each fan-out-free region (FFR) of the digital circuit. In S3BDDs, on the other hand, the superposition of nodes with graphs continues beyond the fan-out stems up to the primary inputs, supported by a special method of sharing sub-graphs. As a result, a significant reduction in the model size becomes possible in S3BDDs compared to SSBDDs, measured in the number of nodes. The compression of the model leads to lower memory requirements, more efficient fault collapsing, and an increase in the speed of fault-free and fault simulation. In the following, we present the definitions of S3BDDs, the concept and the methods of their synthesis. We also develop the formulas for calculating the sizes of S3BDDs and for their lower bounds.


3.4.1 Definitions and the Structure of S3BDDs

In the following, we give the definition of S3BDDs as a model of digital circuits, generated by iterative superposition of the nodes of graphs, similarly to the case of SSBDDs, but with the difference that the superposition process extends beyond the fan-out nodes of the given circuit, involving other FFRs to be covered by the same S3BDD. Such an extension allows additional compaction of the structural BDDs compared to the method of generating SSBDDs.

3.4.1.1 Definitions of S3BDDs

Here and elsewhere we use general graph-theory notation instead of the traditional ite expressions common in the BDD field [281], because all the procedures based on the structure-oriented SSBDDs and S3BDDs use topological reasoning rather than the symbolic functional manipulations traditional for functional BDDs. We formally define an S3BDD as follows.

Definition 3.18 S3BDD is a graph. An S3BDD is a multi-rooted directed acyclic graph Gy = (M, Γ, Y, X) with a set of nodes M = {m}, where m0 ∈ M is the root node. Γ is a relation on M, where Γ(m) ⊂ M and Γ−1(m) ⊂ M denote the sets of neighboring successors (child nodes) and predecessors (parents) of the node m, respectively, and Γ→(m) and Γ←(m) denote the transitive closures of Γ(m) and Γ−1(m), respectively. For m0 ∈ M we have Γ−1(m0) = ∅. Y = {y, yk} is the set of entry variables of the graph Gy. We call the graph Gy with entry y ∈ Y a supergraph, and the graphs Gy,k ⊆ Gy with entries yk ∈ Y \ y subgraphs. The nodes m ∈ M are labelled by Boolean variables, denoted by x(m). Each non-terminal node has exactly two successor (child) nodes me ∈ Γ(m), where e ∈ {0, 1}. The graph has two terminal nodes (leaves) mT ∈ MT = {#0, #1}, labeled by the Boolean constants e(mT) ∈ {0, 1}. Let us call the terminal nodes with labels #0 and #1 the 0-terminal and the 1-terminal, respectively.

Definition 3.19 Activating edges and paths. If there exists an assignment x(m) = e in the S3BDD, we say that the edge (m, me) in G is activated. Let us call the edge (m, m1), activated by x(m) = 1, the 1-edge of the node m, and the edge (m, m0), activated by x(m) = 0, the 0-edge of the node m. Activated edges that connect the nodes mi and mj form an activated path l(mi, mj). An activated path l(m0, mT) is called a fully activated path. There may be more than one path between the nodes mi and mj. Let us denote by L{mi, mj} the set of all paths between the nodes mi and mj.

Definition 3.20 S3BDD represents a set of functions. An S3BDD Gy = (M, Γ, Y, X) represents a Boolean function y = f(X), where X = {x1, x2, …, xn} and y ∈ Y, iff for every vector Xt ∈ {0, 1}^n a full path l(m0, mT) is activated in Gy so that y = f(Xt) = e(mT). To each subgraph Gy,k ⊂ Gy a subfunction yk = f(Xk), Xk ⊆ X, corresponds. Some nodes in Gy may have as labels x(m) the entry variables yk ∈ Y. These labels


allow access via the entries yk to the respective subgraphs Gy,k for calculating the value of the node variable x(m). This is similar to calling a subroutine from another program.

Definition 3.21 S3BDD represents a digital circuit. An S3BDD Gy = (M, Γ, Y, X) represents a Boolean function y = f(X) in the form of a Boolean parenthesis expression, which describes a gate-level fan-out-free combinational circuit Cy with a set of inputs IN, composed of AND, OR and NOT gates, where |M| = |IN|, |X| ≤ |M|, and there exist a bijection M → IN and a surjection M → X. Each node m in Gy represents a signal path C(x(m), y) ⊂ Cy in the circuit from the input with variable x(m) to the output y.

Definition 3.22 Nodes of the S3BDD represent the faults of a digital circuit. In the S3BDD Gy that models the digital circuit Cy, each node m in Gy represents a compact fault location of all faults R(m) related to the signal path C(x(m), y) ⊂ Cy in the circuit from input x(m) to output y. For the class of stuck-at faults (SAF), R(m) = {x(m)/0, x(m)/1}, where x(m)/0 and x(m)/1 are the notations for the faults SAF/0 and SAF/1, respectively, of the fault location related to the node m in Gy, and hence to all lines on the signal path C(x(m), y) ⊂ Cy in the circuit.
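The definitions above can be paraphrased operationally: evaluating an S3BDD for an input vector means following activated edges from a root node until a terminal is reached, treating a node labeled with an entry variable like a subroutine call into the corresponding subgraph. A minimal Python sketch of such an evaluator is given below (the data structure and the tiny example graph are ours, not the book's):

def evaluate(graph, entries, root, inputs):
    """graph: node id -> (label, low, high); the terminals are the ids '#0' and '#1'.
    entries: entry variable -> root node id of its subgraph.
    Returns the Boolean constant of the terminal reached from 'root'."""
    node = root
    while node not in ('#0', '#1'):
        label, low, high = graph[node]
        if label in entries:                  # a call node: evaluate the subgraph first
            value = evaluate(graph, entries, entries[label], inputs)
        else:                                 # an ordinary node labeled by an input variable
            value = inputs[label]
        node = high if value else low
    return node == '#1'

# A toy shared structure: y = x1 AND z, with z = x2 OR x3; node 'n2' calls the subgraph z.
graph = {
    'n1': ('x1', '#0', 'n2'),
    'n2': ('z',  '#0', '#1'),
    'm1': ('x2', 'm2', '#1'),
    'm2': ('x3', '#0', '#1'),
}
entries = {'z': 'm1'}
print(evaluate(graph, entries, 'n1', {'x1': 1, 'x2': 0, 'x3': 1}))   # True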

3.4.1.2 Partitioning of S3BDDs into Sub-graphs

The idea of the S3BDD model for digital circuits consists in extending the superposition procedure beyond the fan-out nodes of FFR networks by introducing the graph-sharing principle into the model. To avoid the negative effect of model explosion at a fan-out stem ci, which would occur if all of its branches ci,1, ci,2, …, ci,k were replaced by subgraphs, superposition in S3BDDs is allowed only for a single fan-out branch ci,j, 1 ≤ j ≤ k. From that point on in the S3BDD synthesis process, the creation of a new embedded subgraph begins. To the root node of this new sub-graph we attribute the entry label yj. The function of the sub-graph is to calculate the value of the fan-out variable ci. The branches ci,k, k ≠ j, of the fan-out stem ci are represented by related nodes of the S3BDD model, labeled by the function variables yi,k ∈ Y, where yi,k = fi(Xi), Xi ⊆ X. Here, i denotes the fan-out stem, k denotes the fan-out branches, and Y is the set of all entries into the S3BDD. The function yi,k = fi(Xi) relates to the FFR with output ci and to the sub-graph with entry yi,k.

Example 3.25 Consider the circuit in Fig. 3.49a consisting of two FFRs, Cy and Cz. We can represent the circuit by structural BDDs in three ways. Figure 3.49b presents an SSBDD model consisting of two SSBDDs, Gy and Gz, one for each FFR. The number of nodes in this SSBDD model is 8. According to the concept of SSBDDs, the superposition of the elementary BDDs of gates during SSBDD synthesis stops at the fan-out stem z to avoid the explosion of the model. Figure 3.49c shows what happens when we continue the superposition beyond the fan-out stem z by replacing both fan-out branch nodes z1 and z2 in Gy with Gz. The number of nodes grows to 10,


Fig. 3.49 Comparison of the SSBDD and S3BDD models for a combinational circuit

but in the case of bigger BDDs with more FFR levels in the circuit, the growth may become exponential. Figure 3.49d shows the S3BDD consisting of a single graph, which has one entry to the sub-graph Gz with the root node x2 and the entry node z. Note that in this example, before the synthesis of the S3BDD in Fig. 3.49d, we had to swap the two nodes z2 and x6 in the graph Gy, because the replacement of the node z2 by Gz is only possible provided the node z2 occupies the last position in the graph Gy. This is needed to keep the graph Gz in Fig. 3.49b and the subgraph Gz in Fig. 3.49d equivalent.

3.4.1.3 Inverting of Sub-graphs in the S3BDD Model

In the S3BDD synthesis, it is sometimes necessary to transform an SSBDD into the inverted form before embedding it in the S3BDD. This is the case when the node variable that represents a fan-out branch appears in inverted form. For inverting SSBDDs, recall Sect. 3.2.3.1.

Example 3.26 Consider the ISCAS'85 benchmark circuit c17 in Fig. 3.50a. Figure 3.50b illustrates the problem of replacing the node z21 in the graph Gy1 with the graph Gz2 of the gate g3 that represents the function z2 = x2 ∨ z11. Before the replacement, we must invert the graph for z2.

Fig. 3.50 Superposition operation in the SSBDD model of the ISCAS'85 benchmark circuit c17

3.4.2 Synthesis of S3BDDs

3.4.2.1 General Concept of S3BDD Synthesis

The basis of the S3BDD concept is the sharing of sub-graphs. There are two techniques for sharing subgraphs between graphs. The first technique uses superposition, where a node in a graph Gy is replaced with another graph Gm, which then becomes a subgraph embedded in Gy. The second technique assigns entry variables to subgraphs, through which the subgraphs can be accessed from nodes of different graphs labeled by the related entry variables. For example, in Fig. 3.49d, the nodes z1 and z2 in the supergraph Gy share the subgraph Gz such that z2 is directly replaced with the subgraph Gz during the superposition procedure, whereas z1 accesses Gz through the entry variable z. As another example, in Fig. 3.46c, the two supergraphs Gy1 and Gy2 share the subgraph Gx3 through the entry variable x3, whereas superposition takes place in both graphs Gy1 and Gy2 (the graph Gx3 replaces the node x3 in both graphs). Note that sharing subgraphs is also popular in the field of functional BDDs, such as in Shared or Multi-Rooted BDDs [282].

We propose a method for the synthesis of S3BDDs which consists of the following five phases.

Phase 1. SSBDD generation. In the first phase, we generate SSBDDs for all FFRs of the circuit by superposition of the elementary BDDs of the gates of the circuit using Algorithm 3.1, as explained in Sect. 3.1.2. In this step, the superposition procedure runs in the backward direction in the circuit, from the outputs to the inputs, or in the internal FFRs from the output fan-out stem towards the inputs of the FFR, but it stops at the primary input nodes of the circuit or at the internal fan-out stems.

Phase 2. S3BDD preplanning. In the second phase, we carry out a preplanning process for sharing the subgraphs, which proceeds in the opposite direction, i.e., from the primary inputs to the primary outputs of the circuit. The planning process runs on the high-level FFR network and consists in ordering and sharing the FFRs, with the


goal of identifying the lists of pairs of SSBDDs to be merged by superposition one after another. We describe this phase in detail in Sect. 3.4.2.2. The result of the second phase is a set of ordered pairs of SSBDDs {(Ga, Gb)}, to be merged by replacing in Ga a node mb with Gb.

Phase 3. Reconfiguring of SSBDDs if needed. In the third phase, we identify for all pairs of SSBDDs in {(Ga, Gb)} whether there is a need for shifting the node mb in Ga to the end of the Hamiltonian path of the graph. In all SSBDDs Ga where such reconfiguring is needed, we carry out the corresponding equivalent transformations of the graph for shifting the node mb in Ga to the end position along the Hamiltonian path. The method is described in detail in Sect. 3.3.4.2.

Phase 4. Inverting of SSBDDs. In the fourth phase, for all pairs of SSBDDs in {(Ga, Gb)} where x(mb) in Ga is inverted, we invert the graph Gb as well, as shown in Sect. 3.4.1.3.

Phase 5. Merging of SSBDDs. In the final, fifth phase, we merge the pairs of SSBDDs listed in {(Ga, Gb)}, using the results of the preplanning procedure. In this phase, we carry out a procedure similar to that of the first phase, with the difference that the objects of superposition are now the SSBDDs of FFRs instead of the elementary BDDs of gates.

In the following, we consider preplanning as the second phase of the synthesis. We assume that the SSBDD model consisting of the SSBDDs of all FFRs of the circuit is available.

3.4.2.2 FFR-Network Level Preplanning of S3BDD Synthesis

Consider in Fig. 3.51a a high-level circuit given as a network of FFRs. The network presents the set of all fan-out nets where the superposition of FFRs will take place during the synthesis of the S3BDD. Using this network, we construct a preplan of the S3BDD synthesis in the form of a Fan-out Topology Graph (FTG), as shown in Fig. 3.51b. The nodes of the FTG represent SSBDDs, which may be embedded into another SSBDD, replacing a call node. On the FTG, we carry out a topological analysis with the goal of creating a set of trees of non-overlapping routes through all fan-out stems of the circuit. Such a set of trees for the circuit in Fig. 3.51a is highlighted in Fig. 3.51b by bold edges. The root node 1 of the topology graph in Fig. 3.51b represents a subgraph that all subsequent SSBDDs in the topology graph will share. The end nodes represent SSBDDs that take the role of supergraphs in the S3BDD model. They will contain, as subgraphs, all the SSBDDs along the bold routes up to the root node 1. A bold edge means that the two neighboring graphs will merge in the S3BDD model under construction. Each node in the FTG may have only one bold input arrow. If there is a bold edge from node a to node b in the FTG, then the SSBDD Gb will be the supergraph with respect to Ga, and Ga will be a subgraph of Gb. If there is a thin edge from node a to node b in the FTG, then the graph Gb has a call node labelled with a, and the graph Ga has an entry labelled with a into its root node.

Fig. 3.51 Preplanning of S3BDD synthesis using FFR network [462]

Based on the routes synthesized on the fan-out topology graph in Fig. 3.51b, shown there as bold trees, we carry out the merging process of the SSBDDs in the last phase of the S3BDD synthesis. The result of this process is shown in Fig. 3.51c, where the clouds denote the SSBDDs created for the FFRs and embedded in the S3BDD as subgraphs. An arrow between two clouds means that the lower SSBDD will be a sub-graph of the upper one, and the numbers in the clouds represent call nodes in the SSBDDs showing the links to the entries of the subgraphs.

3.4.2.3 Example of the S3BDD Synthesis

Example 3.27 Let us synthesize the S3BDD model for the ISCAS'85 benchmark circuit c17, presented in Fig. 3.52a. We assume that the library of elementary BDDs for the gates is given (see Figs. 3.53 and 3.54). During the first phase of the synthesis, two SSBDDs are synthesized: Gy1(x1, x31, z21) in Fig. 3.53 and Gy2(z22, x5, z12) in Fig. 3.54. During the second phase of the synthesis, we create the Fan-out Topology Graph in Fig. 3.52b. According to the FTG, which plans the structure of the S3BDD, we will have two supergraphs, Gy1 and Gy2, sharing three subgraphs Gx3 ⊂ Gz1 ⊂ Gz2. Gy1 shares two subgraphs directly, Gx3 ⊂ Gy1 and Gz2 ⊂ Gy1, and the subgraph Gz1 indirectly, Gz1 ⊂ Gz2 ⊂ Gy1, whereas Gy2 shares Gz1 and Gz2 directly. Based on the relationships derived from the FTG, we can order the steps of merging the subgraphs. Finally, Fig. 3.53 illustrates the graph merging steps during the synthesis of the S3BDD Gy1 with its related subgraphs, and Fig. 3.54 illustrates the graph merging steps during the synthesis of the S3BDD Gy2 with its related subgraphs.

Fig. 3.52 ISCAS'85 benchmark circuit c17 and its Fan-out Topology Graph

Fig. 3.53 Synthesis of the S3BDD for the output y1 of the ISCAS'85 benchmark circuit c17 in Fig. 3.52a

Fig. 3.54 Synthesis of the S3BDD for the output y2 of the ISCAS'85 benchmark circuit c17 in Fig. 3.52a


3.4.3 The Size of the S3BDD Model

3.4.3.1 Calculation of the Size of SSBDDs

The size of the SSBDD model, in terms of the total number of nodes in all graphs of the model, is fixed by the given gate-level circuit that the model represents, and we can calculate it from the number of gates and wires of the circuit. On the other hand, the size of the S3BDD is not fixed and is an object of optimization. However, it is possible to estimate the lower bound of the model size. To derive the size of S3BDD models, let us first estimate the size of the SSBDD models for a given gate-level circuit. In [207] it was shown that the number of nodes in the gate-level SSBDD model, where each gate is replaced by an elementary BDD, is

N_SSBDD_Gate = N_Signals − N_G    (3.14)

where N_SSBDD_Gate is the number of nodes in the gate-level SSBDD model, N_Signals is the number of lines in the circuit, and N_G is the number of gates in the circuit. Since the number of nodes that we could gain by synthesizing S3BDDs compared to FFR- and gate-level SSBDDs depends directly on the number of fan-outs in the circuit, we must turn the estimation (3.14) into a function of the number of fan-outs and replace the gate-level network with the FFR-level network. Let us use the following notation for the parameters of the circuits to be represented by SSBDDs or S3BDDs:

s: the number of primary inputs of the circuit,
s_0: the number of inputs with no fan-outs,
s_int: the number of internal lines of the fan-out-free regions of the circuit,
s_k: the number of nets with k fan-outs (k > 1),
n: the number of primary outputs,
m: the maximum number of fan-out branches over all fan-out nets in the circuit.

First, we estimate the number of nodes in the gate-level circuit, where each node denotes a place to be modeled as a fault site when using the stuck-at fault (SAF) model. For the fan-out nodes, we have to model the faults both at the fan-out stem itself and at the fan-out branches. Hence, we calculate the size of the gate-level SSBDD model with regard to fault modeling, in terms of the number of nodes in the circuit, in the following way:

N_SSBDD_Gate = N_Signals − N_G = s_0 + s_int + Σ_{k=2}^{m} s_k · (k + 1)    (3.15)

To obtain a compaction of the gate-level model of the given circuit using SSBDDs, we generate the SSBDDs for the fan-out-free regions of the circuit. To achieve the maximum compaction of the model by covering as many gates as possible with SSBDDs, while retaining in the model the structural information necessary for fault diagnosis, we must strive for FFRs of maximum size. For this


purpose, let us define unambiguously the FFR sub-circuits of maximum size, represented and covered by SSBDDs.

Definition 3.23 An FFR is a sub-circuit of the given combinational circuit where all inputs are either primary inputs without fan-outs or fan-out branches, and the FFR itself does not include any fan-out node inside. Let us call this an FFR of maximum size.

Denote by N_FFR the number of FFR modules (FFRs of maximum size), and by N_Signals the number of lines between the FFR modules. From the above, the following Lemma arises.

Lemma 3.11 The size of the SSBDD model in terms of the number of nodes, representing an FFR network and covering all SAF, is equal to

N_SSBDD = s_0 + Σ_{k=2}^{m} s_k · k    (3.16)

Proof According to Definition 3.23, all signal paths (or path segments) that start either at primary inputs or at fan-out nodes and terminate either at fan-out stems or at primary outputs are represented in the SSBDD model by single nodes. Hence, the parameter s_int from (3.15) disappears. Let us classify all fault locations in the FFR-based circuit to be mapped into the SSBDD model into two classes: primary inputs and fan-out branches. Each fan-out stem with k ≥ 2 branches produces exactly k nodes in the SSBDD model, due to the dominance of the faults inside FFRs over the faults at the outputs of FFRs. Hence, the internal fan-out stems do not produce any additional nodes in the SSBDDs. On the contrary, the primary input fan-outs need to be represented by (k + 1) nodes to model the faults both at the fan-out stems and at the fan-out branches. Hence, the number of all nodes N_SSBDD in the SSBDD model equals the sum of the number of all fan-out branches and all inputs. ◼
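For a quick numerical check of formulas (3.15) and (3.16), a small Python helper can compute the SSBDD model sizes from the circuit parameters. The usage line below takes the parameters of circuit C1 of Example 3.29 further on (five primary inputs without fan-outs and two internal fan-out nets with two branches each); the helper itself is ours, not from the book:

def ssbdd_size(s0, fanout_nets):
    """Eq. (3.16): number of nodes in the SSBDD model of an FFR network.
    s0 -- number of primary inputs without fan-outs
    fanout_nets -- dict k -> s_k (number of nets with k fan-out branches, k >= 2)"""
    return s0 + sum(s_k * k for k, s_k in fanout_nets.items())

def ssbdd_gate_size(s0, s_int, fanout_nets):
    """Eq. (3.15): number of nodes of the gate-level SSBDD model used for fault modeling."""
    return s0 + s_int + sum(s_k * (k + 1) for k, s_k in fanout_nets.items())

print(ssbdd_size(5, {2: 2}))   # 9, matching the SSBDD node count quoted in Example 3.29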

3.4.3.2 Calculation of the Size of S3BDDs

Differently from SSBDDs, where the size of the model is exactly calculable in the number of nodes, in the case of S3BDDs we can only set the target of minimizing the number of nodes as much as possible. Considering the idea of merging different S3BDDs (as described in the synthesis of minimized S3BDDs), we can formulate the following Lemma.

Lemma 3.12 The size of the S3BDD model generated for the FFR-level network can be calculated in the number of nodes as

N_S3BDD = s − (n − 1) + Σ_{k=2}^{m} s_k · (k − 1) + (r − 1)    (3.17)

Fig. 3.55 A single-graph S3BDD model is not possible due to the two roots in the FTG

Proof The formula (3.17) is an extension of Lemma 3.11 concerning SSBDDs. According to the definition of S3BDDs, for every fan-out with k ≥ 2 branches, one node can be removed from the SSBDD model by extending the superposition of SSBDDs beyond the fan-outs, which means that each fan-out with k branches is represented with only k − 1 nodes in the S3BDD model. An additional gain of up to (n − 1) fewer nodes, where n is the number of outputs of the circuit, is achieved by sharing the subgraphs between the supergraphs of the outputs of the circuit. Each sharing operation reduces the number of nodes by one. On the other hand, a negative impact may be caused by the component (r − 1), where r is the total number of separate S3BDDs in the model. Each separate graph in the S3BDD model corresponds to a subgraph that could not be merged and, therefore, adds an additional node to the model. ◼

The following example illustrates why the lower bound of the S3BDD size sometimes cannot be reached.

Example 3.28 Consider the circuit in Fig. 3.55. To calculate the lower bound of the S3BDD size according to Eq. (3.17), we have s = 4, n = 1, k = 2, and s_k = 2, which gives the estimate N_S3BDD = 6. However, from Fig. 3.55, we see that the real number of nodes is N_S3BDD = 7. A question arises: why is it not possible to create a single S3BDD for the circuit in Fig. 3.55a? We get the answer if we create for the circuit the Fan-out Topology Graph in Fig. 3.55c to preplan the S3BDD synthesis. It consists of three nodes for the SSBDDs. The bold edge between Gx6 and Gy means that the graphs merge, and Gx6 becomes a subgraph of the supergraph Gy. The thin edge between Gx5 and Gy means that no merger is feasible between these graphs, and the only link between them is to use the nodes x51 and x52 in Gy for accessing the stand-alone SSBDD Gx5. Hence, the S3BDD model in Fig. 3.55b consists of two non-merged graphs. We cannot simultaneously share the BDDs of the gates x5 and x6 in the same graph Gy in Fig. 3.55b.

3.4.3.3 Lower Bound of the Size of S3BDDs

Figure 3.56 illustrates the meaning of the parts of Eq. (3.17).


Fig. 3.56 Interpretation of the formula of the lower bound of the S3 BDD size

Equation (3.17) is a compact and easy-to-calculate estimation of the size of the S3BDD model that can be made before the synthesis begins. The two components in the formula, (n − 1) and (r − 1), can be precisely evaluated only during the preplanning of the synthesis by creating the Fan-out Topology Graph. In this process, it becomes clear how many subgraphs can be shared between the supergraphs related to the outputs of the circuit and how many cannot. Likewise, only in that phase does it become clear how many additional graphs, not merged with supergraphs, must be constructed (Example 3.28). From the reasoning above, we can formulate the following statement.

Theorem 3.4 The lower bound of the size of the S3BDD model generated for the FFR-level network can be calculated in the number of nodes as

N_S3BDD_LB = s + Σ_{k=2}^{m} s_k · (k − 1)    (3.18)

Proof Equation (3.18) results straightforwardly from Eq. (3.17) and the reasoning above. The lower bound of the size of the S3BDD model is achieved iff the SSBDDs of all FFRs can be merged with the supergraphs of the S3BDD model, whose number equals the number of primary outputs of the circuit. If any subgraph remains not merged with the supergraphs, then in Eq. (3.17) either (n − 1) will decrease or (r − 1) will increase. ◼

From Theorem 3.4, the following Corollary results.

Corollary 3.3 The compaction of the S3BDD model compared to the SSBDD model is equal to or greater than the number of fan-out nets in the circuit.

Proof If we compare Eq. (3.16) for the size of the SSBDD model and Eq. (3.18) for the lower bound of the S3BDD model, then the difference is equal to or greater than the number of all fan-out nets in the circuit, independent of the values of s_k (the numbers of fan-out branches):


N_SSBDD − N_S3BDD_LB ≥ (s + Σ_{k=2}^{m} s_k · k) − (s + Σ_{k=2}^{m} s_k · (k − 1)) = Σ_{k=2}^{m} s_k    (3.19)

where s_k is the number of nets in the circuit with k fan-out branches (k > 1), and m is the maximum number of fan-out branches over all fan-out nets in the circuit. The lower bound is achievable in S3BDDs of circuits with a single output if all SSBDDs merge as subgraphs into the single supergraph. Any additional output, if the number of fan-out nets in the circuit remains constant, increases N_SSBDD without increasing N_S3BDD_LB. Note that we compare not the sizes of the two models, but the size of one model and the expected minimum size (lower bound) of the other model. ◼

Consider in the following an example of two comparisons between the sizes of the SSBDDs and S3BDDs created for the two circuits in Fig. 3.57.

Example 3.29 Consider in Fig. 3.57a the circuit C1, represented in Fig. 3.57b by the S3BDD Gy1. The SSBDD model G'y1(C1) (not shown in the figure) consists of three separate SSBDDs, created for the three FFRs z1 = fz1(x2, x3), z2 = fz2(z1, x4), and y1 = fy1(x1, z11, z21, z22, x5), with 9 nodes in total (N_SSBDD = 9). Using (3.17), we get for the S3BDD the following result: N_S3BDD = 7, where s = 5, n = 1, r = 1, and we have 2 fan-out nets with 2 branches each. The result coincides with the 7 nodes of the graph in Fig. 3.57b (the dotted part excluded). The difference between the sizes of the models is 2, which is in accordance with (3.19). Moreover, the result N_S3BDD = 7 shows that the lower bound N_S3BDD_LB of the S3BDD size is achieved. Note that the case r = 1 means that the model consists of a single graph. Let us now join the subcircuits C1 and C2 (framed by dotted rectangles) into a unified whole in Fig. 3.57a.

Fig. 3.57 Comparison of the sizes of SSBDD and S3BDD for two circuits in Example 3.29


Figure 3.57b presents, together with the dotted part, the S3BDD for the joint circuit. The SSBDD model gets two more graphs for C2, added to the previous three graphs for C1, created for the FFRs y2 = fy2(z2, x6) and y3 = fy3(z1, x7), with 13 nodes in total (N_SSBDD = 13). Using (3.17), we get for the S3BDD the following result: N_S3BDD = 9, where s = 7, n = 3, r = 1, and we have 2 fan-out nets with 3 branches each. The result coincides with the 9 nodes of the total graph (including the dotted part) in Fig. 3.57b. Note that the case r = 1 means that the model contains a single graph. The graph, however, includes three supergraphs sharing a few subgraphs between themselves.
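As a cross-check (ours, not the book's), the S3BDD sizes quoted in Example 3.29 can be reproduced directly from formula (3.17):

def s3bdd_size(s, n, r, fanout_nets):
    """Eq. (3.17): number of nodes of the S3BDD model of an FFR network.
    s: primary inputs, n: primary outputs, r: number of separate graphs in the model,
    fanout_nets: dict k -> s_k (number of nets with k fan-out branches, k >= 2)."""
    return s - (n - 1) + sum(s_k * (k - 1) for k, s_k in fanout_nets.items()) + (r - 1)

print(s3bdd_size(5, 1, 1, {2: 2}))   # 7 nodes for circuit C1 (SSBDD model: 9 nodes)
print(s3bdd_size(7, 3, 1, {3: 2}))   # 9 nodes for the joint circuit (SSBDD model: 13 nodes)

In both cases the difference between the SSBDD size and the S3BDD size (2 and 4 nodes, respectively) is at least the number of fan-out nets, in agreement with (3.19).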

3.4.3.4 Optimization of S3BDDs

Lemma 3.11 defines the size of the SSBDD model in which each node represents a SAF location. To ensure this property, all primary inputs with fan-outs in the circuit have to be involved in the model. The reason is that the faults at fan-out stems, which propagate to all branches, have to be handled differently from the faults located at separate branches. On the other hand, for simulation purposes, including fault simulation, there is no need to have these nodes in the model. The faults at the fan-out stems are easily transformable into multiple faults at the fan-out branches. Moreover, removing the fan-out stem nodes from the model even increases the simulation speed. Additionally, the simulation of delay faults, similarly to fault-free simulation, does not need separate handling of fan-out stems and branches. Hence, we can optimize the S3BDD model in the following way.

Theorem 3.5 The lower bound of the size of the optimized S3BDD model generated for the FFR-level network can be calculated in the number of nodes as

N_S3BDD_LB_opt = s_0 + Σ_{k=2}^{m} s_k · (k − 1)    (3.20)

where s_0 is the number of primary inputs of the network without fan-outs.

Proof The proof is similar to that of Theorem 3.4, and the update of the latter is justified by the explanation above. ◼

Based on Theorem 3.5, we add here, without proof, the following updates of the calculation formulas for the size of the optimized S3BDD model and for the compaction of the optimized S3BDD model compared to the SSBDD model. Note that, in fact, the SSBDD model can be optimized in a similar way. Then, if both models are optimized, the comparison formula (3.19) holds without changes.

Corollary 3.4 The size of the optimized S3BDD model generated for the FFR-level network is calculated in the number of nodes as


N_S3BDD = s_0 − (n − 1) + Σ_{k=2}^{m} s_k · (k − 1) + (r − 1)    (3.21)

Corollary 3.5 The compression of the optimized S3BDD model compared to the SSBDD model is equal to or more than the number of fan-out nets in the circuit plus the number of fan-outs at the primary inputs of the circuit:

N_SSBDD − N_S3BDD_LB_opt ≥ Σ_{k=2}^{m} s_k + (s − s_0)    (3.22)

where s is the number of primary inputs of the circuit, and s0 is the number of primary inputs without fan-outs.

3.4.3.5 Experimental Comparison of the Sizes of SSBDDs and S3BDDs

The main goal of introducing the structural BDDs was to develop a hierarchical approach to fault modeling and simulation on top of the traditional gate-level approaches. The key idea was to rise from the level of gate networks to the macro level of FFR networks while maintaining a structural one-to-one correspondence between the two levels. The mapping between the nodes of the graphs and the signal paths of the circuits allows fault collapsing without any loss of accuracy. This, in turn, allows, on the one hand, increasing the productivity of fault simulation and, on the other hand, increasing the simulation speed. An important marker of achieving this effect is the size of the SSBDD and S3BDD models, measured in the number of nodes of the graphs.

In Table 3.2, we compare the sizes of the S3BDD model with state-of-the-art BDD models such as ROBDD [58], FBDD [128], and SSBDD [440, 484] for the ISCAS'85 benchmark family [51]. As a reference, the lower bounds (LB) are included as well. It is interesting to note that the difference in the number of nodes between the S3BDDs and the LB is exactly equal to the number of roots in the FTG of the circuits, which prevents merging the graphs.

In Table 3.3, in columns 2-4, we present the sizes of the gate-level circuits and the sizes of the SSBDD and S3BDD models for the ISCAS'85 [51], ISCAS'89 [39] and ITC'99 [86] benchmark circuits. In columns 5 and 6, we compare the gains in the number of simulation steps of the SSBDD and the advanced S3BDD models versus the gate-level circuits. As we can see, the S3BDD model allows a speed-up in logic simulation of 2.3 times on average and, in particular cases, up to 3.3 times compared to gate-level simulation. In fault simulation, the gain increases further. The number of fault sites equals the number of nodes in the SSBDD model, while the simulation speed depends on the number of nodes in the S3BDD model. Therefore, the speed-up in fault simulation is the product of columns 5 and 6 in Table 3.3 (on average 1.71 × 2.27 ≈ 3.9 times).
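The table conventions can be verified with a two-line check (ours), using the c432 row of Table 3.3 and the average gains:

lines, ssbdd, s3bdd = 432, 308, 248
print(round(lines / ssbdd, 2), round(lines / s3bdd, 2))   # 1.4 1.74, the gain columns of Table 3.3
print(round(1.71 * 2.27, 1))                              # 3.9, the fault-simulation speed-up estimate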


Table 3.2 Comparison of different BDD sizes for ISCAS'85 circuits [462]

Circuit  Gates  ROBDD    FBDD    SSBDD  S3BDD  LB
c432     232    30,200   1063    308    213    248
c499     618    49,786   25,866  601    415    447
c880     357    7655     3575    497    347    395
c1355    514    39,858   N/A     809    519    551
c1908    718    12,463   5103    866    619    651
c2670    997    N/A      1815    1313   884    1014
c3540    1446   208,947  21,000  1648   1271   1316
c5315    1994   32,193   1594    2712   2080   2206
c6288    2416   N/A      N/A     3872   2385   2416
c7552    2978   N/A      2092    3552   2633   2715

Table 3.3 Comparison of the effect of SSBDDs and S3BDDs with gate-level circuit sizes [468]

Circuit  Lines    SSBDD    S3BDD    Lines vs. SSBDD  Lines vs. S3BDD
c432     432      308      248      1.40             1.74
c880     775      497      395      1.56             1.96
c1355    1097     809      551      1.36             1.99
c1908    1394     866      651      1.61             2.14
c2670    2075     1313     1014     1.58             2.05
c3540    2784     1648     1316     1.69             2.12
c5315    4319     2712     2206     1.59             1.96
c6288    4864     3872     2416     1.26             2.01
c7552    5795     3552     2715     1.63             2.13
s13207   12,441   5228     3755     2.38             3.31
s15850   14,841   6075     4435     2.44             3.35
s35932   32,624   19,547   13,935   1.67             2.34
s38417   34,831   16,160   11,246   2.16             3.10
s38584   36,173   19,179   14,941   1.89             2.42
b14      19,491   8248     6594     2.36             2.96
b15      18,248   15,254   12,310   1.20             1.48
b17      64,711   46,397   37,535   1.39             1.72
b18      222,499  132,122  102,160  1.68             2.18
b19      448,502  267,092  206,185  1.68             2.18
Average  -        -        -        1.71             2.27


In this chapter, we have described the model of Shared Structurally Synthesized BDDs (S3BDDs) as an extension of SSBDDs. The motivation for introducing this model was to increase the speed of both fault-free and fault simulation of digital circuits compared with SSBDDs. Since the model preserves the accuracy of mapping gate-level faults of the original circuit, it is better suited for the testability analysis of circuits than traditional BDDs. We can highlight the following contributions to the state of the art. First, we have developed an equation that allows the calculation of the minimum size of the compact S3BDD model for representing combinational and sequential circuits. Second, we have developed a method for generating the S3BDD model that guarantees achieving that minimum size due to the added step of analyzing the Fan-out Topology Graph. Third, we have shown why the lower bound cannot always be achieved. We have also proposed an optimized version of the S3BDD model with a reduced number of nodes that is directly applicable to fault-free and delay-fault simulation of digital circuits. This is important because the delay fault model is attracting more and more attention as microelectronic technology moves towards ever smaller feature sizes.

Chapter 4

Fault Modelling with Structural BDDs

Testing digital systems is challenging because of the growing complexity of the systems and the increasing variety of physical defects that need accurate modeling to achieve high quality and trustworthiness of test results and fault diagnosis. The open question is: how can we improve the testing quality under these challenges? Two main approaches can be observed in the literature. The first is the development of defect-oriented test methods, and the second is the application of high-level fault modeling. The combination of both approaches naturally leads us to multi-level and hierarchical fault modeling techniques. We show that the concept of structural decision diagrams serves as a good environment for developing uniform methods of fault modeling at different abstraction levels of digital systems: SSBDDs and S3BDDs at the logic level, considered in this chapter, and HLDDs, discussed in Chaps. 7 and 8. In this chapter, we also focus on fault modeling at the physical defect level in transistor-level circuits and on the general concept of hierarchical fault modeling. We present novel ideas of formal fault collapsing by applying the fault equivalence and dominance relationships using the formalism of structural decision diagrams. To reduce the size of the SAF model, we propose a fault-collapsing method with linear complexity.

4.1 Fault Modeling with Structural BDDs

Technology scaling in today's nanoscale processes produces new failure mechanisms in electronic devices. This has forced researchers to develop fault models more advanced than the traditional stuck-at fault (SAF) model, and also to investigate possibilities of reasoning about the faulty behavior of systems without using any fault models. In this section, we explain the methods of fault modeling with SSBDDs and S3BDDs, where we consider, basically, the class of stuck-at faults (SAF). A group of faults along a signal path in a circuit can map to only a single fault at a node of the SSBDD,


resulting in considerable fault collapsing in the model. This collapsing is possible for both the stuck-at and delay faults.

4.1.1 Modeling Faults with SSBDDs

The fact that the nodes of SSBDDs represent signal paths in circuits allows modeling with SSBDDs explicitly the faults related to these paths. The faults may belong to different classes, such as stuck-at faults (SAF), bridging faults or shorts between lines of the circuit, opens, delay faults, static and dynamic hazards, and others. Moreover, because a single node of an SSBDD represents a path in the FFR, all faults located on the path collapse to the faults related to the single SSBDD node. The stronger the fault collapsing in the model, the bigger the decrease in the complexity of fault modeling. This aspect leads to a hierarchical test generation approach, where we alleviate the high-complexity combinatorial tasks of test generation by reducing the sizes of the fault sets on SSBDDs, and leave the less complex tasks of fault simulation and fault diagnosis to deal with the large fault lists related to the low-level structures of real circuits.

Consider the circuit and its SSBDD in Fig. 4.1. We have generated a test pattern T(x1, x2, x3, x4, x5) = (01010), which activates the path l1 = (x1, x2, x3, x5, #1). The test pattern, generated at the FFR level using the SSBDD, detects the faults x2 ≡ 0, x3 ≡ 1, x5 ≡ 0, related to the nodes of the SSBDD. Each of these FFR-level faults of the SSBDD model covers a subset of gate-level faults of different classes in the circuit. For example, the generated test pattern activates the following mappings of faults from the FFR level to the gate level:

(x2 ≡ 0)FFR → {x2 ≡ 0, b ≡ 1, x3 → x2, c → x2}gate
(x3 ≡ 1)FFR → {x3 ≡ 1, a ≡ 0, c ≡ 1, b → a}gate
(x5 ≡ 0)FFR → {x5 ≡ 0, c ≡ 1, b → x5}gate    (4.1)

Fig. 4.1 Modeling faults with SSBDDs


The notation x ≡ e stands for the SAF fault class "x stuck-at e", where e ∈ {0, 1}. The notation x' → x'' stands for the bridging faults, where x' = 0 is the aggressor and x'' = 1 is the victim, i.e., in the presence of the fault, the value x'' = 1 changes to x'' = 0. Since each node in the SSBDD represents a signal path in the circuit, it is also possible to represent in a compact way the gate-level path delay faults in FFRs as node delay faults in SSBDDs. Let us introduce the notion of a node delay fault Δx(m) for the node variable x(m) in the SSBDD, representing the path delay fault in the FFR. For example, in Fig. 4.1, the notation

Δx3 FFR → Δ(x3, a, c, y) gate    (4.2)

means a mapping of the node delay fault Δ{x3} in the SSBDD to the respective path delay fault in the gate-level FFR. In larger circuits representing FFR networks, we classify this fault as a path segment delay fault [211]. The two-level representation of delay faults also extends to the FFR-level modeling of nominal node delays in SSBDDs, calculated as the sum of the gate-level nominal delays δx on the respective signal path:

δx3 FFR = (δa + δc + δy) gate    (4.3)

The fault collapsing is a side effect of the procedure of SSBDD synthesis. An extension of SSBDDs to S3 BDDs allows additional compression of the fault model that leads automatically to additional fault collapsing.
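The FFR-level to gate-level fault mapping of (4.1) can be stored as a simple table, so that a fault simulator working on the collapsed SSBDD fault list can expand its results back to gate-level faults when needed, e.g., for diagnostic reasoning. A minimal Python sketch of this bookkeeping is given below; the mapping entries are copied from (4.1), while the data representation and function name are ours:

# Collapsed FFR-level faults of the SSBDD and the gate-level faults they cover, from (4.1)
FAULT_MAP = {
    ('x2', 0): {'x2/0', 'b/1', 'x3->x2', 'c->x2'},
    ('x3', 1): {'x3/1', 'a/0', 'c/1', 'b->a'},
    ('x5', 0): {'x5/0', 'c/1', 'b->x5'},
}

def expand(detected_ffr_faults):
    """Expand a set of detected FFR-level (SSBDD) faults into the covered gate-level faults."""
    covered = set()
    for fault in detected_ffr_faults:
        covered |= FAULT_MAP[fault]
    return covered

print(sorted(expand({('x2', 0), ('x5', 0)})))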

4.1.2 Measuring Fault Coverage Using SSBDDs

Reducing the size of the fault set of the SSBDD model, compared with the set of gate-level faults, leads to different bases (sizes of fault sets) for calculating the percentages of covered faults. Since the number of gate-level faults in the circuit is always larger than the number of faults in the SSBDD model, a single undetected gate-level fault always implies a lower fault coverage when measured by fault simulation on the SSBDD model than when measured by gate-level fault simulation. However, the larger the circuits are and the closer the fault coverages are to 100%, the smaller the difference between the fault coverages, and the coverage measured with the SSBDD model tends to be pessimistic rather than optimistic. We obtain equal fault coverage with gate-level and SSBDD-based fault simulation only in two cases: (1) when the fault simulation yields 100% fault coverage, or (2) when a node of the SSBDD model happens to represent only a single line of the gate-level circuit. The latter is the case when one of the inputs of the output gate of an FFR is connected to a primary input of the FFR.

116

4 Fault Modelling with Structural BDDs

The practical recommendation is to perform fast fault simulation for fault coverage calculation on the SSBDD model with follow-up slow fault simulation at the gatelevel model for only these faults represented by SSBDD nodes, which were qualified by the SSBDD simulator as not covered. Example 4.1 Consider again the circuit and its SSBDD in Fig. 2.9. The number of SAF in the FFR of the circuit is 24, and in the more compact model SSBDD, it is 16. Assume the SAF x31 ≡ 0 in SSBDD, was not detected with the SSBDD-based fault simulator, which leads to 93.8% fault coverage. Assume also that at the same time, the gate-level simulation gave the result 95.8%, showing that either the fault x12 ≡ 0, or the fault x4 ≡ 0 is present. The 93.8% fault coverage would mean here a pessimistic evaluation. In the exceptional case, if there were at the same time two faults x12 ≡ 0, and x4 ≡ 0 present, giving the gate-level fault coverage 91.75%, the SSBDD-based estimate 93.8% would mean an optimistic evaluation.

4.1.3 Modeling Faults with S3 BDDs Each operation of substituting a node m with an elementary BDD of a gate in the gate-level circuit, during SSBDD synthesis, results in the removal of a subset Rgate (m) of the gate-level faults and representing them by a subset of higher FFR-level faults RFFR (m), so that |RFFR (m)| < |Rgate (m)|. Synthesizing the S3 BDD model affords further fault collapsing, compared with the SSBDD model, due to the superposition procedure extending beyond the fan-out stems of the circuit, and because of reducing by each step of superposition the size of the model by the amount of difference |Rgate (m)| – |RFFR (m)|. The effect of final mapping of faults Rgate → RFFR results in the fact that instead of dealing with the full set of faults Rgate , only a smaller subset |RFFR (m)| < |Rgate (m)| of explicitly represented faults in RFFR has to be handled for test generation, and in fault simulation tasks. We need to process the fault set Rgate again only in the final phase of diagnostic reasoning of test results. There is a difference in the modeling of faults in SSBDDs and S3 BDDs due to the differences in the models synthesized. In the case of SSBDDs, as already told, all faults in the circuit map directly to the nodes of the SSBDD. In the case of S3 BDDs, we map only one part of the faults in the circuit as well as directly to the nodes of S3 BDDs, in a similar way as in SSBDDs. Another part of the faults in the circuit will collapse during the S3 BDD synthesis, and they disappear from the model. The processing of these faults during test generation, fault simulation and diagnosis will take place indirectly when the activated paths run through the entry arrows into the sub-graphs of the model. From the latter, the main effect of S3 BDDs follows that we do not need to explicitly generate test patterns for all faults associated with entry arrows. This statement results from the following Theorem.

4.1 Fault Modeling with Structural BDDs

x3 x1 x2

1

&

a

1 &

x4

y* y

x51 x52

x5

117

x3

x51

x4

x52

y

b

(a)

x5

x3

x51

x4

x1

x1 x2 x2

x5 (b)

(c)

Fig. 4.2 Comparison of the SSBDD and S3 BDD models

Theorem 4.1 Let us have in the S 3 BDD model a supergraph Gy representing an FFR with nonredundant function y = f(X), and a subgraph Gy,k ⊂ Gy merged into Gy by substituting a node my ∈ Gy . Then, a test pattern, which detects a fault at any nonredundant node my,k ∈ Gy,k in Gy , detects the fault at the collapsed node my ∈ Gy as well. Proof Consider the circuit in Fig. 4.2a, and the process of synthesis of S3 BDD in Fig. 4.2c from a set of two SSBDDs in Fig. 4.2b. The SSBDD model of the circuit consists of two graphs Gy and Gx5 in Fig. 4.2.b, representing two FFRs, respectively, of the given circuit. Due to the nonredundancy of y = f (X), the node x 52 ∈ Gy is testable. The synthesis of S3 BDD consists of merging Gy and Gx5 in the SSBDD model into a new single S3 BDD G ∗y in Fig. 4.2c. G ∗y presents a combination of the supergraph G ∗y and the subgraph Gx5 . The node x 52 ∈ Gy is replaced with the subgraph Gx5 in G ∗y . As a result, node x 52 , whose function was to represent the faults of the path (x 52 , b, y) in the gate-level circuit, disappears from the model. We show now that the faults related to the lost node x 52 , are covered indirectly, without targeting x 52 specially. In Fig. 4.2, this means that testing the node x 52 ∈ Gy is equivalent to testing any node x k ∈ Gx5 in G ∗y . As an example, consider the test T(x 1 , x 2 , x 3 , x 4 ) = 0e01 for the node x 2 ≡ x k , which is generated for detecting the faults x 2 ≡ e, e ∈ {0, 1}, in G ∗y . The same test would detect the same faults x 52 ≡ e in Gy . The paths activated by the test pattern are highlighted in bold. There are two cases of testing the fan-out branches in the circuit: (1) independent of itself testing, where the test activation condition (test pattern) does not depend on the variable under test, and (2) self-dependent testing, where the test activation condition is depending on the value of the variable under test, and therefore the test is under threat of fault masking. Consider four representative situations in the example of Fig. 4.3 of establishing an equivalence between testing any node in the subgraph Gx52 ⊂ Gy in the S3 BDD model, and testing the lost node variable x 52 ∈ Gy in the SSBDD model. (a) This case refers to the independent of itself test in Fig. 4.3a is characterized by the independence of the activated by the test path in the subgraph Gx52 from the fan-out variable x 5 under test. The test activates the path l(x 3 , #0) = (x 3 , x 4 , x 1 , x 2 , #0) through the graphs Gy , and Gx52 and has the target to test the faults

118

y

4 Fault Modelling with Structural BDDs

x3

x51

x4

x1 x2

x52 (a)

#0

y

x3

x51

x4

x1

#1

x2

x52

y

x3

x51

x4

x51 x2

x52

#0

(b)

#0

(c)

y

x51 x4

#1

x51

#1

x2

x52

#0

(d)

Fig. 4.3 Testing of fan-out nets using the S3 BDD model

x 1 ≡ 1 and x 2 ≡ 1 in Gx52 . Since the path includes the entry x 52 , the test also targets the collapsed node x 52 in the S3BDD Gy , representing the fan-out branch x 52 in the circuit in Fig. 4.2a. The targeted fault x 52 ≡ 1 of the disappeared node x 52 in the S3 BDD Gy will be detected indirectly. Since the test is independent of itself, e.g., the activated path l(x 3 , #0) does not include the node x 51 , the test is not influenced by the value of the fan-out variable x 5 under test. Therefore, by changing the value of x 1 to 1, we get activated the path l(x 3 , #1) = (x 3 , x 4 , x 1 , #1), and have a test for x 1 ≡ 0 and also for x 52 ≡ 0. The statement of Theorem is valid for the case of (a). (b) This case corresponds to self-dependent testing in Fig. 4.3b, where the activated path l(x 3 , #0) = (x 3 , x 51 , x 4 , x 1 , x 2 , #0) includes the node x 51 , which introduces restrictions for generating tests for the disappeared node x 52 in the S3 BDD. The activated path l(x 3 , #0) allows still to test the faults x 1 ≡ 1, x 2 ≡ 1, and also x 52 ≡ 1, according to Theorem. But, with the change of the value of x 1 to 1, as in the case of (a), we cannot use the rest of the path for testing the faults x 1 ≡ 1 and also for x 52 ≡ 1. However, we can return to case (a) to get the Theorem valid for testing these faults. (c) This case is presented in Fig. 4.3c, and illustrates again self-dependent testing by activating the two paths l(x 3 , #1) = (x 3 , x51 , x 4 , x 1 , #1), and l(x 1 , #0) = (x 1 , x 2 , #0) for testing the fault x 1 ≡ 0. We see that changing x 1 → 0 due to the fault x 1 ≡ 0, implies x 52 → 0, and consequently, x51 → 1, that activates a new path l(x 3 , #1) = (x 3 , x51 , #1). As a result, the value of the output y = 1 will not change, and the faults x 1 ≡ 0 and x 52 ≡ 0 will not be detected. However, there is still the possibility to make Theorem valid by activating the paths l(x 3 , #1) = (x 3 , x 4 , x 1 , #1) and l(x 1 , #0) = (x 1 , x 2 , #0) for testing the faults x 1 ≡ 0, and x 52 ≡ 0. To test the faults x 1 ≡ 1, and x 52 ≡ 1, we have to change in the test only the value of x 1 , and activate the new first path l(x 3 , #0) = (x 3 , x 4 , x 1 , x 2 , #0) to satisfy the statement of Theorem. (d) The last case, presented in Fig. 4.3d, differs from the case (d) by removing node x 3 from the model. For this circuit, there is no test for the nodes x 1 and x 2 , because the circuit has changed to redundant. The new function of the circuit is y = x 4 ∨ x1 x2 . The Theorem is valid only for the nonredundant circuits.

4.1 Fault Modeling with Structural BDDs

119

From the above, it is always possible to indirectly generate the test patterns for the disappeared (collapsed) nodes in S3 BDDs by targeting the faults in the subgraphs that caused the fault collapse. ∎ Based on Theorem 4.1, we can state that using the S3 BDD model, we can rely on the same principle as in the case of SSBDDs, to represent the signal paths and the fault sets related to these paths by only the nodes of graphs. It makes it possible to map several difficult problems related to fault reasoning to the same uniform model basis. To these problems belong reasoning of different fault interactions, such as fault self-masking due to signal path convergences, mutual multiple fault masking, advanced fault simulation approaches, fault diagnosis and others.

4.1.4 Mapping of S3 BDD Nodes to Signal Paths in Circuits The model of SSBDDs presents a tool for mapping FFRs into SSBDDs, and the signal paths in FFRs to the nodes of SSBDDs. The model of S3 BDDs provides more general possibilities for representing complex FFR networks. Each supergraph in the S3 BDD model consists of a network of nested subgraphs, representing a network of FFRs, respectively. The nodes in S3 BDDs share different meanings. On the one hand, a node of a subgraph represents a signal path in the related FFR. On the other hand, the same node can be regarded as a set of nested path segments with different lengths throughout a chain of FFRs, represented by the related supergraph of the S3 BDD model. The longest nested path segment presents the special case of the full signal path through all FFRs of the chain, represented by the supergraph. The modeling of signal paths in the circuit may become, on the one hand, even more sophisticated because the paths in the full FFR network can also go through several different supergraphs. On the other hand, it may provide diverse path compaction possibilities. Consider a combinational circuit in Fig. 4.4 as a network of connected gates so that each connection represents a fan-out net, and each gate can be regarded as an extreme case of the smallest possible FFR. We constructed such a circuit to model it with S3 BDDs as small as possible for reasoning but at the same time to present a rather complex structure of interconnected signal paths. The circuit is partitioned into three non-overlapping sub-circuits C 19 , C 20 , C 21 , mapped to the three S3 BDDs G19 , G20 , G21 , respectively, called as supergraphs. Each supergraph consists of several sub-graphs labeled with entry arrows. The nodes in the graphs are divided into two groups: those labeled with the circuit’s primary input variables, and those corresponding to the internal fan-out nets with labels addressing the entry arrows into the sub-graphs describing the function of the respective fan-out net. Each node in a supergraph of the S3 BDD model represents, first, either a primary input of the circuit or a branch of an internal fan-out net of the circuit and, second, a signal path in the circuit consisting of the consecutive path segments throughout the subset of FFRs the supergraph represents, as shown in Table 4.1. The subscripts

120

4 Fault Modelling with Structural BDDs

7 3 4

&

5 1 2

1

1

&

&

14

1

15

&

13 8

141

111

& 1

12

10 6

19

11

& 1

16

1

19

1

20

17

18 1

21

151

14 11

172

18

161

21

2 182 9

16 13

(a)

1

4

&

9

5

10

121 3

131

15 12

7 17

20

8 6

102

(b)

Fig. 4.4 Representation of the network of FFRs by a network of three supergraphs of the S3 BDD model [444]

1 and 2 of the numbers of fan-out stems in Table 4.1 have the following meaning: 1—represents the upper branch of the fan-out net in the circuit in Fig. 4.4a, and 2—the lower branch. The presented S3 BDD model allows a considerable reduction in the number of sites as fault locations. So, 42 lines in the gate-level circuit are represented by 18 nodes only, which in turn means that the size of the fault model, for example, of the SAF class, is reduced from 84 gate-level SAF to 36 FFR-level SAF in the S3 BDD model. Let us consider the mapping possibilities between the nodes of the S3 BDD model and the paths in the original gate-level circuit, considering both the FFRs only and the full FFR network. Figure 4.4a depicts three FFR sub-networks, represented by the supergraphs G19 , G20 , and G21 , which in turn, are highlighted by the bold frames, grey, and white Table 4.1 Mapping between the S3 BDD nodes and the paths in the circuit in Fig. 3.59 S3 BDD G19

S3 BDD G20

S3 BDD G21

Node

Path

Node

Path

Node

Path

14

141 -19

17

172 -20

18

182 -21

11

111 -19

16

161 -181 -20

9

9-21

7

7-19

13

131 -152 -181 -20

8

8-162 -21

15

151 -171 -19

5

5-122 -152 -181 -20

6

6-132 -162 -21

12

121 -142 -171 -19

1

1-101 -122 -152 -181 -20

10

102 -132 -162 -21

3

3-112 -142 -171 -19

2

2-101 -122 -152 -181 -20

4

4-112 -142 -171 -19

17

172 -20

4.1 Fault Modeling with Structural BDDs

121

colored gates, respectively. For example, the supergraph G19 has three nested subgraphs G17, G14, and G11, representing three gates (alias FFRs) with outputs 17, 14, and 11, respectively, and one FFR of three gates. We can interpret and exploit the nodes in S3 BDDs in the model in different ways. First, each node may represent a path through the related FFR. For example, node 7 ∈ G19 represents the path from the primary input 7 to the primary output 19 through several gates in the circuit. On the other hand, node 172 ∈ G20 represents the path from the fan-out net 17 through the FFR, consisting only of the single gate, to the primary output 20. Note the path from 17 to 19 is not represented by the node in the model (it disappeared during the synthesis), and it is represented indirectly by the entry arrow in G19 . The faults on the path from 17 to 19, as explained in Sect. according to Theorem 4.1, are tested indirectly without targeting them specially. Another interpretation of the S3 BDD nodes illustrates Table 4.1 as mapping of the node to the related signal path through the chains of the FFRs represented by the supergraph the node belongs to. The nodes labeled by the primary input variables represent the full signal paths (the longest paths for the related supergraph) throughout the FFR network. For example, node 1 represents the path c(1, 20) = (1, 101 , 122 , 152 , 181 , 20) in the circuit. The nodes labeled by fan-out variables represent the paths with different lengths, beginning with the branch of the first fan-out net and ending at the primary output of the circuit. The paths presented in Table 4.1 are divisible into path segments. The path or a path segment, represented by an S3 BDD node, is observable in the circuit following the fan-out nets it passes through. On the other hand, the paths or path segments, represented by an S3 BDD node, are also observable in the supergraph it belongs to, following the entry arrows upwards from the node to the root of the graph. The higher flexibility of exploiting S3 BDDs in test generation, fault simulation and fault diagnosis is achievable by defining and using the cross-graph paths through different supergraphs. For example, another view on node 1, next to the mapping c(1, 20) → 1, as shown in Table 4.1, would be c(1, 19) → 1, i.e., to represent by node 1 in path c(1, 19) = (1, 101 , 122 , 151 , 171 , 19). Path c(1, 19) can be partitioned into segments c(1, 152 ) = (1, 101 , 122 . 152 ) ⊂ C 20 , and c(151 , 19) = (151 , 171 , 19) ⊂ C 19 , belonging to different sub-circuits C 20 and C 19 , respectively. Hence, we can say that path c(1, 19) crosses supergraphs G20 and G19 due to the mappings C 20 → G20 and C 19 , → G19 , respectively, Note that paths c(1, 19) and c(1, 20) (considered above) differ in the path segments, highlighted in bold. Testing cross-graph paths is helpful for fault diagnosis. For example, if the test for c(1, 19) passes, but the test for c(1, 20) fails, we deduce from it that the fault is located at the path segment (152 -181 -20) ⊂ C 20 . If both tests fail, then we can reason that the fault has to be located at the intersection path segment (1–101 -122 ) = c(1, 19) ∩ c(1, 20).

122

4 Fault Modelling with Structural BDDs

4.1.5 Making Faults Observable in S3 BDDs The basic operations considered for SSBDDs in Sect. 3.2.1 also apply for S3BDDs: walking along the graphs guided by test pattern, activating of paths, fault simulation and test pattern generation for detecting and diagnosing faults. The techniques for performing these operations differ in that, in the case of SSBDDs, the path concept is applied only to single FFRs. In contrast, in the case of the S3 BDD model, the path concept is generalized to cross-circuit paths involving the full set of graphs of the model. To make a fault observable, we must sensitize the fault, propagate the sensitized fault to primary outputs and justify both solutions by consistent signals at the primary inputs. For sensitizing the fault at the related node, we must activate two paths in the supergraph the node belongs to, from the root node to the sensitized node and from it to terminals #0 and #1. Activation of these paths in the S3 BDD model does not differ from how we activate the paths in the system of SSBDDs. In both models, it is possible to apply either the depth-first or breadth-first search to explore the trees of true paths from the node under sensitization to terminals #0 and #1 of the related graph. In the breadth-first search, we first activate the path in the graph where the node is located, and after that, we try to justify the solution. In the depth-first search, we join the path activation and line justification to find as fast as possible the inconsistencies of variables assignments. For propagating the fault, related to the sensitized node in the S3 BDD model, to primary outputs of the circuit, we must search for a true path from the sensitized node to the root of the supergraph, the node belongs to, or switch to the search for a true cross-graph path to the root of another supergraph. For crossing the graphs, we can use entry arrows. For activating a path towards the root node of the supergraph, at each node attributed with the entry arrow, two alternatives exist: either to continue the search in the current supergraph or to switch to the search for a cross-graph path in another supergraph. Example 4.2 Consider a fault propagation search as a part of test generation for the faults at the primary input 1 of the circuit in Fig. 4.4a, mapped to node 1 in the S3 BDD model in Fig. 4.5, labeled with variable x 1 . The S3 BDD model consists of three supergraphs representing the three subcircuits of the FFR network represented in Fig. 4.4a. We assign x 1 = D with a symbolic value D ∈ {0, 1}, where D denotes the type of SAF, which is targeted: D = 1 means that we target the fault SAF-1, and D = 0 means the fault SAF-0. Using the symbolic value for the targeted fault, we can generate by a single run two test patterns simultaneously, which will differ only in the value of the variable under test. The test generation process for testing the fault x 1 ≡ D is illustrated in Fig. 4.5 for showing the activated paths in the S3 BDD model and in Fig. 4.6 for presenting the search for the true paths, activated for testing the fault. The subset of tested faults is present in Table 4.2. We start the search in the supergraph G20 by activating the path from node 1 to terminal #0, by assigning x 2 = 0. We do not need to activate the path from 1 to #1,

4.1 Fault Modeling with Structural BDDs 19

141

111

151

121

14 11

172

18

161

15 12

#0

21

7 17

20

123

3

131

#1

#1 9

5

1

#1

2

#1

8

#0

16 13

#0

10

4

182

6

102

#0

#0

#1

#0

Fig. 4.5 Activated paths during test generation for the circuit in Fig. 4.4 in the S3 BDD model

G20

G21

G20

G19

G19

x1 = D x2 = 0

x10 = D x6 = 1

x13 = D x5 = 0

x15 = D x12 = 1

x17 = D x16 = 0

x5 = 1

x8 = 0

x16 = 1

x5 = 1 Another possible option for fault propagation

x8 = 0 x6 = 1 x10 = 1 Conflict with x10 = D

x13 = 0 G19 x12 = D

x9 = 1

G20 x16 = D

G20

x9 = 1

x18 = D x9 = 0

Generated test pattern x 1 x 2 x 5 x 6 x9 D 0 0 1 0

x21 = D

Fig. 4.6 Fault propagation search tree for the circuit in Fig. 4.4 using the S3 BDD model

Table 4.2 Tested faults by test pattern generated in Fig. 4.6 SAF-D

x1

x2

x5

x6

x9

x 10, 2

x 13,1

D=0

+



D=1

+

+

x 16,1

x 18,2

x 21



+



+

+



+

+

+

+

+

+

+



+

+

because the right neighbor of node 1 is already terminal #1. Now we start to propagate the fault toward the outputs of the circuit. We discover in node 1 two alternatives. The first would be to continue the paths by the assignment x 5 = 1 toward the root node 172 in the same graph G20. After that, we would have two options again, i.e., to continue the path by assignment x 13,1 = 0, or to launch the cross-graph path via the entry arrow 12 to node 121 in G19 with assignment x 12 = D. Let us decide, however, to take the second option and start immediately in node 1 to activate a cross-graph path jumping via entry 10 to node 102 in graph G21. We propagate the symbol D to x 10,2 = D, and assign x 6 = 1. Again, there are again two options, i.e., either to assign x 8 = 0 towards the root node 182 in the same graph G21,

124

4 Fault Modelling with Structural BDDs

or to jump via entry 13 back to graph G20, to node 131 . We propagate the symbol D to x 13,1 = D, and assign x 5 = 0 to keep the fault sensitized in node 131 . There are again two options: to prolong the cross-graph path via entry 15 to 151 ∈ G19 , or to continue the path in G20 . We choose the latter one by assigning x 16,1 = 1. Then we jump via entry 18 to the root node 182 of G21 , and sensitize the node by x 9 = 0. This allows us to do the final assignment x 21 = D, which means that we have propagated the fault at the primary input 1 to the primary output 21. The generated test pattern T(x 1 , x 2 , x 5 , x 6 , x 9 ) = D0010 activates in the circuit, in Fig. 4.4, the signal path c(1, 21) = (1, 102 , 131 , 152 , 182 , 21). The nodes through which the faults are propagated are highlighted in grey color in Fig. 4.5. For these nodes, both types of SAF are tested. The tested faults at the nodes, highlighted by bold circles, can be considered as an added value of fault propagation because they were not specially targeted. The tested subset of faults by the two test patterns for D ∈ {0, 1} is shown in Table 4.2.

4.1.6 Modeling Path and Path Segment Delay Faults in S3 BDDs As we see later, direct fault handling in the case of SSBDDs, or S3 BDDs, offers several benefits regarding the possibility of reasoning multiple faults and coping with the problem of mutual masking of faults. Since each node in SSBDDs models represents a signal path in FFRs of digital circuits, it is also convenient to represent in a compact way the Path Segment Delay Faults (PSDF) in the circuit as single node delay faults in SSBDDs [451]. A PSDF we can consider as well as the product of Gate Delay Faults (GDF) collapsing along the path represented as a node in the SSBDD. Identifying first the path segment with PSDF using SSBDD and locating after that, more precisely, the gate delay faults in the faulty segment we can consider as a hierarchical diagnosis of delay faults. This is like the hierarchical diagnosis of SAF as described in Example 4.2. Assume a node m represents a path c(m) in the FFR of the circuit through a sequence of gates. Regarding the full circuit, we can consider the paths in FFRs as path segments. Knowing the delays of gates on that path, we can calculate the delays for the related path segments and assign these delay values as node parameters. Connecting the FFR-based path segments into longer segments over several FFRs, or into the full paths, the S3 BDD model allows a rise from a low gate-level delay analysis to a higher FFR-level path-segment delay analysis. Example 4.3 Consider the circuit in Fig. 4.7a, consisting of three FFRs, with fan-out stems 9, 10, and 11. The S3 BDD in Fig. 4.7b represents the FFRs by the embedded three SSBDDs with entries 92 , 102 , and 11, respectively. A path from input 3 to output 11, highlighted in the circuit, is activated by two consecutive input patterns T(1, 2, 3, 4, 5, 6, 7, 8) = (0-D10110), where D denotes

4.2 Extending the Class of Faults

125 11

1

0

&

2 D 3 4

5

a

91

1 1

&

0

6 1

7

c

9 92

b

&

&

101

102

d

e

1

10

1 &

1

8 0

&

D

f

(a)

11

7

101

8

5

91

6

1

2

3

4

#0 102

92

(b)

#1

#1

#0

Fig. 4.7 Paths and Paths segments in the circuit and S3 BDD

the transition 0 → 1 at the primary input 3. The pair of patterns is testing the Path Delay Fault (PDF) Δ01 (3, 11), which accumulates along the highlighted path. It also tests three path segment delay faults (PSDF) Δ01 (3, 9), Δ01 (92 , 10), and Δ01 (101 , 11) along the respective path segments. The path segments (3, 9) and (101 , 11) are represented in the S3 BDD by the highlighted nodes 3 and 101 , respectively. The PSDFs propagate from node 3 to entry 92 , and from 101 to root node 11. Note the path segment (92 , 10) is collapsed in the S3 BDD model, and the fault Δ01 (92 , 10) is tested indirectly (see Sect. 4.1.3) as an added value when generating the test for PDF Δ01 (3, 11). The possibility to model delays on signal paths by SSBDDs and S3 BDDs allowed the development of different applications in simulating structural aspects of logic circuits, such as hazards with multi-valued simulation [441, 442], calculation of delays with timing simulation [204], delay fault simulation [228], identification of true critical paths [465]. The SSBDDs allowed a higher speed of simulation compared with gate-level methods, mainly due to raising the simulation level from a network of gates to a network of FFRs.

4.2 Extending the Class of Faults In this Section, we extend the traditional stuck-at-fault class to cover a large class of structural faults taking place in digital circuits, such as bridging faults, opens and diverse physical defects characterizing the transistor level, and bring this large fault class under the joint umbrella of conditional SAF. This general fault model is composed of two parts to describe a defect: a SAF at a node of the circuit where the defect manifests itself, and a logic condition needed for sensitizing the defect. We propose a formal definition for such a conditional SAF model in the form of a generic parametric function and present a general technique for the formal generation of the conditions based on Boolean partial derivatives and solving Boolean differential equations for modeling physical defects in transistor-level circuits. A hierarchical fault reasoning model is presented based on the diverse techniques of fault modeling at different abstraction levels. We propose a method of cross-level

126

4 Fault Modelling with Structural BDDs

hierarchical modeling of conditional SAF using the conditions as interfaces between the neighboring levels.

4.2.1 Generalization of the SAF Model As the integration scale increases, more transistors can be fabricated on a single chip, increasing at the same time the difficulty of testing each transistor due to the exploding complexity of digital circuits and systems with increasing potential for defects, as well as the difficulty of detecting the faults produced by those defects. Another problem is that the defects due to their diverse physical mechanisms make it difficult to treat them mathematically. It led to the need to introduce uniform methods of modeling the diverse universe of malfunctioning digital circuits and systems.

4.2.1.1

Defects, Faults, Errors and Stuck-At-Fault Model

Let us refer to the established definitions for the different notations of the malfunctioning digital systems and circuits [510]. Definition 4.1 A fault is a representation of a defect reflecting a physical condition that causes a circuit to fail to perform in a required manner. A failure is a deviation in the performance of a circuit or system from its specified behavior and represents an irreversible state of a component such that we have to repair it in order to provide its intended design function. Figure 4.8 illustrates the relationships between defects, faults, errors and failures. A defect is a physical cause of the malfunctioning of the components, circuits and systems, whereas errors and failures are the observable effects of the physical cause. A circuit error is a wrong output signal produced by a defective circuit, whereas a failure refers to the malfunctioning behavior of the system. The faults are the mathematical concepts we use when following how the fault effects are spreading and propagating along different signal paths in the circuits and systems. The classical stuck-at-fault model (SAF/0, SAF/1) has been the industrial standard since 1959. The reasons for that are its properties, such as simplicity, tractability, logic behavior, measurability, and adaptability. Simulation of SAF is straightforward and deterministic. We can apply SAF on transistors, gates, and RTL components [510]. However, the high SAF coverage cannot guarantee high quality of testing, for example, for CMOS integrated circuits. The reason is that the SAF model ignores the actual behavior of CMOS circuits and does not adequately represent the majority of real IC defects and failure mechanisms, which often do not manifest themselves as stuck-at faults. The types of faults that we can observe in a real gate depend not only on the logic function of the gate but also on its physical design. These facts are well known [182], but usually, they are ignored in engineering practice.

4.2 Extending the Class of Faults

127

Faults Defect

Failure Component

System

Error

Fig. 4.8 Fault propagation from a physical defect to observable errors and failures

We can see an example of diverse structural faults happening in transistor circuits in Fig. 4.9. Only two of these faults correspond to the logic level SAF/0 and SAF/1 faults. To handle physical defects in fault simulation, we still need logic fault models for the following reasons: to reduce the complexity of simulation (the same logic fault may model many physical defects), a single logic fault model may be applicable to many technologies, logic fault tests may be used for physical defects whose effect is not well understood. The most crucial reason for logic modeling of physical defects is to get a possibility for moving from the lower physical level to the higher logic level with less complexity. Transistor level physical defects:

Logic level interpretations:

Stuck-at-1 Broken (change of the logic function) Bridging Stuck-open (change of the number of states) Stuck-on (change of the function) Short (change of the logic function) Stuck-off (change of the logic function) Stuck-at-0

Fig. 4.9 Diverse structural faults in a transistor circuit

128

4.2.1.2

4 Fault Modelling with Structural BDDs

Conditional Stuck-At-Fault Model

Many researchers have focused on developing new fault models for particular types of physical defect mechanisms like signal line bridges [20, 525], transistor stuckopens [176, 245] or failures due to changes in circuit delays [211]. Another trend has been to develop general fault modeling mechanisms and test tools to effectively analyze arbitrary fault types. The oldest example is the D-calculus [342] that uses D-cubes to model any arbitrary change in the logic function of a circuit block. This approach has been generalized in the pattern fault model [55, 212], which can model any arbitrary change in the logic function of a circuit block. This model is also called conditional SAF (CSAF) and is proposed as an extension of the classical SAF model in [266] for test generation and in [172] for diagnosis purposes. A similar fault model, called functional faults, and introduced in [425, 426, 430] was proposed for using it for multiple fault diagnosis in combinational circuits. The functional fault model consists of two parts, a SAF and additional logic constraints for sensitizing the SAF. It allows handling a broad class of faults uniformly, such as arbitrary defects in components and interconnects. In this approach, SAF represented the topological part of the fault model, whereas the defect activation condition represented the functional part of the model. A conditional SAF allows additional signal line objectives to be combined with a particular fault’s detection requirements. For complete exercising blocks in combinational circuits at the gate level, a similar pattern-oriented gate-exhaustive fault model was proposed in [78], which was extended to target bigger regions (collections of gates) by region-exhaustive fault model in [185]. The described functional, conditional and pattern fault models with different names, but with the same idea, offer high flexibility in defect modeling beyond the single SAF model. Further advancements considered in [47, 94] have introduced the fault tuple model. Fault tuples provide a general capability to handle sequential misbehavior of circuits. In [335], a diagnostic methodology is proposed that targets the whole set of bridging faults leading to either static or dynamic faulty behavior. A unified fault model for interconnect opens and bridges using constrained multiple line stuck-at faults (MLSF) is proposed in [88]. The faulty behavior is referred to as Byzantine effect at a floating line with k branches, leading to 2k – 1 possible fault cases. A method is proposed to reduce the number of 2k – 1 to a reasonably smaller subset, which, however, needs additional information about the threshold voltages of the transistors driven by the floating nodes. A novel X-fault model was developed in [319, 505] for the efficient diagnosis of realistic faults leading to the Byzantine effect. The X-fault model represents all possible behaviors of a physical defect or defects in a gate and/ or on its fan-out branches by using different X symbols on the fan-out branches. A novel method for parallel X- fault simulation was proposed in [452], and is discussed in detail in Sect. 5.2.4. Let us consider the following a formal approach to generating the fault model of conditional SAF for physical defects in transistor circuits using Boolean differential equations.

4.2 Extending the Class of Faults

129

4.2.2 Modeling Physical Defects by Boolean Differential Equations In the following, we discuss in more detail the definition of the conditional SAF model, how the classical SAF fault class can be extended to cover a broad class of physical defects, such as defects inside the components, and bridges between the interconnects between components, and how the model supports hierarchical simulation of physical defects using FFR level SSBDDs.

4.2.2.1

Physical Defect-Oriented Fault Modeling

Consider the design library, where the components f y are represented by Boolean functions y = f y (X), X = (x 1 , x 2 ,…, x n ). Introduce a symbolic Boolean variable Δ for representing a given defect in the component, which converts the fault-free function f y into another faulty function f yΔ . Let us construct for the physical defect Δ of a digital circuit, a generic parametric function y ∗ = f y∗ (x1 , x2 , . . . , xn , Δ) = Δ f y ∨ Δ f yΔ ,

(4.4)

to model the component f y as a function of the defect variable Δ (as a parameter), which describes jointly the behavior of the component for both fault-free and faulty cases. For the faulty case, Δ = 1, and for the fault-free case, Δ = 0, we get from (4.4), two special cases y ∗ = f yΔ , and y ∗ = f y , respectively. Lemma 4.1. The solutions W y (Δ) of the Boolean differential equation.

W y (Δ) =

∂ f y∗ ∂Δ

=

] [ ∂ Δ f y ∨ Δ f yΔ ∂Δ

=1

(4.5)

describe the set of conditions that activate the defect Δ to produce an error on the output of the FFR, described by logic function y = f y (X). Proof The proof follows from the expression (4.4), which describes the possible impact of the defect on the value of function f y . The value of the partial Boolean derivative indicates the conditions under which this possibility applies. ∎ To find the conditions W y (Δ) = 1 for a given defect Δ, we have to create the corresponding logic expression for the faulty function f yΔ , either by logic reasoning, or by carrying out defect simulation directly, or by carrying out real experiments to learn the physical behavior of different defects. The described method represents a general approach to map arbitrary transistor-level physical defects inside the components f y to a higher logic level.

130

4 Fault Modelling with Structural BDDs

Example 4.4 Let us have a transistor circuit in Fig. 4.10, which implements the function y = x1 x2 x3 ∨ x4 x5 . A short defect as shown in Fig. 4.10, changes the function of the circuit as follows: y ∗ = (x1 ∨ x4 )(x2 x3 ∨ x5 ). Using the defect variable Δ for the short, we can create a generic differential equation for this defect as follows: [ ] ∂ Δ(x1 x2 x3 x4 x5 ) ∨ Δ(x1 x4 )(x2 x3 ∨ x5 ) ∂y∗ = ∂Δ ∂Δ = x1 x2 x4 x5 ∨ x1 x3 x4 x5 ∨ x1 x2 x3 x4 x5 = 1 (4.6) From the Eq. (4.6), three possible solutions follow, i.e., T = {10 × 01, 1 × 001, 01110}. Each of them can be used as a test pattern for the given short. As a contra-example, it is easy to show the inadequacy of the stuck-at-fault (SAF) model for testing the transistor level faults. For example, the set of five test patterns 1110x, 0xx11, 01, 101, 10, 110, 11,010, which test all the stuck-at faults in the circuit, does not include any of the three possible test solutions for detecting the short from the set T. Note that for the same purposes of finding the test for the defect Δ, we could also solve directly the following equation to distinguish the fault-free f , and faulty f Δ behaviors of the circuit, respectively, f ⊕ f Δ = (x1 x2 x3 ∨ x4 x5 ) ⊕ (x1 ∨ x4 )(x2 x3 ∨ x5 ) = 1, Fig. 4.10 A transistor circuit with a short physical defect

y x1

x4

x2

x3

x5

Short

4.2 Extending the Class of Faults

131

without introducing the defect variable Δ. However, solving the differential Eq. (4.5) is much easier due to the simplification possibilities resulting from specific properties of partial Boolean derivatives [409]. For understanding the operations used for solving the Eq. (4.5) of Boolean differentiation of logic functions, let us refer here to the following useful properties of Boolean partial derivatives for Boolean functions y = f (X), where X = f (x 1 , x 2 , …, x n ) [409]. Statement 4.1 We can calculate the Boolean partial derivative as ∂ f (X ) = f (x1 , x2 , . . . , xi , . . . , xn ) ⊕ f (x1 , x2 , . . . , xi , . . . , xn ) ∂ xi Statement 4.2 If f (X) is independent of x i , and g(x) depends on x i , then we can apply the following simplifications ∂[ f (X ) ∧ g(X )] = f (x) ∧ ∂ xi ∂[ f (X ) ∨ g(X )] = f (x) ∧ ∂ xi

∂g(X ) , ∂ xi ∂g(X ) . ∂ xi

Example 4.5 Let us have the function y = x1 (x2 x3 ∨ x2 x3 ), where f (X) = x1 , and g(X ) = x2 x3 ∨ x2 x3 . Then ∂g(X ) ∂ x2 ∂[ f (X ) ∧ g(X )] ∂[x1 (x2 x3 ∨ x2 x3 )] = = x1 = x1 = x1 . ∂ x2 ∂ x2 ∂ x2 ∂ x2 The properties of Statements 4.1 and 4.2 allow simplifying the Boolean differential Eq. (4.5), to be solved for generating the constraints for the conditional SAF model, as well as for test patterns for a fault at x i .

4.2.2.2

Mapping Physical Defects to Logic Level

The method described in Sect. 4.2.2.1 represents a general approach to map an arbitrary physical defect onto a higher (in this case, logic) level. By the described approach, an arbitrary physical defect of a digital circuit, described by a function y = f (X) can be represented by a logical constraint W y (Δ) = 1, to be satisfied for activating the defect, as shown in Fig. 4.11. Definition 4.2 Let us denote a functional fault (conditional SAF) in the circuit y = f (X), representing a defect Δ as a pair (dy, W y (Δ)), where dy (Boolean differential) denotes an erroneous reaction of the output signal (considered as SAF) to the defect Δ under a condition that a pattern W y (Δ) is applied at the inputs of the circuit.

132

4 Fault Modelling with Structural BDDs

Fig. 4.11 Modeling a physical defect by a logic constraint

Component with defect:

Component

Wy(Δ)

f(x1,x2,…,xn)

y

Defect Δ Logic constraints In other words, in the presence of a defect Δ, we will have an error dy = 1, if the condition W y (Δ) = 1 is satisfied. The following examples will show the feasibility of using Boolean differential Eq. (4.5) for mapping faults from the physical transistor level to the logic level. Example 4.6 Transistor-level stuck-on faults. The behavior of the transistor level NOR gate depicted in Fig. 4.12, cannot be described strictly logically. The input vector “10” produces a conducting path from VDD to VSS, and the corresponding voltage at the output node Y will not be equal to either VDD or VSS, but will instead be a function of the voltage divider formed by the channel resistances of the conducting transistors: VY =

VD D R N . (R P + R N )

Depending on the ratio of these resistances along with the switching thresholds of the gates driven by the output of the faulty gate y, the output voltage of the faulty gate may or may not be detected at the primary output. Denote by the Z the ambivalent value of the signal at the gate output. We can represent the erroneous function of the gate as follows: Stuck-on NOR gate

VDD x1

RN

x2

VY = Y

RP

x1

VDD RN ( RP+ RN )

x2 VSS

Fig. 4.12 Modeling of the stuck-on fault in the NOR gate

x1

x2

y



0

0

1

1

0

1

0

0

1

0

0

Z = VY

1

1

0

0

4.2 Extending the Class of Faults

133

y Δ = x1 x2 ∨ x1 x2 Z . If x1 x2 = 1, then y Δ = Z . According to expressions (4.4) and (4.5), we get, respectively: y ∗ = Δ(x1 x2 ) ∨ Δ(x1 x2 ∨ x1 x2 Z ) ∂ y∗ W y (Δ) = = x1 x2 Z = 1. ∂Δ

(4.7)

From that, it follows that the condition to activate the defect is x1 = 1, x2 = 0.x1 = 1, x2 = 0. Example 4.7 Transistor level stuck-open faults. For the transistor stuck-open fault of the NOR gate in Fig. 4.13, there will be no path from the output node to either VDD or VSS for some input patterns. As a result, the output node will retain its previous logic value. It creates a situation where a combinational logic gate behaves like a dynamic memory element. The faulty function of the gate is: y Δ = x1 x2 ∨ x1 x2 y ' , where y' corresponds to the output value stored at the output of the faulty gate. Using the expressions (4.4) and (4.5) we can get: ) ( ) ( y ∗ = Δ(x1 x2 ) ∨ Δ x1 x2 ∨ x1 x2 y ' = x2 x1 Δy' ∂ y∗ Wy (Δ) = =x1 x2 y ' = 1. ∂Δ

(4.8)

Stuck-off (open) VDD

NOR gate x1

x2

y

yd

x2

0

0

1

1

0

1

0

0

t x1 x2 y

1

0

0

Y’

1

0

0

1

0

2

1

0

1

Y x1

Test sequence is needed: 00,10

x1

x2

1

1

0

VSS No conducting path from VDD to VSS for “10” Fig. 4.13 Modeling of the stuck-off (open) fault in the NOR gate

134

4 Fault Modelling with Structural BDDs

From that, it follows that the condition to activate the defect is x1 = 1, x2 = 0, y ' = 1. In other words, for testing the fault, we must apply a test sequence of two patterns: first, “00” to get the value 1 on the output, and then “10”.

4.2.3 Hierarchical Fault Modeling Using Conditional SAF 4.2.3.1

Functional and Structural Conditional SAF

The functional fault model (also called conditional SAF, see Sect. 4.2.1.2) described as a couple (dy, W d ) in Definition 4.2, can be regarded, first, as a method of mapping arbitrary physical defects onto the logic level, and second, as a universal method of fault modeling. Let us divide this fault model into two parts: the functional part of the conditional SAF, and the structural part of the conditional SAF. Definition 4.3 We call the set of conditions W F y = {W F y (Δ)} a functional part of the conditional SAF model of the component F y to represent the physical defects Δ through functional deviations in the behavior of the component F y : a defect Δ produces a higher logic level erroneous signal on the block output y if W F y (Δ) = 1. We considered this type of fault model in Sect. 4.2.2.2, where we discussed the internal defects stuck-on and stuck-off in the CMOS transistor circuit. Similarly, we can model the physical defects in the network of interconnects between the components in the circuit. Consider line y in the circuit, and a condition W S y (Δ) = 1 needed for activating the interconnect defect Δ, so that line y will change its value as an error (erroneous reaction to the defect Δ). Definition 4.4 We call the set of conditions W S y = {W S y (Δ)} a structural part of the conditional SAF model of the component F y to represent the physical defects Δ in the interconnect structure of the circuit, which affect the output signal y of the circuit y = f (X). Figure 4.14 illustrates the locations and effects of the functional and structural parts of conditional SAFs in component F y , mapping physical level defects onto the higher logic level. By satisfying the condition W F y (Δ1 ) = 1 with proper input stimuli of the component, a defect Δ1 in the component F y is activated to produce an erroneous logic signal on the output y of the component. By the condition W S y (Δ2 ) = 1, an interconnect structural defect (e.g., a short) Δ2 in the surrounding of the component F y is activated to produce the erroneous change of the value of y due to Δ2 . Let us consider the following two examples of the generation of the structural conditions W S y (Δ) of activating the structural defects in a component y = f (X) of a network as interactions between the signal lines in the component with other lines in the network.

4.2 Extending the Class of Faults Fig. 4.14 Modeling of physical defects by the conditional SAF model

135

Network of components (FFR) Surrounding of Fy W Fy (Δ1)

Component Fy

W Sy (Δ 2)

Defect Δ1

y

Short Δ2

Example 4.8 Bridging faults between lines. Let two lines, y in the component y = f (X) and x outside the component, are involved in a bridging fault (y ↔ x) as shown in Fig. 4.15a. Figure 4.15b presents the classical wired-AND equivalent circuit, which describes the faulty behavior yΔ = yx, of the pair of lines. For this equivalent circuit, we can apply traditional gate-level test generation methods for detecting this bridging fault, similar to the procedure for stuck-at faults. In this approach, instead of creating equivalent circuits, we derive conditions to implement the more general conditional SAF testing approach. Figure 4.15a illustrates the two steps for generating the conditional fault model. First, we set up the generic function y∗ = f(y, x, Δ) = Δy ∨ Δyx,

y

Fy Δ

x

y*= f(y, x, Δ )

(a)

y*

Wired - AND

y x

yΔ &

(b)

Fig. 4.15 Modeling of the bridging fault by the conditional SAF model



136

4 Fault Modelling with Structural BDDs

by combining the faulty behavior yΔ = yx, with the correct line y, and thereafter we formally calculate the condition for testing the bridging fault: W yS (Δ)

( ( ) ) ∂[y Δ ∨ Δx ] ∂[y Δ ∨ x ] ∂ y∗ ∂[Δy ∨ Δyx] = = = = = yx = 1. ∂Δ ∂Δ ∂Δ ∂Δ (4.9)

From the solution of the equation, we derive the test pattern for detecting the bridging fault: y = 1, x = 0. It is not always possible to detect bridging faults by single pattern tests. There are, for example, “open” faults that convert the combinational circuits into sequential ones, as we considered in Example 4.7, where we need two pattern test sequences. Similarly, also bridging faults can cause the same kind of trouble if the bridge is creating a feedback loop between the output and input lines of a combinational circuit. Example 4.9 Bridging faults that create feedback loops. Such a circuit is presented in Fig. 4.16a. It presents the classical wired-AND equivalent circuit as in Fig. 4.15, which describes the faulty behavior y Δ = x1 x2 y ' ∨ x3 . As in the previous example, we first create the generic function for the faulty circuit:

) ( ) ( ) ( y ∗ = f x1 , x2 , x3 , y ' , Δ = Δ(x1 x2 ∨ x3 ) ∨ Δ x1 x2 y ' ∨ x3 = x1 x2 Δ ∨ y' ∨ x3 The apostrophe at the variable y' refers to a new state variable arising due to the fault that has caused a feedback loop in the circuit. It means that the value of the state variable y’ belongs to the previous clock cycle. The partial Boolean derivative of the generic function of the faulty circuit y* with respect to the defect variable Δ, we calculate as ( ] ∂[x1 x2 Δ ∨ y ' ) ∨ x3 ∂ y∗ S (4.10) = = x1 x2 x3 y ' = 1. W y (Δ) = ∂Δ ∂Δ Bridging fault causes a feedback loop:

x1 x2

&

&

x3

Equivalent faulty circuit:

x1

y x2

& x3

&

Bridge

(a)

&

(b)

y

Two pattern sequence as the constraint for detecting the bridging defect

t x1 x 2 x3 y 1 0 x 1 0 2 1 1 1 1 (c)

Fig. 4.16 Modeling of the bridging fault that converts the circuit into a sequential one

4.2 Extending the Class of Faults

137

From the solution of the Eq. (4.10), we notice that the calculated logic constraint for detecting the conditional SAF caused by the bridging defect represents a twopattern test sequence, depicted in Fig. 4.16c. In the correct circuit, the test produces in the output of the circuit y = 1. In the faulty case, y = 0.

4.2.3.2

General Conditional SAF Model

Let us join the functional and structural conditional SAF classes formulated in Definitions 4.3 and 4.4, respectively, into a joint fault model for a component in a network of other components specified by a given design library. Definition 4.5 We call a set W y = W yF ∪ W yS a full set of constraints of the conditional SAF fault model of the component F y . In Table 4.3, several common types of functional and structural defects (also called as faults) are presented as examples of explaining them in terms of conditional SAF introduced as a general fault model for components (in general, for complex gates) of gate-level logic networks. In the first row, we present a class of defects inside the component, activated by input stimuli (called conditions) of the component. We discussed some respective cases of this class in Examples 4.5, 4.6 and 4.7. The second row illustrates the transition fault class where the delay faults affected the signal paths inside the component can be activated by two pattern input stimuli, which can also be interpreted as the constraint to be satisfied. By apostrophe (‘), we mark the signal y’ of the previous clock cycle to represent the first stimuli of the two-pattern test. The 4th and 5th rows represent the SAF class as the special case of conditional SAF. Other faults in Table 4.3 belong to the bridging faults between the inputs of the component or between the internal lines of the component and other lines of the network. Some respective cases of bridging faults we discussed in Examples 4.8 and 4.9. The last row refers to the design or assembly errors. Table 4.3 Activation conditions for typical defects No.

Class

Defect examples

Conditions W (Δ)

1

W yF

Defect Δ in a component F y

{W yF (Δ)}

2

Delay (transition) fault on the line y

y = 1, y' = 0, or y = 0, y' = 1

3

SAF y ≡ 0

y=1

SAF y ≡ 1

y=0

AND bridge between y and x

y = 1, x = 0

6

OR bridge between y and x

y = 0, x = 1

7

Exchange of lines y and x (e.g., design error)

y = 1, x = 0, or y = 0, x = 1

4 5

W yS y

138

4 Fault Modelling with Structural BDDs

4.2.3.3

Experimental Research on Building the Conditional SAF Model

To create a set of defects for the given component with function y = f (X) in the form of the constraints W y (Δ), the first and the most creative task is to develop the set of defects Δ. However, the possibility of updating this set is always available, even during the lifetime of the library component. To calculate the constraints W y (Δ), the Boolean functions f Δ for the defective cases are to be derived analytically or experimentally by simulation. Then we can build a table with defect coverage for all calculated input patterns and specified defects. Another option is to have a set of given input patterns for the library component, and to build up the table with defect coverage by the same method of calculating the values of Boolean derivatives. A special case is to have an exhaustive set of input patterns. With respect to the length of the exhaustive test set, it is useful to find an optimal shorter test set with the highest defect coverage. An optimal set of patterns for an arbitrary library component (cell) can be generated using defect tables with probabilistic parameters. These parameters are defined as probabilistic estimation of physical defects occurrence. An example of such a defect coverage table for a complex library cell depicted in Fig. 4.17 is shown in Table 4.4 [75]. The function of the investigated library cell is AN1 = (NOR (AND (A, B), AND (C, D)). The logic diagram of the complex gate AN1, and its schematic diagram present Fig. 4.17a, b respectively. The probabilities in column Pi in Table 4.4 were calculated by simulation-based defect analysis of the cell AN1 (from an industrial 0.8 m CMOS standard cell library) [45]. The probabilistic parameters pi of defects Δi (i = 1, 2,…, n, where n is the number of investigated defects) are pre-calculated for each defect from the defect

A B

& 1

C D

&

(a)

(b)

Fig. 4.17 Modeling of the bridging fault that converts the circuit into a sequential one [75]

Not(C*D)

not(B+C*D)

A/GND

5

4,9059E−08

1,9847E−08

D/VDD

N1/Q

20

21

9,9504E−09 1,3697E−07

not(C+A*B)

Not(A + B)

2,1443E−Q8

2,0036E−07

Not(A*B)

nol(not(D) + A*B)

D/Q

D/GND

18

2,7604E−08

not((A+B+C)*(A*B+C+not(D)))

D/N1

17

19

1,9862E−08 1,4727E−08

Not(A*B)

not(D + A*B)

C/GND

C/VDD

15

9,1480E−Q8

16

not(not(C)+A*B)

C/Q

14

1,8972E−08 3,9147E−08

not(A + C*D)

not((A+B+D)*(A*B+not(C)+D))

B/VDD

C/N1

12

4,6466E−08

3,3912E−08

5,2930E−08

5,7931 E−08

1,0477E−07

Not(C*D)

not(not(B)+C*D)

13

B/GND

11

not(A+C*D)

B/N1

B/Q

9

JO

not(B*D*(A+C))

B/D

8

not(B*C*(A+D))

A/VDD

B/C

6

7

2,6895E−08

6,9159E−08

not(B*(not(A)+C+D) +C*D)

not(not(A)+C*D)

A/N1

A/Q

3

4

3,1100E−07 1,1940E−07

not(A*C*(B+D))

not(A*D*(B+C))

A/C

A/D

1

Pi

2

Erroneous function f di

Fault d i

1

1

1

1

1

0

1

1

1

1

1

1

1

1

1

1

1

2

1

1

1

1

1

1

1

1

1

3

Input patterns t j

Table 4.4 The defect table with conditional probabilities for the complex gate AN1 [75]

1

1

1

1

1

1

1

4

1

1

1

1

1

1

5

1

1

1

1

1

1

6

1

1

1

1

1

1

1

7

1

1

1

1

1

1

1

8

1

1

1

1

1

1

9

1

1

1

1

1

1

10

1

1

1

1

1

1

1

11

1

1

1

1

1

1

1

1

1

12

1

1

1

1

1

1

13

15

(continued)

1

1

1

1

1

1

14

4.2 Extending the Class of Faults 139

Q/GND

Q/VDD

24

25

SA1 for Q

SA0 for Q

SA0 for Q

Not(C*D)

N1/GND

N1/VDD

22

23

Erroneous function f di

Fault d i

1

Table 4.4 (continued)

3,5661E−08

1,0145E−07

2,1532E−07

8,4883E−09

Pi

1

1

0

1

1

1

1

1

2

1

3

Input patterns t j

1

1

4

1

1

5

1

1

6

1

7

1

1

8

1

1

9

1

1

10

1

11

1

1

12

1

1

13

1

1

14

1

15

140 4 Fault Modelling with Structural BDDs

4.2 Extending the Class of Faults

141

table according to the number of investigated defects. The probabilistic parameters (relative probabilities) we calculate by the formula Pi pi = ∑n i=1

Pi

.

The probabilities of defects contribute to better estimation of the test pattern quality in cases when not all defects are covered by the test set generated for the whole circuit under test. The probabilities of defects are useful in finding the optimal list of patterns (the list of fault conditions) for cells with the highest defect coverage. In [75], Automatic Library Builder (ALB) tool was developed to find an optimal list of fault conditions for an analyzed cell. The tool uses the probabilistic parameters pi in the analysis of the defect table, like Table 4.4. Probabilistic estimation of individual defects helps find the best subset of patterns for the investigated cell. Many of the faults investigated in the cell are detectable by several patterns. Therefore, we can easily calculate the defect table’s best probabilistic cover.

4.2.3.4

Simulation of Defect Coverage with SSBDD Model

By using the set of conditions W y = W yF ∪ W yS it is possible to map the physical defects from the lower physical transistor-level to the next higher, i.e., logic level, in the form of constraints W y for logic-level fault simulation purposes. If the components of the circuit represent standard library cells, or if they belong to the specific library of complex gates, in both cases, we must process the described calculations of constraints for activating the defects only once for all library components and add the sets of calculated conditions into the library of components. After creating by defect simulation the defect tables for the set of cells, similar to Table 4.4, we can extract an optimized subset of test patterns T y covering the full set of defects and include into the cell library to support fault simulation experiments targeting the calculation of defect coverages of the generated test sequences for the given circuits. We can carry out the simulation at the FFR level using SSBDD or S3 BDD models (see Chap. 5). In this data transformation process, we can proceed at three levels, i.e., defects, from defects to logic constraints for gates, and from gate simulation to FFR simulation using SSBDDs. Theorem 4.2 Let F y be a component used in the implementation of the FFR y = f (X) presented by the SSBDD Gy . Let the conditional SAF model of F y given by a set of conditions W y . Then, a test T y for the component F y is complete regarding W y , if, for each condition w ∈ W y , there is a test patternt ∈ T y , which simultaneously satisfies the condition w and detects the respective SAF. Proof The proof follows from Lemma 4.1 and Definitions 4.3–4.5.



Definition 4.6 We say that the conditional SAF w ∈ W_y in F_y is detected if a test pattern t ∈ T_y satisfies the condition w ∈ W_y and, at the same time, detects the respective SAF at the output of the component F_y.
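As a small illustration of Definition 4.6 and Theorem 4.2, the completeness of a test set with respect to W_y can be checked as sketched below; the predicates satisfies and detects_saf are assumed helpers standing for condition matching and ordinary SAF detection, not functions from the book.

def is_complete(T_y, W_y, satisfies, detects_saf):
    # T_y: iterable of test patterns; W_y: iterable of conditions.
    # satisfies(t, w) and detects_saf(t) are assumed user-supplied predicates.
    return all(
        any(satisfies(t, w) and detects_saf(t) for t in T_y)
        for w in W_y
    )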


Example 4.10 Figure 4.18 illustrates test generation using the conditional SAF fault model. We calculate the fault coverage achieved by the four test patterns applied to the FFR in Fig. 4.18a. The test patterns and the set of conditional SAF detected for all five components in the FFR are described in Table 4.5. Assume the same set of conditions for each component, i.e., W_c = {00, 01, 10, 11}. As we see from Table 4.5, the only completely tested component is c2: the input patterns applied to c2 fully cover the set of conditions W_c, and all four patterns detect a SAF on the output of c2. Of all 20 conditional SAF in total, 11 are covered.

Figure 4.18b illustrates the FFR-level fault simulation results for the test pattern t4. From the detected node 1, we reason that the full path from the input 1 to the output y is tested, and hence the respective components c1, c2, c3 on that path are tested for the conditional SAF 11, 10 and 11, respectively. The simulation is carried out in the same way for the other test patterns. The simulation for each test proceeds at three levels: (1) SAF at the FFR level in the SSBDD in Fig. 4.18b, then (2) SAF at the gate-level network in Fig. 4.1a, and finally (3) the conditional SAF at each gate where the SAF is detected in Fig. 4.1a.

[Fig. 4.18 Conditional SAF detection with 4 test patterns: (a) the FFR circuit with components c1–c5 and the signal values simulated for the patterns t1–t4; (b) the corresponding SSBDD with the fault simulation results for t4.]

[Fig. 4.19 A deadlock between the complexity and accuracy: the Catch-22 of classical logic-level approaches, where high-level models decrease complexity but lose accuracy, while low-level models increase accuracy at the cost of complexity, calling for a hierarchical approach.]


Table 4.5 Coverage of detected conditional SAF
[Table 4.5: rows are the input patterns t1 = 01010, t2 = 11110, t3 = 01110, t4 = 11010 (inputs 1–5); the columns list the conditional SAF (input-value pairs) detected in the components C1–C5 by each pattern; the last row gives the per-component fault coverage, 11 of the 20 conditional SAF in total.]

The results of all detected conditional SAF are shown in Table 4.5.

To summarize, fault models are used in test generation and fault diagnosis, and the efficiency of both relies heavily on the efficiency of fault simulation. For the very generic conditional SAF model, it is reasonable to divide fault simulation into two phases: (1) SAF simulation, to determine which nodes y in the circuit are "active" (observable via fault propagation to the primary outputs at the given test pattern); (2) defect analysis for those nodes y where the SAF is detected, according to the constraints W_y satisfied by the test pattern, to determine which realistic defects Δ can influence the signals at the observable (active) nodes y.

Example 4.10 demonstrates that the two concepts, using SSBDDs and the conditional SAF fault model, allow fault reasoning even at three hierarchical levels:

• at the FFR level, where the nodes y of the simulated SSBDD are identified for which the SAF is detected;
• at the gate level, where, for the nodes y identified at the first step as faulty, the respective components F_y in the FFR are identified for which the SAF is detected;
• at the physical defect level, where, for the components F_y identified at the second step as faulty, the physical defects Δ_y are identified as present, according to the conditional SAF fault model and the satisfied conditions w ∈ W_y.

The conditional SAF model allows an arbitrary set of signal lines to be grouped into activation conditions for a single fault site, so that a variety of fault types and defects can be modelled. A conditional SAF can be either static or dynamic. In a dynamic fault, an additional set of initial values is specified for the required signal lines to accommodate two-pattern tests for testing delays, transition faults, or opens. Based on the conditional SAF model, a deterministic defect-oriented test pattern generator DOT was developed in [359], which, in addition to simulating and generating test sequences for physical defects, allows proving the logic redundancy of the physical defects that are not detected.
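The two-phase flow can be sketched as follows; the helper names and data layout are assumptions for illustration, not the DOT tool or its interfaces.

# A hedged sketch of two-phase conditional-SAF fault simulation: phase 1 (not
# shown) is ordinary SAF simulation delivering the set of observable nodes;
# phase 2 checks which activation conditions W_y the pattern satisfies there.

def conditional_saf_simulation(pattern, active_saf_nodes, conditions):
    """active_saf_nodes: nodes y whose SAF is detected (observable) under
    'pattern', i.e., the result of phase 1.
    conditions: dict mapping a node y to its set W_y; a condition is a dict of
    required signal values, e.g. {'A': 1, 'B': 0}."""
    detected = []
    for y in active_saf_nodes:                      # phase 2: defect analysis
        for w in conditions.get(y, ()):             # constraints W_y of node y
            if all(pattern[sig] == val for sig, val in w.items()):
                detected.append((y, tuple(sorted(w.items()))))
    return detected

Phase 2 only inspects nodes already proven observable by phase 1, which keeps the defect analysis cheap compared to simulating every defect explicitly.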

4.2.3.5 Cross-level and Hierarchical Fault Modeling

The efficiency of test generation is highly dependent on the system description and the fault models. Since traditional low-level test generation methods and tools have lost their importance for complex VLSI systems, other approaches based mainly on functional and behavioral methods are gaining more and more popularity. However, the trend to higher levels moves us even further away from the real life of defects. To adequately handle defects in nanoscale technologies, new fault models and defect-oriented test methods are appearing; on the other hand, defect-orientation increases the complexity even more. We thus face a dilemma between two goals, "to decrease the complexity" versus "to increase the accuracy", as illustrated in Fig. 4.19, where the solution of the problem seems impossible because it is also the cause of the problem. To get out of this Catch-22 deadlock between the two opposite trends, high-level modeling to reduce the complexity and defect-orientation to increase the accuracy, the two should be combined using a hierarchical approach. Compared to high-level functional modeling as another way of reducing complexity, the advantage of hierarchical approaches lies in constructing test plans at higher functional levels to reduce the complexity, while modeling faults and defects at lower levels to increase the accuracy.

Figure 4.20 illustrates the concept and meaning of the hierarchical test approach based on the conditional generic fault model, which consists of two ends: the conditional test stimuli as the front-end at the inputs of the component under test (CUT), and the targeted fault in the component as the back-end. Such a pair of front- and back-ends exists at each level of the system abstraction. Moreover, the concept of conditional fault modeling allows introducing the fault conditions as cross-level interfaces between neighboring levels of the system abstraction, so that the lower-level tests serve as the higher-level conditional fault models in the form of constraints to be satisfied. For example, at the lowest, component level, the test T_k,i, as the front-end for the component F_k,i, targets a defect Δ as the back-end, where the defect represents a mutation Δ ∈ W_k,i in the transistor network. At the same time, the test pattern T_k,i of the component serves as the condition of the fault model, in the form of a constraint W^F_k,i to be satisfied for testing the component F_k,i at the higher, module level. The conditions W^F_k,i, together with the mutations W^S_k,i of the module-level network of components, form the fault models for generating the module-level tests T_k. These tests, in turn, form the system-level fault models W^F_k for the modules F_k, which, together with the mutations W^S_k of the system-level network of modules, form the fault models for generating the system-level test program T.

The implementation of the hierarchical test concept is illustrated in Fig. 4.21 at three levels: the transistor level, the logic level as gate-level networks, and the register-transfer (high) level of modules and buses. The different levels are traditionally described by different mathematics and handled by different tools for test generation and fault simulation.

[Fig. 4.20 Cross-level and hierarchical fault-test relationships: at each abstraction level (component F_k,i, module F_k, system F), the lower-level test (front-end) serves as the condition part of the higher-level fault model (back-end), with the condition sets W^F and the structural mutations W^S acting as cross-level interfaces between the network of transistors, the network of components, and the network of modules.]

[Fig. 4.21 Implementation of the hierarchical level test concept: a defect Δ in the transistor-level network is mapped to a conditional fault of a logic-level component (a gate network over inputs x1–x5), which in turn is propagated as an error through the RT-level structure of modules (M1–M3, registers R1, R2) up to the system level.]

In this book, we propose a uniform modeling approach for all levels: SSBDDs or S3BDDs for the logic level (see the methods in Chaps. 5 and 6), and HLDDs for the RT level and for microprocessors (see the methods in Chaps. 7 and 8). The SSBDD model can also be used for modeling physical defects at the lowest, transistor-level abstraction of logic circuits, given by the Boolean differential equations we used in Sect. 4.2.2.2. To find the solutions of Eq. (4.5) for constructing the conditional SAF models of the physical defects, we must represent the Boolean formula of (4.5) as an SSBDD model and apply to it a path scanning algorithm similar to the first two steps of Algorithm 6.5 used in Sect. 6.3.3.2.


Each path traced in the SSBDD G_y, representing a Boolean function y = f(X), from the root node to the terminal #1 represents a term of the DNF and, hence, one of the solutions of the equation f(X) = 1.
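The path-scanning step can be pictured with the following minimal sketch, which enumerates all paths from the root of an SSBDD to the terminal #1; the node-table layout is an illustrative assumption, not the book's data structure. Every yielded assignment is one term of the DNF, i.e., one solution of f(X) = 1.

def one_paths(nodes, m=0, assignment=None):
    # nodes: list of (var, succ1, succ0); successors are node indices or '#1'/'#0'.
    if assignment is None:
        assignment = {}
    if m == '#1':
        yield dict(assignment)               # one DNF term / solution of f(X) = 1
        return
    if m == '#0':
        return
    var, succ1, succ0 = nodes[m]
    for value, succ in ((1, succ1), (0, succ0)):
        assignment[var] = value              # activate the edge for x(var) = value
        yield from one_paths(nodes, succ, assignment)
        del assignment[var]

For the hypothetical two-node SSBDD of y = x1 AND x2, given as [('x1', 1, '#0'), ('x2', '#1', '#0')], the only yielded assignment is {'x1': 1, 'x2': 1}.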

4.3 Fault Collapsing

We give a short introduction to the basic terms and definitions of the theory of fault collapsing in digital circuits. We show that the process of SSBDD synthesis is itself an act of fault collapsing, which happens due to the creation of the one-to-one mapping between the nodes in SSBDDs and the related signal paths in the circuits. Then we define the relations of fault dominance and fault equivalence in terms of the faults at the nodes of SSBDDs and develop the corresponding algorithms for carrying out fault collapsing directly in SSBDDs. The algorithms are based on a single traversal along the Hamiltonian paths of SSBDDs and, therefore, have linear complexity. Additional fault collapsing is achieved by transforming the SSBDDs into S3BDDs; this is discussed in Sect. 3.4 in the context of S3BDD synthesis. The results discussed in this section have been published earlier in [458, 459].

4.3.1 Related Work and Definitions

Fault collapsing is a procedure to reduce the number of faults of a given circuit targeted for testing purposes. Using a reduced set of only representative faults instead of the full set of faults aims to minimize the effort in many test-related tasks, such as test pattern generation, fault simulation for test quality evaluation, fault diagnosis, circuit testability evaluation, etc.

4.3.1.1 Structural Fault Collapsing

The methods of fault collapsing are classified as structural and functional. Structural fault collapsing uses only the topology of the circuit, whereas functional fault collapsing uses the functional properties inherent in the circuit. There are two classical ways of structural fault collapsing: fault-equivalence-based and fault-dominance-based collapsing [35, 366].

Definition 4.7 A fault f_j dominates a fault f_i if every test that detects f_i also detects f_j. When two faults f_j and f_i dominate each other, they are equivalent.

From Definition 4.7, the following results.


Corollary 4.1 If f_j dominates f_i, only f_i needs to be considered during test generation. If two faults are equivalent, only one of them needs to be considered during test generation or fault diagnosis.

Structural fault collapsing uses the topology of the circuit structure. For example, a stuck-at-0 fault (SAF y ≡ 0) at the output y of an AND gate is equivalent to the SAF x_i ≡ 0 faults at its inputs x_i. Similarly, SAF y ≡ 1 at the output of the AND gate dominates all the input SAF x_i ≡ 1 faults. The equivalence class for the NAND gate is shown in Fig. 4.22, where the test vector 111 detects the SAF/0 faults on the inputs A, B and C, as well as SAF/1 on the output D of the NAND gate; consequently, these faults are equivalent for the NAND gate. The dominance classes for the NAND gate are also shown in Fig. 4.22: SAF/0 on the output of the NAND gate dominates all the input SAF/1 faults (Fig. 4.23).
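The equivalence and dominance classes of Fig. 4.22 can be checked by brute force; the following sketch (an illustration, not a tool from the book) enumerates all input vectors of a 3-input NAND gate and compares the detecting test sets of its stuck-at faults according to Definition 4.7.

# Detecting test sets of stuck-at faults of a 3-input NAND gate with inputs
# A, B, C and output D; equal test sets mean equivalence, a subset relation
# means dominance of the superset's fault over the subset's fault.

from itertools import product

def detecting_tests(fault):
    """fault = (signal, stuck_value), signal in {'A','B','C','D'}."""
    good = lambda a, b, c: 1 - (a & b & c)               # fault-free NAND
    tests = set()
    for a, b, c in product((0, 1), repeat=3):
        vals = {'A': a, 'B': b, 'C': c}
        sig, sv = fault
        fa, fb, fc = (sv if sig == s else vals[s] for s in 'ABC')
        faulty = sv if sig == 'D' else 1 - (fa & fb & fc)
        if faulty != good(a, b, c):
            tests.add((a, b, c))
    return tests

T = {f: detecting_tests(f) for f in [('A', 0), ('B', 0), ('C', 0), ('D', 1),
                                     ('A', 1), ('D', 0)]}
assert T[('A', 0)] == T[('B', 0)] == T[('C', 0)] == T[('D', 1)]  # equivalence class
assert T[('A', 1)] <= T[('D', 0)]                                # D/0 dominates A/1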

[Fig. 4.22 An example of the equivalence and dominance classes of faults in a NAND gate with inputs A, B, C and output D: the test vector 111 yields the equivalence class {A/0, B/0, C/0, D/1}, while the vectors 011, 101 and 110 yield the dominance pairs (A/1, D/0), (B/1, D/0), (C/1, D/0); the corresponding equivalence and dominance collapsing rules for AND gates are also shown.]

[Fig. 4.23 Functional fault collapsing [490]: a small circuit of AND gates with functional dominance examples (d0, g0, j0, k1); all faults: 24, structurally equivalent faults: 16, structurally dominant faults: 13, functionally dominant faults: 4.]


The classical structural approaches to fault collapsing are based on gate-level circuit processing. An approach based on fault folding was introduced in [418] for the structural collapsing of faults, using the iterative analysis of gate fault equivalence and dominance relations. Since structural fault collapsing is very fast, it is employed in many Automated Test Pattern Generators (ATPG) [366]. Figure 4.22 demonstrates the method of fault folding. The iterative application of the rules, as shown in Fig. 4.22, allows the internal faults in the fan-out free tree-shaped circuit to collapse, leaving only the faults at the primary inputs in the representative fault list.

4.3.1.2 Functional Fault Collapsing

Functional fault collapsing uses the circuit's functional information to establish equivalence and dominance relations [490]. Two faults are functionally equivalent if they produce identical faulty functions [370]; equivalently, two faults are functionally equivalent if we cannot distinguish them at the primary outputs (PO) with any input test vector [490]. Functional fault collapsing is generally regarded as very difficult to compute because it deals with the whole function of the circuit under test. In [249], the authors showed that the algorithmic complexity of identifying functionally equivalent faults is exponential, i.e., similar to the complexity of ATPG algorithms. An approximate fault collapsing via simulation was proposed in [17]. In [323], a metric called the level of similarity was introduced and used efficiently to improve the level of approximation.

Fault collapsing carries a risk: if a fault in the collapsed fault set remains undetected, then the faults equivalent to it or dominating it, which were removed from the fault set, may remain undetected as well. In [325], a safety parameter s restricting the use of the dominance relation was introduced, and a safe fault collapsing method with a level of safety s was proposed.

The potential of hierarchical fault collapsing was discussed in [160], where it was shown that the hierarchical approach offers more possibilities to increase efficiency than the non-hierarchical one. An algorithm based on transitive closures on dominance graphs has been proposed [306, 370], which enables more efficient hierarchical fault collapsing; it is a graph-theoretic, fault-independent and polynomial technique for functional fault collapsing. In [24], functional dominance was used to collapse the fault sets; however, this technique requires a quadratic number of ATPG runs to obtain the collapsed fault set. An improvement with linear complexity in the number of ATPG runs was proposed in [365]. Since ATPG itself is used for learning the functional dominance relations, both of these techniques are suitable only for small circuits, but they can be helpful when combined with hierarchical fault collapsing. In [249], two theorems based on unique requirements and the D-frontiers of faults were introduced to extract equivalence and dominance relations. A similar approach, based on the dominator theory, was used in [11] for identifying more functionally equivalent fault pairs. In [491], a generalized dominance approach achieves similar or lower run times than [249].


A collapsed fault set helps generate smaller test sets for achieving the desired fault coverage, and it contributes to fault diagnosis as well. Since fault diagnosis deals with fault pairs, a linear reduction of the number of faults results in a quadratic reduction of the target pairs. In [365, 366], a novel diagnostic fault equivalence and dominance technique was proposed. A method of fault collapsing for diagnosis called dominance with sub-faults was proposed in [30]; it allows reducing the diagnosis search space. A framework where equivalence and dominance relations are defined for fault pairs is introduced in [324]: fault pairs are removed from consideration during diagnostic fault simulation and test generation when they are guaranteed to be distinguished whenever other pairs are distinguished. A technique to speed up diagnosis via dominance relations between sets of faults using function-based techniques was proposed in [31]; due to its high memory and time complexity, this approach applies only to small circuits.

All the listed techniques are fault-oriented approaches, i.e., they consider one fault pair at a time and use ATPG for the identification of equivalence or dominance relations. In [325], a dynamic fault collapsing procedure is presented for fault diagnosis, where the faults are collapsed during diagnostic test pattern generation, contrary to the traditional static approaches described above, where the faults are collapsed before test generation.

One of the main limitations of the described methods is that there is no evidence that investing more effort in fault collapsing reduces the total test generation time [325]. The reason is that most of the methods use ATPG itself as a tool for fault collapsing, or they are usable only for small circuits because of their high computational complexity.

4.3.1.3 Fault Collapsing with Structural BDDs

In this section, we approach the fault collapsing problem in three steps. First, we take advantage of the topological gate-level fault dominance effects arising during the synthesis of SSBDDs and achieve, as a result, an initial fault collapsing effect without targeting it directly. In the second step, we perform an FFR-level fault collapsing procedure using SSBDD-specific fault equivalence and dominance relations to achieve an additional reduction of the set of representative faults. The third step, the transformation of SSBDDs into S3BDDs, provides a further fault collapsing effect without targeting the fault relationships at all.

So, we deal with the fault collapsing problem hierarchically at two levels to cope with the complexity of circuits and keep the solutions scalable. The first step deals with fault collapsing at the gate level, where the logic-level fault collapsing relationships are used. The second and third steps deal with fault collapsing at the FFR-network level, using the SSBDD- and S3BDD-specific fault equivalence and dominance relationships between the high-level nodes, which represent gate-level paths, instead of gate-related rules.


The fault collapsing at the first and third steps can be interpreted as an added value (a byproduct) of the synthesis of the SSBDD and S3BDD models, respectively. The methods are well scalable and are therefore usable for large circuits, where the functional fault collapsing methods fail due to the complexity of the calculations. In the following, we focus mainly on the second and third steps of the fault collapsing. We start by presenting the main theoretical concepts for analyzing equivalence and dominance relations between the faults in FFR networks, above the gate level, modeled with SSBDDs.

4.3.2 Fault Equivalence and Fault Dominance in the SSBDD Model

4.3.2.1 Synthesis of SSBDDs as a Fault-Collapsing Procedure

By applying Algorithm 3.1 for merging two BDDs, we reduce the current model, represented as a set of elementary BDDs, by one node and by one BDD. By removing the node from the model, we also remove the faults related to this node from the fault set. Hence, there is a relation between the problem of collapsing the fault set of a given gate-level circuit and the problem of synthesizing the SSBDD model for that circuit. Let us draw the following corollaries from the results of Sect. 3.1, describing the synthesis of SSBDDs, and from the theory of fault folding that led to the basic fault collapsing procedure presented in [418].

Corollary 4.2 From Algorithm 3.1, it results that in the SSBDD G_y generated for the FFR C_y, the number of nodes in G_y is equal to the number of inputs of C_y, and to each node m in G_y there corresponds a unique signal path in C_y from the primary input x(m) to the output y of the FFR.

The corollary is a reformulation of Definition 3.4 regarding the interpretation of SSBDDs; it links the definition with Algorithm 3.1.

Corollary 4.3 Since all SAF at the inputs of the FFR C_y, according to the approach of fault folding [418], form the collapsed fault set of the FFR, and since all these faults are represented by the faults at the nodes m of the corresponding SSBDD G_y, the creation of the SSBDD is equivalent to fault collapsing, similar to fault folding.

Theorem 4.3 Let G_y be the SSBDD model generated for the combinational circuit C_y by Algorithm 3.1. Then, any set of test patterns that checks all SAF at the nodes of G_y also checks all SAF in C_y.


Proof The proof follows from Corollaries 4.1 and 4.2, and from Theorem 5 in [418]. ∎

Note, in addition to the theorem, that the fault sets of C_y and G_y are different: the fault set of G_y is smaller than that of C_y. Another remark is that, unlike the traditional gate-level approaches to test generation and fault simulation, which use the collapsed fault list apart from the simulation model, the SSBDD-based test generation and fault simulation are carried out, first, at the higher FFR level and, second, with a direct representation of the SAF in the model. Therefore, there is no need for separate fault lists during test generation and fault simulation with SSBDDs or S3BDDs, except when other fault classes, such as conditional SAF, are defined.

Example 4.11 Consider a circuit and the SSBDD generated for it in Fig. 4.24. The node x22 in the SSBDD represents the path from x22 to y in the circuit, shown by bold lines in Fig. 4.24. On the other hand, the stuck-at faults SAF y ≡ 0 and SAF y ≡ 1 dominate the faults x22 ≡ 0 and x22 ≡ 1, respectively, in the circuit. The same dominance relation holds for all the faults along the bold path from x22 to y with regard to the faults at x22. From this dominance relation, it follows that all the faults along the signal path from x22 to y, except x22/0 and x22/1, can be collapsed; in other words, we can remove them from the fault list to be considered in fault simulation and test generation. The two faults at x22 form the representative fault subset for the full signal path from x22 to y. On the other hand, these two faults also form the set of faults related to the node x22 in the SSBDD. It follows that the collapsed faults are not visible in the SSBDD, and all the representative faults we must consider are related to the SSBDD nodes. This fault-collapsing result is similar to that of the fault-folding method presented in [418].

[Fig. 4.24 Activated paths in a digital circuit and its SSBDD [458]: the circuit with inputs x1, x21, x22, x3, x4, x5, x61, x62, x71, x72, x81, x82, x9, x10 and output y, and the corresponding SSBDD whose nodes, ordered along the Hamiltonian path, map one-to-one to the signal paths of the circuit.]


To summarize, the procedure of SSBDD synthesis constitutes the first part of fault collapsing for a given circuit. In Sect. 4.3.3, we discuss the possibility of additional fault collapsing, this time carried out directly on the SSBDD model.

4.3.2.2 Identification of Fault Equivalences and Dominances in SSBDDs

The second part of fault collapsing consists of processing the SSBDD model with the goal of finding an additional subset of faults that may be collapsed using the equivalence and dominance relationships defined at the SSBDD level. Since the nodes of SSBDDs represent signal paths in the gate-level circuit, each node-related collapsed fault in the SSBDD is equivalent to all the related gate-level faults in the signal path of the circuit represented by that node.

Let us formulate the fault equivalence and fault dominance conditions for the faults associated with the nodes of SSBDDs. Since the nodes of an SSBDD are strongly ordered along the Hamiltonian path, it is reasonable to organize the analysis as a traversal of the graph, stopping at each node to identify whether a dominance or equivalence relationship exists between the current node and its neighboring nodes. Moreover, since the gates of the circuit are easy to recognize during the node-traversing process (see Sect. 3.3.1.1), we can transfer the equivalence and dominance rules applicable to gates to the subgraphs of the SSBDD recognized as gates. The resulting idea is, at each stop of the traversal, to analyze the equivalence or dominance relations of the current node with its immediate neighbors.

Fault Equivalence in SSBDDs

Let the two nodes m and m_e in the SSBDD be neighbors. To check that the faults x(m) ≡ e and x(m_e) ≡ e are equivalent, we have to show that, according to Definition 4.5, both faults can always be tested with the same test pattern and that the faults mutually dominate each other.

Theorem 4.4 The faults x(m) ≡ e and x(m_e) ≡ e at two neighboring SSBDD nodes m and m_e are equivalent iff the following two conditions are satisfied: (1) the nodes have a common neighbor node m^e, and (2) the node m_e has a single incoming edge, from the preceding node m.

Proof Conditions (1) and (2) of the theorem are sufficient to identify that the nodes m and m_e represent the inputs of the same gate, according to Lemma 3.1. From that it follows, according to the gate-level fault collapsing rules in Sect. 4.3.1.1, that the faults at the nodes m and m_e are in the equivalence relationship. In SSBDD notation, this means that for testing these nodes by activating a path that passes the edge (m, m_e) connecting them, there is no other possibility of detecting the same faults by activating another path that would pass through one of the nodes but not through the other. ∎


Example 4.12 Consider Fig. 4.24. The faults x22 ≡ 0 and x3 ≡ 0 are equivalent because the related nodes x22 and x3 have the same neighbor node x4, and there is a single entry edge into x3; according to Lemma 3.1, this is the AND gate. Hence, one of these faults collapses. Similarly, using Theorem 4.4, it is easy to find other equivalent faults in the SSBDD of Fig. 4.24, such as (x1/0) ~ (x21/0), (x5/1) ~ (x61/1), (x82/0) ~ (x9/0), and (x72/1) ~ (x62/1), where "~" denotes equivalence. On the other hand, the faults x71/0 and x81/0 are not equivalent: despite having the same neighbor x82, the node x81 has more than one entry edge, and the single-entry requirement of Theorem 4.4 is not satisfied.

Fault Dominance in SSBDDs

Definition 4.8 Let us call the paths starting from the root node and ending at the terminal #1 the 1-paths, and the paths ending at the terminal #0 the 0-paths.

Theorem 4.5 Let M_e, e ∈ {0, 1}, |M_e| ≥ 2, be a subset of nodes of an SSBDD, all of which have the same neighbor m^e. If there is a node m' ∈ M_e such that any e-path through at least one other node m ∈ M_e \ m' of this subset, at the value x(m) = e, always passes through the node m' at the value x(m') = e, then the fault x(m') ≡ e dominates the fault x(m) ≡ e.

Proof From the description of the subset M_e in the theorem and, in accordance with the test generation requirements described in Sect. 3.2.1.4, it follows that if a test detects a subset of faults x(m) ≡ e, m ∈ M' ⊆ M_e, the test activates an e-path through the nodes m ∈ M' at the values x(m) = e. On the other hand, from Definition 4.7, it follows that if every test that detects the fault x(m) ≡ e, m ∈ M', also detects the fault x(m') ≡ e, m' ∈ M', then the fault x(m') ≡ e dominates the fault x(m) ≡ e. From the above we can derive that if every test activating an e-path through the node m at the value x(m) = e always passes through another node m' at the value x(m') = e, then the fault x(m') ≡ e dominates the fault x(m) ≡ e. This conclusion extends from the single node m to the full subset of nodes M_e \ m', as stated in the theorem. ∎

Example 4.13 Consider the subset of nodes M_1 = {m3, m5, m6} in the SSBDD G_y of Fig. 4.25a (for simplicity, we equate here the node numbers with the variable indices). All nodes in M_1 have m7 as the common successor node. Let us activate in G_y the 1-path l1(m1, #1) = (m1, m3, m4, m6, #1) through the nodes {m3, m6} ⊂ M_1 by the assignments x1 = 0, x3 = 1, x4 = 0, x5 = 0, x6 = 0. If we additionally assign x7 = 0, then the test detects the faults x3 ≡ 0, x5 ≡ 0 and x6 ≡ 0 of all the nodes in M_1. We see that, according to Theorem 4.5, the faults x3 ≡ 0 and x6 ≡ 0 dominate the fault x5 ≡ 0, because any test for x5 ≡ 0 also implies the detection of the faults x3 ≡ 0 and x6 ≡ 0. (Another possibility of testing x5 ≡ 0 would be the test where we assign x1 = 1, x2 = 0 and leave the other assignments the same.) The opposite is not true: the test for the faults x3 ≡ 0 and x6 ≡ 0 obtained by activating the 1-path l1(m1, #1) = (m1, m3, m4, m6, #1) passing the node m5 does not detect the fault x5 ≡ 0.

[Fig. 4.25 To the problem of fault dominance: (a) and (b) show two SSBDDs over the nodes x1–x8 (with x61 in (b)) used in Examples 4.13 and 4.14.]

The fact that the faults x3 ≡ 0 and x6 ≡ 0 dominate the fault x5 ≡ 0 means that we can collapse the faults x3 ≡ 0 and x6 ≡ 0, relying on the knowledge that any test generated for x5 ≡ 0 will detect the faults x3 ≡ 0 and x6 ≡ 0 as an added value. Figure 4.25b shows another example, where we can analyze the subset of nodes M_1 = {m3, m5, m61} with m7 as the common successor of all nodes in M_1. Here, only the fault x3 ≡ 0 dominates the faults at the other nodes of M_1: there exist three different test patterns that detect the faults x5 ≡ 0 and x61 ≡ 0, but all of them also detect the fault x3 ≡ 0.

Definition 4.9 We call consecutive nodes on the Hamiltonian path of an SSBDD a group if they all have the same neighbor node and all of them, except the first one, have a single incoming edge.

The groups of nodes, according to Lemma 3.1, represent either an OR gate, if the nodes of the group form a vertical homogeneous path, or an AND gate, if the nodes of the group form a horizontal homogeneous path.

Example 4.14 Consider a circuit and its SSBDD model in Fig. 4.25a. The two consecutive nodes m1 and m2, and the nodes m4 and m5, form two groups in the SSBDD in Fig. 4.26. The first of them represents the AND gate, and the second represents the OR gate.

From Theorem 4.5 and Example 4.13, we can formulate the following corollary, which helps to construct a straightforward algorithm for collapsing the faults in the SSBDD model.

[Fig. 4.26 To the problem of fault equivalence: a circuit with gates a–d over the inputs x1–x6 and its SSBDD, containing the groups (m1, m2) and (m4, m5) discussed in Example 4.14.]

Corollary 4.4 If the node m does not belong to any group and does not have a common neighbor m^e with its neighbor node m_e, then the fault x(m) ≡ e dominates the faults x(m') ≡ e for all the nodes of a subset M_e which share the common neighbor m^e with the node m.

Proof At least one of the two following claims must hold. (1) Since the node m does not belong to a group, it must have more than one incoming edge, including one from a node of M_e; hence, there must be more than one e-path entering the node m at the value x(m) = e. (2) On the other hand, since m does not have a common neighbor m^e with m_e, there must be more than one e-path passing through both the node m, at the value x(m) = e, and the node m_e. Consequently, since at least one of these statements must hold for the SSBDD, there exists a test that activates an e-path which, if it passes through the node m', also passes through m, and which, if it tests the fault x(m') ≡ e, also tests the fault x(m) ≡ e. ∎

From the above, the following corollary results.

Corollary 4.5 If an e-node m does not belong to any group, the fault x(m) ≡ e has at least one fault that it dominates and, hence, can be collapsed.

Example 4.14 Consider Fig. 4.26, representing a circuit and its SSBDD. There are two groups in the SSBDD: the group of nodes (m1, m2), which represents the AND gate, and the group (m4, m5), representing the OR gate. From these groups, we derive that the faults x1 ≡ 0 and x2 ≡ 0 are equivalent, and the faults x4 ≡ 1 (x4 ≡ 0) and x5 ≡ 1 (x5 ≡ 0) are also equivalent. The nodes m3 and m6 are the dominating nodes. According to Corollary 4.4, the faults x3 ≡ 0 and x6 ≡ 0 both dominate the fault x5 ≡ 0, and therefore we can collapse them. In fact, they dominate even more faults than only x5 ≡ 0: for example, it is easy to recognize, using Theorem 4.5, that the fault x3 ≡ 0 also dominates the fault x4 ≡ 0, and the fault x6 ≡ 0 dominates in total 5 faults: x1 ≡ 0, x2 ≡ 0, x3 ≡ 0, x4 ≡ 1, x5 ≡ 1.


However, to collapse a dominating fault, it is sufficient to show the dominance with respect to at least a single fault. Corollary 4.4 specifies for that purpose the faults for which it is easiest to prove the dominance, and Corollary 4.5 formulates a simple rule which does not need to specify any concrete dominance relation.

4.3.3 Fault Collapsing in SSBDDs

4.3.3.1 Basic Rules of Fault Collapsing in SSBDDs

Theorems 4.4 and 4.5 allow specifying two algorithmic rules for identifying, for each node, whether its fault is equivalent to another fault or dominates another fault.

Rule 4.1 (Equivalence of faults). Finding in the SSBDD a group of e-nodes m connected by horizontal edges, where e = 1 (or by vertical edges, where e = 0), representing an AND (OR) gate, is the evidence for identifying the equivalent faults x(m) ≡ e. All equivalent faults of a group can be collapsed except one, which is kept as the representative fault of the group of collapsed faults. Rule 4.1 results directly from the method of synthesizing SSBDDs by the superposition of the BDDs of gates (see Sect. 3.1.2).

Rule 4.2 (Dominance of faults). If an e-node m does not belong to any group, the fault x(m) ≡ e dominates at least one other fault and, hence, can be collapsed.

Example 4.15 Consider Fig. 4.27 as an example of fault collapsing on the SSBDD of the respective FFR circuit. There are only two nodes in the SSBDD that form a group, (m4, m5), representing the OR gate; one of the inputs disappeared during the SSBDD synthesis for the other gates in the circuit, and those gates cannot be identified directly. Due to the equivalence of the two faults x4 ≡ 1 and x5 ≡ 1 at the nodes m4 and m5, one of them, e.g., x4 ≡ 1, can collapse. On the other hand, at every node (highlighted in grey) except the node m5, there is a fault that dominates at least one other fault and can therefore be collapsed. The following fault dominance relations can be highlighted: (x1 ≡ 0) → (x2 ≡ 0), (x2 ≡ 1) → (x3 ≡ 1), (x3 ≡ 0) → (x4 ≡ 0), (x4 ≡ 1) → (x4 ≡ 0). Note that the faults x4 ≡ 1 and x5 ≡ 1 dominate each other and, hence, are equivalent.

Example 4.16 In the SSBDD in Fig. 4.28, no fault dominance can be identified. We can, however, recognize four groups, each consisting of two nodes, highlighted by dotted frames. The groups, representing AND gates, imply the equivalence of the SAF/0 faults at both nodes of each group. The highlighted SAF/0 faults related to the grey-colored nodes can collapse.


[Fig. 4.27 Fault collapsing due to the fault dominance in SSBDD (to Example 4.15): an FFR circuit over the inputs x1–x5 and its SSBDD, with the group (m4, m5) and the grey-highlighted nodes whose faults can be collapsed.]

[Fig. 4.28 Fault collapsing due to the fault equivalence in SSBDD (to Example 4.16): a circuit over the inputs x1–x8 and its SSBDD with four two-node groups highlighted by dotted frames.]

4.3.3.2 Algorithm of Fault Collapsing

In the following, we present an algorithm with linear complexity for fault collapsing in a circuit represented by the SSBDD model. The algorithm is based on a pairwise checking of Rule 4.1 (for equivalence) and Rule 4.2 (for dominance) while traversing the Hamiltonian path of the SSBDD; it has linear complexity since it needs only a single traversal through all the nodes of the SSBDD along its Hamiltonian path.

Algorithm 4.1 Fault collapsing on SSBDDs
Input: SSBDD model of the given circuit
Output: Set of collapsed faults
Comment: the value of e for a node m follows the direction of the group the node belongs to (for a horizontal AND-group: e = 1; for a vertical OR-group: e = 0)


1. for all SSBDDs in the model
2.   for all nodes m in the current SSBDD (in the order of node numbers)
3.     if the node m and its neighbor m_e have a common node      *** Checking for Rule 1
4.       then fault x(m) ≡ e collapses                            *** Rule 1 applies: equivalence of faults
5.            Group := 1; goto end_for                            *** Flag: a group was found
6.       else if Group = 1                                        *** Checking for Rule 2
7.            then Group := 0; goto end_for                       *** The group ends, no fault collapses
8.            else fault x(m) ≡ e collapses; goto end_for         *** Rule 2 applies: fault dominance
9.   end_for
10. end_for
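A compact reading of Algorithm 4.1 in code form is sketched below; the node attributes ('var', 'e' for the group-direction value, and 'common' marking whether the node shares a common successor with its Hamiltonian-path neighbor) are assumed bookkeeping for illustration, not the book's data structures.

def collapse_faults(ssbdds):
    # ssbdds: list of graphs; each graph is a list of node dicts in Hamiltonian order.
    # A node carries 'var' (its variable), 'e' (1 for a horizontal AND-group direction,
    # 0 for a vertical OR-group direction) and 'common' (True if the node and its
    # neighbor on the Hamiltonian path have a common successor node).
    collapsed = []
    for graph in ssbdds:
        in_group = False                         # flag: a group was found
        for node in graph:                       # single traversal -> linear complexity
            if node['common']:                   # Rule 1: equivalence of faults
                collapsed.append((node['var'], node['e']))
                in_group = True
            elif in_group:                       # the group ends, no fault collapses
                in_group = False
            else:                                # Rule 2: fault dominance
                collapsed.append((node['var'], node['e']))
    return collapsed

Since each node is visited exactly once and only its immediate neighborhood is inspected, the run time grows linearly with the number of SSBDD nodes, which is what makes the method scalable to the large circuits in Table 4.8.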

Example 4.17 Consider the results of the fault collapsing procedure obtained by tracing all the nodes of the SSBDD in Fig. 4.24, illustrated in Table 4.6. The initial number of gate-level SAF faults in the FFR of Fig. 4.24 is 52 (2 faults for each of the 26 lines). By synthesizing the SSBDD for the FFR of the circuit according to Algorithm 3.1, we reduce the number of representative faults from 52 to 28 (2 faults for each of the 14 nodes of the SSBDD). With the presented method, we collapse an additional 10 faults (x1 ≡ 0, x22 ≡ 0, x4 ≡ 0, x5 ≡ 0, x71 ≡ 1, x81 ≡ 0, x82 ≡ 0, x9 ≡ 0, x72 ≡ 1, x10 ≡ 1), which results in 18 remaining representative faults, i.e., a nearly threefold reduction compared to the initial number of faults.

Table 4.6 Fault collapsing results for the SSBDD in Fig. 4.24

Node   e   Collapsed fault   Comments
x1     1   SAF x1/0          Equivalent with x21/0
x21    1   No collapse
x22    1   SAF x22/0         Equivalent with x3/0
x3     1   No collapse
x4     1   SAF x4/0          Dominates x61/1
x5     0   SAF x5/0          Equivalent with x61/0
x61    0   No collapse
x71    1   SAF x71/1         Dominates x61/1
x81    1   SAF x81/0         Dominates x61/1
x82    1   SAF x82/0         Equivalent with x9/0
x9     1   SAF x9/0          Dominates x62/1
x72    0   SAF x72/1         Equivalent with x62/1
x62    0   No collapse
x10    1   SAF x10/1         Dominates x62/1

4.3.3.3 Experimental Results of Fault Collapsing

The fault collapsing experiments were carried out on an Intel Core i5 3570 quad-core 3.4 GHz machine with 8 GB RAM, using the ISCAS'85, ISCAS'89 and ITC'99 benchmark circuits. The experimental results are presented in Tables 4.7 and 4.8 [459]. Table 4.7 compares the sizes of the fault sets after fault collapsing by the proposed method (New) with previous structural [24, 418] and functional [30] methods. The proposed method achieves better fault collapsing results than the referenced structural methods. The functional method [30] is the best of the state-of-the-art methods in reducing the fault set, but it is not scalable for complex circuits due to the high computational cost of calculating transitive closures in dominance graphs.

Table 4.7 Comparison with other methods

                     Fault set size                     CPU time (s)
Circuit   # Faults   [418]   [24]    [30]    New        [30]   New
c1355     2710       1234    1210    808     1210       46     0.003
c1908     3816       1568    1566    753     1243       14     0.008
c2670     5340       2324    2317    1853    1989       110    0.009
c3540     7080       2882    2786    2092    2340       831    0.010
c5315     10,630     4530    4492    3443    3900       72     0.012
c6288     12,576     5840    5824    5824    5824       4      0.019
c7552     15,104     6163    6132    4707    5156       232    0.016

Table 4.8 Fault collapsing for ISCAS'89 and ITC'99 circuits

Circuit   # Gates     R*        2N        R (New)   Gain R*/R   Time (s)
s13207    24,882      9815      10,456    7933      1.24        0.04
s15850    29,682      11,727    12,150    9178      1.28        0.04
s35932    65,248      39,094    39,094    29,797    1.31        0.26
s38417    69,662      31,180    32,320    25,162    1.24        0.20
s38584    72,346      36,305    38,358    28,016    1.30        0.18
b15       47,414      21,072    23,498    17,439    1.21        0.04
b17       154,220     68,037    81,330    60,684    1.12        0.12
b18       463,570     206,736   277,978   205,866   1.00        0.42
b18_1     453,088     202,812   264,244   196,179   1.03        0.40
b19       1,345,442   533,142   560,704   415,251   1.28        0.84
b19_1     1,275,720   507,476   534,184   396,151   1.28        0.80
b21       79,556      35,994    48,182    35,169    1.02        0.08
b21_1     63,732      29,091    34,510    25,359    1.15        0.06
b22       113,308     51,277    70,464    51,511    1.00        0.11
b22_1     98,006      44,771    52,172    38,359    1.17        0.08
Average   290,392     121,902   138,643   102,804   1.2         0.24


The proposed method is fast due to the linear complexity of the algorithm and is well scalable. Because of the different computing frameworks used in the compared cases, the speeds of the algorithm in [30] and of the presented new method cannot be compared directly; we can still perform a relative comparison. Consider the two experiments with the circuits c3540 and c6288 (highlighted in grey in Table 4.7), which differ in the complexity of the analysis used in [30], causing a difference in computational cost of about 200 times. In the case of the proposed new method, the difference in computing times for these two circuits is only a factor of two.

The experimental results for the larger ISCAS'89 and ITC'99 circuits (R* is the number of remaining faults after collapsing in [39, 173]) are shown in Table 4.8. The column R*/R shows the gain (1.2 times on average) of the achieved fault collapsing (column R (New)) compared to the results in [37, 39]. The last column shows that the proposed Algorithm 4.1 is very fast; hence, it is well scalable and efficiently usable for large circuits. The linear complexity of the method is explained by the fact that the fault equivalence and dominance reasoning is reduced to a local pairwise analysis of the neighboring nodes while traversing the Hamiltonian path of the SSBDD.

Chapter 5

Logic-Level Fault Simulation

In this chapter, we present different applications of structural DDs, such as SSBDDs and S3BDDs, in the field of simulation of digital circuits. We have developed algorithms for parallel-pattern simulation that trace the paths in the structural BDDs in parallel for all bits of the data words. For timing analysis and the detection of hazards in a circuit, we present a method of multi-valued simulation with SSBDDs based on 3- and 5-valued algebras. A novel idea of parallel-pattern critical path backtracing was implemented using SSBDDs for detecting, in a single simulation run, all faults of a combinational circuit detectable by the simulated set of patterns; we also generalize the method to sequential circuits. Another group of simulation-based methods is proposed for identifying critical paths and for searching for the longest critical path in combinational and sequential circuits.

5.1 Fault-Free Logic Simulation

The topic of this section is two-valued and multi-valued fault-free simulation using the SSBDD model. We present single-pattern and parallel multiple-pattern simulation algorithms for traditional two-valued logic simulation. The novelty of the latter lies in the original algorithm for fast parallel traversal of a set of different activated paths in the SSBDD (or S3BDD), corresponding to different bits of the data word, each bit representing a different and independent simulated test pattern. To extend and advance the traditional multi-valued gate-level simulation, we have transformed the gate-level multi-valued algebra into an SSBDD-based algebra that allows replacing gate-level simulation with higher, FFR-level simulation, resulting in a considerable simulation speed-up.


5.1.1 Fault-Free Simulation with Structural BDDs

We consider two concepts of logic simulation on structural BDDs: single-pattern and multiple-pattern simulation. In the first case, we start the simulation at the root node and continue the traversal, guided by the values of the simulated pattern, along the graph until one of the terminal nodes #1 or #0 is reached; the terminal node reached determines the value of the output variable of the simulated circuit. Single-pattern simulation minimizes the number of nodes that need to be traversed, thus speeding up the simulation, and, as shown in Sect. 3.3, the SSBDDs can be optimized to further minimize the average lengths of the simulated paths. The multiple-pattern parallel simulation starts at the last node of the Hamiltonian path and must traverse the full path backward. Despite the disadvantage of traversing the entire path, the method has the advantage of simulating a full package of patterns in parallel.

5.1.1.1 Single Pattern Logic Simulation

Logic simulation of the pattern X^t applied to the FFR y = f(X) is performed on the related SSBDD G_y by traversing the path activated by X^t. The result of the simulation is the value y = e, e ∈ {0, 1}, determined by the terminal node #e in which the activated path ends.

Example 5.1 Consider the circuit and its S3BDD in Fig. 5.1. The bold edges in the S3BDD show the path activated by the test pattern X^t = (x1, x2, x3, x4, x5, x6) = (011100), which yields the value y = 1. Note that the calculation required only three queries of variables (three steps of the walk). For comparison, simulating the same pattern using the circuit description (or a Boolean expression) would require accessing all 6 variables and performing 4 logic operations.

We can perform the simulation of FFR networks in two ways. In the first, called the forward event-driven algorithm, the graphs are ranked so that each S3BDD is simulated only when all its input variables have already been evaluated. The second approach, called the backtracing algorithm, starts with simulating the S3BDDs of the primary outputs; other graphs are accessed only when it is necessary to calculate the value of a variable.
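As a minimal illustration of this traversal (with an assumed node-table layout, not the book's data structures), a single pattern can be simulated by the following walk from the root to a terminal.

def simulate_pattern(nodes, pattern, root=0):
    # nodes: mapping from a node index to (var, succ1, succ0);
    # pattern: dict var -> 0/1; terminals are the strings '#1' and '#0'.
    m = root
    while m not in ('#1', '#0'):
        var, succ1, succ0 = nodes[m]
        m = succ1 if pattern[var] else succ0     # follow the activated edge
    return 1 if m == '#1' else 0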

[Fig. 5.1 Logic simulation of a pattern with SSBDD: a circuit over the inputs x1–x6 with output y, and its structural BDD with the path activated by X^t = 011100 highlighted in bold.]


5.1.1.2 Multiple-Pattern Parallel Logic Simulation

To further increase the speed of logic simulation, parallel evaluation of many patterns is possible using both SSBDDs and S3BDDs [455]. The idea is based on the following statement.

Lemma 5.1 For any test pattern applied to the inputs of an FFR described by the SSBDD G, and for each node of G, there is always an activated path from that node either to the terminal #1 or to the terminal #0.

Proof According to Theorem 3.1, S3BDDs are traceable, and the nodes are strongly ordered along the Hamiltonian path. Since each node variable is assigned a value, one output edge of each node is activated. Thus, starting from any node, an iterative concatenation of activated edges always forms an activated path that enters one of the terminals, #0 or #1. ∎

Denote by D(m) ∈ {0, 1} the Boolean constant of the terminal reached from the node m when simulating the given test pattern on the SSBDD. Based on Statement 3.1 and Property 3.1 of SSBDDs, the following can be stated.

Theorem 5.1 For any input pattern applied to the inputs of the FFR represented by an SSBDD G, the following holds for each node m of G:

D(m) = x(m)·D(m^1) ∨ ¬x(m)·D(m^0)   (5.1)

D(#0) = 0,  D(#1) = 1   (5.2)

Proof The proof follows from Property 3.1 of SSBDDs and from Lemma 5.1: the nodes of a graph form a unique Hamiltonian path in which all nodes are strictly ordered. This allows a recursive simulation of SSBDDs starting from the last node of the Hamiltonian path backward to the root node, using (5.1), with (5.2) for the terminal nodes as a special case. Depending on the value x(m) = e, e ∈ {0, 1}, either the edge (m, m^1) or (m, m^0) is activated, and the value of D(m^e) is assigned to D(m). ∎

The idea of simulating a group of many input patterns in parallel comes from Theorem 5.1, which suggests tracing the Hamiltonian path of the SSBDD backward, starting from the last node, and using formula (5.1) iteratively, together with formula (5.2) where needed. Since expression (5.1) is Boolean and the input patterns of a combinational circuit are independent, we can apply the formula in vector form iteratively to many input patterns at once.


Table 5.1 Parallel simulation of input patterns using SSBDD

m   x(m)   x^j     D(m)   y
6   x6     0100    0100
5   x5     1001    1101
4   x4     0100    1101
3   x3     0111    0101
2   x2     1100    1101
1   x1     1010    1101   1101

Example 5.2 Consider the circuit and its SSBDD representation in Fig. 5.1. Let us simulate with the SSBDD four patterns applied to the inputs of the circuit, carrying out the simulation in parallel for all four patterns. The results are shown in Table 5.1: the patterns are given for all six inputs as 4-bit vectors, and the column D(m) shows the results. We start the simulation from the last node of the graph (the first row of Table 5.1), where, according to (5.1) and (5.2), we get D(6) = x6. The simulation then continues downwards, using formulas (5.1) and (5.2).

The presented multiple-pattern parallel simulation method applied to the SSBDD model is applicable in the same way to S3BDDs. Both single-pattern simulation and parallel multi-pattern simulation are also usable for the simulation of sequential circuits, with minor additions described in the following two sections.
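The backward vector evaluation of (5.1) maps naturally onto bitwise machine operations, as in the following sketch; the node-table layout and function name are illustrative assumptions, with each bit position of an integer word carrying one of the simulated patterns.

def simulate_parallel(nodes, patterns, width):
    # nodes: list of (var, succ1, succ0) in Hamiltonian order; a successor is either
    # the index of a later node or one of the terminals '#1' / '#0'.
    # patterns: dict var -> integer whose bits hold the value of that variable in
    # each of the 'width' simulated patterns.
    ones = (1 << width) - 1                 # D(#1): every pattern evaluates to 1
    D = {'#1': ones, '#0': 0}
    for m in reversed(range(len(nodes))):   # backward along the Hamiltonian path
        var, succ1, succ0 = nodes[m]
        x = patterns[var] & ones
        D[m] = (x & D[succ1]) | (~x & D[succ0] & ones)   # formula (5.1), bitwise
    return D[0]                             # D(root) is the output word y

For a hypothetical two-node SSBDD of y = x1 AND x2, nodes = [('x1', 1, '#0'), ('x2', '#1', '#0')] with patterns {'x1': 0b1010, 'x2': 0b1100} and width = 4 yield 0b1000, i.e., the bitwise AND of the four independent patterns, computed in one backward pass as in Table 5.1.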

5.1.1.3 Fault-Free Simulation in Sequential Circuits

The method of single-pattern fault-free logic simulation described in Sect. 5.1.1.1 can easily be extended to three-valued simulation of pattern sequences for both SSBDDs and S3BDDs synthesized for sequential circuits. Simulation of the S3BDD G_Y means traversing the activated paths in G_Y according to the given pattern X^t; as a result of the simulation, the values of y ∈ Y are calculated for all entries of the model. The value of y is equal to e(m_T) ∈ {0, 1}, where m_T is the terminal node reached by traversing the graph.

However, the simulation of sequential circuits needs three-valued simulation to handle the unknown states of flip-flops. If a node m labelled by a state variable x(m) of a flip-flop with an unknown value, denoted by x(m) = ∅, is encountered, then both activated paths branching from m, i.e., l_0 = l(m^0, e_0) and l_1 = l(m^1, e_1), have to be traversed up to one of the terminal nodes. We denote by e_0, e_1 ∈ {#0, #1} the values of the terminal nodes reached by the paths l_0 and l_1, respectively. If e_0 = e_1 = v, where v ∈ {#0, #1}, then the result of the simulation is y = v; otherwise, if e_0 ≠ e_1, the result is y = ∅. If, while traversing the paths l_0 and l_1, further nodes m_j with x(m_j) = ∅ are encountered, then their two branching paths l_j,0 = l(m_j^0, e_j,0) and l_j,1 = l(m_j^1, e_j,1) must also be traversed.


In this case, already three paths must be traversed, giving three simulation results. If more than one node m with an unknown flip-flop value x(m) is encountered, a whole tree of paths must be traversed up to the terminal nodes. If all these paths end in the same terminal v ∈ {#0, #1}, then the result of the simulation is y = v; otherwise, y = ∅.

Example 5.3 Consider again the sequential circuit s27 from the ISCAS'89 benchmark family, presented in Fig. 2.13. The S3BDD model, represented in Fig. 5.2, consists of two graphs: the first graph G_T1 for calculating the value of T1, and the second graph G_T2 for calculating the values of T2, T3, and y26. G_T2 is a supergraph with two embedded subgraphs, G_T3 and G_y26. Some entry variables, such as T2 and y26, are inverted; this is a result of the S3BDD synthesis process, i.e., an inverted node variable means that the signal path in the circuit represented by the node variable contains an odd number of inverters.

We simulated two input patterns for this circuit, P1 = (x1, x2, x3, x4) = (0001) and P2 = (1110), using the S3BDD model. The simulation results are shown in Table 5.2. We assume that the states of the flip-flops are unknown and that a reset signal is unavailable. The results of the simulation of the patterns P1 and P2 are illustrated by the activated edges highlighted in bold in Fig. 5.2a, b, respectively; the nodes with unknown variable values are shown in grey. These nodes launch two activated paths each. For the first input pattern P1, in the graph G_T1 the value of T1 remains unknown, because the simulated path ends in the node T1 with an unknown value, and both edges from that node reach different terminals, #1 in the right direction and #0 downward.

[Fig. 5.2 S3BDD model for the ISCAS'89 benchmark circuit s27, consisting of the graphs G_T1 and G_T2 (with the embedded subgraphs G_T3 and G_y26) over the variables x1–x4, x8, x9, x14, T1, T2, T3; the paths activated by the two test patterns P1 and P2 of Table 5.2 are highlighted in (a) and (b), respectively.]


Table 5.2 Two input patterns simulated for the ISCAS'89 benchmark circuit s27

Input      Inputs              Internal nodes      Flip-flops        Output
patterns   x1   x2   x3   x4   x8   x9   x14       T1   T2   T3      y26
P1         0    0    0    1    1    ∅    ∅         ∅    0    ∅       ∅
P2         1    1    1    0    0    0    0         0    1    0       1

In the subgraph G_T3, altogether three paths branch out from the node T2, including the node x9, which also has an unknown value (calculated in G_T1); the paths end in different terminals, and therefore T3 = ∅. If all fan-out paths from a node m with x(m) = ∅ end in the same terminal v ∈ {#0, #1}, the calculated value is equal to v. This is the case for the simulated pattern P2 in the supergraph G_T2. For G_T3, we have to simulate two paths because the value of T2 is unknown. All paths branching out from T2 (at the node T3, again two paths branch out) end in the terminal #0; hence, the result of the simulation of the pattern P2 is T3 = 0.
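The branching procedure for unknown flip-flop values can be sketched recursively as below; the node layout, the terminal encoding and the use of None for the unknown value ∅ are assumptions for illustration, not the book's implementation.

def simulate3(nodes, values, m=0):
    # nodes: list of (var, succ1, succ0); values: dict var -> 0, 1 or None,
    # where None stands for the unknown value.
    if m == '#1':
        return 1
    if m == '#0':
        return 0
    var, succ1, succ0 = nodes[m]
    x = values[var]
    if x == 1:
        return simulate3(nodes, values, succ1)
    if x == 0:
        return simulate3(nodes, values, succ0)
    r1 = simulate3(nodes, values, succ1)     # unknown: traverse both branches
    r0 = simulate3(nodes, values, succ0)
    return r1 if r1 == r0 else None          # same terminal on all paths -> known value

When several unknown nodes lie on the traversed paths, the recursion naturally explores the whole tree of branches described above, returning a known value only if every leaf of that tree reaches the same terminal.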

5.1.1.4 Two-Valued Pattern Parallel Simulation in Sequential Circuits

The method of simulating a group of many input patterns in parallel that we used for combinational circuits in Sect. 5.1.1.2 can also be applied to sequential circuits. For that purpose, however, we need a proper regrouping of the simulated patterns into packages. The idea again comes from Theorem 5.1, which allows tracing the Hamiltonian path of the SSBDD or S3BDD, starting from the last node backwards, and using formulas (5.1) and (5.2) iteratively. Since expression (5.1) is Boolean, and provided the simulated input patterns are independent, we can apply the formula in vector form iteratively to many input patterns.

In sequential circuits, however, the simulated patterns are not independent of each other, because the circuit's behaviour depends not only on the patterns applied to the primary inputs but also on the states of the flip-flops. At the beginning of the applied input sequence, the state of the circuit may be unknown. In this case, a two-valued simulation is applicable only if we can reset the circuit before applying the test sequences. In the general case, if the circuit is not resettable, we need to apply multi-valued simulation to manage the don't-care signals; this case is discussed in Sect. 5.1.2.

Let us first assume that the circuit can be reset, i.e., put into a known state, so that a two-valued simulator can be used for parallel processing of the test patterns. Parallelism in the simulation of sequential circuits is possible if the simulated data can be partitioned into groups of patterns that are independent of each other.

Fig. 5.3 Converting pattern sequences into independent pattern packages

Consider a simulated sequence consisting of k segments, where each i-th segment consists of ni patterns, Xi = (Xi,1, Xi,2, ..., Xi,ni). Since all segments can be made independent by applying the reset signal before each segment, the set of first patterns XP1 = (X1,1, X2,1, ..., Xk,1) of all k segments also consists of independent patterns. In a similar way, the same holds for all subsequent sets of patterns XPj = (X1,j, X2,j, ..., Xk,j), j = 2, 3, ..., n, where n is the maximum length of a segment. Hence, we can regroup the k mutually dependent segments {Xi}, each of length ni patterns, into a new set of n = max(ni) packages {XPj}, j = 1, 2, ..., n, of independent simulated data, each containing up to k patterns. Since the lengths of the segments Xi may, in general, be different, the number of packages XPj is equal to the number of patterns n = max(ni) in the longest test sequence. Figure 5.3 illustrates this reorganization of the simulated data. Since we assume that the states of the flip-flops are initialized, two-valued simulation is performed, and we can apply a similar algorithm for simulation using the SSBDDs, as illustrated in Example 5.2 for combinational circuits. Let us consider how to apply the algorithm of traversing the Hamiltonian paths in S3BDDs and applying the formulas (5.1) and (5.2) iteratively. The structure of an S3BDD is complex due to cross-redirections and returns from graph to graph. Therefore, the first step before starting the simulation is ranking all nodes of the model, such that for each node m, before its D(m) is calculated using the formulas (5.1) and (5.2), the values D(m0) and D(m1) of its descendant nodes have already been calculated. For that purpose, we start from an arbitrary supergraph and order all its nodes while traversing its Hamiltonian path. If the traversal is redirected into another subgraph, the nodes traversed in that graph are ordered and inserted into the main Hamiltonian path as well. We repeat this procedure for all supergraphs in an arbitrary order. If, during the procedure, any node appears to be already ordered, we skip it.
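The regrouping of segments into packages can be sketched in a few lines of code. The following Python fragment is only an illustration of the package-forming rule described above (the name regroup_segments is ours, not taken from any tool mentioned in this book); shorter segments simply stop contributing once their patterns are exhausted.

```python
# Illustrative sketch: regroup k dependent segments of test patterns into
# n = max(ni) packages of mutually independent patterns (cf. Fig. 5.3).
from typing import List, Sequence

def regroup_segments(segments: Sequence[Sequence[str]]) -> List[List[str]]:
    """segments[i] holds the patterns X_i,1 ... X_i,ni of the i-th segment.
    Package j collects the j-th pattern of every segment that is long enough,
    so the patterns inside one package are independent of each other."""
    n = max(len(seg) for seg in segments)          # length of the longest segment
    packages = []
    for j in range(n):                             # j-th package XP_(j+1)
        packages.append([seg[j] for seg in segments if j < len(seg)])
    return packages

# Example: k = 3 segments of lengths 3, 2, 3 give n = 3 packages
segments = [["X11", "X12", "X13"], ["X21", "X22"], ["X31", "X32", "X33"]]
print(regroup_segments(segments))
# [['X11', 'X21', 'X31'], ['X12', 'X22', 'X32'], ['X13', 'X33']]
```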


Example 5.4 Consider again the sequential circuit s27 from the ISCAS'89 benchmark family, presented in Fig. 2.13. The S3BDD model in Fig. 5.4 consists of two graphs: the first graph GT1 for calculating the value of T1, and the second graph GT2 for calculating the values of T2, T3, and y26. Table 5.3 illustrates the ordering of the nodes and the whole simulation process. First, the nodes are ordered in columns 2 and 3. We start the ordering in the supergraph GT2. When entering the node x9, we have to jump to the subgraph Gx9 to calculate the value of the variable x9, so we continue ordering the nodes in Gx9. At the 7th node, we return to the node x9 in GT2. After finishing the ordering of the nodes in GT2, we continue and finish the procedure in GT1 with the last node x3. After the ordering is finished, we can start the simulation and fill in the column D(m) of Table 5.3 in the same way as we did for the combinational circuit and its SSBDD in Example 5.2.

Fig. 5.4 S3BDD for the ISCAS'89 circuit s27 with ordered nodes for simulation

Table 5.3 Parallel simulation of input patterns using the S3BDD for the circuit s27 in Fig. 2.13

S3BDD | m  | x(m) | xj   | D(m) | y
GT2   | 1  | x1   | 0010 | 0010 | x8
GT2   | 2  | T3   | 0000 | 0000 | x14
GT2   | 3  | x4   | 1011 | 1011 |
GT2   | 4  | x14  | 0000 | 0000 |
Gx9   | 5  | T1   | 0000 | 0000 |
Gx9   | 6  | x2   | 1010 | 1010 | x9
GT2   | 7  | x9   | 0101 | 0001 |
GT2   | 8  | T2   | 1111 | 0001 | y26, T3
GT1   | 9  | x8   | 0010 | 0011 | T2
GT1   | 10 | x3   | 1101 | 1000 | T1


The results of the simulation appear as the entries of column y for the subgraph outputs x8, x14, and x9 (highlighted in grey in column D(m)), and in the last three rows for the output y26 and the flip-flops T1, T2, T3, also in column D(m). For the simulation process, one new operation is added. When using S3BDDs (or a system of SSBDDs), the calculated values of other graphs or subgraphs appearing in column D(m) are mapped to the related variables in the last column y of Table 5.3. These values are used in the formula (5.1) whenever a node with the respective variable is encountered. For example, see the cells highlighted in grey in the 6th and 7th rows. In the 6th row, we calculate the value D(6) = 1010; the node variable x2 is the root of the subgraph Gx9, and hence x9 = D(6) = 1010. This result is fixed in column y. In the next, 7th row, we return to the graph GT2, fix the value 0101 of the node variable (the entry variable x9 is inverted in GT2, cf. the remark on inverted entry variables in Example 5.3), and use this value for calculating D(7) = 0001.

5.1.2 Multi-valued Logic Simulation with SSBDDs

5.1.2.1 Motivation for Multi-valued Simulation

In traditional test generation and fault simulation methods, two-valued simulation is used primarily because of the need for high computational speed. This leads to underestimating the impact of transient pulses caused by hazards during signal transitions, which may decrease the accuracy of estimating the testing quality. This type of drawback can be overcome by dynamic analysis methods based on multi-valued logic [18, 50]. Multi-valued simulation has been useful for detecting hazards in digital circuits [23], delay fault analysis and test synthesis [271], test generation for crosstalk glitches and crosstalk delays [254], fault cover analysis and test synthesis for dynamic testing [382], etc. In the multi-valued simulation approach, each value of the given alphabet corresponds to a special type of waveform. The number of values (waveform types) can differ. Three-, five-, six-, eight- and nine-valued simulation alphabets have been very common in delay fault analysis and dynamic test synthesis. The main drawbacks of traditional multi-valued simulation methods have been the complexity of the models and the restriction of multi-valued modeling to two-input gates. Traditional approaches have targeted gate-level analysis of signal hazards, which is not scalable. SSBDDs are a suitable model for reducing the size of the model and increasing the simulation speed. The main ideas of this approach are published in [441, 442].

5.1.2.2 Gate-Level Multi-valued Simulation

Consider the example circuit in Fig. 5.5a, where the transition 0 → 1 is applied at an input which has a fan-out. The circuit may react with an erroneous transition at the output instead of the expected stable 1.


Fig. 5.5 Design for testability of hazards

Such a hazard cannot be detected by traditional two-valued logic simulation. To detect such a false transition, we have to extend the alphabet of signals with this transition case so that it can be modeled. In addition, we must extend the algebra so that the dynamic signals can be processed by multi-valued simulation. In Fig. 5.5b, the same circuit is redesigned to be free of the hazard by adding an AND gate. With this extension, we have introduced functional redundancy into the circuit. Due to this redundancy, the SAF/0 faults at the inputs and output of the added AND gate cannot be detected by two-valued logic simulation. To cope with this problem, we can apply the multi-valued simulation approach. There are many design- and test-related tasks where multi-valued simulation is needed, such as the detection of hazards in digital circuits, delay fault analysis, test generation for crosstalk glitches and delays, and fault cover analysis and test generation for dynamic testing. In multi-valued simulation, we assign the symbolic values of a given signal alphabet to special stylized waveforms. The number of values (waveform types) can differ, allowing a more or less precise distinction between waveforms. Consider, as an example, the 5-valued simulation with the alphabet A5 = {0, 1, ε, h, x} in Table 5.4. The value 0 (1) represents a waveform having the stable logic value 0 (1) during the observable time frame (clock cycle), ε (h) represents a waveform having a rising transition from 0 to 1 (a falling transition from 1 to 0), and x represents an unknown or don't care waveform. The related 5-valued algebra is presented as three arrays in Table 5.4. The algebra covers the two-operand operations OR and AND, and the one-operand operation NOT. Using the algebra A5, 5-valued logic simulation can be carried out for gate-level circuits consisting of AND, OR and NOT gates. Figure 5.6a illustrates graphically the behavior of the OR gate y = x1 ∨ x2 for the input pattern x1 = ε, x2 = h.

Example 5.5 Gate-level multi-valued simulation of a simple combinational circuit is illustrated in Fig. 5.6b for the test pattern T(x1, x2, x3, x4, x5) = (10ε10), where ε denotes the transition 0 → 1. The NOT gate with input x32 inverts the transition 0 → 1 into 1 → 0, denoted by the symbol h. The values ε and h propagate, according to the AND operation in Table 5.4, to the inputs of the OR gate.


Table 5.4 Algebra for 5-valued simulation for OR, AND and NOT gates

OR | 0 1 ε h x      AND | 0 1 ε h x      NOT
0  | 0 1 ε h x      0   | 0 0 0 0 0      0 → 1
1  | 1 1 1 1 1      1   | 0 1 ε h x      1 → 0
ε  | ε 1 ε x x      ε   | 0 ε ε x x      ε → h
h  | h 1 x h x      h   | 0 h x h x      h → ε
x  | x 1 x x x      x   | 0 x x x x      x → x

Fig. 5.6 Multi-valued simulation in the gate-level circuit and with SSBDD

Since the 5-valued algebra in Table 5.4 describes OR only as a two-operand operation, we have to carry out the three-input OR in two steps: y = (ε ∨ h) ∨ 0 = x ∨ 0 = x. The calculated value x at the output means that the test pattern causes a hazard. One disadvantage of gate-level multi-valued simulation is the need to extend the algebra with a new type of operation whenever a new type of gate appears in the library. Another disadvantage of applying multi-valued simulation to gate-level circuits is the low simulation speed caused by the huge number of gate-by-gate operations in complex circuits.
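To make the algebra A5 concrete, the following Python sketch encodes the OR, AND and NOT arrays of Table 5.4 as lookup tables and re-evaluates the two-step OR of Example 5.5. It is only our illustration of the tables, not the implementation used in the tools referenced later; the function names are ours, and 'e' stands for ε.

```python
# Minimal sketch of the 5-valued algebra A5 = {0, 1, e, h, x} from Table 5.4
# ('e' encodes the rising transition, 'h' the falling one, 'x' unknown/hazard).
VALS = "01ehx"
OR_ROWS  = ["01ehx", "11111", "e1exx", "h1xhx", "x1xxx"]   # rows for 0,1,e,h,x
AND_ROWS = ["00000", "01ehx", "0eexx", "0hxhx", "0xxxx"]
NOT_ROW  = "10hex"                                          # NOT of 0,1,e,h,x

OR_TAB  = {a: dict(zip(VALS, row)) for a, row in zip(VALS, OR_ROWS)}
AND_TAB = {a: dict(zip(VALS, row)) for a, row in zip(VALS, AND_ROWS)}
NOT_TAB = dict(zip(VALS, NOT_ROW))

def v_or(a, b):  return OR_TAB[a][b]
def v_and(a, b): return AND_TAB[a][b]
def v_not(a):    return NOT_TAB[a]

# The inversion and the two-step OR of Example 5.5: y = (e OR h) OR 0
print(v_not("e"))                    # 'h'  (the inverter turns the rising edge into a falling one)
print(v_or(v_or("e", "h"), "0"))     # 'x'  -> the pattern causes a hazard
```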

5.1.2.3 FFR-Level Multi-valued Simulation Using SSBDDs

SSBDDs allow the dynamic analysis of gate networks (delay fault simulation, hazard detection) to be carried out by multi-valued simulation at the FFR level instead of at the low gate level [442]. The idea is to substitute the gate-level algebra by a higher, FFR-level procedural implementation of the 5-valued algebra in order to increase the simulation speed. The simulation procedure is uniform for any structure of FFRs. The idea of multi-valued procedural logic simulation can be mapped onto partial Boolean differential algebra, which has also been an important concept in fault simulation and test generation.


A partial derivative of a Boolean function y = f(X) is its derivative with respect to one of the variables xi ∈ X, with the other variables xj ∈ X \ xi held constant.

Definition 5.1 The partial Boolean derivative of a Boolean function y = f(x1, x2, ..., xn) with respect to the variable xi is

∂y/∂xi = f(x1, x2, ..., xi, ..., xn) ⊕ f(x1, x2, ..., ¬xi, ..., xn) = y|xi=1 ⊕ y|xi=0   (5.3)

In the case of SSBDDs, we express the partial Boolean derivative as follows.

Lemma 5.2 Consider an SSBDD Gy representing a function y = f(X). Let a pattern Xt activate a path l1 = (m0, #e1) through a node m, so that x(m) = e1. Let the change of the value x(m) = e1 activate another path l2 = (m0, #e2). Then the following holds:

∂y/∂x(m) = e1 ⊕ e2   (5.4)

Proof Due to the conditions of the Lemma, both paths l1 and l2 must pass through the node m. Therefore, the value of the variable x(m) directly determines the value of y in the graph Gy by switching the path between the terminal nodes #0 and #1, in the same way as a change of the variable xi changes the value of y in the formula (5.3). Hence, the formula (5.4) follows in a straightforward manner from (5.3). ∎

Differently from Boolean simulation, where the values of the variables are binary, this is not the case in multi-valued simulation. To solve this problem, the papers [441, 442] introduced the concept of the maximum of the Boolean derivative.

Definition 5.2 Let us introduce the following notations:
• W = {ε, h, x} is the subset of dynamic waveforms used in multi-valued simulation, with the relations 0 < ε < 1, 0 < h < 1, 0 < x < 1;
• w → e is the operator mapping any dynamic value w ∈ W to one of the specified static values e ∈ {0, 1};
• y|w→e is the value of y calculated by traversing the activated path le in the SSBDD, provided the operator w → e is applied to all dynamic values of the node variables along the activated path, where e ∈ {0, 1}.

When simulating a node m with x(m) = w, w ∈ W, to account for the extreme deviations of the dynamic values, we fan out the path tracing into two independent branches l1 = (m, #e1) and l2 = (m, #e2). If e1 = e2, we interpret this as the waveform w not propagating to the output y, and we assign to y the static value e1 = e2. Otherwise, if e1 ≠ e2, the value of y depends on the dynamic value of x(m) = w, and we assign it to the output variable, y = w.
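As a small aside, Definition 5.1 translates directly into two evaluations of the function. The sketch below is only our illustration (the name boolean_derivative is hypothetical, and a plain Boolean function is used instead of an SSBDD); it computes ∂y/∂xi = y|xi=1 ⊕ y|xi=0 for a function given as a Python callable.

```python
# Illustration of Definition 5.1: the partial Boolean derivative of y = f(X)
# with respect to x_i is f evaluated with x_i = 1 XOR f evaluated with x_i = 0.
from typing import Callable, Dict

def boolean_derivative(f: Callable[[Dict[str, int]], int],
                       assignment: Dict[str, int], xi: str) -> int:
    """Return dy/dx_i for the given assignment of the remaining variables."""
    a1 = dict(assignment, **{xi: 1})   # y | x_i = 1
    a0 = dict(assignment, **{xi: 0})   # y | x_i = 0
    return f(a1) ^ f(a0)

# Example: y = x1 AND (x2 OR x3); the output depends on x2 only when x1=1, x3=0
f = lambda v: v["x1"] & (v["x2"] | v["x3"])
print(boolean_derivative(f, {"x1": 1, "x2": 0, "x3": 0}, "x2"))   # 1
print(boolean_derivative(f, {"x1": 0, "x2": 0, "x3": 0}, "x2"))   # 0
```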


When a dynamic pattern with more than one dynamic value is simulated, we suggest applying the simulation algebra A5 of Table 5.4 to SSBDDs in two steps: first, propagate each dynamic value independently through the SSBDD; second, apply the relations of the algebra A5 to all dynamic values that have propagated up to the terminal nodes. This idea can be explained as a procedure of mapping the set of operations of the algebra A5 from all pairs of nodes inside the SSBDD network to the single pair of terminal nodes. Let us first consider how a dynamic value can be propagated along the simulated path in the SSBDD independently of the other dynamic values. The easiest and most straightforward way is to let each dynamic value propagate to the terminal nodes as if there were no other dynamic values, by transforming the latter into logic constants.

Lemma 5.3 Let a function y = f(X) be represented by an SSBDD Gy. Then, for any assignment of the variables xj ∈ X with values of the alphabet A5, the following relations are valid:

y|w→1 ≥ y   (5.5)

y|w→0 ≤ y   (5.6)

Proof Consider first the case of the relation (5.5). Assume, for contradiction, that y|w→1 < y. Suppose that, according to Definition 5.2, a path l1a = (m1, #1) is activated in Gy, producing y = 1. See as an example Fig. 5.7a, b, where we simulate the pattern T(x1, x2, x3, x4, x5, x6) = ε1h111. On the activated path l1a in the SSBDD there are two nodes, m1 and m3, with the dynamic signals x1 = ε and x3 = h, respectively. From these nodes, two sub-paths fan out (see Fig. 5.7b), but they converge, and the resulting path reaches the terminal #1. Hence, the multi-valued simulation gives the result y = 1. Figure 5.7b also shows the second possible option (for the case x6 = 0), where the two sub-paths l1a and l1a* that fan out from m3 reach different terminals, which, according to Definition 5.2, would produce the dynamic value y = x.

Fig. 5.7 Multi-valued simulation in SSBDD (to Lemma 5.3)


Let us now show, using Fig. 5.7c, that applying the operator w → 1 to the path l1a cannot create a situation where y|w→1 = 0, and hence y|w→1 < y. Indeed, when applying the operator w → 1 at the nodes m1 and m3, we activate the path l1b, which also reaches the terminal #1, giving the result y|w→1 = 1. There is no possibility of creating a pattern that would extend the path from the node m4 to the terminal #0, because in that case we would have to cross the path l1a (compare the two paths l1a and l1b in Fig. 5.7a), which is not possible due to the planarity property of SSBDDs, according to Theorem 3.1. Hence, y|w→1 = y. For the case x6 = 0, when the path l1a* is activated in Fig. 5.7a, we get y = x, and since Definition 5.2 states 0 < x < 1, we get y|w→1 > y. In this way, we have shown that the statement y|w→1 < y is false, and the formula (5.5) is valid. In a similar way, we can also prove the validity of the symmetrical formula (5.6). ∎

Definition 5.3 Let us introduce the maximum of the Boolean derivative of the function y = f(X) with respect to x(m) for the SSBDD Gy:

max_w ∂y/∂x(m) = y|w→1 ⊕ y|w→0   (5.7)

Intuitively, the formula expresses the highest resolution in the interactions of the dynamic signals. Note that the problem of calculating the maximum of a Boolean derivative arises only if more than one dynamic signal interacts at the inputs of the gates. Otherwise, if ∂y/∂x(m) = 1 and the value x(m) = w represents a waveform w ∈ W, the latter propagates to the output y of the FFR. In the general case of the interaction of more than one waveform in an FFR represented by a function y = f(X) and the respective SSBDD Gy, the following theorem holds.

Theorem 5.2 If there is more than one waveform, represented by w ∈ {ε, h, x}, applied to the primary inputs of an FFR y = f(X) represented by an SSBDD Gy, then any of the waveforms x(m) = w in Gy propagates to the output y if

max_w ∂y/∂x(m) = y|w→1 ⊕ y|w→0 = 1   (5.8)

If there is a subset XD ⊂ X of more than one variable x(m) = w, with w ∈ W and x(m) ∈ XD, for which (5.8) is valid, then the value of y is calculated as the logic OR of all variables x(m) ∈ XD in accordance with the algebra A5 presented in Table 5.4.

Proof The statement (5.8) follows from Definition 5.3 and Lemmas 5.2 and 5.3. The statement about the interaction of different waveforms satisfying the constraint (5.8) follows from the reasoning that if two interacting waveforms are similar, either both ε or both h, they support each other and the symbolic waveform does not change. On the other hand, if the different waveforms ε and h interact, there is a hazard at the output of the FFR, denoted by the symbol x. ∎
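As a rough, function-level rendering of the formulas (5.7) and (5.8) (our simplification: the operator w → e is applied to all dynamic inputs at once instead of along the two SSBDD paths through each node, and all names are hypothetical), one can check whether the dynamic values reach the output and, if several do, combine them with the OR of the algebra A5:

```python
# Rough function-level sketch of (5.7)/(5.8): y|w->1 XOR y|w->0 decides whether
# the dynamic values reach the output; interacting waveforms are then OR-ed
# in the algebra A5 ('e' = rising, 'h' = falling, 'x' = unknown/hazard).
from typing import Callable, Dict

VALS = "01ehx"
OR_TAB = {a: dict(zip(VALS, row)) for a, row in zip(VALS,
          ["01ehx", "11111", "e1exx", "h1xhx", "x1xxx"])}   # OR array of Table 5.4

def v_or(a: str, b: str) -> str:
    return OR_TAB[a][b]

def mapped(f: Callable[[Dict[str, int]], int], pattern: Dict[str, str], e: int) -> int:
    """Evaluate f after mapping every dynamic value (e, h, x) to the constant e."""
    return f({k: (e if v in "ehx" else int(v)) for k, v in pattern.items()})

def five_valued_output(f, pattern):
    y1, y0 = mapped(f, pattern, 1), mapped(f, pattern, 0)   # y|w->1 and y|w->0
    if (y1 ^ y0) == 0:                 # maximum derivative is 0: static output
        return str(y1)
    waves = [v for v in pattern.values() if v in "ehx"]
    out = waves[0]
    for w in waves[1:]:
        out = v_or(out, w)             # e.g. e OR h = x  -> hazard
    return out

# y = x1 AND (x31 OR x32), with x1 = 1 and a fan-out pair x31 = e, x32 = h
f = lambda v: v["x1"] & (v["x31"] | v["x32"])
print(five_valued_output(f, {"x1": "1", "x31": "e", "x32": "h"}))   # 'x'
```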


Fig. 5.8 Multi-valued simulation using SSBDD

Example 5.6 Consider the multi-valued simulation of the pattern T(x1, x2, x3, x4, x5) = 10ε10 for the circuit, using its SSBDD presented in Fig. 5.8. There is a transition x3 = ε at the inputs of the FFR, which propagates along two paths in the FFR, shown as the nodes x31 and x32 in the SSBDD. To calculate maxW{∂y/∂x31}, two paths are activated in the SSBDD: l1 = (x1, x2, x31, #1), to find that y|W→1 = 1, and l0 = (x1, x2, x31, x32, x5, #0), to find that y|W→0 = 0. When activating the path l1, the condition w → 1 implies ε = 1, and when activating the path l0, the condition w → 0 implies h = 0. Since maxW{∂y/∂x31} = 1, we have found that the transition x31 = ε propagates to the output y. In a similar way, we find that maxW{∂y/∂x32} = 1, which means that the transition x32 = h also propagates to the output y. According to Theorem 5.2, the two interacting waveforms cause a hazard; hence, the final result is y = x.

To make the multi-valued simulation more efficient, there is no need to calculate the maximum Boolean derivatives separately for every node with an assigned waveform. It is not difficult to develop algorithms that traverse the paths for all waveforms collectively in parallel and evaluate the OR functions at the terminals separately.

Example 5.7 Consider the multi-valued simulation of the pattern T(x1, x2, ..., x8) = ε10h00ε1 with the SSBDD in Fig. 5.9. There are three highlighted nodes with symbolic waveforms assigned to their node variables: x1 = ε1, x4 = h, and x7 = ε2. The indexes 1 and 2 of the dynamic value ε are introduced to distinguish the waveforms of the different signal sources x1 and x7. Each waveform w ∈ {ε, h, x} propagates in two directions, towards #0 and #1, according to Lemma 5.3, and is operated on at each node with a dynamic value by the operators w → 1 and w → 0 along the 1-paths and 0-paths, respectively, as required for calculating the maximum of the Boolean derivatives using the formula (5.7). If a waveform converges, it disappears, according to Definition 5.2. In this example, the two propagating waveforms ε2 converge on reaching the terminal #1. In the terminal nodes, the analysis of the incoming waveform symbols is performed. The signals ε1 and h arrive at both terminals #0 and #1.


Fig. 5.9 Multi-valued simulation to Example 5.7

Table 5.5 Multi-valued simulation results using the SSBDD in Fig. 5.8

Pattern | Input pattern: x1 x2 x31 x32 x4 x5 | Output: y
1       | 1 0 ε h 1 1                        | 1
2       | 0 0 ε h 1 0                        | h
3       | 1 0 ε h 0 0                        | ε
4       | 0 0 ε h 0 0                        | 0

Hence, the maximum of the Boolean derivatives for the variables x1 and x4 is equal to 1, according to Theorem 5.2, which means that the respective waveforms are assigned to y using the OR operation, y = ε1 ∨ h = x, according to the algebra A5 and Theorem 5.2. The convergence of the signals ε2 means that, in the real FFR, the propagation of this waveform is blocked somewhere on the signal path from the related input to the output of the FFR. In Table 5.5, several simulation results are depicted to illustrate the theory of multi-valued simulation on the SSBDD in Fig. 5.8.

5.1.2.4 Experimental Comparison of the Gate- and FFR-Level Multi-valued Simulation

In the following, we present data from the experiments on multi-valued simulation reported in [441, 442]. The goal was to compare the speed of multi-valued simulation of combinational circuits represented at the gate level and at the FFR level. For the experiments, we used the multi-valued simulation tool of the Turbo-Tester (TT) test software [482], developed at Tallinn University of Technology, and the ISCAS'85 benchmark circuits [51]. The experimental results are shown in Tables 5.6 and 5.7.


We measured the gain in the size of the SSBDD models, given as the number of nodes representing the gate-level and FFR-level networks, and the speed-up of the FFR-level simulation compared with the gate-level simulation. "Gain 1" characterizes the superiority of the FFR-level simulation for the case when only a single signal transition at the inputs was simulated, and "Gain 2" corresponds to the case of multiple random transitions at the inputs. We discovered that the efficiency of the simulation depends strongly on the number of levels and gates in the tree-like FFRs represented by the SSBDDs. Experimental results for five different sizes of a single SSBDD, with the number of levels ranging from 3 to 7 (and the number of gates from 7 to 127), are presented in Table 5.7. As a reference point, the largest FFR in the ISCAS'85 benchmark circuit c7552 contains 64 gates. Note that the gain in simulation speed is very sensitive to the number of levels in the gate-level FFRs and to the number of nodes in the SSBDDs, as shown in Table 5.7. On the other hand, Table 5.6 shows that the gain in the average size of the model (the number of nodes in the SSBDDs) is rather small when we compare the gate-level and FFR-level networks. From this, we can conclude that the transition from SSBDDs to S3BDDs is very promising for speeding up the multi-valued simulation of digital circuits using the concept of structural BDDs.

Table 5.6 Experimental results of multi-valued simulation speed with SSBDDs and the ISCAS'85 benchmark circuits [441, 442]

Circuit                               | c1355 | c1908 | c2670 | c3540 | c5315 | c6288 | c7552
Model sizes: #gates                   | 514   | 718   | 997   | 1446  | 1994  | 2416  | 2978
Model sizes: Gain in size             | 1.36  | 1.61  | 1.58  | 1.69  | 1.59  | 1.26  | 1.63
Single bit transition: Gate level     | 4.86  | 6.98  | 9.24  | 12.9  | 20.1  | 58.7  | 28.0
Single bit transition: FFR level      | 2.81  | 2.32  | 3.80  | 3.63  | 6.18  | 30.8  | 8.88
Single bit transition: Gain 1         | 1.73  | 3.01  | 2.43  | 3.54  | 3.26  | 1.90  | 3.15
Multiple bit transitions: Gate level  | 8.38  | 11.50 | 15.90 | 23.60 | 37.70 | 272   | 57.10
Multiple bit transitions: FFR level   | 5.17  | 4.49  | 7.85  | 8.69  | 15.2  | 139   | 24.4
Multiple bit transitions: Gain 2      | 1.62  | 2.57  | 2.03  | 2.72  | 2.47  | 1.95  | 2.34

Table 5.7 Dependence of the simulation speed on the number of circuit levels [441, 442]

Circuit characteristics: # levels in FFR          | 3   | 4   | 5   | 6   | 7
Circuit characteristics: # gates in FFR           | 7   | 15  | 31  | 63  | 127
Time cost of multi-valued simulation: Gate-level  | 1.3 | 2.8 | 4.7 | 9.1 | 17.0
Time cost of multi-valued simulation: FFR-level   | 0.6 | 1.3 | 1.3 | 1.4 | 1.7


5.2 Fault Simulation in Combinational Circuits

Fault simulation is a central task in the field of digital test, used for estimating the quality of tests for digital circuits. In addition, the fault simulation procedure is often required for other test-related tasks such as fault diagnosis, automated test pattern generation (ATPG), test compaction, built-in self-test design and optimization, and the design of reliable systems. This makes the performance of fault simulators a key factor in improving the efficiency of solving these tasks. In contrast to logic simulation (i.e., fault-free, true-value simulation), the goal of fault simulation is to evaluate the behavior of a circuit in the presence of faults. In particular, the fault simulator has to determine whether the output response of the circuit changes due to the influence of a fault. A fault whose effect propagates to primary outputs or scan-path flip-flops under the current test pattern is classified as detected by this pattern. The task of the fault simulator is to determine which faults are detectable by applying the given test pattern, also called the test stimulus. The ultimate result of fault simulation is a measure of the effectiveness of the test patterns in detecting faults.

The chapter starts with an overview of fault simulation methods. The central topic of the chapter is the presentation of the powerful parallel pattern fault back-tracing method for fault simulation in combinational circuits developed by the authors. The simulator implementing this method outperforms state-of-the-art methods in simulation speed several times over. The high speed of the simulator is explained by two novelties. The first is the substitution of gate-level simulation by higher-level, i.e., FFR-level, simulation based on SSBDDs. The second concerns the original modeling of the complex set of embedded convergent fan-outs with a compiled set of embedded Boolean differential equations. The first version of the method was developed for the simulation of SAF faults and was then extended to the larger conditional SAF class, and also to a very general X-fault model. A further speed-up of the simulation was achieved by implementing the simulator in a multiple-core environment, which made it possible to combine the concurrency of simulation in three dimensions: algorithmic, software-based, and hardware-based. The algorithmic dimension concerns cause-effect fault reasoning throughout the full simulated circuit and over the full set of faults, the software-based bit-level parallelism concerns concurrent reasoning over multiple test patterns, and the hardware-based concurrency concerns distributing the fault reasoning process between different cores of a multi-core processor.


5.2.1 Related Work and Overview

5.2.1.1 Overview of Simulation Techniques

Numerous methods for fault simulation have been proposed over the last decades. A simplified comparison of different fault simulation methods is given in Fig. 5.10. As the criterion for comparison, the number of faults processed during a single simulation run of the method is chosen. In Fig. 5.10, a fault table FT = ||ri,j|| is presented, with columns for the faults ri and rows for the test patterns tj, where the entry ri,j = 1 if the test pattern tj detects the fault ri, and ri,j = 0 otherwise. Fault simulation aims at calculating the entries ri,j of the array FT. The grey areas in the table show how many faults a particular method processes in a single run. The fault simulation methods are classified into two large groups: direct fault simulation with fault insertion, and fault reasoning-based analytical methods. The first group consists of methods such as serial fault simulation and parallel fault or parallel pattern simulation. The second group includes deductive and concurrent fault simulation, where the fault reasoning proceeds from the inputs to the outputs of the circuit, and critical path tracing, where the faults are analyzed in the opposite direction, from the outputs to the inputs. Most fault simulation methods focus on the SAF model, and we also focus on this model here.

Fig. 5.10 Comparison of fault simulation techniques

5.2.1.2 Serial and Parallel Fault Simulation

Serial fault simulation targets a single test pattern and a single fault at a time; one run computes only a single entry of FT. Hence, this method is very slow, and the more sophisticated techniques that process many faults or patterns simultaneously clearly outperform it. However, the method is still very practical for very complex, specific fault models. Parallel fault simulation techniques utilize the width of a computer word (e.g., 32 or 64 bits, depending on the processor architecture) to perform logic operations (e.g., AND, OR, XOR) on many operands simultaneously. This enables grouping many faults (or patterns) into a packet and processing them in a single run, hence increasing the fault simulation speed. Parallel pattern single fault propagation (PPSFP) [500] has been widely used in combinational circuit fault simulation. Two types of parallel fault simulation are distinguished: parallel fault simulation [376], which simulates many faults in parallel for a single test pattern, and parallel pattern simulation [500], which processes many patterns in parallel for a single fault. Many proposed fault simulators combine PPSFP with other advanced techniques, such as test detect [453], the dominator concept [166], the identification of independent fan-out branches [27], stem-region analysis [285], critical path tracing [22, 27], and others. These techniques have further reduced the simulation time. The paper [108] proposes a high-performance resistive bridging fault simulator based on fault sectioning in combination with parallel-pattern or parallel-fault multiple stuck-at fault simulation. In [163], a parallel-pattern approach for simulating interconnect-open defects was developed. The stem-region concept [285] allows the repetitive simulation area to be limited by associating each fan-out with a region bounded by so-called exit lines (these lines form a set of disjoint cones from the exit point to the primary outputs). If a fault propagates to an exit line and this line is critical (i.e., the effect of a fault on this exit line propagates to the primary outputs), further simulation is not needed. Thus, one simulation pass is enough to detect all the activated faults of the stem region. The high-performance PPSFP-type simulator [246] relies on the idea of eliminating unnecessarily simulated regions at the early stages of fault simulation. This is achieved by examining the detectability of faults and excluding the subsequent regions from simulation if no faults are detectable at the output of the currently simulated fan-out-free region or stem region. The method was also enhanced with an efficient implementation of a stack of gates under evaluation. A further group of methods, based on logic reasoning rather than simulation and discussed below, includes deductive [26], concurrent [420] and differential fault simulation [93], as well as critical path tracing; their basic feature is that a whole row of the fault table is calculated by a single run of the algorithm, although the computing times of these runs may differ significantly.
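The word-level parallelism behind parallel-pattern simulators can be illustrated in a few lines. The sketch below is only our illustration (with hypothetical names); it packs the values of a signal for a group of test patterns into one Python integer, so that a single bitwise operation evaluates a gate for all packed patterns at once.

```python
# Sketch of bit-parallel (parallel-pattern) evaluation: one machine word holds
# the values of a signal for up to 64 test patterns.
WORD = 64
MASK = (1 << WORD) - 1

def pack(bits):
    """Pack a list of 0/1 pattern values (pattern 0 = least significant bit)."""
    word = 0
    for i, b in enumerate(bits):
        word |= (b & 1) << i
    return word

def and2(a, b): return a & b
def or2(a, b):  return a | b
def inv(a):     return ~a & MASK

# Four patterns of a two-input circuit y = NOT(x1 AND x2)
x1 = pack([0, 0, 1, 1])
x2 = pack([0, 1, 0, 1])
y = inv(and2(x1, x2))
print([(y >> i) & 1 for i in range(4)])   # [1, 1, 1, 0]
```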


Fig. 5.11 Deductive fault simulation technique: fault lists are propagated through the gates by set operations, e.g., La = L4 ∪ L5, Lb = L1 ∪ L2, Lc = L3 ∩ La, and Ly = Lb − Lc = (L1 ∪ L2) − (L3 ∩ (L4 ∪ L5))

5.2.1.3 Deductive Fault Simulation

The deductive fault simulation algorithm [26] performs a logic reasoning procedure on the lists of fault effects that are propagated to the inputs of a gate. The reasoning consists of logic operations on the fault lists, depending on the fault-free input signals, and eventually derives a new list of faults propagated to the gate's output. Finally, the faults propagated through all the gates of the circuit up to the primary outputs or to the scan-path flip-flops are considered detectable. An example of gate-level deductive fault simulation, as a method of set-theoretic calculation and propagation of fault lists, is presented in Fig. 5.11. Each gate has its own formula for fault list propagation. Deductive fault simulation is extremely powerful compared to simulation-based approaches because all faults are processed in one run of the algorithm (for a single test pattern), avoiding re-simulations of the same circuit. In fact, a deductive fault simulator spends most of its CPU time on logic operations (union, intersection and complementation) over fault lists that may contain large numbers of faults. Deductive fault simulation (Fig. 5.11) scales better than parallel fault simulation, as their complexities are O(n²) and O(n³), respectively [141], where n is the number of logic gates in the circuit. This group of methods is very powerful, since all the detectable faults are calculated by a single run for a test pattern. However, these methods cannot perform the reasoning for many test patterns in parallel.
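The fault-list manipulations of Fig. 5.11 amount to simple set algebra. The sketch below is only our illustration of the classic propagation rule for a 2-input AND gate (the name and_fault_list is hypothetical and the full algorithm of [26] also handles the gate's own local faults): on controlling input values the lists are intersected or differenced, on non-controlling values they are united.

```python
# Sketch of deductive fault-list propagation through a 2-input AND gate:
# L(signal) is the set of faults that invert that signal under the current pattern.
def and_fault_list(a_val, b_val, La, Lb):
    if a_val == 1 and b_val == 1:
        return La | Lb                 # any inverted input inverts the output
    if a_val == 0 and b_val == 0:
        return La & Lb                 # both inputs must be inverted
    # exactly one controlling (0) input: it must flip while the other stays at 1
    return (La - Lb) if a_val == 0 else (Lb - La)
    # (a full deductive simulator would also add the output's own activated SAF)

# Fault-free values a = 0, b = 1 with fault lists La = {f1, f2}, Lb = {f2, f3}
print(and_fault_list(0, 1, {"f1", "f2"}, {"f2", "f3"}))   # {'f1'}
```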

5.2.1.4 Concurrent Fault Simulation

Concurrent fault simulation [420] uses the idea of event-driven logic simulation.


The simulator exploits the hypothesis that a typical fault effect causes differences only in a small part of the circuit; consequently, only the affected area needs to be analyzed for fault detection. A variation of concurrent fault simulation, referred to as differential fault simulation [93], utilizes an analogous event-driven technique but requires a minimal amount of memory for its implementation. Unlike the previous method, differential fault simulation deals with a single fault at a time. On the other hand, the parallel version of the concurrent fault simulator [367] can evaluate faults in groups. The technique of partitioning the faults into groups reduces the time needed for processing events in the concurrent simulation. There is no direct comparison between the deductive and concurrent fault simulation techniques. However, it has been estimated that the latter is faster than the former, since concurrent fault simulation only deals with the "active" parts of the circuit that are affected by faults [510]. Differential fault simulation combines the merits of concurrent fault simulation and single fault propagation techniques and is up to 12 times faster than concurrent fault simulation and PPSFP [93]. Neither the deductive nor the concurrent fault simulation procedure has been implemented for the simultaneous parallel processing of many test patterns.

5.2.1.5 Critical Path Backtracing

Critical Path Backtracing (CPB) consists of simulating the fault-free circuit and using the computed signal values for backtracing all sensitized paths from the primary outputs to the primary inputs in order to determine the detected faults. The backtracing continues until the paths become non-sensitive or end at primary inputs. Faults on the sensitive (critical) paths are detectable by the test. CPB also uses reasoning and can process all the faults in a single run. However, precise results are only guaranteed inside the fan-out-free regions (FFRs) of the circuit. A modified critical path tracing technique that is linear-time, exact, and complete is proposed in [508]; however, its rule-based strategy does not allow a parallel analysis of many patterns. Single test pattern critical path tracing [22] is a very efficient method of computing the detectable faults, since it does not require carrying out fault simulation explicitly. Instead, the approach uses the computed fault-free signal values to backtrace the sensitized (critical) paths starting from the primary outputs towards the primary inputs of the circuit. A modified technique that is able to perform exact critical path tracing in a circuit with convergent fan-outs in nearly linear time was proposed in [508]. For this purpose, the enhanced CPT method is supplemented with a set of rules to handle the various converging cases. However, the drawback of this rule-based approach to exact critical path tracing is the inability to process these pattern-dependent rules for different test patterns in parallel. Later in this chapter, we describe a new approach developed by the authors, which allows exact critical path backtracing to be carried out for many test patterns in parallel. For some of the advanced fault models, dedicated fault simulation methods have been developed, e.g., symbolic X-fault simulation [505, 514] and the simulation of resistive bridging faults based on resistance intervals [111].


These methods, however, allow simulating faults only one by one, and methods for analyzing the faults in parallel for many patterns are missing. A novel concept of parallel pattern exact critical path backtracing, which can be applied efficiently also beyond the FFRs, was proposed for gate-level circuits in [422, 423, 424] and was later developed for SSBDDs in [448], with a considerable speed-up of the simulation. The new method was further extended to larger fault classes in [449, 451]. These methods are discussed in detail in Sects. 5.2.3 and 5.2.4.

5.2.1.6 Other Fault Simulation Methods

To reduce the effort of fault simulation, several methods for the fast computation of approximate fault coverage have been proposed to replace exact analysis. Fault sampling [287] works in conjunction with a fault simulator to determine the detectability of a randomly picked sample of faults and extrapolates these results using probability theory. The statistical fault analyzer [176] simulates the fault-free circuit and counts the gates whose inputs are sensitized to the gate output; these statistical data are used to compute the probability of each fault being detected. However, as the approximate methods cannot provide exact data about fault detectability, they remain unusable in many cases, for example when fault tables are created for diagnosis purposes. Besides the conventional approaches, many attempts have been made to increase the speed of fault simulation by delegating a part of the computations to specially developed hardware accelerators. Many such attempts utilize the reconfigurability of FPGAs to emulate the whole circuit under test with reprogrammable logic [113, 215, 331]. However, these techniques require additional devices attached to the host computer, which narrows their applicability. Recently, a new dimension in the area of fault analysis acceleration has been thoroughly explored in [134]. The key idea of the approach is to use standard off-the-shelf hardware capable of parallel processing to accelerate the well-known fault simulation algorithms. Typically, graphical processing units (which usually contain hundreds of separate processing cores) are programmed for the concurrent execution of the basic operations needed to run simple fault simulation algorithms, such as parallel fault simulation.

5.2.2 Parallel Fault Simulation with SSBDDs

Let us extend in this section the method of parallel pattern fault-free simulation with SSBDDs, described in Sect. 5.1.1, to parallel pattern fault simulation. We carry out this procedure in three steps.


In the first step, we perform fault-free parallel pattern simulation, as described in Sect. 5.1.1, for a subset of test patterns T = {Xt} by backtracing the Hamiltonian path of the SSBDD. As a result, we calculate the output value of the circuit represented by the SSBDD for all the simulated patterns in T, and we initialize the graph in parallel for all patterns using the formulas (5.1) and (5.2). The initialization of the graph is equivalent to calculating the values of the internal nodes of the gate-level circuit. In the second step, we mark, in parallel for all simulated patterns Xt ∈ T, the nodes that lie on the simulated paths lt(m0, mT,t). We call these nodes candidates for fault detection. In the third step, we determine which of these fault candidates have actually been tested.

5.2.2.1 Calculation of Candidates of Fault Detection

Consider again the case of single-test fault simulation. It is easy to see that only the nodes m lying on the traversed path, m ∈ lt(m0, mT,t), can be fault candidates, i.e., "responsible" for a possible faulty signal at the output of the FFR. Introduce the candidate function Lt(m), so that Lt(m) = 1 if m ∈ lt(m0, mT,t), and Lt(m) = 0 if m ∉ lt(m0, mT,t), for the given test Xt. To find the candidates for fault detection, the nodes m of the SSBDD are processed in direct order according to the ranking m < m0 and m < m1, where m0 and m1 are the neighbours of m, as follows:

Lt(m1) = Lt(m1) ∨ (Lt(m) ∧ xt(m))   (5.9)

Lt(m0) = Lt(m0) ∨ (Lt(m) ∧ ¬xt(m))   (5.10)

Initially, we set Lt(m0) = 1, because for the root node we always have m0 ∈ l(m0, mT). For all other nodes, m ∈ M \ m0, we assign the initial value Lt(m) = 0, which is updated during the procedure. The meaning of the formulas (5.9) and (5.10) is to move a token, starting from the root node, along the activated path lt(m0, mT,t). If a node m holds the token, Lt(m) = 1, the token is passed on to the neighbour node selected by the value xt(m); if a node does not hold the token, its neighbours do not receive it from this node. Since the formulas (5.9) and (5.10) are Boolean, all the calculations can be carried out in parallel for many test patterns Xt ∈ T. In other words, the tokens of all simulated test patterns move along the SSBDD concurrently.

Example 5.8 Consider the circuit and its SSBDD in Fig. 5.12. Table 5.8 shows the results of parallel pattern fault simulation on the SSBDD model. Four different test patterns are listed in column Xt(m). Column Dt(m) presents the result of the first step, the fault-free simulation explained in Sect. 5.1.1.2. The results of the second step of fault simulation are shown in column Lt(m); the entries "1" in this column refer to the nodes traversed by the test-activated paths, which represent the fault candidates. The last column, St x(m),y, illustrates the third step of fault simulation and shows the sensitized nodes, i.e., the identified fault locations. The procedure for calculating the data in this column is described in the next section.


Fig. 5.12 Parallel pattern fault simulation in the SSBDD

Table 5.8 Results of parallel pattern fault simulation in the SSBDD in Fig. 5.12

m | x(m) | Xt(m) | Dt(m) | Lt(m) | St x(m),y
1 | x1   | 1001  | 1100  | 1111  | 1010
2 | x2   | 1010  | 1110  | 1001  | 1001
3 | x3   | 0101  | 0100  | 0111  | 0100
4 | x4   | 0100  | 0100  | 0101  | 0001
5 | x5   | 0100  | 0100  | 0001  | 0001
6 | x6   | 0000  | 0000  | 0011  | 0011
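As an aside, the token-passing rules (5.9) and (5.10) are plain bitwise operations once the values for a group of patterns are packed into machine words. The fragment below is only our illustration (push_token is a hypothetical name) of one update step for a node m with neighbours m1 and m0.

```python
# One bit-parallel update step of the candidate function, formulas (5.9)/(5.10):
# each integer packs one bit per simulated test pattern.
MASK = (1 << 4) - 1              # here: 4 patterns packed per word

def push_token(L_m, x_m, L_m1, L_m0):
    """Move the token from node m to its neighbours for all patterns at once."""
    L_m1 |= L_m & x_m            # (5.9): patterns with x(m) = 1 go to m1
    L_m0 |= L_m & (~x_m & MASK)  # (5.10): patterns with x(m) = 0 go to m0
    return L_m1, L_m0

# Node m holds the token for all 4 patterns; its variable is 1 for patterns 1 and 3
print(push_token(L_m=0b1111, x_m=0b1010, L_m1=0, L_m0=0))   # (0b1010, 0b0101) = (10, 5)
```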

5.2.2.2 Calculation of Detected Faults

In the third step of fault simulation, we find which nodes m on the traversed paths m ∈ lt(m0, mT,t) in the SSBDD indicate the inputs of the simulated FFR block where faults are sensitized and the faulty signals are propagated to the FFR output. This step is equivalent to parallel critical path tracing from the output of the FFR through the circuit to its inputs. The value St x(m),y = 1 means that the signal path in the FFR, represented by the node x(m), becomes critical when the test pattern Xt is applied, and it propagates the faults through the FFR. Introduce the sensitivity function St x(m),y for the node m ∈ lt(m0, mT,t) as follows:

St x(m),y = ∂y/∂x(m) = Lt(m) ∧ (Dt(m1) ⊕ Dt(m0))   (5.11)

where St x(m),y = 1 if the faulty signal on the input x(m) of the simulated FFR is detected at the output y of the FFR under the test pattern Xt ∈ T, and St x(m),y = 0 otherwise. As can be seen from the formula (5.11), the calculation of St x(m),y is equivalent to the calculation of Boolean derivatives, since for St x(m),y = 1 the following two conditions must be fulfilled:


• the node m must belong to the path traced on the SSBDD, m ∈ lt(m0, mT,t), i.e., it must be a candidate for fault detection, Lt(m) = 1, and
• the value of the output variable y of the FFR must depend on the value of the input variable x(m), i.e., in addition to Lt(m) = 1, the condition Dt(m1) ⊕ Dt(m0) = 1 must be fulfilled.

Since all the arguments of the function (5.11) are vector variables, where the components of the vectors correspond to different independent test patterns Xt ∈ T, the expression (5.11) can be calculated in parallel for many test patterns. For example, the results of this step, computed according to the formula (5.11), are presented in column St x(m),y of Table 5.8. The entries of this column are the sensitivities of the output y of the FFR to the faults on the simulated critical paths in the FFR represented by the node x(m).
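Combining the candidate vector with the Boolean derivative of formula (5.11) is again a purely bitwise step. The sketch below is only our illustration (the name sensitized is hypothetical, and the neighbour values D(m1), D(m0) are assumed to be already available from the first step); it marks, for every packed pattern, whether the fault represented by node m is detected at the FFR output.

```python
# Sketch of formula (5.11): S^t_x(m),y = L^t(m) AND (D^t(m1) XOR D^t(m0)),
# evaluated bit-parallel for a word of packed test patterns.
def sensitized(L_m, D_m1, D_m0):
    return L_m & (D_m1 ^ D_m0)

# Arbitrary example values for 4 packed patterns
L_m, D_m1, D_m0 = 0b1111, 0b1100, 0b0110
print(bin(sensitized(L_m, D_m1, D_m0)))   # 0b1010: detected by patterns 1 and 3
```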

5.2.3 Critical Path Parallel Pattern Fault Backtracing

5.2.3.1 Introduction to the Problem and Ideas

Critical path backtracing along the signal paths of a digital circuit is a natural way of fault reasoning and of identifying the faults detected by a given test pattern. The method works well in tree-like circuits, where its extension to parallel fault reasoning for many test patterns is straightforward. The problems begin when the backtracing is extended beyond the convergent fan-out regions. A modified technique that can perform exact critical path tracing in a circuit with convergent fan-outs was proposed in [508]. In this technique, a set of rules was developed to handle the various fan-out converging cases. The drawback of this rule-based approach to exact critical path backtracing is the infeasibility of processing the pattern-dependent rules for different test patterns in parallel. In the following, we propose a novel approach for handling the problem of convergent fan-outs in a way that allows exact critical path fault backtracing to be carried out for many test patterns concurrently in parallel. Let us first look in more detail at the idea of backtracing the faults and at the problem of convergent fan-outs. An example of fault analysis with critical path backtracing is shown in Fig. 5.13a. Consider a test pattern T(1, 2, 3, 4, 5, 6, 7) = 1000000 applied to the circuit. The backtrace (shown by bold lines) continues in the circuit of Fig. 5.13a until the path becomes non-critical or reaches a primary input. The faults on the critical (sensitive) lines are detectable by the test pattern. Only one full path (2, b, c, y) in Fig. 5.13a (shown in bold), from the output y to the primary input 2, becomes critical during the backtrace, and the faults on this path, 2 ≡ 1, b ≡ 1, and c ≡ 1, will be detected. The critical path backtracing stops at the line a, because the signals on the inputs of the gate a do not allow fault propagation. The faults of the gate a at the primary inputs 3 and 4 cannot propagate further through the gate b.


Fig. 5.13 Critical path backtracing in a tree-like circuit and in an SSBDD: a critical path tracing in the fan-out-free circuit; c the critical path is not continuous; d the critical path stops on the fan-out stem

As a result, there would be no need to continue the backtracing backwards from the nodes 3 and 4 if these nodes were the outputs of other gates. In a similar way, the backtrace also stops at the line e, and the rest of the circuit, going backwards from the line e, does not need any further backtracing towards the inputs. Note that the faults a ≡ 1 and e ≡ 1 on the lines where the backtracing stopped are also detectable. However, since these faults belong to the collapsed fault set, because they dominate the faults on the paths leading backwards from the nodes a and e, fault simulation for them is not needed. For example, the fault e ≡ 1 is tested by any test pattern that targets the faults 5 ≡ 1, 6 ≡ 1 or 7 ≡ 1 at the primary inputs. An example of the result of critical path backtracing for the same test pattern, using the SSBDD model, is shown in Fig. 5.13b. Instead of backtracing the full tree of activated signal paths at the low gate level of the FFR, a subset of activated (tested) nodes in the SSBDD is identified using the simulation method considered in Sect. 5.1.1. In this example, the subset of activated nodes contains a single node, 2, which represents the path (2, b, c, y) in the gate-level circuit. The faults a ≡ 1 and e ≡ 1 discussed at the gate level are not present in the SSBDD model due to their collapsed status. The fault a ≡ 1 in the circuit will be detected by the test pattern which detects the SAF/1 faults of the nodes 3 and 4 in the SSBDD. Similarly, the fault e ≡ 1 in the circuit will be tested by the test pattern which detects the SAF/1 fault of the node 5 together with either the SAF/1 fault of the node 6 or that of the node 7 in the SSBDD.


When carrying out critical path backtracing at the FFR-network level, each FFR is treated as a "high-level gate". The algorithm of processing the FFRs is the same for all FFRs of the network, independently of their structure, in contrast to the gate level, where each different type of gate (or complex gate) is processed according to its own rules. The problem of the gate-level critical path tracing method is related to fan-out convergences, where two or more signal paths in the circuit branch out from the same node, called the fan-out stem, and later converge (join) again in a common gate. For example, in Fig. 5.13c, the backtrace of the critical paths stops already at the inputs of the output OR gate of the circuit. However, the fan-out stem is nevertheless critical, since the fault on the stem (the erroneous signal 0) propagates by a "jump" from this point over the two non-critical paths and causes the erroneous signal 0 at the primary output. The example shows that critical paths, in general, need not be continuous. Another example, in Fig. 5.13d, shows that the continuous backtrace of a critical path may sometimes "unexpectedly" break off at a fan-out stem: here the fault is detected on the upper fan-out branch, but the fault on the fan-out stem is not detected. These two examples show that simple backtracing of critical paths is correct only inside the tree-like circuits called fan-out-free regions (FFRs). In FFR networks, where the FFRs are treated as "high-level gates", the same problem of fan-out convergences arises. The solution to this problem is discussed in the next sections.

5.2.3.2 The Main Techniques Applied to the Problem

In the following sections, we discuss the possibilities for speeding up fault simulation in combinational and full scan-path circuits. To achieve this goal, we combine three techniques:
• modeling the circuit as a network of FFR components,
• modeling the fault propagation paths in the FFR networks with Boolean differential equations,
• calculating the Boolean differential equations with SSBDDs.

The first technique targets the reduction of the complexity of fault simulation by replacing gate-level networks with FFR-level networks. Working with FFRs as network components instead of simple logic gates allows all lines associated with collapsed faults to be removed from the model. This reduces the memory needed for the internal variables of the model and helps to process the whole model faster than in the case of a flat gate-level model. The second novelty is the extension of the exact critical path tracing procedure beyond the convergent fan-outs using Boolean differential calculus.


Since the fault propagation rules for the components of the FFR networks cannot be stored in libraries (as in the case of logic gates), they have to be replaced by generic algorithms processed on the fly during the simulation. Boolean differential algebra provides an efficient mathematical tool for synthesizing the procedures needed for propagating faults through FFRs representing arbitrary logic functions. The third idea is to use SSBDDs for implementing the Boolean differential calculus for fault backtracing in FFR-level networks. We exploit the SSBDD processing algorithms for simulating many test patterns concurrently in parallel. Using SSBDDs allows all faults inside the FFRs to be handled, including the faults on the fan-out branches.

5.2.3.3 Reasoning of Critical Path Backtracing

The parallel gate-level critical path backtracing inside the fan-out-free regions (FFRs) is straightforward. Let an arbitrary FFR be a macro-component of the circuit network with a Boolean function y = F(x1, ..., xi, xj, ..., xn) = F(X). If a SAF is detected at the output y of the macro, then for every input xj the fault is also detected iff

∂y/∂xj = 1.

The partial Boolean derivative is a Boolean formula that can be calculated in parallel, in vector format, for many test patterns. Later, we show that when SSBDDs are used, there is no need to create and keep these formulas for different gates (or complex gates) in a pre-calculated library; instead, the partial Boolean derivatives can be calculated directly on the SSBDDs for many test patterns in parallel with very fast procedures. We use partial Boolean differentials to extend the parallel critical path tracing method beyond the fan-out-free regions [409]. Let us have a fan-out convergent complex gate (or FFR) with the function y = F(x1, ..., xi, xj, ..., xn), where two or more branched paths from the same fan-out stem converge, as shown in Fig. 5.14. The sub-circuits f1, fi, fj, and fn are all FFRs without fan-outs. Assume that the input variables x1, ..., xi of the gate (or FFR) F are connected to the fan-out stem x via FFRs represented by the functions x1 = f1(x, X1), ..., xi = fi(x, Xi). The function of the full convergent fan-out region with the common fan-out stem x and the output y in Fig. 5.14 can now be represented as a composite Boolean function

y = F(f1(x, X1), ..., fi(x, Xi), xj, ..., xn)   (5.12)

Consider the full Boolean differential [409] of the gate (or sub-circuit) F:

dy = dF = y ⊕ F((x1 ⊕ dx1), ..., (xi ⊕ dxi), ..., (xj ⊕ dxj), ..., (xn ⊕ dxn))   (5.13)

190

5 Logic-Level Fault Simulation

Fig. 5.14 A skeleton of a gate-level circuit with convergent fan-out

f1(X1) x

x1 xi

fi(Xi)

F

y

xj X

fj ,…, fn

xn

Here, by dx we denote the change in the value of x because of the effect of a fault on x. On the other hand, by dy = 1 we denote the situation when an erroneous change of the values of arguments of the function y = f (X) causes the change of the value of y—otherwise, dy = 0. In the following, we show that the dependence of the output variable y of a convergent FFR on the fan-out stem x can be represented by a Boolean differential equation, which in turn can be transformed to a very efficient exact procedure for critical path backtracing beyond the convergent FFRs. Theorem 5.3 If a stuck-at fault is detected by a test pattern at the output y of a subcircuit represented by a Boolean function y = F(x 1 , …,x i , x j , …, x n ), then the fault at the fan-out stem x, which converges in y at the inputs x 1 , …,x i , is also detected, iff . ∂y =y⊕F ∂x

((

) ) ( ) ∂ x1 ∂ xi , . . . , xi ⊕ , x j , . . . , xn = 1 x1 ⊕ ∂x ∂x

(5.14)

Proof Considering the impact of only the fault at the stem variable x to the value of y, we have dx j = 0,…, dx n = 0 (no erroneous signals from x propagated to x j , …, x n ). On the other hand, to express the influence of x to the differentials dx h , h = 1, 2,…, i, the latter can be substituted by partial differentials. dx x h = dx f h (x ⊕ d x, X h ) where X h , h = 1, 2,…, i, are the subsets of variables influencing x h but not depending on x (there is no convergence from x to x h ). Then, the partial differential of the full function (5.12) with respect to x can be represented as dx F = y ⊕ ((x1 ⊕ dx f 1 ), . . . , (xi ⊕ dx f i ), x j , . . . , xn ) Since the partial differentials of x h = f h (x, X h ) with respect to x, where h = 1, … i, can be represented as.

5.2 Fault Simulation in Combinational Circuits

dx f h =

191

∂ fh dx ∂x

we get (( dx F = y ⊕ F

x1 ⊕

) ( ) ) ∂ x1 ∂ xi d x , . . . , xi ⊕ d x , x j , . . . , xn . ∂x ∂x

In a similar way, since the partial differential of y with respect to x is. dx F =

∂y d x, ∂x

and assuming that dx = 1 (there is a fault associated with x), we get ∂y =y⊕F ∂x

((

) ) ( ) ∂ x1 ∂ xi x1 ⊕ , . . . , xi ⊕ , x j , . . . , xn , ∂x ∂x

(5.15)

which is the left side in the Eq. (5.12) and can be used to calculate the effect of the fault at the fan-out stem x on the output variable y over a convergent fan-out region. ∎ Let us get back now to the problem of disruption of continuous critical paths illustrated in Fig. 5.13a. In this example, we showed that the faults at the primary inputs of FFRs might sometimes jump from the fan-out stem to the output of the FFR without propagating continuously along the paths inside the FFR. The formula (5.15) allows the detection of these cases. If the condition (5.14) in Theorem 5.3 is satisfied in Fig. 5.14, then the fault at x propagates by jumping to the output y of the FFR. From the formula (5.15), a method results for generalizing the parallel exact critical path backtracing beyond the fan-out free regions. All the calculations in (5.12) can be carried out in parallel because they are Boolean operations.

5.2.3.4

Parallel Critical Path Backtracing in Nested FRRs

Consider a convergent fan-out circuit in Fig. 5.15, represented by a composite function y = F y (x, z, X y ) where z = F z (x, X z ), whereas X y and X z are the subsets of variables that are not depending on x. Theorem 5.4 If a stuck-at fault is detected by a test pattern on the output y of a subcircuit with two nested convergences, y = F y (x 1 , z, X y ) and z = F z (x, X z ), where X y and X z do not depend on x, then the fault at the common convergent fan-out stem x is also detected, iff. ( dx y = y ⊕ Fy

∂ x1 x1 ⊕ , z ⊕ dx F z , X y ∂x

) =1

(5.16)

192

5 Logic-Level Fault Simulation

Fig. 5.15 Critical path tracing in the nested FFRs [448]

x1

x

Fz

z

Fy

y

Xy

Xz

Proof According to the definition of partial Boolean differential for y = F y (x 1 , z, X y ) of x, we have ( ) dx y = y ⊕ Fy x1 ⊕ d x1 , Fz (x ⊕ d x , X z ), X y

(5.17)

Based on the property of invariance of the full Boolean differential in relation to its arguments [409], the following is valid: Fz (X ⊕ d X ) = F z (X ) ⊕ d Fz = z ⊕ d Fz Taking into account that X z does not depend on x, we can transform this property to the partial Boolean differential Fz (x ⊕ d x, X z ) = z ⊕ dx Fz

(5.18)

By substituting (5.18) into (5.17), and by using Theorem 5.3 with respect to dx 1 we can now express the partial differential as ( dx y = y ⊕ Fy

∂ x1 x⊕ , z ⊕ dx Fz , X y ∂x

) (5.19)

that is the left side of the Eq. (5.16). The formula (5.19) can be used for calculating the influence of the fault at the common fan-out stem x on the output y of the convergent fan-out region by consecutive calculating partial Boolean differentials, first d x F z , and then d x y. ∎ It is easy to see that we can generalize the result of Theorem 5.4 iteratively for an arbitrary configuration of nested convergences. On the other hand, according to Theorem 5.3, we can represent the dependencies between fan-out stems and FFR outputs in the form of full Boolean differentials and thereafter transform them into fast computable critical path backtracing procedures.

5.2 Fault Simulation in Combinational Circuits

5.2.3.5

193

Topological Pre-analysis of the Circuit

The proposed exact parallel critical path back-tracing fault analysis method consists of the following steps: • topological pre-analysis of the circuit to create a model for critical path tracing; • parallel simulation for a given set of test patterns to calculate the values of all variables in the circuit for the simulated patterns; • parallel critical path backtracing on the topological model of the circuit set up in the first step. We carry out the topological pre-analysis only once to serve all the next steps of the procedure. The second and third steps are carried out in a cycle until all simulated patterns are analyzed. By topological analysis, in the direction from primary inputs to primary outputs, the following is achieved: • the sets FS y of all convergent fan-out stems for all primary outputs y are created; • the set of all convergent points CP, the nodes where the branches from fan-outs converge; • for each primary output y of the circuit, the topological formulas F(x, y) from all convergent fan-out stems x ∈ FS y are created. Example 5.9 Consider in Fig. 5.16, a topological skeleton of an FFR network, consisting of 8 FFRs, a single primary output y, 5 convergent fan-out stems FS y = {0, 1, 2, 3, 4}, and with 3 converging nodes CP = {A, B, C}. The FFR consists of an internal convergent fan-out. Each edge (x, z) in the skeleton represents a signal path in the circuit without fan-outs. The connections of FFRs with primary inputs of the network, and the related trees of signal paths in gate-level FFRs, we skipped from the skeleton since our primary attention here is on the handling of nested convergent fan-outs. Denote by zy the Boolean derivative ∂z/∂x, calculated for the path (x, z). To each converging node z of CP, a macro-component corresponds with inputs (x 1 , …, x n ), connected via converging paths to the related fan-out stem x. For each such a pair (x, z) we create a formula of convergence F xz (x 1 , …, x n ): ∂z =z⊕F ∂x

(( x1 ⊕

) ( )) ∂ x1 ∂ xn , . . . , xn ⊕ ∂x ∂x

During the topological analysis, all paths in the FFRs, one by one, are backtraced. If a fan-out stem convergent node is detected, the corresponding formula of convergency is created. The results of this procedure are shown in Table 5.9. For example, when reaching node C, we detect convergence from node 1. The connected paths (1, 2) and (2, C1 ) constitute the first convergent path, whereas (1, C2 ) is the second one. The formula F 1C (12 ∧ 2C1 , 1C2 ) represents the expression: ∂C =C⊕F ∂1

) ( )) (( ∂C1 ∂2 ∂C2 , C2 ⊕ , C1 ⊕ ∂2 ∂1 ∂1

(5.20)

194

5 Logic-Level Fault Simulation

Fig. 5.16 A topological skeleton of an FFR network with nested convergences [448]

3 2 0

1 C2

Table 5.9 Results of the topological analysis of the circuit in Fig. 5.16

B1 C1

B2

C

4

A1

B

A2

A

y

A3

No

N

Tracing paths and creation of formulas

1

C

(1, 2), (2, C1 ), (1, C2 ) ⇒ F 1C (12∧2C1 , 1C2 )

2

B

(2, 3), (3, B1 ), (C, 4), (4, B2 ) ⇒ F 1B (12∧23∧3B1 , F 1C ∧C4∧4B2 ), F 2B (23∧3B1 , 2C1 ∧C4∧4B2 )

3

A

(3, A1 ), (B,A2 ), (4, A3 ) ⇒ F 1A (12∧23∧3A1 , F 1B ∧BA2 , F 1C ∧C4∧4A3 , F 2A (23∧3A1 , F 2B ∧BA2 , 2C1 ∧C4∧4A3 ), F 3A (3A1 , 3B1 ∧BA2 ), F 4A (4A3 , 4B2 ∧BA2 )

where C1 and C2 are the respective arguments of the node function C (the variables of the function involved in the convergence).

5.2.3.6

Parallel Path Backtracing with SSBDDs

The topological pre-analysis creates the plan for performing the backtracing process. The formulas shown in Table 5.9 reflect the development of the plan in the forward direction, i.e., from inputs towards outputs of the circuit. We chose this direction due to easier systematization of the information about the nested fan-out convergences. On the other hand, the power of back-tracing of faults stands in that based on the fault dominance relationship, each step towards the inputs on the critical paths increases the number of detected faults, when at the same time, a similar effect takes place sensitizing the faults in the forward direction. This is the reason why we targeted back-tracing of faults. Hence, we process the formulas in Table 5.9 to identify the detected faults in the opposite direction as the system of formulas took its shape. Each formula is a plan for a simulation session. Table 5.10 shows the sequence of formulas created from Table 5.9. The content of Table 5.9 represents the sequence of simulation sessions. We use in Table 5.10 the same notation and its interpretation as we used in Example 5.9 and in Table 5.9. Table 5.10 shows the full set of 18 simulation sessions in the order of performance. In the sessions of the group ST1 = {(1, 2, 3), (4, 5), 7, 9, (10, 11), 14, 18}, simple backtracing takes place. Some of the sessions are joined, for example, such as (1, 2, 3), where all paths in the FFR A are backtraced, but three of them are related to the convergent paths from three different fan-out stems to A.

5.2 Fault Simulation in Combinational Circuits

195

Table 5.10 Table of fault simulation sessions for the FFR network in Fig. 5.16 No

Formula

No

Formula

No

Formula

No

Formula

No

Formula

1

3, A1

5

4, B2

9

C,4

13

F 2A

17

F 1A

18

0, 1

2

B, A2

6

F 3A

10

2, C 1

14

1,2

3

4, A3

7

2,3

11

1, C 2

15

F 1c

4

3, B1

8

F 4A

12

F 2B

16

F 1B

In the sessions of group ST2 = {6, 8, 12, 13, 15, 16, 17}, parallel Boolean derivatives according to the formulas developed in Theorems 5.3 and 5.4 (similar to the example of the expression 5.10), using the calculated results of the sessions in ST1, are calculated. In the case of the SSBDD model, differently from the gate-level simulation, we do not need to back-trace the critical paths inside the FFRs. The nodes of SSBDDs already refer to the paths to be critical. It means that we have to identify only if a particular path is critical or not, in other words, if a node in the SSBDD is tested or not. For example, the joint sessions (1, 2, 3), referring to the subtasks 3A1 , BA2 , and 4A3 in Table 5.10, mean that in the SSBDD, representing the FFR A, for all nodes m, we have to calculate the value of ∂yA /∂x(m), i.e., if the fault at x(m) is propagating to the output yA of the FFR A or not. Example 5.10 Consider a fragment in Fig. 5.17a of a topological skeleton of the FFR network, represented in Fig. 5.16, an FFR network in Fig. 5.17b corresponding to the skeleton, and the SSBDD model of two FFRs in Fig. 5.17c. The tasks that we need to perform for the skeleton are highlighted in Table 5.10. Three paths have to be back-traced: (1, C2 ), (2, C1 ) and (1, 2), which are also highlighted in bold in the FFR network and in the SSBDDs. We calculate the Boolean derivatives: ∂C2 /∂1 = d, ∂C1 /∂2 = b, and ∂2/∂1 = a. After that, we collected the information that allows calculating also the Boolean difference ) ( ) ( ∂yc ∂C1 ∂2 ∂C2 = yc ⊕ C1 ⊕ ∨ C2 ⊕ = yc ⊕ (C1 ⊕ ba) ∨ (C2 ⊕ d). ∂1 ∂2 ∂1 ∂1 In the SSBDD model, we calculate the Boolean derivatives in the SSBDDs GC1 , GC2 and G2 , and the final result we get in the SSBDD GC . For the calculations, we use the method described in Sect. 5.2.2. Since the operations are all Boolean, we can use them for calculating Boolean vectors and carry out parallel fault back-tracing of many patterns, as many as the vector length allows.

196

5 Logic-Level Fault Simulation

a 2 1

C1 C2

(a)

22

11

C

&

b

1

2

FFR2

2

1

C

12

&

d

11

FFRC

C1 1

FFR1

a

22

b

12

d

yc

C2

C1 C2

(c)

(b) Fig. 5.17 Critical path back-tracing in the topology, circuit and SSBDDs

5.2.3.7

Experimental Results of Critical Path Parallel Pattern Backtracing

Table 5.11 presents the results of comparison of the described in this chapter exact critical path parallel pattern backtracing based fault simulator [448] for the circuits of ISCAS’85 benchmark family (column 1) [51] with two popular commercial fault simulators C1 (column 2) and C2 (column 3). The numbers show the speed-up in times achieved by the new simulator. The time needed for topology analysis is negligible, accounting in average 1.4% of the simulation time (in the range between 0.8% and 1.7%). From Table 5.11 we see that the new method outperforms the commercial tools C1 and C2 in average 11.9 and 1.6 times, respectively. Table 5.11 Comparison of the simulation speed of the new method [448] with commercial tools

Circuit

Gain in the simulation speed C1/New

C2/New

1

2

3

c432

10.7

3.1

c499

1.1

1.0

c880

21.3

3.3

c1355

8.6

1.8

c1908

10.5

3.1

c2670

27.2

2.9

c3540

20.2

4.0

c5315

36.3

4.2

c6288

8.5

1.0

c7552

24.9

2.7

Average

16.9

2.7

5.2 Fault Simulation in Combinational Circuits Table 5.12 Comparison of the simulation speed of the new method [448] with critical path back-tracing methods

Circuit

197

Gain in the simulation speed [510]/New

1

2

[448]/New 3

c432

500

1.49

c499

704

2.11

c880

875

5.71

c1355

1208

1.88

c1908

1164

2.39

c2670

1273

5.70

c3540

664

2.74

c5315

1764

11.42

c6288

345

1.82

c7552

775

7.78

Average

549

4.30

In Table 5.12, different critical path-based methods are compared: exact critical path tracing without parallelism [510] in column 2, and parallel critical path tracing combined with parallel simulation of faults at fan-out stems [435] in column 3 with a new method using SSBDDs [448]. The numbers show the speed-up in times achieved by the new simulator. The introduced parallelism into the exact critical path backtracing simulation method presented in [510] shows a dramatic speed-up in fault simulation that was expected. On the other hand, the last column clearly shows the gain in the speed achieved also in comparison with [448] by replacing the parallel pattern fan-out stem fault simulation fault by fault [448] with the proposed new analytical fault reasoning approach. Compared to [448], where the top ological model for back-tracing faults was created for all the outputs separately as a set of sub-models, in [450], a joint topological model was created for the whole circuit to avoid unnecessary repetition of the same calculation procedures. Table 5.13 presents the results of the speed-up of the new fault simulation method [450] compared with the commercial simulators C1 and C2, and with the first version of the parallel critical path tracing fault simulator [448], in columns 2, 3 and 4, respectively. The average gains in speed are 41.1, 5.8, and 2.5 times, respectively.

5.2.4 Fault Simulation for the Extended Class of Faults 5.2.4.1

Two-Phase Fault Simulation of Conditional SAF

In Sect. 4.2, we described a hierarchical fault modeling approach based on defectoriented testing, and the conditional SAF model, described by a set of conditions W c

198 Table 5.13 Comparison of speed up in fault simulation for the new simulator [450] with previous tools

5 Logic-Level Fault Simulation

Circuit

Gain in the simulation speed C1/New

C2/New

[448]/New

1

2

3

4

c432

13.0

3.8

1.2

c499

3.0

2.8

2.7

c880

26.0

4.0

1.2

c1355

44.0

9.0

5.0

c1908

53.0

15.6

5.0

c2670

104.0

11.0

3.8

c3540

191.0

37.4

9.3

c5315

246.0

28.6

6.7

c6288

1159.0

139.2

134.2

c7552

378.0

40.5

15.0

s4863_C

353.0

30.0

N/A

s5378_C

170.0

15.9

s6669_C

416.0

40.8

s9234_C

248.0

26.7

s13207_C

332.0

27.2

s15850_C

470.0

57.8

s35932_C

1751.0

111.6

s38417_C

1351.0

157.0

s38584_C

1399.0

115.3

Average speed gain

41.1

5.8

2.5

for each type of the components of the given circuit. In the following, we join this concept with a critical path fault backtracing approach to fault simulation. Let us define the defect model for a component of the given circuit with function y = f (X) in open form as a set of mappings W y (Δk ) = {Δk → X k }, of defects Δk into the input patterns of the component. The mapping can be given in the form of Table 4.4 or directly as a subset of patterns X k ⊂ X, that can be used as possible conditions for detecting the defect Δk . For testing a defect Δk , any mapping w ∈ W y (Δk ) would do. However, it would be more reasonable not to fix the conditions but to maintain the freedom to choose the conditions depending on the test that is used. Let us have a circuit as a network of components represented by functions {y = f (X)}, where y ∈ Y, and Y is the set of nodes in the circuit, considered as locations for monitoring the SAF as defect manifestations. There is a test set T = {t i }, we have to fault simulate for measuring the coverage of the set of defects given by a set of mappings W y (Δk ) = {Δk → X k }, for all y ∈ Y.

5.2 Fault Simulation in Combinational Circuits

199

We say that the test pattern t i ∈ T detects the defect Δk if (a) the test pattern t i detects a SAF at the node y ∈ Y, and (b) the test pattern t i implies t i → X k , i.e. satisfies the condition Δk → X k . Consider for simulating the coverage conditional SAF the following fault modeling data structure consisting of the following components: (1) Test pattern table PT = || t i,j || where t i,j ∈ {0, 1} is the signal value on the node yj ∈ Y produced by the test pattern t i . (2) Fault table FT = || r i,j ||, r i,j ∈ {0, 1}, where i and j denote the numbers of simulated test patterns and the nodes in the circuit, respectively. For the entries of FT we have: r i,j = 1 if the stuck-at fault at the node yj ∈ Y, either SAF/0 or SAF/1 is detected by the test pattern t i , and r i,j = 0 if none of these faults is detected by the pattern t i . We carry out the whole conditional SAF simulation in the following double-phase way. (1) First phase. By logic-level fault simulation, we determine, using logic reasoning of the current test pattern, which nodes are active. I.e., for which nodes the erroneous signals are propagated from up to the observable nodes (primary outputs or scan path flip-flops). The results of this phase are returned in the form of tables PT and FT. (2) Second phase. Using the conditional SAF model, we identify which physical defects can be detected by the given test pattern. All entries r i,j = 1 in FT are the candidates for mapping detected SAF into suspected physical defects. The second phase has the target to check for all r i,j = 1, where t i ∈ T and yj ∈ Y, if there is for any defect Δk ∈ W y , the mapping Δk → X k satisfied. We carried out fault simulation experiments with circuits from the benchmark families ISCAS’85 [51], ISCAS’89 [39] and ITC’99 [86] to compare the speed up of the new fault simulator for extended fault class of conditional SAF. The comparison is carried out with other fault simulators targeting only the restricted traditional SAF class. Table 5.14 presents the results of comparison of speed-up in times for the first phase of the simulation in comparison with state-of-the-art commercial simulators C1 and C2, and the open source SAF simulator FSIM [245]. For the first phase, we used the simulator based on the exact critical path parallel pattern tracing of propagated erroneous signals, as discussed in Sect. 5.2.3. We have made some improvements in the simulator regarding the optimization of the calculation model and regarding the reduced memory requirements [104]. The goal is to compare different known fault simulators, such as the Parallel pattern single fault propagation (PPSFP) simulator FSIM [245] (column 3), and two state-of-the-art commercial fault simulators C1 and C2 from major CAD vendors (columns 4, 5), compared with the proposed new method [104, 33]. Table 5.15 presents the results of the second phase of simulating the mapping of defects into the SAF coverage calculated in the first phase. The primary interest

200

5 Logic-Level Fault Simulation

Table 5.14 Comparison of the new method of parallel critical path tracing for the extended class of conditional SAF with previous tools simulating the traditional SAF class [451] Circuit

# Gates

Gain in the simulation speed FSIM/New

1 c2670

2 883

C1/New

C2/New

3

4

5

0.8

2.2

24

c3540

1270

2.0

7.4

43

c5315

2079

1.4

5.6

57

c6288

2384

12.1

27.8

284

c7552

2632

2.7

8.1

88

s13207

3214

N/A

5.6

70

s15850

3873

N/A

12.1

111

s35932

12,204

N/A

23.6

390

s38417

9849

N/A

31.4

310

s38584

13,503

N/A

23.2

320

b14

9150

N/A

49.2

N/A

b15

8877

N/A

39.1

N/A

b17

31,008

N/A

117.7

N/A

1.5

4.7

43

Average normalized run-time

was to evaluate the simulation time cost share between both phases. To simplify the experiments, we partitioned the benchmark circuits into FFR blocks with a maximum of 4 inputs, used in the experiments as components and sources for defects to be modeled. We also assumed the worst case of an exhaustive set of 16 patterns as conditions according to the conditional SAF fault model, representing the possible defects. The number of blocks in each circuit shows in Table 5.15, column 2. Column 3 contains the time spent for the first phase of fault simulation of 10,000 test patterns. Column 4 contains the time needed to perform the analysis of detecting the defects by using the conditional SAF model as the sets of mappings W y (Δk ) = {Δk → X k } for the components yj ∈ Y. The last column contains the percentage of time spent on the fault table analysis in the second phase of simulation with respect to the total time spent on fault simulation and fault table analysis together. Comparing Tables 5.14 and 5.15, we see that the new fault simulation method covering an extended class of faults clearly outperforms the popular commercial simulators, which handle a restricted class of only classical stuck-at-faults.

5.2.4.2

X-fault Simulation

Shrinking geometries in today’s deep-submicron processes produce new failure mechanisms in electronic devices, which has forced researchers to develop more

5.2 Fault Simulation in Combinational Circuits

201

Table 5.15 Fault simulation times for the 1st and 2nd phase [451] Circuit

# Blocks

Simulation time (s) 1. Phase

1

2

2. Phase (%) 2. Phase

3

4

5

c2670

290

0.4

0.03

6.7

c3540

486

0.9

0.04

4.3

c5315

708

0.8

0.07

8.2

c6288

1440

7.4

0.12

1.6

c7552

941

1.2

0.09

7.0

s13207

1282

2.0

0.11

5.1

s15850

1649

2.7

0.14

5.0

s35932

6102

5.7

0.53

8.4

s38417

4128

7.0

0.36

4.9

s38584

5171

6.4

0.45

6.4

b14

3242

14.5

0.28

1.9

b15

3448

26.6

0.3

1.1

b17

11,608

77.8

1.08

1.4

11.8

0.28

Average

advanced and complex fault models compared to the simple traditional SAF model [510]. New fault models help to improve the confidence of test quality measures and to increase the accuracy of fault diagnosis. New fault modeling techniques are a challenge for inventing new efficient fault simulation methods to increase the speed of test analysis and fault diagnosis. In this Section, a very fast fault simulation method for combinational circuits or sequential circuits with full scan paths is discussed, which can handle the recently proposed X-fault model [505]. The method is applicable for evaluating the X-fault coverage of the given set of test patterns or for analyzing failed test patterns to extract diagnostic information and construct the diagnosis tables for fault location purposes. Complex physical defects, such as resistive shorts or opens, cause multiple effects around the defect site. For example, a defect, which forces on the fan-out branches of the gate intermediate voltages, may affect the behavior of a fan-out gate. As a result, multiple faulty logic values may appear on the fan-out branches depending on the threshold voltages of the branches. A unified fault model for interconnect opens and bridges using constrained multiple line stuck-at faults is proposed in [88]. To deal with the ambiguities of the changing logic values on the branches, the Byzantine fault model was introduced [170, 262], where a floating line with n branches may lead to 2n – 1 possible fault cases (see Fig. 5.18). Methods are proposed to reduce the number of 2n – 1 to a reasonable smaller subset, which, however, needs additional information about the layout, vias or buffers, threshold voltages of the transistors driven by the floating nodes, or about the occurrence probabilities of possible logic behaviors of physical defects [88, 514].

202

5 Logic-Level Fault Simulation Multiple fault 0 1 0 1

Defekt

Resistive bridge

SAF

X-fault Byzantine fault model

Conditional SAF

Fig. 5.18 Physics of the X-fault model

For efficient diagnosis of realistic faults, leading to the Byzantine effect, a novel X-fault model was developed in [319, 505, 514]. It represents all possible behaviors of physical defects in a gate and/or on its fan-out branches by using different X symbols on the fan-out branches. A dedicated symbolic technique was proposed in [505, 514] for X-fault simulation and for analyzing the relations between observed and simulated responses to extract diagnostic information and to score the results of the diagnosis. The X-fault model [505] is defined as follows. A fan-out gate has one X-fault, corresponding to any physical defect in the gate or on its n fan-out branches. The X-fault assumes n different symbols X i , i = 1,…, n, on the n fan-out branches to represent all possible combinations of faulty logic values in fault simulation. Figure 5.19 shows an X-fault site for an FFR with 3 fan-out branches, where z1 , z2 , and z3 denote 3 branches with arbitrary faulty logic values (F—faulty value, T— true value). By the vector c = (c1 , c2 , c3 ), ci ∈ {0, 1}, we can represent any possible faulty logic combination for the vector of branch variables (z1 , z2 , z3 ). The positive feature of X-fault model is that it can handle unknown behaviors of complex defects such as Byzantine effects. The disadvantage is that the number of unknown faulty behaviors can explode exponentially with the number of fan-out

Stuck-at fault

X-fault

z1/F

FFR

z2/F z3/F

FFR

y

Fig. 5.19 A sub-circuit with a converging fan-out

FFR

z1/T F F z2/F T F

F F T T F F

z3/T T F

F T F

y FFR

5.2 Fault Simulation in Combinational Circuits

203

branches. To reduce the complexity, attempts have been made to restrict the number of faulty combinations on the fan-outs [88, 514].

5.2.4.3

Critical Path Back-Tracing for X-fault Simulation

We propose here for X-fault simulation the method of critical path back-tracing, which gives new possibilities to handle the ambiguity of fault [452]. Instead of processing of X-faults by inserting symbolic values on the fan-out branches, and propagating them by symbolic calculation through the circuit, which allows simulating only a single X-fault at a time one by one, and only for a single test pattern, we generalize the parallel critical path fault back-tracing approach for the case of X-fault model. Consider in Fig. 5.20, the topological skeleton of an FFR network, and the calculation model, a set of formulas, in Table 5.16, developed by the topological analysis of the FFR network in Fig. 5.20, and using the method described in Sect. 5.2.3. We denote the detectability of the faulty combination c = (c1 , c2 , …, cn ) on the fan-out node z with n branches of an FFR with output y, in Fig. 5.19 by Dzy (c). We show now how we can calculate the formulas for Dzy (c) by extending the calculation model in Table 5.16, developed by the method of critical path back-tracing for SAF (see Sect. 5.2.3). Note that a part of the problem of X-fault simulation is already covered by SAF simulation as a “side-effect” as shown in Table 5.17. This part covers the following cases of the X-fault: (1) the case c = cmax when all fan-out branches are carrying faulty signals, and (2) the case when a single fan-out branch is faulty (the case of single fault simulation). A direct consequence of this “side-effect” is additionally that X-fault simulation for the fan-out stems with two branches is fully covered by the proposed method for SAF simulation. For all other faulty combinations for the fan-out stems with at least three branches where at least two branches are affected by the fault, the X-fault detectability can be

2 1 1

1 2 6

3

2 3

3 4

Fig. 5.20 A topological skeleton of an FFR network [452]

5

204

5 Logic-Level Fault Simulation

Table 5.16 Calculation model for SAF simulation [452] Steps

Nodes

Created formulas for SAF

1–3

6

21 6

4–5

5

6–7 8–9

4

10–11

3

12–13 14

35

36 = 35 ∧ 56

45

46 = 45 ∧ 56

13 4

13 6 = 13 4 ∧ 46

23 3

23 6 = 23 3 ∧ 36

12 3

12 6 = 12 3 ∧ 36

2

R26 (21 6' , 22 6' , 23 6' )

1

R13 (11 2∧23 3' , 12 3' )

15–16 17

22 6

11 2

56

11 6 = 11 2 ∧ R26

18

R15 (R13 ∧35' , 13 4∧45' )

19

R16 (11 2∧21 6' , 11 2∧22 6' , R15 ∧56' )

Table 5.17 X-fault coverage by SAF simulation [452]

Nodes

Already available formulas for X-fault model

1/6

D16 (111) = R16 D16 (001) = 13 6, D16 (010) = 12 6, D16 (100) = 11 6

2/6

D26 (111) = R26 D26 (001) = 23 6, D26 (010) = 22 6, D26 (100) = 21 6

calculated as in Table 5.18, constructed using the formulas Dzy (cmax ) = Rzy developed in the first phase of simulation for SAF faults (see Table 5.16). For all other combinations c = (c1 , c2 , …, ck ), where at least two components ci are not zero, the available already formulas Rzy (R1 , R2 ,…, Rk ) can be modified so that any ci = 0 implies Ri = ∅, which has the meaning that ∂x i /∂zi = 0. In such a way the whole simulation method proposed for the X-fault model consists of two phases: SAF simulation, and post-processing to cover the X-fault model. For a better understanding of the construction of X-fault simulation formulas, presented in Table 5.18, Fig. 5.21 illustrates the development of the formula D15 (101), where the faulty signals appear on the fan-out branches 1 and 3 of the FFR 1, and which are observable on the output of the FFR 5.

5.2.4.4

Experimental Results of X-fault Simulation

Table 5.19 presents the characteristics of the benchmark circuits we used for the experiments and the comparison data with state-of-the-art for the first phase of SAF simulation based on the proposed exact parallel critical path tracing of propagated erroneous signals.

5.2 Fault Simulation in Combinational Circuits Table 5.18 Calculation model for X-fault simulation [452]

205

Nodes Updated SAF formulas for X-fault model 1/3

D13 (01X) = R13 (∅, 12 3' ) D13 (10X) = R13 (11 2∧23 3' , ∅)

1/5

D15 (011) = (D13 (01X) ∧35' , 13 4∧45' ) D15 (101) = (D13 (10X) ∧35' , 13 4∧45' ) D15 (110) = (R13 ∧35' , ∅)

2/6

D26 (011) = R26 (∅, 22 6' , 23 6' ) D26 (101) = R26 (21 6' , ∅, 23 6' ) D26 (110) = R26 (21 6' , 22 6' , ∅)

1/6

D16 (011) = R16 (∅,∅, D15 (011) ∧56' ) D16 (101) = R16 (11 2∧21 6' , 11 2∧22 6' , D15 (101) ∧56' ) D16 (110) = R16 (11 2∧21 6' , 11 2∧22 6' , D15 (110) ∧56' )

SAF Formula SAF Formula 1

2

1

1

R13(112 233, 123)

R15 (R13 35, 134 45)

2

6

3 2

D13(10X) = R13(112 233,

3

3

4

5

)

D15(101)= R15(D13(10X) 35,134 45) Formula for computing the detectability of faulty combination D

Fig. 5.21 Example of the construction of the X-fault formulas from SAF formulas

We carried out experiments with benchmark circuits ISCAS’85, ISCAS’89 and ITC’99 to compare different known fault simulators FSIM [245], state-of-the-art commercial fault simulators C1 and C2 from two major CAD vendors, to compare with our critical path back-tracing method. We simulated and calculated simulation time for the sets of random 10,000 patterns. The time for topology analysis is included and is negligible compared to the gain in speed. We run experiments on a 1.5 GHz UltraSPARC IV+ workstation using SunOS 5.10. We carried out another experiment to estimate the cost of the second phase of the simulation. Such an approximation is possible since the support for X-fault simulation is achieved by extending the calculation model used for SAF simulation. Hence, the number of additional formulas for the calculation of the detectability of X-faults (in

206

5 Logic-Level Fault Simulation

Table 5.19 Circuit characteristics and the results of the SAF simulation [452] Circuits

# Fan-outs

# Branches Max

Simulation time (s) Average

Fsim

C1

C2

New

c2670

290

28

3.7

0.8

2.2

24

0.4

c3540

356

22

4.5

2.0

7.4

43

0.9

c5315

510

31

5.0

1.4

5.6

57

0.8

c6288

1456

16

2.6

12.1

27.8

284

7.4

c7552

812

72

4.1

2.7

8.1

88

1.2

s13207

1224

37

3.7

2.5

5.6

70

2.0

s15850

1518

34

3.6

5.4

12.1

111

2.7

s35932

5295

1449

3.4

9.2

23.6

390

5.7

s38417

4569

49

3.2

16.2

31.4

310

7.0

s38584

3946

88

4.5

12.1

23.2

320

6.4

b14

2409

82

4.8

N/A

49.2

N/A

14.5

b15

2353

95

4.8

N/A

39.1

N/A

26.6

b17

8145

149

4.8

Average speed gain

N/A

117

N/A

77.8

1.7

4.7

43

1

addition to the formulas constructed for SAF simulation) determines the time and memory resources needed for X-fault post-processing. In order to perform the estimation, the average time for formula evaluation was calculated as the time spent on complete SAF simulation (Table 5.19) divided by the number of formulas processed for this purpose. Using the average time needed for processing a formula, we can estimate the time required for X-fault simulation based on the number of extra formulas calculated for each benchmark. The estimation of additional memory space required for post-processing also considers the size of arguments for each X-fault simulation formula (since different formulas can occupy different amounts of memory). Table 5.20 presents the results of estimating additional cost for X-Fault simulation in times compared to the first phase of SAF simulation, regarding the memory space (column 2, Table 5.20) and time cost (column 8, Table 5.19). The increase in cost is shown in columns Mem and Time, respectively. We carried out two sessions with limits 4 and 6 to the maximum number of faulty signals on fan-out branches. The columns FC show the X-fault coverage, i.e. the percentage of all X-fault combinations processed regarding the number of all possible combinations. The additional memory and time cost for the second phase of simulation depends on the X-fault coverage needed to achieve. As we see from Table 5.20, both the average time and memory costs increase twice for the fault coverage of 81%, and five times for the fault coverage of 88%. We have treated here the conventional X-fault model [505], where all 2n – 1 faulty combinations on fan-out branches are possible and have equal occurrence probabilities. Using the probability calculation approach proposed in [514], or using additional information about the layout [88, 514], the

5.2 Fault Simulation in Combinational Circuits

207

Table 5.20 Increase of resources for X-Fault simulation [452] Circuit

Mem MB

Limit 4 FC (%)

Limit 6 Mem

Time

FC (%)

Mem

Time

c2670

0.2

84

1.7

2.5

92

2.0

3.1

c3540

0.3

72

2.4

1.8

84

6.7

4.5

c5315

0.3

65

1.1

1.3

73

1.9

3.1

c6288

1.6

98

7.2

2.5

98

7.2

2.5

c7552

0.5

67

1.2

1.3

82

2.6

3.6

s13207

0.8

86

1.0

1.1

91

1.1

1.2 13.2

s15850

1.1

82

2.6

5.7

93

5.3

s35932

1.8

99

1.1

1.4

99

1.1

1.4

s38417

2.7

88

1.2

2.0

94

1.8

4.8

s38584

2.4

76

1.1

1.5

87

1.6

2.8

b14

4.7

86

1.9

1.8

92

6.3

5.4

b15

5.6

72

3.4

2.1

82

14.5

6.4

b17

16.7

Average

74

3.2

2.2

83

81

2.2

2.1

88

8.5 5.1

4.8 5.3

number of 2n – 1 can be significantly reduced, which will result in higher probabilistic fault coverage and will reduce both the memory and time costs of simulation. The proposed new X-fault simulation method has several advantages. Differently from the known symbolic simulation approach, which allows handling a single Xfault and a single test pattern at a time, in the proposed new method, all detectable X-faults are determined simultaneously by a single run for a subset of test patterns in parallel. This feature of the method makes it very attractive for X-fault model based fault diagnosis since the simulated results in the form of all suspected fault candidates can be achieved for all failing patterns (or at least for a part of them) by a single run.

5.2.5 Fault Simulation in Multiple Core Environments A novel fault simulation method, proposed in [148], in combinational circuits is proposed, combining the concurrency in three dimensions: algorithmic causeeffect fault reasoning throughout the full circuit, software-based bit-level parallelism regarding concurrent multiple test pattern reasoning, and hardware-based concurrency, distributing the fault reasoning process between different cores in a multicore processor environment. To increase the speed and accuracy of fault simulation, compared to previous methods, a mixed-level fault reasoning approach is developed, where the fan-out convergence is handled on the higher FFR network level, and the fault simulation inside of FFRs relies on the gate-level information. To allow uniform

208

5 Logic-Level Fault Simulation

and seamless fault reasoning, we use the structural BDDs for both level modeling, i.e., SSBDDs for the FFR level and S3 BDDs for the gate level.

5.2.5.1

The Concept of Simulation Concurrency in Three Dimensions

Fault simulation is one of the most important tasks in digital circuit design and test flow. The efficiency of solving other tasks in this field, such as design for testability, test quality and dependability evaluation, test pattern generation, fault diagnosis, relies heavily on the performance and speed of fault simulation. Such a dependence is growing especially in the case of large circuits, and hence, the scalability of the fault simulation algorithms is decisive. Accelerating the fault simulation would consequently improve all the above-mentioned applications. We propose in this section a new Parallel Exact Critical Path Tracing (PECPT) method, which implements two types of parallelism used during fault simulation: (1) bit-level data parallelism, and (2) resource-level computing parallelism. The first one is for multiple test pattern reasoning, and the second one is for distributing the compiled computing model among a subset of different CPUs in a multi-core computing environment, so that each processor were responsible for parallel critical path tracing in a related sub-circuit area. Another novelty is in the development of a mixed-level fault reasoning approach, where the problems related to the fan-out convergence are handled on the higher FFRnetwork level, and the increased speed and accuracy in fault reasoning are achieved by fault analysis inside FFRs using additional gate-level simulation data. To speed-up simulation and improve the accuracy of fault reasoning compared with state-of-the-art, and with the methods described in the previous chapters, we propose here a mixed-level parallel pattern exact critical path back-tracing method, based on the combined use of two types of structural BDDs, i.e. the SSBDDs and S3 BDDs. Since we process all the faults in the circuit during a single run of parallel analysis of patterns throughout the circuit, we can say that the approach we propose takes advantage of concurrency in three dimensions: pattern dimension, fault dimension and computing model dimension. We use the pattern and fault parallelism in each single CPU core, while the computing model concurrency is achieved using multiple CPUs. Compared to the traditional approaches, which apply only pattern- and/or faultparallelism in multi-CPU systems (at the bit and system level, respectively), such a new dimension addition gives further possibilities to speed up fault simulation in multi-processor systems. The availability of parallel execution environments, such as multiprocessor system-on-chips (MPSoCs), multicore processors and GPGPU devices, provides an option for concurrent execution either of the algorithm for different data, or different parts of the algorithm and the same data. We took advantage of these opportunities to utilize the available new hardware resources, as well as to speed up execution in comparison to a single-processor system. In the landscape of fault simulation, the

5.2 Fault Simulation in Combinational Circuits

209

growing size and complexity of digital circuits also require speed up of available algorithms.

5.2.5.2

Levelized Computational Model of Fault Simulation

Consider a combinational circuit as a network of FFRs in Fig. 5.22 where each of them can be represented as a Boolean function y = F(x 1 , x 2 , … x n ) = F(X). First, consider, for simplicity, the SAF fault class. Since FFR is a tree, then all the internal SAF in the FFR collapse into a representative subset of SAF related to the input variables of X. The fault simulation for a FFR according to the critical path tracing is equivalent to calculation of Boolean derivatives: if ∂y/∂x = 1 then the fault propagates from x to y. This check we can perform in parallel for a given subset of test patterns. In order to extend the parallel critical path tracing beyond the fan-out free regions we use the concept of Boolean differentials (see Sect. 5.2.3.3). Let x be a fan-out variable with branches, which converge in a FFR y = F(X) at the inputs denoted by a subset X’ ⊂ X. In [448] we have shown how to extend the Boolean differential calculation beyond the FFRs: ) ( )) ( ) ∂X ∂ x1 ∂ xn , . . . , xn ⊕ =y⊕F X⊕ ∂x ∂x ∂x ) ( ' ∂X (5.21) = y ⊕ F X' ⊕ , X '' ∂x

∂y =y⊕F ∂x

((

x1 ⊕

where X' ⊂ X is the sub-vector of variables which depend on x, and X '' = X\X ' is the sub-vector of variables which do not depend on x. For example, to get to know if the fault on z2 in the circuit of Fig. 5.22 can be detected on y4 by the given pattern, we have to check if

X2

X4 z21

z2 F2

z22

z31

z11 X1

z1 F1

X3

z12

y4 F4

F3

z3 z13

z32

y5 F5

X5 Fig. 5.22 Combinational circuit with five FFRs [148]

210

5 Logic-Level Fault Simulation

) ( ) ( ∂ y4 ∂z 3 ∂z 3 = y ⊕ F X 4 , z 21 , z 31 ⊕ =1 = y ⊕ F X 4 , z 21 ⊕ 1, z 31 ⊕ ∂z 2 ∂z 2 ∂ z2 The formula (5.21) can be used for calculating the impact of the fault at the fan-out stem x on the output y of the converging fan-out region by consecutive calculating of Boolean derivatives over related FFR chains starting from x up to y. For that purpose, for each converging fan-out stem, we have to construct the corresponding formulas like (5.21) for each FFR involved in the convergence. In the case of nested convergences, the formulas have a nested structure as well. All these formulas constitute a partially ordered computation model for fault simulation, which we compose by the topological analysis of the circuit (see Sect. 5.2.3). Since the formulas are Boolean, we can carry out all computations in parallel for a bunch of test patterns. Definition 5.4 Introduce the following notations for representing symbolically the computing model for fault simulation using the formula (5.21): • • • •

(x, y) – for ∂y/∂x, {X k , y} – for a subset of formulas {∂y/∂x | x ∈ X k } Rxy ((x, x 1 ), … (x, x k )) – for the general case (5.21), where X’ = (x 1 ,…, x k ), Dx – vector, which shows if the fault at the node x is detected or not at any circuit output, • DX – a set of vectors Dx for the nodes x ∈ X. An example of the computing model, for the full fault simulation of the circuit in Fig. 5.22 using the notations of Definition 5.4 is presented in Table 5.21. Table 5.21 Fault model equations ordered by levels (L) L

Partially ordered formulas

Types of simulation tasks

7

DX 4 = {X 4 , y4 }, Dz21 = (z21 , y4 ), Dz31 = (z31 , y4 ); DX 5 = {X 5 , y5 }, Dz13 = (z13 , y5 ), Dz32 = (z32 , y5 )

Fault simulation inside the FFRs (F 4 and F 5 )

6

Dz3 = Dz31 ∨ Dz32

Fault simulation of fan-out-stems (z3 )

5

DX 3 = {X 3 , z3 }∧Dz3 Dz22 = (z22 , z3 )∧Dz3 , Dz12 = (z12 , z3 )∧Dz3

Fault simulation inside the FFRs (F 3 )

4

Dz2 = Rz2,y4 ((z2 , z21 ) ≡ 1, (z2 , z31 )) ∨ ((z22 , z3 )∧Dz32 )

Fault simulation of fan-out-stems (z2 )

3

DX 2 = {X 2 , z2 }∧ Dz2 , Dz11 = {z11 , z2 }∧ Dz2

Fault simulation inside the FFRs (F 2 )

2

Dz1 = Rz1,y4 ((z1 , z2 ),(z1 , z3 )) ∨ Rz1,y5 ((z1 , z3 ),(z1 , z13 ) Fault simulation of ≡ 1), where fan-out-stems (z1 ) (z1 , z3 ) = Rz1,z3 ((z11 , z2 ),(z1 , z12 ) ≡ 1)

1

DX 1 = {X 1 , z1 }∧ Dz1

Fault simulation inside the FFRs (F 1 )

5.2 Fault Simulation in Combinational Circuits

211

The formulas in Table 5.21 are created during the topological tracing of the circuit using the algorithm developed in [448]. The algorithm has linear complexity. However, the complexity of the computational model and the related fault simulation speed depends highly on the structure of the circuit.

5.2.5.3

Reordering the Computing Model Using Levels

Circuit partitioning technique into levels for concurrent execution have been already used before [32, 134, 497]. The level i gate is defined in [32] as one having primary inputs of the circuit and outputs of the level k gates as its inputs, such that k < i. However, in [497], which cites the previous paper, the definition is slightly different, stating that level of a gate represents its distance in gates from primary inputs (PIs) of the circuit. This definition is stricter in the sense that one of the inputs of the level i gate, must originate from the level i − 1, if i /= 0. This difference, however, is crucial for parallelization, because the use of the first definition could potentially result in bigger number of levels with fewer gates in them. As levels should be evaluated sequentially—this could decrease the amount of parallelism dramatically. In our case, as we deal with FFRs, we would stick to the second definition and rephrase it for our purpose. Both the logic simulation model and the computational model for fault backtracing, described in Sect. 5.2.3, are presented as networks of partially ordered formulas linked to each other by variables and computed using SSBDDs. The level of variable is its distance in variables from PIs or, in other words, the level i variable should have at least one of its inputs originating from level i-1 variable, if i /= 0. In the computational model, the variables are numbered in a serial fashion, starting from the primary inputs and finishing at the primary outputs. The variables are serialized such that each input of the variable i is the output of the variable k, where k < i. It is similar to the first definition of levels from [32]. We used OpenCL framework for parallel execution [299]. Therefore, it is necessary to define regions of variables belonging to the same level as a sub-array. Only variables of the particular level must be included in the sub-array. If variable x belongs to level i, then level i should be represented as a continuous sequence of variables starting from variable x to variable y, such that every variable z (x ≤ z < y) belongs to level i, and the variable y belongs to the level i + 1. Therefore, it is necessary to reorder the variables according to our definition of levels. Note that this operation is only required once and does not belong to the fault simulation process. One can save the reordered computational model as a file and use it later for the simulation without the need to repeat this step. The fault model represents segments of the critical path simulated. Each segment starts with the output related to the particular variable and ends at the primary output of the circuit. Therefore, there is a one-to-one correspondence between the critical path segments to be fault simulated and the particular variable. This fact makes it possible to use the structure of the computational model organized in levels for the fault model as well. It is important because, using levels, we can analyze critical path segments starting at the same level in parallel, thus speeding up the fault simulation.

212

5 Logic-Level Fault Simulation

OpenCL framework requires a single program for all the parallel devices, which would manipulate different data. Such a program is called a kernel. It is executed on all available devices in parallel for all variables inside a single level. The best way to provide the data for the kernel is an array. During the preparation of the computational model, the variable indexes are placed into an array according to their levels. The kernel only requires knowing the offset of the level inside the array of variable indexes and the size of the level. The host CPU schedules the kernel executions level by level into the OpenCL execution queue. The execution in the queue is strictly ordered, such that OpenCL driver handles the synchronization between consecutive kernel executions. It ensures that all variables of the current level are computed before moving to the next level.

5.2.5.4

Discussion of Results

We divide the concurrent execution time T p of PECPT fault simulation into two parts: T p = T o + T c . The first part T o is the concurrency overhead. It is required to transition from the single-threaded- to multiple-threaded-execution and back again. This time slot involves creating multiple threads, allocating additional memory, synchronising at the end of computation and transitioning back to the single thread. The second part is time T c , which is pure computation time required by all threads to deliver a result. This time we can see in Table 5.22 and treat it as a lower possible bound for the concurrent computation. The concurrency overhead T o depends on the amount of the parallel hardware used and increases with the number of CPU cores. The computation time T c depends on the amount of computation required. Parallel simulation time T p' = T p + T fm + T ff , where T fm is the time required to compile the fault model and T ff is the time of the fault-free logic simulation. Table 5.22 shows the results of the PECPT execution time T p' in comparison to Parallel Pattern Exact Path Tracing (PPECPT) TPPECPT [450]. As we see, the new method considerably outperforms the previous method, and the gain increases with the size of the circuit (up to an order of magnitude in the case of circuit b19 containing half a million gates). The amount of calculation for small circuits is also small, which makes the share of the overall execution time T p large compared to computation time T c . It can be expressed by the overhead ratio R = T p /T c, as seen from the results in Table 5.22. The overhead ratio is getting closer to one with the growing size of the circuit. In the case of circuit b19, the speed-up gets almost identical to ideal because T p and T c become almost equal. Along with the execution time, there are two speed-up values we compute for every benchmark, compared to TPPECPT [450]. These values are S p and S c . Both include single CPU (non-parallel) computation time of fault model T tpl and fault-free simulation T ffs of the circuit. S p uses parallel execution time T p for its computation, and S c uses pure parallel computation time T c . The equations for speedup values S p and S c are as follows:

5.2 Fault Simulation in Combinational Circuits

213

Table 5.22 Fault model equations ordered by levels (L) [148] Circuit

TPPECPT , s

Concurrency overhead (PECPT)

Pure computation (PECPT)

T p' , s

Tc' , s

Sp

S p #cpu

R

Sc

S c #cpu

c1908

0.06

0.08

0.67

6

2.86

0.03

1.72

5

c2670

0.04

0.09

0.46

4

6.52

0.03

1.21

6

c3540

0.18

0.13

1.39

8

1.81

0.08

2.43

7

c5315

0.09

0.09

0.92

4

3.05

0.05

1.74

5

c6288

1.46

0.62

2.35

6

1.61

0.39

3.76

8

c7552

0.15

0.12

1.30

6

1.94

0.07

2.15

6

s13207

0.18

0.13

1.35

5

5.05

0.09

2.10

10

s15850

0.47

0.21

2.24

8

2.34

0.14

3.44

7

s35932

0.26

0.17

1.47

10

1.95

0.14

1.85

12

s38417

0.75

0.24

3.07

12

1.95

0.19

3.99

12

s38584

0.59

0.25

2.39

9

2.43

0.18

3.32

12

b14

2.77

0.88

3.17

8

1.29

0.73

3.80

9

b15

5.04

1.18

4.28

10

1.49

0.93

5.45

10

b17

14.86

2.40

6.18

20

1.29

2.11

7.03

12

b18

67.33

7.15

9.42

24

1.09

6.77

9.94

24

b19

147.65

14.47

10.20

24

1.03

14.07

10.49

24

Sp =

TP P EC P T TP P EC P T = Tt pl + T f f s + T p T p'

Sc =

TP P EC P T TP P EC P T = Tt pl + T f f s + Tc Tc'

S c can be considered the topmost ideal speedup case by PECPT algorithm. It can be observed from the results that smaller circuits achieve small or negative speed-up. On the other hand, larger circuits take advantage of a higher number of processor cores. Such a result for smaller circuits can be explained by the low parallelism and a high overhead ratio R. Both factors change positively when the circuit size becomes larger. One of the challenges of this method is that a different number of processor cores are required to achieve maximum speed up for different circuits. The number of processor cores used to achieve maximum speed-up is brought under the #cpu column. This number grows along with the circuit size, as shown in Fig. 5.23 for ISCAS’89 and ITC’99 benchmark circuits. The fluctuation in speed-up of some circuits can be explained by the fact that it is up to OpenCL runtime to decide which processors to use for execution. Because our test system has virtual hyper-threading cores, they can also be arbitrarily chosen for execution, which could influence the speed of execution in situations where fewer physical cores are used for computation, although the overall number of cores is larger. For all the benchmark circuits, we can see that after the limit of physical cores

214

5 Logic-Level Fault Simulation

Fig. 5.23 Speed-up depending on #CPU for PECPT for the ISCAS’89 (a) and ITC’99 (b) [148]

is reached, the speedup starts to decline or saturates. For the ITC’99 benchmarks b18 and b19, it slightly increases when more than 12 cores are used. It shows that the larger the circuits, the more cores are involved in achieving the maximum speed-up of the simulation. We compared PECPT to single processor simulators, such as FSIM [245], PPECPT [451] and commercial simulators C1 and C2. We normalized the execution time of all simulators using previous results from [451] and the execution time of PPECPT from Table 5.22, because PECPT was executed on different hardware. The comparison is provided in Table 5.23. PECPT proves to be, on average, 3.8 times faster than FSIM and around 2 times faster than PPECPT for relatively small ISCAS benchmarks. The speedup over commercially available simulators is more than 8 times over C1 and 2 orders of magnitude over C2. If ITC’99 benchmark circuits are also considered, the average speedup over PPECPT grows to 4.0 and over C1 8.7 times on average, which suggests that the simulation of larger circuits benefits the most from this method. We have also compared PECPT speedup results to GPU-based parallel fault simulator and fault table generator GFTABLE [135]. GFTABLE is a pattern parallel simulator which uses bit- and thread-level PP to boost the performance of the single-processor simulator FSIM. We have used the results from Table 4 in [451] to

5.2 Fault Simulation in Combinational Circuits

215

Table 5.23 Execution time comparison [148]

Circuit           # Fan-outs   # Branches        Simulation time (s)
                               Max      Avg      FSIM    C1      C2      PPECPT   PECPT
c2670             290          28       3.7      0.08    0.22    2.43    0.04     0.09
c3540             356          22       4.5      0.41    1.51    8.75    0.18     0.13
c5315             510          31       5.0      0.15    0.59    6.05    0.09     0.09
c6288             1456         16       2.6      2.39    5.49    56.07   1.46     0.62
c7552             812          72       4.1      0.35    1.04    11.33   0.16     0.12
s13207            1224         37       3.7      0.23    0.50    6.29    0.18     0.13
s15850            1518         34       3.6      0.94    2.11    19.38   0.47     0.21
s35932            5295         1449     3.4      0.41    1.06    17.48   0.26     0.17
s38417            4569         49       3.2      1.73    3.34    33.01   0.75     0.24
s38584            3946         88       4.5      1.12    2.16    29.73   0.60     0.25
Average speedup                                  3.79    8.75    92.46   2.02     1
b14               2409         82       4.8      N.A     9.41    N.A     2.77     0.88
b15               2353         95       4.8      N.A     7.41    N.A     5.04     1.18
b17               8145         149      4.8      N.A     22.3    N.A     14.86    2.41
Average speedup                                          8.77            4.12     1

Normalization is required because the PECPT speed-up is computed in relation to PPECPT, while the GFTABLE speed-up is computed in relation to FSIM. As there is no FSIM execution time provided for the ITC'99 benchmarks, we have taken the average ratio of 1.7 reported in [451] to normalize the PECPT results for those circuits. Figure 5.24 shows the speed-up in comparison to the single-processor version of FSIM for both algorithms. We have arranged the circuits in order of increasing gate count, so that the dependency of the speed-up on the circuit size is clearly visible. PECPT proves to be more beneficial for circuit sizes comparable to the ITC'99 benchmarks. For example, for the ISCAS'85 circuit c5315 the speed-up is 8.03 times for GFTABLE and 1.50 times for PECPT, while for the ITC'99 circuit b15 the speed-up is 2.57 for GFTABLE and 7.28 for PECPT. It is interesting to note that the results of GFTABLE decrease with growing circuit size, whereas PECPT gives less gain in speed-up for smaller circuits. It is stated in [135] that the amount of global memory available on the GPU influences the performance of GFTABLE for larger circuits, which highlights the scalability bottleneck of GFTABLE. The results of the proposed approach also depend on the amount of system memory available, but CPU systems are generally more flexible in increasing memory size than GPUs. Even for the circuits which fit into the GPU memory, we can see a slight decrease in the performance of GFTABLE. On the contrary, the results of our approach, on average, become better as the circuit size increases.
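The normalization step can be made explicit with a small sketch. It assumes that the ratio of 1.7 taken from [451] is the average FSIM-to-PPECPT run-time ratio; with that reading, the numbers reproduce the b15 value quoted above.

# Re-expressing the PECPT speed-up relative to FSIM (assumed reading of the
# normalization described in the text; 1.7 is the average FSIM/PPECPT ratio).
def pecpt_speedup_vs_fsim(t_ppecpt, t_pecpt, fsim_over_ppecpt=1.7):
    speedup_vs_ppecpt = t_ppecpt / t_pecpt          # what Table 5.23 gives directly
    return speedup_vs_ppecpt * fsim_over_ppecpt     # rescale to the FSIM baseline

# Example with the b15 row of Table 5.23 (PPECPT 5.04 s, PECPT 1.18 s):
print(round(pecpt_speedup_vs_fsim(5.04, 1.18), 2))  # ~7.26, close to the 7.28 cited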


Fig. 5.24 Comparison of GFTABLE [135] and PECPT [150]

In order to make the method more practical, it is necessary to determine the number of CPU cores to be involved to provide the best speed-up for a particular circuit. We believe this can be achieved by further research, because the optimum number of CPU cores and the achievable speed-up depend on the circuit parameters.

5.3 Fault Simulation in Sequential Circuits

Because fault simulation serves both as the tool for measuring the quality of tests and as the generator of data structures such as fault dictionaries for fault diagnosis, accelerating fault simulation procedures is highly desirable. This is especially the case for fault simulation of sequential circuits, where parallel pattern simulation is challenging due to the sequential dependence of the test patterns on each other. In this section, we propose two concepts for increasing the speed of fault simulation in sequential circuits. The first concept, presented in Sect. 5.3.1, combines classical single-fault simulation with the advanced parallel simulation methods described for combinational circuits in Sect. 5.2. For this purpose, the faults are divided into two classes: the faults for which parallelism is possible and the faults for which it is not. The main ideas of this approach were published earlier in [218]. The second simulation concept, considered in Sect. 5.3.2, is based on converting sequential fault simulation into a combinational one by introducing monitors into the global feedbacks. The central idea of both approaches is to apply, as much as possible, the fast parallel-pattern critical-path fault tracing described in the previous sections for combinational circuits to fault simulation in sequential circuits. The main ideas of this approach were published earlier in [463] and [222].


5.3.1 Combining Combinational and Sequential Simulation

In this section, we combine parallel pattern exact critical path back-tracing of faults, used so far only for combinational circuits, with traditional fault simulation in sequential circuits. For that purpose, we develop formulas for the classification of faults into two classes. The first class contains the faults eligible for critical path back-tracing, and the second class contains the faults which require propagation over the global feedbacks of the circuit by traditional fault simulation methods used for sequential circuits. Combining these two approaches in fault simulation, i.e. the combinational and the sequential simulation concepts, allows a considerable speed-up of fault simulation in sequential circuits, as demonstrated by experimental results.

5.3.1.1 The Main Ideas of the Concept

In the previous sections, a very fast parallel pattern fault simulation method based on critical path back-tracing of faults was described, which, however, is applicable only to combinational circuits. For sequential circuits, unfortunately, critical path back-tracing of faults is not directly possible due to the sequential (time-related) dependence of signals in the circuit, which in the general case requires critical path tracing over many clock cycles and hence leads to an exponential growth of calculations. In the following, we develop a new method that allows partial application of parallel critical path tracing of faults in sequential circuits, combining this possibility with the traditional single-fault simulation of sequential circuits. We consider in this presentation only the class of stuck-at faults (SAF). However, the results can also be extended to other fault classes, such as the ones based on conditional SAF, similar to what was demonstrated in the previous chapters for combinational circuits. According to the main idea of the proposed approach, we first cut the feedback loops of the sequential circuit and transform it into a combinational one, where the primary inputs are complemented with pseudo-inputs, i.e. the outputs of flip-flops, and the primary outputs are complemented with pseudo-outputs, i.e. the inputs of flip-flops. In the next step, by simple simulation, the test sequence on the primary inputs is converted into the test sequence on all (i.e., primary and pseudo) inputs of the combinational equivalent of the given sequential circuit. In each simulation step, we classify the faults into two groups, using fault back-tracing in the combinational part of the circuit. The first group contains the faults which are propagated by the test pattern directly to the primary outputs within a single clock cycle; we classify these faults as immediately detected. The second group contains the faults which propagate to the pseudo-outputs and need more than one clock cycle to propagate to the primary outputs. These faults have to be simulated separately by sequential simulation in the traditional way, as is usually done for sequential circuits.


In such a way, combining the two approaches of fault simulation, i.e. the combinational fault back-tracing and the traditional sequential single-fault simulation, allows a dramatic speed-up of fault simulation in sequential circuits. For the implementation of the presented main idea, we have developed a method to carry out this analysis in parallel, i.e. for many patterns concurrently. The basis of the method is converting the set of test pattern sequences into a set of independent test pattern packages, which allows parallel processing of the packages.

5.3.1.2 The Problems of Parallelization of Critical Path Tracing in Sequential Circuits

The described method of critical path tracing, both for single patterns and for parallel simulation of multiple patterns, is possible only for combinational circuits. Moreover, the higher the number of nested convergent FFRs, the more complex the computation model for back-tracing becomes, and the slower the simulation runs. Let us consider the reasons why exact critical path tracing of faults is practically impossible in sequential circuits. The substantial problem of fault simulation in sequential circuits lies in the fact that the same fault can influence a particular component in different clock cycles. This fact excludes the possibility of exploiting the powerful critical path tracing based method for combinational circuits explained in Sect. 5.2. The reason is the exponential explosion of the number of nested and intersecting re-converging FFRs over different timeframes. However, this problem can be resolved if it is possible to detect a fault on the first occasion when it propagates to an observable point without cycling through global loops. There are two reasons why the same fault may propagate to the same component of the circuit along different paths and in different timeframes: (1) the case of a global feedback loop, which includes a component that the same fault can influence via different timeframes, and (2) the case of a sequential convergent fan-out, where the fault may propagate to the same component from the same fan-out stem via different timeframes. Figure 5.25 illustrates the first case. The circuit consists of 3 registers and 4 combinational sub-circuits connected by buses. Assume there is a fault Q in the sequential circuit on one of the outputs of the register R2. Assume as well that under the given test sequence the fault propagates to the primary output Y of the circuit by two successive test patterns. The first pattern propagates the fault in the clock cycle t-1 through the combinational blocks F3, F4 and F2 (see the bold black lines) to the register R3, and the second test pattern propagates the erroneous signal from the register R3 in the next clock cycle t via block F4 to the primary output Y (see the bold grey lines). To better understand the mechanism of critical path tracing of faults in the sequential circuit, let us unroll the sequence of two test patterns into the two timeframes t-1 and t of the iterative logic array of the circuit presented in Fig. 5.26. In the iterative logic array in Fig. 5.26, the single fault Q is propagating, and its impact is converging on the inputs of the sub-circuit F4 via two

Fig. 5.25 Critical path tracing of a fault in a sequential circuit, the case (1) [218]

Fig. 5.26 Critical path tracing of a fault in a sequential circuit over two-time frames, the case (1) [218]


different paths: the path (Q, a, c, d), activated in the timeframe t-1, and the path (Q, e), activated in the timeframe t. Note that critical path tracing of faults for a given test pattern is based on reasoning about the simulated correct signals produced in the circuit by that pattern. Using only the correct signals makes it possible to determine in parallel all the faults which may propagate along the activated critical paths to the observable primary outputs. In Fig. 5.26, a path is activated along the lines a and c (during cycle t-1) and d (during cycle t). The conditions for propagating the faults from a to c are determined by the signals on the bus b at time t-1. On the other hand, the conditions for propagating the faults from d to the output Y depend on the signals on the bus e at time t. But this means that the signals on the lines of the circuit in the timeframe t are distorted, and the


back-tracing of the critical paths is no longer correct for this faulty case. Moreover, since the parallel back-tracing targets all faults, and each fault introduces its own distortion into the back-tracing conditions, the back-tracing of critical paths becomes practically impossible. In this particular case, since the fault Q under investigation has to be considered in both timeframes of the iterative array, the fault propagation conditions on the buses e and d also depend essentially on the impact of the fault Q itself. However, this contradicts the mechanism of critical path tracing of faults, which must be carried out using only the correct signals (also on the outputs of the register R2). The second case (2), of sequential convergent fan-outs, is illustrated in Fig. 5.27. In this case, the single fault Q propagates to a converging point not along a global feedback loop, but rather along two branching paths activated in different timeframes. Assume there is a fault Q in the sequential circuit on the output of the register R1. Let us unroll the sequence of two test patterns into the two timeframes t-1 and t as shown in Fig. 5.27. The first convergent branch consists of the path (Q, a, R6), activated at time t-1, and of the path (R6, b, F3), activated at time t. The second convergent branch is formed by the path (F2, c, F3), which is activated at time t.

Fig. 5.27 Critical path tracing of a fault in a sequential circuit over two timeframes, case (2) [218]

Both


branches, propagating the impact of the same fault Q, converge on the inputs of the sub-circuit F3 in the same clock cycle t. For this case, we can show, in a similar way as in case (1), that critical path tracing using only the correct signals in the circuit is not possible. As an example, in Fig. 5.27, in the timeframe t, the faults on line c cannot be back-traced, since the signal b in this timeframe is not correct. In such a way, the examples discussed in Figs. 5.26 and 5.27 demonstrate that in sequential circuits we have to simulate each fault separately, and the classical fault-independent critical path tracing of faults used in combinational circuits is not possible. In the next section, we show how the critical path tracing method can still be used for at least a part of the faults of sequential circuits, which helps to decrease considerably the total time needed for fault simulation in sequential circuits.

5.3.1.3 Converting Sequential Fault Simulation into Combinational Critical Path Fault Tracing

Consider a sequential circuit in Fig. 5.28, partitioned into the combinational part, consisting of the sub-circuits A, B, and C with the related fault sets S_A, S_B, S_C, and a sequential part consisting of the flip-flops FF. Let S = S_A ∪ S_B ∪ S_C be the complete set of faults to be simulated in the circuit.

Fig. 5.28 A sequential circuit partitioned according to fault types [218]

The faults in S_A always propagate directly to the primary output OUT_A, and never to the FFs. Hence, the faults in S_A can be handled independently in all timeframes of the given test sequence, and they can be simulated by the parallel critical path tracing (PCPT) approach. On the contrary, we can never observe the effect of the faults in S_B directly at OUT_A without propagating them one or more times through the feedback loop of the FFs; hence, these faults do not allow back-tracing of the critical paths using only the correct signals in the circuit (similarly to the examples in Figs. 5.26


and 5.27). Consequently, the faults in S_B have to be processed by the slow sequential fault simulation (SSFS) traditionally used for sequential circuits. The faults in S_C represent a special case. When a test sequence is applied to the circuit, the same faults in S_C may sometimes propagate directly to the primary output OUT_A, whereas at other times they may propagate first to OUT_B and only then, after passing through the feedback loop, be observed at the primary output OUT_A. Let the faults in S_C which propagate first to the pseudo-output OUT_B form a subset S_C' ⊆ S_C. The described ambivalence of the faults in S_C makes them the key problem of the new fault simulation method. We represent the set of faults that propagate directly to the primary output OUT_A, without circulating in the feedback loop, as

S(A) = S_A ∪ (S_C − S_C')

For propagating these faults to the observable output OUT_A, a single timeframe is sufficient, and hence they can be simulated in the same way as in combinational circuits, using the fast PCPT simulation approach. Note that PCPT is carried out using the correct signals of the circuit, which is the prerequisite for the possibility of analyzing all faults in the circuit concurrently. The rest of the faults,

S(B) = S_B ∪ S_C' = S − S(A),

propagate to the observable output OUT_A using at least two or more timeframes. This means that each such fault, starting from the second timeframe of its propagation trace up to OUT_A, distorts the states of the signals in the circuit in all timeframes in its own way, making the concurrent analysis of different faults impossible. Hence, all the faults in S(B) have to be simulated separately by the conventional methods used for sequential circuits, which is a very slow procedure. Since conventional slow fault simulators for sequential circuits target all faults S in the circuit, it is promising to extract from S the subset of faults S(A) which can be simulated using the fast parallel critical path tracing. In this way, it is possible to speed up the full fault simulation procedure dramatically. We propose a simulation-based method for the classification of all faults into the two subsets S(A) and S(B), where S(A) ∩ S(B) = ∅. The difficulty is related to the fault set S_C', because the content of S_C' strongly depends on the test sequence. In other words, it is not possible to predefine the subset S_C' before the fault simulation; the content of S_C' can be determined only online, during the fault simulation process. This additional payload may slow down the whole procedure. The method, illustrated in Fig. 5.29, is based on applying critical path tracing to the patterns of the sequence one by one, separately for the outputs OUT_A and OUT_B. As a result, we hit two targets: first, using critical path tracing, we immediately find a part S(A) of the detected faults, and second, we extract the subset of faults S(B) which needs separate slow fault simulation.
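The two set equations above can be checked with a small sketch; the concrete fault identifiers below are hypothetical and serve only to show that S(A) and S(B) form a disjoint partition of S.

# Illustrating S(A) = S_A ∪ (S_C − S_C') and S(B) = S_B ∪ S_C' = S − S(A)
# with hypothetical fault identifiers.
S_A = {"a1", "a2"}            # faults observable only at OUT_A
S_B = {"b1"}                  # faults observable only via the feedback loop
S_C = {"c1", "c2", "c3"}      # faults that may go either way
S_C_prime = {"c3"}            # the part of S_C that, for this test, reaches OUT_B first

S = S_A | S_B | S_C
S_of_A = S_A | (S_C - S_C_prime)
S_of_B = S_B | S_C_prime

assert S_of_B == S - S_of_A   # S(B) = S − S(A)
assert not (S_of_A & S_of_B)  # S(A) ∩ S(B) = ∅
print(sorted(S_of_A), sorted(S_of_B))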

Fig. 5.29 Simulation-based fault classification [218]: faults simulated with fast parallel critical path tracing, faults not yet targeted, and faults to be simulated with the slow sequential approach

We start from the first pattern of the test sequence and calculate the sets S_A^1 and S_B^1 of faults detected on the outputs OUT_A and OUT_B, respectively. The faults of S_A^1 can be included immediately into the set S(A) of faults detected by this first pattern, S(A_1) = S_A^1, whereas the faults of S_B^1 are included in S(B_1). Next, for the second pattern of the test sequence, we find the sets of detected faults S_A^2 and S_B^2. The sets S(A) and S(B) can then be updated in the following way:

S(A) = S(A_2) = S(A_1) ∪ (S_A^2 − S(B_1)),
S(B) = S(B_2) = S(B_1) ∪ (S_B^2 − S(A_2)).

In the general case, we can express the state of the fault simulation after the k-th pattern of the test sequence as follows:

S(A) = S(A_k) = S(A_{k−1}) ∪ (S_A^k − S(B_{k−1}))     (5.22)

S(B) = S(B_k) = S(B_{k−1}) ∪ (S_B^k − S(A_k))     (5.23)

The procedure ends when the k-th pattern is the last one in the test sequence. S(A) represents the set of faults simulated and detected very fast by the critical path tracing method, and S(B) represents the faults which need additional fault simulation by a conventional fault simulator for sequential circuits.
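As an illustration, the update rules (5.22) and (5.23) translate directly into set operations; the sketch below is a minimal Python rendering, where the per-pattern sets S_A^k and S_B^k are assumed to be given by the critical path tracing step.

# Iterative fault classification following Eqs. (5.22) and (5.23).
def classify_faults(per_pattern):
    """per_pattern: iterable of (S_A_k, S_B_k) set pairs from critical path tracing."""
    S_of_A, S_of_B = set(), set()          # S(A_0) = S(B_0) = ∅
    for S_A_k, S_B_k in per_pattern:
        S_of_A |= (S_A_k - S_of_B)         # Eq. (5.22)
        S_of_B |= (S_B_k - S_of_A)         # Eq. (5.23), uses the already updated S(A_k)
    return S_of_A, S_of_B                  # the two sets stay disjoint by construction

# Tiny hypothetical example with two patterns:
print(classify_faults([({"f1"}, {"f2", "f3"}), ({"f3"}, {"f1", "f4"})]))
# f3 ends up in S(B) because it reached OUT_B before being seen at OUT_A.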

5.3.1.4 Combining Parallel Critical Path Tracing with Sequential Fault Simulation

In the previous section, we considered a test for a sequential circuit as a single test sequence where all test patterns, consisting of primary input patterns and pseudo-input patterns (the output values of the flip-flops), are strongly dependent on each other due to the feedback loop. This fact allowed us to exploit the power of critical path tracing only for single patterns. In this section, we describe the method of parallel


critical path tracing concurrently for many patterns, which we developed for combinational circuits. This parallelism is possible due to the independence of the test patterns in the case of combinational circuits. In the following, we develop a method of parallel critical path tracing for sequential circuits, exploiting the independence between the test segments in the full test sequence. We assume that each segment of test patterns starts with the initialization of the flip-flops, and all the subsequent patterns serve for fault sensitization and fault propagation to the primary output OUT_A. Consider a test sequence consisting of k test segments, where each i-th test segment consists of m_i test patterns, TS_i = (T_{i,1}, T_{i,2}, …, T_{i,m_i}). Since all test segments are independent, the first patterns (T_{1,1}, T_{2,1}, …, T_{k,1}) of all k segments, and in a similar way the second patterns (T_{1,2}, T_{2,2}, …, T_{k,2}) of all k segments, etc., can be considered together as a set of m packages {TPP_j} of independent test patterns. In the general case, the lengths of the packages may be different, and the number of packages m is equal to the number of patterns in the longest test segment. An example of converting the initial test sequence into a set of packages of independent test patterns is depicted in Fig. 5.30. For each package TPP_j = (T_{1,j}, T_{2,j}, …, T_{k,j}), j = 1, 2, …, m, of test patterns, parallel critical path tracing can be applied concurrently to all patterns in the package. As the result of this procedure, for each test pattern T_{i,j} ∈ TPP_j, the fault sets S_A^{i,j} and S_B^{i,j} are calculated, and the detected fault sets (fault covers) are calculated in the following way.

Fig. 5.30 Converting test segments into the set of independent test pattern packages [218]
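The conversion of test segments into packages sketched in Fig. 5.30 is essentially a transposition of a ragged list; a minimal Python sketch (with hypothetical pattern labels) could look as follows.

from itertools import zip_longest

def segments_to_packages(segments):
    """Transpose test segments TS_i into packages TPP_j of independent patterns.

    segments: list of test segments, each a list of test patterns.
    Package j holds the j-th pattern of every segment that is long enough;
    shorter segments simply contribute nothing to the later packages.
    """
    packages = []
    for column in zip_longest(*segments):            # j-th patterns of all segments
        packages.append([p for p in column if p is not None])
    return packages

# Hypothetical example with k = 3 segments of lengths 3, 2 and 3:
TS1 = ["T1,1", "T1,2", "T1,3"]
TS2 = ["T2,1", "T2,2"]
TS3 = ["T3,1", "T3,2", "T3,3"]
print(segments_to_packages([TS1, TS2, TS3]))
# [['T1,1', 'T2,1', 'T3,1'], ['T1,2', 'T2,2', 'T3,2'], ['T1,3', 'T3,3']]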



Fig. 5.31 Results of parallel critical path tracing of the sequential test [218]

S_A^j = S_A^{1,j} ∪ S_A^{2,j} ∪ … ∪ S_A^{k,j}

S_B^j = S_B^{1,j} ∪ S_B^{2,j} ∪ … ∪ S_B^{k,j}

An example of a fault table created by parallel critical path tracing for the given set of k packages of independent test patterns is depicted in Fig. 5.31. Based on the fault sets S_A^j and S_B^j calculated by critical path tracing, the sets of faults S(A_k) and S(B_k) can be calculated similarly to the formulas (5.22) and (5.23), respectively. The method of combining parallel critical path tracing with sequential fault simulation for sequential circuits is presented as Algorithm 5.1 [218].


Algorithm 5.1 Converting a test sequence into test packages of independent test patterns
1: Convert the test segments into test packages of independent test patterns (Fig. 5.30)
2: FOR each test package TPP_j
3:   Apply parallel critical path tracing to calculate the fault sets S_A^j and S_B^j
4: END FOR
5: FOR k = 1, 2, …, m (m is the number of test packages, S(A_0) = S(B_0) = ∅)
6:   S(A_k) = S(A_{k−1}) ∪ (S_A^k − S(B_{k−1}))
7:   S(B_k) = S(B_{k−1}) ∪ (S_B^k − S(A_k))
8: END FOR
9: RETURN S(A) = S(A_m), S(B) = S(B_m)

The set S(A) represents the faults detectable by the given test, as calculated by parallel critical path tracing (PCPT). The set S(B) represents the faults whose detectability cannot be determined by PCPT. The faults in S(B) must be simulated by conventional fault simulation methods for sequential circuits.

5.3.1.5 Estimating the Speed-Up of the Proposed Method

Let us now analyze the timing characteristics and the possible speed-up of the proposed method. Let us use the following notations:

n       – the number of faults in the circuit;
n_B     – the number of faults in S(B) to be simulated by the slow sequential fault simulator;
t_anal  – the total time needed for critical path tracing and fault analysis, consisting of two parts, t_anal = t_CP + t_CL, where
t_CP    – the time needed for critical path tracing, and
t_CL    – the time needed for fault classification;
t_OLD   – the time needed for fault simulation of all faults in a sequential circuit by conventional simulation;
t_seq   – the average time of sequential simulation of one fault, t_seq = t_OLD / n.

The total time needed for the proposed method, which combines critical path tracing with conventional sequential fault simulation, can be calculated as

t_NEW = t_anal + (t_seq × n_B) = t_CP + t_CL + (t_seq × n_B)     (5.24)

The speed-up of using the proposed method, compared to the sequential approach, we characterize as follows tseq × n tOLD ) ( = tNEW tanal + tseq × n B


Let us denote p = t_anal / (t_seq × n_B). Since t_anal << t_seq × n_B, it is reasonable to calculate the upper limit of the speed-up when p → 0:

lim(p→0) t_OLD / t_NEW = lim(p→0) (n × t_seq) / (n_B × t_seq) = n / n_B     (5.25)

From (5.25), we see that the speed-up of the proposed method depends substantially on the number of faults n_B in S(B), which cannot be simulated by critical path tracing. Note that the content of S(B), and hence the speed-up effect of the proposed method, also depends on the given test sequence to be fault simulated. Since the set S(B) can be calculated by parallel critical path tracing of faults, separately for each pattern, this procedure is very fast compared to the traditional methods of sequential fault simulation.
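A short sketch shows how (5.24) and (5.25) can be used to predict the achievable gain before running the slow sequential part. The b17 figures below are taken from Tables 5.25 and 5.26; estimating n_B as 25 % of the fault count is an assumption of this sketch.

# Predicting the speed-up of the combined method from Eqs. (5.24) and (5.25).
def predicted_gain(n_faults, n_B, t_anal, t_seq):
    """Speed-up t_OLD / t_NEW according to Eq. (5.24)."""
    t_old = t_seq * n_faults
    t_new = t_anal + t_seq * n_B
    return t_old / t_new

def upper_bound(n_faults, n_B):
    """The limit n / n_B of Eq. (5.25), reached when t_anal is negligible."""
    return n_faults / n_B

# Rough check against b17: 129,422 faults, about 25 % of them in S(B)
# (Table 5.26, design 1), giving an upper bound close to the 4.01 of Table 5.25.
n = 129_422
n_B = round(0.25 * n)
print(round(upper_bound(n, n_B), 2))                       # ≈ 4.0
t_seq = 12_857.8 / n                                       # t_OLD_NFD / n for b17
print(round(predicted_gain(n, n_B, 17.0, t_seq), 2))       # ≈ 3.98 vs. 3.99 measured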

5.3.1.6 Experimental Results

The experimental research aimed to measure and investigate the speed-up potentials of the proposed new method for fault simulation in sequential circuits compared to conventional fault simulation. The comparison was carried out for a representative selection of 27 circuits with different complexities (numbers of possible faults) of two benchmark families ISCAS’89 [39] (16 circuits) and ITC’99 [86] (11 circuits). The number of SAF faults in the circuits ranged from 614 up to 129,422. Table 5.24 presents the main characteristic data of the circuits used in the experiments. We use the following notations: the number of inputs (#in), the number of outputs (#out), the number of flip-flops (#FF), the ratio between the numbers of observable primary outputs and not directly observable flip-flops #out/#FF, which characterizes the sequential complexity of the circuit, the number of gates (#gates) and the number of possible stuck-at-faults (#faults). Table 5.25 presents data from the experimental research, where the benchmark circuits are ordered according to their increasing complexity, i.e. the number of faults. We apply a sequential test consisting of 32 independent test segments to each benchmark circuit and create a sequence of 50 random input patterns in each segment. We assume the circuit flip-flops can be initialized (reset) before each test sequence. The total length of the full test sequence is 1600 test patterns. In columns 4–6, the test sequence is characterized for all circuits by two types of fault coverages S(A) and S(B), which mean the percentages of faults in the sets S(A) and S(B), respectively, in relation to the full set of faults given in column 3. The sum S(A) + S(B) in column 6 characterizes the sets of faults propagating to the primary outputs and flip-flops, respectively. Note that the faults in S(A) are detected directly by the critical path tracing, but the faults in S(B) need additional sequential fault simulation to determine if they also propagate via feedback loops to the primary outputs.


Table 5.24 Characteristics of the benchmark circuits used in experiments [218]

Circuit    #In   #Out   #FF     #Out/#FF   #Gates    #Faults
s349       9     11     15      0.7        161       614
s386       7     7      6       1.2        159       744
s510       19    7      6       1.2        211       900
s526       3     6      21      0.3        193       984
s641       35    24     19      1.3        379       1164
s713       35    23     19      1.2        393       1266
s953       16    23     29      0.8        395       1720
s1423      17    5      74      0.1        657       2610
s1494      8     19     6       3.2        647       2864
s5378      35    49     179     0.3        2779      9122
s9234      19    22     228     0.1        5597      16,756
s13207     31    121    669     0.2        7951      24,882
s15850     14    87     597     0.1        9772      29,682
s35932     35    320    1728    0.2        16,065    65,248
s38417     28    106    1636    0.1        22,179    69,662
s38584     12    278    1452    0.2        19,253    72,346
b04        11    8      66      0.1        652       2836
b05        1     36     34      1.1        927       3952
b07        1     8      49      0.2        383       1712
b08        9     4      21      0.2        149       706
b10        11    6      17      0.4        172       806
b11        7     6      31      0.2        726       2894
b12        5     6      121     0.0        944       4426
b13        10    10     53      0.2        289       1350
b14        32    54     245     0.2        9767      38,982
b15        36    70     449     0.2        8367      36,496
b17        37    97     1415    0.1        30,777    129,422

All simulation time costs are given in seconds. The time costs t_CP and t_CL in columns 7 and 8 characterize the time needed for critical path tracing and the time needed for fault classification, respectively. Hence, the time t_CP + t_CL represents the full cost of the parallel fault simulation of faults by critical path tracing. The total time cost t_NEW of fault simulation by the proposed method, where the parallel critical path tracing and the traditional sequential fault simulation are combined, is depicted in column 9. Columns 10 and 11 present the time costs of the baseline methods, i.e. the traditional sequential fault simulation. Here, two approaches are considered, i.e. simulation with fault dropping [14] (t_OLD_FD) and simulation without fault dropping


Table 5.25 Experimental results and comparison of the proposed method versus conventional fault simulation [218]

No   Circuit   # Faults   Fault coverage      t_anal (s)   t_NEW (s)   t_OLD_FD (s)   t_OLD_NFD (s)   Gain            Higher
                          S(A)+S(B) (%)                                                               t_OLD_NFD/t_NEW bound
1    s349      614        96                  0.1          0.5         0.03           0.44            0.89            1.13
2    b08       706        98                  0.1          0.6         0.05           0.51            0.84            1.04
3    s386      744        72                  0.1          0.3         0.10           0.49            1.61            2.78
4    b10       806        92                  0.1          0.7         0.08           0.65            0.93            1.12
5    s510      900        93                  0.1          0.6         0.12           0.73            1.21            1.54
6    s526      984        36                  0.1          0.4         0.25           0.80            2.14            3.09
7    s641      1164       87                  0.2          0.7         0.16           1.56            2.15            2.69
8    s713      1266       81                  0.2          0.8         0.23           1.75            2.17            2.66
9    b13       1350       64                  0.1          1.2         0.41           1.72            1.43            1.61
10   b07       1712       80                  0.2          2.3         0.47           2.70            1.15            1.25
11   s953      1720       94                  0.2          2.5         0.31           2.73            1.07            1.16
12   s1423     2610       70                  0.2          4.6         1.04           6.38            1.39            1.46
13   b04       2836       91                  0.2          7.4         0.57           7.84            1.06            1.11
14   s1494     2864       58                  0.2          1.5         1.38           6.28            4.22            4.95
15   b11       2894       68                  0.2          5.3         1.58           7.60            1.43            1.48
16   b05       3952       34                  0.4          4.5         4.22           13.0            2.86            3.11
17   b12       4426       39                  0.3          6.3         4.25           15              2.50            2.61
18   s5378     9122       72                  0.6          22.9        11.0           85.21           3.72            3.82
19   s9234     16,756     32                  0.9          91.1        105.0          300.0           3.29            3.33
20   s13207    24,882     52                  1.4          306         139.0          667.2           2.18            2.19
21   s15850    29,682     37                  1.8          309         250.0          946.5           3.06            3.08
22   b15       36,496     50                  6.0          521         228.6          1036.2          1.99            2.01
23   b14       38,982     75                  4.2          919         258.8          1226.4          1.33            1.34
24   s35932    65,248     84                  2.5          2827        367.0          3734.2          1.32            1.32
25   s38417    69,662     52                  3.8          2516        1189.0         4956.6          1.97            1.97
26   s38584    72,346     78                  3.7          3255        704.0          4650.5          1.43            1.43
27   b17       129,422    25                  17           3226        2856.2         12,857.8        3.99            4.01


(t_OLD_NFD). In column 12, the advantage (the gain in time cost reduction) of the proposed method is presented, and in column 13, for comparison, the higher bounds of the possible gains, calculated by the formula (5.25), are depicted. Note that the target of this research was not to generate test patterns with the highest fault coverage; rather, the goal of the experiments was to investigate the speed-up of fault simulation compared with the baseline for any test sequence that may be the object of fault simulation. In the current case, we selected randomly generated test sequences with an equal test length for all circuits.

5.3.1.7 Discussion of the Experimental Results

The test sequences were analysed twice. First, by the proposed method of parallel critical path tracing, we calculated the sets of faults {S_A^k} which propagated to the primary outputs OUT_A, and the sets of faults {S_B^k} which propagated to the pseudo-outputs OUT_B (see Figs. 5.28 and 5.29). We measured the time cost of this procedure as t_CP. Then we classified the faults into the two subsets S(A) ⊂ {S_A^k} and S(B) ⊂ {S_B^k}. The subset S(A) includes the faults which can be declared detected as a result of the critical path tracing method, whereas the subset S(B) includes those faults that have to be simulated by a conventional simulator for sequential circuits. We measured the time cost of this fault classification procedure as t_CL.

The experimentally measured gain in speed-up of the new method compared to the baseline method (column 12) and the higher bounds for the gain are illustrated by the charts in Fig. 5.32. The new method outperforms the baseline method in a broad range of up to 4 times (on average 2 times), except for very small circuits (less than 200 gates). On the other hand, we see that the experimental results and the theoretically calculated bounds, obtained with the formula (5.25), are very close in the case of small circuits, and in the case of larger circuits they nearly match. This gives an excellent possibility to predict the expected speed-up of fault simulation by the proposed method with a simple calculation of the formula (5.25).

Fig. 5.32 Comparison of the speed-up of simulation with higher bound [218]

Fig. 5.33 Dependence of speed-up of simulation on the complexity of circuits [218]

Fig. 5.34 Dependence of gain on the feedback level [218]

Figure 5.32 illustrates the dependence of the gain in simulation speed-up on the complexity of the circuits (the number of faults). To smoothen the irregularities of the data for different circuits on the X-axis, as shown in Fig. 5.33, and to follow the trends of the compared data, we compare the cumulative gain and the cumulative number of faults over the complete set of benchmark circuits used in the experiments. The diagrams in Fig. 5.33 show that the trend of the gain is nearly linear over the complete set of circuits ranked in the order of their complexity. We see from Fig. 5.33 that the linearity trend also continues in the second part of the ranked list of circuits (from 19 to 27), where the complexity increases rapidly. This finding allows us to claim that the proposed method is well scalable, because the method becomes more efficient as the complexity of the circuits increases. The cumulative functions in the graphs of Fig. 5.33 smoothen the anomalies of the gain curve in Fig. 5.32 and the nonlinearity of the spread of the number of faults on the X-axis. We can explain this anomaly by the two charts in Fig. 5.34, which show the feedback intensities (the share of the faults propagating over the feedback loops) together with the gain in speed-up of the simulation for the full range of benchmark circuits. To characterize the feedback intensities, we use the values of S(B) in Table 5.25, which represent the percentage of faults that do not propagate directly to the primary outputs but start propagating via the feedback loops to the subsequent clock cycles. These faults represent the bottleneck of the proposed method, because we cannot simulate them by the fast critical path tracing. In Fig. 5.34, we see that for all circuits where S(B) is high, the gain from the method is low, and vice versa. We compared the proposed method with two conventional fault simulation methods for sequential circuits as baselines, i.e. simulation with and without fault dropping.


Fig. 5.35 Comparison of the speed of the proposed method with two baselines: conventional fault simulation with and without fault dropping [218]


The time costs are depicted in Table 5.25 in columns 10 and 11, respectively. The comparison results for the proposed method are also shown in Fig. 5.35. We see that the proposed method outperforms the traditional fault simulation without fault dropping, but loses in speed to the simulation with fault dropping. Note, however, that the fault-dropping-based fault simulation approach can only be used for evaluating the fault coverage of the given test. The simulation without fault dropping provides not only the fault coverage data but also data for fault diagnosis purposes, e.g., the data needed for creating fault tables and fault dictionaries. On the other hand, the core of the proposed method is the parallel critical path tracing of faults, which is substantially tailored to obtaining diagnostic information for each test pattern separately, similar to the sequential simulation without fault dropping. Hence, the comparison baseline for the proposed method should be the simulation without fault dropping, and it would even be unfair to compare the new method with the simulation method that uses fault dropping. In Fig. 5.34, we saw that high values of S(B), which characterize the sequentiality level of the given circuit, act against the efficiency of the proposed method. The values of S(B) are high when the ratio of the number of flip-flops to the number of primary outputs (#FF/#out) is very high. On the other hand, we also see from Fig. 5.34 that if the value of S(B) decreases, then the speed of the proposed method increases. This fact can motivate the redesign of sequential circuits for better observability to reduce the cost of testing, both the test length and the time of fault simulation. Table 5.26 and Fig. 5.36 present the results of an experiment with 8 different designs of the circuit b17 with different numbers of flip-flops made observable by additional primary outputs. Row 1 corresponds to the original circuit b17 used in Tables 5.24 and 5.25, whereas the last row shows the extreme case where all flip-flops are directly observable. Here, we see a dramatic speed-up of the simulation and an increase in the gain as the number of directly observable flip-flops grows.


Table 5.26 Dependence of the speed-up gain on the observability of flip-flops of the benchmark circuit b17 [218]

Design no   s(A) (%)   s(B) (%)   # Observable FFs   t_CP (s)   t_CL (s)   t_NEW (s)   Gain
1           0.3        25.0       0                  15.80      2.01       3227        4
2           3.8        21.5       200                15.71      2.03       631         20
3           6.6        18.7       400                15.61      1.94       551         23
4           8.4        16.9       600                15.73      1.88       500         26
5           9.8        15.5       800                15.93      1.88       460         28
6           15.0       10.3       1000               15.70      2.04       312         41
7           23.6       1.7        1200               15.69      2          65          197
8           25.3       0.0        1415               15.74      2.03       18          724

Fig. 5.36 Relationship between the speed-up gain and the observability of flip-flops of the benchmark circuit b17 [218]


As already mentioned, the results of the experimental research suggest a very fast procedure for predicting the speed-up achieved by the new method compared to the conventional methods. Since t_anal = t_CP + t_CL