Foundations of Embedded Systems [1st ed.] 978-3-030-11960-7, 978-3-030-11961-4

This book is devoted to embedded systems (ESs), which can now be found in practically all fields of human activity.


English Pages XIII, 167 [180] Year 2019


Table of contents:
Front Matter ....Pages i-xiii
Introduction into Embedded Systems (Alexander Barkalov, Larysa Titarenko, Małgorzata Mazurkiewicz)....Pages 1-22
Design of Embedded Systems (Alexander Barkalov, Larysa Titarenko, Małgorzata Mazurkiewicz)....Pages 23-48
Implementation of Computational Algorithms in Embedded Systems (Alexander Barkalov, Larysa Titarenko, Małgorzata Mazurkiewicz)....Pages 49-80
Field Programmable Gate Arrays (Alexander Barkalov, Larysa Titarenko, Małgorzata Mazurkiewicz)....Pages 81-106
Implementing Control Algorithms with FPGAs (Alexander Barkalov, Larysa Titarenko, Małgorzata Mazurkiewicz)....Pages 107-144
Programmable Logic Controllers (Alexander Barkalov, Larysa Titarenko, Małgorzata Mazurkiewicz)....Pages 145-162
Back Matter ....Pages 163-167


Studies in Systems, Decision and Control 195

Alexander Barkalov Larysa Titarenko Małgorzata Mazurkiewicz

Foundations of Embedded Systems

Studies in Systems, Decision and Control Volume 195

Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]

The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control–quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output.

More information about this series at http://www.springer.com/series/13304

Alexander Barkalov · Larysa Titarenko · Małgorzata Mazurkiewicz



Foundations of Embedded Systems


Alexander Barkalov Institute of Metrology, Electronics and Computer Science University of Zielona Gora Zielona Gora, Poland

Larysa Titarenko Institute of Metrology, Electronics and Computer Science University of Zielona Gora Zielona Gora, Poland

Małgorzata Mazurkiewicz Institute of Control and Computation Engineering University of Zielona Gora Zielona Gora, Poland

Kharkiv National University of Radio Electronics Kharkiv, Ukraine

ISSN 2198-4182 ISSN 2198-4190 (electronic)
Studies in Systems, Decision and Control
ISBN 978-3-030-11960-7 ISBN 978-3-030-11961-4 (eBook)
https://doi.org/10.1007/978-3-030-11961-4

Library of Congress Control Number: 2018968085

© Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

In our daily lives, we encounter a huge number of computer systems. Civilized humanity is literally surrounded by them: computers and embedded systems can be found in practically all fields of human activity. The first microprocessors appeared in the 1970s, and they gradually became the basis of computer systems. Today, an overwhelming number of microprocessors are used in the implementation of various embedded systems. It is believed that embedded systems consume more than 95% of all produced microprocessors, and this trend will continue in the future. The development of embedded systems stimulates the development of the Internet of Things; in turn, this leads to an increase in the production of microprocessors. The circle closes, but it is not a vicious circle: expanding the scope of use of embedded systems can significantly improve people's quality of life.

There are some specifics of embedded systems which distinguish them from universal computing systems. First of all, a particular embedded system executes a very limited set of tasks; it is quite possible that only a single task is solved again and again. This allows a designer to optimize the hardware and software of a system. Second, embedded systems monitor and control physical objects. To do so, they must interact with the physical world. This requires reconciling differences in how information is represented, and differences between the speed of information processing (cybernetic processes) and the speed of processes in the physical object. So, most embedded systems belong to the class of cyber-physical systems. Third, many embedded systems are mobile and require batteries as a source of electricity, so for mobile and autonomous systems it is especially important to reduce energy consumption; this extends their useful life. As a rule, universal computer systems do not have such problems: their main task is to perform various calculations as quickly as possible.
So, the designer of embedded systems must have specific knowledge. The up-to-date state of the art in this area is characterized by three major factors. The first factor is the development of ultra-complex VLSI circuits such as "systems-on-programmable-chips" with billions of transistors and hundreds of millions of equivalent gates. The second factor is the development of hardware description languages such as VHDL and Verilog that permit capturing designs of tremendous complexity. The third factor is the wide application of different computer-aided design (CAD) tools to complete very complex projects in a satisfactory time. These three factors have significantly affected the process of embedded systems design. Now hardware design is very similar to the development of computer programs; there is no need to write embedded software using assembly languages. The combined application of hardware description languages and CAD tools allows concentrating the designer's energy on the basic problems of design, whereas routine work remains the prerogative of computers.

Tremendous achievements in the area of semiconductor electronics have turned microelectronics into nanoelectronics. Actually, we observe a real technical boom connected with achievements in nanoelectronics. It results in the development of very complex integrated circuits, particularly different application-specific integrated circuits and field-programmable gate arrays (FPGA). The largest commercial chips have billions of transistors, so a single chip is enough to implement a very complex digital system such as a multiprocessor system-on-a-chip. So, currently, practically all possibilities exist for implementing complex and diverse embedded systems. We say "practically" because today some important factors are missing. First, there are not enough qualified designers to implement complex embedded systems. Second, there are no efficient software tools for formalized design on both the system and processor levels. Third, in Poland there are practically no monographs related to the design of embedded systems; the few books that exist are dedicated to microcontrollers and microcontroller-oriented programming. However, embedded systems are not just microcontrollers.
They may include programmable logic, intellectual property cores, bus arbiters, memory blocks, controllers of standard interfaces, and other blocks.

The book contains six chapters and a conclusion. It was written by a research group from the University of Zielona Góra (Poland). Professors Alexander Barkalov and Larysa Titarenko wrote the Introduction, Chaps. 1–4, and the Conclusion. Chapters 5 and 6 were written by Małgorzata Mazurkiewicz, Ph.D.

Chapter 1 provides some basic issues connected with embedded systems. Two types of computer systems are discussed (visible and invisible to a user). Next, an example is analyzed that allows understanding the basic features of embedded systems. The common features of embedded systems and their characteristics are discussed. The final section is devoted to the basic requirements for embedded systems.

Chapter 2 is devoted to different design methodologies targeting complex embedded systems. The hierarchy of levels is introduced, as well as the Y-chart proposed by Prof. Daniel Gajski. Using the Y-chart, basic methodologies such as bottom-up, top-down, and meet-in-the-middle are discussed. The third section is devoted to the platform-based design of embedded systems. Next, the peculiarities of ASIC-based design are discussed. The last section is devoted to the very important problem of increasing the energy efficiency of embedded systems.


Chapter 3 presents some methods used for implementing computational algorithms in embedded systems. All methods are illustrated using the same end-to-end example. We start from the hardware implementation, where operational units (adders, multipliers and others) are used to implement an algorithm during a single cycle of operation. Next, we discuss how to minimize hardware by introducing an internal control unit. The third section is devoted to the software implementation of computational algorithms based on microcontrollers. Some families of microcontrollers are analyzed. Next, we show how to choose blocks for implementing the embedded system for our example; we choose the ATmega32 microcontroller to illustrate this implementation. At last, the heterogeneous implementation of computational algorithms is discussed. It is based on the well-known concept of hardware-software co-design. We introduce the concept of a hardware-software interface and show how to create an algorithm of its operation.

Chapter 4 deals with field-programmable gate arrays (FPGA). The basic stages of the evolution of programmable logic (from PROMs and PLAs to FPGAs) are shown. Next, the evolution of FPGAs is analyzed and three ages of FPGAs are distinguished. Then, modern FPGAs produced by Xilinx and Intel (Altera) are presented. The last section is devoted to design methods targeting FPGAs. Based on this information, we show how to execute calculations using digital signal processors, embedded memory blocks, and some components of the Zynq 7000 by Xilinx.

Chapter 5 is devoted to the FPGA-based implementation of control algorithms represented using the language of graph-schemes of algorithms. We use models of Mealy and Moore finite state machines (FSM) to design the circuits of control units. We start from single-level FSM circuits implemented using look-up table (LUT) elements and/or embedded memory blocks (EMB). These methods are illustrated using standard benchmark FSMs.
Next, the methods of structural decomposition are discussed. We discuss how to reduce hardware using the replacement of logical conditions, encoding of the collections of microoperations, encoding of the terms, and transformation of objects. We show how to use pseudoequivalent states for the optimization of logic circuits of Moore FSMs. The third section is devoted to the hardware reduction of Moore FSMs targeting the replacement of logical conditions. Finally, we discuss how to optimize hardware by replacing state registers with state counters. The discussed methods target control units based on models of Moore FSMs.

Chapter 6 is devoted to programmable logic controllers (PLC). We start from the classification of PLCs, their architecture, and their cycle of operation. Next, the main laws of Boolean algebra are shown, and the connection between Boolean algebra and the basic logic functions used in PLC programming is explained. Then, different programming languages used for PLCs are presented. The last part is devoted to examples of programming different Boolean functions and simple control algorithms; all programs are written in the Ladder Diagram language.

We understand that many questions have not been examined; it is almost impossible to do so in a single book. However, we have tried to provide basic information sufficient for self-study of other important problems related to embedded systems. We hope that our book will be interesting and useful for students and Ph.D. students in the area of Computer Science, as well as for designers of modern embedded systems.

Zielona Gora, Poland
October 2018

Alexander Barkalov Larysa Titarenko Małgorzata Mazurkiewicz

Contents

1 Introduction into Embedded Systems ... 1
  1.1 Two Types of Computer Systems ... 1
  1.2 Analysis of Example ... 7
  1.3 Common Features of Embedded Systems ... 14
  1.4 Requirements for Embedded Systems ... 18
  References ... 22

2 Design of Embedded Systems ... 23
  2.1 Hierarchy of Levels ... 23
  2.2 Methodologies of Design of Embedded Systems ... 29
  2.3 Platform-Based Design of Embedded Systems ... 32
  2.4 ASIC-Based Design ... 35
  2.5 Improving Energy Efficiency in Embedded Systems ... 42
  References ... 48

3 Implementation of Computational Algorithms in Embedded Systems ... 49
  3.1 Hardware Implementation of Algorithms ... 49
  3.2 Hardware Reduction for Operational Devices ... 54
  3.3 Software Implementation of Algorithms ... 61
  3.4 Heterogeneous Implementation of Computational Algorithms ... 72
  References ... 80

4 Field Programmable Gate Arrays ... 81
  4.1 Evolution of Programmable Logic Devices ... 81
  4.2 Evolution of FPGAs ... 87
  4.3 Modern FPGAs ... 94
  4.4 Design with FPGAs ... 99
  References ... 106

5 Implementing Control Algorithms with FPGAs ... 107
  5.1 Implementing Single-Level Circuits ... 107
  5.2 Basic Methods of Structural Decomposition ... 116
  5.3 Replacement of Logical Conditions in EMB-Based Moore FSMs ... 132
  5.4 Replacement of State Registers by Counters ... 137
  References ... 143

6 Programmable Logic Controllers ... 145
  6.1 Introduction ... 145
  6.2 Classification of PLCs ... 146
  6.3 Architecture of PLC ... 147
  6.4 Principle of Operation of PLC ... 148
  6.5 Programming the PLCs ... 149
  6.6 Introduction to Logical Operators ... 153
  6.7 Finite State Machine ... 159
  References ... 161

Conclusion ... 163
Index ... 165

Abbreviations

ADC	Analog-to-digital converter
ALM	Adaptive logic module
ALU	Arithmetic and logical unit
ASIC	Application specific integrated circuit
ASSP	Application specific standard product
BE	Basic element
BF	Block of functions
BIMF	Block of input memory functions
BMO	Block of microoperations
CAD	Computer-aided design
CB	Communication block
CE	Communication element
CFM	Configuration flash memory
CLB	Configurable logic block
CMO	Collection of microoperations
CMOS	Complementary metal-oxide-semiconductor
CPLD	Complex programmable logic device
CPS	Cyber-physical system
CPU	Central processing unit
DAC	Digital-to-analog converter
DC	Decoder
DSP	Digital signal processor
DST	Direct structure table
DVFS	Dynamic voltage and frequency scaling
EDA	Electronic design automation
EEPROM	Electrically erasable programmable read-only memory
EMB	Embedded memory block
EMBer	Logic block consisting of EMBs
ES	Embedded system
ESP	Embedded system platform


FF	Flip-flop
FPGA	Field-programmable gate array
FPLD	Field-programmable logic device
FSM	Finite state machine
GAL	Generic array logic
GFT	Generalized formula of transition
GOPS	Giga operations per second
GPU	Graphic processing unit
GSA	Graph-scheme of algorithm
HSC	Hardware-software co-design
HSI	Hardware-software interface
HW	Hardware
HWA	Hardware accelerator
I/O	Input-output
IC	Integrated circuit
ICU	Internal control unit
IE	Interface element
IMF	Input memory functions
IoT	Internet of Things
IP	Intellectual property
ISA	Industry Standard Architecture
IT	Information technology
LAB	Logic array block
LAN	Local area network
LCD	Liquid-crystal display
LCS	Linear chain of states
LUT	Look-up table
LUTer	Logic block consisting of LUTs
MC	Microcontroller
ME	Memory element
MEMO	Mixed encoding of microoperations
MO	Microoperation
MoC	Model of computation
MP	Microprocessor
MPSoC	Multiprocessor system-on-a-chip
OS	Operating system
PAL	Programmable array logic
PC	Personal computer
PCB	Printed circuit board
PCI	Peripheral component interconnect
PE	Processing element
PES	Pseudoequivalent states
PG	Productivity gap
PI	Programmable interconnect
PLA	Programmable logic array


PLL	Phase-locked loop
PLS	Programmable logic sequencer
PMM	Power-mode management
PROM	Programmable read-only memory
PU	Processing unit
RAM	Random access memory
RG	Register
RLC	Replacement of logical conditions
ROM	Read-only memory
RTL	Register transfer level
RTOS	Real-time operating system
RTS	Real-time system
SB	Switching block
SBF	System of Boolean functions
SFT	System of formulae of transitions
SG	System gate
SoC	System-on-a-chip
SOP	Sum of products
SPLD	Simple programmable logic device
SRAM	Static random access memory
SSI	Stacked silicon interconnection
STG	State transition graph
STT	State transition table
SW	Software
TOC	Transformation of object codes
UFM	User flash memory
ULSI	Ultra large scale integration circuit
USB	Universal serial bus
VGA	Video graphic array
VLSI	Very large scale integration circuit

Chapter 1

Introduction into Embedded Systems

Abstract This chapter provides some basic issues connected with embedded systems. Two types of computer systems are discussed (visible and invisible to a user). Next, an example is analyzed that allows understanding the basic features of embedded systems. The common features of embedded systems and their characteristics are discussed. The final section is devoted to the basic requirements for embedded systems.

1.1 Two Types of Computer Systems

In the modern civilized world, a human being is literally surrounded by various computer systems. We live in the biosphere, the natural physical world; but in the process of evolution, mankind created an artificial world consisting of various artefacts. Inter alia, this artificial world includes various machines and mechanisms that facilitate the performance of different physical activities. In addition, mankind has created several virtual worlds (or spaces). One of them is the noosphere, or the sphere of knowledge accumulated by people. Drawing on the resources of the noosphere, people can create different artefacts and new knowledge. Since the second half of the twentieth century, people have begun to create cyberspace. This is a virtual space where the processing and transmission of digital information take place. This space is hidden inside different computer systems and communication channels. The term "cyberspace" first appeared in the short story "Burning Chrome" written by William Gibson in 1982.

Now, it is enough to have an Internet-connected personal computer (PC) to get access to a huge variety of services. Some of the available services are shown in Fig. 1.1. These services are obvious and familiar to any user. For access to services, the user needs a modem. The services themselves are provided by different servers. A server is a device which provides functionality for other devices called "clients". So, the user from Fig. 1.1 is a client of such servers as Google, Skype, Amazon and so on. Computer systems thus form the source (the user's PC) and end (a server) points of communication via the Internet (or other networks).


[Fig. 1.1 Services available via the Internet — the user's PC connects through a modem to services such as information search (Google), communication with people (e-mail, Viber, Skype), reservation of tickets (LOT, PKP, PKS, e-ticket), buying goods (Amazon, Allegro, Showroom), booking hotels (Booking.com), searching free e-books (Smashwords, Flibusta), payment of utilities and mobile banking (Visa, MasterCard, Eurobank, ING), and other services.]

The computer is a part of any computer system. There are six categories of computers (Fig. 1.2). Let us give a short characteristic of each type:

1. A tablet computer (or simply, a tablet) is a portable PC typically having a mobile operating system (OS), a liquid-crystal display (LCD) touchscreen and a rechargeable battery in a single thin, flat package. It belongs to a unique user (its owner).
2. A laptop computer (or simply, a laptop or notebook) is a small portable PC with a "clamshell" form factor: it consists of two sections that fold via a hinge. A laptop has an alphanumeric keyboard and a thin computer screen (either LED or LCD). As a rule, there is only a unique owner of such a computer.
3. A personal computer is a multi-purpose computer whose size, capabilities and price make it feasible for individual use. As a rule, it includes some separate blocks such as a system block, display, keyboard and mouse.

As follows from Fig. 1.2, these computers (tablets, laptops and PCs) are used for editing texts, Web searching, playing computer games, communication (e-mail, Skype, Viber) and managing home finances. Very often, users develop software using personal computers.


[Fig. 1.2 Six categories of computers — tablet, laptop, PC, mini computer (server), mainframe computer and supercomputer — with their typical uses: editing texts, Web search, computer games, e-mailing, home finance and software development (tablet, laptop, PC); workstations and bulk data processing for banks, government and telecommunications (mini computers and mainframes); weather forecasting and molecular modelling for the army, big corporations and scientific organizations (supercomputers).]

4. A mini computer is a small computer used for control, instrumentation, human interaction and communication. Mini computers were developed in the mid-1960s; a good example is the PDP family by DEC. In the beginning, they were mostly used as workstations running computer-aided design (CAD) software. Now they are mainly used as small and mid-range servers operating business and scientific applications.
5. A mainframe computer is a general-purpose computer having much greater computing power than a minicomputer. Mainframes are used by large organizations for critical applications and bulk data processing (census, planning and so on). They are used by rather rich organizations having their own computer centres. Of course, they can also be used by different users from different organizations due to cloud computing.
6. A supercomputer is a computer with a high level of computing performance compared to a mainframe computer. Supercomputers are used for a wide range of computationally intensive tasks in various fields such as quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling and physical simulations (airplane and spacecraft aerodynamics, the detonation of nuclear weapons, nuclear fusion).

All the aforementioned computers are general-purpose computers. It means that they are used to solve a wide range of scientific, technical, economic and other problems. They have a large memory, a well-developed man-to-machine interface and a lot of input-output units. Starting in the mid-1970s, microprocessors have been used for implementing the central processing units (CPU) of computers. A microprocessor (MP) is a computer processor which incorporates the functions of a CPU on a single integrated circuit. The microprocessor is a multipurpose, clock-driven, register-based digital integrated circuit (IC) which processes digital information.


A distinctive feature of the computers shown in Fig. 1.2 is that they are visible either to the user or to maintenance personnel. Usually, a failure of such a computer does not lead to accidents involving other equipment; they do not pose a threat to the environment. The exceptions are computers that control traffic or technological processes, but even in these critical cases the intervention of an operator (a human being) may prevent an accident. However, a large number of computer systems are invisible to their users. They are embedded in different equipment and carry out the functions of monitoring and control of this equipment [1, 2]. Such invisible computer systems are called embedded systems. An embedded system (ES) can be defined as follows: an ES is a computer system with a dedicated function, incorporated into a larger mechanical or electrical system in order to control this larger system. There is an opinion that 98% of all microprocessors are used as components of ESs. Some important areas of application of embedded systems are shown in Fig. 1.3. They are the following:

1. Consumer electronics. Audio and video devices are a very important sector of the electronics industry. Including embedded systems in TV sets leads to smart TV sets. Smartwatches, very popular in recent times, are based on ESs. Different digital signal processors (DSP) are used in mobile phones, turning them into smartphones. Also, embedded systems can be found in washing machines and dishwashers, microwaves and electric furnaces, game consoles and GPS devices. This sector is constantly expanding.
2. Automotive electronics. There is a significant number of microprocessors in modern cars. They form distributed embedded systems controlling the operation of the engine, airbags, anti-lock braking system (ABS), GPS, and so on. Almost every year, new systems enter this field. For example, automatic car parking systems make life much easier for drivers.
3. Aircraft electronics. Modern planes, helicopters and satellites are literally filled with various electronic devices. The term "avionics" was coined specifically to denote the electronic systems used in aircraft. Let us point out that the Apollo Guidance Computer (1960, MIT Instrumentation Laboratory) is the first example of an embedded system; so, the first known ES was designed for a spacecraft. Now, ESs are applied in the systems used for flight control, anti-collision, pilot information and others.
4. Military applications. The last seventy years are connected with the wide application of computer systems to enhance the defence of states. Let us point out that the first mass-produced ES was developed for the control of the Minuteman ballistic missile: the on-board computer Autonetics D-17 (1961). Now embedded systems can be found in unmanned aerial vehicles (drones), fire control systems, rugged routers and so on. As a rule, military organizations have huge budgets, which gives them the opportunity to sponsor frontier research and use the results; these results can then be used by civil organizations. In this area there is now a trend of combining individual military-oriented embedded systems in the framework of the so-called Internet of Battlefield Things.

[Fig. 1.3 Application of embedded systems — consumer electronics (TV sets, mobile phones, smartwatches, game consoles, GPS, washing machines, microwaves, dishwashers); military applications (fire control systems, UAV controllers); automotive electronics (air-bag, engine control, anti-lock braking and GPS control systems); aircraft electronics (flight control, anti-collision and pilot information systems); train electronics (engine and safety control systems); medical systems (tomograph, digital X-ray); fabrication equipment; smart buildings (light, HVAC and irrigation control, effect of presence); smart city; robotics; authentication systems (face control, fingerprint control, SMART pen); Internet of Things; telecommunication.]
5. Medical systems. Using digital processing allows improving the medical equipment. Now, different medical devices are transformed into smart-devices. For example, there are known such devices as automatic internal defibrillators, magnetic resonators, digital x-rays, electronic scales, individual activity monitors. Using remote access gives possibility to organize the remote health monitoring. The increase in life expectancy stimulates research in this area. 6. Internet of Things (IoT). The Internet of Things is a network of physical devices, vehicles, home applications and other items connected trough a network. The network enables these objects to connect and exchange data. This term was coined in 1999 by Kevin Ashton of Procter & Gamble. Experts think that IoT will consist of about 30 · 109 objects by 2020. The IoT allows objects to be sensed or controlled remotely across existing network infrastructure. It creates opportunities for more direct integration of the


physical world into cyberspace. This results in improved efficiency, accuracy and economic benefits, in addition to reduced human intervention. It seems that the IoT finds applications in nearly every field. It started with radio frequency identification (RFID) devices and has constantly expanded its area of application. 7. Smart buildings. Embedded systems can be used to increase the comfort level in buildings. They can reduce energy consumption and improve safety and security. As a rule, distributed ESs are used for this purpose. Subsystems such as lighting control, HVAC (heating, ventilation and air-conditioning) control, irrigation control and others are connected into a single system. This approach makes it possible, for example, to create the effect of presence in an empty house. The system automatically turns the lights, TV or radio on and off, opens and closes the blinds and so on. When someone is approaching the house, the system can simulate a dog barking or simply call the police. In the event of fire, the system may call the fire department. This approach also makes remote control possible (for example, through a smartphone). 8. Smart city. It is a development of the IoT and smart-building concepts. A smart city is an urban area that uses different types of electronic data-collection sensors to supply information which is used to manage assets and resources efficiently. This includes data collected from citizens, devices and assets. The data are used to monitor and manage traffic and transportation systems, power plants, water supply networks, waste management, law enforcement, information systems, libraries, schools, hospitals and other community services. Currently, these technologies are developing rapidly and the future belongs to them. Naturally, such a high level of automation allows hackers to completely paralyse the life of a smart city. This necessitates the development of effective security tools protecting the information systems serving a smart city. 9. Telecommunication. Telecommunication is the transmission of information (signs, signals, messages, words, writings, images and sounds) by wire, radio, optical or other electromagnetic systems. A telecommunications network is a collection of transmitters, receivers and communication channels that send information to one another. The largest network is the Internet. There are also local area networks (LANs) and wide area networks (WANs). Telecommunication systems use a lot of embedded systems, including routers, cryptographic systems, network bridges and so on. Only these systems make possible the appearance of smart buildings, smart cities, the Internet of Things and even the World Wide Web. There are other examples of embedded systems. They are used in robotics, train electronics, fabrication equipment, industrial printers, identification (authentication) systems and so on. So, there is a huge variety of embedded systems. They can be physically different, but they also share some common characteristics. To find these common characteristics, let us discuss some examples.


1.2 Analysis of Example

To understand the problems connected with the development of embedded systems, let us discuss a very simple example: a system of temperature control. The system could be used, for example, in a smart building.

Example 1.1. After the power is turned on, the system maintains the room temperature within the specified limits. To achieve this, the system controls an air-conditioner having two modes of operation: heating and cooling. If the temperature exceeds the upper limit (Tmax), then the cooling mode is activated. If the temperature is below the lower limit (Tmin), then the heating mode is turned on. If the temperature reaches a preset average value (T0), the air-conditioner turns off. The system switches into a stand-by state and monitors the temperature.

1. Analysis of the problem. To organise the operation of the system, it is necessary to have the air conditioner (ACD), a temperature measuring device (TMD), and a device for switching the operation modes (SD) of the conditioner (Fig. 1.4). There are three switches for control of the ACD’s mode of operation: cooling on (C-on), heating on (H-on) and ACD turning off (ACD-off). The TMD measures the temperature of the room, which is transmitted through the Data channel. The ES receives these data and analyses them. The ES gives the SD control commands determining the mode of operation of the ACD. If there is the command for the cooling mode, then the SD executes the action A1. If there is the command for the heating mode, then the SD executes the action A2. If the ACD should be turned off, the SD executes the action A3.

2. Modes of operation of the ES. The modes of the ES’s operation depend on two factors. The first factor is the initial room temperature, i.e. the temperature immediately after power-up. The second factor is the process of changing the temperature in the room depending on external conditions. For example, in winter the room cools quickly, while in summer the room can quickly heat up (after the ACD is turned off). So, there are four different models of interaction between the ES and the physical environment: (a) TR > Tmax; (b) TR < Tmin; (c) summer mode and (d) winter mode. They are shown in Fig. 1.5a–d. The symbol TR stands for the initial temperature of the room. Let us analyse Fig. 1.5a. Let the power come up at t1. The ES needs some time to analyse the data from the TMD. The solution is found at the instant t2.

Fig. 1.4 Organisation of air-conditioning control system


Fig. 1.5 Modes of interaction: TR > Tmax (a), TR < Tmin (b), summer mode (c), winter mode (d)

The cooling mode turns on at t2. At the instant t3, the temperature reaches the level T0. The ES analyses this situation and makes a decision. At the instant t4, the ACD turns off. Let us analyse Fig. 1.5b. Let the power come up at t1. The ES finds that TR < Tmin (the instant t2). It turns on the heating mode. The points t3, t4 are the same as for Fig. 1.5a. Let us discuss the summer mode (Fig. 1.5c). At the instant t = 0, the ES is in the standby mode; the ACD is not in operation. The temperature begins to rise, but the ES remains in standby mode till the instant t1. At this instant, T > Tmax. After finding the occurrence of this event, the cooling mode is activated (the instant t2). At the instant t3, T = T0. The ES finds this event and makes a decision to turn the ACD off (the instant t4). However, due to the high external temperature, the room temperature begins to grow again. The ES is in the standby mode after the instant t4. The instant t5 is similar to t1, and t6 to t2. This mode of operation is periodically repeated. The mode of operation for winter conditions is clear from Fig. 1.5d. It is practically the same as the summer mode, but it is necessary to heat up the room. 3. Algorithm of the ES’s operation. As a result of the performed analysis, it is possible to work out an algorithm of operation of the discussed ES. It is shown in Fig. 1.6. The node “Start” corresponds to the moment of power-up. The ES receives data from the TMD. These data are analysed to make a decision. If T > Tmax, then the cooling mode switches on (C-on := 1). Next, the system waits till the temperature reaches the value T0. It corresponds to a waiting node checking the condition T = T0. When the temperature reaches the level T0, the ACD switches off (ACD-off := 1). The system transits into the standby mode. When the standby mode ends, the system checks the data again. If T ≤ Tmax but T < Tmin, then another branch of the algorithm is activated. In this case, H-on := 1 and the ES transitions to the waiting node.
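As a rough illustration, the branching logic of this algorithm can be sketched in a few lines of Python (a simulation sketch; the limit values, mode names and function below are illustrative assumptions, not part of the original design):

```python
# Sketch of the control algorithm of Fig. 1.6 (illustrative values and names).
T_MIN, T0, T_MAX = 19.0, 21.0, 23.0   # lower limit, preset average, upper limit

def control_step(t, mode):
    """One decision of the ES: returns the new ACD mode for temperature t."""
    if mode == "off":                  # standby: decide whether to start the ACD
        if t > T_MAX:
            return "cooling"           # C-on := 1
        if t < T_MIN:
            return "heating"           # H-on := 1
        return "off"                   # Tmin < T < Tmax: stay in standby
    # cooling or heating: wait until the preset average value T0 is reached
    if (mode == "cooling" and t <= T0) or (mode == "heating" and t >= T0):
        return "off"                   # ACD-off := 1
    return mode

print(control_step(24.0, "off"))       # temperature above Tmax -> "cooling"
```

Note that the sketch tests t ≤ T0 rather than the exact equality T = T0: with sampled temperatures, the exact value T0 may be skipped between two checks.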

Fig. 1.6 Algorithm of operation of air-conditioning control ES

If the temperature mode is normal (Tmin < T < Tmax), then the standby mode is activated. It is the third branch of the algorithm. There is no end node in this algorithm. The operation can be terminated at any instant of time by switching the power off. The following conclusions can be made from this example. The digital embedded system interacts with the air conditioner. The digital system controls the physical process of changing the temperature in a confined space. The parameters of the physical process (the values of temperature) are used to select the appropriate branches of the control algorithm. In turn, the control signals of the ES are converted into actions that change the state of the physical process, i.e. the temperature. Thus, the ES and the physical process mutually change each other’s states in the course of interaction. We do not discuss how to organize the standby mode. This issue does not influence the process of interaction; it is rather an internal function of the ES. We can only say that the standby mode is organized with the help of a timer. 4. Modelling the interaction of systems. So, there is an interaction between the embedded system and the air-conditioning system. It can be represented by the structural diagram of Fig. 1.7. In a broader sense, it is the interaction between the physical and digital (cyber) worlds. Physical processes occur in real time. They have a continuous character, as physical time itself does. Digital processes occur in discrete time. The moments of state changes of discrete processes are determined by a special clock generator which sets the course of discrete time. Before implementing

the ES, it is necessary to check the correctness of the control algorithm and the correctness of the interactions. To do this, we can use mathematical models of the physical world and the embedded system. To simulate the process of temperature change, we can use the equations of thermodynamics: in the discussed case, the differential equations describing the process of temperature change in continuous time. To simulate the dynamics of the ES, we can use the model of a finite state machine (FSM). In this case, the system is represented by a 6-tuple

S = ⟨A, X, Y, δ, λ, a1⟩.  (1.1)

Fig. 1.7 Structural diagram of interaction (physical world: temperature, TMD; digital world: embedded system; data and control signals flow between them and the ACD)

In (1.1), A = {a1, …, aM} is a finite set of states, X = {x1, …, xL} is a set of input variables, Y = {y1, …, yN} is a set of output variables, δ: A × X → A is a function of transitions, λ: A → Y is a function of outputs, and a1 ∈ A is an initial state. It is a Moore FSM. Each node of the algorithm including some actions is represented by a unique state. Each conditional node corresponds to some logic condition (input variable). Each action corresponds to some output variable. For example, the following sets X and Y could be derived from Fig. 1.6: X = {x1, x2, x3} and Y = {y1, y2, y3, y4, y5}. Let us form the list of these variables:

x1 = 1 if T > Tmax;
x2 = 1 if T < Tmin;
x3 = 1 if T = T0;
y1 = 1 for checking the value of T;
y2 = 1 for C-on := 1;
y3 = 1 for H-on := 1;
y4 = 1 for ACD-off := 1;
y5 = 1 for organizing the standby mode.

Using this list, we can obtain the state transition graph (STG) shown in Fig. 1.8. As follows from Fig. 1.8, there is a set A = {a1, …, a5}. The state a5 ∈ A is added to organize the waiting mode of operation (waiting until T = T0). Both models can be represented by computer programs. This allows simulating the process of interaction between the physical and digital worlds.
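Such a Moore FSM can be executed directly as a program. The sketch below is one plausible reading of the STG of Fig. 1.8; the exact transition table is reconstructed from the algorithm, so treat it as an assumption:

```python
# Moore FSM sketch: states a1..a5, inputs x1..x3, one output per state.
OUTPUTS = {"a1": "y1", "a2": "y2", "a3": "y3", "a4": "y5", "a5": "y4"}

def delta(state, x1, x2, x3):
    """Transition function delta: A x X -> A (reconstructed, assumed)."""
    if state == "a1":                       # check temperature
        return "a2" if x1 else ("a3" if x2 else "a4")
    if state in ("a2", "a3"):               # cooling / heating
        return "a5" if x3 else state        # loop until T = T0
    if state == "a5":                       # ACD switched off
        return "a4"                         # go to standby
    return "a1"                             # a4: standby over, check again

# One run: T > Tmax, cool until T = T0, switch off, standby, check again.
state, trace = "a1", []
for x in [(1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 0, 0), (0, 0, 0)]:
    state = delta(state, *x)
    trace.append(OUTPUTS[state])
print(trace)   # ['y2', 'y2', 'y4', 'y5', 'y1']
```

The dictionary plays the role of the output function λ: A → Y; the Moore property is visible in that the output depends only on the current state, never on the inputs.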

Fig. 1.8 State transition graph of FSM corresponding to ES

Fig. 1.9 Transmission of data from sensor (sensor → sample-and-hold (TU, CU) → ADC → binary code)

5. Organization of interaction. There are three problems that need to be solved to organize the interaction. The first relates to the different nature of the signals transmitted by the TMD and the signals necessary for digital processing. The TMD is a sensor, i.e. a device transforming the value of a measured parameter into a corresponding electric voltage. Voltage is a continuous value, but an ES can process only binary codes. So, it is necessary to use converters of analog signals to digital codes. Such devices are called analog-to-digital converters (ADC). After forming the voltage, the sensor should pass it to the ADC. Let us use an electrical wire as the medium for voltage transmission. The transferred voltage must be constant during its processing by the digital part of the ES. To achieve this, we can use a “sample-and-hold” device (SAH). It is shown in Fig. 1.9 together with the sensor and ADC. The scheme (Fig. 1.9) operates as follows. The sensor continuously converts temperature (T) into voltage (Ux). If y1 = 1, then the transistor TU opens and the capacitor CU is charged up to the level of Ux. When the capacitor is charged, the signal y1 = 0 is generated. It closes the transistor TU. So, the capacitor maintains the level of Ux as a constant level. Together, TU and CU form the SAH. The second problem is the discrepancy between the levels of the currents generated by the digital part of the ES and those required for control of the ACD. For example, the digital part generates microamperes (μA), while controlling the switches of the ACD requires dozens of milliamperes (mA). To solve this problem, it is necessary to use a digital-to-analog converter (DAC) between the digital part and the switches of the ACD (Fig. 1.10). The third problem is related to the different frequency characteristics of the physical process and the processing of digital information. Processing of digital data can be performed with a frequency of hundreds of megahertz (MHz). But the temperature may change very slowly.
For example, the rate of change could be equal to one Celsius degree in two minutes. This gives a frequency equal to about 0.0083 Hz. As you can see, the difference in operating frequencies is huge. During calculations, electrical energy is consumed by the digital part. This leads to early deterioration of the battery and the need for its replacement. If the system is powered from the mains, it is necessary to pay more for the electricity used. To reduce energy consumption, it is advisable to switch the ES into standby mode. We can use timers to organize the standby mode. A timer is a specialized type of clock for measuring time intervals; it counts down from a specified time interval. Let us use a timer which is activated by the signal TACT and in five minutes generates a signal TEND. Using these signals, we can disconnect the clock generator (CG) from the digital circuit (Fig. 1.11). If TACT = 1, then the timer starts operation and at the same time the trigger T generates the signal CC = 0. The switch moves to the closed state and the Clock pulses are not received by the digital part of the ES. When the countdown is over, the signal TEND is generated. Now CC = 1 and the Clock pulses enter the ES again. The valid standby time can be obtained from the results of jointly modelling the physical process and the ES’s operation. Note that because of the standby mode a situation is possible in which there is no way to determine the moment when T = T0. Thus, it is necessary to clarify the algorithm of the ES’s operation.

Fig. 1.10 Organization of connection between digital part and ACD (signals y2, y3, y4 → DAC → currents I2, I3, I4 → switches SwC, SwH, Swoff of the ACD)

Fig. 1.11 Organization of standby mode (TACT starts the timer; trigger T and a switch gate the Clock pulses from the clock generator CG to the digital part; TEND re-enables them)

6. Improvement of algorithm. As became clear from the previous point, it is necessary to make some changes to the original algorithm (Fig. 1.6). Now, when cooling, it is necessary to check the condition T < T0. Next, when heating, it is necessary to check the condition T > T0. To organize the standby mode, it is necessary to generate the signal TACT = 1. The exit from this mode occurs automatically using TEND = 1. The improved algorithm is shown in Fig. 1.12.
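A quick calculation, using the numbers given in the text, shows why the exact moment T = T0 can be missed while the clock is gated:

```python
# Worst-case temperature drift during one standby interval (values from text).
rate = 1.0 / 120.0        # maximal rate of change: 1 Celsius degree per 2 min
standby = 5 * 60.0        # standby interval of the timer: 5 min, in seconds
drift = rate * standby    # temperature change the ES cannot observe
print(round(drift, 3))    # 2.5 degrees: T may cross T0 between two checks,
                          # hence the conditions T < T0 / T > T0 in Fig. 1.12
```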
The discussed example clearly shows the typical initial stages of ES development. The process can be represented as in Fig. 1.13. Now, a designer has all the information needed to start the process of ES implementation. We discuss this stage in Chap. 2.

Fig. 1.12 Improved algorithm of ES’s operation

Fig. 1.13 Stages of the process of development of embedded systems: the concept of the developed system; analysis of the problem; development of a preliminary algorithm of the system’s operation; development of models of the physical process and behaviour of the ES; modelling of interaction; the choice of equipment for the organization of interaction; the implementation of the initial algorithm of the system’s operation


1.3 Common Features of Embedded Systems

In the previous sections we showed different examples of embedded systems. Obviously, these systems are very diverse. However, they share common traits that distinguish them from other computing systems. The common characteristics of embedded systems are shown in Fig. 1.14. Let us explain them. 1. Cyber-physical systems. A cyber-physical system (CPS) combines computations with physical processes. Embedded systems execute monitoring and control of physical processes. As a rule, there is a feedback loop in which the physical process affects the computation process and vice versa. We showed this nature of ESs in the example from Sect. 1.2. The term was coined by Helen Gill in 2006 (the National Science Foundation, USA). The first part of this term stems from the word “cybernetics”, which comes from the Greek for “to steer, navigate or govern”. The term “cybernetics” was coined by the physicist André-Marie Ampère in 1834 to describe the science of civil government. It was borrowed by Norbert Wiener and used in his famous book “Cybernetics: Or Control and Communication in the Animal and the Machine” (1948). The typical cybernetic model includes two entities: the first of them is a control system, the second is an object of control (Fig. 1.15). The control system monitors the state of the object of control. The state signals carry information about the state of the object of control. The control system acts using a control law and initial data. The control law is represented as a control algorithm. The control system uses both (state signals and the control algorithm) to generate control signals. These signals change the state of the object of control in accordance with the control algorithm. So, there is a feedback loop between the object of control and the control system. The cybernetic system is a closed system with a feedback control loop.
In the case of technical systems, the state signals are delivered through sensors and ADCs to the digital parts of ESs. The digital part can be viewed as a control system generating binary codes of control signals. These signals are transmitted to physical objects using DACs and actuators.

Fig. 1.14 Common characteristics of embedded systems: cyber-physical systems; real-time systems; dedicated systems; reactive systems; systems with dedicated user interface

Fig. 1.15 Typical cybernetic model (control system ↔ object of control, connected by control signals and state signals; initial data enter the control system)

Fig. 1.16 Models of the closed system (model of ES, model of communications, model of physical world)

Fig. 1.17 Explanation of time constraint (event E1 at t0, calculation of a solution, response R1 at t1, deadline Tmax)

So, the system of Fig. 1.7 is a typical cyber-physical system. Its control law is represented by the algorithm shown in Fig. 1.12. In the case of CPSs, different models are necessary. For example, three models describe the system from Fig. 1.7: the model of the ES, the model of communications and the model of the controlled object (Fig. 1.16). Using these models, a designer can get information about the mutual dynamics of the control system and the object of control. This gives different temporal constraints for the ES, as well as other important data. Of course, these models should be accurate but not redundant. 2. Real-time systems. A majority of ESs must meet real-time constraints. It means that the computations should be executed within a given time-frame. In the case of a real-time system (RTS), the time of response determines the quality of the system. The RTS should find a correct solution to a particular problem before some deadline. The response time (tr) should not exceed some maximal time (Tmax). So, if some event occurs, then the RTS should operate as shown in Fig. 1.17. If no event takes place, then there is no action (from t = 0 till t = t0). At the instant t0, the event E1 takes place. The system analyses it, makes some calculations and makes a decision. The response R1 is produced at the instant t1. So, the response time is tr = t1 − t0. If tr ≤ Tmax, then the RTS meets the constraints for the event E1. Obviously, these values can be different for different events. There are three types of RTS: hard, soft and firm. They have different consequences of missing a deadline.
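In code, the constraint of Fig. 1.17 is a simple per-event check (a sketch; the timestamps below are illustrative):

```python
# Response-time check: the RTS meets the constraint for an event iff
# tr = t1 - t0 does not exceed the maximal allowed time Tmax.
def meets_deadline(t0, t1, t_max):
    tr = t1 - t0                  # response time for this event
    return tr <= t_max

print(meets_deadline(0.010, 0.014, 0.005))   # tr = 4 ms <= 5 ms -> True
print(meets_deadline(0.010, 0.017, 0.005))   # tr = 7 ms  > 5 ms -> False
```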


In the case of a hard RTS, missing a deadline is a total system failure. In these systems it is absolutely imperative that the response occurs within the required deadline. For all kinds of systems, a deadline is a given time after the triggering of some event by which a response has to be completed. We can use a function of usefulness of response, U(t), to distinguish the different systems. In the case of a hard RTS, the function U(t) is represented by the following equation:

U(t) = { 1, if t < Tmax; −∞, if t ≥ Tmax }.  (1.2)

Eq. (1.2) is illustrated by Fig. 1.18. Consider the following example. Let a car having speed Vcar = 13.8 m/s hit a tree having speed Vt = 0. To avoid serious injury or even the death of the driver, it is necessary to inflate the air-bag. It is the task of the air-bag control system. Let it take tab = 30 ms to inflate the air-bag, and let the driver’s head be located at a distance of 50 cm from the steering wheel (d = 50 cm). In this case, the head will hit the steering wheel in about 36 ms (it is calculated using some equations from physics). So, the air-bag control system has only 6 ms to make a decision and start inflating the air-bag. The inflation should be executed just at the right time. Negative consequences are associated with both early and late inflation of the air-bag. If the process of filling the air-bag with gas starts too late, then the driver’s head hits the steering wheel. If the process starts too early, then the air-bag will have lost pressure before the head hits it. In both cases, the driver will be harmed. In the case of a soft RTS, the usefulness of a result degrades after its deadline. This degrades the system’s quality of service. So, in this case the deadlines are important, but the system will still function correctly if they are occasionally missed. It is illustrated by Fig. 1.19, as well as by Eq. (1.3).

U(t) = { 1, if t < Tmax; (tT − t)/(tT − Tmax), if Tmax ≤ t < tT; 0, if t ≥ tT }.  (1.3)

As follows from (1.3) and Fig. 1.19, the obtained result is 100% useful if it is produced before Tmax. After that, it is still useful, but the usefulness decreases with time. When the time reaches some extreme value tT, the usefulness is equal to zero.

Fig. 1.18 Hard real-time system’s usefulness

Fig. 1.19 Usefulness of soft RTS

Fig. 1.20 Usefulness of firm real-time systems

There are some examples of soft RTSs: multimedia systems, mobile-phone control or data-acquisition systems. Consider an example from digital television. The picture and sound should be synchronized. If the sound lags behind the image, there is no harm to the viewer, but the quality of service is lost. In the case of a firm RTS, there is also no harm to the controlled system, but there is no benefit from late delivery of services. The usefulness of a firm RTS is represented by Eq. (1.4) and Fig. 1.20.

U(t) = { 1, if t < Tmax; 0, if t ≥ Tmax }.  (1.4)
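The three usefulness functions (1.2)–(1.4) can be transcribed together in a few lines (`t_t` below stands for the extreme value tT of the soft case):

```python
import math

def u_hard(t, t_max):
    # Eq. (1.2): full usefulness before the deadline, total failure after it
    return 1.0 if t < t_max else -math.inf

def u_soft(t, t_max, t_t):
    # Eq. (1.3): usefulness decays linearly between Tmax and tT
    if t < t_max:
        return 1.0
    if t < t_t:
        return (t_t - t) / (t_t - t_max)
    return 0.0

def u_firm(t, t_max):
    # Eq. (1.4): a late result is useless but causes no harm
    return 1.0 if t < t_max else 0.0

print(u_hard(1, 2), u_soft(3, 2, 4), u_firm(3, 2))   # 1.0 0.5 0.0
```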

On-line transaction systems or stock-price quotation systems are examples of firm RTSs. So, they are soft RTSs with a strict deadline. Sometimes, firm RTSs are called stringent. 3. Dedicated systems. An ES executes some control algorithm. It means that such a system executes only the calculations connected with this algorithm. For example, a processor controlling a car engine always controls only this car engine. It will never run a computer game or a multimedia application. Due to this specialization, it is possible to optimize the hardware and software of a particular ES. Due to the optimization, we can design a system with the desired characteristics and not pay as high a price as for general-purpose computing systems. 4. Reactive systems. A typical ES monitors the state of a controlled object. So, it is waiting for the occurrence of some events. The events are determined by signals on the system inputs. For each input, an ES executes some calculations and generates some outputs. So, there are a lot of waiting nodes in control algorithms. Generating an output changes the state of the ES. So, reactive systems can be represented using models of finite state machines. While waiting for events, the system does not perform calculations. So, it is possible to diminish the voltage or to block the synchronization of the digital part of the ES. This helps to save the energy consumed by an ES.


5. Systems with dedicated user interface. As a rule, there is a huge difference between user interfaces for embedded systems and those for general-purpose computers. There are no mice, keyboards, monitors and other similar things in ESs. Instead, they have a dedicated user interface including steering wheels, pedals, push-buttons and small LED displays. Now a reader has general information about embedded systems. Let us analyse the requirements for these systems.

1.4 Requirements for Embedded Systems

There are two requirements for embedded systems (Fig. 1.21): these systems should be dependable and efficient. Let us discuss these issues. A lot of embedded systems interact with the environment because of their cyber-physical nature. So, they are safety-critical and should be dependable. Dependability is especially important when an ES is directly connected with the physical world. There are many examples where dependability is a key issue (control systems in cars, trains, airplanes, military systems, etc.). Dependability can be estimated using several factors (Fig. 1.22). Let us explain them. Reliability is the probability that a system will not fail. It is necessary to diminish the risk of failure. The risk of failure increases with the transition from microelectronics to nanoelectronics. It would seem that progress in technological processes must be accompanied by improved reliability. However, it is not. This paradox was formulated in 2009 by the ITRS organization. It states that decreasing feature sizes of semiconductors will result in reduced reliability of semiconductor devices. Because of this, it is very important to have tools of dependability analysis. It helps to design fault-tolerant embedded systems. To do this, it is necessary to find the failure rates of a system. The characteristics of a system should be improved to make the

Fig. 1.21 Main requirements for embedded systems: dependable systems; efficient systems

Fig. 1.22 Main features of dependable systems: reliability; availability; maintainability; safety; security; survivability

rate of a catastrophe less than 10⁻⁹ per hour. In this case, a system should work for 10⁹ h without a failure. In other words, there is one case of failure per 100 000 systems operating for 10 000 h. Maintainability is the probability that a failing ES can be repaired in a certain time-frame. It can be described as the ease with which the system can be maintained in order to: (a) correct defects or their causes; (b) repair or replace faulty or worn-out components without having to replace still-working parts; (c) prevent unexpected working conditions; (d) maximize a system’s useful life; (e) meet new requirements or (f) cope with a changed environment. Availability is the probability that a system is available. It is the proportion of time an ES is in a functioning condition. This is often described as a mission-capable rate. It can be found as the ratio of the total time a functional unit is capable of being used during a given interval to the length of the interval. In high-availability systems, a metric known as nines is used. It corresponds to the number of nines following the decimal point. With this convention, “five nines” equals 0.99999 (or 99.999%) availability. Availability of an ES is typically measured as a factor of the system’s reliability. If reliability increases, the availability increases, too. It can also be increased by improving testability, diagnostics and maintainability. It is known that improving maintainability during the early design phase is much easier than improving reliability. But it is also necessary to control the reliability of a system. If reliability is not under control, then many sorts of issues may arise. Some of them are: (a) the need for complex testability features such as built-in test sensors, hardware and software; (b) the need for detailed diagnostic procedures; (c) availability of manpower and spare parts and (d) lack of repair knowledge and expert personnel. The problem of unreliability may also get out of control due to the “domino effect” of maintenance-induced failures after problem solving. So, reliability is the most important part of availability. Safety is the ability of a system to operate (normally or abnormally) without danger to human beings and the environment. Failures of primary safety-critical systems lead directly to dangerous situations; failures of secondary safety-critical systems may lead to faults in other systems. Security is the system’s ability to protect itself from accidental or deliberate external attack. It leads to a situation in which confidential data remain confidential and authentic communication is guaranteed. There are three issues connected with security: confidentiality (only authorized individuals have access to data), integrity (data cannot be changed without the knowledge of the people who own it) and availability. Survivability is the ability of an ES to operate in the face of deliberate or accidental attack. It is very important for distributed systems, whose security can be compromised. This issue is connected with resilience, when a system continues to operate in spite of component failures. So, dependability is a very important property of an ES. It is a non-functional requirement, but it reflects the user’s degree of trust in the system. It reflects the extent of the user’s confidence that an ES will operate as expected and that it will not “fail”
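As an illustration of the “nines” convention, availability can be estimated from the mean time between failures (MTBF) and the mean time to repair (MTTR); this standard formula and the numbers below are assumptions for the sketch, not taken from the text:

```python
import math

def availability(mtbf_h, mttr_h):
    # fraction of time the system is in a functioning condition
    return mtbf_h / (mtbf_h + mttr_h)

def nines(a):
    # number of leading nines: "five nines" corresponds to 0.99999
    return round(-math.log10(1.0 - a))

a = availability(99_999.0, 1.0)   # e.g. MTBF of 99 999 h, MTTR of 1 h
print(nines(a))                   # 5, i.e. "five nines" (99.999%)
```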

Fig. 1.23 Redundant embedded system (input data are processed by three “hot” digital parts 1–3; an output comparator forms the agreed result; a fault manager can swap in the idle digital part 4)

in normal circumstances. Systems that are not dependable may be rejected by users because they are unreliable, unsafe or insecure. There are two ways of achieving dependability: redundancy and diversity. Redundancy means that more than one version of a critical component is available. Diversity means that the same functionality is provided in different ways. But adding diversity and redundancy adds complexity, and this can increase the chance of error. For example, there are four digital parts in the ES shown in Fig. 1.23. Three of them are "hot" and one is "idle". The same input data are processed by digital parts 1–3. They produce results which are compared by the output comparator. If all three results are the same, then the agreed result (control signals) enters the other parts of the system. If some result is different, then the fault manager finds the failed part and replaces it by the idle digital part 4. At the same time, the defective digital part should be repaired. Obviously, such a system is more expensive than one having only a single digital part. If the costs of a critical ES failure are very high, then it can be designed using development methods that are not cost-effective for conventional computer systems. These methods are formal methods of software development, methods of static analysis, external quality assurance or independent verification and validation. But even in this case, different failures are possible. For example, some components of an ES may have reached the end of their natural life. A system could be very dependable, but a human operator can make a mistake leading to failure. Unfortunately, there are no systems with dependability equal to 100%. Let us discuss the issue of efficiency. It is the ability to avoid wasting materials, energy, effort, money and time in doing something or in producing a desired result. In more scientific terms, it is a measure of the extent to which input is well used for an intended task or function (output).
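The comparator-and-spare scheme of Fig. 1.23 can be sketched as follows (a simplified Python model of ours; the digital parts are modelled as plain functions, and at most a single faulty part per step is assumed):

```python
def vote(results):
    """2-out-of-3 majority vote over the hot parts' results.
    Returns (agreed_result, index_of_disagreeing_part_or_None)."""
    a, b, c = results
    if a == b == c:
        return a, None
    if a == b:
        return a, 2
    if a == c:
        return a, 1
    if b == c:
        return b, 0
    raise RuntimeError("no majority: more than one part failed")

def process(parts, spare, data):
    """One step of the redundant system: run the three hot parts,
    vote, and let the fault manager swap in the idle spare on a fault."""
    agreed, faulty = vote([p(data) for p in parts])
    if faulty is not None:
        parts[faulty] = spare  # replace the failed digital part
    return agreed
```

With two healthy parts and one faulty part, the vote still yields the correct result, and the faulty part is replaced by the spare for the next step.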
In general, efficiency is a measurable concept, quantitatively determined as the percentage of useful output to total input. So, it is possible to measure efficiency. The main metrics used for evaluating the efficiency of ESs are shown in Fig. 1.24.

Fig. 1.24 Metrics of efficiency of embedded systems: energy, run-time, code size, weight, cost

1. Energy. An ES should be energy efficient. Energy is the ability to do work. In the case of an ES, electrical energy is used. It is measured in Joules or Watt-hours. The second important characteristic is power. Power is the rate at which energy is used: energy is what a source delivers, and power is the rate at which it is delivered. Power is measured in Watts; a Watt equals a Joule per second. Electric power is the rate at which electrical energy is transferred by a circuit. Power cannot be converted or transformed, but energy can. For example, a battery converts chemical energy into electrical energy. Power is energy per unit of time. In an electrical circuit, the power P is a function of voltage V and current I (or resistance R):

P = V · I = V²/R.    (1.5)

Obviously, both voltage and current are functions of time. So, the following formula determines the consumed energy:

E = ∫_{t1}^{t2} P(t) · dt.    (1.6)
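Equation (1.6) can also be evaluated numerically from sampled power measurements; the sketch below (our illustration) uses the trapezoidal rule:

```python
def consumed_energy(power_samples, dt):
    """Approximate E = integral of P(t) dt over [t1, t2] for power
    sampled every dt seconds; the result is in Joules (Watt-seconds)."""
    return sum((p0 + p1) / 2.0 * dt
               for p0, p1 in zip(power_samples, power_samples[1:]))
```

A constant load of 1 W sampled once a second for an hour yields 3600 J, i.e. one Watt-hour.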

A very simple conclusion may be made from (1.6): to decrease the consumed energy, it is necessary to diminish the power used for the execution of some work. In the case of an ES, the work is reduced to the execution of calculations. So, the execution of calculations consumes energy. Computational requirements are increasing very rapidly. For example, from 10 to 100 billion operations per second are necessary for advanced multimedia applications. It means that such systems should use rather expensive specialized hardware. Many embedded systems are mobile devices obtaining energy from batteries, which means that they can use only limited energy. For example, mobile phones could use only 1 Watt for computations. The more efficient a device is, the longer it lives on batteries. If a device takes energy from an electrical network, it is also very important to decrease the energy consumption. This fully corresponds to the modern concepts of green information technologies. Reducing the energy consumption saves the limited resources of the Earth and is environmentally friendly.
2. Run-time efficiency. To execute the necessary calculations, a minimum amount of resources should be used. The system should be able to meet time constraints using minimum hardware and energy. It can be done because ESs are dedicated systems.
3. Code size. As a rule, the executed code is kept inside an ES. There are no hard discs in ESs. So, the code size should be optimized for a given application. Maybe


flash-based memories could change this situation, but that is a future trend. For now, the code size should be as small as possible.
4. Weight. Obviously, a smartphone should not be heavy. A customer will buy the device with less weight. On-board computers must have a reasonable weight. It is true for all portable systems.
5. Cost. Clearly, if two devices have the same functionality, people will buy the cheaper product. So, the cost is a crucial issue for achieving success in a market characterized by high competition. Obviously, a decrease in cost should not be accompanied by a decrease in the quality of the ES.
So, embedded systems are very important. They range from rather simple (temperature control) to very complex (multimedia or flight control). They should be efficient and dependable. They should meet time constraints. They should be friendly to the user and the environment. All these features should be taken into account by designers of embedded systems. Let us discuss the main methodologies of embedded systems design.

References

1. E.A. Lee, S. Seshia, Introduction to Embedded Systems - A Cyber-Physical Systems Approach, 2nd edn. (2015)
2. P. Marwedel, Embedded System Design: Embedded Systems, Foundations of Cyber-Physical Systems, and the Internet of Things, 3rd edn. (Springer, Berlin, 2017)

Chapter 2

Design of Embedded Systems

Abstract The chapter is devoted to different design methodologies targeting complex embedded systems. The hierarchy of levels is introduced, as well as the Y-Chart proposed by Prof. Daniel Gajski. Using the Y-Chart, some basic methodologies are discussed, such as bottom-up, top-down and meet-in-the-middle. The third section is devoted to platform-based design of embedded systems. Next, the peculiarities of ASIC-based design are discussed. The last section is devoted to the very important problem of increasing the energy efficiency of embedded systems.

2.1 Hierarchy of Levels

At the heart of any product is some idea. Such ideas can arise from the needs of the market, but they can also be brilliant insights. In this book, we deal with electronic systems. Any electronic system starts from a description (specification) of its desired properties. It is a rather abstract description of a certain product. To manufacture the product, it is necessary to work out technical documentation. This is done in the process of design. The design process is a methodical series of steps that engineers use in creating functional products (or processes). This process is highly iterative: it is often necessary to repeat some parts of the process. It is a decision-making process using knowledge to convert available resources optimally to meet a stated objective. There are three main stages in the process of the appearance of a product on the market (Fig. 2.1). The design process is connected with the application of different computer-aided design tools [1–3]. Computer-aided design (CAD) is the use of computer systems (or workstations) to aid in the creation, modification, analysis or optimization of a design. CAD software is used to increase the productivity of the designer, improve the quality of design and improve communications for manufacturing. Its use in designing electronic systems is known as electronic design automation (EDA). A characteristic feature of CAD systems is the presence of libraries containing standard solutions. These can be standard software components (subroutines) or schematic solutions. Using libraries allows significantly speeding up the design
© Springer Nature Switzerland AG 2019 A. Barkalov et al., Foundations of Embedded Systems, Studies in Systems, Decision and Control 195, https://doi.org/10.1007/978-3-030-11961-4_2

Fig. 2.1 Three stages of time-to-market: specification of an abstract object → design → technical documentation for manufacturing → manufacturing → physical object with the desired properties
Fig. 2.2 Multi-step nature of the design process
Fig. 2.3 Multicomponent embedded system

process. Now, a designer should develop only those original parts of a system that are absent from the repository. The use of CAD tools allows reducing the time-to-market and makes a product more competitive. The design process is a multi-step one (Fig. 2.2). To execute each step, different libraries are used. The process is iterative and includes a lot of feedback connections. Embedded systems may be very complex. They can include different processing elements (PE), communication elements (CE), memory elements (ME) and interface elements (IE). Processing elements can be represented by either standard processors or custom-made intellectual property (IP) cores. Communication elements are buses or routers. Interface elements are necessary to match different input-output protocols. There are such IEs as transducers, bridges, arbiters and interrupt controllers. Memory elements are represented by local and shared memories. Also, an ES can include some specialized hardware elements (HW). A system can be implemented either as a single silicon chip or as a multi-component circuit placed on a printed circuit board (PCB). An example of an ES is shown in Fig. 2.3. It includes two standard CPUs, a shared memory block, a specialized HW block and an IP block. The arbiter is used to share memory access between CPU1, CPU2 and HW. The bridge is necessary to harmonize the protocols of the CPU bus and the IP bus.


To implement such complex systems, it may be necessary to use billions of transistors. Modern silicon technology allows implementing such a system as a single chip. But it is impossible to design such a circuit starting from the transistor level. To design such complex systems, it is necessary to use their hierarchical representation. In the case of embedded systems, four levels of hierarchy are used: the system, processor, logic and circuit levels. The system level includes different components from the processor level; the processor level includes different components from the logic level, and so on. The components of the circuit level are logic gates consisting of transistors. Each level of the hierarchy has its own library. This library includes elements constructed from components of the previous level of the hierarchy. Let us consider Fig. 2.4. There is a system including two processing elements (PE1, PE2), a memory block (MB), the common bus and its arbiter (AB). This is the system level of the hierarchy. Elements of the system level are constructed using libraries of processor-level elements. In practice, there are no efficient CAD tools for the system level of an ES. The choice of elements is mostly made manually, based on the experience of the design team. The processor level is also known as the register transfer level (RTL). Each element of RTL is represented either as a memory block (registers) or a combinational block (ALU, multiplexer and so on). Buses are used to transfer data between the blocks. Each element of a register file is a register (RG). It is a memory block storing a word of data. Each element of an ALU is an operational (or functional) block executing some arithmetic or logic operation. These blocks are adders (summators), shifters, multipliers, multibit AND blocks and so on. To create buses, multiplexers and buffer elements are used. All these elements constitute the logic level. To create a register, it is necessary to have flip-flops (or latches). All operational blocks are implemented using gates.
As a rule, the NAND gate is at the heart of design. Obviously, flip-flops can be viewed as circuits consisting of NAND gates. Both gates and flip-flops are the elements of the circuit level. But there is one more level which is basic for all other levels: the level of transistors. A transistor is a semiconductor device used to switch or amplify electronic signals. It has at least three terminals for connection with an external circuit. A voltage (or current) applied to one pair of terminals controls the current through another pair of terminals. The first practically implemented device was a point-contact transistor invented in 1947 by American physicists John Bardeen, Walter Brattain and William Shockley. The first working silicon transistor was developed at Bell Labs on January 26, 1954 by Morris Tanenbaum. The first commercial silicon transistor was produced by Texas Instruments in 1954 (Gordon Teal). The semiconductor transistor is the basis of an integrated circuit (IC). Now the dominant technology for ICs is the complementary metal-oxide-semiconductor (CMOS) technology. It is used in microprocessors, microcontrollers, memory blocks and so on. It was patented by Frank Wanlass in 1963 (Fairchild Semiconductor). Now, to characterize different semiconductor manufacturing processes, the notion of a technology node is used. Previously, it referred to different features of a transistor, such as the gate length or M1 half-pitch. The half-pitch refers to half the


Fig. 2.4 Levels of hierarchy in embedded systems

minimum center-to-center distance (or pitch) between metal lines. Now, it is measured in nanometers. In 1965, Intel founder Gordon Moore stated that the number of transistors in a dense IC would double approximately every two years. The shrinking of transistors enables a larger number of them to be held within the same area. This results in improved processor performance and decreased energy consumption. This law was adhered to rigidly for 50 years. Following it, the technology node in 2018 was around 10 nm; it was 45 nm in 2008. In 1971, Intel's first CPU, the 4004, contained 2300 transistors. Current chips have around 6 billion transistors (for example, the 24-core Xeon Broadwell-WS processor). But modern CAD tools cannot use such a huge number of transistors efficiently. There is a well-known productivity gap between technology capabilities and hardware design capabilities. Technology capabilities are measured in transistors per chip. Design capabilities are measured in transistors per day (design productivity). Complexity has an annual growth rate of 58%, while productivity has an annual growth rate of 21%.


Table 2.1 Productivity gap for 50 years

Time, years:  1     5     10    20    25    30    35     40     50
PG:           1.3   3.79  14.4  208   792   3008  11429  43424  626843

Fig. 2.5 The dimensions of an abstraction level: (a) behaviour (black box, O = F(I, t)), (b) structure (structural elements SE1–SE3), (c) physical (PCB with ports and bus)

There does not seem to be such a big difference between 21% and 58%. But time makes this gap bigger and bigger. Let it be, 50 years ago, CT = PT = A, where CT is the complexity (technology capabilities) and PT is the productivity of a designer. By the end of the first year, there is a productivity gap (PG) equal to (CT · 1.58)/(PT · 1.21) = 1.58A/1.21A = 1.306. By the end of the second year, the PG equals [1.58A · 1.58]/[1.21A · 1.21] = 1.306² = 1.705. So we can find that

PG(t) = 1.306^t.    (2.1)

In (2.1), the time t is measured in years. Table 2.1 shows some points of the function PG(t). For example, let there be 2 · 10^9 transistors in some chip. Using the last column of Table 2.1 (50 years) gives PT = 3190 transistors per day, i.e. about 1.16 · 10^6 transistors during a year. So the capabilities are used only to about 0.06%. Of course, this is true only if transistors are used as the basic elements of design. But in any case, Table 2.1 leads to a very pessimistic conclusion: modern CAD tools exploit only a tiny fraction of the capabilities of modern chips. To diminish the gap, it is necessary to increase the productivity of CAD tools. It means that it is necessary to improve CAD tools for the highest levels of abstraction (system and processor). It is a real challenge for software engineers. There are three dimensions for each abstraction level: behaviour, structure and physical. Behaviour represents a design as a black box (Fig. 2.5a). It has inputs and outputs that depend on the inputs over time. But it is unknown how the desired behaviour is implemented by the black box. Sometimes, this dimension is named functionality or specification.
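The growth model (2.1) is easy to check against Table 2.1 numerically (a sketch of ours; 1.306 is the ratio 1.58/1.21 rounded as in the text):

```python
def productivity_gap(years):
    """PG(t) = 1.306**t: complexity grows 58 % a year while design
    productivity grows 21 %, so the gap widens by 1.58/1.21 per year."""
    return 1.306 ** years

def usable_fraction(transistors_per_chip, years, days=365):
    """Fraction of a chip a designer can cover in a year when the
    daily productivity is the chip size divided by the accumulated gap."""
    per_day = transistors_per_chip / productivity_gap(years)
    return per_day * days / transistors_per_chip
```

For a 2 · 10^9-transistor chip after 50 years this gives roughly 0.06 % of the chip's capacity per year.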


There are K inputs and N outputs for the device shown in Fig. 2.5a. The functionality is represented by a system of functions F. The outputs Ok ∈ O depend on the inputs Ik ∈ I and time t. The structure dimension shows how the functions of a black box are implemented (Fig. 2.5b). It is represented as sets of structural elements (SE) and connections. For example, there are three structural elements SE1–SE3 in the device shown in Fig. 2.5b. The physical dimension specifies the size of each component (height and width), its position on a container (silicon chip, PCB, etc.), each port (pins) and the connections (buses). For example, the structure from Fig. 2.5b is mapped to a PCB (Fig. 2.5c). There are 5 pins in the input port and 2 pins in the output port of the PCB. The structural elements are connected using a bus. The bus also connects elements with ports. D. Gajski invented a Y-Chart showing the different abstraction levels and dimensions of an ES (Fig. 2.6). There are four levels and three dimensions on the Y-Chart. So, there are 12 crosspoints showing different steps of the design process. The structure of a black box can be represented by a netlist of structural elements. The transition from a specification to a netlist is executed in the process of synthesis. A netlist can be transformed into a physical layout. This is executed during the process of technological mapping. During this process, such procedures are executed as floorplanning, placement and routing. So, the design process for each level of abstraction can be represented as a graph (Fig. 2.7). Data from the previous level (i − 1) determine the behaviours of the elements of the ith level (i ∈ {1, . . . , 4}). The synthesis is executed (transition from Bi to Si). If the structure meets the design constraints, then v = 1 and the technological mapping is executed. Otherwise (v = 0), it is necessary to clarify the behaviour and to repeat the synthesis. When the layout is obtained, it is validated. If v = 1 (constraints are met), then the design process enters the next level (level i + 1). Otherwise (v = 0), the layout should be redesigned. We use such a graph representation to explain different methodologies of design.
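The per-level loop of Fig. 2.7 can be expressed as a sketch (a Python illustration of ours; synthesize, meets_constraints, tech_map, verify and refine are hypothetical stand-ins for real CAD steps):

```python
def design_level(behaviour, synthesize, meets_constraints,
                 tech_map, verify, refine):
    """One level of the design graph: synthesize a structure, check
    constraints (v), map to a layout, validate, iterate on failure."""
    while True:
        structure = synthesize(behaviour)
        if not meets_constraints(structure):  # v = 0 after synthesis
            behaviour = refine(behaviour)     # clarify the behaviour, retry
            continue
        layout = tech_map(structure)          # floorplanning, placement, routing
        if verify(layout):                    # v = 1: hand over to level i+1
            return layout
        behaviour = refine(behaviour)         # layout failed: redesign
```

With toy stubs (e.g. an integer "quality" that refine increments), the loop iterates until both the constraint check and the layout verification pass.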

Fig. 2.6 Y-Chart



Fig. 2.7 Design graph for ith level of ES

2.2 Methodologies of Design of Embedded Systems

There are three possible approaches to designing complex systems:
1. Bottom-up methodology.
2. Top-down methodology.
3. Meet-in-the-middle methodology.
Bottom-up methodology. It is based on creating the parts of a product before assembling the whole product. It is used, for example, when buildings are constructed or cars are assembled. Let us explain it using the Y-Chart. The process starts at the circuit level. A designer can use the libraries of transistors, logic, RTL and processor components. At the circuit level, transistors are used to develop circuits and their layouts for the basic logic components (gates, flip-flops, buffers, and so on). These components are used as standard cells for the logic-level component library. In this library, the standard cells are stored together with their functionality, structure and layout. They are used to create RTL components such as ALUs, multipliers and other components for the processor level. To synthesize them, a designer uses systems of Boolean functions or finite state machines (FSMs). When the RTL components are synthesized, placement and routing are executed with the standard cells. Next, these components are stored in the RTL library. To design the processor level of an ES, it is necessary to start either from C code or from an instruction set. This allows generating the structures of PEs, CEs, and so on. At this level, floorplanning, placement and routing are executed for the obtained components (PEs, CEs, . . .). To do it, we use the RTL components as standard cells. At last, these are stored in the processor library. At the system level, the process starts from a model of computation (MoC). Using the MoC leads to generating the system structure consisting of many PEs and CEs. Next, the physical design is executed using the components from the processor library as standard cells. Now, we have the layout of the ES (one of the possible variants). This methodology is represented by the graph shown in Fig. 2.8.
We assume that each component has all the necessary metrics (performance, area, power consumption, and others). So, we can choose components leading to a system with the required characteristics. Hence, there are no feedback loops in this graph.

Fig. 2.8 Graph for bottom-up methodology
Fig. 2.9 Graph for top-down methodology

The advantage of this methodology is the clear separation of abstraction levels. Each level has its own library. This makes design management easier at each abstraction level, because level i provides the library for level i + 1. The disadvantage of the bottom-up methodology is the huge size of the libraries. Each library should include all possible components with optimized parameters. It is very difficult to take into account all the requirements of level i at level i − 1.
Top-down methodology. It starts from the system level using a MoC. The system-level synthesis produces a structure of the system where each component has defined parameters and metrics. The technological mapping is skipped. At the processor level, each PE or CE is decomposed into smaller RTL components having defined parameters and metrics. Now each operational unit, for example an ALU, has all its executed functions specified, together with requirements for performance (delay) and power consumption. Next, these RTL components are decomposed into logic components or gates. At last, each logic component is represented by a transistor netlist. In the netlist, each transistor layout represents a basic cell. Next, placement and routing are executed for all the basic cells of the entire embedded system. This gives the final result, which can be checked (verified). This process is shown in Fig. 2.9. If the result of verification is positive (v = 1), then the system from node P1 is treated as the final product. Otherwise (v = 0), it is necessary to restart the design from node B1. The drawback of this approach is the difficulty of optimizing the design. It is connected with the fact that all system metrics are known only at the stage P4. Because of this, there are many iterations of the design process. Both discussed methodologies were used for very simple ESs. But modern ESs are very complex. So, the modern design technology is the following one.
Meet-in-the-middle methodology.
Its goal is to eliminate the drawbacks and efficiently use the benefits of the two previous methodologies. It is justified by the fact that there are efficient CAD tools for the circuit and logic levels, but they are absent for the two higher levels of abstraction. This methodology allows a designer to use CAD tools for the lower abstraction levels and to reduce the design effort at both the processor and system levels.


Fig. 2.10 Graph for meet-in-the-middle methodology

We can treat this approach as a top-down method for the system and processor levels and a bottom-up method for the other levels. There are two approaches to organizing the meeting point of these styles. They are represented by the graphs shown in Fig. 2.10. As follows from Fig. 2.10a, the design process could start with a MoC. It results in a system platform which includes virtual PEs and CEs. This is the top-down part of the process. Next, the bottom-up process starts. The virtual PEs and CEs are either synthesized from RTL components or are represented by commercially available IPs. So, all PEs and CEs are represented by library elements of the RTL level. These RTL elements have their own structures and layouts (generated through some bottom-up approach in the library). The RTL layouts are combined together using floorplanning and routing. This gives the layout of the final multi-core platform. So, in this approach the layouts should be generated three times: once for the standard cells, once for the RTL components, and once for the entire system. The last step uses the layouts of the RTL components. The advantage of this approach is the possibility of evaluating the system using the metrics of the RTL components. There are two disadvantages, too. Firstly, it is necessary to do the layouts more than once. Secondly, the number of RTL components is limited. So, they may not meet some specifics of a required ES. For example, the library has only ALUs with 8, 16 and 32 bits, but an ALU with 24 bits is required. As follows from Fig. 2.10b, it is possible to use the layouts of standard cells. As in the previous case, the design starts from a MoC, and virtual PEs and CEs are obtained at the S1 step. Next, those PEs and CEs are synthesized from RTL components. These RTL components are further synthesized using logic components. Commercially available IPs are represented at the RTL level. So, they should be transformed into logic-component netlists.
After the S3 step, all IP, PE and CE components are decomposed into logic components. Each logic component has its layout as a standard cell. So, now it is possible to generate the layout of a multi-core platform. In this case, only two stages of layout generation are necessary: for the standard cells and for the entire system. The advantage of this approach is that, to meet some new requirements, it is necessary to upgrade only the rather simple layouts of the standard cells. In the previous case, it was necessary to refine the rather complex layouts of the RTL components. But it is more difficult to optimize a layout based on standard cells. Also, the design metrics are less predictable than in the case of using RTL components.


This methodology is rather an idealized concept. In reality, design starts from some already-defined platform. So, a designer does not have much freedom in the choice of components at the system level. The platforms are defined by some well-known platform producers. Sometimes, the platforms are defined inside the company producing ESs. Let us discuss this approach.

2.3 Platform-Based Design of Embedded Systems

We could not find a simple and clear definition of a platform for ESs. So, we start from some examples to arrive at a general definition. PC/104 is a family of embedded computer standards intended for specialized environments where a small, rugged ES is required. The standard is modular and allows stacking together boards from various commercial off-the-shelf (COTS) manufacturers to produce a customized ES. A board has the size 90 × 96 mm; it is a bit smaller than a desktop motherboard. These boards are stacked on top of each other like building blocks. Because of this, a typical PC/104 system is referred to as a "stack". As a rule, it includes a CPU board, a power supply board, and one or more peripheral boards such as data acquisition modules (to convert analog signals into digital), a GPS receiver (to get geolocation and time information from the Global Positioning System), or a wireless LAN controller (to organize a wireless computer network). The majority of its CPU boards are x86 compatible (the architecture originating from the Intel 8086 and 8088 CPUs). They have standard PC interfaces such as serial ports (transferring one bit at a time), USB (Universal Serial Bus, to connect a computer with devices), Ethernet (to transfer streams of data in LANs and other networks) and VGA (Video Graphics Array, used for video output). An x86 PC/104 system can operate under DOS, Windows or Linux. It can also use a real-time operating system (RTOS) such as VxWorks. So, such systems can be used in industries such as aerospace, defence, medical equipment, industrial equipment, robotics, energy, transportation, network infrastructure, automotive, and consumer electronics. A PC/104 system can operate using the PCI bus (Peripheral Component Interconnect) and PCI Express. It can also use the ISA bus (Industry Standard Architecture). So, PC/104 is IBM PC compatible. There are some improved versions of PC/104, such as PC/104-Plus, PCI-104, PCI/104-Express, PCIe/104 and others.
System on Module (SOM) is a board-level circuit that integrates a system function in a single module. It may integrate digital and analog functions on a single board. It is oriented towards ESs and allows design reuse. SOMs have been used extensively by Sun Microsystems, Motorola, Xerox, DEC and IBM. SOMs are not complete embedded computers: they require baseboards to operate. A typical SOM consists of a processor core, memory blocks (ROM, RAM, EEPROM, flash memory), industry-standard interfaces (USB, FireWire, Ethernet, UART, SPI), analog interfaces including ADCs and DACs, and other devices. They are used in


blade servers, storage blades, medical devices, data acquisition, network appliances and the Internet of Things.
Biscuit PC. The word "biscuit" comes from the French, derived from "bis" meaning "twice" (from the Latin) and "cuit" meaning "cooked". We did not find an explanation for this name, but such platforms have only a single PCB. It is an improvement of PC/104. Let us consider the board PCM-9370 (Advantech). It has the size of a 3.5″ hard disc (145 mm). It includes the Transmeta Crusoe processor, which is a very long instruction word (VLIW) processor. To communicate with the environment, it has such interfaces as LAN, VGA, Audio, HDD, FDD, 2xRS-232C, Centronics, PC/104 (ISA 16 bit) and PC/104+ (PCI). These PCBs target different multimedia applications. They can be used in DVD players, MP3 players, sound recording to hard discs and CD-RW.
The Zynq-7000 All Programmable SoC (Xilinx). This platform integrates the software programmability of an ARM-based processor with the hardware programmability of a field programmable gate array (FPGA), enabling different embedded applications. There are several families of these platforms. For example, Zynq-7000 devices are equipped with dual-core ARM processors integrated with 28 nm Artix-7 or Kintex-7 programmable logic. They give excellent performance-per-watt and maximum design flexibility. They have up to 6.6 · 10^6 logic cells and transceivers ranging from 6.25 Gb/s to 12.5 Gb/s. The MPCore has two ARM Cortex-A9 processors and a 512 KB cache with 256 KB of on-chip memory. The programmable logic block includes system gates, digital signal processors (DSP) and RAMs. There are up to 8 transceivers compatible with the PCIe standard. To communicate with the physical world, it supports such standard interfaces as SPI, I2C, CAN, UART, GPIO, USB and SDIO. This platform targets a lot of embedded applications. It can be used in:
1. Multi-function printers.
In this area, features such as integrated displays, real-time image analytics, fine motor control and different interface standards are supported.
2. Medical endoscopes. Endoscopes are used in preventive medicine and minimally invasive surgery. The platform provides high bandwidth and can execute intensive computations; it integrates camera unit control and 1080p60 image processing.
3. Machine and computer vision. These applications need a high-speed imager interface, high-speed image processing and video pre-processing. Zynq provides image processing algorithms with high performance at low cost and power.
4. Motor control. Xilinx offers high-precision, high-performance, scalable and efficient motor control algorithms. It is possible to implement a variety of motor controllers with very low power consumption.

There are many examples of different platforms. They range from single-board computers to multiprocessor systems-on-chip (MPSoC). They are produced by manufacturers such as Advantech, ICP, Intel, Lanner Electronics, Siemens, Toshiba and Xilinx, to name just a few. These embedded system platforms (ESPs) share some common features: they include processing elements (CPU, DSP, GPU) and analog-to-digital and digital-to-analog converters, and they support various industrial and other communication protocols. Finally, the execution of programs is very often controlled by special software named a real-time operating system (RTOS). Let us discuss this very important issue.

Complex embedded systems may execute many tasks at the same instant of time. These tasks are named embedded applications. To enable concurrent execution of embedded applications, a multi-task ES should use an RTOS. An RTOS is a software layer that controls the concurrent execution of applications while providing various services for synchronisation and communication. There are three key requirements turning an operating system into an RTOS:
1. Predictable timing behaviour. The execution of applications should stay within predictable bounds. A general-purpose OS gives a fair amount of execution time to all running tasks; an RTOS must additionally take into account the time constraints of the physical world.
2. Precise scheduling of tasks. Scheduling is the mapping of tasks to intervals of execution time. The tasks should always meet their deadlines. As you remember, there are hard and soft real-time systems; precise scheduling is especially important in hard RTSs.
3. High performance. Some applications must be executed in a fraction of a second, so an RTOS must be able to support such applications.

An RTOS provides many services. It creates, terminates and controls tasks (task management). It enables tasks to exchange information and synchronize with each other (inter-task communication). It offers services for shared use of the CPU and for memory allocation for programs and data. Also, it provides timing support for timed execution. There are many RTOS implementations, for example QNX Neutrino, VxWorks, eCos, RTEMS, uC/OS-II, RT-Linux and Windows CE.
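The precise-scheduling requirement can be illustrated with a small model. The sketch below is not the API of any RTOS named here; it is a minimal discrete-time simulation of preemptive, priority-based scheduling of periodic tasks, with invented task names and parameters, that checks whether every task instance meets its deadline.

```python
# Minimal discrete-time model of preemptive, priority-based RTOS scheduling.
# Hypothetical periodic tasks: (name, period, worst-case execution time).
# Priorities are rate-monotonic: the shorter the period, the higher the priority.

def simulate(tasks, horizon):
    """Simulate one CPU for `horizon` ticks; return True if every job
    meets its deadline (taken to be equal to the task's period)."""
    # Sort so that shorter-period (higher-priority) tasks come first.
    tasks = sorted(tasks, key=lambda t: t[1])
    remaining = [0] * len(tasks)          # work left for the current job of each task
    for tick in range(horizon):
        for i, (name, period, wcet) in enumerate(tasks):
            if tick % period == 0:
                if remaining[i] > 0:      # previous job unfinished: deadline miss
                    return False
                remaining[i] = wcet       # release a new job
        for i in range(len(tasks)):       # run the highest-priority ready job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return True

# CPU utilisation 1/4 + 2/10 = 0.45: easily schedulable.
print(simulate([("sensor", 4, 1), ("control", 10, 2)], 40))   # True
# A WCET of 5 every 4 ticks exceeds the CPU capacity: deadlines are missed.
print(simulate([("sensor", 4, 5)], 40))                       # False
```

A real RTOS does the same bookkeeping in its tick interrupt, but with hardware context switches instead of a Python loop.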
These implementations have many different features, but all of them enable concurrent execution of embedded applications. Using this information, we can propose the following definition of an ESP. An embedded system platform is a pre-designed container (either a silicon chip or a PCB) including hardware and software components which allow monitoring and control of physical systems. It is programmable to enable flexible tailoring toward particular applications. So, an ESP may already have some standard components (memory blocks, standard processors) with well-defined layouts. Some RTOS is kept in the memory of the standard PEs. If a software implementation of a control algorithm does not meet the time constraints, then it is necessary to implement some parts of the algorithm using customized hardware components. We discuss this HW-SW implementation in Chap. 4. The IP components should be synthesized using processor and RTL synthesis tools. Next, the layout of each IP component should be converted into a standard-cell layout. This standard-cell layout is combined with the layouts of the standard components, which gives the final layout of an ES (Fig. 2.11).

Fig. 2.11 Graph for platform methodology

Using a platform, we execute three cycles of layout generation. First, the layouts of standard cells are generated. Second, standard cells are used to generate the layouts of custom components. Third, the component layouts are used to generate the final layout of the ES. It is a pure meet-in-the-middle methodology with all its advantages and drawbacks, but it is one of the most used approaches in ES design. Processor and memory blocks of modern ESs are implemented as application specific integrated circuits (ASIC). Let us discuss this issue.

2.4 ASIC-Based Design

An ASIC is an integrated circuit customized for a particular use. There are three main classes of ASICs (Fig. 2.12), but all of them are based on CMOS transistors connected using layers of metallization.

Fig. 2.12 Classification of ASICs: Full Custom; Semi-Custom (standard cell based, gate array based); Programmable (PLDs, FPGAs)

Let us start from full custom ASICs. In this design style, the resistors, transistors, digital logic, capacitors and analog circuits are all positioned individually in the circuit layout. Full custom designs are generally referred to as "handcrafted" designs. A microprocessor is a typical example of a full custom IC. The manufacturing and design of such a circuit is very expensive: it is necessary to invest many millions of dollars to design and manufacture a full custom ASIC. Each design involves research and development activity; in the ASIC case, it is necessary to create libraries of standard cells and masks for their interconnections. Because of this, the cost of creating a new product is very high. This is the so-called non-recurring engineering (NRE) cost. The benefits of full custom design usually include reduced area (and, therefore, reduced recurring component cost), performance improvements, and the ability to integrate analog components and other pre-designed and thus fully verified components, such as microprocessor cores, that form a system-on-a-chip. Literally, each transistor can be tuned for area or power. The disadvantages of full custom design include increased manufacturing and design time, increased non-recurring engineering costs, more complexity in the computer-aided design system, and much higher skill requirements on the part of the design team.

Semi-custom design is an alternative to full custom design. Here we can use components from a standard library for design purposes. Thus, in semi-custom ASIC designs, all logic cells are predesigned and only some mask layers are customized. The advantage of using predesigned logic cells from a library is that it makes semi-custom ASIC chip design easier. Standard cell libraries are usually created using full custom design. There are basically two types of semi-custom ASICs: standard cell based ASICs and gate array based ASICs. A standard cell based ASIC commonly uses predesigned logic cells like logic gates, flip-flops, multiplexers, demultiplexers, etc. These predesigned logic cells are known as standard cells. Flexible blocks, also known as standard cell areas, consist of many rows of standard cells. The standard cell areas are usually used in combination with larger standard cells like microcontrollers. The larger standard cells are also known as megacells, megafunctions, system custom blocks, fixed blocks, cores or system level macros. In this approach, only some (maybe all) mask layers are customized. The development of masks increases the NRE cost. For digital-only designs, however, standard-cell libraries, together with modern CAD systems, can offer considerable performance/cost benefits with low risk. Automated layout tools are quick and easy to use, and they also offer the possibility to "hand-tweak" or manually optimize any performance-limiting aspect of the design.

Gate array design is a manufacturing method in which the diffused layers, i.e. transistors and other active devices, are predefined, and wafers containing such devices are held in stock prior to metallization (in other words, unconnected). The physical design process then defines the interconnections of the final device. For most ASIC manufacturers, this consists of from two to as many as nine metal layers, each metal layer running perpendicular to the one below it. Non-recurring engineering costs are much lower, as photolithographic masks are required only for the metal layers, and production cycles are much shorter, as metallization is a comparatively quick process. Gate-array ASICs are always a compromise, since mapping a given design onto what a manufacturer holds as a stock wafer never gives 100% utilization. Difficulties in routing the interconnect often require migration onto a larger array device with a consequent increase in the piece part price. These difficulties are often a result of the layout software used to develop the interconnect.


Pure, logic-only gate-array design is rarely implemented by circuit designers today, having been replaced almost entirely by field-programmable devices such as field-programmable gate arrays (FPGAs). These can be programmed by the user and thus offer minimal non-recurring engineering charges, only marginally increased piece part cost, and comparable performance. Today, gate arrays are evolving into structured ASICs that consist of large IP cores like a CPU, DSP, peripherals, standard interfaces, integrated SRAM memories, and a block of reconfigurable, uncommitted logic. This shift is largely because ASIC devices are capable of integrating such large blocks of system functionality, and a "system-on-a-chip" requires far more than just logic blocks. There are two classes of programmable ASICs: programmable logic devices (PLD) and FPGAs. We discuss them in Chap. 4 of our book. Application specific integrated circuits find many applications in the medical and industrial sectors, in automotive systems and in sensors. Today ASIC chips can be used in satellites, modems, personal computers, etc. The electronic odometer and the engine monitor are ASIC products suitable for automotive applications. An electronic odometer helps to record the mileage of a vehicle; an engine monitor and warning light controller monitors different parameters of a vehicle, such as temperature and voltage. ASICs are widely used in industrial applications; some ASIC-based industrial products are the micro-power 555 programmable timer, thermal controllers and 8-bit microcontrollers. In medical applications, biometric monitors, hearing aids, etc. are based on ASICs. Many ASIC products for security applications are now coming out; one of them is RFID tags. Last but not least, ASICs can be used for many applications, and in the near future we can expect a low-cost ASIC technology. ASIC technology allows obtaining circuits with very high density.
The logic density is defined as the amount of logic that can be implemented in a given silicon area. It can be measured as the number of transistors per square millimeter (mm²). We analyzed some ASIC-based products manufactured in 2017, including processors, graphics processing units (GPU) and memory blocks. The results are shown in Table 2.2. For comparison, the microprocessor Intel 4004 (1971) included 2300 transistors, had an area of 12 mm² and a technology node equal to 10 000 nm. Table 2.2 shows incredible progress in comparison with the density of the Intel 4004, which is equal to 191 transistors/mm².

Table 2.2 Characteristics of modern ASIC products

Product      | Company  | Number of transistors | Node (nm) | Area, mm² | Density     | Type
A11 Bionic   | Apple    | 4.3 · 10⁹             | 10        | 89        | 4.8 · 10⁶   | CPU
Ryzen        | AMD      | 4.8 · 10⁹             | 14        | 192       | 2.5 · 10⁶   | CPU
Z14          | IBM      | 6.8 · 10⁹             | 14        | 696       | 0.34 · 10⁶  | CPU
Centriq 2400 | Qualcomm | 18.3 · 10⁹            | 10        | 398       | 4.5 · 10⁶   | CPU
Vega10       | AMD      | 12.5 · 10⁹            | 14        | 484       | 2.5 · 10⁶   | GPU
GV100 Volta  | Nvidia   | 21.1 · 10⁹            | 12        | 815       | 2.5 · 10⁶   | GPU
64 GB DRAM   | Micron   | 68.7 · 10⁹            | ?         | ?         | ?           | ME
128 Gb DRAM  | Micron   | 137.4 · 10⁹           | 30        | ?         | ?           | ME

We could not find all data for memory elements, but it is known that a 16 GB flash chip has 64 · 10⁹ transistors. The transistor count is predictable for memory blocks: a typical CMOS SRAM cell includes 6 transistors, while a DRAM cell has only a single transistor, which can keep up to 3 bits of data. This follows from the fact that a single transistor can have up to eight distinctive levels of resistance.

There are nine basic steps in ASIC chip design. They are the following:
1. Design Entry: The designer starts the design with a text description or a system-specific language like an HDL, the C language, etc.
2. Logic Synthesis: Logic synthesis produces the netlist describing the logic cells and their interconnections.

3. System Partitioning: A large design is partitioned into pieces that fit into individual ASICs.
4. Prelayout Simulation: Prelayout simulation checks whether the design functions correctly.
5. Floorplanning: In this step we plan the arrangement on the chip of the blocks present in the netlist.
6. Placement: This step places the cells present in each block.
7. Routing: This step provides all necessary interconnections between the cells.
8. Circuit Extraction: The translation of the integrated circuit into an electrical circuit takes place here.
9. Postlayout Simulation: It allows checking the final layout of the design.

A block diagram of this process is shown in Fig. 2.13. As follows from it, some stages of the design can be repeated, which significantly increases the NRE cost. The design risks can be diminished using so-called structured ASIC design, often referred to as "platform ASIC design". In this case, we can use the same definition of a platform as in the previous section. In a structured ASIC design, the logic mask-layers of a device are predefined by the ASIC vendor (or in some cases by a third party). Design differentiation and customization are achieved by creating custom metal layers that create custom connections between predefined lower-layer logic elements. Structured ASIC technology is seen as bridging the gap between field-programmable gate arrays and standard-cell ASIC designs. Because only a small number of chip layers must be custom-produced, structured ASIC designs have much smaller non-recurring expenditures than standard-cell or full-custom chips, which require that a full mask set be produced for every design.
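The iterative nature of the design flow of Fig. 2.13 can be sketched as a loop over the stages, where a failed postlayout simulation sends the design back to an earlier stage and every repetition adds NRE cost. The stage list follows the nine steps above, but the failure model and cost units are purely hypothetical, not a real EDA flow.

```python
# Toy model of the ASIC design flow of Fig. 2.13: stages run in order,
# and a failed final check sends the design back to an earlier stage,
# increasing the (hypothetical) NRE cost with every repetition.

STAGES = ["design entry", "logic synthesis", "system partitioning",
          "prelayout simulation", "floorplanning", "placement",
          "routing", "circuit extraction", "postlayout simulation"]

def run_flow(fails_at_postlayout=1, cost_per_stage=1.0):
    """Return (list of executed stages, total NRE cost units)."""
    executed, cost, pending_failures = [], 0.0, fails_at_postlayout
    i = 0
    while i < len(STAGES):
        executed.append(STAGES[i])
        cost += cost_per_stage
        if STAGES[i] == "postlayout simulation" and pending_failures > 0:
            pending_failures -= 1
            i = STAGES.index("floorplanning")   # redo the physical design
        else:
            i += 1
    return executed, cost

stages, cost = run_flow(fails_at_postlayout=1)
print(cost)   # 14.0: nine stages plus one repeated physical-design pass
```

One failed signoff repeats five physical-design stages, which is why each iteration is so expensive in practice.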

Fig. 2.13 Design flow for ASIC: (1) design entry, (2) logic synthesis, (3) system partitioning, (4) prelayout simulation, (5) floorplanning, (6) placement, (7) routing, (8) circuit extraction, (9) postlayout simulation, finish
This structured ASIC approach is, in fact, the same definition as for a gate array. What makes a structured ASIC different is that in a gate array the predefined metal layers serve to make manufacturing turnaround faster, while in a structured ASIC the use of predefined metallization primarily reduces the cost of the mask sets as well as making the design cycle time significantly shorter. For example, in a cell-based or gate-array design the user must often design power, clock, and test structures themselves; these are predefined in most structured ASICs and, therefore, save time and expense for the designer compared to a gate array. Likewise, the design tools used for structured ASICs can be substantially cheaper and easier (faster) to use than cell-based tools, because they do not have to perform all the functions that cell-based tools do. In some cases, the structured ASIC vendor requires that customized tools for their device (e.g., custom physical synthesis) be used, also allowing the design to be brought into manufacturing more quickly. Designers agree that the basic premise of structured ASICs is that both manufacturing cycle time and design cycle time are reduced compared to cell-based ASICs, by virtue of there being predefined metal layers (thus reducing manufacturing time) and pre-characterization of what is on the silicon (thus reducing design cycle time).

Now, let us briefly discuss different tools used for the design of ASIC-based devices. They are used during the stages of the design flow (Fig. 2.13).

Icarus Verilog is a Verilog simulation and synthesis tool. It operates as a compiler, compiling source code written in Verilog (IEEE-1364) into some target format. For batch simulation, the compiler can generate an intermediate form called vvp assembly; this intermediate form is executed by the "vvp" command. For synthesis, the compiler generates netlists in the desired format.


The compiler proper is intended to parse elaborated design descriptions written to the IEEE standard IEEE Std 1364-2005. This is a fairly large and complex standard, so it will take some time to cover all the dark alleys of the standard. The main porting target is Linux, although the tool works well on many similar operating systems. Various people have contributed precompiled binaries of stable releases for a variety of targets. These releases are ported by volunteers, so what binaries are available depends on who takes the time to do the packaging. Icarus Verilog has also been ported to Windows as a command line tool, and there are installers for users without compilers. You can compile it entirely with free tools, too, although there are precompiled binaries of stable releases.

Verilog-XL: This is the most standard simulator on the market, as it is the sign-off simulator.

NC-Verilog: This is a compiled simulator which works as fast as VCS and still maintains the sign-off capabilities of Verilog-XL. This simulator is good for gate level simulations.

VCS: This is the fastest simulator. It is also a compiled simulator like NC-Verilog, and it is faster for RTL simulation. A few more features of this simulator: a direct C kernel interface, embedded code coverage, and better integration with VERA and other Synopsys tools.

Aldec: This simulator from Aldec supports VHDL, Verilog, SystemC, SystemVerilog and PSL: you name it and it supports it. You can even use it to replace your existing ModelSim/VCS/NC-Verilog licenses.

ModelSim: This is the most popular simulator. It has a very good debugger and supports SystemC, Verilog, VHDL and SystemVerilog.

Smash: This is a mixed signal (SPICE), Verilog and VHDL simulator.

VeriLogger Extreme: This is a high-performance compiled-code Verilog 2001 simulator. It has a very easy to use debugging environment that includes a built-in graphical testbench generator. The top-level module ports can be extracted into a timing diagram window that lets the user quickly draw waveforms to describe the input stimulus. The testbench is generated automatically and the results are displayed in the timing diagram window.

Jove: This is the Open Verification Environment for the Java (TM) Platform. Jove is a set of Java APIs and tools enabling Verilog hardware design verification of ASICs and FPGAs using the Java programming language. Jove has been tested extensively with Synopsys VCS and, to a lesser extent, with the GPL version by Pragmatic C Software.

Veritak: This is a Verilog HDL compiler/simulator supporting major Verilog 2001 HDL features. It has an integrated environment including a VHDL to Verilog translator, a syntax highlighting editor (Veripad), a class hierarchy viewer, a multiple waveform viewer, a source analyzer, and more.

TestBencher Pro: This tool generates bus-functional models and testbenches from language-independent timing diagrams. The generated testbenches are capable of applying different stimulus vectors depending on the simulation response, so that the testbench functions as a behavioural model of the environment in which the system being tested will operate. It generates code for Verilog, VHDL and SystemC.


Timing Diagrammer Pro: This is a professional timing diagram editor with an unbeatable feature set. It performs true full-range min/max timing analysis to help you find and eliminate all timing violations and race conditions. It also automatically calculates critical paths and adjusts for reconvergent fanout. Finally, inserting diagrams into word processors is painless, thanks to a variety of image capture formats.

TimeGen: TimeGen is an engineering CAD tool which gives a digital design engineer the capability to quickly and effectively draw digital timing diagrams. The waveforms can easily be exported to other Windows programs, such as Microsoft Word, for use in writing design specifications. TimeGen costs less than comparable tools.

WaveFormer Pro: This tool generates VHDL, Verilog and SPICE code, as well as pattern generator stimulus, from timing diagrams. It imports waveforms from HDL simulators, HP logic analyzers and VCD files, or draws them using the built-in timing diagram editor. It also automatically determines critical paths, verifies timing margins, adjusts for reconvergent fanout effects, and performs "what if" analysis to determine the optimum clock speed. WaveFormer Pro also lets you specify and analyze system timing, perform RTL level simulation, and generate synthesizable models directly from Boolean and registered logic equations.

Perlilog: Perlilog is a design tool whose main target is the easy integration of Verilog IP cores for system-on-chip designs. The philosophy behind Perlilog is that an IP core should be like a black box: fitting it for a certain purpose should be as easy as defining the desired requirements, and connecting the cores into a system should be as easy as drawing a block diagram. Perlilog is written in Perl, currently with no GUI. While the scripts that the system consists of are rather sophisticated, only plain Perl knowledge is needed to use its scripting capabilities.

Leda: Leda is a code purification tool for designers using the Verilog and VHDL hardware description languages. Leda is uniquely qualified to analyse HDL code pre-synthesis and pre-simulation and is compatible with all popular synthesis and simulation tools and flows. By automating more than 500 design checks for language syntax, semantics and questionable synthesis/simulation constructs, Leda detects common as well as subtle and hard-to-find code defects, thus freeing designers to focus on the art of design.

So, there are different tools for designing ASIC-based systems. Let us repeat the basic steps leading to an ES digital part using standard-cell ASICs. They are the following:
1. A team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, usually derived from requirements analysis.
2. The design team constructs a description of the ASIC to achieve these goals using an HDL. This process is analogous to writing a computer program in a high-level language. This is usually called the RTL design.
3. Suitability for purpose is verified by functional verification. This may include such techniques as logic simulation, formal verification, emulation, or creating an equivalent pure software model (see Simics, for example). Each technique has advantages and disadvantages, and often several methods are used.
4. Logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells. These constructs are taken from a standard-cell library consisting of pre-characterized collections of gates (for example, two-input NAND, inverters, etc.). The standard cells are typically specific to the planned manufacturer of the ASIC. The resulting collection of standard cells, plus the needed electrical connections between them, is called a gate-level netlist.
5. The gate-level netlist is next processed by a placement tool, which places the standard cells onto a region representing the final ASIC. It attempts to find a placement of the standard cells subject to a variety of specified constraints.
6. The routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them. Since the search space is large, this process produces a "sufficient" rather than "globally optimal" solution. The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility (commonly called a "fab") to produce physical ICs.
7. Given the final layout, circuit extraction computes the parasitic resistances and capacitances. In the case of a digital circuit, this is then further mapped into delay information, from which the circuit performance can be estimated, usually by static timing analysis. This, and other final tests such as design rule checking and power analysis (collectively called signoff), are intended to ensure that the device will function correctly over all extremes of process, voltage and temperature. When this testing is complete, the photomask information is released for chip fabrication.

These steps, implemented with a level of skill common in the industry, almost always produce a final device that correctly implements the original design, unless flaws are later introduced by the physical fabrication process. Embedded systems should be energy-efficient. There are some methods targeting this very important issue. Let us discuss them.

2.5 Improving Energy Efficiency in Embedded Systems

In 2014, the number of mobile computing devices became almost equal to the population of the world. Embedded systems are now integrated in smart phones, smart watches, wireless data modems, video cameras, net browsers, wearable systems, and so on. All mobile devices use batteries as the source of energy supply. Because of this, they have stringent power budgets (around 1 or 2 W). There are also ultra low-power ESs, for example wearable systems, with power budgets of a few milliwatts. Due to low power budgets, ESs require efficient power management to improve their energy efficiency. For example, a 3G mobile phone receiver should execute around 40 giga operations per second (GOPS) to provide a 14.4 Mbps channel, but it has a power budget of around 1 W. This means the receiver should have an energy efficiency of around 25 pJ per operation. This example shows the growing importance of efficient power management for ESs.

As we defined before, an ES is a computing system designed for specific control functions and embedded into a complete device which may include hardware and mechanical parts. Modern computing systems are implemented using CMOS technology. There are two main categories of power consumption in CMOS-based systems, namely dynamic power (P_D) and static power (P_S). The dynamic power consumption is connected with charging and discharging of the load capacitance and with short circuit currents. The static power (or leakage power) is connected with leakage currents flowing even when a transistor is closed. The following formulas determine P_D and P_S:

P_D = α C V² F;   (2.2)

P_S = I_L V.   (2.3)

Reasons for power management 1 Limited size and Battery

2

3

5

4 Countermeasures against Redundant resources

Ensuring Longevity

6 CMOS scaling

Trends in Usage pattern

Fig. 2.14 Reason for importance of power management

Growing Performance requirements

7 Enabling Green computing

44

2.

3.

4.

5.

6.

7.

2 Design of Embedded Systems

It limits the amount of heat-dissipation that can be managed. Power consumption determines sizes of power supplies, the heat-dissipation overhead, the cost, weight and area of ESs. If temperature increases, then the device failure rates increases, too. The power dissipation leads to heating. So, the power management is necessity to increase the reliability of ESs. There are hard time constraints into ESs. To meet them, designers use pessimistic estimations of worst-case execution time. It leads to redundancy in system resources to meet worst-case performance requirements. These resources lead to energy wastage. The power management should diminish this effect. Nowadays ESs often execute functions required resource-intensive applications (for example, multimedia processing). It leads to optimization of ESs in the performance direction. But it leads to increasing the switching activity and (as follows from (2.2)) to the growth of PD . The development of CMOS technology leads to increasing both on-chip transistor density and speed. It results in the chip utilization wall which limits the fraction of the chip that can be simultaneously used at full speed within the power budget. The further scaling has no sense without efficient power management. More and more ESs are used in mobile convergence applications. For example, they are key platforms for web browsing, imagining, and video steaming. Such huge amount of ESs makes their total power consumption very high. Data processing leads to power consumption. In 2010, it was estimated that information and communications technology (IT) contributes around 3% of overall carbon footprint. Now many corporate IT departments follow the main tendency of green computing. It means that they try to reduce the environmental effect of their activity. The efficient power management is very important for achieving these goals.
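A quick numeric check of formula (2.2) shows why voltage scaling is so effective: dynamic power scales linearly with frequency but quadratically with voltage. The parameter values below are illustrative, not taken from any datasheet.

```python
# Dynamic power P_D = alpha * C * V^2 * F, as in Eq. (2.2).
# All parameter values below are illustrative, not measured.

def dynamic_power(alpha, C, V, F):
    """Dynamic power in watts: switching activity alpha, load
    capacitance C (farads), voltage V (volts), frequency F (hertz)."""
    return alpha * C * V**2 * F

base = dynamic_power(alpha=0.2, C=1e-9, V=1.2, F=1e9)     # 0.288 W
# Halving the frequency halves P_D...
half_f = dynamic_power(alpha=0.2, C=1e-9, V=1.2, F=0.5e9)
# ...but lowering the voltage from 1.2 V to 0.9 V at the same frequency
# cuts P_D by (0.9/1.2)^2 ≈ 0.56: the quadratic term dominates.
low_v = dynamic_power(alpha=0.2, C=1e-9, V=0.9, F=1e9)
print(base, half_f, low_v)
```

This quadratic dependence on V is exactly what the DVFS approach discussed next exploits.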

There are many energy saving approaches, but they can be divided into four categories (Fig. 2.15). Let us start with the dynamic voltage and frequency scaling (DVFS) approach. Its idea is clear from (2.2): reducing the voltage and/or the frequency leads to energy saving. This technology is used in many microprocessors produced by AMD and Intel. It has three disadvantages:

Fig. 2.15 Classification of energy saving approaches: dynamic voltage and frequency scaling; using low power modes; microarchitectural techniques; using specialized IP-cores

1. The circuit needs more time to execute necessary calculations. It can lead to missed deadlines. 2. It requires programmable clock generator and DC-DC converters. It leads to increasing chip area (energy overhead). 3. Increase of PL and using multi-core processing diminish returns from DVFS. Let us point out that missing some task deadlines can be acceptable in multimedia applications. They just remain unnoticed by human beings. So, DVFS has rather good perspectives in such systems. For example, there are two operation modes in a portable MP3 player. The mono mode requires less computation power than stereo mode. So, it is possible to use smaller voltage at the time of mono mode. It leads to saving the battery power. In ESs, the hardware has a range of operating modes. Different modes require different amount of power. Also, they need different time to return back to the normal mode. When a system is in the idle mode, it can operate with very low voltage. The techniques of this category are known as the power mode management (PMM). The microarchitecture techniques intend to save energy in specific blocks of ESs. For example, it is possible to compress the data in RAM to increase the effective size of the memory. Also, it can be done by tuning the size of cache- memory. The performance of CMOS-based circuit is primarily sharped by power concerns. Because of it, the performance could be improved due to using specialized IP-cores such as GPUs, FPGAs, DSPs, ASICs etc. These cores allow combining increasing performance and decreasing power consumption. So, there are a lot of different techniques for efficient power management. A designer should find an approach leading to a balance between the system performance and power usage. It should be done in such a way that dependability of an ES is high enough. Let us discuss a case of a microcontroller (MC) targeting low-power applications. Its mode of operation is show in Fig. 2.16. Let us explain it. 
If there is no need for data processing, then all circuitry responsible for calculations is in the deep-sleep mode. The synchronization is switched off, and the voltage levels are kept as low as possible. If there are no wake-up events, the MC keeps sleeping; only the part responsible for energy management stays awake. So, the voltage levels are very low and no clock pulses enter the digital part. If it is necessary to execute some task (there is a wake-up event), then the wake-up maintenance is executed. When the necessary voltage levels are reached and the synchronization is connected to the digital part, it is possible to execute the task. The firmware analyses the nature of the task. For example, the execution of a digital filtering algorithm is a process-dependent task: it is executed at the fastest clock speed feasible while meeting all other design requirements. In contrast, receiving a byte of data through a serial interface is a time-dependent task: it should use the slowest clock speed feasible in the design. When the current task is finished, there are two possible scenarios. If some tasks are waiting to be executed, then the cycle of execution is repeated for a new task. If there are no tasks, then the MC returns to the sleep mode.


2 Design of Embedded Systems

Fig. 2.16 Mode of operation of a power-saving microcontroller (flowchart: sleep mode → wake-up event? → wake-up maintenance → ready? → for a time-dependent task, run the clock at the lowest frequency that can be used to complete the task; otherwise run it at the highest frequency available → task complete? → other pending tasks? → back to the sleep mode)
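The decision loop of Fig. 2.16 can be sketched as a small simulation (a Python illustration added here, not part of the book; the task names, the two clock frequencies, and the dictionary-based task format are invented for the example):

```python
# Hypothetical clock frequencies for the two operating points.
LOW_FREQ_HZ = 32_768         # slowest feasible clock (time-dependent tasks)
HIGH_FREQ_HZ = 48_000_000    # fastest feasible clock (process-dependent tasks)

def pick_clock(task):
    """Time-dependent tasks run at the lowest feasible frequency;
    process-dependent tasks run at the highest available one."""
    return LOW_FREQ_HZ if task["time_dependent"] else HIGH_FREQ_HZ

def run_mcu(pending):
    """Execute the pending tasks one by one, then return to deep sleep."""
    trace = []
    while pending:                    # wake-up event: tasks are waiting
        task = pending.pop(0)         # wake-up maintenance done, run the task
        trace.append((task["name"], pick_clock(task)))
    trace.append(("deep_sleep", 0))   # no other pending tasks: back to sleep
    return trace
```

For example, receiving a byte over a serial interface would run at 32 768 Hz, while a digital filtering task would run at 48 MHz, and the MC would sleep afterwards.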

Two general rules follow from this example:

1. Keep the MC in its lowest-power, deep-sleep mode whenever possible.
2. When a task is executed, operate in the lowest-power operational state possible.

To design such a system, a designer should start from an estimation of the power budget. A designer should find which functional components are time-dependent and which are process-dependent. Next, it is useful to estimate the time required to execute each firmware component. We understand the firmware as the set of microprograms kept in the MC's memory. Also, it is necessary to find out the frequency at which each component will execute. This gives a rough estimate of the average current consumption. Creating the preliminary power budget helps designers to choose:

1. The type or types of batteries that will be appropriate for the design.
2. The type of microcontroller which is best suited to the project.
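The rough estimate described above can be written down directly (an illustrative Python sketch added here; all component numbers are hypothetical):

```python
def average_current_ma(components, sleep_current_ma):
    """Estimate the average current draw in mA.

    components: list of (runs_per_second, seconds_per_run, active_mA)
    tuples, one per firmware component; the MC is assumed to sleep for
    the remaining fraction of each second.
    """
    duty = sum(f * t for f, t, _ in components)
    assert duty <= 1.0, "components cannot be busy more than 100% of the time"
    active = sum(f * t * i for f, t, i in components)
    return active + (1.0 - duty) * sleep_current_ma

# Hypothetical budget: a sensor read 10 times/s for 2 ms at 5 mA, a radio
# burst once/s for 20 ms at 15 mA, and 2 uA of deep-sleep current.
avg = average_current_ma([(10, 0.002, 5.0), (1, 0.020, 15.0)], 0.002)
```

Such a figure, multiplied by the expected operating time, points directly at the battery capacity the design needs.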


What should a designer check if some 32-bit microcontroller is chosen? For example, the answer could be found in the report of Silicon Labs "Balancing Performance and Power Efficiency in Embedded Systems". Next, we present an excerpt from this report, where the abbreviation MCU means microcontroller. "When choosing a 32-bit microcontroller for an embedded system, designers must dig deeply into product specifications. Most MCU vendors offer product selector guides and matrices that show basic feature sets, code and RAM footprints and pinouts of the MCUs they offer, which are useful first-pass tools to eliminate MCUs that do not meet basic design requirements. However, be cautious when comparing feature sets between MCU vendors, as specifications from one vendor might be defined differently by other vendors. For example, if a design requires a high-performance analog-to-digital converter (ADC), make sure that the ADC achieves the level of performance you require across the temperature and voltage range that matches the operating range of your product. Some integrated ADCs cannot achieve their highest performance specifications across the full operating range of the MCU. Careful specification examination is especially important in low-power applications. Some MCUs will be affected by multiple performance compromises when operating at the lower voltage levels seen in battery-powered applications. In addition to checking high-performance analog peripherals for specification compromises, make sure to check that the microcontroller can operate at the system clock speed your product requires. An MCU that can only operate at 75 percent of the expected oscillator speed when operating on battery power will have a dramatic impact on average current consumption in systems with process-dependent firmware components.
One microcontroller capable of delivering the performance required without exceeding our low-power current consumption requirements is Silicon Labs' SiM3C1xx Precision32™ 32-bit microcontroller based on the ARM® Cortex™-M3 processor. This 32-bit MCU includes a sophisticated power management block designed to achieve low-power performance, an on-chip 16-bit capacitance-to-digital converter, many serial interfaces and a code/RAM/feature set that gives enough performance margin to be future-proofed and ready for reuse in more advanced applications. In many respects, architecting a low-power system is an exercise in "doing more with less". At every point in the development process, from MCU selection to code creation, designers should be asking themselves whether there is a way to avoid performing unnecessary tasks. When a task must be performed, the driving priority is to complete it in a way that achieves the design requirements while consuming as little current as possible. However, designers must take care that optimizations made to the system do not severely compromise performance."
It is estimated that the next generation of ESs will possess capabilities for high-speed video processing and communication which will require at least an order of magnitude better energy efficiency than is available now. This shows the necessity of efficient power management at all design levels discussed in Sect. 2.1.


References

1. D.D. Gajski, A. Gerstlauer, S. Abdi, G. Schirner, Embedded System Design: Modeling, Synthesis and Verification (Springer, US, 2006)
2. S. Mittal, A survey of techniques for improving energy efficiency in embedded computing systems (2014), arXiv:1401.0765
3. M.J.S. Smith, Application Specific Integrated Circuits (Pearson Education, London, 2006)

Chapter 3

Implementation of Computational Algorithms in Embedded Systems

Abstract The chapter presents some methods used for implementing computational algorithms in embedded systems. All methods are illustrated using the same end-to-end example. We start from the hardware implementation, when operational units (adders, multipliers, and others) are used to implement an algorithm during a single cycle of operation. Next, we discuss how to minimize hardware by introducing an internal control unit. The third section is devoted to the software implementation of computational algorithms based on microcontrollers. Some families of microcontrollers are analysed. Next, we show how to choose blocks for implementing the embedded system for our example. We choose the ATmega32 microcontroller to illustrate this implementation of the embedded system. At last, the heterogeneous implementation of computational algorithms is discussed. It is based on the well-known concept of hardware-software co-design. We introduce the concept of a hardware-software interface and show how to create an algorithm of its operation.

3.1 Hardware Implementation of Algorithms

The embedded system [1–3] considered in Sect. 1.1 made decisions based on the analysis of data coming from the temperature sensor. This is quite simple and does not require the use of computations. However, very often decision-making is accompanied by the need to process large amounts of data. For example, a missile control system should find the point of meeting with a target. To do this, it is necessary to use some system of equations. The arguments of these equations include the coordinates of the missile and the target, the target's speed and acceleration, weather conditions, and so on. The solution of that system of equations could lead to a change in the speed or trajectory of the flight of the missile. Let us discuss a very simple example, in which we consider some abstract problem.

Example 1 The input of the ES receives data from the sensors A and B. The data are represented as binary codes having 8 bits. For decision-making it is necessary to find the value of the function P defined as follows:

© Springer Nature Switzerland AG 2019. A. Barkalov et al., Foundations of Embedded Systems, Studies in Systems, Decision and Control 195, https://doi.org/10.1007/978-3-030-11961-4_3



P = AB + A²B² + A³B³.   (3.1)

If P > 1250, then it is necessary to execute the action Alarm := 1. So, the ES reports a potential threat. The further decision is made either by the operator or by other subsystems of some monitoring and control system.

Any computational algorithm could be implemented as a program (software implementation) or as some device (hardware implementation). However, in both cases it is necessary to present the initial problem in the form of a block diagram. The development of such a block diagram is an art; the quality of the solution largely depends on the designer's experience. The diagram is represented as a composition of blocks. There are four main types of blocks: a start (begin) block, an end (finish) block, an operational block and a conditional block. Each operational block is connected with executing some operations. The execution of computational operations takes time. So, we have two requirements for the block diagram. First, it should be as short as possible. Second, each operational block should contain only standard operations that compute values of functions of at most two arguments. The first requirement allows minimizing the execution time of the algorithm. The second requirement allows using only standard operating elements to perform the calculations; in turn, this reduces the development time and the cost of a system. To fulfil the first requirement, it is often necessary to transform the original formula. For example, it is possible to transform Eq. (3.1):

P = AB + (AB)² + (AB)³ = AB + AB · AB + (AB · AB) · AB.   (3.2)

The expression (3.2) defines the block diagram shown in Fig. 3.1.
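The block diagram can be transcribed directly, every step a standard two-operand operation (a Python sketch added here for illustration, not part of the book; the function names are invented):

```python
def compute_p(a, b):
    """Evaluate P = AB + (AB)^2 + (AB)^3 as in Fig. 3.1."""
    s1 = a * b        # block 2: S1 := A x B
    s2 = s1 * s1      # block 3: S2 := S1 x S1
    p = s1 * s2       # block 4: P := S1 x S2, i.e. (AB)^3
    p = p + s2        # block 5: P := P + S2
    p = p + s1        # block 6: P := P + S1
    return p

def alarm(a, b, threshold=1250):
    """Block 7: report a potential threat when P exceeds the threshold."""
    return compute_p(a, b) > threshold
```

For instance, A = 10, B = 1 gives P = 1110 and no alarm, while A = 11, B = 1 gives P = 1463 and raises the alarm; both values reappear in the data analysis later in this section.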

Fig. 3.1 Block diagram of the algorithm for calculating P (Start → block 1: A := <data1>, B := <data2> → block 2: S1 := A × B → block 3: S2 := S1 × S1 → block 4: P := S1 × S2 → block 5: P := P + S2 → block 6: P := P + S1 → block 7: P > 1250? "No": back to block 1; "Yes": block 8: Alarm := 1 → End)


Block 1 is associated with reading data from the outputs of the sensors. Block 2 calculates the value of AB, block 3 the value of (AB)², and block 4 the value of (AB)³. At this point, P = (AB)³. Adding S2 = (AB)² to P (block 5) gives (AB)³ + (AB)². Adding S1 = AB to P (block 6) gives the final value of P. The condition P > 1250 is checked in block 7. If P ≤ 1250 (output "No" from block 7), then we should read the next operands (the feedback to block 1). Otherwise, it is necessary to indicate the alarm situation (block 8). The termination of calculations occurs automatically, so no additional checks are necessary.

The algorithm shown in Fig. 3.1 allows obtaining the structural diagram of the computing device (Fig. 3.2). We use registers RA and RB to keep the values of A and B, respectively. If y1 = 1, then RA := A. If y2 = 1, then RB := B. At the same time, y1 causes the operation Alarm = 0 (Alarm is the output of the RS trigger T). Obviously, the sign "×" stands for multiplication and the sign "+" for summation. Parallel multipliers are used to perform the multiplications, parallel adders to perform the additions. Each operational element is denoted by a digit corresponding to the number of the operational block in Fig. 3.1. The checking block is shown as a trapezoid. Its output is connected with the input S of the trigger. So, if P > 1250, then Alarm = 1.

As a rule, there are many options for the hardware implementation of an algorithm. To choose the optimal solution, it is necessary to evaluate each of these possible options. The following estimates could be used: the cost (it depends on the hardware amount), the performance (how much time is required for execution of the algorithm), or the consumed energy. Let us find how many adders are necessary to implement the circuit of the device shown in Fig. 3.2. Let us call it the device U1. Let us start from parallel adders.
Consider the following example: S = A + B, where A = 101 and B = 011. So, there is n = 3, where n is the number of bits in A and B. This example is shown in Fig. 3.3a. Naturally, we get S = 1000 as a result of adding A = 101 and B = 011. So, if the operands have n bits, then the sum could have n + 1 bits. To find the result of the addition, it is necessary to connect three one-bit adders (Fig. 3.3b). Each of them has

Fig. 3.2 Structural diagram of the device for calculating the value P (the registers RA and RB are loaded by y1 and y2; the multipliers 2, 3, 4 and the adders 5, 6 compute (AB)³ + (AB)² + AB; the comparator P > 1250 drives the S input of the RS trigger T, whose output is Alarm)


Fig. 3.3 Example of addition (a) and the corresponding adder (b): A = 101 (5) plus B = 011 (3) gives S = 1000 (8); three one-bit adders are chained through the carries P1–P3, and the most significant sum bit S4 is taken from the carry-out P3

three inputs and two outputs. We do not discuss this circuit in detail. Let us only point out that the most significant bit of the result is the carry-out P3.

Now, let us discuss an example of multiplication: P = A × B. Let A = B = 7 and n = 3. As a result we have P = 49 (110001 in binary). It means that: (1) the product could have 2n bits and (2) there is no overflow during the multiplication. It is possible to execute the multiplication using only an adder having n bits and a shifter, but this process requires n cycles, so it is rather slow. Parallel multipliers are used to accelerate the multiplication, but they require significantly more adders than a sequential circuit. There is a large number of implementation methods for parallel multipliers. Let us choose an approach that requires (n² − n)/2 single-bit adders to organize the parallel multiplication.

Let us find the number of adders in U1. To find S1, (8² − 8)/2 = 28 adders are necessary. There are 16 bits in S1. To find S2, (16² − 16)/2 = 120 adders are necessary. There are 32 bits in S2. To find S3 = (AB)³, (32² − 32)/2 = 496 adders are necessary. There are 48 bits in S3 (because there are 32 bits in S2 and 16 bits in S1). So, 48 adders are necessary to find the sum of S1 and S2. At last, 48 adders are necessary to find the value of P. In total, 740 adders are necessary in U1 (N(U1) = 740).

Let us determine the time characteristics of U1. Analysis of Fig. 3.2 shows that the calculation chain includes three multipliers and two adders. So, the latency includes three multiplication times and two addition times: T(U1) = 3Tmul + 2Tadd.

Let us show that a reorganization of U1 could reduce the number of adders and improve the time characteristics. Let us consider the device U2 (Fig. 3.4). There is a number of differences in the organization of U1 and U2. First, there are 32 adders in the adder S4.
Secondly, the calculation of S3 and S4 occurs in parallel. It means that T(U2) = 3Tmul + Tadd. Also, there are N(U2) = 724 adders. It means that a simple reorganization of the computation process brought about a saving in both hardware and time. In addition, the reduction in the hardware amount leads to a reduction in the energy consumed by U2.

The quality of the circuit can be improved by taking into account the characteristics of the data. Let us analyse the initial problem. We can assert the following:

1. If A = 0, then P = 0. In this case Alarm = 0 and there is no need for calculations.
2. If B = 0, then P = 0. So, there is no need for calculations.
3. If A = 10 and B = 1, then P = 1110. No alarm.


Fig. 3.4 Structural diagram of device U2 (as U1, but the adder S4 = S1 + S2 works in parallel with the multiplier chain computing S3 = (AB)³)
Fig. 3.5 Improved algorithm of calculations (after loading A := <data1> and B := <data2>, the added checks A = 0, B = 0, A > 10, B > 10 and AB > 10 either skip the computation, set Alarm directly, or pass control to the chain S1 := A × B; S2 := S1 × S1; P := S1 × S2; P := P + S2; P := P + S1; P > 1250?)

4. If A = 11, then P ≥ 1463. So, there is Alarm = 1.
5. If B = 11, then P ≥ 1463. So, there is Alarm = 1.
6. If AB > 10, then P ≥ 1463. So, there is Alarm = 1.

This analysis allows us to draw the following conclusion: the full chain of calculations is needed only when none of these conditions holds; if A > 10, B > 10 or AB > 10, the alarm can be set at once. Now we can reorganize the initial algorithm (Fig. 3.1). We should add some checking points in the new algorithm (Fig. 3.5). The improved algorithm has more branches than the original one. However, it takes into account the data structure, which makes it possible to produce fewer


calculations. Since the calculations are related to energy consumption, the improved algorithm is more energy efficient.

Let us continue the analysis. Calculations should be performed only for numbers A and B which do not exceed 10. It means that only the four least significant bits of the operands are significant. That makes it possible to reduce the number of adders in the device circuit considerably. Let us use the device U2. Now there are: (1) 6 adders in S1; (2) 28 adders in S2; (3) 70 adders in S3; (4) 16 adders in S4; and (5) 24 adders in P. So, there are only 144 adders in U2 based on the improved algorithm of calculations. This is five times less than the number required in the device U2 (Fig. 3.4) based on the initial algorithm.

To reduce the energy consumption, it is possible to use the results of the checks. For example, if A = 0, then the supply voltage for the operational blocks is reduced, which makes the computations impossible. So, the device now consumes much less energy. Let us note that if the time characteristics of U2 previously satisfied the time constraints, the fact that the calculations are made faster does not play a significant role. However, reducing the hardware amount leads to a reduction in the price of the ES. In addition, reducing the energy consumption extends the battery life of the device and also makes it more economical.

Now, let us discuss how to execute these calculations having only a single parallel adder and a single parallel multiplier. In the devices U1–U2, the computational process is distributed in space: information is transferred from one operational unit to another, and the calculations take place in a single, albeit fairly long, cycle. If the number of blocks is limited, then these blocks are used repeatedly during the calculation; in this case the computation process is distributed in time.
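The adder counts quoted above follow from the (n² − n)/2 rule for a parallel n × n multiplier. The sketch below (Python, added here for illustration; it models only the equal-width 8-bit case, since the improved 4-bit design mixes operand widths) reproduces the N(U2) = 724 figure:

```python
def multiplier_adders(n):
    """One-bit adders in a parallel n x n multiplier: (n^2 - n) / 2."""
    return (n * n - n) // 2

def u2_adders(n):
    """One-bit adders in the device U2 for n-bit operands A and B."""
    s1 = multiplier_adders(n)        # S1 = A x B       (2n-bit result)
    s2 = multiplier_adders(2 * n)    # S2 = S1 x S1     (4n-bit result)
    s3 = multiplier_adders(4 * n)    # S3 = (AB)^3, counted as a 4n x 4n array
    s4 = 4 * n                       # adder S4 = S1 + S2
    p = 6 * n                        # adder P = S3 + S4
    return s1 + s2 + s3 + s4 + p
```

With n = 8 this gives 28 + 120 + 496 + 32 + 48 = 724 one-bit adders, matching the count for U2 above.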

3.2 Hardware Reduction for Operational Devices

It is possible to implement the algorithm from Fig. 3.5 using only a single adder and a single multiplier. Of course, we should define the number of bits for each device. Analysis of U2 shows that there are 24 adders in the adder P. So, we should use a single adder having 24 bits. Next, there are 24 bits in the word S3. So, we should use a multiplier capable of finding products having 24 bits. We need some registers to keep the initial values of A and B, as well as the product P and the intermediate results of the calculations (S1, S2, S3, S4). We need buses to transfer data from the registers to the computational part including the adder and the multiplier. The bit width of the buses should be enough for transferring the longest word of information. So, the buses should have 24 bits. We conduct the synthesis in two stages. During the first stage, we find some rough solution. Next, we try to optimize it. The optimization is reduced to diminishing the number of registers and interconnections between the buses and registers. Consider the device U3 (Fig. 3.6).


Fig. 3.6 Structural diagram of device U3 (seven 24-bit registers RA, RB, RS1, RS2, RS3, RS4 and RP connected by the buses BA, BB and BZ; the control signals y1–y26 gate the register loads, the bus transfers, the adder and the multiplier; the comparators > 1250 (x1), = 0 (x2) and > 10 (x3) observe the bus BZ and drive the RS trigger producing Alarm)

There are three buses in U3 (BA, BB and BZ). To transfer information, internal control signals yn ∈ Y are used. Here Y = {y1, . . . , yN} is the set of internal control signals; N = 26 for U3. The meaning of the signals is obvious from Fig. 3.6. For example, if y1 = 1, then RA := BZ (the data from the bus BZ are loaded into RA). If y22 = 1, then BZ = BA + BB. If y25 = 1, then two actions are executed: BZ = <data2> and Alarm := 0. There are 7 registers having 24 bits in U3. Data from the registers can be transferred through the buses to the computational part. For example, if y8 = 1, then BA = RA. If y21 = 1, then BB = RP.

Let us explain the difference between actions with the signs "=" and ":=". An operation A = B is executed continuously. An operation A := B is executed when the synchronization pulse Clock changes, for example, from 1 to 0. The sign ":=" stands for operations with registers, whereas "=" stands for operations in the combinational parts of circuits.

Now each operation from Fig. 3.5 should be expressed as some set of operations initiated by signals yn ∈ Y. So, the algorithm from Fig. 3.5 should be transformed. The transformed algorithm is shown in Fig. 3.7. Let us explain it. We replaced all actions by the corresponding internal control signals yn ∈ Y. Also, each check is shown as some logical condition xe ∈ X = {x1, x2, x3}. As follows from Fig. 3.6, x1 = 1 if BZ > 1250, x2 = 1 if BZ = 0 and x3 = 1 if BZ > 10. The first node is connected with the following actions: BZ = <data1>, RA := BZ. It corresponds to the action A := <data1> from Fig. 3.5. During this cycle, the value of A is present on the bus BZ. So, we can check both conditions A > 10 and A = 0 before executing the next action. The node 2 is connected with the action


Fig. 3.7 Transformed algorithm of calculations (operator nodes 1–8 marked with the control signals they generate: node 1: y24 y1; node 2: y25 y2; node 3: y8 y11 y23 y3; node 4: y12 y13 y23 y4; node 5: y12 y15 y23 y7; node 6: y14 y21 y22 y7; node 7: y12 y21 y22 y7; node 8: y26; the conditions x2 and x3 route control after nodes 1–3, and x1 is checked after node 7)

B := <data2>. To do it, the signals y2 and y25 are generated. During this cycle, it is possible to check the conditions B = 0 (x2) and B > 10 (x3). If x3 = 0, then we should start the calculations. In node 3 the following actions are executed: BA = RA, BB = RB, BZ = BA × BB, RS1 := BZ (y8, y11, y23, y3). During this cycle the condition AB > 10 can be checked. If x3 = 1, the Alarm is set (node 8). Otherwise, the calculations are continued as the sequence of nodes 4–7.

Analysis of Fig. 3.7 shows that the registers RS3 and RS4 are not used. So, they should be eliminated (together with the corresponding control signals). Next, some control signals are redundant. They form the set YR = {y5, y6, y9, y10, y16, y17, y18, y19, y20}. Based on this analysis, we can improve the circuit of the device U3 (Fig. 3.8). In the improved U3, there are 24 adders in the parallel adder and 70 adders in the parallel multiplier. So, there are 94 adders in the circuit. It is the best of the solutions from the point of view of hardware. But 7 cycles are necessary to get the result if A ≤ 10, B ≤ 10 and AB ≤ 10, i.e. when the full chain of calculations must be executed.
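One plausible reading of the transformed algorithm is the cycle-by-cycle walk below (a Python illustration added here, not from the book; the function name is invented, and one register-transfer operation is assumed per clock cycle):

```python
def run_u3(a, b):
    """Return (alarm, cycles) for the transformed algorithm of Fig. 3.7."""
    bz = a; cycles = 1                  # node 1: BZ = <data1>, RA := BZ
    if bz == 0:  return False, cycles   # x2: A = 0, no calculations needed
    if bz > 10:  return True, cycles    # x3: A > 10, alarm at once
    ra = bz
    bz = b; cycles += 1                 # node 2: BZ = <data2>, RB := BZ
    if bz == 0:  return False, cycles
    if bz > 10:  return True, cycles
    rb = bz
    bz = ra * rb; cycles += 1           # node 3: RS1 := RA x RB
    if bz > 10:  return True, cycles    # x3: AB > 10, alarm (node 8)
    rs1 = bz
    bz = rs1 * rs1; cycles += 1         # node 4: RS2 := RS1 x RS1
    rs2 = bz
    bz = rs1 * rs2; cycles += 1         # node 5: RP := RS1 x RS2
    rp = bz
    bz = rp + rs2; cycles += 1          # node 6: RP := RP + RS2
    rp = bz
    bz = rp + rs1; cycles += 1          # node 7: RP := RP + RS1
    return bz > 1250, cycles            # x1: the final comparison
```

The longest path (all checks pass) takes the 7 cycles mentioned above, while an input such as A = 11 is resolved in a single cycle.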


Fig. 3.8 Improved device U3 (the registers RS3 and RS4 and the redundant control signals are eliminated; the remaining registers RA, RB, RS1, RS2 and RP share the buses BA, BB and BZ with the adder, the multiplier and the comparators x1, x2, x3)
Fig. 3.9 Structural diagram of the U3-based embedded system (the operational unit U3 receives the data A and B and the control signals Y; the internal control unit receives the logical conditions X and generates Y)

There is a second peculiarity of this approach. It is necessary to have an internal control unit (ICU) to generate the correct sequence of control signals yn ∈ Y. So, the ES based on U3 includes two parts (Fig. 3.9). The operational unit is represented by U3. The ICU is specified by the STG (Fig. 3.10). It is necessary to synthesize the circuit of the ICU. To do so, it is necessary to execute the following steps:

1. To encode the states am ∈ A.
2. To construct the direct structure table of the FSM.
3. To derive the system of Boolean functions representing the FSM circuit.
4. To implement the ICU circuit using particular logic elements.

In the discussed case, there is the set of states A = {a1, . . . , a9}. It gives M = |A| = 9. Let us encode the states am ∈ A by codes K(am) having R bits:

R = ⌈log₂ M⌉.   (3.3)

Let us use state variables Tr ∈ T = {T1 , . . . , TR } for the state encoding. In the discussed case, there is R = 4. It gives the set T = {T1 , . . . , T4 }.
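The trivial encoding used below can be generated mechanically (a short Python illustration added here, not part of the book):

```python
from math import ceil, log2

def encode_states(states):
    """Assign R-bit binary codes in order, with R = ceil(log2(M))."""
    m = len(states)
    r = ceil(log2(m))
    return r, {s: format(i, "0{}b".format(r)) for i, s in enumerate(states)}

# The nine states a1..a9 of the ICU need R = 4 state variables T1..T4.
r, codes = encode_states(["a{}".format(i) for i in range(1, 10)])
```

This reproduces the trivial encoding K(a1) = 0000, . . . , K(a9) = 1000 used in the text.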


Fig. 3.10 State transition graph of the Moore FSM corresponding to the ICU (the states a1–a9 are labelled with the control signals they generate, and the arcs with the conditions built from x1, x2 and x3, as in Table 3.1)
Fig. 3.11 Structural diagram of the Moore FSM (a combinational part generating the functions Φ and Y, and the state register RG whose outputs T are fed back to the combinational part)

There are many different methods for executing the state encoding. Each method can lead to a circuit whose hardware amount differs from the circuits obtained by other methods. Now we just try to understand how to get the circuit. So, let us encode the states in the trivial way: K(a1) = 0000, . . . , K(a9) = 1000. State codes are kept in a special state register (RG). There are R flip-flops in the RG. As a rule, these flip-flops have informational inputs of the D type. The flip-flops of the RG have common inputs of synchronization and reset. To change the content of the RG, special input memory functions (IMF) are used. They form the set Φ = {D1, . . . , DR}. The structural diagram of the Moore FSM is shown in Fig. 3.11. It includes a combinational part and a register. There is a feedback connecting the RG and the combinational part. The register is a standard element, but each FSM has an original circuit of its combinational part. To implement this circuit, it is necessary to find the systems of Boolean functions (SBF):

Table 3.1 Direct structure table of ICU

am (signals)             K(am)  as  K(as)  Xh       Φh          h
a1                       0000   a2  0001   1        D4          1
a2 (y1, y24)             0001   a9  1000   x2 x3    D1          2
                                a3  0010   x2 x̄3    D3          3
                                a2  0001   x̄2       D4          4
a3 (y2, y25)             0010   a9  1000   x2 x3    D1          5
                                a4  0011   x2 x̄3    D3 D4      6
                                a2  0001   x̄2       D4          7
a4 (y3, y8, y11, y23)    0011   a5  0100   1        D2          8
a5 (y4, y12, y13, y23)   0100   a6  0101   1        D2 D4      9
a6 (y7, y12, y15, y23)   0101   a7  0110   1        D2 D3      10
a7 (y7, y14, y21, y22)   0110   a8  0111   1        D2 D3 D4   11
a8 (y7, y12, y21, y22)   0111   a9  1000   x1       D1          12
                                a1  0000   x̄1       –           13
a9 (y26)                 1000   a1  0000   1        D1          14

Φ = Φ(T, X);   (3.4)

Y = Y(T).   (3.5)

To find the functions (3.4)–(3.5), it is necessary to construct the direct structure table (DST) of the Moore FSM. It includes the following columns: am is the current state; K(am) is the code of the state am ∈ A; as is the state of transition; K(as) is the code of the state as ∈ A; Xh is the input signal determining the transition ⟨am, as⟩; Φh is the collection of IMFs equal to 1 to replace the code K(am) by the code K(as) in the RG; h is the number of the transition (h ∈ {1, . . . , H}). In the column am, the collection of control signals yn ∈ Y generated in this state is also written. In our example, the DST is represented by Table 3.1. There is a clear correspondence between the STG (Fig. 3.10) and the DST of the ICU (Table 3.1). The column Φh includes the symbol Dr if there is a 1 in the rth position of K(as) in the hth row of the DST.

To derive the functions (3.4), it is necessary to find the functions Fh corresponding to the rows of the DST. They are determined as

Fh = Am · Xh = T1^lm1 · T2^lm2 · … · TR^lmR · Xh  (h ∈ {1, . . . , H}).   (3.6)

Functions (3.6) are called product terms. The conjunction Am represents the code K(am) from the hth row of the DST. The symbol lmr stands for the value of the rth bit of K(am); lmr ∈ {0, 1}, Tr⁰ = T̄r, Tr¹ = Tr (r ∈ {1, . . . , R}).
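A product term can be evaluated mechanically from a DST row (a Python illustration added here, not part of the book; the helper names are invented):

```python
def conjunction(code):
    """Return a predicate over T = (T1, ..., TR) matching the code K(am):
    a bit lmr = 1 requires Tr, a bit lmr = 0 requires the negation of Tr."""
    bits = tuple(int(c) for c in code)
    return lambda t: all(tv == b for tv, b in zip(t, bits))

def product_term(code, cond):
    """Fh = Am . Xh, where cond is a predicate over the inputs x."""
    am = conjunction(code)
    return lambda t, x: am(t) and cond(x)

# Row h = 2 of Table 3.1: F2 = A2 . x2 x3 with K(a2) = 0001.
f2 = product_term("0001", lambda x: x["x2"] and x["x3"])
```

The term fires only when the register holds the code 0001 and both x2 and x3 are true, exactly as in the hand derivation of D1 below.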


For example, there is the following equation for the function D1: D1 = F2 ∨ F5 ∨ F12 ∨ F14. The following equations determine the terms: F2 = A2 X2 = T̄1 T̄2 T̄3 T4 x2 x3, F5 = A3 X5 = T̄1 T̄2 T3 T̄4 x2 x3, F12 = A8 X12 = T̄1 T2 T3 T4 x1 and F14 = A9 X14 = T1 T̄2 T̄3 T̄4.

Terms could be optimized using the insignificant assignments of state variables. In this case, there are 7 such assignments: 1001, . . . , 1111. This allows deleting one literal from the terms F2, F5, F12 and three literals from F14. Now, we have the following equation: D1 = T̄2 T̄3 T4 x2 x3 ∨ T̄2 T3 T̄4 x2 x3 ∨ T2 T3 T4 x1 ∨ T1. The simpler the formula, the fewer logic elements there are in the circuit implementing it.

The functions (3.5) could be derived from the column am of a DST. They depend on the terms Am (m ∈ {1, . . . , M}). For example, the following Boolean function corresponds to the signal y7: y7 = A6 ∨ A7 ∨ A8 = T̄1 T2 T̄3 T4 ∨ T̄1 T2 T3 T̄4 ∨ T̄1 T2 T3 T4 = T2 T̄3 T4 ∨ T2 T3 T̄4 ∨ T2 T3 T4 = T2 T3 ∨ T2 T4. The insignificant assignments were used for the first simplification; as a result, the number of literals is diminished. The second simplification is based on the law of expansion. As a result, the function y7 is expressed by two terms and only four literals. It is very important to optimize the hardware in the circuit of the ICU. The methods of optimization depend strongly on the logic elements used for implementing the circuit. In Chap. 5, we discuss this problem for field programmable gate arrays.

It is possible to diminish the hardware in the operational unit. We can eliminate the multiplier and execute the multiplication using only the adder. In this case, n cycles of operation are necessary to find the product. Let us consider the following example. Let us find the product P = A × B if A = 1011 and B = 1101. Let us begin the multiplication starting with the least significant bit of the multiplier B.
If the current bit of B is 1, then the multiplicand A is added to the partial product. Next, the partial product is shifted one bit to the right. As a result, we should get P = 10001111₂ = 143₁₀. This example is shown in Fig. 3.12. In the beginning, P = 0. Analysis of B4 shows that it is necessary to add A to get P1: P1 := 0 + A. Next, it is shifted: P1 := R1(P1). Analysis of B3 shows that no addition should be implemented, so the partial product is simply shifted to the right: P2 := R1(P1). And so on. We can represent this algorithm by a block diagram (Fig. 3.13). Some counting device CT is used to find the instant of the algorithm's termination. To simplify the block diagram, we shift the multiplier B to the right as well. It allows checking only the bit B[n]. Now the number of blocks does not depend on the number of bits of the multiplier.

Now we can turn the device U3 into U4. To do it, we should eliminate the block of parallel multiplication and add a shifter. Also, we need a register to organize the counter. This approach allows reducing the number of adders to 24. However, such a saving of hardware leads to reduced performance: the device U4 is obviously much slower than U3.

So, there are several variants of implementation of the same computations. Calculations can be performed in a single cycle; this approach is associated with the highest hardware consumption. Saving hardware increases the number of cycles. It is also connected with the emergence of an internal control unit.
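The shift-and-add procedure just described can be sketched as follows (a Python illustration added here, not from the book; shifting the multiplicand left is used as the equivalent of shifting the partial product right):

```python
def shift_add_multiply(a, b, n):
    """Multiply two n-bit numbers with a single adder: n cycles, one bit
    of the multiplier B examined per cycle, least significant bit first."""
    p = 0
    for i in range(n):
        if (b >> i) & 1:      # current bit of B is 1:
            p += a << i       # add the appropriately shifted multiplicand
    return p

p = shift_add_multiply(0b1011, 0b1101, 4)   # 11 x 13
```

This reproduces P = 10001111₂ = 143 for A = 1011 and B = 1101.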


Fig. 3.12 Sequential multiplication of binary numbers

Fig. 3.13 Block diagram of sequential multiplication


3.3 Software Implementation of Algorithms

Any computational algorithm may be represented as a program kept in a computer memory. A computer is a universal program-controlled artificial system for receiving, storing, processing and transferring information. One of the fundamental principles of computer organization is the principle of program control formulated by John von Neumann in 1945. According to it, both the program and the data are kept in the computer memory.


3 Implementation of Computational Algorithms in Embedded Systems

The development of semiconductor technology resulted in the appearance of very large scale integration (VLSI) circuits. Now, microelectronics is turning into nanoelectronics, leading to the appearance of various ultra large scale integration (ULSI) circuits. The ULSIs include more than 10⁶ transistors; modern ULSIs include billions of transistors. One representative of ULSI is a microcontroller (MC). An MC is a computing device targeting embedded applications. A typical MC includes an arithmetic logic unit (ALU), a memory of programs, a memory of data, ADCs, timers, a watchdog, controllers for different standard interfaces, a controller of a local area network (LAN) and other blocks. So, it has the Harvard architecture, because it includes separate memories for programs and data. The memory of programs is represented by either an electrically erasable programmable read-only memory (EEPROM) block or flash memory. The memory of data is represented by a random-access memory (RAM) block. A computational algorithm should be transformed into a program. Today, programs are written using high-level programming languages such as C or C++. To do it, a designer can use a personal computer. The program is transformed into the executable code represented on the level of the machine language of a particular MC. This executable code is loaded into the MC's memory of programs. This program is the embedded software, whereas the MC represents the embedded hardware of a particular ES. A simplified structural diagram of an MC is shown in Fig. 3.14. It includes six blocks connected by three main buses: the address bus, the data bus and the control bus. There is a clock system generating clock pulses. There is a reset block used to initialize the operation of the MC. The I/O ports are used to connect the MC with the periphery. The central processing unit (CPU) includes the ALU and a control unit. The I/O devices are addressed as memory cells. Communication with the I/O devices is carried out through the I/O ports. Different controllers support different protocols of data exchange. There are a lot of various MCs on the market.

Fig. 3.14 Simplified structural diagram of a microcontroller



Table 3.2 Features of microcontrollers

Feature      | Description
-------------|------------------------------------------------------------
MIPS         | Million instructions per second; a relative performance measurement
Frequency    | MC clock frequency; an alternative speed measure used when MIPS are not available
Flash        | Nonvolatile memory for the program, measured in kilobytes; could also be ROM, EPROM or EEPROM
RAM          | Volatile memory for program data, usually SRAM; measured in bytes or kilobytes
Digital I/O  | Number of digital I/O lines
Timers       | Number of timers
PCA          | Number of programmable counter arrays
ADC          | Number of analog-to-digital converters and resolution (e.g. 12-bit)
DAC          | Number of digital-to-analog converters and resolution
Comms        | List of integrated serial and Ethernet communications
Price        | Price for a given quantity

For example, the 8051 family is very popular on the modern market. The original 8051 was developed by Intel in 1980. Intel sold it until the 1990s. Today, Intel does not sell them at all, but there are about twenty manufacturers making enhanced 8051-compatible devices. This family has both commercial and open source development tools. The most popular commercial compiler is C51 (from Keil), and the most popular open source compiler is SDCC. It is really difficult to choose the best MC for a given ES. Table 3.2 describes some features which should be taken into account when the choice is made. There are some characteristics of the 8051 family produced by Silicon Labs (http://embeddedbook.x85.com). This company specializes in high-performance mixed-signal (digital and analog) MCs. The prices are shown for 2009. These chips are the most costly, but they have the highest performance and number of ADC channels (Table 3.3). Let us compare these chips with the production of Maxim Inc. (www.maxim-ic.com). Their characteristics are shown in Table 3.4. It is very important to prevent programs and data from hacking. To do it, there are only SRAM-based non-volatile memory blocks in the DS2251T. There is a very interesting proposition from Microchip Technology (www.microchip.com). They produce the PIC family of 8-bit, 16-bit and 32-bit microcontrollers. These MCs are supplied only by Microchip Technology. Their MCs have a Harvard architecture (separate blocks of memory for program and data). So, they use nonvolatile Flash or EEPROM memory for programs and volatile SRAM for keeping the data.


Table 3.3 Characteristics of Silicon Labs 8051 family

Feature      | C8051F305-GS | C8051F015-GQ   | C8051F500-IQ                     | C8051F122-GQ
-------------|--------------|----------------|----------------------------------|------------------
Package      | 14-pin SOIC  | 64-pin TQFP    | 48-pin QFP                       | 100-pin TQFP
MIPS or Freq | 25 MIPS      | 25 MIPS        | 50 MIPS                          | 100 MIPS
Flash        | 2K           | 32K            | 64K                              | 128K
RAM          | 256          | 2304           | 4352                             | 8448
I/O          | 8            | 32             | 40                               | 64
Timers       | 3            | 4              | 4                                | 5
PCA          | 3            | 5              | 6                                | 6
ADC          | No           | 8 channels     | 32 channels                      | 16 channels
DAC          | No           | 2 channels     | No                               | 2 channels
Comms        | UART, I2C    | UART, I2C, SPI | CAN 2.0, LIN 2.0, SPI, UART, I2C | 2 UARTs, I2C, SPI
Price        | $1.74@100    | $9.58@100      | $4.91@100                        | $15.39@100

Table 3.4 Characteristics of Maxim 8051 family

Feature      | DS80C320                                    | DS89C450                  | DS80C411                  | DS2251T (secure)
-------------|---------------------------------------------|---------------------------|---------------------------|---------------------------------
Package      | MQFP/44, PDIP/32, PDIP/40, PLCC/44, TQFP/44 | PDIP/40, PLCC/44, TQFP/44 | LQFP/100                  | LQFP/100
MIPS or Freq | 33 MHz                                      | 33 MIPS @ 33 MHz          | 75 MHz                    | 16 MHz
Flash        | External only                               | 64 kB                     | 64 kB                     | Up to 128 kB (non-volatile SRAM)
RAM          | 256 + external                              | 1280 bytes                | 64 kB                     | Up to 128 kB (non-volatile SRAM), 128 bytes scratchpad RAM
I/O          | 32                                          | 32                        | 64                        | 32
Timers       | 3                                           | 3                         | 4                         | 2
PCA          | 0                                           | 0                         | 0                         | 0
ADC          | No                                          | No                        | No                        | No
DAC          | No                                          | No                        | No                        | No
Comms        | 2 UARTs                                     | 2 UARTs                   | 3 UARTs, 1-Wire, Ethernet | UART
Price        | $6.00@1k                                    | $7.78@1k                  | $8.00@1k                  | $68.67@1k

They have a rather simple architecture and limited memory resources. It is possible to use either C or Assembler to develop an application program. The main characteristics of PIC microcontrollers are shown in Table 3.5. Atmel has two lines of AVR microcontrollers (www.atmel.com): 8-bit and 32-bit MCs. They are Harvard-architecture Reduced Instruction Set Computers (RISC). They execute instructions in a single cycle. These MCs combine good performance and low power consumption. The chips can operate from 1.8 VDC to 5.5 VDC. They have flexible sleep modes, and software control of the clock frequency is possible. So, these chips target decreasing the power

Table 3.5 Characteristics of PIC family

Feature      | PIC10F200 (8-bit)       | PIC18F2610 (8-bit)               | PIC24FJ256GA106 (16-bit) | PIC32MX360F512L (32-bit)
-------------|-------------------------|----------------------------------|--------------------------|--------------------------
Package      | 6/SOT-23, 8/DFN, 8/PDIP | 28/PDIP 300 mil, 28/SOIC 300 mil | 100-pin TQFP             | 100-pin TQFP
MIPS or Freq | 4 MHz                   | 10 MIPS                          | 16 MIPS                  | 80 MHz
Flash        | 256 words               | 64K                              | 256 kB                   | 512 kB
RAM          | 16 bytes                | 3968 bytes                       | 16384 bytes              | 32 kB
I/O          | 4                       | 25                               | 53                       | 85
Timers       | 1 (8-bit)               | 4                                | 5                        | 5
PCA          | No                      | No                               | No                       | Yes
ADC          | No                      | 10 channels (10-bit)             | 1 channel (10-bit)       | 16 channels (10-bit)
DAC          | No                      | No                               | No                       | No
Comms        | None                    | EUSART, SPI, I2C                 | 4 UART, 3 SPI, 3 I2C     | 2 UART, 2 SPI, 2 I2C
Price        | $0.53@1K                | $4.25@1K                         | $4.18@1K                 | $6.35@1K

consumption. There are different packages (from 8 pins to 100 pins) and different flash sizes (from 1 to 256 kB). The 32-bit AVR MCs offer such additional features as digital signal processing (DSP) instructions, embedded Ethernet and USB. Because of these features, they can be used in signal processing and multimedia applications. Table 3.6 represents the characteristics of 8-bit AVR MCs. Table 3.7 shows the main characteristics of 32-bit AVR MCs. Let us say something about ARM microprocessors. The first of them was produced by Acorn Computers Limited (UK) in 1983. The acronym now means Advanced RISC Machine. Now they are produced by ARM Limited. For example, 32-bit ARM cores are used widely in different embedded products (cell phones, Apple's iPad). There is a huge market of microcontrollers, with constant changes among the leading companies. For example, NXP acquired Freescale Semiconductor in 2015; Microchip purchased Atmel in 2016; Cypress acquired Spansion. Table 3.8 presents the 8 top companies and their market shares for 2015–2016. All sums are shown in millions of dollars. As follows from Table 3.8, the MC market is about 15·10⁹ dollars. The leading companies target embedded control applications such as IoT, automotive, and robotics. For example, NXP Semiconductors produces series of 8-bit (LPC700, LPC900), 16-bit (XA), and 32-bit microcontrollers (ARM7, ARM9, ARM Cortex-M0, ARM Cortex-M3, and ARM Cortex-M4). Let us discuss how to organize the ES for our example. Let A and B be some values of temperature. Suppose we have two sensors to measure the temperature. Let the value of A be obtained from a digital sensor having an internal ADC. Let the value of B be obtained through an analog sensor considerably remote from the sensor A.


Table 3.6 Characteristics of Atmel 8-bit family

Feature      | ATtiny13A (tinyAVR)    | ATtiny861 (tinyAVR)      | ATmega1284P (megaAVR)                 | ATxmega384A1 (XMEGA)
-------------|------------------------|--------------------------|---------------------------------------|------------------------------------
Package      | 8-pin PDIP, 8-pin SOIC | 20-pin PDIP, 20-pin SOIC | 40-pin PDIP, 44-pin TQFP, 44-pin VQFN | 100-pin TQFP, 100-pin CBGA
MIPS or Freq | 20 MHz / 20 MIPS       | 20 MHz / 20 MIPS         | 20 MHz / 20 MIPS                      | 32 MHz / 32 MIPS
Flash        | 1 kB, 64-byte EEPROM   | 8 kB, 512-byte EEPROM    | 128 kB, 4 kB EEPROM                   | 384 kB, 4 kB EEPROM, 4 kB boot code
RAM          | 64 bytes, 31 registers | 512 bytes                | 16 kB                                 | 32 kB
I/O          | 6                      | 16                       | 32                                    | 78
Timers       | 1 (8-bit)              | 1 (16-bit), 2 (8-bit)    | 2 (16-bit), 2 (8-bit)                 | 8 (16-bit)
PCA          | None                   | None                     | None                                  | None
ADC          | 4 channels (10-bit)    | 11 channels (10-bit)     | 8 channels (10-bit) (44-pin VQFN)     | 16 channels (12-bit)
DAC          | None                   | None                     | None                                  | 4 channels (12-bit)
Comms        | None                   | None                     | None                                  | 8 UARTs, 4 SPI
Price        | $0.843@500             | $1.83@500                | Not available                         | Not available

Table 3.7 Characteristics of Atmel 32-bit family

Feature      | AT32UC3B164 (UC3)           | AT32UC3A0512 (UC3)                           | AT32AP7002 (AP7)                  | AT32AP7000 (AP7)
-------------|-----------------------------|----------------------------------------------|-----------------------------------|-----------------------------------
Package      | 48-pin TQFP, 48-pin VQFN    | 144-pin LQFP                                 | 196-pin TFBGA                     | 256-pin TFBGA
MIPS or Freq | 60 MHz / 72 DMIPS           | 66 MHz / 80 DMIPS                            | 150 MHz / 210 DMIPS               | 150 MHz / 210 DMIPS
Flash        | 64 kB                       | 512 kB                                       | None                              | None
RAM          | 16 kB                       | 64 kB                                        | 32 kB, 16 kB/16 kB inst/data cache | 32 kB, 16 kB/16 kB inst/data cache
I/O          | 28                          | 109                                          | 85                                | 160
Timers       | 3 (16-bit)                  | 3 (16-bit)                                   | 3                                 | 3
PCA          | None                        | None                                         | 3                                 | 3
ADC          | 8 channels (10-bit)         | 8 channels (10-bit)                          | 12-bit channel for image sensor   | 12-bit channel for image sensor
DAC          | None                        | 16-bit stereo audio                          | 16-bit stereo audio               | None
Comms        | 2 UARTs, USB 2.0 full speed | 4 UARTs, USB 2.0 full speed, Ethernet (10/100) | 4 UARTs, USB 2.0 high speed      | 4 UARTs, USB 2.0 high speed, 2 Ethernet (10/100)
Price        | $6.30@100                   | $11.985@800                                  | $11.14@100                        | $14.413@100

It means that an MC is placed near the sensor A. Let us organize a radio channel to deliver the data B to the MC. To transmit and receive the data through a radio channel, two transceivers, for example, are necessary.

Table 3.8 Leading suppliers of microcontrollers

2016 Rank | Company           | Sales 2015 | Sales 2016 | % Change | % Market share
----------|-------------------|------------|------------|----------|---------------
1         | NXP               | 1350       | 2914       | 116      | 19
2         | Renesas           | 2560       | 2458       | −4       | 16
3         | Microchip         | 1355       | 2077       | 50       | 14
4         | Samsung           | 2170       | 1866       | −14      | 12
5         | ST                | 1514       | 1573       | 4        | 10
6         | Infineon          | 1060       | 1106       | 4        | 7
7         | Texas Instruments | 820        | 835        | 2        | 6
8         | Cypress           | 540        | 622        | 15       | 4

A transceiver is a device comprising both a transmitter and a receiver that share a single housing. We use a radio frequency (RF) module which can transfer radio signals between two devices. Such modules provide wireless communication and do not require a line of sight. They come in various types and ranges; some of them can transmit up to 500 m. RF modules may comply with a defined protocol for RF communications, such as Zigbee, Bluetooth Low Energy, or Wi-Fi, or they may implement a proprietary protocol. The circuit is typically designed for half-duplex operation. In this case, two devices can communicate in both directions, but not simultaneously. They can use such communication protocols as UART, SPI (serial peripheral interface) and USB (universal serial bus). The higher-speed protocols (Wi-Fi) require a high-speed interface (USB). The slow protocols (Bluetooth Low Energy) may use the UART interface. So, we should find the value of P = AB + (AB)2 + (AB)3. If P ≥ C, then the alarm situation should be indicated. Let us use a light-emitting diode (LED) to indicate the alarm. Now, we can develop the block diagram of the ES (Fig. 3.15). It includes two sensors, two transceivers, two microcontrollers and an LED. There are two PCBs in this system. Of course, some batteries are necessary to supply voltage for all the devices. This ES operates in the following mode. The MC1 takes the data A from the sensor A. At the same time, the MC2 takes voltage from the sensor

Fig. 3.15 Block diagram of embedded system


B. The MC2 executes the analog-to-digital conversion and gives these values to the Transceiver2. Also, the MC2 initializes the data transfer through the radio channel. The Transceiver1 receives the value B and loads it into the RAM of the MC1. A program calculating the value of P is kept in the flash memory of the MC1. If P ≥ C, then the LED is switched on. Otherwise, the operation continues in the same pattern. Let us choose the components for organizing the ES. For example, the following choice can be made: 1. Sensor A. It should be a digital sensor. Let us choose the DS18B20 digital thermometer. It provides 9-bit to 12-bit Celsius temperature measurements. It communicates over a 1-Wire bus that requires only one data line and ground (GND) for communication with a central processor. It can derive power directly from the data line, eliminating the need for an external power supply. Each DS18B20 has a unique 64-bit serial code, which allows multiple DS18B20s to function on the same 1-Wire bus. Thus, it is simple to use one microprocessor to control many DS18B20s distributed over a large area. Applications that can benefit from this feature include HVAC environmental controls, temperature monitoring systems inside buildings, equipment, or machinery, and process monitoring and control systems. 2. Sensor B. It is an analog sensor. Let us choose the TMP36 low-voltage, precision centigrade temperature sensor. It provides a voltage output that is linearly proportional to the Celsius temperature. The TMP36 is specified from −40 °C to +125 °C, provides a 750 mV output at 25 °C, and operates from a single 2.7 V supply. Its output scale factor is equal to 10 mV/°C. So, only one wire is needed to transfer the value of temperature to an ADC. 3. Transceivers. We can choose the universal transceiver RFM12B by Hope Microelectronics (HopeRF).
It can be used in home security and alarm systems, remote control and keyless entry, wireless keyboards, mice and other PC peripherals, toy control, tire pressure monitoring, telemetry, personal or patient data logging, and remote automatic meter reading. The RFM12B dramatically reduces the load on the microcontroller with its integrated digital data processing features: data filtering, clock recovery, data pattern recognition, an integrated FIFO and a TX data register. The automatic frequency control (AFC) feature allows the use of a low accuracy (low cost) crystal. To minimize the system cost, the RFM12B can provide a clock signal for the microcontroller, avoiding the need for two crystals. The RFM12B has a single-pin crystal oscillator circuit, which provides a 10 MHz reference signal for the PLL. To reduce external parts and simplify design, the crystal load capacitor is internal and programmable. The transceiver can supply a clock signal for the microcontroller; so, accurate timing is possible without the need for a second crystal. When the microcontroller turns the crystal oscillator off by clearing the appropriate bit using the Power Management Command, the chip provides a fixed number (192) of further clock pulses ("clock tail") for the microcontroller to let it go to idle or sleep mode.



Fig. 3.16 Structural diagram of AVR MC

An SPI compatible serial interface lets the user select the frequency band, the center frequency of the synthesizer, and the bandwidth of the baseband signal path. The division ratio for the microcontroller clock, the wake-up timer period, and the low supply voltage detector threshold are also programmable. Any of these auxiliary functions can be disabled when not needed. All parameters are set to default after power-on; the programmed values are retained during sleep mode. The interface supports the read-out of a status register, providing detailed information about the status of the transceiver and the received data. The transmitter block is equipped with two 8-bit wide TX data registers. It is possible to write 8 bits into the register in burst mode, and the internal bit rate generator transmits the bits out at the predefined rate. It is also possible to store the received data bits into a FIFO register and read them out in a buffered mode. 4. Microcontrollers. A single byte is sufficient to represent the values of temperature, so we can choose an 8-bit MC. There are a lot of such devices. Let us choose the ATmega32 AVR microcontroller by Atmel. Its architecture is shown in Fig. 3.16. In order to maximize performance and parallelism, the AVR uses a Harvard architecture (with separate memories and buses for program and data). Instructions in the Program memory are executed with a single-level pipeline: while one instruction is being executed, the next instruction is pre-fetched from the Program memory.


This concept enables instructions to be executed in every clock cycle. The Program memory is In-System Reprogrammable Flash memory. The fast-access Register File contains 32 × 8-bit general purpose working registers with a single clock cycle access time. This allows single-cycle Arithmetic Logic Unit (ALU) operation. In a typical ALU operation, two operands are output from the Register File, the operation is executed, and the result is stored back in the Register File. It is done in one clock cycle. Six of the 32 registers can be used as three 16-bit indirect address register pointers for Data Space addressing, enabling efficient address calculations. One of these address pointers can also be used as an address pointer for look-up tables in Flash Program memory. The ALU supports arithmetic and logic operations between registers or between a constant and a register. Single register operations can also be executed in the ALU. After an arithmetic operation, the Status Register is updated to reflect information about the result of the operation. Program flow is provided by conditional and unconditional jump and call instructions, able to directly address the whole address space. Most AVR instructions have a single 16-bit word format. Every Program memory address contains a 16- or 32-bit instruction. The Atmel AVR core combines a rich instruction set with 32 general purpose working registers. All 32 registers are directly connected to the Arithmetic Logic Unit, allowing two independent registers to be accessed in a single instruction executed in one clock cycle. The resulting architecture is more code efficient while achieving throughputs up to ten times faster than conventional CISC microcontrollers.
The ATmega32 provides the following features: 32 Kbytes of In-System Programmable Flash Program memory with Read-While-Write capabilities, 1024 bytes of EEPROM, 2 Kbytes of SRAM, 32 general purpose I/O lines, 32 general purpose working registers, a JTAG interface for Boundary-scan, On-chip Debugging support and programming, three flexible Timer/Counters with compare modes, Internal and External Interrupts, a serial programmable USART, a byte-oriented Two-wire Serial Interface, an 8-channel, 10-bit ADC with optional differential input stage with programmable gain (TQFP package only), a programmable Watchdog Timer with Internal Oscillator, an SPI serial port, and six software selectable power saving modes. The idle mode stops the CPU while allowing the USART, Two-wire interface, A/D Converter, SRAM, Timer/Counters, SPI port, and interrupt system to continue functioning. The power-down mode saves the register contents but freezes the Oscillator, disabling all other chip functions until the next External Interrupt or Hardware Reset. In power-save mode, the Asynchronous Timer continues to run, allowing the user to maintain a timer base while the rest of the device is sleeping. The ADC Noise Reduction mode stops the CPU and all I/O modules except the Asynchronous Timer and ADC, to minimize switching noise during ADC conversions. In standby mode, the crystal/resonator Oscillator is running while the rest of the device is sleeping. This allows very fast start-up combined with low power consumption. In extended standby mode, both the main Oscillator and the Asynchronous Timer continue to run.


The device is manufactured using Atmel's high-density nonvolatile memory technology. The On-chip ISP Flash allows the program memory to be reprogrammed in-system through an SPI serial interface, by a conventional nonvolatile memory programmer, or by an On-chip Boot program running on the AVR core. The boot program can use any interface to download the application program into the Application Flash memory. Software in the Boot Flash section will continue to run while the Application Flash section is updated, providing true Read-While-Write operation. By combining an 8-bit RISC CPU with In-System Self-Programmable Flash on a monolithic chip, the Atmel ATmega32 is a powerful microcontroller that provides a highly flexible and cost-effective solution for many embedded control applications. So, the MC can get data through the 1-Wire channel (from the sensor A), through SPI (from and to the transceiver), and through the analog port PA0 (from the sensor B). Also, it can control the LED through, for example, the pin PC4. Thus, this MC satisfies all requirements of the designed ES. Now it is possible to refine the structural diagram (Fig. 3.15) using the selected blocks. We show the PCB1 and sensor A in Fig. 3.17, and the PCB2 and sensor B in Fig. 3.18. We use the pin names from the corresponding technical documentation. Also, we show the direction of data transfer between the MCs and the transceivers. Let us explain Fig. 3.17. The data from the sensor DS18B20 enter the MC through the 1-Wire interface (input PC5). Communication with the transceiver is executed through the SPI interface. It connects the pins PB7, PB6, PB5, and PB4 of the MC with the pins SCK, SDO, SDI, and SEL of the RFM12B. The connection PD2–IRQ is used for organization of the "master-slave" mode. The MC is a master sending the signal "Interrupt" to its slave RFM. It initializes the data exchange through SPI. The pin PC4 is a general I/O port. It switches the LED on if necessary. We do not show batteries on this circuit.

Fig. 3.17 PCB1 and sensor A



Fig. 3.18 PCB2 and sensor B


Let us explain Fig. 3.18. The analog output of the TMP36 (sensor B) is connected with the analog port PA0 of the MC. If necessary, the MC initializes the data transfer to the receiver of PCB1. It is necessary to synchronize the operation of the MCs of PCB1 and PCB2. All devices should sleep when no action is executed. It requires a program with corresponding operators. Also, it is necessary to write code for the data interchange, as well as for calculating the value of P and comparing it with C. We do not discuss these details in this section.

3.4 Heterogeneous Implementation of Computational Algorithms

A computational algorithm may be implemented by a microcontroller. It should be represented as a program kept in the flash memory of the MC. Such a system is rather cheap but slow. If a software (SW) solution does not meet the time constraints, it cannot be used. A computational algorithm may also be implemented as a hardwired circuit, for example, as an ASIC. It is a hardware (HW) solution. As a rule, the HW solution is very fast but very expensive. So, it is necessary to find a compromise between the slow SW solution and the expensive HW solution. If an SW solution is too slow, then a hardware accelerator (HWA) may be used. The HWA is implemented as an ASIC. This solution corresponds to the ideology of hardware-software co-design (HSC). It is necessary to use a special hardware-software interface (HSI) to organize the interaction between SW and HW. This idea is illustrated by Fig. 3.19. The microcontroller interacts with sensors, actuators, and other peripheral devices. It executes some calculations. If necessary, it wakes up the HWA through the HSI.

Fig. 3.19 Heterogeneous implementation of computational algorithm


Fig. 3.20 Design flow for hardware-software co-design of embedded systems


The HSI organizes the data transfer between the MC and the HWA. Also, it turns the HWA into a sleep state. This approach requires a special design methodology of HSC. It is shown in Fig. 3.20. If a general-purpose computing system is designed, then the software is developed when the hardware part is ready. In the case of an ES, it is possible to design SW and HW simultaneously. The most important step of co-design is the partitioning of the task into HW and SW parts. It requires extensive experience of the system development team. So, this step is performed manually (at least, till 2018). It ends with generating specifications for HW and SW. These specifications are used to develop the programs and the hardware accelerator. Next, the algorithm and hardware of the HSI should be designed. If the result of co-simulation of HW and SW shows that the ES meets all


Fig. 3.21 Structural diagram of digital part of ES


constraints, then it is possible to start the implementation step. Otherwise, it is necessary to return to the earlier steps of the co-design process. Let us return to the example where we should calculate P = AB + (AB)2 + (AB)3 and indicate the alarm situation. There are three parts in this system:
1. The microcontroller samples the sensors A and B. It is too slow to execute the multiplications, so the MC executes only summations and comparisons. It wakes up the HSI and transfers A, B to the HWA.
2. The hardware accelerator executes the multiplications. Both A and B are registered, as well as AB, (AB)2 and (AB)3. It includes shift registers to receive the data (A, B) and transmit the results of the multiplications.
3. The hardware-software interface transmits data between the MC and the HWA. It is an FSM executing some control algorithm.
The structural diagram of the digital part of the ES is shown in Fig. 3.21. We do not show the internal part of the MC; it is just a black box. The signals have the following meaning:
1. Wake-up = 1 wakes up the HSI.
2. Sleep = 1 transfers the HSI into the sleep mode.
3. Reset = 1 clears the registers of the HWA.
4. ShR1 = 1 initializes shifting data in the registers of the HWA one position to the right.
5. Fix = 1 permits storing data into the output registers.

Let us discuss the general case, when Alarm = 1 if P ≥ C. The value C is kept in the memory of the MC. It is possible that AB ≥ C, or (AB + (AB)2) ≥ C, or (AB + (AB)2 + (AB)3) ≥ C. We organize the process in such a way that the operations of addition and comparison are executed by the MC. So, the MC receives the value of AB and checks whether AB ≥ C. If AB ≥ C, then Alarm = 1 and both the HSI and HWA should wait for the next event. If AB < C, then the value (AB)2 is received by the MC. The MC finds the value AB + (AB)2 and compares it with C. If there is no Alarm, then the value of (AB)3 is transmitted. The MC calculates P and compares it with C. So, the MC executes the following algorithm (Fig. 3.22).



Fig. 3.22 Algorithm of the MC's operation

The variable In shows that: (1) the HWA is ready to transmit the data (In switches from 0 to 1) or (2) the transmission is finished (In switches from 1 to 0). The variable D is used for the comparison with C. Two options are possible to finish the algorithm. If the result of a comparison is negative (D ≤ 0), then Alarm := 1 and Sleep := 1. Otherwise, there is no Alarm and only Sleep := 1.



Fig. 3.23 Structural diagram of HWA

The HWA includes three multipliers and five registers. They are organized as two shift registers. This allows using a single wire to obtain A and B, and a single wire to transmit AB, (AB)2, (AB)3. The structural diagram of the HWA is shown in Fig. 3.23. The value of AB is kept in R1 having 16 bits. The value of (AB)2 is kept in R2 having 32 bits. The value of (AB)3 is kept in R3 having 48 bits. If Fix = 1, then the following operations are executed: R1 := AB, R2 := (AB)2, and R3 := (AB)3. If ShR1 = 1, then the shift to the right is executed for RA, RB, R3, R2, and R1. If Reset = 1, then RA = RB = 0. It does not change the contents of R1–R3. The HSI uses a timer (TM) to organize the interchange of data between the MC and the HWA. For example, it is known that each count of TM corresponds to the transmission of a single bit from or to the MC. It is necessary to have 10 counts to get the result of a multiplication by the HWA. Using these preliminaries, we can create the algorithm of the HSI's operation (Fig. 3.24). Next, it is necessary to transform the algorithm (Fig. 3.22) into a program of the MC. Also, the algorithm (Fig. 3.24) is used to design an FSM executing the functions of the control unit. We discuss the problem of FSM design in Chap. 5 of this book. Of course, the proposed solution is not unique. Let us discuss a case when the HWA calculates the value of P and finds the difference between P and C. In this case, three registers are necessary for keeping A, B, and C. Obviously, it is possible that C occupies 48 bits. So, there should be 48 bits in the register RC. To find the result P − C, it is necessary to represent C as a two's complement. It can be done as C := C̄ + 1. So, it is necessary to execute a bitwise negation of C. We assume that A and B are positive numbers, so P ≥ 0.

3.4 Heterogeneous Implementation of Computational Algorithms



Fig. 3.24 Algorithm of HSI’s operation

Consider the examples shown in Fig. 3.25. As follows from Fig. 3.25, if P ≥ C, then the sign of the difference P − C is positive. It is necessary to have an adder to find the value of P + C̄ + 1 and the carry out from the most significant bit of the result. To find the value of C̄, it is necessary to have 48 NOT gates. To execute the operation C̄ + 1, it is necessary to add 1 as a carry in for the least significant bit of the adder.
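The operation C := C̄ + 1 and the sign test can be illustrated with a short sketch. The 48-bit width of RC is from the text; the function names and the Python representation are ours:

```python
WIDTH = 48  # RC is 48 bits wide, as in the text

def twos_complement(c, width=WIDTH):
    """C := C-bar + 1 -- bitwise negation plus a carry-in of 1 to the LSB."""
    return ((~c) + 1) & ((1 << width) - 1)

def subtract(p, c, width=WIDTH):
    """P - C computed as P + (C-bar + 1); the carry out of the MSB is dropped."""
    s = (p + twos_complement(c, width)) & ((1 << width) - 1)
    # If the sign bit (MSB) is 0, the result is non-negative, i.e. P >= C.
    non_negative = (s >> (width - 1)) == 0
    return s, non_negative

print(subtract(5, 3))   # (2, True): P >= C, positive difference
print(subtract(5, 6)[1])  # False: P < C, the MSB of the result is 1
```

This mirrors the hardware: NOT gates produce C̄, the adder's carry in supplies the "+ 1", and the flip-flop FF stores the sign of the sum.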


Fig. 3.25 Examples of subtraction

(a) P = 0.101 (+5), C = 1.101 (−3): result 0.010 (+2)
(b) P = 0.101 (+5), C = 1.010 (−6): result 1.111 (−1)
(c) P = 0.101 (+5), C = 1.011 (−5): result 0.000 (0)

This analysis allows constructing the structural diagram of the modified HWA (Fig. 3.26). The signal Fix is used to clear the registers RA, RB and RC and to write the result of the comparison into the flip-flop FF. The modification of the HWA leads to the modification of the HSI's operation. Now, the transfer of the words A, B, and C is executed during 64 cycles of a timer. Let 12 cycles of a timer be necessary to find the result of the comparison P ≥ C. The modified algorithm of the HSI's operation is shown in Fig. 3.27. Since all calculations are made by the HWA, the algorithm of the MC's operation is simplified (Fig. 3.28). This algorithm is considerably simpler than the initial one shown in Fig. 3.22. The same is true for the initial and modified algorithms of the HSI. So, the complication of the HWA leads to a simplification of the algorithms of the MC and HSI. In addition, it decreases the amount of information exchanged between the MC and HWA. Also, it reduces the overall decision-making time. So, changing the circuit of the HWA changes the time of the decision. It is possible to choose the simplest circuit of the HWA which satisfies the required time constraints.

Fig. 3.26 Modified HWA



Fig. 3.27 Modified algorithm of HSI’s operation

Fig. 3.28 Modified algorithm of MC's operation


References

1. M. Adamski, A. Barkalov, Architectural and Sequential Synthesis of Digital Devices (University of Zielona Góra Press, Zielona Góra, 2006)
2. K. Qian, D. Den Haring, L. Cao, Embedded Software Development with C (Springer US, 2009)
3. M. Wolf, Computers as Components: Principles of Embedded Computing System Design, Engineering Professional Collection (Elsevier/Morgan Kaufmann, Amsterdam, 2012)

Chapter 4

Field Programmable Gate Arrays

Abstract The chapter deals with field-programmable gate arrays (FPGA). The basic stages of the evolution of programmable logic (from PROMs and PLAs to FPGAs) are shown. Next, the evolution of FPGAs is analysed and three ages of FPGAs are discussed. Then the modern FPGAs produced by Xilinx and Intel (Altera) are presented. The last section is devoted to design methods targeting FPGAs. Based on this information, we show how to execute calculations using digital signal processors, embedded memory blocks, and some components of Zynq 7000 by Xilinx.

4.1 Evolution of Programmable Logic Devices

A device is programmable if its functionality can be changed by changing a program [1, 2]. So, such devices can be programmed to perform specific tasks. Microprocessors are programmable, and their programs are kept in memory blocks (ROM, RAM). Programmability gives a certain degree of freedom:

1. Any memory cell can contain any operator within a given instruction set.
2. Any cell can be connected with any other cell through the address part of an instruction.
3. A program can be changed any number of times by an end user (the property of field-programmability).

Software engineers have been able to use this property since the 1950s, but hardware engineers had to wait until the end of the 1960s. Their devices belong to the class of field-programmable logic devices (FPLD). An FPLD is a general-purpose chip whose hardware can be configured by the end user to implement some particular project. Programmable read-only memory (PROM) chips were the first representatives of FPLDs. In 1970, the first PROMs were manufactured by Harris Semiconductor. They included a fixed array of AND gates (AND-array) and a programmable array of OR gates (OR-array). Its structure is shown in Fig. 4.1a. The AND-array is a fixed address decoder (DC) having S inputs and q = 2^S outputs. Each output determines a unique address of a memory cell.

© Springer Nature Switzerland AG 2019 A. Barkalov et al., Foundations of Embedded Systems, Studies in Systems, Decision and Control 195, https://doi.org/10.1007/978-3-030-11961-4_4

Fig. 4.1 Architecture of PROM (a) and implementation of SBF (b)

Fig. 4.2 Architecture of PLA

Fig. 4.3 Implementing FSM with PLA

The content of the OR-array is programmable; it is shown using the sign "X" in Fig. 4.1a. Each cell has t outputs. For example, a PROM could contain a microprogram. Also, this architecture is a perfect fit for implementing a system of Boolean functions (SBF) Y = Y(X), where |X| ≤ S and |Y| ≤ t (Fig. 4.1b). But the number of address inputs was very small, and incrementing it by 1 doubles the PROM capacity. It was necessary to have FPLDs capable of implementing minimal sum-of-products (SOP) forms of SBFs. This ability came with the appearance of programmable logic arrays (PLA). The first PLAs were introduced in 1974 by Signetics. In a PLA, both arrays are programmable (Fig. 4.2). Thanks to the programmability of the AND and OR arrays, a PLA can efficiently implement sparse SBFs. Such functions have far fewer terms than the maximum possible number 2^L. Such systems are common for FSMs. But flip-flops are necessary to implement an FSM circuit (Fig. 4.3). The flip-flops create a state register (RG) which is external to the PLA chip. More external connections lead to less reliable circuits. This disadvantage can be eliminated by adding flip-flops inside a chip. It results in programmable logic sequencers (PLS). Both PLA and PLS have a serious drawback: a significant part of the chip area is occupied by programmable switches.
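The PROM-style implementation of an SBF can be sketched behaviorally: plain indexing models the fixed address decoder, and filling the word list models programming the OR-array. The two example functions (majority and parity) are ours, not from the text:

```python
# A PROM with S address inputs stores q = 2**S words of t bits each.
S, t = 3, 2
prom = [0] * (2 ** S)

# "Program" two Boolean functions of x1, x2, x3 into the OR-array:
# y1 = majority(x1, x2, x3), y2 = x1 XOR x2 XOR x3 (example functions).
for addr in range(2 ** S):
    x = [(addr >> i) & 1 for i in range(S)]
    y1 = 1 if sum(x) >= 2 else 0
    y2 = x[0] ^ x[1] ^ x[2]
    prom[addr] = (y1 << 1) | y2   # one t-bit word per memory cell

def read(addr):
    """The fixed AND-array (decoder) selects exactly one word."""
    return prom[addr]

print(read(0b111))  # both functions equal 1 -> word 0b11 = 3
```

Note that the table has 2^S entries regardless of how few terms the functions actually need, which is exactly the capacity problem the text describes.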


Fig. 4.4 Architecture of PAL

To overcome this drawback, Monolithic Memories introduced programmable array logic (PAL) chips in 1978. They have a programmable AND-array and a fixed OR-array (Fig. 4.4). A PAL includes t macrocells MC1–MCt having S common inputs and q terms. A single OR gate creates the output of a macrocell. The output of a macrocell is connected with an output pin of the chip. This architecture uses the fact that real Boolean functions have many arguments but a very small number of terms. Adding flip-flops to the outputs of PALs turns them into generic array logic (GAL) chips. They were proposed by Lattice in 1985 and are still offered by Lattice, Atmel, Texas Instruments, etc. For example, the GAL16V8 device has S = 16, t = 8 and 20 pins. Some pins are bidirectional (they can be programmed as either inputs or outputs). In GALs, such important features are available as an optional flip-flop in a macrocell and feedback from the outputs of the OR-array to the input of the AND-array. This gives tools for implementing very flexible state machines. Such chips as PROM, PLA, PLS, PAL and GAL belong to the class of simple programmable logic devices (SPLD). They have around 40 input/output pins and are equivalent to not more than 500 two-input NAND gates (system gates). Some characteristics of SPLDs are shown in Table 4.1. To program an SPLD means to fix connections between the elements of a chip. There are five main ways to accomplish this task:

1. Programming by a mask. It is executed during the manufacturing process; the program is fixed forever. It leads to read-only memories (ROM). This technology is used for ASIC chips.
2. One-time programming. It is executed for PROMs by a user. Two approaches are possible. If the internal chip has connections between all components, then some fuses are burnt. If there are no connections, they should be created. This program cannot be altered or erased afterwards.
3. Reprogramming with erasing. The initial program can be completely erased, and the PROM can be reprogrammed. This approach is based on the usage of floating-gate transistors. It leads to erasable SPLDs, for example, to EPROM. The previous data are erased by exposing the EPROM to ultraviolet light (for several minutes). To do it, the chip should be taken out of the PCB. A special programmer device is used to reprogram the chip. The writing process is around 1000 times slower than reading.
4. Reprogramming with electrical erasing. The initial program can be electrically erased. It leads to electrically erasable SPLDs, for example, to EEPROM. An EEPROM can be reprogrammed (erased and programmed) while staying on a PCB.


Table 4.1 Characteristics of SPLD chips

Type                       PROM   PLA   PLS   PAL   GAL
Programmable AND-array     No     Yes   Yes   Yes   Yes
Programmable OR-array      Yes    Yes   Yes   No    No
Internal flip-flops        No     No    Yes   No    Yes
Feedback from OR to AND    No     No    Yes   Yes   Yes
Programmable I/O pins      No     No    No    Yes   Yes

A typical EEPROM can be reprogrammed from 10 to 40 000 times. Writing data is around 500 times slower than reading.

5. Partial reprogramming. Such SPLDs are divided into small fixed-size blocks. These blocks can be reprogrammed independently. Such devices are named flash memory.

Using PALs, one could get a design with predictable performance. But around the middle of the 1980s, many projects appeared which required more than a single PAL chip. The simplest way to meet these requirements is to increase the number of macrocells per chip. But it was not possible. Why? The answer can be found from an analysis of the PAL architecture. The main specific of PALs is the quadratic growth of the number of switching points with the linear growth of the number of inputs. PALs with high transistor density require too much area and are very slow. This is connected with the fact that their word and bit lines span the entire die, which increases the total capacitance and, therefore, the power consumption. So, large PALs were impractical in both important dimensions: area and performance. So, in the middle of the 1980s, three types of integrated circuits dominated the market: ASICs, microprocessors, and FPLDs. But ASICs are very expensive: their NRE charge is very high (millions of dollars) and their time-to-market is rather long (many months). So, a company should invest a lot of money in a very risky project, and there are not many companies having such money. Microprocessors are universal programmable devices, but they are rather slow for many projects (for example, for multimedia). PALs are not a good compromise between very expensive full-custom ASICs and rather slow microprocessors. If more than a single PAL chip is required to implement a design, then it loses the reliability of a single-chip version. In addition, a multi-chip design requires a more powerful source of energy, often needs cooling, more wires, and a bigger PCB in comparison with a single-chip design. So, designers wanted to have very large scale integration (VLSI) circuits which are: (1) programmable; (2) much cheaper than ASICs; (3) much faster than microprocessors; and (4) powerful enough to implement the most complex projects. And the desirable became a reality with the advent of FPGAs. FPGAs are a development of the idea behind gate-array-based ASICs. In the case of the latter, a lot of prefabricated gates could be connected using channels



Fig. 4.5 Electrical (a) and logic (b) circuit of SRAM cell

of metallization. In the case of the former, a lot of prefabricated look-up table (LUT) elements can be connected using programmable connections. Each LUT is a programmable block. Such a style is named the island-style technology; the islands are represented by configurable logic blocks (CLB). S. Trimberger gave the following definition of FPGA [3]: "An FPGA is a general-purpose, multi-level programmable logic device that is customized in the package by the end user. FPGAs are composed of blocks of logic connected with programmable interconnect (PI). PI between blocks allows users to implement multi-level logic". FPGAs have on-chip programming overhead circuitry. It cannot be used by customers and, of course, it lowers the FPGA gate density. Programmable switches also increase signal delay by adding resistance and capacitance to interconnected parts. So, all CLBs of a chip are embedded into a configurable interconnect structure and surrounded by configurable blocks of input-outputs (I/O blocks). As a rule, FPGAs use CMOS technology and static memory cells. Static memory cells are used in static random-access memory (SRAM) blocks. In FPGAs, these cells are used:

1. To program the CLBs that implement logic functions.
2. To program the routing interconnect of FPGAs, whose switches are generally implemented with small multiplexors.

SRAM programming technology is a dominant approach for FPGAs due to its two main advantages: (1) the use of standard CMOS process technology and (2) re-programmability. There are 6 transistors in a static memory cell (Fig. 4.5a). Each pair of vertical transistors forms a 2NAND gate. Their connections give an R̄S̄ latch (Fig. 4.5b). These cells can be used to organize a multiplexor: two cells and 6 transistors are enough to organize a 4-to-1 multiplexor (Fig. 4.6). The circuit from Fig. 4.6 implements the following equation:

f = Q1 Q2 a ∨ Q̄1 Q2 b ∨ Q1 Q̄2 c ∨ Q̄1 Q̄2 d.    (4.1)
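The cross-coupled 2NAND pair of Fig. 4.5b can be checked at the logic level with a few lines. The fixed iteration count used to settle the feedback loop is an implementation convenience of this sketch, not part of the circuit:

```python
def nand(a, b):
    return 0 if (a and b) else 1

def latch(r_n, s_n, q, q_n):
    """One settling pass of the cross-coupled 2NAND pair (active-low R, S)."""
    for _ in range(4):          # iterate until the feedback loop settles
        q   = nand(s_n, q_n)
        q_n = nand(r_n, q)
    return q, q_n

q, q_n = 0, 1
q, q_n = latch(1, 0, q, q_n)    # S-bar = 0: set
print(q, q_n)                    # 1 0
q, q_n = latch(1, 1, q, q_n)    # both inactive: hold the stored bit
print(q, q_n)                    # 1 0
q, q_n = latch(0, 1, q, q_n)    # R-bar = 0: reset
print(q, q_n)                    # 0 1
```

The hold case is the one that matters for FPGAs: once a configuration bit is written, the feedback keeps it stable as long as power is applied.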


Fig. 4.6 Organization of 4-to-1 multiplexor


Fig. 4.7 Architecture (a) and graphic symbol (b) of LUT


The Eq. (4.1) is a typical equation of a multiplexor. If the values of the variables a–d are generated by SRAM cells, then we have a LUT with two inputs (Fig. 4.7a). The circuit from Fig. 4.6 corresponds to the multiplexor (MX) in Fig. 4.7a. Changing the values of x1 and x2, we can choose any cell of the LUT. There are 16 different binary words which can be written into cells 1–4. So, a LUT can implement an arbitrary Boolean function having SL arguments, where SL is the number of the LUT's inputs. The first FPGA chip was introduced by Xilinx in 1984. It was the device XC2064 having 64 CLBs. Each CLB included 2 LUTs with SL = 3 and a flip-flop. It was an invention of Ross Freeman; the circuit was designed by Bill Carter, fabricated by Seiko in Japan and marketed by Bernard Vonderschmitt. The first working devices were powered in September 1985. Its price was from 55 to 88 USD. It had 38 I/O pins, allowed a maximum clock frequency of 50 MHz and was produced on a 2.0 micron process. This chip offered 800 gates (85 000 transistors). The development of FPGAs was driven by the competition between two companies: Xilinx and Altera. Xilinx was founded in 1984 by Ross Freeman and Bernard Vonderschmitt. They both worked for Zilog Corp. (a subsidiary of Exxon Corp.) as chip engineers. Altera was founded in 1983 by Rodney Smith (he was a manager at Fairchild Semiconductor). Three other engineers were founding members together with R. Smith: Robert Hartmann of Signetics Corp., James Sansbury of Hewlett-Packard Corp., and Paul Newhagen of Fairchild Semiconductor. It was an idea of R. Freeman to design a computer chip that effectively acted as a blank tape, allowing the user to program the chip himself rather than having to purchase an ASIC from the manufacturer. The combined efforts of Freeman and Vonderschmitt allowed them to attract several million dollars of venture capital. The money was used to design the first commercially viable FPGA chip.
In 1984 they incorporated the venture as Xilinx. In November of 1985 they began selling their first device, XC2064.
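Equation (4.1) can be exercised directly in a few lines. The mapping of the cells a–d to the four input combinations follows (4.1); the function name make_lut and the choice of AND as the programmed function are ours:

```python
# A 2-input LUT: four SRAM cells (a, b, c, d) selected by inputs x1, x2,
# exactly as in Eq. (4.1). The 16 possible 4-bit contents give the
# 16 possible Boolean functions of two variables.
def make_lut(a, b, c, d):
    def lut(x1, x2):
        return (x1 & x2 & a) | ((1 - x1) & x2 & b) \
             | (x1 & (1 - x2) & c) | ((1 - x1) & (1 - x2) & d)
    return lut

# "Program" the cells with the truth table of AND: only cell a = 1.
lut_and = make_lut(1, 0, 0, 0)
print([lut_and(x1, x2) for x1 in (0, 1) for x2 in (0, 1)])  # [0, 0, 0, 1]
```

Reprogramming is just a different cell pattern, e.g. make_lut(0, 1, 1, 0) yields XOR; no wiring changes are needed, which is the whole point of the LUT.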

Table 4.2 Market of FPGA chips

Vendor              2011 Sales,  Share   2015 Sales,  Share   2016 Sales,  Share
                    millions     (%)     millions     (%)     millions     (%)
Xilinx              2014         53      2167         53      2869         49
Intel (Altera)      1389         36      1486         36      1954         40
Microsemi (Actel)   207          4       301          8       297          7
Lattice             297          6       124          3       144          3
QuickLogic          26           1       19           0       11           0
Others              –            –       2            0       2            0

In the beginning, the market belonged to Xilinx. In 1988 its sales were around 30.5 million dollars, in 1989 around 50 million. The company sold nearly $100 million worth of its products in 1990. Starting from 1992, other companies entered the FPGA market; Altera was the biggest. Altera was known for its complex programmable logic devices (CPLD), which are chips including PAL-based macrocells. But in 1992, Altera produced its first FPGA chips, the FLEX 8000 family. It allowed from 26 to 162 logic array blocks (LAB), from 2500 to 16 000 usable gates, from 79 to 208 user pins, and from 282 to 1500 flip-flops. Let us point out that Intel bought Altera in December 2015. Table 4.2 shows the biggest vendors of FPGA chips; we used only data for 2011, 2015 and 2016. In its evolution, FPGAs passed through three stages and entered the fourth. Let us discuss them.

4.2 Evolution of FPGAs

This section of our book is inspired by the brilliant article of S. Trimberger, "Three Ages of FPGAs" (2015). He stated that FPGAs have passed through the following stages: the Age of Invention (1984–1991), the Age of Expansion (1992–1999), and the Age of Accumulation (2000–2007). He did not define a name for the current Age, which has continued for 11 years (starting from 2008). FPGAs are based on the island-style architecture (Fig. 4.8). They include CLBs connected with other CLBs through programmable switching blocks (SB) and with I/O pins through commutation blocks (CB). Programming the CBs allows transferring input data to CLBs and output data from CLBs to output pins. A CLB could be either a transistor (a fine-grained basic component) or an entire microprocessor (a very coarse-grained basic component). Fine-grained CLBs are very weak from the logic design point of view: it takes a lot of components to implement even a rather simple Boolean function.

Fig. 4.8 Island-style architecture of FPGA

Fig. 4.9 Architecture of LUT-based basic element
In addition, a huge amount of switch boxes is necessary to organize a circuit. Obviously, such a circuit is slow and consumes a lot of energy. In the case of very coarse-grained components, a single CLB may be enough to implement a very complex SBF. But this leads to a mindless waste of computational power when implementing simple functions. Xilinx found a good trade-off between too fine and too coarse CLBs. Their CLB is based on a LUT element connected with a programmable flip-flop. It is a basic element (BE) of an FPGA (Fig. 4.9). A LUT can implement an arbitrary function depending on SL arguments; there is SL = 4 for the LUT from Fig. 4.9. A Boolean function is represented by its truth table. The LUT's output, fL, is connected with the D input of a master-slave flip-flop (FF). The FF has synchronization (Clock) and reset (connected with the global Reset) inputs. Using the signal y0 and the multiplexor MX gives the possibility to by-pass the FF. For example, the following organization is possible:

f = y0 fL ∨ ȳ0 fR.    (4.2)

So, the output of a BE can be either combinational (y0 = 1) or registered (y0 = 0). The existence of FFs gives the possibility to organize FSMs and other sequential devices.

FPGAs are based on SRAM cells, which have three major drawbacks: (1) rather big size (up to 6 transistors per cell); (2) volatility; and (3) poor security. If there is no voltage, the FPGA's configuration disappears. This necessitates the use of an external device to permanently store the configuration data when the FPGA is powered down. These external devices (flash or EEPROM) add to the cost of SRAM-based FPGAs.
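The basic element of Fig. 4.9 can be modeled behaviorally: Eq. (4.2) selects between the combinational LUT output and the registered one. The class, the truth-table encoding, and the convention that the registered output shows the previous cycle's value are our modeling assumptions:

```python
class BasicElement:
    """LUT output fL either bypasses the FF (y0 = 1) or is registered
    (y0 = 0), per Eq. (4.2): f = y0*fL OR not(y0)*fR. Names are ours."""
    def __init__(self, truth_table, y0):
        self.tt = truth_table    # 16 bits for an SL = 4 LUT
        self.y0 = y0             # 1 = combinational, 0 = registered
        self.fR = 0              # flip-flop state

    def clock(self, a, b, c, d):
        fL = (self.tt >> (a | (b << 1) | (c << 2) | (d << 3))) & 1
        f = fL if self.y0 else self.fR   # output selected by MX
        self.fR = fL                      # FF captures fL on the clock edge
        return f

be = BasicElement(truth_table=0b1000000000000000, y0=0)  # 4-input AND, registered
print(be.clock(1, 1, 1, 1))  # 0 : the old FF state appears first
print(be.clock(0, 0, 0, 0))  # 1 : the previous fL is now registered
```

With y0 = 1 the same truth table produces the AND result in the same cycle, which is the combinational path of the MX.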


The configuration must be loaded into the FPGA at power up. So, there is the possibility that the configuration data could be intercepted and stolen by competitors. Now some FPGA families use encryption techniques to raise the level of security. There is SL ≤ 6; this was true in 1985 and it is still true in 2017. Of course, increasing SL leads to circuits with fewer levels of logic. But there are three key metrics used for the estimation of SL: area, speed and power. Various experiments have shown that the optimal number of LUT inputs is in the range from 5 to 6. In 1985, FPGAs found their niche between ASICs and FPLDs. ASICs allow low cost, high density and high speed. But they require around a year to be manufactured and a lot of money for research and development. In addition, each mistake could lead to a repetition of the design process. So, ASICs were cheap only for mass production with circulation in the millions. The FPLDs were too small to implement an average project as a single chip. In addition, they used bipolar transistors, which required much more energy than CMOS-based FPGAs. In the first Age, long wires were one of the main drawbacks of FPGAs. In response, the architectures combined from 2 to 4 LUTs in a slice. Each slice has very fast (and short) interconnections among the neighbouring LUTs. But still the problem of efficient routing was a critical issue in design with FPGAs. Because of it, early FPGAs were very difficult to use. In the Age of Invention, FPGAs were small and designers could implement circuits practically manually. There were no efficient CAD tools targeting FPGA-based projects. Each architecture of FPGAs required its own CAD tool, so it was impossible to have universal CAD tools, and each FPGA vendor had to invent its own. But this could be viewed as a positive factor: due to it, FPGAs escaped the negative influence of external CAD vendors, as happened in the case of FPLDs.
The PAL vendors, for example, were restricted to those architectures that external CAD vendors supported. Such an effect hinders the development of architecture. Through the Age of Expansion, the transistor count doubled every two years. Each new technology node doubled the number of transistors per chip. In turn, it doubled the size of the largest possible FPGAs and halved the cost per function. Chemical-mechanical polishing permitted stacking more metal layers, which allowed adding more interconnection channels into a chip. As a result, area was still important, but now it could be traded off for performance, features and ease-of-use. FPGAs became too big for manual design. In 1992, the best chip had around 10 000 gates (XC4010). In 1999, the best chip had around 10^6 gates (Virtex XCV1000). So, design became impossible without CAD tools. During this Age, SRAM was accepted by all FPGA vendors, and the LUT was viewed as the logic cell of choice. Also, FPGAs successfully absorbed nearly the entire low-end of the ASIC business. By the start of the XXI century, FPGAs were common components of digital systems. But now they were larger than the typical problem size. So, customers did not want to pay a large premium for the largest FPGA: they did not want to have a lot of empty space. At the same time, the productivity gap continued to grow, as shown in Fig. 4.10. So, it was necessary to offer something new to meet the expectations of end users.


Fig. 4.10 Illustration of productivity gap


The FPGA vendors addressed these challenges in three ways:

1. They produced families of lower-capacity, lower-performance, low-cost FPGAs: Spartan (Xilinx), Cyclone (Altera), EC/ECP (Lattice). It was a proposition for the low-end of the market.
2. They added libraries of soft logic IP cores for important functions. For example, soft cores of microprocessors were introduced by Xilinx (MicroBlaze) and Altera (Nios). It was a proposition for the high-end of the market. Also, different communication protocol stacks and memory controllers were added as soft IPs.
3. To overcome the productivity gap's negative effect, different dedicated hard blocks were added. They included large memories, microprocessors, multipliers, flexible I/O and source-synchronous transceivers. This allowed reducing the overhead of programmability in area, power, performance and design effort. As a result, the term "FPGA platform" was coined by Xilinx.

The dedicated blocks were built using custom-designed transistors. They were often more efficient than ASIC implementations. So, the high-end FPGAs included multiple PowerPC microprocessors, embedded memory blocks (EMB), multipliers, clock management blocks, gigahertz transceivers and bit-stream encryption to protect the design. For example, the chips of the Spartan-3 family (Xilinx) include up to 104 memory blocks of 18 Kb each. So, they include up to 1.87 Mb of memory in their embedded RAMs (BRAM). Their frequency of operation can be varied (from 25 MHz to 325 MHz). They support 23 different I/O standards. So, these chips target different fields of digital automation and multimedia systems. Some characteristics of the Spartan-3 family are shown in Table 4.3. The symbol BRAM stands for the total volume of memory of the EMBs. The symbol DRAM stands for distributed memory, when each LUT is used as a RAM block. To compare different vendors, we add Table 4.4 with characteristics of the Cyclone family of Altera. They include up to 20 060 logic elements and 288 K bits of RAM. Chapter 5 is devoted to the design of FSMs with LUTs and/or EMBs. Let us discuss the peculiarities of EMBs. The main specific of an EMB is its reconfigurability. It means that a designer can change the number of address inputs (SA) and outputs of cells (tF). The total size (V0) is always the same:

V0 = 2^SA · tF.    (4.3)

Table 4.3 Characteristics of Spartan-3 family

Device     Number of    Number of      BRAM (K)   DRAM (K)
           macrocells   system gates
XC3S50     1728         50 K           72         12
XC3S200    4320         200 K          216        30
XC3S400    8064         400 K          288        56
XC3S1000   17280        1 M            432        120
XC3S1500   29952        1.5 M          576        208
XC3S2000   46080        2 M            720        320
XC3S4000   62208        4 M            1728       432

Table 4.4 Characteristics of Cyclone family

Device   Number of LEs   Number of BRAM   Total size of   Number of
                         (128 × 36)       RAMs, bits      user pins
EP1C3    2910            13               59904           104
EP1C4    4000            17               78336           161
EP1C6    5980            20               92160           185
EP1C12   12060           52               239616          249
EP1C20   20060           64               294912          301

Now, a typical EMB has V0 = 36 Kbit. It has the following configurations: 36 K × 1, 18 K × 2, 9 K × 4, 4 K × 8, 2 K × 16, 1 K × 32, and 512 × 64 bits. This corresponds to the following pairs (SA, tF): (15, 1), (14, 2), (13, 4), (12, 8), (11, 16), (10, 32) and (9, 64). As we will see, this property allows "tailoring" EMBs to meet the needs of a particular FSM. A single LUT implements an arbitrary Boolean function having up to SL arguments. But a single EMB can implement from 1 to 64 Boolean functions depending on up to SA arguments. The input arguments determine memory cell addresses. Modern FPGAs give the opportunity to implement two systems of Boolean functions having up to 14 arguments (Spartan-6). In the Spartan-6 family, for example, there are from 12 to 128 EMBs. Each of them can store up to 18 Kb of data. Each EMB has two input ports. So, it can be programmed as a single 18 Kb block or two independent 9 Kb EMBs. Both ports are configured independently as 16 K × 1, 8 K × 2, 4 K × 4, 2 K × 8, 1 K × 16, and 512 × 32 bits. Data can be written to either a single port or to both ports, and can be read from either one or both ports. Each port has its own pins for address, data in, data out, clock, clock enable, and write enable. There are from 135 to 1880 EMBs in the Spartan-7 family. Each of them can store up to 36 Kb of data. Each memory access is controlled by a clock. All operations are registered.
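Equation (4.3) can be checked against the (SA, tF) pairs listed above. Note that the pairs give 32 Kbit of data bits; our remark, not from the text, is that in real 36 Kbit blocks the remaining 4 Kbit are parity bits:

```python
# Check Eq. (4.3): V0 = 2**SA * tF must be the same for every
# configuration of one EMB, whatever width/depth trade-off is chosen.
pairs = [(15, 1), (14, 2), (13, 4), (12, 8), (11, 16), (10, 32), (9, 64)]
sizes = [(2 ** sa) * tf for sa, tf in pairs]
print(sizes[0], len(set(sizes)) == 1)  # 32768 True
```

This constant-volume property is exactly what makes the "tailoring" of EMBs to a particular FSM possible: the designer trades address width for word width at no memory cost.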

Fig. 4.11 Simplified structural diagram of dual-port EMB

Table 4.5 Dedicated logic in FPGAs

Type of logic block                Device
Gates, flip-flops and routing      XC2000
Three-state bus                    XC3000
Dedicated arithmetic               XC4000
Memory                             XC4000
Embedded memory blocks             FLEX
Dynamic reconfiguration            CAL/XC6200
Universal I/O blocks               Virtex
Programmable clock                 Virtex
Microprocessor                     Excalibur
Source-synchronous transceiver     Virtex-II
Bit-stream encryption              Virtex-II
Transceiver                        Virtex-II Pro
Multiplier                         Virtex-4
Ethernet MAC/PCI express           Virtex-4
System monitor                     Virtex-6
Analog-to-digital controller       Virtex-7
Floating point arithmetic          Stratix-10

It means that data do not change during the current cycle. Let us consider the simplified structural diagram of a typical dual-port EMB (Fig. 4.11). There are tFA bits of data as inputs and outputs of port A, and tFB bits for port B. There are SAA address inputs for port A and SAB for port B. Each port has its own signals of write enable (WEA, WEB) and synchronization (ClkA, ClkB). In Table 4.5 we show different blocks of dedicated logic which can be found in FPGAs. The column "Device" contains the names of the devices where a particular block first appeared. As follows from Table 4.5, some facilities were introduced by Xilinx and some by Altera. Their competition results in the improvement of FPGAs. What do we have now, after the ending of the Age of Accumulation?
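The dual-port EMB of Fig. 4.11 can be sketched as two independently configured ports over one memory array, with registered (one-cycle) reads. The geometry values, read-before-write behavior and method names are our assumptions for this sketch:

```python
class DualPortEMB:
    """Sketch of Fig. 4.11: one memory array, two ports with independent
    geometries; the output of each port is registered on its clock."""
    def __init__(self, sa_a, tf_a, sa_b, tf_b, total_bits):
        assert (2 ** sa_a) * tf_a == total_bits == (2 ** sa_b) * tf_b
        self.bits = [0] * total_bits
        self.geom = {"A": (sa_a, tf_a), "B": (sa_b, tf_b)}
        self.out = {"A": 0, "B": 0}      # registered output per port

    def cycle(self, port, addr, data=None, we=0):
        _, tf = self.geom[port]
        base = addr * tf
        word = 0
        for i in range(tf):               # read before write, LSB first
            word |= self.bits[base + i] << i
        if we:
            for i in range(tf):
                self.bits[base + i] = (data >> i) & 1
        self.out[port] = word             # output register updates on the clock
        return self.out[port]

emb = DualPortEMB(sa_a=9, tf_a=64, sa_b=11, tf_b=16, total_bits=32768)
emb.cycle("A", addr=1, data=0xABCD, we=1)
print(hex(emb.cycle("B", addr=4)))  # port B reads back bits written via port A
```

Because both ports address the same bit array, a 64-bit word written through port A is visible as four 16-bit words through port B, which is how the independent port configurations of real EMBs behave.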



Fig. 4.12 ARM-based system-on-chip

Modern FPGAs must compete with multicore processors, graphics processing units (GPU) and software-programmable application-specific standard products (ASSP). The ASSPs can be viewed as a new form of ASIC. ASSPs belong to the class of system-on-chip (SoC) devices. An SoC combines a collection of fixed-function blocks along with a microprocessor subsystem. The function blocks are specified for some specific application domain (image processing, networking). The microprocessor controls the data flow and allows customization through programming. A typical ARM-based SoC is shown in Fig. 4.12. So, now there is a new age of FPGAs. It requires better CAD tools, novel architectures, using everything from current technological nodes, and greater accumulation of fixed blocks. It is really difficult to predict the outcome of the competition between FPGAs and other VLSI devices such as microprocessors, SoCs, ASSPs and GPUs. This outcome will be seen when the current Age can be viewed in retrospect.


4 Field Programmable Gate Arrays

4.3 Modern FPGAs

Let us compare again the three key elements that are amenable to implementing embedded systems: microprocessors, ASICs and FPGAs. Each of them has its drawbacks and advantages. Microprocessors are a very powerful and flexible implementation medium for various digital systems. They come in a wide variety of models which can fit different cost and performance requirements. But they are general-purpose devices and rather slow. So, they are not the best choice for applications which need to be optimized for performance, power or board space. ASICs can be optimized for a particular application, but they require a very high NRE cost. So, ASICs are cost-effective only for the highest-performance or highest-volume applications. FPGAs are a really good alternative to rather slow microprocessor-based solutions and very expensive ASIC-based solutions. But it is important to combine all positive features of MPs and ASICs together with the programmability of FPGAs. This is the way chosen by the leading FPGA vendors, Xilinx and Altera (Intel). Let us analyse their latest products to understand the nature of modern FPGAs. As follows from www.xilinx.com, Xilinx offers the following products: Spartan-6 (45 nm); Virtex-7, Kintex-7, Artix-7, Spartan-7 (28 nm); Virtex UltraScale, Kintex UltraScale (20 nm, 16 nm). Let us discuss some of them. The Spartan-6 family offers high logic-to-pin ratios, small form-factor packaging, the MicroBlaze soft processor, and a diverse number of supported I/O protocols. They target consumer electronics, automotive infotainment (a linguistic blend of the words "information" and "entertainment"), and industrial automation. They can be used for full-HD intelligent digital signage, industrial networking, vehicle networking and connectivity, and high-resolution video and graphics. They support up to 40 I/O standards for simplified system design. They have PCI Express with an integrated endpoint block. There are 13 devices in this family.
We show some of them and their features in Table 4.6. There is SL = 6 for the LUTs of Spartan-6. Each slice includes 4 LUTs and 8 flip-flops. Each DSP48A1 slice contains an 18 × 18 multiplier, an adder, and an accumulator.
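The multiply-accumulate data path of such a slice is easy to mimic in software (a behavioural sketch only; a 48-bit accumulator, as in the DSP48 family, is the one width assumption made here):

```python
MASK48 = (1 << 48) - 1  # assume a 48-bit accumulator, as in the DSP48 family

def mac(pairs):
    """Multiply-accumulate: a sum of 18 x 18 products kept in a 48-bit register."""
    acc = 0
    for a, b in pairs:
        acc = (acc + a * b) & MASK48  # the multiplier feeds the accumulator
    return acc

# three products accumulated, as in one tap loop of a FIR filter
print(mac([(3, 4), (5, 6), (7, 8)]))  # -> 98
```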

Table 4.6 Spartan-6 FPGA feature summary (XC6SL)

Device        | X4   | X16   | X25T  | X75T  | X100   | X150T
Logic cells   | 3840 | 14579 | 24051 | 74637 | 101261 | 147443
Slices        | 600  | 2278  | 3758  | 11662 | 15822  | 23038
Flip-flops    | 4800 | 18224 | 30064 | 93296 | 126576 | 184304
DSP48A1       | 8    | 32    | 38    | 132   | 180    | 180
Block RAM     | 12   | 32    | 52    | 172   | 268    | 268
PCI Express   | 0    | 0     | 1     | 1     | 0      | 1
Max user I/O  | 132  | 232   | 250   | 348   | 480    | 540


Table 4.7 Comparison of Xilinx 7 series families

Family             | Spartan-7 | Artix-7   | Kintex-7  | Virtex-7
Logic cells        | 102 K     | 215 K     | 478 K     | 1955 K
Block RAM          | 4,2 Mb    | 13 Mb     | 34 Mb     | 68 Mb
DSP slices         | 160       | 740       | 1920      | 3600
Transceivers       | –         | 16        | 32        | 96
Transceiver speed  | –         | 6,6 Gb/s  | 12,5 Gb/s | 28,05 Gb/s
Serial bandwidth   | –         | 211 Gb/s  | 800 Gb/s  | 2784 Gb/s
PCIe interface     | –         | x4 Gen2   | x8 Gen2   | x8 Gen3
Memory interface   | 800 Mb/s  | 1066 Mb/s | 1866 Mb/s | 1866 Mb/s
I/O pins           | 400       | 500       | 500       | 1200
I/O voltage        | 1,2V–3,3V | 1,2V–3,3V | 1,2V–3,3V | 1,2V–3,3V

Each block RAM (EMB) has V0 = 18 Kb. There are two branches in the family: LX targets logic optimization and LXT targets high-speed serial connectivity. There are three types of CLB slices in this family: SLICEM (25%), SLICEL (25%), and SLICEX (50%). Each slice includes different sub-circuits. In a SLICEM, the LUTs can be configured either as a 6-input LUT or as dual 5-input LUTs with identical 5-bit addresses and two independent outputs. These LUTs can be used as distributed 64-bit RAM and as a 32-bit shift register with addressable length. Each LUT output can be registered in a flip-flop. There is a high-speed carry chain propagating carry signals in a column of slices. It allows creating adders and counters. There are no memory/shift register functions in SLICELs. There are no arithmetic carry chains and wide multiplexors in SLICEXs.
There are four FPGA families in the Xilinx 7 series. They have a wide range of characteristics targeting low-cost, small form-factor, cost-sensitive, high-volume applications, as well as ultra high-end connectivity bandwidth, logic capacity and signal-processing capability for the most demanding high-performance applications. These families are Spartan-7, Artix-7, Kintex-7, and Virtex-7. All families have some common features:
1. 6-input LUTs configurable as distributed memory.
2. 36 Kb dual-port RAM with built-in FIFO logic.
3. DSP slices with a 25 × 18 multiplier, a 48-bit accumulator and a pre-adder.
4. High-performance SelectIO technology that supports DDR3 interfaces at up to 1,866 Mb/s.
5. A user-configurable analog interface (XADC) incorporating on-chip thermal and supply sensors.
6. An integrated block for PCI Express (PCIe), for up to x8 Gen3 Endpoint and Root Port designs.
Table 4.7 gives the comparison of these families. As you can see, all families (Spartan-7 is an exception) include transceivers with different performance.


The Spartan-7 family is optimized for low cost, lowest power, and high I/O performance. It is available in low-cost, very small form-factor packaging for the smallest PCB footprint. The Artix-7 family is optimized for low-power applications requiring serial transceivers and high DSP and logic throughput. It provides the lowest total bill-of-materials cost for high-throughput, cost-sensitive applications. The Kintex-7 family is optimized for the best price-performance with a 2X improvement compared to the previous generation (it is a new class of FPGA). The Virtex-7 family is optimized for the highest system performance and capacity, with a 2X improvement in system performance. It is based on stacked silicon interconnect (SSI) technology. There are 6 devices in the Spartan-7 family (from XC7S6 to XC7S100), 8 devices in the Artix-7 family (from XC7A12T to XC7A200T), 7 devices in the Kintex-7 family (from XC7K70T to XC7K480T), and 11 devices in the Virtex-7 family (from XC7V585T to XC7VH870T). So, there are 32 different devices in the Xilinx 7 series. Xilinx also produces the Zynq-7000 All Programmable SoC family. It integrates the software programmability of an ARM-based processor with the hardware programmability of an FPGA. It enables key analytics and hardware acceleration while integrating CPU, DSP, ASSP, and mixed-signal functionality on a single device. There are single-core Zynq-7000S devices and dual-core Zynq-7000 devices. Zynq-7000S devices feature a single-core ARM Cortex-A9 processor mated with 28 nm Artix-7-based programmable logic. They include 6,25 Gb/s transceivers and commonly used hardened peripherals. They target cost-optimized system integration suitable for industrial IoT applications (motor control, embedded vision). Zynq-7000 devices have dual-core ARM Cortex-A9 processors integrated with Artix-7 or Kintex-7 programmable logic. They give excellent performance-per-watt and maximum design flexibility.
They target a wide range of embedded applications (multi-camera driver-assistance systems, 4K2K Ultra-HD, machine vision, multifunction printers). There are three main directions in Zynq: Zynq-7000 SoC, Zynq UltraScale+ MPSoC, and Zynq UltraScale+ RFSoC, where RF means radio frequency. Some of their characteristics are given in Table 4.8. As you remember, Altera is now a part of Intel. So, let us see what is proposed by Intel. Intel FPGAs offer a wide variety of configurable embedded SRAMs, high-speed transceivers, high-speed I/Os, logic blocks and routing. Their products include IP cores that reduce development time, power consumption and cost. There are four families of Intel FPGAs available on today's market: Stratix, Cyclone, Arria, and MAX 10 (www.altera.com). MAX 10 FPGAs have been produced since 2014; they are based on 55 nm technology. They include logic array blocks (LAB), analog-to-digital converters, user flash memory (UFM), embedded multiplier blocks, embedded memory blocks (M9K), general-purpose I/Os, high-speed LVDS I/Os, external memory interfaces and configuration flash memory (CFM). There are 7 members in this family: 10M02, 10M04, 10M08, 10M16, 10M25, 10M40, and 10M50. Each LAB includes 16 LEs, carry chains through each LE of a block, fast local interconnect, flip-flops, and register chains to transfer the output of an FF to the adjacent FF in the LAB. Each LAB includes logic for driving control signals to its LEs. There


Table 4.8 Characteristics of Zynq families

Family                  | Zynq-7000 SoC                  | Zynq UltraScale+ MPSoC        | Zynq UltraScale+ RFSoC
Application PU          | Single/Dual-Core ARM Cortex-A9 | Dual/Quad-Core ARM Cortex-A53 | Quad-Core ARM Cortex-A53
Real-Time PU            | –                              | Dual-Core ARM Cortex-R5       | Dual-Core ARM Cortex-R5
Multimedia PU           | –                              | GPU ARM Mali, Video Codec     | –
High-Speed Peripherals  | USB 2.0, Ethernet, SD/SDIO     | PCIe, USB 3.0, SATA 3.1, DisplayPort, Gb Ethernet, SD/SDIO | PCIe, USB 3.0, SATA 3.1, DisplayPort, Gb Ethernet, SD/SDIO
Max I/O Pins            | 128                            | 214                           | 214
Max Logic Cells (K)     | 444                            | 1143                          | 930
Max Memory (Mb)         | 26,5                           | 70,6                          | 60,5
Max DSP Slices          | 2020                           | 3528                          | 4272
33 Gb/s Transceivers    | –                              | –                             | 16
Max ADCs                | –                              | –                             | 16
Max DACs                | –                              | –                             | 16
Soft-Decision FEC Cores | –                              | –                             | 8

are the following control signals: two clock signals, two clock enable signals, two asynchronous clear signals, one synchronous clear signal, and one synchronous load signal. Each LE has the following features: a four-input LUT, a programmable FF, a carry chain connection, a register chain connection, and register feedback support. There are 6 inputs in each LE: four data inputs from the LAB local interconnect, one carry-in from the previous LE in the carry chain, and one register chain connection. There are two modes of operation for an LE: normal and arithmetic. The normal mode is suitable for general logic applications and combinational functions. The arithmetic mode targets implementing adders, counters, accumulators, and comparators. In this mode, an LE implements a two-bit full adder and a basic carry chain. The EMBs of MAX 10 are optimized for applications such as throughput packet processing, embedded processor program storage, and embedded data storage. The basic EMB structure has V0 = 9,216 bits. It is named the M9K block and can be configured with different widths. It provides different memory functions such as RAM, ROM, FIFO and shift registers. There are the following configurations of M9K blocks: 8K × 1, 4K × 2, 2K × 4, 1K × 8, 512 × 16, 256 × 32, 1K × 9, 512 × 18, and 256 × 36. There are embedded multipliers which can operate in one of the following modes: (1) one 18 × 18 multiplier or (2) two 9 × 9 independent multipliers. It is possible to implement both multiplier-adder and multiplier-accumulator functions. Now, let us discuss the Cyclone family of Altera. It is based on a 1,5 V, 0,13 µm, all-layer copper SRAM process. It has such features as phase-locked loops (PLL),


Table 4.9 Characteristics of Cyclone devices

Device           | EP1C3 | EP1C4 | EP1C6 | EP1C12 | EP1C20
LEs              | 2910  | 4000  | 5980  | 12060  | 20060
EMBs (128 × 36)  | 13    | 17    | 20    | 52     | 64
Total RAM, bits  | 59904 | 78336 | 92160 | 239616 | 294912
PLLs             | 1     | 2     | 2     | 2      | 2
I/O pins         | 104   | 301   | 185   | 249    | 301

dedicated double data rate (DDR) interfaces to meet DDR SRAM and fast-cycle RAM memory requirements. They support different I/O standards, including LVDS at data rates up to 311 Mb per second (Mbps) and 66 MHz, 32-bit peripheral component interconnect (PCI) for interfacing with supporting ASSP and ASIC devices. It supports external memory, including DDR SRAM (133 MHz), FCRAM, and single data rate RAM. Also, it supports multiple IP cores, including Altera MegaCore functions and the Altera Megafunction Partners Program. Table 4.9 represents the main characteristics of the Cyclone family. It includes 5 devices: EP1C3, EP1C4, EP1C6, EP1C12, and EP1C20. Its LABs include 10 LEs. Each LE includes a LUT with SL = 4, a programmable FF, and a carry chain with carry-select capability. A LUT also supports a dynamic single-bit addition or subtraction mode. Each LUT drives all types of interconnects: local, row, column, LUT chain, register chain, and direct link interconnects. The Stratix V family of Altera is built on an enhanced core architecture, integrated transceivers of up to 28 Gbps, and a unique array of integrated IP blocks. Due to this, these devices are optimized for:
1. Bandwidth-centric applications and protocols, including PCI Express Gen3.
2. Data-intensive applications for 40G/100G and beyond.
3. High-performance, high-precision DSP applications.
There are four variants of the Stratix V family: GT, GX, E, and GS. All variants have a redesigned adaptive logic module (ALM), 20 Kbit (M20K) EMBs, variable-precision DSP blocks, and fractional PLLs. They are based on 28 nm technology and have a 0,8 or 0,9 V core voltage. They use Programmable Power technology and Quartus II integrated PowerPlay Power Analysis. They include variable-precision DSP blocks having up to 600 MHz performance and supporting signal processing with precision ranging from 9 × 9 up to 54 × 54. They have 800 MHz fabric clocking. For example, the characteristics of Stratix V GT devices are shown in Table 4.10.
This family includes only 2 chips. Also, there are 10 members in the GX family, 5 members in the GS family, and 2 members in the E family. So, there are 19 different members in the Stratix V family. The most powerful devices of Altera (Intel) are the Arria 10 devices. There are three variants of devices: GX, GT and SX. They are based on 20 nm technology. They include a second-generation hard processor system with an ARM Cortex-A9 processor. Their ALMs have 8 inputs; they have up to 65 Mb of embedded memory, hard PCI


Table 4.10 Characteristics of Stratix V GT family

Device                                   | 5SGTC5 | 5SGTC7
Logic elements (K)                       | 425    | 622
Adaptive logic modules                   | 160400 | 234720
Registers (FFs) (K)                      | 642    | 939
28,05/12,5 Gbps transceivers             | 4/32   | 4/32
PCIe hard IP blocks                      | 1      | 1
Fractional PLLs                          | 28     | 28
M20K memory blocks                       | 2304   | 2536
M20K memory (Mbits)                      | 45     | 50
Variable-precision multipliers (18 × 18) | 512    | 512
Variable-precision multipliers (27 × 27) | 256    | 256
DDR3 SDRAM × 72 DIMM interfaces          | 4      | 4

Express Gen3 blocks, and a lot of DSP blocks. So, these are system-on-a-chip class devices. Due to the dual-core ARM, they belong to the class of MPSoPC devices. Modern FPGAs target both simple and very complex embedded systems. Their vendors produce families of devices with different tools. The simplest FPGAs include only logic elements and EMBs. The most complex FPGAs combine ASIC blocks, microprocessors and programmable logic. So, they are multiprocessor-system-on-a-programmable-chip devices. Thus, modern designers have the widest choice of FPGA chips for implementing various embedded systems.

4.4 Design with FPGAs

Modern FPGAs are too complex for manual design of digital systems. So, CAD tools play a critical role in FPGA-based design. A CAD system targeting FPGAs should include efficient tools for: routing and placement, circuit clustering, technology mapping, architecture-specific optimization, physical synthesis, RTL and behaviour-level synthesis, and power optimization. As a rule, the top-down methodology is used in FPGA-based design. An FSM could be used to represent either a very simple ES or only its internal CU. A typical design flow for FPGA-based FSMs is shown in Fig. 4.13. Let us explain some steps of this design flow. We discuss them using typical steps from the design process used by Xilinx.
1. Register-transfer specification. A design entry can be made with the schematic editor (if an FSM is represented by a logic circuit), with the state editor (if an FSM is represented by its state transition graph), or by a program written in either VHDL or Verilog. This specification can be corrected if necessary.


[Figure: design flow — register-transfer specification, technology-independent logic synthesis (using state assignment, minimization, a routability model and a cell library), technology-dependent logic synthesis (using a wiring model and a technology database), place and route, and project verification, with timing analysis after each synthesis step, ending with the layout]
Fig. 4.13 Design flow for FPGA-based FSMs

[Figure: a fragment of an STG in which the equivalent states a3, a4 and a5, all generating y1y2 and passing unconditionally to a6 (output y3), are merged into the single state a3]
Fig. 4.14 A part of STG before (a) and after (b) elimination of equivalent states

2. State assignment, minimization, etc. During this step, some redundant states are eliminated (minimization). To do this, the CAD tool finds equivalent states. For example, there are equivalent states in the part of the STG shown in Fig. 4.14a. The states a3, a4, a5 are equivalent because: (1) the same outputs y1, y2 are generated in each of them and (2) the transitions from all of them are the same. So, two states can be eliminated without any harm to the FSM behaviour. For example, elimination of a4 and a5 results in the minimized part of the STG (Fig. 4.14b). After minimization, each state am ∈ A is represented by a binary code K(am). There are R bits in this code, where

⌈log2 M⌉ ≤ R ≤ M.    (4.4)

In (4.4), the symbol M stands for the number of FSM’s states. This step is very important. Its outcome significantly affects the circuit area, performance and power consumption.
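As an illustration of (4.4), the minimum code length and one possible binary assignment can be computed directly (a sketch; assigning codes in natural counting order is an arbitrary choice):

```python
from math import ceil, log2

def state_codes(m):
    """Binary state assignment: R = ceil(log2 M) bits, codes assigned in order."""
    r = ceil(log2(m))
    return r, [format(i, f"0{r}b") for i in range(m)]

# e.g. M = 3 states need R = 2 bits; M = 5 states need R = 3 bits
for m in (3, 5):
    r, codes = state_codes(m)
    assert r <= m  # the upper bound of (4.4) corresponds to one-hot encoding
    print(f"M = {m}: R = {r}, codes = {codes}")
```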

[Figure: gate-level circuits for function (4.5) with inputs a, b, c, d and output f, built from (a) 2-input and (b) 3-input NAND gates]
Fig. 4.15 Implementing function (4.5) with gates

3. Technology-independent logic synthesis. During this step, the package FPGA Express executes logic synthesis and optimization of an FSM logic circuit. Its outcome is a Netlist file, represented in either EDIF or XNF format. During this step, the routability model is created (the interconnections between abstract elements). Also, it is possible to execute a rough timing analysis (to find the longest paths of signal propagation).
4. Technology-dependent logic synthesis. During this step, the initial abstract elements from the Netlist are replaced by elements of the cell library. In the case of FPGA, these elements are LUTs and EMBs. The first step is named mapping. During the mapping, gate-level functions of the Netlist are replaced by LUTs and EMBs. It is not a one-to-one mapping because: (1) a single LUT can represent a number of logic gates and (2) more than a single LUT could be necessary to implement a given expression. Consider the following equation:

f = ab ∨ cd.    (4.5)

It can be represented by the circuit shown in Fig. 4.15a. Each NAND gate has 2 inputs; it takes three 2NAND gates to implement the circuit for (4.5). Let us point out that using 3NAND gates leads to the circuit (Fig. 4.15b) with the same number of gates. So, increasing the number of gate inputs did not affect the number of gates in the circuit. In the case of FPGA, Eq. (4.5) is implemented using LUTs. Let us start from SL = 3. In this case, the value of SL is less than the number of inputs of the function f. So, Eq. (4.5) should be transformed in such a way that each subfunction has no more than SL arguments. This step is called functional decomposition. For example, it is possible to represent (4.5) as the two equations

f1 = ab;  f = f1 ∨ cd.    (4.6)

Next, each subfunction and the resulting function should be represented as a truth table. There are two truth tables in the discussed case (Tables 4.11 and 4.12). Table 4.11 determines the bit-stream for LUT1 and Table 4.12 for LUT2 (Fig. 4.16a). If SL = 4, then only a single LUT is necessary to implement the circuit for (4.5). The circuit is shown in Fig. 4.16b. This LUT is represented by the truth table (Table 4.13). Thus, the increase in SL significantly affects the following characteristics of the circuit: (1) the number of LUTs; (2) the number of logic levels and (3) the number


Table 4.11 Truth table of f1

abx | f1
000 | 0
001 | 0
010 | 0
011 | 0
100 | 0
101 | 0
110 | 1
111 | 1

Table 4.12 Truth table for f

f1cd | f
000  | 0
001  | 0
010  | 0
011  | 1
100  | 1
101  | 1
110  | 1
111  | 1

[Figure: (a) two cascaded 3-input LUTs — LUT1 computes f1 from a, b and LUT2 computes f from f1, c, d; (b) a single 4-input LUT computes f from a, b, c, d]
Fig. 4.16 Implementing function (4.5) with LUTs having 3 (a) and 4 (b) inputs

Table 4.13 Truth table for function (4.5)

abcd | f    abcd | f
0000 | 0    1000 | 0
0001 | 0    1001 | 0
0010 | 0    1010 | 0
0011 | 1    1011 | 1
0100 | 0    1100 | 1
0101 | 0    1101 | 1
0110 | 0    1110 | 1
0111 | 1    1111 | 1

of interconnections. Due to this, the circuit (Fig. 4.16b) requires less chip area, fewer interconnections, and less power than the circuit (Fig. 4.16a). In addition, it is two times faster than its counterpart from Fig. 4.16a. The mapping step provides the input for the packing step. During the packing, LUTs and FFs are packed into CLBs. There are many variants of executing mapping and packing. After the packing, the circuit is represented as a netlist of CLBs. But these CLBs are rather abstract and should be replaced by physical objects.
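The mapping just described can be verified exhaustively in software (a sketch: each LUT is modelled as a truth-table lookup, with the address bit orders of Tables 4.11 and 4.12):

```python
def make_lut(func, n):
    """Build the bit-stream (truth table) of an n-input LUT, MSB-first addressing."""
    return [func(*((i >> (n - 1 - k)) & 1 for k in range(n))) for i in range(2 ** n)]

# Two 3-input LUTs implementing f = ab OR cd via f1 = ab (Fig. 4.16a)
lut1 = make_lut(lambda a, b, x: a & b, 3)           # x is an unused input
lut2 = make_lut(lambda f1, c, d: f1 | (c & d), 3)

# A single 4-input LUT implementing f directly (Fig. 4.16b)
lut4 = make_lut(lambda a, b, c, d: (a & b) | (c & d), 4)

for i in range(16):
    a, b, c, d = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
    f1 = lut1[(a << 2) | (b << 1)]                  # LUT1 address: a b x
    two_level = lut2[(f1 << 2) | (c << 1) | d]      # LUT2 address: f1 c d
    assert two_level == lut4[i]                     # both circuits agree
print("two-level and single-LUT circuits match on all 16 inputs")
```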


4. Place and route. Now each CLB should find its place in the chip. This is executed during the placement step. It is a difficult problem due to its size: very often, it is necessary to place thousands and thousands of CLBs. Next, these CLBs should be connected together. This is executed during the routing step. It should be executed in a way giving the maximum possible performance and the minimum possible power consumption. The outcome of placement significantly affects the outcome of routing, so both procedures could be repeated until the project meets the required constraints. After this step, it is possible to find the real performance of a design. Also, a bitstream is formed to execute the chip configuration (programming). Now, the initial specification is transformed into a layout of the FPGA chip.
5. Project verification. It is the final step of the top-down process. The real characteristics of an FSM could be found only when the first four steps are finished. Now, it is possible to use the actual values of delays among the physical elements. If the result of verification is negative, then the previous design steps should be repeated. If the result is positive, then the chip programming could be executed. In Chap. 5, we discuss problems connected with FPGA-based FSM design. Now, let us discuss how to implement computational algorithms using the possibilities given by FPGA chips. To calculate something, it is possible to use microprocessors, DSP blocks, or EMBs. Let us discuss how to do it. Let us return to the formula P = AB + (AB)² + (AB)³. Both A, B ∈ {0, . . . , 255}; so, they are represented by bytes. Let Alarm = 1 if P ≥ C. It is possible to implement a microprocessor (MP) as a soft-core IP-block. Various FPGA vendors have tools for synthesis of soft MPs: Xilinx, Intel (Altera), Altium, Lattice. The soft-core MPs are implemented using CLBs. They are rather slow devices, having operating frequencies below 200 MHz.
Their frequency is limited by the speed of the FPGA fabric. But they can be easily modified and tuned to specific requirements or custom instructions. It is possible to create multiple cores to implement a multi-processor ES. A lot of soft MPs exist, for example: PicoBlaze, MicroBlaze, NIOS, LEON, Cortex-M1, OpenFire, and OpenRISC. Also, hard-core MPs can be used. They are designed using ASIC technology. They are based on transistors, not CLBs. Due to this, they have around 1 GHz of operating frequency. Of course, they are fixed and cannot be modified. Modern FPGA chips include a lot of DSP slices. For example, a hypothetical DSP is shown in Fig. 4.17a. In reality, each input can be registered, as well as the result of any operation. We do not show these registers and multiplexors in Fig. 4.17a. Multiplexors are necessary to bypass the registering. Instead, we show four programmable switches S1–S4. Depending on the positions of the switches, 16 different functions can be implemented by this DSP (Table 4.14). Three DSP slices are enough to implement the formula P (Fig. 4.18). We use the graphic symbol from Fig. 4.17b. We show the functions implemented by each DSP in Table 4.15. In this table, we show which bits of DSP outputs are connected with which inputs of other DSPs. There are functions implemented by each DSP shown in the


[Figure: (a) simplified DSP architecture — 32-bit inputs a and b and 16-bit inputs c and d pass through programmable switches S1–S4 into an adder, a multiplier and a 48-bit output adder producing f; (b) the graphic symbol used below]
Fig. 4.17 Simplified architecture (a) and graphic symbol (b) of DSP

Table 4.14 Functions implemented by DSP

S1S2S3S4 | f       S1S2S3S4 | f
0000     | 0       1000     | a
0001     | d       1001     | a + d
0010     | 0       1010     | ac
0011     | d       1011     | ac + d
0100     | b       1100     | a + b
0101     | b + d   1101     | a + b + d
0110     | bc      1110     | (a + b) · c
0111     | bc + d  1111     | (a + b) · c + d

[Figure: three cascaded DSP slices — DSP1 receives A and B and produces f1 = AB, DSP2 produces f2, and DSP3 produces P]
Fig. 4.18 Implementing DSP-based circuit for calculation of P

Table 4.15 Functions of different DSP slices

Slice  | DSP1         | DSP2                  | DSP3
Inputs | a[24:31] = A | a[16:31] = f1[32:47]  | a[0:31] = f2[8:47]
       | c[24:31] = B | c[16:31] = f1[32:47]  | c[16:31] = f1[32:47]
       | b = 0        | b = 0                 | b = 0
       | d = 0        | d[16:31] = f1[32:47]  | d[16:31] = f1[32:47]
Output | f1 = AB      | f2 = (AB)²            | P = (AB)³ + (AB)² + AB
Bits   | 16           | 32                    | 48


[Figure: two EMBs, both addressed by A and B[0:6]; their outputs f1 and f2 feed a multiplexer MX controlled by B[7], which outputs Alarm]
Fig. 4.19 Implementing EMB-based circuit for generating Alarm

[Figure: a Zynq 7000 chip in which a Cortex-A9 core is connected through a hardware-software interface (HSI) to an FPGA block; A and B enter the processor and Alarm leaves the FPGA block]
Fig. 4.20 Implementing calculations using Zynq 7000

row "Output". In the row "Bits", we show how many bits the result has. For example, f1 occupies the 16 least-significant bits of the output of DSP1, and so on. To determine the occurrence of Alarm, it is possible to use an EMB. In the discussed case, it is necessary to use an EMB having SA = 16. As follows from Sect. 4.3, there are only EMBs with SA = 15 (32K × 1 bits). So, it is necessary to use two EMBs (Fig. 4.19). It is necessary to create a truth table for calculating the occurrence of the Alarm situation. Next, the table is divided into two tables. There are 15 bits used as addresses of the EMBs: A[0:7] ∗ B[0:6]. The LSB of B is used to control the multiplexer MX. The value of Alarm depends on the values of f1 (EMB1) and f2 (EMB2): Alarm = f1 · B¯[7] ∨ f2 · B[7]. It is the fastest possible solution. The calculations can also be executed using an MP and some soft logic. For example, additions can be executed by the Cortex-A9 of Zynq and multiplications by soft-logic FPGA blocks (Fig. 4.20). It is a standard example when an FPGA-based block is used for acceleration of calculations. It is a "slave" block. The MP is a "master" block calling for calculations. The block of hardware-software interface (HSI) is used for transferring data between the master and the slave. They use standard protocols for data exchange. The blocks from Fig. 4.20 are placed into the Zynq 7000 chip. But the same approach can be used if some standard MC is used. In this case, the MC, HSI and FPGA are different blocks placed on some PCB. So, modern FPGA chips are very powerful and flexible blocks. They can be used for implementing either some part or the whole of an embedded system.
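Before committing the computation to DSP slices and EMBs, it can be prototyped bit-accurately in software (a sketch; the 48-bit mask mirrors the output width of the last DSP slice, and the concrete threshold C used below is an arbitrary illustration):

```python
MASK48 = (1 << 48) - 1  # P fits into the 48-bit output of the last DSP slice

def alarm(a, b, c):
    """Compute P = AB + (AB)^2 + (AB)^3 for byte operands and compare with C."""
    assert 0 <= a <= 255 and 0 <= b <= 255
    ab = a * b                               # DSP1: f1 = AB   (16 bits)
    p = (ab + ab ** 2 + ab ** 3) & MASK48    # DSP2/DSP3 chain (48 bits)
    return p, int(p >= c)

# worst case: (255 * 255)^3 is about 2.75e14, which still fits in 48 bits
p, flag = alarm(200, 100, 10 ** 12)
print(p, flag)  # P = 20000 + 20000**2 + 20000**3, Alarm = 1
```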


References
1. I. Kuon, R. Tessier, J. Rose, FPGA architecture: survey and challenges. Found. Trends Electron. Des. Autom. 2, 135–253 (2008)
2. V. Sklyarov, I. Skliarova, A. Barkalov, L. Titarenko, Synthesis and Optimization of FPGA-Based Systems. Lecture Notes in Electrical Engineering, vol. 294 (Springer International Publishing Switzerland, Cham, 2014)
3. S.M. Trimberger, Three ages of FPGAs: a retrospective on the first thirty years of FPGA technology. Proc. IEEE 103(3), 318–331 (2015)

Chapter 5

Implementing Control Algorithms with FPGAs

Abstract The chapter is devoted to the FPGA-based implementation of control algorithms represented using the language of graph-schemes of algorithms. We use models of Mealy and Moore finite state machines (FSM) to design the circuits of control units. We start from single-level FSM circuits implemented using look-up table (LUT) elements and/or embedded memory blocks (EMB). These methods are illustrated using standard benchmark FSMs. Next, the methods of structural decomposition are discussed. We discuss how to reduce hardware using the replacement of logical conditions, encoding of the collections of microoperations, encoding of the terms, and transformation of objects. We show how to use pseudoequivalent states for optimization of logic circuits of Moore FSMs. The third section is devoted to hardware reduction of Moore FSMs targeting the replacement of logical conditions. At last, we discuss how to optimize hardware by replacing state registers with state counters. The discussed methods target control units based on models of Moore FSMs.

5.1 Implementing Single-Level Circuits

FSMs are used in ESs, for example, to implement the circuit of a hardware-software interface block or other control algorithms. There are thousands of articles devoted to designing FSM circuits. Design methods changed together with the logic elements used for implementing FSM circuits [11, 25]. In this chapter, we discuss FPGA-based methods. As a rule, four optimization problems are connected with FPGA-based design [17, 18]:
1. Minimizing the chip area occupied by an FSM circuit (the problem of hardware reduction).
2. Improving the FSM performance.
3. Minimizing the power consumption.
4. Improving the testability.
We discuss the first problem. It is known [26] that hardware reduction leads to a decrease in propagation delay and power consumption. Minimizing hardware is

© Springer Nature Switzerland AG 2019 A. Barkalov et al., Foundations of Embedded Systems, Studies in Systems, Decision and Control 195, https://doi.org/10.1007/978-3-030-11961-4_5



[Figure: (a) start vertex, (b) end vertex, (c) operator vertex containing the microoperations y1y2, (d) conditional vertex testing x3 with outputs 1 and 0]
Fig. 5.1 Types of GSA vertices

[Figure: the GSA Γ1 shown twice — (a) marked with the Mealy states a1–a3 and (b) with the Moore states a1–a5; the operator vertices contain y1y3, y2, y1y4 and y3y4, and the conditional vertices test x1 and x2]
Fig. 5.2 Initial GSA Γ1 marked by states of Mealy (a) and Moore (b) FSM S1

better than increasing the operating frequency. As we know, the consumed power is directly connected with the operating frequency. In this chapter, FSMs are represented using either graph-schemes of algorithms (GSA) or state transition tables (STT). An example of a GSA is shown in Fig. 3.7. An STT can be viewed as a DST without the columns K(am), K(as), and Φh. As follows from Fig. 3.7, a GSA has vertices which can be start, end, operator and conditional [3]. The start vertex determines the beginning of a control algorithm. It possesses only an output (Fig. 5.1a). The end vertex determines the stop point of an algorithm (Fig. 5.1b). An operator vertex has a single input and a single output. The vertex contains a collection of microoperations (CMO) Yq ⊆ Y executed concurrently (Fig. 5.1c). A conditional vertex includes a logic condition xl ∈ X (Fig. 5.1d). It has two outputs corresponding to the values xl = 0 and xl = 1. Let us discuss the GSA Γ1 (Fig. 5.2). It has four operator vertices and two conditional vertices. The following sets can be derived from the GSA Γ1: X = {x1, x2}, Y = {y1, . . . , y4}. Let us denote |X| = L and |Y| = N. To design an FSM circuit, it is necessary to find the set of states A [3, 21]. There are Mealy and Moore FSMs [20]. For Mealy FSM, microoperations are determined

Table 5.1 STT of Mealy FSM S1

am | as | Xh     | Yh   | h
a1 | a2 | 1      | y1y3 | 1
a2 | a1 | x1     | y3y4 | 2
a2 | a3 | x¯1x2  | y2   | 3
a2 | a3 | x¯1x¯2 | y1y4 | 4
a3 | a1 | 1      | y3y4 | 5

by states am ∈ A and inputs xl ∈ X, where A = {a1, . . . , aM} is a set of states. For Moore FSM, microoperations are determined only by states. This difference leads to the difference in marking a GSA Γ by states [3]. In the case of Mealy FSM, the output of the Start vertex and the input of the End vertex are marked by the initial state a1 ∈ A. Each input following an operator vertex is marked by a unique state. There is the set A = {a1, a2, a3} for the Mealy FSM corresponding to GSA Γ1 (Fig. 5.2a). For Moore FSM, each operator vertex is marked by a unique state. The Start and End vertices are marked by the state a1. There is the set A = {a1, . . . , a5} in the case of the Moore FSM corresponding to GSA Γ1 (Fig. 5.2b). We can transform a GSA Γ into the STT. An STT is a list of transitions among the states am ∈ A. The STT for Mealy FSM S1 is represented by Table 5.1, for Moore FSM S1 by Table 5.2. To implement the FSM circuit, it is necessary to encode the FSM states. It is the step of state assignment [20, 21]. We use the binary state assignment when the number of bits in the state code K(am) is determined as (3.2). Let the symbol M0 stand for the number of states of Mealy FSM, M1 for Moore FSM, R0 for the bitness of K(am) of Mealy FSM, R1 for Moore FSM. To implement the circuit of Moore FSM, it is necessary to find the systems (3.3)–(3.4). To implement the circuit of Mealy FSM, it is necessary to find the system (3.3) and

Y = Y(T, X).    (5.1)

To define these SBFs, it is necessary to construct a DST of an FSM. For GSA Γ1, Table 5.3 is a DST of the Mealy FSM, whereas Table 5.4 of the Moore FSM. In the case of Mealy FSM there is Φ = {D1, D2}, for Moore FSM Φ = {D1, D2, D3}. In the case of Mealy FSM, there are the codes K(a1) = 00, K(a2) = 01, K(a3) = 10. In the case of Moore FSM, there are the codes K(a1) = 000, K(a2) = 001, . . . , K(a5) = 100.
There are H0 = 5 rows in the DST of Mealy FSM S1 and H1 = 7 rows in the DST of Moore FSM S1. The Mealy FSM S1 is represented by the following SBFs:


5 Implementing Control Algorithms with FPGAs

Table 5.2 STT of Moore FSM S1

am          as   Xh      h
a1 (−)      a2   1       1
a2 (y1y3)   a5   x1      2
            a3   x̄1x2    3
            a4   x̄1x̄2    4
a3 (y2)     a5   1       5
a4 (y1y4)   a5   1       6
a5 (y3y4)   a1   1       7

Table 5.3 DST of Mealy FSM S1

am   K(am)   as   K(as)   Xh      Yh     Φh   h
a1   00      a2   01      1       y1y3   D2   1
a2   01      a1   00      x1      y3y4   –    2
             a3   10      x̄1x2    y2     D1   3
             a3   10      x̄1x̄2    y1y4   D1   4
a3   10      a1   00      1       y3y4   –    5

Table 5.4 DST of Moore FSM S1

am          K(am)   as   K(as)   Xh      Φh     h
a1 (−)      000     a2   001     1       D3     1
a2 (y1y3)   001     a5   100     x1      D1     2
            001     a3   010     x̄1x2    D2     3
            001     a4   011     x̄1x̄2    D2D3   4
a3 (y2)     010     a5   100     1       D1     5
a4 (y1y4)   011     a5   100     1       D1     6
a5 (y3y4)   100     a1   000     1       –      7

y1 = T̄1T̄2 ∨ T̄1T2x̄1x̄2;
y2 = T2x̄1x2;    (5.2)
y3 = T2x1 ∨ T̄2;
y4 = T2x1 ∨ T2x̄2 ∨ T1.

D1 = T2x̄1;    (5.3)
D2 = T̄1T̄2.

To optimize these Boolean functions, we use the insignificant input assignment T1T2 = 11 and the law of expansion [19]. The system (5.2) corresponds to the system (5.1), and the system (5.3) to (3.3).
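The correspondence between the DST and the intended behavior can be sanity-checked by simulation. Below is a minimal sketch (the helper name `step` and the data layout are illustrative, not from the book) that encodes Table 5.3 and executes one transition of Mealy FSM S1:

```python
# Minimal simulator for the Mealy FSM S1 given by Table 5.3.
# Each row: (current state, guard on inputs, next state, microoperations).
# Inputs absent from a guard are "don't care" for that transition.
DST = [
    ("a1", {},                 "a2", {"y1", "y3"}),
    ("a2", {"x1": 1},          "a1", {"y3", "y4"}),
    ("a2", {"x1": 0, "x2": 1}, "a3", {"y2"}),
    ("a2", {"x1": 0, "x2": 0}, "a3", {"y1", "y4"}),
    ("a3", {},                 "a1", {"y3", "y4"}),
]

def step(state, inputs):
    """Return (next_state, microoperations) for one clock cycle."""
    for am, guard, nxt, mos in DST:
        if am == state and all(inputs[x] == v for x, v in guard.items()):
            return nxt, mos
    raise ValueError(f"no transition from {state} under {inputs}")

print(step("a2", {"x1": 0, "x2": 1}))  # -> ('a3', {'y2'})
```

The first matching row wins, which mirrors the fact that the guards of one state are mutually exclusive in a correct DST.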

5.1 Implementing Single-Level Circuits

Fig. 5.3 Structural diagram of LUT-based Mealy FSM
Fig. 5.4 Structural diagram of LUT-based Moore FSM

The Moore FSM S1 is represented by the following SBFs:

y1 = A2 ∨ A4 = T̄2T3 ∨ T2T3 = T3;
y2 = A3 ∨ A5 = T2T̄3 ∨ T1;    (5.4)
y3 = A2 = T̄2T3;
y4 = A4 ∨ A5 = T2T3 ∨ T1.

D1 = T̄2T3x1 ∨ T2;
D2 = T̄2T3x̄1;    (5.5)
D3 = T̄2T̄3 ∨ T̄2T3x̄1x̄2.

The system (5.4) corresponds to the system (3.4), and the system (5.5) to (3.3). To optimize these functions, we use the insignificant input assignments 101–111 and the law of expansion [19]. To implement FSM circuits, it is possible to use: (1) LUTs forming the blocks LUTers; (2) EMBs forming the blocks EMBers; or (3) mixed circuits with LUTers and EMBers [14, 15]. Let us analyse the possible solutions. In the case of LUT-based FSMs, the state register is distributed among the LUTs implementing the functions Dr ∈ Φ [11]. So, no registers are shown on the structural diagrams of LUT-based Mealy (Fig. 5.3) and Moore (Fig. 5.4) FSMs. In both cases, the LUTerΦ implements the system (3.3). The LUTerY implements the system (5.1) for Mealy FSM and (3.4) for Moore FSM. Let L(yn) and L(Dr) be the numbers of literals [20] in the functions yn ∈ Y and Dr ∈ Φ, respectively. Let the following conditions take place:

L(yn) ≤ SL (n = 1, N);    (5.6)
L(Dr) ≤ SL (r = 1, R).    (5.7)


Fig. 5.5 Logic circuit of Mealy FSM S1 for SL = 4
Fig. 5.6 Implementing the functions y1 and y4 for SL = 3

In this case, each function representing the FSM circuit is implemented by a single LUT. It is the single-level circuit. It includes exactly R0 + N LUTs for a Mealy FSM and R1 + N LUTs for a Moore FSM. It is the best possible LUT-based solution. For example, let SL = 4. In this case, there are R0 + N = 6 LUTs in the circuit of Mealy FSM S1 (Fig. 5.5). Now let SL = 3. In this case, the condition (5.6) is violated for the functions y1 and y4 from (5.2). To implement the circuit, it is necessary to apply the functional decomposition [22, 23]:

y1 = T̄1(T̄2 ∨ T2x̄1x̄2) = T̄1 f1;    (5.8)
y4 = [T2x1 ∨ T2x̄2] ∨ T1 = f2 ∨ T1.

Two LUTs with SL = 3 are necessary to implement the functions f1 and f2 from (5.8). The circuit is shown in Fig. 5.6. Now there are two levels of LUTs in the circuit of Mealy FSM S1. Each LUT should be represented by its truth table [19]. For example, Table 5.5 is the truth table of LUT1 and Table 5.6 of LUT2. We use the symbol ∗ to show an unused input of a LUT. If SL = 3, then the following conditions are true for Moore FSM S1:

L(D1) = L(D2) ≤ SL;    (5.9)
L(D3) > SL.    (5.10)

The condition (5.6) takes place for yn ∈ Y of Moore FSM S1 . So, it is necessary to apply the functional decomposition for function D3 ∈ Φ: D3 = T¯2 (T¯3 ∨ T3 x¯1 x¯2 ) = T¯2 f 1 .

(5.11)
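The decision of when a function needs functional decomposition reduces to counting its distinct arguments. A minimal sketch (the helper name and the argument sets are illustrative approximations of the systems (5.2)–(5.3)):

```python
# Conditions (5.6)-(5.7): a function fits into a single LUT iff the
# number of its distinct arguments does not exceed S_L.
funcs = {
    "y1": {"T1", "T2", "x1", "x2"},
    "y2": {"T2", "x1", "x2"},
    "y3": {"T2", "x1"},
    "y4": {"T1", "T2", "x1", "x2"},
    "D1": {"T2", "x1"},
    "D2": {"T1", "T2"},
}

def needs_decomposition(args, sl):
    """True if the function cannot be implemented by one S_L-input LUT."""
    return len(args) > sl

for sl in (4, 3):
    bad = [f for f, a in funcs.items() if needs_decomposition(a, sl)]
    print(sl, bad)  # 4 []  /  3 ['y1', 'y4']
```

For SL = 4 every function fits into a single LUT; for SL = 3 exactly y1 and y4 must be decomposed, which matches the discussion above.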

Table 5.5 Truth table of LUT1

T2x1x2   f1
000      1
001      1
010      1
011      1
100      1
101      0
110      0
111      0

Table 5.6 Truth table of LUT2

∗ f1 T1   y1
000       0
001       0
010       0
011       0
100       0
101       0
110       1
111       0

The logic circuit of Moore FSM S1 is shown in Fig. 5.7. It has two levels of logic and includes N + R + 1 − 1 = 7 LUTs with SL = 3 (one LUT is added for f1, while no LUT is needed for y1 = T3). If SL = 4, then there are 6 LUTs and a single level of logic for Moore FSM S1. If SL = 3, then there are 8 LUTs and two levels of logic for Mealy FSM S1. So, using the model of Moore FSM gives better results if a control algorithm is represented by GSA Γ1.

An FSM circuit may be implemented using EMBs of an FPGA chip [14, 15, 24, 26]. An EMB is determined by its configuration ⟨SA, tF⟩, where SA is the number of address bits and tF is the number of cell outputs; such an EMB has 2^SA cells. If tF = 1, then 2^SA = V0. Let it be possible to find a configuration ⟨SA, tF⟩ such that the following condition takes place:

2^(L+R) · (N + R) ≤ V0.    (5.12)

If R = R0, then a Mealy FSM may be implemented as a single EMB plus a state register. If R = R1, then a Moore FSM is implemented as a single EMB and RG. So, such FSMs are represented by the structural diagram shown in Fig. 5.8. Let us discuss the EMB-based implementation of Mealy FSMs. To implement the circuit of an FSM, it is necessary to transform a DST into a truth table. There are the following columns in the truth table: K(am), X, Y, Φ, q. The first two columns

Fig. 5.7 Logic circuit of Moore FSM S1 for SL = 3
Fig. 5.8 Structural diagram of EMB-based FSM

determine the cell address, and the next two columns determine its content. The column q (q = 1, Q) determines the number of a cell. For example, Table 5.7 represents the truth table of Mealy FSM S1. It has Q = 2^(R0+L) = 16 rows. We add the column h to show the correspondence between Tables 5.3 and 5.7. It is necessary to have

H(am) = 2^L    (5.13)

rows to represent the transitions from a state am ∈ A. Only HU rows contain useful information, where

HU = M · H(am).    (5.14)

It means that HR = Q − HU cells of the EMB are redundant. In the discussed case, there is Q = 16, H(am) = 4, HU = 12, and HR = 4. So, 25% of the cells are not used. In the case of Moore FSM S1, there is Q = 32, H(am) = 4, HU = 20, and HR = 12. So, 37.5% of the cells are not used in the case of Moore FSM S1. If the condition (5.12) is violated, then nE EMBs are necessary to implement the FSM circuit. To find nE, it is necessary to find the configuration ⟨SA, tF⟩ such that SA = L + R. It gives the number of EMB outputs (tE):

tE = ⌊V0 / 2^(L+R)⌋.    (5.15)

Table 5.7 Truth table of Mealy FSM S1

K(am)   X      Y          Φ      q    h
T1T2    x1x2   y1y2y3y4   D1D2
00      00     1010       01     1    1
00      01     1010       01     2    1
00      10     1010       01     3    1
00      11     1010       01     4    1
01      00     1001       10     5    4
01      01     0100       10     6    3
01      10     0011       00     7    2
01      11     0011       00     8    2
10      00     0011       00     9    5
10      01     0011       00     10   5
10      10     0011       00     11   5
10      11     0011       00     12   5
11      00     0000       00     13   ∗
11      01     0000       00     14   ∗
11      10     0000       00     15   ∗
11      11     0000       00     16   ∗

The value of nE is equal to

nE = ⌈(N + R)/tE⌉.    (5.16)
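The quantities (5.13)–(5.16) are straightforward to compute. A minimal sketch (the helper name `emb_report` and the EMB volume are assumptions for illustration, not values from the book):

```python
import math

def emb_report(L, R, N, M, V0):
    """EMB usage for a single-EMB FSM implementation, Eqs. (5.13)-(5.16).

    L inputs, R state variables, N microoperations, M states,
    V0 cells of the EMB in the single-output configuration.
    """
    Q = 2 ** (L + R)            # cells addressed by <X, T>
    H_am = 2 ** L               # rows per state, Eq. (5.13)
    HU = M * H_am               # useful rows, Eq. (5.14)
    HR = Q - HU                 # redundant cells
    tE = V0 // Q                # outputs of one EMB, Eq. (5.15)
    nE = math.ceil((N + R) / tE) if tE else None  # Eq. (5.16)
    return Q, HU, HR, tE, nE

# Mealy FSM S1: L = 2, R = 2, N = 4, M = 3; assume V0 = 2**14.
print(emb_report(2, 2, 4, 3, 2 ** 14))  # (16, 12, 4, 1024, 1)
```

For the Mealy FSM S1 this reproduces Q = 16, HU = 12, HR = 4; for the Moore FSM S1 (R = 3, M = 5) it gives Q = 32, HU = 20, HR = 12, as in the text.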

Let the following condition take place:

2^(L+R) > V0.    (5.17)

In this case, there is nE > N + R. Obviously, the best case for EMB-based FSMs is when the condition (5.12) takes place. It leads to an FSM circuit having only a single EMB. If the condition (5.12) is violated, the FSM circuit may be implemented as a composition of LUTs and EMBs. Modern FPGA chips include EMBs having registered outputs [2, 27]. In this case, there is no need for RG; the pulses Start and Clock enter the EMBs. In this section, we discuss the so-called P FSMs. They have single-level structural diagrams. It means that there is a single level of logic with inputs X, T and outputs Φ, Y. Of course, some circuits can have more than one level of LUTs (due to functional decomposition). But this concerns the internal structure of the circuit, not its structural diagram. If the conditions (5.6)–(5.7) take place, then P FSMs are the best solution. They have the minimum number of LUTs (N + R) and only a single level of logic. If the conditions (5.6)–(5.7) are violated, then there is more than a single level of logic in P FSMs. Such a circuit has complex interconnections. It increases the consumed energy and makes the routing problem more difficult. Let us analyse the data from

116

5 Implementing Control Algorithms with FPGAs

Table 5.8. It contains characteristics of the benchmark Mealy FSMs collected in the library [29]. The functions (3.3) and (5.1) may include up to L + R literals. We mark by the sign “+” the benchmarks where L + R > 6. There are 30 such signs in Table 5.8. So, it may be necessary to use the functional decomposition for 62% of all the benchmarks. Some terms (3.5) can have up to 16 literals for the benchmarks kirkman, s208, and sand. We investigated these benchmarks using the CAD tool ISE of Xilinx [28]. We use the device XC5VLX30 from the Virtex-5 family. It has LUTs with SL = 5. The results are shown in Table 5.9. We use two methods of state assignment (Compact and Auto). The columns “LUTs” contain the numbers of LUTs, and the columns “Freq” the operating frequencies in MHz. We can compare the number of LUTs and the value of N + R for each benchmark. Obviously, if L + R > 5, then the number of LUTs exceeds the value N + R. The greater the value of (L + R)/5, the lower the operating frequency of the resulting circuit. Obviously, it is necessary to diminish the number of literals in the FSM functions. In this chapter, we discuss how it may be done using the methods of structural decomposition.

5.2 Basic Methods of Structural Decomposition

To reduce hardware in LUT-based circuits, it is necessary to diminish the number of arguments in the SBFs representing FSM circuits. The functions Dr ∈ Φ and yn ∈ Y depend on the logical conditions xl ∈ X and the state variables Tr ∈ T. In the case of Moore FSMs, the functions yn ∈ Y depend only on the state variables. Due to a proper state assignment [12, 13, 25], it is possible to reduce the number of state variables in some functions from the systems (3.3), (3.4) and (5.1). But to optimize all the functions, it is necessary to use the methods of structural decomposition [9]. In this case, an FSM circuit includes more than a single structural level. Some specific functions are implemented on each level of the circuit. It can result in a decrease of the operating frequency. But for embedded systems, circuits with less chip area and power consumption are better than just fast circuits. So, the increase in the number of levels is acceptable if the resulting ES meets its time constraints and consumes less energy than its single-level equivalent. There are the following methods of structural decomposition [8, 30]:

1. The replacement of logical conditions (RLC).
2. The encoding of the collections of microoperations (CMO).
3. The encoding of the terms of DST.
4. The transformation of object codes (TOC).
5. The replacement of the state RG by a state counter.

Table 5.8 Characteristics of benchmarks

Benchmark   L    N    H     M     R   L+R   N+R
bbara       4    2    60    10    4   8     6     +
bbsse       7    7    56    16    4   11    11    +
bbtas       2    2    24    6     3   5     5
bbcount     3    4    28    7     3   6     7
cse         7    7    91    16    4   11    11    +
dk14        3    5    56    7     3   6     8
dk15        3    5    32    4     2   5     7
dk16        2    3    108   27    5   7     8     +
dk17        2    3    32    8     3   5     6
dk27        1    2    14    7     3   4     5
dk512       1    3    30    15    4   5     7
donfile     2    1    96    24    5   7     6     +
ex1         9    19   138   20    5   14    24    +
ex2         2    2    72    19    5   7     7     +
ex3         2    2    36    10    4   6     6
ex4         6    9    21    14    4   10    13    +
ex5         2    2    32    9     4   6     6
ex6         5    8    34    8     3   8     11    +
ex7         2    2    36    10    4   6     6
keyb        7    7    170   19    5   12    12    +
kirkman     12   6    370   16    4   16    10    +
lion        2    1    11    4     2   4     5
lion9       2    1    25    9     4   6     5
mark1       5    16   22    15    4   9     20    +
mc          3    5    10    4     2   5     7
modulo12    1    1    24    12    4   5     5
opus        5    6    22    10    4   9     10    +
planet      7    19   115   48    6   13    25    +
planet1     7    19   115   48    6   13    25    +
pma         8    8    73    24    5   13    13    +
s1          8    6    107   20    5   13    11    +
s1488       8    19   251   48    6   14    25    +
s1494       8    19   251   48    6   14    25    +
s1a         8    6    107   20    5   13    11    +
s208        11   2    153   18    5   16    7     +
s27         4    1    34    6     3   7     4     +
s298        3    6    109   218   8   11    14    +
s386        7    7    64    13    4   11    11    +
s8          4    1    20    5     3   9     6     +
sand        11   9    184   32    5   16    14    +
shifreg     1    1    16    8     3   4     4
sse         7    7    56    16    4   11    11    +
styr        9    10   166   30    5   14    15    +
tav         4    4    49    4     2   6     6
tbk         6    3    156   32    5   11    8     +
tma         7    6    44    20    5   12    11    +
train11     2    21   5     11    4   6     25
train4      2    1    14    4     2   4     3

Table 5.9 Results of investigations (frequencies in MHz)

            Auto            Compact
Benchmark   LUTs   Freq.    LUTs   Freq.
bbara       11     639      13     635
bbsse       29     559      29     582
bbtas       5      962      5      966
bbcount     7      952      7      952
cse         49     480      46     463
dk14        8      545      8      945
dk15        7      1062     7      1062
dk16        16     556      15     625
dk17        6      952      6      952
dk27        5      900      5      897
dk512       17     730      7      899
donfile     15     558      14     612
ex1         64     586      74     447
ex2         14     940      16     985
ex3         12     980      13     986
ex4         15     962      16     962
ex5         14     986      15     986
ex6         29     553      20     621
ex7         14     988      15     990
keyb        56     384      65     358
kirkman     51     874      53     569
lion        3      1084     3      1080
lion9       6      980      5      996
mark1       27     726      19     708
mc          5      1071     5      1071
modulo12    26     612      28     632
opus        22     596      21     628
planet      100    888      138    389
planet1     100    88       138    389
pma         73     554      72     438
s1          77     550      75     447
s1488       140    425      141    432
s1494       124    412      143    442
s1a         77     550      75     447
s208        28     559      23     669
s27         4      962      4      962
s298        362    406      330    313
s386        26     577      28     581
s8          4      962      4      962
sand        99     569      121    426
shifreg     3      1584     3      1584
sse         29     559      28     543
styr        118    430      127    269
tav         6      1556     6      911
tbk         55     406      71     465
tma         30     440      32     438
train11     28     560      26     580
train4      8      416      10     466

Let us explain these approaches using a GSA Γ2 (Fig. 5.9). The structural decomposition can be used for both Mealy and Moore FSMs. But some additional approaches are possible for Moore FSMs. We discuss them later. So, we mark the GSA Γ2 by the states of a Mealy FSM. There are the sets X = {x1, . . . , x6}, Y = {y1, . . . , y8}, and A = {a1, . . . , a6} in the Mealy FSM S2 corresponding to Γ2. So, there is M = 6. Using (3.2) gives R = 3, and there are the sets T = {T1, T2, T3} and Φ = {D1, D2, D3}. Let us encode the states am ∈ A in the trivial way: K(a1) = 000, . . . , K(a6) = 101. Now we can get the DST of FSM S2 (Table 5.10). It has H0 = 13 lines. In the case of RLC, additional variables pg ∈ P replace the logical conditions xl ∈ X [8, 30]. There are G additional variables. The method can be used if G < L.

Fig. 5.9 Marked GSA Γ2

Table 5.10 DST of Mealy FSM S2

am   K(am)   as   K(as)   Xh      Yh       Φh     h
a1   000     a2   001     1       y1y2     D3     1
a2   001     a3   010     x1x2    y3       D2     2
             a3   010     x1x̄2    y3y4     D2     3
             a4   011     x̄1x3    y1y5     D2D3   4
             a4   011     x̄1x̄3    y5y6     D2D3   5
a3   010     a5   100     x4      y1y7     D1     6
             a1   000     x̄4      y1y2     –      7
a4   011     a6   101     x5      y3y6y8   D1D3   8
             a4   011     x̄5      y5y6     D2D3   9
a5   100     a1   000     1       y1y2     –      10
a6   101     a1   000     x1      y3y4     –      11
             a1   000     x̄1x6    y1y5     –      12
             a4   011     x̄1x̄6    y5y6     D2D3   13

Fig. 5.10 Structural diagram of MP Mealy FSM

Table 5.11 Table of RLC for Mealy FSM S2

pg\am   a1   a2   a3   a4   a5   a6
p1      –    x1   –    –    –    x1
p2      –    x2   –    x5   –    –
p3      –    x3   x4   –    –    x6

As a rule, this condition is true for practical FSMs [16]. If RLC is executed, then the FSM circuit has two structural levels (Fig. 5.10). It is an MP Mealy FSM [8]. The block of RLC generates functions P = P(T, X ).

(5.18)

The functions (5.18) are arguments of the functions yn ∈ Y and Dr ∈ Φ. Now, these functions are represented as

Φ = Φ(T, P);    (5.19)
Y = Y(T, P).    (5.20)

The functions (5.19)–(5.20) are generated by the block of functions (BF). There are the following steps in the synthesis method of MP Mealy FSM:

1. Executing the RLC.
2. Transformation of the initial DST.
3. Constructing the systems (5.18)–(5.20).
4. Implementing the FSM circuit.

Let us use the denotation MP(Γ j ) to show that the model MP is used for the GSA Γ j . Let us discuss how to design the Mealy FSM MP(Γ2 ). Let X (am ) be a set of logical conditions determining transitions from the state am ∈ A. Let us find the value G, where G = max(|X (a1 )|, . . . , |X (a M )|).

(5.21)

The function (5.21) determines the set P = {p1, . . . , pG}. In the discussed case, there is G = 3. So, there is the set P = {p1, p2, p3}. Let us form Table 5.11 showing the distribution of the xl ∈ X among the variables pg ∈ P. The following functions can be derived from Table 5.11:


Table 5.12 DST of Mealy FSM MP(Γ2)

am   K(am)   as   K(as)   Xh    Ph    Yh       Φh     h
a1   000     a2   001     1     1     y1y2     D3     1
a2   001     a3   010     x1    p2    y3       D2     2
             a3   010     x1    p̄2    y3y4     D2     3
             a4   011     x̄1    p3    y1y5     D2D3   4
             a4   011     x̄1    p̄3    y5y6     D2D3   5
a3   010     a5   100     1     p3    y1y7     D1     6
             a1   000     1     p̄3    y1y2     –      7
a4   011     a6   101     1     p2    y3y6y8   D1D3   8
             a4   011     1     p̄2    y5y6     D2D3   9
a5   100     a1   000     1     1     y1y2     –      10
a6   101     a1   000     x1    1     y3y4     –      11
             a1   000     x̄1    p3    y1y5     –      12
             a4   011     x̄1    p̄3    y5y6     D2D3   13

p1 = x1;
p2 = A2x2 ∨ A4x5;    (5.22)
p3 = A2x3 ∨ A3x4 ∨ A6x6.

So, only two functions should be implemented by the BRLC (p1 = x1 requires no logic). Replacing the variables Am by the corresponding conjunctions of state variables, we can get the system representing the BRLC of Mealy FSM MP(Γ2):

p2 = T̄1T̄2T3x2 ∨ T2T3x5;    (5.23)
p3 = T̄1T̄2T3x3 ∨ T2T̄3x4 ∨ T1T3x6.

There are the following columns in the transformed DST [30]: am, K(am), as, K(as), Xh, Ph, Yh, Φh, h. The column Xh includes only the variable x1 ∈ X. The column Ph includes the other logical conditions. Let us point out that in the common case the column Xh is absent in the DST of MP FSM. The DST of Mealy FSM MP(Γ2) is represented by Table 5.12. The system (5.18) is represented by Eq. (5.23). The systems (5.19)–(5.20) are derived from Table 5.12. For example, the following equations can be derived (after minimization):

D2 = T̄1T̄2T3 ∨ T2T3p̄2 ∨ T1T3x̄1p̄3;    (5.24)
y5 = T̄1T̄2T3x̄1 ∨ T1T3x̄1p3 ∨ T2T3p̄2 ∨ T1T3x̄1p̄3.
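The gain provided by RLC is easy to quantify: the block of functions then depends on T and P instead of T and X, so its functions have at most R + G arguments instead of R + L. A minimal sketch (data taken from the running example S2; variable names are illustrative):

```python
# Replacement of logical conditions (RLC): compute L, G and compare the
# argument counts of the block of functions before and after RLC.
X_of = {  # X(am): logical conditions tested in state am
    "a1": set(), "a2": {"x1", "x2", "x3"}, "a3": {"x4"},
    "a4": {"x5"}, "a5": set(), "a6": {"x1", "x6"},
}
L = len(set().union(*X_of.values()))   # 6 logical conditions
G = max(map(len, X_of.values()))       # Eq. (5.21): G = 3
R = 3                                  # state variables of S2
print(L, G, R + L, R + G)  # 6 3 9 6
```

Here the maximum is reached for the state a2, which checks three conditions; RLC reduces the upper bound on arguments from 9 to 6.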

Fig. 5.11 Structural diagram of PY Mealy FSM

The last two terms are the same in Eq. (5.24). It can be used for optimizing the circuit after the functional decomposition [22, 23]. We do not discuss this possibility. Also, we do not discuss the final step of the design. The method of encoding of CMOs is based on finding codes K(Yq) of CMOs Yq ⊆ Y (q = 1, Q) [30]. The CMO codes have RQ bits:

RQ = ⌈log2 Q⌉.    (5.25)

The CMOs are encoded using the variables zr ∈ Z , where |Z | = R Q . It leads to PY Mealy FSM [30]. Its structural diagram is shown in Fig. 5.11. In PY FSM, the BF implements functions (3.3) and Z = Z (T, X ).

(5.26)

The functions zr ∈ Z are arguments of SBF Y = Y (Z ).

(5.27)

There are the following steps in the synthesis of PY Mealy FSM:

1. Finding the CMOs Yq ⊆ Y.
2. Encoding the CMOs Yq ⊆ Y.
3. Transformation of the initial DST.
4. Deriving the systems (3.3), (5.26)–(5.27).
5. Implementing the FSM circuit.

In the case of Mealy FSM PY(Γ2), there are Q = 7 CMOs Yq ⊆ Y. They are the following: Y1 = {y1, y2}, Y2 = {y3}, Y3 = {y3, y4}, Y4 = {y1, y5}, Y5 = {y5, y6}, Y6 = {y1, y7}, Y7 = {y3, y6, y8}. So, there is RQ = 3 and Z = {z1, z2, z3}. Let us form the system yn = fn(Yq). It is the following in the discussed case:

y1 = Y1 ∨ Y4 ∨ Y6;
y2 = Y1;
y3 = Y2 ∨ Y3 ∨ Y7;
y4 = Y3;    (5.28)
y5 = Y4 ∨ Y5;
y6 = Y5 ∨ Y7;
y7 = Y6;
y8 = Y7.

Fig. 5.12 Codes of CMOs of Mealy FSM PY(Γ2)

Let us encode the CMOs in such a manner that the number of arguments is minimum for each function yn ∈ Y. To do it, we can use the method [1]. In the discussed case, the codes K(Yq) are shown in Fig. 5.12. Using these codes turns the system (5.28) into the following system:

y1 = z̄3;
y2 = z̄1z̄2z̄3;
y3 = z̄2z3 ∨ z1z3;
y4 = z̄1z̄2z3;    (5.29)
y5 = z̄1z2;
y6 = z2z3;
y7 = z1z̄3;
y8 = z1z2.

In the common case, there are N · RQ = 24 arguments in the functions (5.27). There are 18 arguments in the functions (5.29). Each argument corresponds to an interconnection between the BF and the BMO. So, the used approach of encoding saves 25% of the interconnections. There are practically the same columns in the initial and transformed DSTs. But the column Yh is replaced by the column Zh. It contains the variables zr ∈ Z equal to 1 in the code K(Yq) of the CMO from the hth row of the initial table (h = 1, H0). In the discussed case, Table 5.13 is the DST of Mealy FSM PY(Γ2). It includes H0 = 13 rows. This table is the base for deriving the system (5.26). For example, the following equation can be derived from Table 5.13 (after minimization):

z2 = T̄1T̄2T3x̄1 ∨ T2T3 ∨ T1T̄2T3x̄1 = T̄2T3x̄1 ∨ T2T3.    (5.30)
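The effect of a good CMO encoding can be checked mechanically: each yn must be 1 exactly on the codes of the CMOs that contain it. A minimal sketch (helper name illustrative; codes from Table 5.14):

```python
# Encoding of CMOs: verify the on-sets behind the covers of system (5.29).
# The unused code 110 is a don't care.
K = {"Y1": "000", "Y2": "101", "Y3": "001", "Y4": "010",
     "Y5": "011", "Y6": "100", "Y7": "111"}
members = {"y1": {"Y1", "Y4", "Y6"}, "y2": {"Y1"},
           "y3": {"Y2", "Y3", "Y7"}, "y4": {"Y3"},
           "y5": {"Y4", "Y5"}, "y6": {"Y5", "Y7"},
           "y7": {"Y6"}, "y8": {"Y7"}}

def on_set(yn):
    """Codes K(Yq) on which microoperation yn must be 1."""
    return {K[q] for q in members[yn]}

# y1 is 1 on {000, 010, 100}: all codes with z3 = 0 except the unused
# 110, hence the single-literal cover y1 = not z3.
print(sorted(on_set("y1")))  # ['000', '010', '100']
```

The same check applied to y5 gives {010, 011}, i.e. the cover z̄1z2, and so on for the remaining functions.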

Table 5.13 DST of Mealy FSM PY(Γ2)

am   K(am)   as   K(as)   Xh      Zh       Φh     h
a1   000     a2   001     1       –        D3     1
a2   001     a3   010     x1x2    z1z3     D2     2
             a3   010     x1x̄2    z3       D2     3
             a4   011     x̄1x3    z2       D2D3   4
             a4   011     x̄1x̄3    z2z3     D2D3   5
a3   010     a5   100     x4      z1       D1     6
             a1   000     x̄4      –        –      7
a4   011     a6   101     x5      z1z2z3   D1D3   8
             a4   011     x̄5      z2z3     D2D3   9
a5   100     a1   000     1       –        –      10
a6   101     a1   000     x1      z3       –      11
             a1   000     x̄1x6    z2       –      12
             a4   011     x̄1x̄6    z2z3     D2D3   13

Table 5.14 Table of BMO of Mealy FSM PY(Γ2)

Yq   K(Yq)    Y               q
     z1z2z3   y1y2y3y4y5y6y7y8
Y1   000      11000000        1
Y2   101      00100000        2
Y3   001      00110000        3
Y4   010      10001000        4
Y5   011      00001100        5
Y6   100      10000010        6
Y7   111      00100101        7

Equations similar to (5.30) are used to implement the circuit of the BF. The equations (3.3) are used as well. The BMO is represented by a table having the columns Yq, K(Yq), Y, q. Only Q rows of this table contain useful information. In the discussed case, it is Table 5.14. The table of the BMO is used to get the system (5.27). To get the system (5.29), it is necessary: (1) to construct the system yn = fn(Yq) and (2) to optimize the functions using a Karnaugh map similar to Fig. 5.12. Now we have the functions (3.3), (5.26) and (5.27). The next step is implementing the FSM circuit. We do not discuss this step.

Let us encode a term Fh by the binary code K(Fh) having RH bits:

RH = ⌈log2 H⌉.    (5.31)
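The systems (5.33)–(5.34) used below are nothing more than incidence lists of the DST: which terms (rows) raise each microoperation and each input memory function. A minimal sketch (data from Table 5.10; layout illustrative):

```python
import math

# Encoding the terms of a DST: every row h is a term F_h.
rows = {  # h: (microoperations, input memory functions)
    1: ({"y1", "y2"}, {"D3"}),  2: ({"y3"}, {"D2"}),
    3: ({"y3", "y4"}, {"D2"}),  4: ({"y1", "y5"}, {"D2", "D3"}),
    5: ({"y5", "y6"}, {"D2", "D3"}), 6: ({"y1", "y7"}, {"D1"}),
    7: ({"y1", "y2"}, set()),   8: ({"y3", "y6", "y8"}, {"D1", "D3"}),
    9: ({"y5", "y6"}, {"D2", "D3"}), 10: ({"y1", "y2"}, set()),
    11: ({"y3", "y4"}, set()),  12: ({"y1", "y5"}, set()),
    13: ({"y5", "y6"}, {"D2", "D3"}),
}
H = len(rows)
RH = math.ceil(math.log2(H))   # Eq. (5.31): 4 bits for H = 13
terms_of_y1 = sorted(h for h, (mo, _) in rows.items() if "y1" in mo)
print(RH, terms_of_y1)  # 4 [1, 4, 6, 7, 10, 12]
```

The printed term list reproduces the first equation of (5.35), y1 = F1 ∨ F4 ∨ F6 ∨ F7 ∨ F10 ∨ F12.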


Fig. 5.13 Structural diagram of PH Mealy FSM

Let us use variables zr ∈ Z to encode the terms, where |Z | = R H . Now, it is possible to represent functions Dr ∈ Φ and yn ∈ Y as functions depending on Z . In the case of yn ∈ Y , it is the system (5.27). In the case of Dr ∈ Φ, it is the system: Φ = Φ(Z ).

(5.32)

It leads to PH Mealy FSM (Fig. 5.13). The block of terms (BT) implements the system (5.26). The BF implements the systems (5.27) and (5.32). There are the following steps in the synthesis of PH Mealy FSM:

1. Creating the set F of DST terms.
2. Encoding of the terms Fh ∈ F.
3. Transformation of the initial DST.
4. Deriving the systems (5.26), (5.27), (5.32).
5. Implementing the FSM circuit.

In the case of Mealy FSM PH(Γ2), there is H = 13. It gives the value RH = 4 and the set Z = {z1, . . . , z4}. It is possible to encode the terms in a way minimizing the number of literals in the functions (5.27) and (5.32). To do it, we should start from the systems

Y = Y(F);    (5.33)
Φ = Φ(F).    (5.34)

In the discussed case, these systems are the following:

y1 = F1 ∨ F4 ∨ F6 ∨ F7 ∨ F10 ∨ F12;
y2 = F1 ∨ F7 ∨ F10;
y3 = F2 ∨ F3 ∨ F8 ∨ F11;
y4 = F3 ∨ F11;    (5.35)
y5 = F4 ∨ F5 ∨ F9 ∨ F12 ∨ F13;
y6 = F5 ∨ F8 ∨ F9 ∨ F13;
y7 = F6;
y8 = F8.

Fig. 5.14 Codes of terms
Fig. 5.15 Structural diagram of PA Mealy FSM

D1 = F6 ∨ F8;
D2 = F2 ∨ F3 ∨ F4 ∨ F5 ∨ F9 ∨ F13;    (5.36)
D3 = F1 ∨ F4 ∨ F5 ∨ F8 ∨ F9 ∨ F13.

Let us encode the terms as it is shown in Fig. 5.14. We encode the terms with the aim of optimizing the system (5.35). Now, there is a significant simplification of some equations from (5.35). For example, there are the following equations: y1 = z̄1; y2 = z̄1z̄2; y3 = z1z̄2; y4 = z1z̄2z4; y7 = z2z̄3z4. The other equations of (5.35)–(5.36) have 4 literals each. The systems (5.35)–(5.36) are used to implement the circuit of the BF. To implement the BT, it is necessary to transform the initial DST. To do it, the columns Φh and Yh of the initial DST are replaced by the column Zh. This column includes the variables zr ∈ Z equal to 1 in the code K(Fh). It is executed in the trivial way. So, we do not discuss this step. There are the following objects in Mealy FSM [5]: states, microoperations and CMOs. It is possible to represent, for example, the MOs yn ∈ Y as functions of states am ∈ A and identifiers Ik ∈ I. It leads to PA Mealy FSM (Fig. 5.15). The BF implements the systems (3.3) and

V = V(T, X).

(5.37)

The variables vr ∈ V encode the identifiers Ik ∈ I . The BMO implements the system Y = Y (T, V ).

(5.38)


Table 5.15 Table of object transformation for PA(Γ2)

Yq   ⟨am, Ik⟩
Y1   ⟨a1, I1⟩, ⟨a2, ∅⟩
Y2   ⟨a3, I1⟩
Y3   ⟨a1, I2⟩, ⟨a3, I2⟩
Y4   ⟨a1, I3⟩, ⟨a4, I1⟩
Y5   ⟨a4, I2⟩
Y6   ⟨a5, ∅⟩
Y7   ⟨a6, ∅⟩

Obviously, the variables vr ∈ V replace the logical conditions in the system (5.1). If |V| < |X|, then the number of arguments in (5.38) is less than in (5.1). There are the following steps in the design method of PA Mealy FSM:

1. Finding the set of identifiers I = {I1, . . . , IK}.
2. Encoding of the identifiers Ik ∈ I by codes K(Ik).
3. Representing the CMOs Yq ⊆ Y as pairs ⟨am, Ik⟩.
4. Transformation of the initial DST.
5. Creating the systems (3.3), (5.37), (5.38).
6. Implementing the FSM circuit.

Let us discuss an example of synthesis of Mealy FSM PA(Γ2). Let us analyse Table 5.10. Let us find the sets Y(am) ⊆ Y including the CMOs generated during transitions into the state am ∈ A. In the discussed case, there are the following sets: Y(a1) = {Y1, Y3, Y4}, Y(a2) = {Y1}, Y(a3) = {Y2, Y3}, Y(a4) = {Y4, Y5}, Y(a5) = {Y6}, Y(a6) = {Y7}. The value of K is determined as

K = max(|Y(a1)|, . . . , |Y(aM)|).    (5.39)

In the discussed case, there is K = 3. It gives the set I = {I1, I2, I3}. If |Y(am)| = 1, then there is no need in an identifier to represent the single CMO entered together with am ∈ A. Let us encode each identifier Ik ∈ I by a binary code K(Ik) having RK bits:

RK = ⌈log2 K⌉.    (5.40)

For PA (Γ2 ), there is R K = 2. It gives the set V = {v1 , v2 }. Let us encode the identifiers in the trivial way: K (I1 ) = 00, K (I2 ) = 01, and K (I3 ) = 10. Let us construct the table of object transformation. It represents CMOs Yq ⊆ Y as pairs am , Ik . It is Table 5.15. The transformation of DST is reduced to the replacement of column Yh by the column Vh . The column Vh includes variables vr ∈ V equal to 1 in the code K (Ik ) from the hth row of DST. For PA (Γ2 ) the transformed DST is represented by Table 5.16. The column Ih is added to clarify the values of vr ∈ V in each row. The identifiers are taken from Table 5.15. The system (5.37) is constructed in the trivial way. For example, v2 = F3 ∨ F5 ∨ F9 ∨ F11 ∨ F13 , v1 = T1 T3 x¯1 x6 . To create the system (5.38), it is necessary to use a table similar to Table 5.15 and the system yn = f n (Yq ). So, in the discussed case, we use Table 5.15 and the system (5.28).

Table 5.16 Transformed DST of Mealy FSM PA(Γ2)

am   K(am)   as   K(as)   Xh      Ih   Vh   Φh     h
a1   000     a2   001     1       –    –    D3     1
a2   001     a3   010     x1x2    I1   –    D2     2
             a3   010     x1x̄2    I2   v2   D2     3
             a4   011     x̄1x3    I1   –    D2D3   4
             a4   011     x̄1x̄3    I2   v2   D2D3   5
a3   010     a5   100     x4      –    –    D1     6
             a1   000     x̄4      I1   –    –      7
a4   011     a6   101     x5      –    –    D1D3   8
             a4   011     x̄5      I2   v2   D2D3   9
a5   100     a1   000     1       I1   –    –      10
a6   101     a1   000     x1      I2   v2   –      11
             a1   000     x̄1x6    I3   v1   –      12
             a4   011     x̄1x̄6    I2   v2   D2D3   13

For example, there is y1 = Y1 ∨ Y4 ∨ Y6. As follows from Table 5.15: Y1 = A1 · I1 ∨ A2, Y4 = A1 · I3 ∨ A4 · I1 and Y6 = A5. So, the following equation may be obtained for the function y1:

y1 = T̄1T̄2T̄3v̄1v̄2 ∨ T̄1T̄2T3 ∨ T̄1T̄2T̄3v1 ∨ T̄1T2T3v̄1v̄2 ∨ T1T̄2T̄3.    (5.41)

Using the same approach, it is possible to get the other equations of (5.38). All the equations of (5.38) are similar to (5.41). So, now the functions yn ∈ Y depend on R + RK variables. If RK < L, we can expect a hardware reduction in comparison with the P Mealy FSM. It is possible to represent the states am ∈ A by pairs ⟨Yq, Ik⟩. It leads to PY Mealy FSM [6]. We do not discuss this approach in this book. All these methods may be used for optimizing circuits of Moore FSMs. Using them, we should take into account the existence of pseudoequivalent states (PES) of Moore FSM [4]. States am, as ∈ A are PES if the same inputs Xh cause transitions from am and as into the same states (h ∈ {1, . . . , H}). For example, the states a3, a4 ∈ A (Fig. 5.2b) are PES. They correspond to the state a3 of the equivalent Mealy FSM S1 (Fig. 5.2a). Using this definition, it is possible to find the partition ΠA = {B1, . . . , BI} of the set A by the classes of PES. For example, ΠA = {B1, B2, B3, B4} for FSM S1, where B1 = {a1}, B2 = {a2}, B3 = {a3, a4}, and B4 = {a5}. As you can see, the following relation takes place:

I = M0 + 1.    (5.42)

Two methods can be used to simplify the circuit of Moore FSM [4]:


Table 5.17 Transformed DST of Moore FSM S1

Bi   K(Bi)   as   K(as)   Xh      Φh     h
B1   00      a2   001     1       D3     1
B2   01      a5   100     x1      D1     2
             a3   010     x̄1x2    D2     3
             a4   011     x̄1x̄2    D2D3   4
B3   10      a5   100     1       D1     5

1. Optimal state assignment.
2. Transformation of state codes into class codes.

Let us discuss the second approach. Let us encode the classes Bi ∈ ΠA by binary codes K(Bi) having RA bits:

RA = ⌈log2 I⌉.    (5.43)

In the discussed case, there is RA = 2. It determines the set τ = {τ1, τ2}, whose variables encode the classes Bi ∈ ΠA. Let us encode the classes in the trivial way: K(B1) = 00, . . . , K(B4) = 11. Let us transform the DST of Moore FSM. To do it, the columns am, K(am) are replaced by the columns Bi, K(Bi), respectively. It transforms Table 5.4 into Table 5.17. There is no transition ⟨B4, a1⟩ in Table 5.17. It is justified by the fact that Dr = 0 (r = 1, 3) for this transition. Now the following system may be derived from the transformed DST:

Φ = Φ(τ, X).    (5.44)

To construct this system, the input assignment 11 can be used as insignificant for our example. It gives the system

D1 = τ2x1 ∨ τ1;
D2 = τ2x̄1;    (5.45)
D3 = τ̄1τ̄2 ∨ τ2x̄1x̄2.

The system (3.4) may be derived from the initial DST. To obtain the system (5.44), the following system should be found:

τ = τ(T).    (5.46)

So, both systems (3.4) and (5.46) depend on the states am ∈ A. It means that the states can be encoded in such a way that the total number of literals in the systems (3.4) and (5.46) is minimal.
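The partition ΠA by classes of pseudoequivalent states used here can be computed mechanically: states whose transition rows coincide fall into one class. A minimal sketch (data is the Moore FSM S1 of Table 5.2; the condition strings are illustrative):

```python
# Pseudoequivalent states (PES): the same inputs lead to the same next
# states. Grouping states by this signature yields the partition ΠA.
transitions = {
    "a1": (("1", "a2"),),
    "a2": (("x1", "a5"), ("~x1 x2", "a3"), ("~x1 ~x2", "a4")),
    "a3": (("1", "a5"),),
    "a4": (("1", "a5"),),
    "a5": (("1", "a1"),),
}

classes = {}
for am, rows in transitions.items():
    classes.setdefault(rows, []).append(am)   # same rows -> same class

partition = sorted(classes.values())
print(partition)  # [['a1'], ['a2'], ['a3', 'a4'], ['a5']]
```

The states a3 and a4 share the single unconditional transition into a5 and therefore form the class B3, reproducing ΠA = {B1, B2, B3, B4}.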

Fig. 5.16 Refined state codes of Moore FSM S1
Fig. 5.17 Structural diagram of PC Moore FSM

The systems (3.4) and (5.46) are the following for our example: y1 = A2 ∨ A4 ;

(5.47)

y2 = A3 ∨ A5 ; y3 = A2 ; y4 = A4 ∨ A5 ; τ1 = B3 = A3 ∨ A4 ; τ2 = B2 = A2 . This system can have up to (N + R A ) · R literals. This number can be diminished using the refined state assignment [10, 11]. In the discussed case, the state codes are shown in Fig. 5.16. Using the codes from Fig. 5.16, the following system may be obtained from the system (5.47): y1 = T1 T2 ; y2 = T¯1 T2 T¯3 ∨ T1 T¯2 T¯3 ;

(5.48)

y3 = T3 ; y4 = T2 T̄3 ; τ1 = T1 T̄3 ; τ2 = T3 . There are 10 literals in the system (5.48). In the common case it should include (N + RA) · R = (4 + 2) · 3 = 18 literals. So, only about 56% of the literals remain thanks to the refined state assignment. Of course, the system (5.44) is different from (5.45). This approach leads to the PC Moore FSM. Its structural diagram is shown in Fig. 5.17.


5 Implementing Control Algorithms with FPGAs

In the PC Moore FSM, the block of input memory functions (BIMF) implements the system (5.44), and the BF implements the systems (3.4) and (5.46). The method of synthesis is clear from the previous example. This method belongs to TOC-based methods. In the PC Moore FSM, the state codes K(am) are transformed into the class codes K(Bi). If a GSA Γ includes more than 70% of operator vertices, then it is a linear GSA [7]. If an FSM is represented by a linear GSA, then the state register may be replaced by a state counter. It decreases the number of terms in the functions Dr ∈ Φ. There are also methods that reduce the number of literals in the functions (3.3). We discuss one of them in Sect. 5.4.

5.3 Replacement of Logical Conditions in EMB-Based Moore FSMs

Let us consider the model U1 of Moore FSM shown in Fig. 5.18. Let us consider how to optimize the FPGA-based circuit of FSM U1. In FSM U1, the BIMF implements the system (3.3), the BMO the system (3.4). To diminish the area occupied by an FSM logic circuit, it is very important to implement both BIMF and BMO using EMBs. There are four known EMB-based models of Moore FSM [15, 26]. The trivial structural diagram of Moore FSM U2 is the same as the one shown in Fig. 5.8. It may be used if the condition (5.12) is true. Let the symbol Ui(Γj) mean that the model Ui is used for the GSA Γj. In the case of U2(Γj), the EMB is represented by a truth table similar to Table 5.7. So, both blocks BIMF and BMO are implemented as a single EMB. Let the condition (5.12) be violated. Let the following conditions take place:

R · 2^(L+R) ≤ V0;

(5.49)

N · 2^R ≤ V0.

(5.50)

In this case, the FSM U3 can be used (Fig. 5.19a). In U3, the EMB1 implements the BIMF, the EMB2 the BMO. The synthesis method of U3 includes a step of creating the table of EMB2. It is similar to Table 5.14. But the columns Yq, K(Yq) are replaced by the columns am, K(am), respectively.
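All of these feasibility conditions have the same shape: a memory block with a given number of address inputs and outputs must fit into the V0 bits of one EMB. A tiny helper (purely illustrative; the sample numbers are those of the GSA Γ3 example discussed later in this section) makes them easy to test:

```python
def fits_emb(address_bits: int, output_bits: int, v0: int) -> bool:
    # The block must store output_bits * 2**address_bits bits.
    return output_bits * 2 ** address_bits <= v0

# Parameters of the GSA Gamma3 example: L = 6, R = 4, N = 10, V0 = 512
L, R, N, V0 = 6, 4, 10, 512
print(fits_emb(L + R, R, V0))  # (5.49), EMB1 of U3: False, so U3 does not fit
print(fits_emb(R, N, V0))      # (5.50), EMB2 of U3: True
```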

Fig. 5.18 Structural diagram of Moore FSM U1: the Block of Input Memory Functions (inputs X, Start, Clock) generates Φ for the register RG; the state code T feeds the Block of Microoperations, which generates Y

Fig. 5.19 Structures of EMB-based Moore FSMs: (a) U3, where EMB1 generates Φ for the register RG and EMB2 generates Y; (b) U4, where the multiplexer MX generates P and a single EMB generates both Φ and Y; (c) U5, where MX generates P, EMB1 generates Φ and EMB2 generates Y

The models U4 (Fig. 5.19b) and U5 (Fig. 5.19c) are based on RLC. In this case, the multiplexer MX generates the functions (5.18). The model U4 may be used if the following condition takes place:

2^(G+R) · (R + N) ≤ V0.

(5.51)

The model U5 can be used if the condition (5.51) is violated, but condition (5.50) takes place, as well as the following condition:

2^(G+R) · R ≤ V0.

(5.52)

The circuit of MX is implemented using LUTs. So, it is a LUTer. It is necessary to diminish the number of LUTs and their levels in the LUTer. Let us discuss an approach proposed in [15]. The discussed method is based on using the codes of classes of PES. Also, only a part of the set X is replaced by the variables pg ∈ P. Let us use the GSA Γ3 (Fig. 5.20) to illustrate this method. Let us form the set ΠA = {B1, …, BI} for the set A. Let us eliminate each class Bi including only transitions into the state a1. It transforms the set ΠA into the set Π′A having I0 elements. Obviously, there is I0 ≤ I. Because D flip-flops are used in the RG, the transitions into a1 ∈ A are executed using only the pulse Clock. Let us encode the classes Bi ∈ Π′A by binary codes K(Bi) using RB variables, where

RB = ⌈log2 I0⌉. (5.53)

Let the variables τr ∈ T be used for such an encoding, where |T| = RB. Let us represent the set X as X1 ∪ X2, where X1 ∩ X2 = ∅, |X1| = L1 and |X2| = L2. Let us replace the logical conditions xl ∈ X1 by additional variables pg ∈ P,


Fig. 5.20 Marked GSA Γ3 (flowchart: states a1–a11, operator vertices with microoperations y1–y10 and conditional vertices with x1–x6; Start and End are marked by a1)

where |P| = G. Let the following conditions hold:

2^(G+L2+RB) · R ≤ V0;

(5.54)

2^R · (RB + N) ≤ V0.

(5.55)

In this case, the model U6 proposed in [16] (Fig. 5.21) can be used. In FSM U6, the MX replaces the logical conditions xl ∈ X1 and generates the functions


Fig. 5.21 Structural diagram of Moore FSM U6

P = P(T, X1).

(5.56)

The block EMB1 implements the functions Dr ∈ Φ:

Φ = Φ(T, P, X2).

(5.57)

The block EMB2 implements the functions (3.4) and (5.46). Obviously, the logic circuit of FSM U6 requires 2 blocks of RAM, as it is for the FSM U5. But its multiplexer should be simpler, because it transforms only the logical conditions xl ∈ X1. There are the following steps in the design method of U6:
1. Constructing the marked GSA Γ.
2. Constructing the sets A, ΠA and Π′A.
3. Executing the state assignment using binary codes with the minimum amount of bits.
4. Finding the partition of the set X and constructing the sets X1 and X2.
5. Constructing the table of RLC.
6. Executing an optimal encoding for the classes Bi ∈ Π′A.
7. Constructing the reduced structure table.
8. Constructing the transformed reduced structure table.
9. Constructing the table of microoperations.
10. Implementing the FSM logic circuit.

Let us discuss the application of this approach for the design of FSM U6(Γ3). Let the FPGA chip in use have the following characteristics: SL = 4 (the number of LUT inputs), V0 = 512. Let an EMB have the following configurations: 2^9 × 1, 2^8 × 2, 2^7 × 4, 2^6 × 8 and 2^5 × 16. It gives SA ∈ {5, …, 9}, tF ∈ {1, 2, 4, 8, 16}. The following parameters can be found from the analysis of GSA Γ3: M = 11, L = 6, N = 10, and R = 4. The transitions from a3 and a4 depend on three logical conditions; therefore, G = 3. Analysis of the conditions (5.12), (5.51), (5.52), (5.54), (5.55) shows that only the model U6(Γ3) can be used. Analysis of GSA Γ3 shows that there is the partition ΠA = {B1, …, B6} where B1 = {a1}, B2 = {a2}, B3 = {a3, a4}, B4 = {a5, …, a8}, B5 = {a9, a10} and B6 =

{a11 }. Obviously, the class B6 ∈ / Π A , therefore, there is Π A = {B1 , . . . , B5 }. Let us point out that state codes do not affect the number of LUTs in MX for FSM U6 . Because of it, let us execute the state assignment in a trivial way: K (a1 ) = 0000, K (a2 ) = 0001, . . . , K (a11 ) = 1010. Because there is R = 4, then the configuration 27 × 4 of EMB should be chosen.

It means that SA = 7. The set Π′A includes I0 = 5 elements. Using (5.53), it can be


Table 5.18 Table of RLC for Moore FSM U6(Γ3)

      B1   B2   B3   B4   B5
p1    –    x1   x3   x1   –
p2    –    x2   –    x6   –

Fig. 5.22 Optimal codes of classes

found that RB = 3. It can be found that G + L2 + RB = 7 and, therefore, G + L2 = 4. Let us construct the following sets: X2 = {x4, x5} and X1 = {x1, x2, x3, x6}. Analysis of GSA Γ3 shows that G = 2 is enough for the replacement of the logical conditions xl ∈ X1. Let us form the table of RLC for FSM U6(Γ3) (Table 5.18). The rows of this table are marked by the variables pg ∈ P, the columns by the classes Bi ∈ Π′A. If a logical condition xl ∈ X1 is replaced by a variable pg ∈ P for a class Bi ∈ Π′A, then the symbol xl is written at the intersection of the row pg and the column Bi. To diminish the number of LUTs in the circuit of MX, let us place the classes Bi ∈ Π′A with conditional transitions in the minimum possible amount of generalized intervals (cubes) of the RB-dimensional Boolean space. Let us name such an encoding optimal. One of the possible options is shown in Fig. 5.22. It follows from the Karnaugh map (Fig. 5.22) that only the variables τ2 and τ3 should be connected with the block MX. Using both Table 5.18 and Fig. 5.22, the following equations can be found for the functions given by (5.56):

p1 = τ̄3 x1 ∨ τ3 x3 ;
p2 = τ̄2 x2 ∨ τ2 x6 .

(5.58)

Obviously, each function given by (5.58) needs only one LUT with SL = 4 to be implemented. So, the block MX includes 2 LUTs and has 1 level of logic. To construct the reduced ST, let us form a system of generalized formulae of transitions [7]. In this system, let us use the variables pg ∈ P in accordance with Table 5.18. The following system can be constructed for Moore FSM U6(Γ3):

B1 → a2 ;
B2 → p1 a3 ∨ p̄1 p2 a4 ∨ p̄1 p̄2 a2 ;
B3 → p1 x4 a5 ∨ p1 x̄4 a6 ∨ p̄1 x5 a7 ∨ p̄1 x̄5 a8 ;
B4 → p1 a5 ∨ p̄1 p2 a9 ∨ p̄1 p̄2 a10 ;
B5 → a11 .

(5.59)
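The routing performed by the MX can be checked directly against Table 5.18. An illustrative Python sketch (the class codes (τ2, τ3) = 00, 01, 10 for B2, B3, B4 are an assumption consistent with Fig. 5.22 and Table 5.19):

```python
# Functions (5.58) of the multiplexer MX (values 0/1)
def p1(tau3, x1, x3):
    return (not tau3 and x1) or (tau3 and x3)

def p2(tau2, x2, x6):
    return (not tau2 and x2) or (tau2 and x6)

# Per Table 5.18: p1 must equal x1 for B2 and B4 and x3 for B3;
# p2 must equal x2 for B2 and x6 for B4.
for x in (0, 1):
    assert p1(0, x, 1 - x) == x   # B2/B4: tau3 = 0 -> p1 = x1
    assert p1(1, 1 - x, x) == x   # B3:    tau3 = 1 -> p1 = x3
    assert p2(0, x, 1 - x) == x   # B2:    tau2 = 0 -> p2 = x2
    assert p2(1, 1 - x, x) == x   # B4:    tau2 = 1 -> p2 = x6
print("MX routing matches Table 5.18")
```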


Table 5.19 Part of reduced ST for Moore FSM U6(Γ3)

Bi   K(Bi)   as    K(as)   Ph        Xh    Φh            h
B1   100     a2    0001    1         1     D4            1
B2   000     a3    0010    p1        1     D3            2
             a4    0011    p̄1 p2     1     D3 D4         3
             a2    0001    p̄1 p̄2     1     D4            4
B3   001     a5    0100    p1        x4    D2            5
             a6    0101    p1        x̄4    D2 D4         6
             a7    0110    p̄1        x5    D2 D3         7
             a8    0111    p̄1        x̄5    D2 D3 D4      8

Each term of this system corresponds to one row of the reduced ST. The table has the following columns: Bi, K(Bi), as, K(as), Ph, Xh, Φh, h. In the case of U6(Γ3), the reduced ST includes H6(Γ3) = 12 rows. A part of the table is represented by Table 5.19. The connection between the system given by (5.59) and Table 5.19 is obvious. The transformed reduced ST represents the block EMB1. It includes the following columns: K(Bi), P, X2, Φ, h. The transitions for each class Bi ∈ Π′A are represented by V6 rows of this table, where

V6 = 2^(G+L2). (5.60)

In the case of FSM U6(Γ3), there is V6 = 16. Table 5.20 represents the part of this table for the transitions from the class B3. The table of microoperations is constructed in the same way as it is for the FSM U3. It is represented by Table 5.21. We do not discuss the last step of design. As we know, the bit-streams should be formed for each element of the FSM circuit. The standard CAD tools [2, 27] execute this step. Also, they execute the placement and routing procedures.

5.4 Replacement of State Registers by Counters

Three peculiarities should be taken into account to reduce hardware in FSM circuits [11, 25]:
1. The specifics of logic elements. For example, there are EMBs in modern FPGAs. So, it is possible to replace some LUTs (maybe all of them) by EMBs.
2. The specifics of the FSM model. For example, there are PES in Moore FSMs. Using classes of PES reduces the number of arguments in the system of input memory functions.
3. The peculiarities of a control algorithm. For example, it may include more than 70% of operator vertices. Such algorithms are linear [7]. It is possible to reduce hardware


Table 5.20 Part of transformed reduced ST for Moore FSM U6(Γ3)

K(Bi) = τ1τ2τ3   P = p1p2   X2 = x4x5   Φ = D1D2D3D4   v    h
001              00         00          0111           1    8
001              00         01          0110           2    7
001              00         10          0111           3    8
001              00         11          0110           4    7
001              01         00          0111           5    8
001              01         01          0110           6    7
001              01         10          0111           7    8
001              01         11          0110           8    7
001              10         00          0101           9    6
001              10         01          0101           10   6
001              10         10          0100           11   5
001              10         11          0100           12   5
001              11         00          0101           13   6
001              11         01          0101           14   6
001              11         10          0100           15   5
001              11         11          0100           16   5

Table 5.21 Table of microoperations of Moore FSM U6(Γ3)

am    K(am)   y1 y2 y3 y4 y5 y6 y7 y8 y9 y10   τ1 τ2 τ3   m
a1    0000    0  0  0  0  0  0  0  0  0  0     1  0  0    1
a2    0001    1  1  0  0  0  0  0  0  0  0     0  0  0    2
a3    0010    0  1  0  0  0  0  0  1  0  0     0  0  1    3
a4    0011    1  0  0  0  0  0  0  0  1  1     0  0  1    4
a5    0100    0  1  0  1  0  0  0  0  0  0     0  1  0    5
a6    0101    0  0  1  0  1  0  0  0  0  0     0  1  0    6
a7    0110    1  0  0  0  1  1  0  0  0  0     0  1  0    7
a8    0111    0  0  1  0  0  0  1  0  0  0     0  1  0    8
a9    1000    1  1  0  0  0  0  0  0  0  0     1  0  1    9
a10   1001    0  0  1  0  0  0  1  0  0  1     1  0  1    10
a11   1010    0  0  1  0  1  0  0  0  0  0     0  0  0    11

in FSMs interpreting linear algorithms [7]. To do it, the state register is replaced by a state counter (SCT) [7]. Let us consider the GSA Γ4 (Fig. 5.23) marked by states of Moore FSM. The following sets can be derived from Fig. 5.23: A = {a1 , . . . , a16 }, X = {x1 , . . . , x5 }


Fig. 5.23 Initial GSA Γ4 (flowchart: states a1–a16, operator vertices with microoperations and conditional vertices with x1–x5; Start and End are marked by a1)

and Y = {y1, …, y8}. There are n0 = 15 operator vertices and nC = 6 conditional vertices. So, there is n0/(n0 + nC) ≈ 0.71 > 0.7, and the GSA Γ4 is a linear GSA. To use this property, let us find a partition ΠC = {α1, …, αG} of the set A, where αg ∈ ΠC is a linear chain of states (LCS). Let us define the notions of an LCS, a main input of an LCS and an output of an LCS.

Definition 5.1 An LCS αg is a vector ⟨ag1, …, agIg⟩ such that there is an unconditional transition ⟨agi, agi+1⟩ for each pair of adjacent components of αg (i = 1, …, Ig − 1). Each LCS has at least a single input and exactly a single output.

Definition 5.2 The state am ∈ A is a main input of the LCS αg if: (1) am = a1 or (2) there are no unconditional transitions in GSA Γ such that am is a state of transition.

Definition 5.3 The state am ∈ A is an output of the LCS αg if: (1) there are only conditional transitions from the state am, or (2) there is an unconditional transition ⟨am, a1⟩, or (3) there is an unconditional transition ⟨am, as⟩ where as is not a component of the LCS αg.

Using these definitions, we can find the set IM of main inputs for GSA Γ4. It is the following: IM = {a1, a4, a5, a6, a10, a11, a12, a16}. So, there are G = 8 LCSs in GSA Γ4. Now we should create an LCS for each main input starting from the state a1 ∈ A.
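The linearity test for Γ4, checked numerically (illustrative):

```python
# Vertex counts of GSA Gamma4
n_operator, n_conditional = 15, 6
ratio = n_operator / (n_operator + n_conditional)
print(round(ratio, 2))  # 0.71
assert ratio > 0.7      # more than 70% operator vertices: Gamma4 is linear
```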


Let us explain this process. Let us form an LCS α1 starting from a1. In the beginning there is α1 = ⟨a1⟩. There is a transition ⟨a1, a2⟩ and a2 does not belong to other LCSs. So, it is included into α1. It gives the chain α1 = ⟨a1, a2⟩. Next, there is a transition ⟨a2, a3⟩ and a3 does not belong to other chains. So, now there is α1 = ⟨a1, a2, a3⟩. There are no unconditional transitions from a3 ∈ A. So, a3 is the output of α1. It means that the LCS α1 is completed. We should eliminate the state a1 from IM. Now it is necessary to construct the LCS α2 starting from the state am ∈ IM having the minimum value of the subscript m. The process is finished when IM = ∅. For GSA Γ4, there is the set ΠC = {α1, …, α8} where α1 = ⟨a1, a2, a3⟩, α2 = ⟨a4, a7⟩, α3 = ⟨a5⟩, α4 = ⟨a6, a8, a9⟩, α5 = ⟨a10, a13⟩, α6 = ⟨a11, a14⟩, α7 = ⟨a12, a15⟩, and α8 = ⟨a16⟩. Let us form the set of LCS outputs O(Γ4). It is the following: O(Γ4) = {a3, a5, a7, a9, a13, a14, a15, a16}. Let us encode the components of each LCS in the following order:

K(agi+1) = K(agi) + 1.

(5.61)

There are R bits in the codes K(am), where R = ⌈log2 M⌉. There is R = 4 in the discussed case. The state codes are shown in the Karnaugh map (Fig. 5.24). It is a natural state assignment [7]. Due to (5.61), it is possible to replace a state register by a state counter. To execute the transition ⟨am, as⟩ where K(as) = K(am) + 1, it is enough to add 1 to the SCT. It is done only for unconditional transitions. It leads to the FPGA-based Moore FSM U7 (Fig. 5.25). The LUTer implements the system (3.3). The functions Dr ∈ Φ are generated only for the transitions ⟨am, as⟩ such that am ∈ O(Γ). The EMBer includes EMBs (maybe a single EMB). It generates the functions (3.4) and a special signal y0. If y0 = 1, then CT := CT + 1. If y0 = 0, then CT := Φ. So, if y0 = 1, then the transitions corresponding to (5.61) are executed. There are the following steps in the design method of FSM U7:
1. Creating the set ΠC for a given control algorithm.
2. Executing the natural state assignment.
3. Creating the table of LUTer.
4. Creating the system (3.3).
5. Creating the table of EMBer.
6. Implementing the FSM circuit with particular LUTs and EMBs.
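The chain construction and the natural state assignment (5.61) are easy to mechanize. An illustrative sketch (the successor map below lists the unconditional transitions of Γ4, reconstructed from the chains of the example):

```python
# Unconditional transitions <am, as> of GSA Gamma4 (excluding those into a1)
succ = {"a1": "a2", "a2": "a3", "a4": "a7", "a6": "a8", "a8": "a9",
        "a10": "a13", "a11": "a14", "a12": "a15"}
main_inputs = ["a1", "a4", "a5", "a6", "a10", "a11", "a12", "a16"]

chains = []
for start in main_inputs:      # build an LCS from every main input
    chain, s = [start], start
    while s in succ:           # follow unconditional transitions
        s = succ[s]
        chain.append(s)
    chains.append(chain)

# Natural assignment (5.61): consecutive codes along every chain
codes, R = {}, 4
for i, state in enumerate(s for chain in chains for s in chain):
    codes[state] = format(i, f"0{R}b")

assert chains[0] == ["a1", "a2", "a3"]
assert codes["a7"] == "0100" and codes["a16"] == "1111"
```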

There is a serious drawback in FSM U7. Namely, the functions Dr ∈ Φ may depend on up to L + R variables. To diminish the number of LUTs in the LUTer, it is necessary to diminish the number of arguments in the functions Dr ∈ Φ. Because U7 is a Moore FSM, we can use the classes of PES. Let us introduce the following definition.

Definition 5.4 The LCSs αi, αj ∈ ΠC are pseudoequivalent LCSs (PLCS) if their outputs are pseudoequivalent states.


Fig. 5.24 Outcome of state assignment

Fig. 5.25 Structural diagram of Moore FSM U7: the LUTer (inputs X and T) generates Φ; the counter CT (inputs Start, Clock) either loads Φ or increments (+1) under the control of the signal y0 from the EMBer, which also generates Y

Fig. 5.26 Structural diagram of Moore FSM U8

Using Definition 5.4, we can find the partition ΠB = {B1, …, BI} where Bi is a class of PLCS. In the discussed case, there is the set ΠB = {B1, …, B4} where B1 = {α1}, B2 = {α2, α3, α4}, B3 = {α5, α6, α7}, and B4 = {α8}. So, there is I = 4. Let us encode the classes Bi ∈ ΠB by binary codes K(Bi) having RB bits. The value of RB is determined by (5.53). In the discussed case, there is I0 = 3, because the transition ⟨a16, a1⟩ is executed automatically. In the discussed case, there is RB = 2. Let us use the variables τr ∈ T to encode the classes Bi ∈ ΠB. In the discussed case, there is the set T = {τ1, τ2}. Let us encode the classes Bi ∈ ΠB in the trivial manner: K(B1) = 00, …, K(B4) = 11. Of course, the code 11 may be used for the minimization of the functions Dr ∈ Φ. Now the functions Dr ∈ Φ are represented as (5.44). It leads to FSM U8 (Fig. 5.26). In FSM U8, the LUTer implements the system (5.44). The EMBer implements the systems (3.4), (5.46) and the function

(5.62)


Table 5.22 Table of LUTer of Moore FSM U8(Γ4)

Bi   K(Bi)   as    K(as)   Xh        Φh             h
B1   00      a4    0011    x1 x2     D3 D4          1
             a5    0101    x1 x̄2     D2 D4          2
             a6    0110    x̄1 x3     D2 D3          3
             a9    1000    x̄1 x̄3     D1             4
B2   01      a10   1001    x4 x1     D1 D4          5
             a11   1011    x4 x̄1     D1 D3 D4       6
             a12   1101    x̄4        D1 D2 D4       7
B3   10      a16   1111    x5        D1 D2 D3 D4    8
             a15   1110    x̄5        D1 D2 D3       9

In FSM U8, the functions (5.44) depend on up to L + RB variables. If RB < R, then we can expect a hardware reduction for the LUTer of U8. There are the following steps in the design method of FSM U8:
1. Creating the set ΠC.
2. Executing the natural state assignment (5.61).
3. Creating the set ΠB.
4. Executing the class encoding.
5. Creating the table of LUTer.
6. Creating the system Dr ∈ Φ.
7. Creating the table of EMBer.
8. Implementing the FSM circuit.

Let us discuss an example of design for Moore FSM U8(Γ4). The first four steps of synthesis are already executed. To create the table of LUTer, let us form the system of generalized formulae of transitions for the classes Bi ∈ ΠB. To do it, we should use the GSA Γ4. There are the following generalized formulae of transitions:

B1 → x1 x2 a4 ∨ x1 x̄2 a5 ∨ x̄1 x3 a6 ∨ x̄1 x̄3 a9 ;
B2 → x4 x1 a10 ∨ x4 x̄1 a11 ∨ x̄4 a12 ;

(5.63)

B3 → x5 a16 ∨ x̄5 a15 . The system (5.63) determines the table of LUTer having the columns Bi, K(Bi), as, K(as), Xh, Φh, h. It is Table 5.22 in the discussed case. We take the codes K(am) from Fig. 5.24. There is a transparent connection between the system (5.63) and Table 5.22. This table is used to derive the system (5.44). For example, the following equations may be derived from Table 5.22 (after minimizing):


Table 5.23 Table of EMBer of Moore FSM U8(Γ4)

K(am)   y1 y2 y3 y4 y5 y6 y7   τ1 τ2   y0   m
0000    0  0  0  0  0  0  0    0  0    1    1
0001    1  1  0  0  0  0  0    0  0    1    2
0010    0  0  1  0  0  0  0    0  0    0    3
0011    0  0  0  1  1  0  0    0  0    1    4
0100    0  0  1  0  0  1  0    0  1    0    7
0101    0  0  1  0  0  0  0    0  1    0    5
0110    1  0  0  1  0  0  0    0  0    1    6
0111    0  0  1  0  1  0  0    0  0    1    8
1000    0  0  0  0  0  0  1    0  1    0    9
1001    0  0  0  0  1  0  0    0  0    1    10
1010    0  1  1  0  0  0  0    1  0    0    13
1011    1  0  0  0  0  0  1    0  0    1    11
1100    0  0  1  0  1  0  0    1  0    0    14
1101    0  1  0  1  0  0  0    0  0    1    12
1110    0  0  0  0  0  1  1    1  0    0    15
1111    0  0  0  0  0  0  1    1  1    0    16

D1 = τ̄1 τ̄2 x̄1 x̄3 ∨ τ2 ∨ τ1 ;

(5.64)

D2 = τ̄1 τ̄2 x1 x̄2 ∨ τ̄1 τ̄2 x̄1 x3 ∨ τ2 x̄4 ∨ τ1 ;
D3 = τ̄1 τ̄2 x1 x2 ∨ τ̄1 τ̄2 x̄1 x3 ∨ τ2 x4 x̄1 ∨ τ1 ;
D4 = τ̄1 τ̄2 x1 ∨ τ2 ∨ τ1 x5 .

There are the following columns in the table of EMBer: K(am), Y, T, y0, m. It is Table 5.23 in the discussed case. Let us explain how Table 5.23 is filled. If a state am ∈ O(Γ4), then y0 = 0 and the code K(Bi) is written in the column T. If am ∉ O(Γ4), then y0 = 1 and there are zeros in the column T. The state codes are taken from Fig. 5.24. Let us compare the FSMs U7 and U8. If R > RB, then there are more arguments in the functions (3.3) than in the functions (5.44). So, there could be more LUTs in the LUTer of U7 than in the LUTer of U8. The EMBer requires more outputs in U8. If there are the same amounts of EMBs in the equivalent FSMs U7 and U8, then it is necessary to choose the model U8. Otherwise, the model U7 is more preferable.
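As a final check of the example, the system (5.64) can be verified row by row against Table 5.22 (illustrative Python; note that D1 must also fire for the classes B2 and B3, i.e. D1 = τ̄1τ̄2x̄1x̄3 ∨ τ2 ∨ τ1, since all of their target states a10–a16 have T1 = 1):

```python
# Functions (5.64); tau1, tau2 is the class code, x1..x5 the conditions
def phi(tau1, tau2, x1, x2, x3, x4, x5):
    n = lambda v: not v
    d1 = (n(tau1) and n(tau2) and n(x1) and n(x3)) or tau2 or tau1
    d2 = (n(tau1) and n(tau2) and x1 and n(x2)) or \
         (n(tau1) and n(tau2) and n(x1) and x3) or (tau2 and n(x4)) or tau1
    d3 = (n(tau1) and n(tau2) and x1 and x2) or \
         (n(tau1) and n(tau2) and n(x1) and x3) or (tau2 and x4 and n(x1)) or tau1
    d4 = (n(tau1) and n(tau2) and x1) or tau2 or (tau1 and x5)
    return "".join(str(int(bool(d))) for d in (d1, d2, d3, d4))

# (tau1, tau2, x1..x5) -> K(as); rows of Table 5.22, unused x's set to 0
assert phi(0, 0, 1, 1, 0, 0, 0) == "0011"  # B1, x1 x2   -> a4
assert phi(0, 0, 0, 0, 1, 0, 0) == "0110"  # B1, ~x1 x3  -> a6
assert phi(0, 1, 1, 0, 0, 1, 0) == "1001"  # B2, x4 x1   -> a10
assert phi(0, 1, 0, 0, 0, 0, 0) == "1101"  # B2, ~x4     -> a12
assert phi(1, 0, 0, 0, 0, 0, 1) == "1111"  # B3, x5      -> a16
assert phi(1, 0, 0, 0, 0, 0, 0) == "1110"  # B3, ~x5     -> a15
```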

References

1. S. Achasova, Synthesis Algorithms for Automata with PLAs (Soviet Radio, Moscow, 1987)
2. Altera. http://www.altera.com. Accessed January 2016
3. S. Baranov, Logic Synthesis of Control Automata (Kluwer Academic Publishers, USA, 1994)
4. A. Barkalov, Principles of logic optimization for Moore microprogram automaton. Cybern. Syst. Anal. 34(1), 54–60 (1998)
5. A. Barkalov, A. Barkalov Jr., Synthesis of finite-state machines with transformation of the object's codes, in Proceedings of the International Conference TCSET'2004, Lviv, Ukraine, 2004. Lviv Polytechnic National University, Publishing House of Lviv Polytechnic, Lviv, pp. 61–64
6. A. Barkalov, A. Barkalov Jr., Design of Mealy finite-state machines with the transformation of object codes. Int. J. Appl. Math. Comput. Sci. 15(1), 151–158 (2005)
7. A. Barkalov, L. Titarenko, Logic Synthesis for Compositional Microprogram Control Units, vol. 22 (Springer, Berlin, 2008)
8. A. Barkalov, L. Titarenko, Logic Synthesis for FSM-Based Control Units. Lecture Notes in Electrical Engineering, vol. 53 (Springer, Berlin, 2009)
9. A. Barkalov, L. Titarenko, A. Barkalov Jr., Structural decomposition as a tool for the optimization of an FPGA-based implementation of a Mealy FSM. Cybern. Syst. Anal. 48(2), 313–322 (2012)
10. A. Barkalov, L. Titarenko, S. Chmielewski, Reduction in the number of PAL macrocells in the circuit of a Moore FSM. Int. J. Appl. Math. Comput. Sci. 17(4), 565–675 (2007)
11. A. Barkalov, L. Titarenko, M. Kołopieńczyk, K. Mielcarek, G. Bazydło, Logic Synthesis for FPGA-Based Finite State Machines. Studies in Systems, Decision and Control, vol. 38 (Springer International Publishing, Cham, 2015)
12. R. Czerwinski, D. Kania, Area and speed oriented synthesis of FSMs for PAL-based CPLDs. Microprocess. Microsyst. 36(1), 45–61 (2012)
13. R. Czerwinski, D. Kania, Finite State Machine Logic Synthesis for Complex Programmable Logic Devices. Lecture Notes in Electrical Engineering, vol. 231 (Springer, Berlin, 2013)
14. M. Kołopieńczyk, A. Barkalov, L. Titarenko, Hardware reduction for RAM-based Moore FSMs, in 7th International Conference on Human System Interactions – HSI 2014, Lisbon, Portugal (2014), pp. 255–260
15. M. Kołopieńczyk, L. Titarenko, A. Barkalov, Design of EMB-based Moore FSMs. J. Circuits Syst. Comput. 26(7), 1–23 (2017)
16. I. Kuon, R. Tessier, J. Rose, FPGA architecture: survey and challenges. Found. Trends Electron. Des. Autom. (2008). Lightning Source
17. C. Maxfield, The Design Warrior's Guide to FPGAs (Academic Press Inc, Orlando, 2004)
18. C. Maxfield, FPGAs: Instant Access (Newnes, 2008)
19. E. McCluskey, Logic Design Principles (Prentice Hall, Englewood Cliffs, 1986)
20. G. De Micheli, Synthesis and Optimization of Digital Circuits (McGraw-Hill, New York, 1994)
21. P. Minns, I. Elliot, FSM-Based Digital Design Using Verilog HDL (Wiley, New Jersey, 2008)
22. M. Rawski, T. Łuba, Z. Jachna, P. Tomaszewicz, The influence of functional decomposition on modern digital design process, in Design of Embedded Control Systems (Springer, Boston, 2005), pp. 193–203
23. C. Scholl, Functional Decomposition with Application to FPGA Synthesis (Kluwer Academic Publishers, Boston, 2001)
24. V. Sklyarov, Synthesis and implementation of RAM-based finite state machines in FPGAs, in Proceedings of Field-Programmable Logic and Applications: The Roadmap to Reconfigurable Computing, Villach, 2000 (Springer, Berlin), pp. 718–728
25. V. Sklyarov, I. Skliarova, A. Barkalov, L. Titarenko, Synthesis and Optimization of FPGA-Based Systems. Lecture Notes in Electrical Engineering, vol. 294 (Springer, Berlin, 2014)
26. A. Tiwari, K. Tomko, Saving power by mapping finite-state machines into embedded memory blocks in FPGAs, in Proceedings of the Conference on Design, Automation and Test in Europe, vol. 2 (IEEE Computer Society, 2004), pp. 916–921
27. Xilinx. http://www.xilinx.com. Accessed January 2018
28. Xilinx. ISE Foundation. http://www.xilinx.com. Accessed January 2018
29. S. Yang, Logic synthesis and optimization benchmarks user guide. Technical report, Microelectronic Center of North Carolina, 1991
30. S. Yanushkevich, V. Shmerko, Introduction to Logic Design (CRC Press, USA, 2008)

Chapter 6

Programmable Logic Controllers

Abstract The chapter is devoted to programmable logic controllers (PLC). We start from the classification of PLCs, their architecture and cycle of operation. Next, the main laws of Boolean algebra are shown. We show the connection between Boolean algebra and the basic logic functions used in the programming of PLCs. Next, different programming languages used for PLCs are shown. The last part is devoted to examples of programming for different Boolean functions and simple control algorithms. All programs are written using the Ladder Diagram language.

6.1 Introduction

Programmable Logic Controllers (PLC) are used in the equipment of industrial automation, and not only there. It is difficult to imagine either a modern factory or a smart house without the usage of PLCs. The first programmable logic controllers entered the market in the late 1960s. They replaced conventional control systems built using contactors and relays, in which the cabling plays the role of a program. Nowadays, the PLC is a universal microprocessor-based device designed to control the operation of various machines, devices or process lines. The cyclic program memory circuit is a characteristic feature of PLCs distinguishing them from other computer-based controllers. The PLC operation algorithm is developed using a special programming language intended for it. To change a control algorithm, it is necessary to change the memory content. It is possible to write a program using either a programmer or a computer with dedicated software. PLCs are equipped with an appropriate number of input devices collecting information about the state of a controlled object and service requests, as well as an appropriate number of output devices connected to executive, signalling or data transmission elements. There are a lot of companies that produce programmable logic controllers for sale. They offer potential users a wide range of basic PLCs, as well as specialized controllers targeting specific purposes. The following companies are the most popular

© Springer Nature Switzerland AG 2019 A. Barkalov et al., Foundations of Embedded Systems, Studies in Systems, Decision and Control 195, https://doi.org/10.1007/978-3-030-11961-4_6


PLC producers: ABB, Allen-Bradley, Delta Electronics, Fatek, GeFanuc, Honeywell, Kinco, Mitsubishi, Moeller Electric, Omron, Panasonic, Schneider Electric Modicon, SIEMENS, Unitronics, Vipa.

6.2 Classification of PLCs

PLCs offered on the market form a large and diverse product group. They start from the simplest models that resemble programmable relays and end with very complex devices whose capabilities resemble what industrial computers offer [7]. They could be classified using two criteria [3]: (1) construction and (2) the number of inputs and outputs. The division of PLCs according to the applied construction is shown in Fig. 6.1. A compact PLC is a device in which the main elements such as the CPU, the power supply and a small number of inputs and outputs are integrated in one housing. Some of them possess a small display and keyboard. Modular controllers are more complicated devices. They are used where it is required to connect a larger number of input or output blocks. They are characterized by a flexible construction which adapts to the needs of a desired control object through the selection of appropriate modules such as a CPU, input-output modules, or communication modules. Distributed controllers are devices with elements placed in a distributed manner. The division of PLCs according to the number of inputs and outputs is shown in Fig. 6.2. Nanocontrollers are the smallest devices available on the market. These include controllers having up to 32 input/output points. Microcontrollers have from 33 to 128 input/output points. As a rule, both nano- and microcontrollers belong to the class of compact controllers. So, they have a compact, uniform construction.

Fig. 6.1 The division of PLCs according to the applied construction: compact, modular and distributed controllers

Fig. 6.2 The division of PLCs according to the number of inputs and outputs: nanocontrollers, microcontrollers, medium controllers and big controllers


The medium controllers have from 129 to 1024 input/output points. There are more than 1024 input/output points in the big controllers. As a rule, both medium and big controllers have a modular construction. They do not have a uniform construction; it is selected to meet the specifics of a given control object. In the case of the expansion of a control system, it is possible to extend the controller with additional modules. However, the largest devices have the distributed construction, where the modules are connected with each other through a suitable bus.

6.3 Architecture of PLC

The internal architecture of a PLC is shown in Fig. 6.3. The following functional elements can be found in a typical controller: (1) the power supply, (2) the central processing unit, (3) the module of digital and analog inputs, (4) the module of digital and analog outputs, (5) the communication module. Programmable controllers are most often supplied with 24 V DC voltage. There are also constructions adapted to be supplied directly from the 110 V AC or 230 V AC power grid. Depending on the construction of a controller, the power supply can be installed inside the controller's housing (compact controllers, programmable relays) or be a separate element in modular controllers [3]. The central unit is responsible for the execution of the program, i.e. the tasks specified by the user. The task of the central unit is to perform the tasks described in the control program based on information from the input modules. The result of program implementation is information for the control object in the form of appropriate output states [4, 6]. A microprocessor is the basic element of the central unit. Today's PLCs use microprocessors having 8, 16, and even 32 bits. It is the type and parameters of the microprocessor used that directly determine the functional capabilities and parameters of a controller [3]. The module of digital and analog inputs is an element enabling the input to the controller of signals from sensors, room units (traces) and measuring devices.

Fig. 6.3 Internal architecture of PLC: the power supply, the input module, the CPU, the output module and the communication module


6 Programmable Logic Controllers

These signals may be either digital or analog. The task of the input modules is to transform signals from different devices into a form understandable by the CPU [4, 6]. The module of digital and analog outputs is a subunit enabling the transfer of control signals to different actuators and executive elements. These signals may also be either digital or analog. The output modules transform the results generated by a program into object signals [4, 6]. The communication module is used to connect the controller with other devices, such as a programmer or local area network devices based, for example, on Modbus, PROFIBUS, PROFINET, ControlNet, or Genius. The controller may also be connected to Ethernet. The controllers contain in their system software specific communication procedures as well as special operating modes (for example, Multimaster or Master-Slave). The external elements of the module are communication connectors; usually, the RS232 and RS485 standards are used [4, 6]. Telemetric systems are also used in modern controllers; in this case, communication is carried out over either cellular networks or the Internet.

6.4 Principle of Operation of PLC

The PLC analyses input signals and executes control algorithms in real time. It operates according to a control program kept in its memory. The control program contains a sequence of logic instructions that control the operation of a device; creating a control program consists in decomposing a control task into individual instructions. An instruction is the smallest part of a control program. After the program is uploaded into a controller, the instructions are processed by the microprocessor, which performs appropriate actions on their basis (for example, arithmetic operations, logical operations, or writing into memory) [4, 6]. The phases of the controller cycle are shown in Fig. 6.4.

Initialization. In this phase, the controller executes a series of preliminary and control actions. It checks the correctness of the configuration of its structure and basic elements.

Reading the input signals. Each program cycle begins with reading the states of all controller inputs. These states are then written into the so-called process-input image: the input values from the input modules are copied to the corresponding areas of the controller's data memory. The controller reads the input states only once per cycle. If some inputs change during the execution of a cycle, the change affects only the next operation cycle; the new input states are used only in the next cycle.

Execution of control program. During this phase, the user program is executed, instruction by instruction. Subsequent instructions

Fig. 6.4 The controller cycle's phases: initialization, reading the input signals, execution of the control program, recording the output signals, communication support, autodiagnostics

are transferred from the program memory to the microprocessor, which decodes them, performs the appropriate actions, and writes the outcomes into the data memory. When all instructions have been executed and the corresponding output states generated, the controller writes the output states into the area of its memory used as the process-output image.

Recording output signals. The outputs are driven from the memory area holding the process-output image. During this phase, the calculated output values are copied from the appropriate area of data memory to the output modules, which generate the control signals for the actuators.

Communication support. During this phase, two-way data transmission with external devices is executed, for example, with other controllers, an operator panel, or a programmer.

Autodiagnostics. The last phase of each cycle is devoted to internal diagnostic operations. For example, such parameters are checked as the voltage of the battery supplying the controller's RAM, the power supply voltage, or the occurrence of errors in program execution [4, 6].
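The read-all-inputs-once semantics of the scan cycle can be illustrated with a short simulation. The following Python sketch is purely illustrative; the function and variable names are invented for this example and belong to no vendor API:

```python
def plc_scan(read_inputs, program, write_outputs):
    """One PLC scan: freeze inputs, run the program, then update outputs."""
    image = read_inputs()          # process-input image, read once per cycle
    outputs = program(image)       # user program sees only the frozen image
    write_outputs(outputs)         # process-output image copied to outputs
    return outputs

# A toy user "program": X := A AND B.
def program(image):
    return {"X": image["A"] and image["B"]}

# Inputs that change *during* the cycle take effect only in the next
# cycle, because the input image was latched at the start of the scan.
physical = {"A": True, "B": True}
latched = dict(physical)           # snapshot taken at cycle start
physical["B"] = False              # input changes mid-cycle...

result = plc_scan(lambda: latched, program, lambda out: None)
print(result["X"])                 # ...but this cycle still sees B = True
```

A second scan, latching the new physical inputs, would then see B = False and switch X off.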

6.5 Programming the PLCs

The high popularity of and demand for programmable controllers led to many different solutions, depending on the manufacturer. Initially, manufacturers followed no common rules regarding programming methods or device connection standards. With the increasingly common use of PLCs, a need for standardization arose.


In the 1990s, the International Electrotechnical Commission (IEC) developed the IEC 1131 standard, which described standards for PLCs. In 1998, this standard was replaced by the IEC 61131 standard, used in Europe as EN 61131. For the programmer, the most important is the third part of IEC 61131 [2, 3], which concerns the principles of programming controllers. The programming method is unified in such a way that the user can program PLCs of different manufacturers without major problems. The standard is not a rigid set of rules that must be followed; it should rather be treated as a programming guide. Because of the enormous amount of detail it contains, only partial compliance with the proposed standard can be expected from controller programming systems. The IEC 61131-3 standard also contains criteria, called benchmarks, regarding the compliance of products with the standard; thanks to them, both users and manufacturers can determine to what extent a product conforms to the standard.

Two groups of programming languages are identified by the IEC 61131-3 standard: text languages and graphic languages. The textual languages are:
• Instruction List (IL),
• Structured Text (ST).
The graphic languages are:
• Ladder Diagram (LD),
• Function Block Diagram (FBD).
In addition, a sequence language, the Sequential Function Chart (SFC), is defined to describe sequential processes. The remainder of this section presents a short description of the PLC programming languages.

The Instruction List language consists of series of instructions similar to the assembler languages used to program microprocessors. It is a low-level programming language close to machine language. Due to its low code clarity, IL is rarely used to write an entire control algorithm; nevertheless, it is well suited for coding individual program functions.
For example, complex computational algorithms can be represented in a compact and simple form using IL, and such algorithms are executed faster by the processor of a controller. A fragment of a program implementing the conjunction AND in the IL language is shown in Fig. 6.5.

The Structured Text language (Fig. 6.6) uses the constructs of high-level programming languages. It has a syntax and keywords that make it easier for a human being to understand the program code. The basic elements of this language are expressions and statements (if-then-else, for, while, repeat, and so on).

Fig. 6.5 Implementing conjunction AND with IL language

Label    Operator   Operand   Comment
Start:   LD         A         (*Button*)
         AND        B         (*Logical AND*)
         ST         X         (*Turn on*)
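The semantics of this fragment — LD loads an operand into the current result, AND combines it with another operand, ST stores the current result — can be mimicked by a toy interpreter. The Python sketch below is illustrative only; real IL has many more instructions and data types:

```python
def run_il(program, vars):
    """Execute a tiny subset of IL: LD, AND, ST (illustrative only)."""
    acc = None                      # the "current result" register
    for op, operand in program:
        if op == "LD":              # load operand into the current result
            acc = vars[operand]
        elif op == "AND":           # AND operand with the current result
            acc = acc and vars[operand]
        elif op == "ST":            # store the current result into operand
            vars[operand] = acc
    return vars

# X := A AND B, as in Fig. 6.5
prog = [("LD", "A"), ("AND", "B"), ("ST", "X")]
print(run_il(prog, {"A": True, "B": False, "X": False})["X"])  # False
```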


Fig. 6.6 Example of using instructions FOR and IF with ST language

Fig. 6.7 Part of a program in LD language

The Ladder Diagram language belongs to the group of graphic languages. It allows a control task to be implemented using standard graphic symbols, which are placed in rungs in a manner similar to relay ladder diagrams for control circuits. The appearance of a written program is practically the same as that of a contactor-relay circuit. There are two basic symbols in this language: contacts and two-state outputs. Contacts represent the logic values of input signals and Boolean variables. Two-state outputs correspond to relay coils; they are used to drive discrete outputs and to assign logic values to Boolean variables. In addition, ladder diagrams use function blocks describing complex functions such as counters, timers, comparators, arithmetic operations, and so on. A fragment of a program in the LD language is shown in Fig. 6.7.

The Function Block Diagram language is based on the schematic diagrams used in electronics to describe signal flows and the interconnection topologies of digital integrated circuits. An FBD-based program is constructed using blocks and connecting lines. Individual blocks correspond to mathematical functions, logic functions, converters, comparators, counters, flip-flops, and so on. Creating an application in this language consists in selecting the appropriate function block and placing it in the appropriate place in the circuit. A fragment of a program in the FBD language is shown in Fig. 6.8.


Fig. 6.8 Logic multiplication AND in FBD language

Fig. 6.9 SFC network with impulse action: steps 1–3, transitions cmd and Not(cmd), with pulse actions nb_edge := 0 and nb_edge := nb_edge + 1 [1]

The Sequential Function Chart language (Fig. 6.9) is used, among other things, to organize the structure of a sequential control program, especially for rather complicated tasks. It describes operations using a simple graphic representation of the individual process steps and of conditions called transitions. The SFC language was developed on the basis of methods used for representing sequential automata [5]:
• methods based on state transition graphs (each vertex of the graph is a state of the automaton; each arc is labelled by either a Boolean variable or a function of several variables);
• methods based on state transition tables (the number of columns grows exponentially with the number of inputs; for example, more than one million columns are needed for 20 inputs).
Due to the growing role of software engineering in the preparation of programs for controllers, the role of the ST text language has grown tremendously in recent years. The ST syntax is similar to Pascal. The FBD language retains its good position as a substitute for an object-oriented language and as a graphic form of structuring programs. The role of the traditional languages, the ladder (relay) language LD and the instruction list language IL, is gradually decreasing; it is being reduced to a technical means of integrating previously prepared, thoroughly tested software with newly emerging programs [1].


The graphic language SFC, combined with actions expressed in ST, has gained priority and full acceptance. Using the FBD block-diagram language, SFC graphs, and the ST language together in a single project environment can be particularly effective.

6.6 Introduction to Logical Operators

This part of the chapter is devoted to the basic laws and properties of Boolean algebra. It explains the relationship between Boolean algebra and the basic logic functions used in PLC programming and shows practical examples of their use in programming the SIMATIC S7-1200 controller.

Boolean algebra was created by the British mathematician and philosopher George Boole (1815–1864). His most important works include the representation of logical expressions in the mathematical form now known as Boolean algebra. His algebra gave rise to mathematical logic, creating the basis of informatics (computer science). Today it is still used for describing switching circuits, for the design of relay systems and electronic digital devices, and for programming PLC controllers.

Boolean algebra is a formal system composed of a non-empty set including two elements, one and zero. Two binary operations (with two arguments) are defined on this set:
• the logical sum (Boolean sum, disjunction, alternative),
• the logical product (Boolean product, conjunction),
together with a single unary operation (with a single argument):
• negation (inversion).
The basic logical operations can be represented using either an expression or a truth table, and logical expressions may be implemented in PLC controllers using, for example, the LD language.

The operation of logical sum is defined in the following way: if at least one argument is equal to 1, the outcome is equal to 1; if both operands are zeros, the sum is equal to zero. We represent this operation using the logical expression y = x1 + x2, the truth table (Table 6.1) and a program in the LD language (Fig. 6.10).

The operation of logical product is defined in the following way: the outcome is equal to 1 only if all arguments are equal to 1. We represent this operation using the logical expression y = x1 · x2, the truth table (Table 6.2) and a program in the LD language (Fig. 6.11).
Table 6.1 Truth table for logical sum

x1  x2  y
0   0   0
0   1   1
1   0   1
1   1   1


Fig. 6.10 Logical sum as a program in the LD language

Table 6.2 Truth table for logical product

x1  x2  y
0   0   0
0   1   0
1   0   0
1   1   1

Fig. 6.11 Logical product as a program in the LD language

Table 6.3 Truth table for negation

x1  y
0   1
1   0

The operation of negation changes the value of the argument to the opposite. We represent this operation using the logical expression y = x̄1, the truth table (Table 6.3) and a program in the LD language (Fig. 6.12).

The logic gates XOR and XNOR are shown in Figs. 6.13 and 6.14, respectively. They are represented by the logical expressions y = x̄1 · x2 + x1 · x̄2 (XOR) and y = x1 · x2 + x̄1 · x̄2 (XNOR), the truth tables (Tables 6.4 and 6.5) and programs in the LD language.

Fig. 6.12 Logical negation as program in LD language


Fig. 6.13 XOR gate as a program in the LD language

Fig. 6.14 XNOR gate as a program in the LD language

Table 6.4 Truth table for the XOR logic gate

x1  x2  y
0   0   0
0   1   1
1   0   1
1   1   0

Table 6.5 Truth table for the XNOR logic gate

x1  x2  y
0   0   1
0   1   0
1   0   0
1   1   1
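These truth tables are easy to check exhaustively. The following sketch (Python, for illustration only) verifies that the XOR and XNOR expressions built from AND, OR and NOT match the tables:

```python
# XOR and XNOR expressed through sum, product and negation, as in the text
def xor(x1, x2):
    return (not x1) and x2 or x1 and (not x2)   # y = x̄1·x2 + x1·x̄2

def xnor(x1, x2):
    return x1 and x2 or (not x1) and (not x2)   # y = x1·x2 + x̄1·x̄2

# Exhaust all four input combinations (rows of Tables 6.4 and 6.5)
for x1 in (False, True):
    for x2 in (False, True):
        assert xor(x1, x2) == (x1 != x2)        # Table 6.4
        assert xnor(x1, x2) == (x1 == x2)       # Table 6.5
print("all rows match")
```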

For any elements x, y, z of a Boolean algebra, the laws listed in Table 6.6 hold. These laws are used for minimizing Boolean functions. Minimization simplifies logical expressions so that they contain the minimum possible number of arguments and Boolean operations. It is used mainly in digital technology, where functions are implemented with logic gates: in practice, a simpler expression means a less complex digital circuit and, thus, lower cost, greater reliability, and faster operation.


Table 6.6 Basic laws of Boolean algebra

Nr   Law               AND form                        OR form
1.   Identity          1 · x = x                       0 + x = x
2.   Null              0 · x = 0                       1 + x = 1
3.   Idempotent        x · x = x                       x + x = x
4.   Inverse           x · x̄ = 0                       x + x̄ = 1
5.   Commutative       x · y = y · x                   x + y = y + x
6.   Associative       (x · y) · z = x · (y · z)       (x + y) + z = x + (y + z)
7.   Distributive      x · (y + z) = x · y + x · z     x + y · z = (x + y) · (x + z)
8.   Absorption        x · (x + y) = x                 x + x · y = x
9.   De Morgan         (x · y)‾ = x̄ + ȳ                (x + y)‾ = x̄ · ȳ
10.  Double negation   (x̄)‾ = x
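Each law in Table 6.6 can be verified by exhausting both values of every variable. The sketch below (illustrative Python) checks the De Morgan and absorption laws this way:

```python
from itertools import product

for x, y in product((False, True), repeat=2):
    # De Morgan (law 9): complement of a product / of a sum
    assert (not (x and y)) == ((not x) or (not y))
    assert (not (x or y)) == ((not x) and (not y))
    # Absorption (law 8)
    assert (x and (x or y)) == x
    assert (x or (x and y)) == x
print("laws hold for all input combinations")
```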

In the case of logical controllers, minimization simplifies the program code and increases its transparency. Let us consider the following function:

z = x + x · ȳ + x · y.    (6.1)

Let us minimize it using the laws from Table 6.6:

z = x + x · ȳ + x · y
  = x + x · (ȳ + y)    (Law 7)
  = x + x · 1          (Law 4)
  = x + x              (Law 1)
  = x.                 (Law 3)

We can show the implementation of function (6.1) before (Fig. 6.15) and after (Fig. 6.16) minimization. In both cases, the LD language was used.
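The result can also be double-checked by brute force. This illustrative Python fragment confirms that expression (6.1) equals x for every combination of inputs:

```python
# Exhaust both values of x and y and compare (6.1) with its minimized form
for x in (False, True):
    for y in (False, True):
        z = x or (x and not y) or (x and y)   # z = x + x·ȳ + x·y
        assert z == x                          # minimized form: z = x
print("expression (6.1) reduces to x")
```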

Fig. 6.15 Expression (6.1) before minimization


Fig. 6.16 Expression (6.1) after minimization

Fig. 6.17 Tank filling process (valve V1, button SW1, level sensor L1)

Fig. 6.18 The tank filling system in LD language

The following examples show how to pass from a word description to a description with logic functions.

Example 6.1 Tank filling process (Fig. 6.17). The tank is filled by opening the solenoid valve V1. The tank should be filled when the liquid level is below the sensor L1 and the button SW1 is switched on. This dependence may be described as V1 = SW1 · L̄1. An example of a program in the LD language is shown in Fig. 6.18.

Example 6.2 Temperature measurement at three points (Fig. 6.19). The temperature is measured at three points of a room (sensors T1, T2, T3). If the room temperature exceeds 30 °C at at least two measurement points, the fan (FAN) is activated. The truth table for this problem is given in Table 6.7. The relationship can be described as:

FAN = T̄1 · T2 · T3 + T1 · T̄2 · T3 + T1 · T2 · T̄3 + T1 · T2 · T3.


Fig. 6.19 Temperature measurement at three points (sensors T1, T2, T3; output FAN)

Table 6.7 Truth table for temperature measurement at three points

T1  T2  T3  FAN
0   0   0   0
0   0   1   0
0   1   0   0
0   1   1   1
1   0   0   0
1   0   1   1
1   1   0   1
1   1   1   1

Fig. 6.20 Program in LD language for performing the temperature measurement

Figure 6.20 presents the program in the LD language performing the temperature measurement at three points. Using the Karnaugh method (Fig. 6.21), the expression can be minimized. As a result, we have:

FAN = T1 · T2 + T2 · T3 + T1 · T3.
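That the minimized expression is equivalent to the original sum of products from Table 6.7 can be confirmed exhaustively (an illustrative Python check):

```python
from itertools import product

def fan_full(t1, t2, t3):
    # canonical sum of products from Table 6.7
    return ((not t1) and t2 and t3) or (t1 and (not t2) and t3) \
        or (t1 and t2 and (not t3)) or (t1 and t2 and t3)

def fan_min(t1, t2, t3):
    # Karnaugh-minimized form: FAN = T1·T2 + T2·T3 + T1·T3
    return (t1 and t2) or (t2 and t3) or (t1 and t3)

# Compare the two forms on all eight input combinations
for bits in product((False, True), repeat=3):
    assert bool(fan_full(*bits)) == bool(fan_min(*bits))
print("FAN forms are equivalent")
```

Both forms compute the majority of the three sensor signals, which matches the word description "at least two points exceed 30 °C".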


Fig. 6.21 The Karnaugh map for the temperature measurement

Fig. 6.22 Minimized program in LD language

After the function FAN is minimized, the program performing the temperature measurement becomes shorter, too (Fig. 6.22).

6.7 Finite State Machine

The finite state machine (FSM) is one of the most popular models for describing the behaviour of an arbitrary discrete system [8]. An FSM includes a finite number of states (circles) and a finite number of arcs connecting the states (Fig. 6.23). It is necessary to describe the conditions enabling the interstate transitions (for example, x1 = true). The states are related to actions performed in a given state (for example, the function y should be active while the state S2 is active). It is

Fig. 6.23 Example of an FSM model (states S1 and S2, transition condition x1, action y)


Fig. 6.24 Implementation of an interstate transition in the LD language

Fig. 6.25 Model of a control system with three conveyors (button SW1, valve V1, conveyors M1, M2, M3)

necessary to distinguish the initial state of the FSM (for example, by a different colour, a dot inside, or a short arrow). FSM states represent the stable states in which the object can be; only a single state can be active at any one time [8]. The implementation of interstate transitions can be expressed in the LD language: the fragment in Fig. 6.24 shows the program for the transition from Fig. 6.23. Let us illustrate the passage from a word description to a description based on an FSM [8] using Example 6.3 (Fig. 6.25); the resulting FSM is shown in Fig. 6.26.

Example 6.3 Control of three conveyors. The material from a silo is transported by three conveyor belts. The system should operate according to the following rules:
• after the SW1 button is switched on, the M3 conveyor belt is switched on,
• the M2 conveyor belt is activated three seconds after M3; the M1 conveyor is activated three seconds after M2,
• the gate valve V1 opens three seconds after M1,
• after the SW1 button is switched off, the V1 slider closes,
• the M1 conveyor belt is switched off three seconds after V1,

Fig. 6.26 FSM of the control system with three conveyors (states S1–S8; 3 s timed transitions; actions M1, M2, M3, V1)

Fig. 6.27 Selected fragments of the control system with three conveyors in the LD language

• the M2 conveyor belt is switched off three seconds after M1,
• the M3 conveyor belt is switched off three seconds after M2.
Selected fragments of the control program for the three conveyors, written in the LD language, are shown in Fig. 6.27.
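An FSM like the one in Fig. 6.26 can be prototyped as a transition table before being coded in LD. The sketch below is an illustrative Python model: the event names are invented here, and each 3 s delay is represented by a single "timeout" event rather than a real timer:

```python
# Transition table: (state, event) -> next state; outputs per state.
# State labels follow Fig. 6.26; "timeout" stands for a 3 s timer expiry.
TRANSITIONS = {
    ("S1", "SW1_on"):  "S2",   # start button pressed -> run M3
    ("S2", "timeout"): "S3",   # 3 s later -> add M2
    ("S3", "timeout"): "S4",   # 3 s later -> add M1
    ("S4", "timeout"): "S5",   # 3 s later -> open V1
    ("S5", "SW1_off"): "S6",   # stop button -> close V1
    ("S6", "timeout"): "S7",   # 3 s later -> stop M1
    ("S7", "timeout"): "S8",   # 3 s later -> stop M2
    ("S8", "timeout"): "S1",   # 3 s later -> stop M3
}
OUTPUTS = {
    "S1": set(),                    "S2": {"M3"},
    "S3": {"M2", "M3"},             "S4": {"M1", "M2", "M3"},
    "S5": {"V1", "M1", "M2", "M3"},
    "S6": {"M1", "M2", "M3"},       "S7": {"M2", "M3"},
    "S8": {"M3"},
}

def step(state, event):
    """Advance the FSM by one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "S1"
for ev in ("SW1_on", "timeout", "timeout", "timeout"):
    state = step(state, ev)
print(state, sorted(OUTPUTS[state]))   # S5 ['M1', 'M2', 'M3', 'V1']
```

Only one state is active at a time, exactly as required of an FSM; the active outputs are read off from the current state, which is what the LD fragments in Fig. 6.27 implement with memory bits and timers.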

References

1. M. Adamski, M. Chodań, Modelowanie układów sterowania dyskretnego z wykorzystaniem sieci SFC (Wydaw. PZ, Zielona Góra, 2000)
2. IEC, Programmable controllers, part 3: Programming languages. Technical report, International Electrotechnical Commission (1992)
3. J. Kasprzyk, Programowanie sterowników przemysłowych (Warszawa, 2014)
4. T. Legierski, J. Wyrwał, J. Kasprzyk, J. Hajda, Programowanie sterowników PLC (Wyd. Pracowni Komputerowej J. Skalmierskiego, Gliwice, 1998)
5. T. Mikulczyński, Z. Samsonowicz, Automatyzacja dyskretnych procesów produkcyjnych. Metody modelowania procesów dyskretnych i programowania PLC (Warszawa, 1997)


6. R. Sałat, K. Korpysz, P. Obstawski, Wstęp do programowania sterowników PLC (WKŁ, Gliwice, 2010)
7. K. Wiśniewska, Polscy producenci i dystrybutorzy sterowników PLC. Technical report, ElektronikaB2B (2006)
8. W. Zając, M. Kołopieńczyk, G. Andrzejewski, Modelling and synthesis of parallel traffic control algorithms with time dependencies, in New Trends in Digital Systems Design, Fortschritt-Berichte VDI, Nr. 836 (VDI Verlag GmbH, Düsseldorf, 2014), pp. 94–109

Conclusion

We are now witnessing the intensive introduction of various embedded systems into our everyday life. We use them without even knowing that they exist. Sometimes they are very simple, sometimes very complex. The modern state of semiconductor technology allows creating very complex embedded systems; the complexity of some of them increases drastically, as does the complexity of the chips used in their design. However, there are also constraining factors slowing the pace of development in this direction. One of them is the insufficient number of qualified specialists capable of developing complex embedded systems. Such an expert should be able to choose (or to create) the best platform for the embedded system being developed and to combine meeting time constraints with optimal use of hardware resources and energy. If necessary, the expert should distribute tasks between the hardware and software parts of the system and be able to specify and design their hardware-software interface. For multitasking systems, the expert must be able to choose the best real-time operating system and, if necessary, to add specific program modules to it. The expert should be able to improve system performance using either hand-written assembler programs or hardware accelerators. And all this must be done in a relatively short time, to enter the market with a high-quality product earlier than the competitors do. In short, an expert should have extensive knowledge of standard solutions and be able to find new solutions based on this knowledge.

To acquire the initial knowledge in this area, books aimed at a student audience are needed; at present there is a clear deficit of such books, and our book is designed to reduce it. We hope that our book gives the initial knowledge and shows the broad prospects opening up for specialists in the field of embedded systems. Embedded systems are destined to succeed, now and in the future.
What will the future of embedded systems be? Let us quote the famous IT specialist Alan Kay, who slightly altered a notable statement of Abraham Lincoln: the best way to predict the future is to invent it. We hope that for many of our readers this book is the first step towards creating a brilliant future for embedded systems.

© Springer Nature Switzerland AG 2019 A. Barkalov et al., Foundations of Embedded Systems, Studies in Systems, Decision and Control 195, https://doi.org/10.1007/978-3-030-11961-4


Index

A Abstraction dimension behaviour, 27 physical, 27 Abstraction level circuit, 25 Actuator, 14, 72 Algorithm computational, 61, 72, 103 control, 9, 10, 14, 17, 34, 74, 107, 137, 140, 145, 148, 150 Analog-to-Digital Converter (ADC), 11, 47, 96 Application Specific Integrated Circuit (ASIC), 35, 37 Application Specific Standard Product (ASSP), 93 Arithmetical and Logical Unit (ALU), 62 B Block of functions, 121 input memory functions, 132 Boolean function, 29, 57, 60, 83, 86, 91, 110, 155 C Central Processing Unit (CPU), 3, 62, 147 Code of state, 59, 109, 130, 131, 135, 140 Complementary Metal-Oxide-Semiconductor (CMOS), 25 Computer-Aided Design (CAD), 23 Control

unit, 57 Counter, 60, 95, 137 Cyber-physical system, 15 Cyberspace, 1, 6

D Deadline, 15, 16, 34 Decoder, 81 Decomposition functional, 101, 112, 115, 116, 123 structural, 116 Dedicated user interface, 18 Design methodology, 73 bottom-up, 29, 30 meet-in-the-middle, 30 top-down, 30 process, 23, 24, 28, 31, 89 Digital signal processor, 33 Digital-to-Analog Converter (DAC), 11, 34 Direct structure table, 59

E Electronic Design Automation (EDA), 23 Embedded memory block, 90 Embedded system, 1, 5, 14, 18, 23, 29, 32, 34, 42, 49, 94, 105, 116 EMBer, 111, 140, 141, 143 Energy efficiency, 42

F Feed-back, 24, 58



Field-Programmable Gate Array (FPGA), 37 Finite State Machine (FSM), 10, 29, 159 Mealy, 108, 109, 113, 116, 121, 126, 127 Moore, 109, 111–114, 129, 131, 132 Flash memory, 32, 62, 72, 84 configuration (CFM), 96 user (UFM), 96 Flip-flop, 25, 36, 58, 78, 82, 87, 96, 133 G Generic array logic, 83 Graphic Processing Unit (GPU), 37, 93 H Hardware accelerator, 72 Hardware-software co-design, 72 interface, 74 I Implementation of algorithms hardware, 49 heterogeneous, 72 software, 61 Integrated circuit, 3, 84 Intellectual Property (IP) core, 24 Internal control unit, 60 K Karnaugh map, 125, 140, 158 L Library, 25, 29, 30, 116 LUTer, 111, 133, 140–143 M Microcontroller, 25, 45, 47, 62–65, 68, 69 Microoperation, 108, 137 Microprocessor, 3, 25, 35, 81, 84, 93, 94, 103, 147, 149 Monitoring, 50, 68 Multiplexor, 85, 86, 103 O Operating system, 2, 32

P PLC languages, 150 Function Block Diagram (FBD), 150, 151 Instruction List (IL), 150 Ladder Diagram (LD), 150, 151 Sequential Function Chart (SFC), 150, 152 Structured Text (ST), 150 Printed circuit board, 24 Productivity gap, 27, 89, 90 Programmable Array Logic (PAL), 83 Programmable Logic Array (PLA), 82 Programmable logic controller, 145 Programmable Logic Device (PLD), 81 complex (CPLD), 87 simple (SPLD), 83 Programmable Logic Sequencer (PLS), 82 Programmable Read-Only Memory (PROM), 81

R Random-Access Memory (RAM) static (SRAM), 85 Real-time operating system, 32, 34 Register, 25, 51, 54, 58, 82, 96 Register Transfer Level (RTL), 25 Run-time efficiency, 21

S Sample-and-hold, 11 Sensor, 11, 49, 65, 67, 68, 147 Set of states, 108 Software, 20, 23, 33, 34, 73, 96 Stacked Silicon Interconnection (SSI), 96 Standby mode, 8, 12 State of FSM initial, 109, 160 of transition, 129, 139 pseudoequivalent, 129 State transition graph, 10, 99 State transition table, 108 Structural diagram, 51, 58, 111, 113, 132 Sum-of-products, 82 Synthesis, 28, 30, 34, 54, 99, 101, 121, 126, 128, 142 System cyber-physical, 14 dedicated, 17, 21

reactive, 17 real-time, 15 System gate, 33, 83 System of Boolean function, 58, 82 generalized formulae of transitions, 136, 142

T Technological mapping, 28 Time constraints, 44, 54, 72, 116 Timer, 76 Transistor, 25, 26, 29, 30, 37, 62

U Ultra Large Scale Integration circuit (ULSI), 62 Usefulness, 16

V Vertex conditional, 108 end, 108 operator, 108, 109 start, 108

Y Y-Chart, 28, 29