PRINCIPLES OF
MODERN AVIONICS
S. NAGABHUSHANA
Former Head, Aerospace Electronics and Systems Division
National Aerospace Laboratories, Bengaluru, India

&

N. PRABHU
Professor of Industrial Engineering
Purdue University, West Lafayette, Indiana, USA
©Copyright 2020 I.K. International Pvt. Ltd., New Delhi-110002. This book may not be duplicated in any way without the express written consent of the publisher, except in the form of brief excerpts or quotations for the purposes of review. The information contained herein is for the personal use of the reader and may not be incorporated in any commercial programs, other books, databases, or any kind of software without written consent of the publisher. Making copies of this book or any portion of it for any purpose other than your own is a violation of copyright laws.

Limits of Liability/Disclaimer of Warranty: The authors and publisher have used their best efforts in preparing this book. The authors make no representations or warranties with respect to the accuracy or completeness of the contents of this book, and specifically disclaim any implied warranties of merchantability or fitness for any particular purpose. There are no warranties which extend beyond the descriptions contained in this paragraph. No warranty may be created or extended by sales representatives or written sales materials. The accuracy and completeness of the information provided herein and the opinions stated herein are not guaranteed or warranted to produce any particular results, and the advice and strategies contained herein may not be suitable for every individual. Neither Dreamtech Press nor the authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Trademarks: All brand names and product names used in this book are trademarks, registered trademarks, or trade names of their respective holders. Dreamtech Press is not associated with any product or vendor mentioned in this book.

ISBN: 978-93-89583-20-5
EISBN: 978-93-89976-49-6
Edition: 2020
DEDICATED TO
VANI NAGABHUSHANA
Preface

The coming years are poised to be an exciting era for avionics. The NextGen air transportation system, now being deployed across the United States, is expected to become fully operational by 2020. Avionics is a critical bedrock technology in NextGen, which is expected to have a transformative impact on commercial air travel within the country. Avionics is also the core technology for the ongoing modernization of civil aviation outside the United States (for example, the Single European Sky initiative). The century-old field, which has already had a profound impact on aviation from the very early days, when it made blind flight possible, is at the cusp of dwarfing its own rich history with its imminent impact.

Avionics is a vast subject. A complete discussion of avionics necessarily spans multiple granular scales—from the microscopic electronic devices used in avionics to planet-wide infrastructures such as the Global Navigation Satellite System. Our focus in this book is on the salient principles underlying avionics at these different granular scales. We have not attempted to make the discussion comprehensive. Instead, we have focused on selected salient topics and attempted to present a self-contained, if not comprehensive, discussion of each. The discussion of the underlying principles is supplemented with a case study of Boeing’s latest civilian aircraft—the Boeing 787 Dreamliner—and a case study of one of the latest military aircraft—the F-35 Lightning II. The case studies provide examples of concrete applications of the principles.

The book is organized as follows. Chapter 1 contains an overview of the different avionics systems deployed on aircraft. It also presents the widely used avionics architectures. Chapter 2 presents an overview of the certification process. Chapter 3 contains a discussion of selected principles of electronics that are invoked in later discussion.
The prominent sensors deployed on aircraft are discussed in Chapter 4. In particular, we discuss the Global Positioning System in considerable detail. Chapter 5 is focused on avionics in civilian aircraft, and in particular on the avionics in the Boeing 787 Dreamliner. The avionics in the military aircraft F-35 Lightning II are discussed in Chapter 6. The discussion in Chapter 6 is based on information about the F-35 that is available in the public domain. Finally, in Chapter 7, we discuss some of the advances in avionics and aviation that are on the horizon.

This book was conceived by the first author, who had prepared an initial draft of most of the book at the time of his sudden demise. Subsequently, the second author wrote the current version of the book, guided by the initial draft. While the overall organization of
the book remains close to that of the initial draft, several sections in the initial draft have been rewritten, some sections have been condensed, and a few new sections have been added in the current version. The responsibility for any errors and omissions in the book rests entirely with the second author. Readers are requested to communicate comments, suggestions and corrections to the second author ([email protected]).

S. Nagabhushana
N. Prabhu
Contents

Preface vii
1. OVERVIEW OF MODERN AVIONICS 1–16
   1.1 Functions of Avionics in Civil, Military and Space Systems 1
   1.2 Typical Avionics Systems 3
       1.2.1 Sensors 3
       1.2.2 Communication, Navigation, Identification and Surveillance (CNIS) System 4
       1.2.3 Control and Management System 6
       1.2.4 Cockpit Displays and Recorders 10
   1.3 Avionics Architecture 11
       1.3.1 Central Architecture 11
       1.3.2 Distributed Architecture 11
       1.3.3 Federated Architecture 12
       1.3.4 Integrated Modular Architecture 13
   1.4 History of Avionics 14
2. AVIONICS SYSTEMS: DESIGN AND CERTIFICATION 17–30
   2.1 Metrics for Evaluating an Avionics System 17
   2.2 Design Process Flow 18
       2.2.1 Stakeholders 18
       2.2.2 Process Flow 18
   2.3 Hardware Design and Certification 21
       2.3.1 Thermal Considerations 25
   2.4 Software Design and Certification 26
3. PRINCIPLES OF AVIONICS 31–100
   3.1 Digital Number Systems, Codes and Complement Arithmetic 31
       3.1.1 Number Systems 31
       3.1.2 Negative Integers and Binary Arithmetic 32
       3.1.3 Codes 33
   3.2 Amplitude Modulation 37
   3.3 Phase Shift Modulation Schemes 39
       3.3.1 Binary Phase Shift Keying 39
       3.3.2 Quadrature Phase Shift Keying 41
   3.4 Digital Circuits 43
       3.4.1 Combinational Logic Circuits 45
       3.4.2 Sequential Logic 46
   3.5 Semiconductor Switches 53
       3.5.1 Diode 53
       3.5.2 Bipolar Junction Transistor (BJT) 54
       3.5.3 Metal-Oxide-Semiconductor Field Effect Transistor (MOSFET) 57
       3.5.4 Complementary Metal-Oxide-Semiconductor (CMOS) Logic 59
   3.6 Inter-Integrated Circuit (I2C) Bus 60
       3.6.1 Hardware 63
   3.7 Analog-Digital Data Conversion 63
       3.7.1 Amplifiers 63
       3.7.2 Non-inverting Summer 65
       3.7.3 Inverting Summer and R/2nR Digital-to-Analog Converter 66
       3.7.4 Analog to Digital Converter (ADC) 67
   3.8 Digital Computer 69
       3.8.1 Architecture 69
       3.8.2 Representation of Data and Instructions 71
       3.8.3 Central Processing Unit (CPU) 76
       3.8.4 Main Memory 85
   3.9 Integrated Circuits 86
       3.9.1 Application Specific Integrated Circuit (ASIC) and General Purpose IC 87
       3.9.2 Field Programmable Gate Array (FPGA) 87
   3.10 Fiber Optic Communications 87
   3.11 Elements of Avionics Systems 90
       3.11.1 Line-Replaceable Unit (LRU) 91
       3.11.2 Data Bus 91
   3.12 Fault Tolerance 96
   3.13 Avionics Programming 99
4. SENSORS 101–178
   4.1 Anatomy of a Civilian Aircraft 101
   4.2 Types of Sensors 102
   4.3 Displacement Sensors 104
       4.3.1 Potentiometer 104
       4.3.2 Linear Variable Differential Transformer 105
       4.3.3 Synchro 106
   4.4 Air Data Sensors 107
       4.4.1 Earth’s Atmosphere 108
       4.4.2 Air Temperature Measurement 108
       4.4.3 Air Velocity Measurement 110
       4.4.4 Altitude Measurement 114
   4.5 Attitude Sensors 117
       4.5.1 Gimbaled (Mechanical) Gyroscopes 118
       4.5.2 Laser Gyroscopes 121
       4.5.3 Vibratory Gyroscopes 124
       4.5.4 Angle-of-Attack Sensor 128
   4.6 Position Sensors 129
       4.6.1 Accelerometers 130
   4.7 Electromagnetic Sensors 136
       4.7.1 Radio Frequency (RF) Bands 136
       4.7.2 Transponders 137
       4.7.3 Primary Surveillance Radar (PSR) 138
       4.7.4 Secondary Surveillance Radar (SSR) 140
       4.7.5 Mode S Communication 141
       4.7.6 Automatic Dependent Surveillance-Broadcast (ADS-B) 143
       4.7.7 Global Positioning System 145
       4.7.8 Automatic Direction Finder (ADF) 167
   4.8 Engine Sensors 170
       4.8.1 Temperature 172
       4.8.2 Role of Engine Pressure Ratio (EPR) in Thrust Measurement 173
       4.8.3 Engine Vibration, Acceleration and Shock 174
       4.8.4 Fuel Quantity 174
       4.8.5 Fuel Flow Rate 175
       4.8.6 Engine’s Shaft Speed 176
5. AVIONICS IN CIVILIAN AIRCRAFT 179–213
   5.1 Overview of Boeing 787 Dreamliner 179
   5.2 Avionics in Civilian Aircraft 180
       5.2.1 Avionics in Boeing 787 180
       5.2.2 Full Authority Digital Engine Control (FADEC) 193
       5.2.3 Engine Health Monitoring and Management System (EHMMS) 194
       5.2.4 Digital Fly By Wire System (DFBWS) 194
       5.2.5 Bleed Air versus Bleedless ‘More Electrical’ Aircraft 195
       5.2.6 Integrated Vehicle Health Monitoring (IVHM) 196
       5.2.7 Air Data Inertial Reference Unit (ADIRU) 197
       5.2.8 Standby Air Data and Attitude Reference Unit (SAARU) 198
       5.2.9 Weather Radar (WxR) 198
       5.2.10 Terrain Awareness and Warning System (TAWS) 199
       5.2.11 Traffic Collision Avoidance System (TCAS) 200
       5.2.12 Instrument Landing System (ILS) 207
       5.2.13 Radar Altimeter 210
       5.2.14 Aircraft Communications Addressing and Reporting System (ACARS) 211
       5.2.15 Crash-Safe Flight Recorders (CSFRs) 212
6. AVIONICS IN MILITARY AIRCRAFT 214–235
   6.1 F-35 Lightning II 216
   6.2 Avionics in F-35 Lightning II 219
       6.2.1 Integrated Core Processor 219
       6.2.2 Active Electronically Scanned Array (AESA) Radar (AN/APG-81) 222
       6.2.3 Distributed Aperture System (AN/AAQ-37) 225
       6.2.4 Electro-Optical Targeting System (AN/AAQ-40) 225
       6.2.5 Communication, Navigation and Identification System 225
       6.2.6 Pilot Vehicle Interface 227
       6.2.7 Full Authority Digital Engine Control (FADEC) 230
       6.2.8 Electronic Warfare System (AN/ASQ-239) 231
       6.2.9 Integrated Sensor System 232
       6.2.10 Vehicle Management System (VMS) 234
       6.2.11 Autonomic Logistics Information System (ALIS) 235
7. FUTURE OF AVIONICS 236–247
   7.1 Future of Aviation 236
       7.1.1 Self-healing Materials 236
       7.1.2 Alternative Energy Sources 237
       7.1.3 Spaceplanes 238
       7.1.4 Flying Cars 239
       7.1.5 Terrestrial Hypersonic Flight 239
   7.2 Avionics: The Road Ahead 240
       7.2.1 Reduction of Flight Time 240
       7.2.2 Reduction in Fuel Consumption 241
       7.2.3 Next Generation Air Transportation System (NextGen) 243
       7.2.4 Military Avionics 246
APPENDICES 249–264
   I. Abbreviations 250
   II. RTCA Documents 259
   III. ARINC Standards 261

BIBLIOGRAPHY 265–278

INDEX 279–283
CHAPTER 1
Overview of Modern Avionics

Avionics are the electronics deployed in aircraft and aerospace vehicles. The word avionics is derived from two words—aviation and electronics. The field of avionics is over a century old. Starting from the simple electromechanical systems of the early years of aviation, avionics have evolved to provide modern capabilities such as all-digital flight control and planet-wide satellite-based navigation. With the expanding role of avionics, not surprisingly, the fraction of an aircraft’s cost expended on avionics has increased over the decades. For example, in the F-35A, one of the latest military aircraft, avionics account for about 15% of the total cost of the aircraft [Balle 2016]. From the days of the First World War and continuing to this day, the evolving needs of military aircraft continue to provide a driving force for advances in avionics. The advances in military avionics, in turn, have percolated into civilian aircraft, contributing to such civilian avionics systems as multimode radars and satellite-based communication, navigation and surveillance systems. Section 1.1 presents selected examples of the tasks performed by avionics in modern aviation, and § 1.2 presents a more detailed overview of the avionics systems found in modern aircraft.
1.1 FUNCTIONS OF AVIONICS IN CIVIL, MILITARY AND SPACE SYSTEMS

One of the prominent tasks performed by avionics in civilian aircraft is digital flight control. Specifically, avionics are used to control the movements of an aircraft’s control surfaces—such as the flaps, slats, spoilers and ailerons on its wings—often without the involvement of a pilot, to enhance fuel efficiency, safety and ride comfort. The Boeing 787 Dreamliner, for example, has a Vertical Gust Suppression system, which automatically reshapes the aerodynamic profile of the aircraft in response to atmospheric turbulence to make the ride smoother for passengers [Dodt 2011, Gunter 2005]. Avionics also play an important role in navigation. For example, the Air Data System (ADS) measures air data that enable an aircraft to determine its own altitude and speed relative to the ambient air. The GPS receivers onboard an aircraft enable it to determine its own instantaneous location in 3-dimensional space [Helfrick 2002]. The Traffic Collision Avoidance System (TCAS) enables an aircraft to avoid collision with other airborne aircraft in the vicinity, even in low-visibility conditions [TCAS 2011]. The Terrain Awareness and Warning System (TAWS) alerts pilots when the aircraft is too close to the ground during landing maneuvers or
too close to the terrain in mountainous regions (§ 5.2.10; also see ARINC 594 and ARINC 723 in Appendix III). The Integrated Vehicle Health Monitoring system is used for onboard performance monitoring of an aircraft’s systems—such as its engines—thereby providing critical maintenance data to the ground crew (§ 5.2.6). The Aircraft Communications Addressing and Reporting System (ACARS) provides the critical equipment for communications between an aircraft and ground-based control stations (§ 5.2.14). Avionics also permit communications among airborne aircraft (for example, see the discussion of ADS-B communications in § 4.7.6). A more detailed discussion of the tasks performed by avionics is presented in § 1.2.

In short, by performing many of the tasks that would otherwise be done by the crew, avionics serve to reduce the crew’s workload while enhancing their situational awareness, making it possible for the crew to operate flights over longer time intervals. They also enhance an aircraft’s capabilities. For example, they enable aircraft to fly in adverse weather conditions, in which human visibility could be severely limited.

In military aircraft, one of the prominent roles played by avionics is to enhance a pilot’s situational awareness—that is, the pilot’s awareness of the locations and movements of friendly and hostile units in a battle space. As John Boyd observed [Ford 2010], the ability to execute the Observe → Orient → Decide → Act (OODA) sequence faster than an adversary confers a tactical advantage on a pilot. Modern avionics systems, a few representative examples of which are presented below, enable a pilot to observe the battle space and gain situational awareness rapidly. The Distributed Aperture System (DAS), deployed on the F-35 aircraft, presents a continuous video stream of a spherical region around the aircraft on the pilot’s Helmet-Mounted Display System (§ 6.2.3).
The Multifunction Advanced Data Link, a secure data link, enables a fleet of F-35 aircraft engaged in battle to share tactical data with each other and with ground stations in real time (§ 6.2.5.3). The Active Electronically Steered Array (AESA) radar deployed on the F-35 has the capability to detect and track multiple targets, including airborne aircraft, anti-aircraft artillery and ground-based missile launchers (§ 6.2.2). An F-35 pilot can point the AESA radar to explore the battle space in any direction simply by turning his head in that direction. Besides providing situational awareness, avionics—such as the Electronic Warfare System (EWS)—also assist a pilot in both defensive and offensive maneuvers (§ 6.2.8). In addition to enhancing a pilot’s situational awareness, avionics in military aircraft also assist in handling the aircraft—ensuring that it is flying within the safety limits—especially in high-speed maneuvers. In a military aircraft that is flown by a single pilot, such as the F-35, the avionics perform some of the tasks that are normally performed by co-pilots and/or navigators, freeing the pilot to focus on combat rather than on flying the aircraft. Avionics in the F-35 military aircraft are discussed further in Chapter 6. In spacecraft, avionics play an even more important role. They are responsible for attitude and flight path control of spacecraft and launch vehicles. They play a critical role in Telemetry and Telecommand (TM/TC) systems, enabling communication between a spacecraft, artificial satellite or launch vehicle and earth-based control facilities. In artificial satellites they provide communication capabilities over C, S and X-band
frequencies. They are used to implement advanced encryption systems for ensuring security in communications. They are responsible for managing the on-board power systems including the controls for solar arrays, battery charging and protection systems. They are used for error detection and correction in onboard systems. Avionics are also used to monitor the health of astronauts. They play an important role in recording and archiving data pertaining to the flight. They are used extensively in satellite-based surveillance and remote sensing applications. Deployed on artificial satellites, they enable satellite-based navigation systems for terrestrial flights. The preceding paragraphs provide a glimpse of the tasks performed by modern avionics. In the next section we take a closer look at some of the typical avionics systems deployed on civilian/military aircraft.
1.2 TYPICAL AVIONICS SYSTEMS Avionics systems can be broadly classified into four categories: (1) sensors, (2) communication, navigation, identification and surveillance system, (3) flight control and management system, and (4) cockpit displays and recorders. In the following paragraphs we provide an overview of avionics systems in the above categories, deferring a more detailed discussion of selected avionics systems to Chapters 3-6. Avionics systems that are not discussed further in later chapters are described at slightly greater length below.
1.2.1 Sensors We focus on five prominent types of sensors: air data sensors, attitude and heading sensors, acceleration sensors, electromagnetic sensors and engine sensors.
1.2.1.1 Air Data Sensors The air data sensors measure atmospheric parameters, such as static pressure, dynamic pressure and temperature of the ambient air. The measurements are sent to the Air Data Computer (ADC), which uses the data to compute navigational parameters such as the aircraft’s speed and its altitude. See § 4.4 for further discussion.
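The kind of computation the Air Data Computer performs can be illustrated with a short sketch. This is a simplified illustration, not an actual ADC implementation: it assumes standard ISA sea-level constants, and the function names `pressure_altitude` and `indicated_airspeed` are hypothetical.

```python
import math

# ISA sea-level constants (assumed for this sketch)
P0 = 101325.0    # static pressure at sea level, Pa
T0 = 288.15      # temperature at sea level, K
LAPSE = 0.0065   # temperature lapse rate, K/m
RHO0 = 1.225     # air density at sea level, kg/m^3
G = 9.80665      # gravitational acceleration, m/s^2
R = 287.053      # specific gas constant for air, J/(kg K)

def pressure_altitude(p_static):
    """Invert the ISA barometric formula: altitude (m) from static pressure (Pa)."""
    return (T0 / LAPSE) * (1.0 - (p_static / P0) ** (R * LAPSE / G))

def indicated_airspeed(q_dynamic):
    """Airspeed (m/s) from dynamic pressure q = 0.5 * rho * v^2,
    referenced to sea-level density, as an airspeed indicator is."""
    return math.sqrt(2.0 * q_dynamic / RHO0)
```

For example, a static pressure of about 89.9 kPa corresponds to a pressure altitude of roughly 1000 m under these assumptions.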
1.2.1.2 Attitude and Heading Sensors The attitude of an aircraft is its orientation relative to the horizon. It is specified by the pitch (rotation about the lateral axis) and roll (rotation about the longitudinal axis); see Fig. 4.18. An aircraft’s heading is its orientation relative to a reference direction, such as the magnetic north or the centerline of a runway. It is specified by the yaw (rotation about the vertical axis). The attitude and heading are thus measured by tracking the rotations of the aircraft’s frame about the three axes (lateral, longitudinal and vertical). Gimbaled gyroscopes and strap-down laser gyroscopes are used to measure the rotations of the aircraft’s frame. The rotation data is sent to the Attitude and Heading Reference System (AHRS). The AHRS processes the raw data and provides the aircraft’s attitude and heading
information to other major systems such as the Electronic Flight Instrument System (EFIS), which presents the data in a pilot-friendly display. See § 4.5 for further discussion.
1.2.1.3 Acceleration Sensors Acceleration sensors are also called accelerometers. Accelerometers serve two prominent functions in avionics. Tracking the instantaneous acceleration of an aircraft along the x-, y- and z-axes, starting at takeoff, enables one to determine the velocity and position of the aircraft at any later time by single and double integration over time respectively; this technique is called dead reckoning. Secondly, unusual vibrations in the engine often provide early warnings of imminent failure of an engine. An engine’s vibrations are sensed using piezoelectric or servo-balance accelerometers. See § 4.6 for further discussion.
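The single and double integration described above can be sketched in a few lines. This is a toy one-axis illustration of dead reckoning by trapezoidal integration of accelerometer samples, not flight software; the function name `dead_reckon` is hypothetical.

```python
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Track velocity and position along one axis by integrating
    acceleration samples (m/s^2) taken every dt seconds.
    The first integration gives velocity; the second gives position."""
    v, x = v0, x0
    for a_prev, a_next in zip(accels, accels[1:]):
        v_next = v + 0.5 * (a_prev + a_next) * dt  # trapezoidal rule
        x += 0.5 * (v + v_next) * dt
        v = v_next
    return v, x
```

With a constant acceleration of 2 m/s² for one second, the sketch recovers the textbook result v = at and x = at²/2. In practice, small accelerometer biases accumulate under double integration, which is why dead reckoning is periodically corrected against other navigation sources.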
1.2.1.4 Electromagnetic Sensors An important class of electromagnetic sensors is the Global Positioning System (GPS) receivers, which are used by an aircraft to determine its own location in 3-dimensional space. A second class of electromagnetic sensors—the Primary Surveillance Radars (PSRs)— is used to detect the direction and distance of an airborne aircraft. A PSR transmits electromagnetic waves in the radio frequency band. The waves reflected by an aircraft are sensed by the radar to determine the aircraft’s position. The Active Electronically Steered Array (AESA) radar on F-35 is an example. A Secondary Surveillance Radar (SSR) also transmits radio waves. However, unlike a PSR, an SSR does not sense reflected waves. Rather, the radio waves transmitted by an SSR activate a transponder in a target object, which responds to the SSR’s interrogation by transmitting a digital message that is sensed by the SSR. Examples of SSR include the Mode S data link between a ground-based SSR and transponders on aircraft. Another example of electromagnetic sensors is the Identify Friend or Foe (IFF) transponder on aircraft. A final example of electromagnetic sensors is an Automatic Direction Finder, an electromagnetic sensor onboard an aircraft that was used in early days to sense the direction to a ground-based radio beacon. See § 4.7 for further discussion.
1.2.1.5 Engine Sensors Besides the engine vibration sensor mentioned above, several other sensors are deployed in an aircraft’s engine to measure key engine parameters. For example, engine sensors are used to measure the Exhaust Gas Temperature (EGT), Engine Pressure Ratio (EPR), which is the ratio of the pressure of the exhaust gas to the pressure of engine’s intake air, engine’s thrust, fuel flow rate, engine shaft’s rotational speed and fuel quantity. The engine parameters are used to track the health of the engine in real time. Engine sensors are discussed in § 4.8.
1.2.2 Communication, Navigation, Identification and Surveillance (CNIS) System Some of the critical functions performed by avionics systems, from the very early days of aviation, are (i) communication with ground-based control centers and other airborne
aircraft, (ii) navigation, including in low-visibility and adverse weather conditions, and (iii) identification. The avionics units that form the CNIS system are described briefly below and in greater detail in later chapters.
1.2.2.1 Aircraft Communication Addressing and Reporting System The Aircraft Communication Addressing and Reporting System (ACARS) is used for data communications between an aircraft and ground-based computers, without human intervention. Examples of ACARS data include engine performance data, fuel data, passenger loads, and departure and arrival reports. The data tracking, made possible by ACARS, facilitates timely maintenance of the aircraft. See § 5.2.14 for further discussion.
1.2.2.2 Onboard Communication System The onboard communication system is used for communication among the crew, and between the crew and passengers, mostly over audio channels. Speakers installed at select locations facilitate communication with the passengers.
1.2.2.3 Air-to-Ground Communication System The air-to-ground communication system comprises the avionics used to identify an aircraft and for communications between an aircraft and the control tower. Most aircraft today are equipped with a radio transponder. When radio waves from a ground-based radar strike the transponder, it responds by sending a signal back to help the control tower identify the aircraft. Secondly, aircraft are also equipped with Very High Frequency (VHF) and High Frequency (HF) radio transceivers to enable voice communications between the cockpit and the control tower.
1.2.2.4 Global Positioning System Global Positioning System, or GPS as it is commonly known, is used by modern aircraft to determine their location during flight. The system comprises a constellation of satellites—nominally 24, plus operational spares—in medium earth orbits with an orbital period of about 12 hours. At any given time, at least four of the satellites are in direct line of sight of an aircraft. A GPS receiver, onboard an aircraft, uses signals from at least four of the satellites to compute its own instantaneous longitude, latitude and altitude, that is, its 3-dimensional location. See § 4.7.7 for further discussion.
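To make the idea of a position fix concrete, here is a much-simplified sketch. A real receiver measures pseudoranges and solves for a fourth unknown, its own clock bias; this toy version assumes the ranges are exact, and the names `gps_fix` and `solve_3x3` are hypothetical.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def solve_3x3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for c in range(3):
        m = [row[:] for row in A]
        for r in range(3):
            m[r][c] = b[r]
        out.append(det(m) / d)
    return out

def gps_fix(sats, ranges):
    """Position (x, y, z) from four satellite positions and exact ranges.
    Subtracting the first sphere equation |x - s_i|^2 = r_i^2 from the
    others cancels the quadratic terms, leaving three linear equations."""
    s0, r0 = sats[0], ranges[0]
    A, b = [], []
    for s, r in zip(sats[1:], ranges[1:]):
        A.append([2.0 * (s[k] - s0[k]) for k in range(3)])
        b.append(sum(s[k] ** 2 - s0[k] ** 2 for k in range(3)) - r * r + r0 * r0)
    return solve_3x3(A, b)
```

Differencing the sphere equations is a standard linearization trick; production receivers instead iterate a least-squares solution over pseudoranges.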
1.2.2.5 Inertial Navigation System The Inertial Navigation System (INS) comprises a system of accelerometers, gyroscopes and computers to determine the location, attitude and heading of an aircraft through ‘dead reckoning’. The INS is a key navigation tool that is commonly used on most aircraft in conjunction with GPS.
1.2.2.6 Traffic Alert and Collision Avoidance System The Traffic Alert and Collision Avoidance System (TCAS) is a secondary radar installed on aircraft to detect the threat of mid-air collision with other aircraft. The system operates independently of ground control. TCAS enables an aircraft to use radio waves to interrogate its neighborhood for the presence of other aircraft. Upon receiving the radio waves from the interrogating aircraft, a transponder on a TCAS-equipped aircraft responds to announce its presence. Audio alerts are issued when the TCAS detects a threat of collision with another aircraft in the vicinity. See § 5.2.11 for further discussion.
1.2.2.7 Data Bus Interoperability among the different avionics systems requires a means for onboard communications among the systems. An aircraft-wide data bus provides the hardware connectivity as well as the communication protocols that enable the avionics systems to exchange messages. The communication protocol of a data bus stipulates both the electrical connectivity requirements and the format of the messages among the systems. ARINC 429 is a widely used unidirectional data bus standard for civilian aircraft. It supports data transfer at a maximum rate of 100 kbps (kilobits per second). An enhancement of ARINC 429 is the ARINC 629 standard, a bidirectional data bus that supports data transfer at a peak rate of 2 Mbps (megabits per second). ARINC 629 is used in modern aircraft such as the Boeing 777 and Airbus A330/A340. Military avionics use a different data bus standard, namely MIL-STD-1553B, which supports data transfer at speeds of up to 1 Mbps. An implementation of MIL-STD-1553B with fiber optic cables is governed by the MIL-STD-1773 standard. ARINC 429 is discussed in greater detail in § 3.11.2.1.
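The structure of a word on such a bus can be sketched for ARINC 429, whose 32-bit words carry an 8-bit label (bits 1-8), a 2-bit Source/Destination Identifier (bits 9-10), a data field (bits 11-29), a Sign/Status Matrix (bits 30-31) and an odd-parity bit (bit 32). The packing below is a simplified sketch; `a429_word` is a hypothetical name, and real equipment additionally transmits the label bits in reversed order on the wire.

```python
def a429_word(label, sdi, data, ssm):
    """Pack the fields of a 32-bit ARINC 429 word.
    label: 8 bits, sdi: 2 bits, data: 19 bits, ssm: 2 bits;
    bit 32 is set so that the whole word has odd parity."""
    word = (label & 0xFF)            # bits 1-8:   label
    word |= (sdi & 0x3) << 8         # bits 9-10:  source/destination identifier
    word |= (data & 0x7FFFF) << 10   # bits 11-29: data field
    word |= (ssm & 0x3) << 29        # bits 30-31: sign/status matrix
    if bin(word).count("1") % 2 == 0:
        word |= 1 << 31              # bit 32: odd parity
    return word
```

Labels are conventionally written in octal (e.g., label 310 identifies a particular parameter), which is why the example below uses an octal literal.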
1.2.2.8 Surveillance System An aircraft’s surveillance system comprises the instruments that enable an aircraft to deal with external conditions in its vicinity, such as other aircraft, lightning activity and weather. The TCAS discussed above is part of the surveillance system. In addition, other systems, such as the Automatic Dependent Surveillance-Broadcast (ADS-B) system, are also used for collision avoidance. In ADS-B, each aircraft broadcasts its own location continuously. Knowing the locations of other aircraft in its vicinity, an aircraft can assess the threat of collision. Instruments that are part of the surveillance system also sense lightning activity in the neighborhood. The weather radar, which is also a part of the surveillance system, seeks to sense turbulence in an aircraft’s vicinity.
1.2.3 Control and Management System Advanced digital control systems play a key role in modern aircraft. Prominent examples include the following systems.
1.2.3.1 Electronic Flight Instrument System The Electronic Flight Instrument System (EFIS) consolidates inputs from several sources in the aircraft and presents a user-friendly display of the important data on display screens in the cockpit. The sources from which data is gathered include the Instrument Landing System (ILS), the Attitude and Heading Reference System (AHRS), the Air Data System (ADS), the radar altimeter, the Engine-Indicating and Crew-Alerting System (EICAS) and the Weather Radar System (WxR). A pilot is provided menu-based options for choosing the data to be displayed. One of the most commonly displayed items is the Electronic Attitude Direction Indicator (EADI), which displays the aircraft’s tilt relative to the horizon. The EFIS replaced several electromechanical displays that were in use in earlier aircraft.
1.2.3.2 Engine-Indicating and Crew-Alerting System The Engine-Indicating and Crew-Alerting System (EICAS) consolidates the key engine parameters, such as engine shafts’ rotational speeds, temperature, fuel flow rate, fuel quantity and engine vibration, into a single display in the cockpit. In addition to the engine parameters, the EICAS also displays color-coded alerts for the crew. For example, the Rockwell Collins EICAS-5000 presents data about important systems, such as landing gear position, cabin pressure and the electrical system, and also routes the relevant data to the Flight Data Recorder. See § 5.2.1.6 for further discussion.
1.2.3.3 Fly by Wire The Fly by Wire (FBW) refers to the control of aircraft surfaces such as the rudder, elevators and ailerons (see Fig. 4.18) using electrical signals transmitted over wires to the actuators that move the surfaces. Earlier, the control signals were transmitted from the cockpit to the actuators using cables, pulleys and hydraulic systems. In FBW control the mechanical control commands issued by a pilot are converted to electrical signals by the FBW computer, which transmits the electrical signals to the actuators. Apart from lightening the workload of the pilot, the FBW system makes it possible for the computer to monitor the control signals and ensure that the flight is being operated within its safety envelope. Another big advantage of FBW is that, by eliminating heavy cables and hydraulic systems, it greatly reduces the weight of the aircraft and hence improves its fuel efficiency. See § 5.2.4 for further discussion.
1.2.3.4 Flight Management System The onboard Flight Management System (FMS) enables planning and navigation of a flight. It comprises an FMS computer, an FMS navigational database and a Control Display Unit (CDU). The FMS database, which is updated periodically, contains information about the worldwide network of airports and ground-based beacons. Once the source, the destination, the intermediate points of a route, the dead reckoning data, and flight parameters such as aircraft weight and fuel level are entered into the FMS, it can navigate the aircraft along an optimal flight path. The FMS also assists with flying when the aircraft is operating in
autopilot mode, by providing the steering commands to the autopilot. In addition, the FMS is used to monitor the sensors, display radar data, and optimize en route fuel consumption and/or travel time. See § 5.2.1.2 for further discussion.
1.2.3.5 Autopilot Autopilot is an avionics system that performs a range of flight control activities. In its simplest setting, called one-axis control mode, a computer controls the roll of the aircraft by moving the ailerons. In two-axis control mode the autopilot computer controls turns about two of the aircraft’s axes—the longitudinal axis (roll) and the vertical axis (yaw); see Fig. 4.18. Whereas the roll is modulated by moving the ailerons, the yaw is modulated by moving the rudder (for large turns). Alternatively, two-axis control could also modulate roll and pitch. In three-axis control mode the altitude or rate of ascent/descent of the aircraft is also under a computer’s control, in addition to the turns about the aircraft’s three axes. Beyond three-axis control mode, an autopilot system can be set to the autothrottle mode, in which the computer controls the amount of engine power in addition to the three-axis control. The most involved autopilot mode is the autoland mode, in which the engine power, the turns about the three axes, as well as the flare—a landing maneuver in which the nose of the aircraft is raised to slow descent—and decrabbing—a maneuver to counteract crosswinds—are handled by a computer. Autopilot systems implement control of the actuators using feedback from the relevant sensors. See [FAA 2017, Helfrick 2002] for further discussion.
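The feedback principle mentioned above can be illustrated with a toy one-axis (roll-hold) loop. This is a deliberately simplified sketch, not a flight control law: the gains, the first-order roll dynamics and the names `roll_hold` and `simulate` are all assumptions made for illustration.

```python
def roll_hold(roll_deg, target_deg, kp=0.5, limit=20.0):
    """Proportional feedback: aileron command (deg) from the roll-angle
    error, clamped to an assumed actuator travel limit."""
    cmd = kp * (target_deg - roll_deg)
    return max(-limit, min(limit, cmd))

def simulate(roll0, target, steps=200, dt=0.05, gain=2.0):
    """Toy roll dynamics: roll rate proportional to aileron deflection.
    Repeatedly sensing the roll angle and commanding the ailerons
    drives the aircraft back toward the target bank angle."""
    roll = roll0
    for _ in range(steps):
        roll += gain * roll_hold(roll, target) * dt
    return roll
```

Real autopilots use more elaborate control laws (integral and derivative terms, gain scheduling with airspeed), but the closed sense-compute-actuate loop is the same.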
1.2.3.6 Instrument Landing System The Instrument Landing System (ILS) enables a pilot to perform a precision landing of an aircraft using instruments, even when the external visibility is poor. The ILS comprises two ground-based radio transmitters, called the localizer and the glide slope. The localizer, which is typically located at the end of a runway, transmits two beams from either side of the center of the runway. The two beams are modulated at two different frequencies—90 Hz and 150 Hz. While landing, an aircraft receives more of the 90 Hz (150 Hz) wave if it swerves to the left (right) of the center. Onboard instrumentation displays the deviation from the center, allowing the pilot to correct the horizontal trajectory. Similarly, the glide slope transmitter sends two angularly separated radio beams that are symmetrically oriented about the optimal angle of descent. The upper (lower) beam is modulated with a 90 Hz (150 Hz) wave. While landing, if the descent trajectory of an aircraft is above (below) the optimal descent trajectory, then the aircraft receives more of the 90 Hz (150 Hz) wave. Again, onboard instruments display the deviation, allowing a pilot to correct the slope of descent. See § 5.2.12 and [Helfrick 2002] for further details.
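The onboard comparison of the two tones can be sketched as follows. An ILS receiver computes the Difference in Depth of Modulation (DDM) between the 90 Hz and 150 Hz signals; the function names and the exact sign convention below are assumptions for illustration.

```python
def ddm(depth_90, depth_150):
    """Difference in Depth of Modulation between the 90 Hz and 150 Hz tones."""
    return depth_90 - depth_150

def localizer_cue(depth_90, depth_150):
    """More 90 Hz means the aircraft has drifted left of the centerline,
    so the corrective cue is to fly right (and vice versa)."""
    d = ddm(depth_90, depth_150)
    if d > 0:
        return "fly right"
    if d < 0:
        return "fly left"
    return "on course"
```

The same comparison applied to the glide slope beams (90 Hz above, 150 Hz below the optimal descent path) yields the fly-up/fly-down cue.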
1.2.3.7 Very High Frequency Omnidirectional Range (VOR) The VOR comprises a network of ground-based radio beacons that transmit two signals continuously. The first signal, called the master signal, is omnidirectional. The second signal is directional and rotates clockwise 30 times per second. The phase of the rotating
signal coincides with the instantaneous angle that the rotating signal makes with respect to the magnetic north. Thus, when the rotating signal makes an angle of 40° with respect to the magnetic north, its phase is offset by 40° relative to the master signal. Upon receiving the master signal and the directional signal, the VOR equipment onboard an aircraft can calculate the bearing to the beacon—that is, the angular position of the beacon relative to the magnetic north, as viewed from the aircraft—by determining the phase difference between the master signal and the directional signal [Helfrick 2002]. By calculating the bearings to two known VOR beacons, an aircraft can fix its own horizontal location. The VOR uses frequencies in the band 108 MHz-117.95 MHz. The omnidirectional master signal encodes an identifier of the transmitting beacon in Morse code to help aircraft identify the beacon. The ground-based VOR beacon is often collocated with DME and TACAN equipment (see below).
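The phase comparison and the two-beacon fix described above can be sketched in a few lines. The planar geometry (a flat x-y plane with +y as north) and the function names are simplifying assumptions:

```python
import math

def vor_radial(master_phase_deg, variable_phase_deg):
    """The radial on which the aircraft lies: the variable signal's
    phase lag behind the master reference, in degrees from north."""
    return (variable_phase_deg - master_phase_deg) % 360.0

def fix_from_two_radials(b1, r1_deg, b2, r2_deg):
    """Horizontal fix from two beacon positions b1, b2 (x, y) and the
    radials (degrees clockwise from north, the +y axis) on which the
    aircraft lies. Intersects the two radial lines via Cramer's rule."""
    d1 = (math.sin(math.radians(r1_deg)), math.cos(math.radians(r1_deg)))
    d2 = (math.sin(math.radians(r2_deg)), math.cos(math.radians(r2_deg)))
    # Solve b1 + t1*d1 = b2 + t2*d2 for t1 (2x2 linear system)
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    rx, ry = b2[0] - b1[0], b2[1] - b1[1]
    t1 = (rx * (-d2[1]) + d2[0] * ry) / det
    return (b1[0] + t1 * d1[0], b1[1] + t1 * d1[1])
```

A real fix would work on the curved earth with magnetic variation applied at each station; the flat-plane version only shows the geometry of crossing two radials.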
1.2.3.8 Distance Measuring Equipment (DME) The onboard DME uses radio waves to measure the distance from an aircraft to a ground-based transponder. The onboard DME interrogator (transmitter/receiver) sends a pair of pulses of fixed duration and with a fixed separation—12 microseconds (X channel)—between the pulses. Upon receiving the pair of pulses, a ground-based DME transponder (receiver/transmitter) waits for 50 microseconds and retransmits the same pair of pulses. The distance to the ground-based transponder can be calculated by the onboard DME equipment by measuring the time delay between the transmission and reception of the pair of pulses. The rate at which an onboard DME interrogator transmits pulses—that is, the number of interrogation pulse pairs transmitted by the DME interrogator per unit time—is chosen randomly, to filter out the replies intended for other aircraft. A DME interrogator transmits at one frequency—1.041 GHz (X channel)—and receives the reply from the DME transponder at a different frequency—0.978 GHz (X channel); see [Helfrick 2002]. The frequency at which a DME transponder receives a pulse pair and the frequency at which it retransmits the pair are separated by 63 MHz, with a signal spectrum width of 100 kHz. In addition, DME transponders transmit their identity in Morse code. If the DME is collocated with a VOR station, then the DME identifier is the same as the station’s VOR identifier.
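The timing arithmetic described above is straightforward to sketch. The 50-microsecond transponder delay and the factor of two for the round trip come from the text; the function name is hypothetical.

```python
C = 299_792_458.0      # speed of light, m/s
REPLY_DELAY = 50e-6    # fixed transponder turnaround delay, s

def dme_slant_range_m(elapsed_s):
    """Slant range from the time between sending the interrogation
    pulse pair and receiving the reply: subtract the transponder's
    fixed delay, halve the remaining round-trip time, convert to meters."""
    return C * (elapsed_s - REPLY_DELAY) / 2.0
```

Note that the result is slant range, not ground distance: an aircraft directly above the station still measures a range equal to its altitude.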
1.2.3.9 Tactical Air Navigation System (TACAN) The TACAN is a VOR/DME system for military aircraft. It enables military aircraft to determine the bearing and distance to ground-based or ship-based TACAN stations. Two military aircraft that are equipped with TACAN hardware can also determine the relative bearing and distance between them as well as their relative velocities.
1.2.3.10 Power Supply Modern aircraft have both DC (direct current) and AC (alternating current) power sources. AC power is generated during flight by onboard generators that are driven by the aircraft’s
engines. Transformers and rectifiers are used to step down the AC voltage and convert AC power to DC power as needed. In the Boeing 787, for example, onboard AC power is supplied at 235 V and 115 V, while DC power is available at 28 V and ± 270 V. In addition to the main generators, which are powered by the aircraft’s main engines, a commercial aircraft also houses an Auxiliary Power Unit (APU) in its tail section. The APU serves as a source of backup power in case of failure of the engines or the main generators. While on the ground, the APU can also be used to start the main engines. See § 5.2.1.8 for further discussion.
1.2.4 Cockpit Displays and Recorders Cockpit displays are used to present the pilot(s) with information that is critical for flight management. Whereas the cockpit displays in earlier years comprised crowded electromechanical displays—such as the display of Boeing 707 shown in Figure 1.1— modern displays are largely Active Matrix Liquid Crystal Displays (AMLCDs). The information presented on a display is usually customized to the current flight phase on a need-to-know basis. Selected data is also presented in Head Up Displays (HUD) and Head Down Displays (HDD).
Fig. 1.1 Cockpit of Boeing 707 (Courtesy: hitechautomotive.blogspot.com/2011/05/boeing-707-cockpit.html)
Aircraft carry two types of crash-safe recorders—the Flight Data Recorder (FDR) and the Cockpit Voice Recorder (CVR). The recorders store data that would be of interest to investigators in the event of a disaster. FDRs are used to record flight parameters of interest, while the
CVRs record the conversations in the cockpit—among pilots, Air Traffic Control (ATC) and cabin crew—and other audio data such as warnings from instruments, or unusual engine noises. The FDR retains the flight parameter data for at least 25 hours before it is overwritten. In addition, multichannel voice data from the cockpit is stored for a duration of at least 30 minutes. See § 5.2.15 for further discussion.
1.3 AVIONICS ARCHITECTURE The avionics architecture describes how the various avionics systems—such as those discussed in the previous section—are interconnected (network topology), the manner in which they communicate with each other (communication protocols), the systemwide resources that are available, and the manner in which such resources are shared or encapsulated by the avionics systems. In the following paragraphs we look at four different paradigms for avionics architecture: the central, distributed, federated and integrated modular architectures [Helfrick 2002, Watkins 2006]. The current trend is to use integrated modular architectures as the basis for avionics infrastructures.
1.3.1 Central Architecture A central avionics architecture has a single main computer that does all the data processing. In a pure central architecture there is no computing hardware or ‘intelligence’ resident in sensors. All of the data from the sensors are sent in a raw form—analog or digital—to the central computer for processing. If the signal is in analog form, then it is routed through an Analog-to-Digital Converter (ADC) connected to the computer. Consolidating all of the computing power into one computer makes it easier to maintain and/or upgrade the software running on the central computer. Since there is just a single computing resource it would be economically viable to deploy a significantly powerful computer onboard, and maintain it in a temperature-controlled dust-free environment. The main disadvantage, however, is that the computer becomes a single point of failure (SPOF). If the computer stops functioning mid-flight then it shuts down the entire avionics infrastructure. One work-around would be to build redundancy by having a backup central computer. The central architecture was attractive when computing power was expensive. With the scale of circuit integration that is available today, it is no longer necessary to concentrate all of the computing power into one computer.
1.3.2 Distributed Architecture In contrast to the central architecture, in a distributed architecture the computing power is not concentrated in a single computer but is distributed across the different onboard avionics systems. For example, the sensors are integrated with the necessary intelligence to process the data they acquire, and transmit the results to other systems.
The main advantage of the distributed architecture is that it does not have a SPOF. Failure of a system affects only the systems that rely on it, but does not crash the entire avionics infrastructure. Also, computation in a distributed architecture is quicker. Typically, the amount of processing needed in each system is quite small. Having the computations occur in parallel, instead of scheduling the computing tasks serially on a central computer, reduces the overheads associated with task scheduling and hence speeds up the overall processing. The third advantage is modularity, which enables individual systems to be replaced or upgraded easily. Changes in the implementation of a system would be invisible to other systems as long as their interfaces are not altered. The paradigm of hiding the implementation details of a system and exposing only an interface through which the functionality of the system is to be accessed is called encapsulation. The main disadvantage of the distributed architecture is cost: replicating computing power across systems increases it. Secondly, some of the systems, such as those measuring engine temperature or outside air pressure, operate in harsh conditions that are not suitable for the deployment of computing hardware.
1.3.3 Federated Architecture A federated architecture is a hybrid of the central and distributed architectures, as shown in Fig. 1.2. In a federated architecture the overall avionics infrastructure is partitioned into a small number of systems, such as the communication system and flight management system. Viewed at the granularity of systems it is a distributed architecture comprising interacting systems. Within a single system, however, one has a central architecture, with a central computer.
Fig. 1.2 Federated architecture
1.3.4 Integrated Modular Architecture
In a federated architecture the overall avionics infrastructure is physically partitioned into systems, each of which has its own hardware and software resources. In contrast, in an Integrated Modular Architecture (IMA), illustrated in Fig. 1.3, the partitioning into subsystems is virtual. The infrastructural resources, such as processors, sensors and I/O modules, are shared among different applications. A virtual system then comprises the application software and the parts of the shared resources that are allocated to it. Virtual system boundaries are not shown in Fig. 1.3 to avoid crowding the diagram; one could, for example, visualize Application 1 and the parts of the resources allocated to it as a virtual system.
Fig. 1.3 Integrated modular architecture
The paradigm shift in IMA is that the construction of the infrastructure begins with the deployment of the avionics resources, whose Application Program Interfaces (APIs) are made known. Different vendors can then develop application software packages for the infrastructure (modular development), knowing the details of the available resources and their APIs. The integration of the application software into the overall architecture is rendered relatively easy owing to the pre-specified interface specifications. The current trend is for most avionics systems to move towards IMA. Examples of civilian aircraft that use IMA are the Airbus A380 and the Boeing 787. The F-22 military aircraft of the U.S. Air Force also uses IMA.
1.4 HISTORY OF AVIONICS Interestingly, two pivotal events in the history of avionics occurred exactly a year apart. On December 17, 1902, Guglielmo Marconi succeeded in transmitting a wireless message across the Atlantic Ocean, belying the earlier belief that wireless communications were possible only between endpoints that were in each other’s line of sight. Marconi’s demonstrations leading up to the trans-Atlantic communication ushered in the era of wireless communication. Exactly a year later, on December 17, 1903, Orville Wright and Wilbur Wright demonstrated a controlled, powered flight of a heavier-than-air flying machine, heralding the era of aviation. The earliest avionics systems essentially comprised wireless radio communication equipment, which was used to perform surveillance behind enemy lines during World War I (WWI). The radio communication equipment had a range of a few miles and was deployed only on military aircraft during and shortly after WWI. Takeoff, navigation and landing of civilian aircraft in the 1920s relied entirely on the pilot’s vision. As a result, civilian flying was restricted to daylight hours and cloudless days. Flight schedules were at the mercy of the weather. In order to commercialize air transport it was necessary to develop the capability for ‘blind flight’—that is, the ability to fly at night and in low-visibility conditions. In an effort to enable flying at night, the Department of Commerce of the United States installed light beacons at selected locations along airways. A network of such airways had emerged in the 1920s. Even with the network of beacons in place, flying had to be restricted to weather conditions in which the beacons were visible. There was a continuing need to develop the capability for true ‘blind flight’, in which a pilot would take off, fly and land the aircraft using instruments and without looking outside the cockpit.
The key technology for such blind flight was wireless radio communication, since clouds are transparent to radio waves. Ground-based Non-Directional radio Beacons (NDBs) were installed along airways, which previously had only light beacons. Using an on-board directional antenna and a radio receiver, a pilot could orient the aircraft towards an NDB. The capability to fly towards an NDB was improved upon by the four-course range system, in which a ground-based radio transmitter continuously broadcast the letters A and N in Morse code along two orthogonal directions in the horizontal plane. The four-course range system not only enabled pilots to home in on the beacon, but also helped steer them along a course. The four-course range system was later upgraded to the VHF (Very High Frequency) Omnidirectional Range (VOR) system. While the four-course range system facilitated navigation, takeoff and landing in foggy conditions were still a challenge. Following failed attempts to disperse fog near runways using giant fans, attention soon turned to radio communication technology to assist with landing in foggy conditions. The first truly 'blind flight' occurred on September 29, 1929, when test pilot James Doolittle took off, flew and landed from under a canvas hood that blocked all view of the external world, with check pilot Benjamin Kelsey aboard as a safety backup. The blind
flight marked the advent of an important component of avionics—the radio landing system. The next major advance in the application of radio waves in aviation was the invention of Radio Detection And Ranging (RADAR). While RADAR could detect an incoming aircraft, it lacked the capability to distinguish between friend and foe. Work on the development of Identification Friend or Foe (IFF) systems gave rise to radio transponders, which are the precursors of the current Radio Frequency Identification (RFID) technology. As civilian air traffic density continued to increase near airports, the IFF system, which was primarily developed for wartime use, was adapted for civilian use, not to distinguish between friend and foe, but to maintain a safe inter-aircraft separation. Following Doolittle and Kelsey's first blind flight, the technology for blind landing continued to improve, resulting in the development of the Instrument Landing System (ILS) that is in use today. The first commercial passenger flight landed with the assistance of ILS in 1938, nine years after the maiden blind flight. In the latter half of the twentieth century avionics systems continued to evolve, catalyzed by the miniaturization made possible by semiconductor electronics. The Boeing 707 was the first aircraft on which transistor-based electronics were deployed; it also had distance-measuring avionics, autopilot navigational capability and an airborne weather radar. Starting in the 1950s, continued advances in civilian avionics were spurred by the advent of space travel, artificial satellites, missile technology, digital computing technology and high-altitude spy planes. Microprocessors were deployed in avionics to perform sophisticated tasks which were earlier done by humans. In the 1970s the Air Data Computer (ADC) was developed to collect and process data from various air sensors.
Examples of measurements monitored by an ADC include those of external parameters such as the airspeed, altitude and ambient air pressure. The 1970s also witnessed the emergence of Fly-By-Wire (FBW) technology. As mentioned above, an FBW system transmits the control signals from the cockpit to the actuators not mechanically but electrically. The mechanical control inputs, such as the movement of the control wheel, are transduced into electrical signals, which are then transmitted by wires to the electronically controlled actuators at the flight control surfaces. In digital FBW systems the control commands are transmitted to the actuators as digital signals; in analog FBW systems, as analog signals. The first truly FBW aircraft with no mechanical backup was the Lunar Landing Research Vehicle (1964). The first digital FBW fixed-wing aircraft without mechanical backup was the F-8 Crusader, which first flew in this configuration in 1972. Several variants of the FBW technology have emerged. Fly-By-Optics replaces the electrical cables with optical fibers, which are relatively lighter and immune to electromagnetic disturbances. Although FBW flight control reduces the weight of the aircraft significantly, the electrical cables still contribute to the aircraft's weight and have multiple points of failure—both within the cables and at the connectors. The Fly-By-Wireless control system, which uses wireless channels for communication between the cockpit and the flight control surfaces, retains the advantages of FBW systems while reducing
the weight further and increasing reliability. In the 1980s electronic collision avoidance systems were deployed on aircraft, improving the safety of passengers. In the 1990s the Global Positioning System (GPS) was made available for civilian use after years of development for military applications. The GPS was a revolutionary advance that made it possible for aircraft to determine their locations with unprecedented accuracy. See [Helfrick 2002] for a more detailed discussion of the above historical account. Selected advances that have occurred in the last two decades are discussed in Chapters 5 and 6, in which we take a closer look at the avionics in a modern commercial aircraft (Boeing 787) and a modern military aircraft (F-35) respectively.
CHAPTER 2

Avionics Systems: Design and Certification

Over the years a structured process has evolved to guide the design, development and certification of an entire aircraft, including its avionics systems. Regulatory agencies, such as the Federal Aviation Administration, play a critical role in providing oversight for the aircraft development process. An aircraft can be commissioned into service only after it is certified to be airworthy by the concerned regulatory agency. Therefore, even though certification is the last activity in the development of an aircraft, every step during the design and development effort is guided by the requirements that regulatory agencies place on certification. In this chapter we present a coarse overview of the avionics development activity, the guidelines and best practices used in avionics development and finally the aircraft certification process. We begin with the metrics used to evaluate an avionics system.
2.1 METRICS FOR EVALUATING AN AVIONICS SYSTEM

The salient metrics used to evaluate an avionics system are often called the "-ities" [Young 2004]. A few examples of "-ities" are listed below.

Capability: A measure of the system's built-in functionalities.
Reliability: Measured using the Mean Time Between Failures (MTBF).
Maintainability: A measure of the ease with which the avionics system can be maintained in an operational state.
Availability: Measured using the duty cycle—the percentage of time the system is available for use.
Survivability: A measure of the system's robustness and ability to survive damage to the aircraft, either in combat or in civilian catastrophes.
Integrity: A measure of the system's ability to alert the pilot when it realizes that it is malfunctioning and is sending hazardously misleading information.
Testability: A measure of the ease of troubleshooting.
Repairability: A measure of the ease of repairs when the system malfunctions.
Accessibility: A measure of the ease of access to the subsystems, especially those with small MTBF.
Affordability: The cost of the avionics system.
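To make the reliability and availability metrics concrete, a common steady-state approximation from reliability engineering (an addition here, not part of the list above, which defines availability via the duty cycle) expresses availability in terms of the MTBF and the Mean Time To Repair (MTTR):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is up,
    approximated as MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a unit with a 25,000-hour MTBF and a 5-hour mean time to repair
a = availability(25_000, 5)
print(f"{a:.4%}")  # 99.9800%
```

The example illustrates why a large MTBF alone is not enough: a unit that is hard to repair (large MTTR) can still have poor availability, which is why maintainability and accessibility appear as separate "-ities" above.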
2.2 DESIGN PROCESS FLOW Before proceeding to a discussion of the design process flow of an aircraft, including its avionics system, we list the various stakeholders involved in the design process, in Table 2.1.
2.2.1 Stakeholders

Most of the stakeholders listed in Table 2.1 require no elaboration. Examples of regulatory agencies are the Federal Aviation Administration (FAA), USA, the Directorate General of Civil Aviation (DGCA), India, and the International Civil Aviation Organization (ICAO) of the UN. For the remainder of the discussion we choose the FAA as the representative regulatory agency.
2.2.2 Process Flow

The process flow for the design, development and certification of avionics systems is outlined in Fig. 2.2. Guidelines for the overall development of a civilian aircraft are provided by the Aerospace Recommended Practice document ARP 4754 [SAE4754 1996], published by the Society of Automotive Engineers (SAE) International. The document was recognized by the FAA in its Advisory Circular AC 20-174 [AC-20-174, 2011].

Table 2.1 Stakeholders in aircraft development

Stakeholder: Examples of Concerns/Interests
Aircraft manufacturer: Ease of design and manufacture, financial factors, market
Regulatory agencies: Safety of flight, certification
Operators: Passenger/cargo payload, ease of maintenance
Civil airlines: Fuel efficiency, life-span, ease of maintenance, ease of operation, service costs, reliability, flight range, crashworthiness
Military: Maneuverability, agility, high acceleration, thrust vector control
Passengers: Safety, comfort, economy
Public: Impact on environment (e.g., pollution, noise)
As shown in Fig. 2.2 there are four main feeders to the Requirements Capture step.

Safety: The safety of flight is arguably the most important driver. The guidelines for the safety of civilian aircraft are specified by regulatory agencies in documents such as SAE/ARP 4754 and SAE/ARP 4761 [SAE4754 1996, SAE4761 1996]. SAE/ARP 4754 and 4761 provide the technical standards that can be used to demonstrate compliance with the requirements of the FAA. For military aircraft the technical standards are specified in documents such as MIL-STD-882C [MIL-STD-882C 1993].
[Fig. 2.2 Process flow for design and development of avionics systems. Usage, Safety, Cost and Certification feed the Requirements Capture step, which yields the System Requirements Specifications document. This is expanded into Phase Requirements and then partitioned into Structural, Engine and Avionics Requirements. The Avionics Requirements split into Hardware and Software Requirements, each proceeding through Design, Implementation/Coding and Verification/Testing. The two tracks merge at Integration of Hardware and Software and Testing, followed by TSO/STC Type Approval and finally Flight Testing and Aircraft Certification.]
Usage: Civilian aircraft are used either for the transport of passengers or of cargo. Accordingly, factors such as the passenger load, cargo load and desired range are important usage considerations. For military aircraft, the usage factors are more varied, depending on whether the aircraft is used for training, transport of heavy equipment, long-range bombing, air-to-air combat or surveillance. Alongside the usage information, the requirements capture step also involves gathering a high level specification of the desired functionalities of the aircraft.

Cost: The life cycle cost of the aircraft is an important consideration for civil airlines, and less so for military aircraft. Life cycle costs involve considerations such as the lifespan of the aircraft and the maintenance costs.

Certification: Certification is an important consideration in the design and development of avionics systems. The certification process starts at the requirements capture phase and ends with the retirement of the avionics equipment. Regulatory bodies, such as the FAA, do not certify subsystems of an aircraft, such as its avionics system; certification is granted for the whole aircraft.

The requirements capture step results in the generation of the System Requirements Specifications (SRS) document. The SRS forms the basis for all further design and manufacturing activities; capturing the requirements is thus arguably the most important step in the design and construction of an aircraft. The different phases of flight are taxiing on the ground, takeoff, ascent, cruising, descent, approach and landing. The high level description of system requirements captured in the SRS document is expanded, at higher resolution, into requirements for each of these flight phases. The phase requirements are then partitioned into the requirements for the engine, the structure of the aircraft and the avionics subsystem.
The high resolution description of the requirements for an avionics system has to precede the development of the system. The requirements for the composite avionics system are mapped to separate hardware and software requirements. The standards and guidelines for the development of avionics hardware and software are discussed in detail in the following subsections. The hardware development proceeds in three stages: design, implementation and verification. The software development proceeds in analogous stages: design, coding and testing. Following the development of the hardware and software functionalities along two parallel tracks, the hardware and software are integrated. The integrated system is subjected to extensive testing to ensure interoperability of the hardware and software. The integrated prototype comprising the avionics hardware and software is then submitted to the concerned regulatory agency for type approval. The approval is granted through a Technical Standard Order (TSO) if it is a new system, and through a Supplemental Type Certificate (STC) if it is a retrofit to an existing aircraft. After type approval, the avionics system is ready to be fitted into the aircraft and to proceed to the flight trials phase [Hilderman 2011]. Regulatory agencies do not certify subsystems, such as avionics; rather, certification is granted to the aircraft as a whole.
There are two very important aspects of the process outlined above. The first is that the design and construction of an avionics system—and for that matter the design and construction of the entire aircraft—are guided by certification considerations from the very beginning. Although certification is listed at the very end of the flowchart in Fig. 2.2, in reality Designated Engineering Representatives (DERs) are involved in the development process at every step to ensure that the whole process is on track for certification by the appropriate regulatory agency (such as the FAA). The second aspect is traceability. Certification is greatly facilitated by documenting how each of the implemented hardware and software functionalities can be traced back to at least one of the requirements listed in the SRS. Traceability is therefore a consideration that pervades the entire development effort.
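As a toy illustration of the traceability discipline (module and requirement identifiers below are invented), a traceability matrix can be checked mechanically for implemented items that map back to no SRS requirement:

```python
# Hypothetical traceability matrix: implemented item -> SRS requirements it
# satisfies. Names are illustrative, not from any real aircraft program.
trace = {
    "fuel_gauge_driver":   ["SRS-12"],
    "altitude_alert":      ["SRS-03", "SRS-07"],
    "cabin_lighting_ctrl": [],          # traces to nothing: flagged below
}

def untraceable(matrix: dict) -> list:
    """Return implemented items that trace back to no requirement."""
    return [item for item, reqs in matrix.items() if not reqs]

print(untraceable(trace))  # ['cabin_lighting_ctrl']
```

An item that traces to nothing is either unnecessary functionality or evidence of a missing requirement; either way it is a certification concern, which is why such checks are automated in practice.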
2.3 HARDWARE DESIGN AND CERTIFICATION

The guidelines for the development of avionics hardware are specified in the RTCA (Radio Technical Commission for Aeronautics) document RTCA/DO-254: Design Assurance Guidance for Airborne Electronic Hardware (see Appendix II). DO-254 provides guidelines for the hardware development process without imposing any constraints on the actual implementation. In the following paragraphs we provide an overview of DO-254. An interested reader, and especially designers who seek to comply with DO-254, should refer to the original DO-254 document for details. In 2005, the FAA issued Advisory Circular AC 20-152 [AC-20-152 2005], which formally recommended using DO-254 to guide the development of hardware, especially hardware with Design Assurance Levels A, B or C. The Design Assurance Level (DAL) characterizes the criticality of a hardware component [Fulton 2014]. The criticality is measured by the impact that the failure of the component would have, as described in Table 2.3.

Table 2.3 Design assurance levels

DAL A (Catastrophic impact): The aircraft cannot fly and/or land safely; the crew's ability is severely impaired; severe casualties among passengers. Examples: instrument landing system, ground proximity warning system.
DAL B (Hazardous impact): Drastic reduction in the safety margin; the pilots' or crew's decisions are no longer reliable due to extreme stress; serious to fatal injuries to passengers. Examples: flight displays.
DAL C (Major impact): Significant reduction in the safety margin; the efficiency of the crew is impaired due to excess load; non-fatal injuries and discomfort to passengers. Examples: weather radar.
DAL D (Minor impact): Small reduction in the safety margin; the crew's capabilities and efficiency are not affected; some inconvenience to passengers. Examples: cabin communication equipment.
DAL E (No impact): Operational capability is not affected; no effect on the crew; no effect on passengers. Examples: in-flight entertainment.
Needless to say, development of hardware at a higher criticality level, such as level A, involves significantly more effort in design and verification than development of hardware at a lower criticality level. The hardware development process can be partitioned into four sequential stages—planning, design, implementation and verification—as shown in Fig. 2.4. At the end of each stage of the development effort, there is to be a formal audit by the DER to ensure that the development effort is on track to certification.

[Fig. 2.4 Avionics hardware development process. The hardware requirements and the safety assessment process feed the four sequential stages of Planning, Design, Implementation and Verification, each followed by an audit (Planning Audit, Design Audit, Implementation Audit, Final Audit), ending in integration with software.]
Planning: The first phase, planning, involves the development of a clear specification of the formal hierarchical requirements for the hardware. The formal requirements are to be translated into two documents which provide the design specification and the verification plan. The planning phase leads to the development of a formal PHAC (Plan for Hardware Aspects of Certification) document. As stated in DO-254, "The PHAC defines the processes, procedures, methods and standards to be used to achieve the objectives of this document and obtain certification authority approval for certification of the system containing hardware items." In addition to the requirements specification, design specification and verification plan mentioned above, the PHAC should also include three other sections, on tool assessment, traceability mechanism and configuration management. While DO-254 does not specify the tools to be used in the hardware development process, it does require that the PHAC include the identification of the tools to be used in the development process, an independent assessment of the identified tools, the relevant history of the tools (such as their documented use in similar hardware development in the past), and finally a discussion of the qualification of the tools for the intended hardware development. In addition, the PHAC is required to specify the traceability mechanism, that is, the mechanism to trace (map) the functionality of each hardware component back to one or more of the specified system requirements. The PHAC must also elaborate on configuration management—the strategy for tracking the version and history of the hardware subsystems. After all of the above documents are prepared they are submitted to the DER for a Planning Audit before proceeding to the design phase.

Design: As mentioned above, the planning phase is expected to produce a high-level design specification and a verification plan. The first step in the design process is to realize the specifications at a relatively high level using the Register-Transfer Level (RTL) formalism.
RTL, a formalism for digital circuit design, is commonly used for the design of sequential logic circuits. An RTL circuit is represented using registers, for storage, and logic gates that transform the signals flowing between registers. Fig. 2.5 illustrates a simple circuit in the RTL formalism. See Chapter 3 for a more detailed discussion of the circuit elements.
[Fig. 2.5 An RTL circuit: two D-latches driven by a common clock, with combinational logic between the output Q1 of the first latch and the input D1 of the second.]
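The register-transfer behavior of a two-register chain like that of Fig. 2.5 can be sketched behaviorally. The following Python fragment is a simplification for illustration only (real RTL is written in a hardware description language such as VHDL or Verilog); it models edge-triggered registers that all latch simultaneously on the clock edge:

```python
# Behavioral sketch of a two-register chain: on each clock edge every
# register latches its input at the same instant.

def shift_two_stage(bits):
    """Clock a bit stream through two registers in series; return the value
    observed at the second register's output just before each clock edge."""
    r1 = r2 = 0                # registers power up holding 0
    observed = []
    for d in bits:
        observed.append(r2)    # sample Q2 just before the edge
        r1, r2 = d, r1         # both registers latch simultaneously
    return observed

print(shift_two_stage([1, 0, 1, 1]))  # [0, 0, 1, 0]: input delayed two clocks
```

The simultaneous-assignment idiom (`r1, r2 = d, r1`) mirrors the key RTL property that all registers update on the same clock edge, so a value takes one full cycle to cross each register stage.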
The RTL design is expected to provide the functionality outlined in the design specification. Simulation software is to be used to ensure that the RTL design delivers the required functionality. Further, the verification process requires that each component in the RTL design be traceable to one or more of the formal requirements. DO-254 does not
specify the simulators (or other tools) used during the design process. However, it does require that a justification be provided for the choice of tools, either by documenting the successful use of the chosen tools in similar projects, or by demonstrating that the tools are industry standards. Following successful verification that the RTL design indeed delivers the functionality specified in the design specification, the RTL design is provided to a synthesizer, which converts the abstract RTL design into an actual implementation-level design using real circuit elements. The synthesis tools also seek to optimize the timing, physical area and power consumption of the circuits. The implementation-level design produced by the synthesis tools must be functionally equivalent to the abstract RTL design. In other words, for any set of inputs, the two models must produce the same output at every clock cycle. Although, in principle, the synthesis tools guarantee the functional equivalence of the input and output designs, the synthesis step typically employs several software packages, and the final design is often modified manually. As a result, errors could creep in at the synthesis stage, making it necessary to check whether the implementation-level design produced is actually functionally equivalent to the abstract RTL design. The equivalence checking is done once the synthesis is complete, as shown in Fig. 2.6.

[Fig. 2.6 Hardware design flow. The requirements yield a design specification and a verification plan; the RTL design is verified against the specification and then synthesized; equivalence testing compares the synthesized design with the RTL design, and the result is analyzed against the requirements in a feedback loop that exits, with reports, only when the requirements are met.]

After the implementation-level design has been finalized, the circuits are also analyzed using a simulator to check whether they satisfy the original requirements. If the requirements diverge from the functionalities provided by the designed circuits, the design is refined in a feedback loop. The design process exits the loop when the design satisfies the original requirements specified for the hardware. DO-254 recommends that the final design be submitted to the DER for a Design Audit before proceeding to the implementation step.

Implementation: The final design is implemented as a prototype in the implementation stage. Complex airborne hardware relies on Application Specific Integrated Circuits (ASICs) and Field Programmable Gate Arrays (FPGAs) for the implementation. DO-254 does not constrain the choices that are to be made during the implementation step.

Verification: Functional verification can be done using approaches such as directed testing and constrained random verification [Jasinski 2016]. In directed testing one performs a test for each requirement, until all the requirements have been covered. It is a straightforward approach and provides considerable visibility into the design. Tests that pass point to requirements that have been met, while failed tests point to unmet requirements. Compartmentalizing the verification process by requirements, however, does not probe the emergent effects due to coupling between requirements.
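The post-synthesis equivalence testing described above can be sketched for a tiny combinational example. The circuit and its "synthesized" rewrite below are invented for illustration (real flows use dedicated formal equivalence-checking tools, not exhaustive simulation): two models are equivalent only if they agree on every input vector.

```python
from itertools import product

def rtl_majority(a: int, b: int, c: int) -> int:
    """Reference RTL-level model: majority vote of three input bits."""
    return 1 if a + b + c >= 2 else 0

def impl_majority(a: int, b: int, c: int) -> int:
    """Gate-level style rewrite, as a synthesizer might emit: (a&b)|(b&c)|(a&c)."""
    return (a & b) | (b & c) | (a & c)

def equivalent(f, g, n_inputs: int) -> bool:
    """True iff f and g agree on all 2**n_inputs input vectors."""
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n_inputs))

print(equivalent(rtl_majority, impl_majority, 3))  # True
```

Exhaustive comparison is feasible only for tiny circuits; the point of the sketch is the criterion itself, namely agreement on every input, which formal tools establish symbolically for designs far too large to enumerate.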
A strategy that has evolved to probe the state space of the design, including adverse couplings, is constrained random verification: the behavior of the system is studied on carefully constrained random inputs. Such random verification has the capability to probe regions of the state space that are not exercised by directed testing. The drawback of this approach, however, is that it is not very helpful in confirming that specific features of the design are free of defects. A hybrid approach is often followed to realize the advantages of both directed testing and constrained random verification. The hybrid approach uses directed testing in the initial phase to verify that the individual requirements are satisfied, and subsequently switches to constrained random verification to probe the behavior of the system further.
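A minimal sketch of the hybrid strategy, using an invented device under test rather than real airborne hardware: directed tests cover each stated requirement, then constrained random inputs, checked against a reference model, probe the rest of the input space.

```python
import random

def saturating_add(a: int, b: int) -> int:
    """Device under test (illustrative): 8-bit unsigned adder saturating at 255."""
    return min(a + b, 255)

# Directed tests: one per explicit (invented) requirement.
assert saturating_add(0, 0) == 0        # REQ-1: zero plus zero is zero
assert saturating_add(200, 100) == 255  # REQ-2: overflow saturates at 255

# Constrained random tests: inputs drawn only from the legal 8-bit range.
rng = random.Random(42)                 # fixed seed for reproducibility
for _ in range(1000):
    a, b = rng.randrange(256), rng.randrange(256)
    result = saturating_add(a, b)
    assert 0 <= result <= 255           # invariant must hold everywhere
    assert result == min(a + b, 255)    # comparison against a reference model
print("all directed and random tests passed")
```

The constraint (inputs limited to 0 to 255) keeps the random stimulus within the legal operating envelope, which is what distinguishes constrained random verification from naive fuzzing.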
2.3.1 Thermal Considerations

As the packaging density of avionics systems increases, large quantities of heat are dissipated in very small volumes. Being subjected to sustained high temperatures makes electronic components prone to failure and reduces the Mean Time Between Failures (MTBF). Conventional cooling techniques, such as the circulation of air by a fan or the use of heat pipes to drain heat away from hot spots, are no longer adequate when the circuit density is high. Novel and more effective techniques for the rapid drainage of heat are therefore critical in the design of avionics. We mention one example of a cooling technique that is currently used to cool avionics.
The F-22 Raptor aircraft uses a special liquid coolant—Poly Alpha Olefin (PAO)—to conduct heat away from its power supplies [Brower 2001, F22 2017]. The PAO coolant is made to flow through the power supply module, to drain heat away from the module. The reduction in ambient temperature, achieved by PAO cooling, translates to an increase in the output power from 250 watts to 400 watts [F22 2017]. The F-22 Raptor also uses the same liquid flow-through cooling to drain heat from all the Line Replaceable Modules (LRMs) in its Central Integrated Processor (CIP). F-22’s Integrated Modular Avionics System has two CIP racks with 66 slots per rack [F22 2017]. The LRMs are liquid flow-through cooled using the PAO coolant supplied by its Environmental Control System. Liquid flow-through cooling leads to enhanced reliability with a MTBF of 25,000 hours [F22 2017].
2.4 SOFTWARE DESIGN AND CERTIFICATION

The term avionics software encompasses the system software running on a flight's computers, the firmware running in embedded systems, and the application software that performs tasks ranging from controlling the entire flight in autopilot mode to archiving the data generated during the flight. Although the data used or generated by the software running on onboard instruments and computers does not, strictly speaking, constitute software, we include it in the discussion of avionics software. Although intangible, defects in the avionics software can be every bit as consequential as defects in onboard hardware. Consequently, the avionics software's life cycle, encompassing its development, deployment and maintenance, is strictly regulated by the licensing agencies. In fact, the avionics standard for hardware development—DO-254, discussed above—evolved from its analog for software. Certification of an aircraft by a licensing agency, such as the FAA, hinges on adherence to the software development guidelines laid out in a standard such as DO-178C (which evolved from the earlier de facto standard DO-178B). In the following subsections we present a coarse overview of the DO-178 standard. For a fuller discussion the reader is referred to [Hilderman 2011]. As with avionics hardware, avionics software modules are classified into five Design Assurance Levels—levels A to E—based on their criticality. The DALs A to E mirror the DALs of hardware. Since malfunctioning of software at DAL A, B or C can lead to injuries at the very minimum, and possibly fatalities, developing software at these DALs and obtaining certification for it involves considerably more effort than for software at lower levels of criticality. As shown in Fig. 2.2, the avionics requirements are broken down into hardware and software requirements. The software development process begins with the software requirements as its input.
It is recommended that the software development process adhere to two guidelines. The first, shown in Fig. 2.8, is continuous consultation with the certifying agency. The second is that the development of the software be constrained by the traceability requirement, that is, the requirement that individual modules of code be mapped to the high level requirements specified for the software. One of the simplest models of software development is the so-called waterfall model illustrated in Fig. 2.7 [Leach 2016].

[Fig. 2.7 Waterfall model of software development: Planning, Design, Implementation, Verification, Maintenance, in sequence.]
The waterfall model stipulates a one-dimensional progression from the planning stage to the maintenance stage. In reality, software development often involves feedback to an earlier phase, such as the design phase, based on work done in downstream phases. Notwithstanding such feedback, the waterfall model is a useful starting approximation to the software development process. The software development process is summarized in Fig. 2.8.

Planning: The first task in the software development effort is stipulated to be the preparation of the following documents [Rierson 2013]:
1. Plan for Software Aspects of Certification (PSAC)
2. Software Development Plan (SDP)
3. Software Configuration Management Plan (SCMP)
4. Software Quality Assurance Plan (SQAP)
5. Software Verification Plan (SVP)
The PSAC describes the road map towards certification. While the certification is granted to the aircraft as a whole, and not to software modules, the PSAC focuses on the requirements to be satisfied by the software infrastructure in order to obtain certification for the aircraft. The SDP lays out the plan for the development of the software infrastructure for avionics. The configuration management plan, detailed in SCMP, describes the plan for maintaining consistency in the software infrastructure, throughout the software lifecycle, in the face of changes made independently to different software modules. The SQAP details the plan for ensuring that the software development effort is in compliance with the plans described in PSAC, SDP, SCMP and SVP, as well as the requirements specified in DO-178. And finally the SVP describes the plan for verifying that the software performs as it is expected to perform.
[Fig. 2.8 Software development process. The software requirements and the safety assessment process feed a sequence of Planning, Requirements, Design, Coding and Testing stages, ending in integration with hardware, with consultation with the certification agency throughout.]
Requirements: As shown in Fig. 2.2, the avionics requirements yield the derived requirements for the software. The software requirements and the outcome of the safety assessment process are consolidated into a list of High Level Requirements (HLRs) for the software. The HLRs specify, for each software module viewed as a black box, the desired functionality, performance requirements, timing constraints, memory constraints and the interfaces to hardware and to other software modules.

Design: The design phase involves designing a software architecture, complete with a description of the various interacting software modules, their data consumption, generation and storage characteristics, the interactions among the modules and the system-wide data bus. The software architecture and the HLRs are used to specify the set of Low Level Requirements (LLRs), which involves allocating specific tasks to modules, choosing the data structures and algorithms to be used by the modules, and specifying the input and output of each module and the data and control flows. The design is specified in sufficient detail to enable implementation of each module. After the completion of this phase two documents—the Software Requirements Data (SRD) and the Software Design Description (SDD)—are to be generated to serve as the basis for the coding effort.

Coding: DO-178B/C does not place constraints on the platform or languages used for coding. The programmers have the latitude to select the appropriate platform, tools, languages and even Commercial Off The Shelf (COTS) software. However, they are required to document the justification for the choice of tools and/or COTS software used in the
coding process. The outcome of the coding process is the source code, including the linked libraries, and the hardware-specific executable machine code installed on the target hardware.

Testing: In general, the problem of guaranteeing the correctness of software is known to be unsolvable (see the Halting Problem of the Turing machine [Davis 1983]). It is practically impossible to certify that the avionics software will perform as required in all the situations that may arise during flight. Although unable to provide an assurance of correctness, the testing/verification phase attempts to establish performance assurances to the best possible extent.

Code Coverage Analysis (CCA) is one of the techniques used for testing software [Peled 2013]. The simplest case of code coverage analysis is statement coverage analysis, in which one tracks the statements in the source code that were executed on an input test case, say T1. Statements that are executed are covered by T1, and those that are not executed are not covered. The challenge is to design a suite of test cases T = {T1, T2, …, Tn} such that each statement in the source code is executed at least once when the software is tested on the suite T. A coarser version of statement coverage analysis is function coverage analysis, in which one tracks the execution of functions/subroutines in the source code instead of individual statements. Other variants are loop coverage (has every loop been executed at least once?) and branch coverage (have all possible branches of a branch statement been executed?). The philosophy of coverage analysis is that if each unit of code—such as a statement or a function—is exercised under the range of conditions that can exist while the program visits that unit, then the probability of the software malfunctioning on an unseen input is reduced.

A different approach is to test each individual module of software extensively in a phase called unit testing.
After the individual modules are exhaustively tested, one progresses to integration testing, in which the modules are allowed to interact and are tested together. The final phase is acceptance testing, in which the software as a whole is tested to verify that it meets the HLRs placed on the software [Peled 2013].

Standard tools, such as code coverage analyzers, may be used in the testing process. The testing could also use tools that analyze the coverage of the HLRs by the developed code. The justification for the choice of tools needs to be included in the documents submitted for certification. An important aspect of testing is independence: the testing must be done by personnel who are not involved in the development effort.

The software testing effort results in the preparation of the following documents, which are to be submitted for certification purposes: the Software Verification Cases and Procedures (SVCP) and the Software Verification Results (SVR). In addition to the above documents the software development effort is also required to maintain a Software Configuration Index (SCI) and a Software Life Cycle Environment Configuration Index (SECI). These documents contain a chronological record of the
versioning, revisions to code/data/parameters and bug fixes. There is also a requirement to maintain a record of the quality assurance tests done to show compliance with DO-178B/C, specifically, the Software Quality Assurance Records (SQAR).

At the end of the development effort a Software Conformity Review (SCR) must be performed to demonstrate that the development and testing of the software have conformed to the HLRs, the traceability requirement, the documentation requirement, and the PSAC, SDP, SCMP, SQAP and SVP submitted at the beginning of the development process [Rierson 2013]. The final document that needs to be prepared to comply with the requirements for certification is the Software Accomplishment Summary (SAS). See [Davis 2013] for a detailed discussion of the abovementioned documents.

Modification of any software after aircraft certification, especially software in the DAL A to DAL D categories, requires the approval of the Design Organization (DO), which is expected to maintain a record of all the changes made to the software throughout the life of the aircraft as part of its configuration management plan. For the purposes of post-certification modification, the term software includes both the executable code and the data, such as parameter values, that are essential for the performance of the software.
CHAPTER 3

Principles of Avionics

The hardware of avionics systems comprises both digital and analog electronic components. In this chapter we discuss the basic concepts and the operating principles underlying the electronic components used in avionics systems.
3.1 DIGITAL NUMBER SYSTEMS, CODES AND COMPLEMENT ARITHMETIC

We begin the discussion by reviewing the key number systems and codes and the complement system of binary arithmetic.
3.1.1 Number Systems

Digital electronics largely uses the binary (base-2) number system and, to a lesser extent, the hexadecimal (base-16) number system. In general, a base-n number system has symbols that represent the numbers 0, 1, …, n – 1. For example, in the decimal number system n = 10 and we have the symbols 0, 1, 2, …, 9 = n – 1. For the hexadecimal system n = 16, and we have the symbols 0, 1, 2, …, 9, A, B, C, D, E, F, where A represents the number 10, B the number 11, …, and F the number 15 = n – 1. For the binary number system n = 2, and thus we have symbols to represent only the numbers 0 and 1 = n – 1, which are called binary digits, or bits for short.

As in the decimal number system, a number in a base-n system is written as a string of symbols. Consider, for example, the number N = 1011. In the decimal number system the value of 1011 is

(1011)_10 = 1 × 10^3 + 0 × 10^2 + 1 × 10^1 + 1 × 10^0 = (1011)_10   (decimal number system)

whereas the value of the same number 1011 in the hexadecimal number system is

(1011)_16 = 1 × 16^3 + 0 × 16^2 + 1 × 16^1 + 1 × 16^0 = (4113)_10   (hexadecimal number system)

and its value in the binary number system is

(1011)_2 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = (11)_10   (binary number system)
On the other hand, M = 94BE cannot be a number in the decimal or binary system, since those systems do not have the symbols B or E. However, M = 94BE is a valid number in the hexadecimal system, in which its value is

(94BE)_16 = 9 × 16^3 + 4 × 16^2 + 11 × 16^1 + 14 × 16^0 = (38078)_10   (hexadecimal number system)
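The positional evaluation used above can be sketched in Python (an illustrative helper; `to_decimal` and `DIGITS` are our names, not from the text):

```python
# Evaluate a digit string in a given base by accumulating weighted symbol values.
DIGITS = "0123456789ABCDEF"

def to_decimal(s: str, base: int) -> int:
    value = 0
    for ch in s:
        d = DIGITS.index(ch)
        if d >= base:
            raise ValueError(f"symbol {ch} is not valid in base {base}")
        value = value * base + d   # shift previous digits left by one position
    return value

print(to_decimal("1011", 2))    # 11
print(to_decimal("1011", 16))   # 4113
print(to_decimal("94BE", 16))   # 38078
```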
3.1.2 Negative Integers and Binary Arithmetic

In the binary number system a bit string, such as 1011 above, represents a positive integer. We discuss below three different schemes used to represent negative integers—the signed magnitude system, the one's complement system and the two's complement system [Lu 2004].

In the signed magnitude system the most significant bit (MSB) is used to represent the sign and the remaining bits, the magnitude. If the MSB is zero the sign of the number is taken to be positive, and negative otherwise. For example, in the signed magnitude system

0 1011 = +11
1 1011 = –11

Given n bits, the range of numbers one can represent in the signed magnitude system is [–(2^(n–1) – 1), 2^(n–1) – 1]. In this system zero is represented by two different bit strings, in which the MSB can be either 0 or 1, with the other bits being zero.

The next two number systems—the one's complement and two's complement—are specified by the weights they assign to the MSB, as shown in Table 3.1.

Table 3.1 1's complement and 2's complement binary number systems (bit weights in an n-bit number)

Number System | (n–1)th bit   | (n–2)th bit | … | 1st bit | 0th bit | Smallest #     | Largest #
Binary        | 2^(n–1)       | 2^(n–2)     | … | 2^1     | 2^0     | 0              | 2^n – 1
1's comp.     | –2^(n–1) + 1  | 2^(n–2)     | … | 2^1     | 2^0     | –(2^(n–1) – 1) | 2^(n–1) – 1
2's comp.     | –2^(n–1)      | 2^(n–2)     | … | 2^1     | 2^0     | –2^(n–1)       | 2^(n–1) – 1
In the following example we evaluate the value of a sample 4-bit number, namely 1011, in the three number systems shown in Table 3.1.

Binary Number System:    1011 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = 11
1's Complement System:   1011 = 1 × (–2^3 + 1) + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = –4
2's Complement System:   1011 = 1 × (–2^3) + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = –5
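The three evaluations above can be checked with a small Python sketch (the function `value` and its system labels are our own naming):

```python
# Evaluate an n-bit string under the binary, 1's complement and 2's complement
# weighting of the MSB, per Table 3.1.
def value(bits: str, system: str) -> int:
    n = len(bits)
    msb = int(bits[0])
    rest = int(bits[1:] or "0", 2)          # weighted sum of the lower n-1 bits
    msb_weight = {"binary":  2 ** (n - 1),
                  "ones":   -(2 ** (n - 1)) + 1,
                  "twos":   -(2 ** (n - 1))}[system]
    return msb * msb_weight + rest

print(value("1011", "binary"))  # 11
print(value("1011", "ones"))    # -4
print(value("1011", "twos"))    # -5
```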
The 2's complement system is the most widely used system for implementing binary arithmetic within the Arithmetic Logic Unit (ALU) of computers, so we examine some of its elementary properties.

The 2's complement number system has the useful property that we do not need to handle positive and negative numbers differently in addition and subtraction; we just add the two summand binary numbers, regardless of their signs. If the carry into and out of the MSB are different, then we have an overflow. If the carry into and out of the MSB are the same, then there is no overflow; we throw away the carry out of the MSB, if any, and the resulting sum gives us the right answer.

The examples with 4-bit numbers, shown in Table 3.2, illustrate 2's complement addition. In Example 1, the carry into and out of the MSB are both 1. Therefore, there is no overflow. We throw away the carry out of the MSB. The sum is 0100, which is 4 in the 2's complement system. In Example 2, the carry into and out of the MSB are both 0, so there is no overflow. The sum 0101, which represents 5 in the 2's complement system, again gives the right answer. In Example 3, the carry into the MSB is 1 while the carry out of the MSB is 0. Since the carry into and out of the MSB are different, we have an overflow. The sum 1100, which represents –4 in the 2's complement system, is not the right answer. The overflow occurs because the largest positive number that can be represented in the 2's complement system with 4 bits is 7, while the correct sum is 12, which cannot be represented using 4 bits in the 2's complement system.

The reader can verify that similar reasoning applies to subtraction. Multiplication and division are implemented in the ALU using repeated additions and subtractions. An interested reader can refer to [Lu 2004] for a fuller discussion of 2's complement arithmetic.

Table 3.2 Addition in 2's complement system
             Example 1            Example 2            Example 3
             2's comp. (decimal)  2's comp. (decimal)  2's comp. (decimal)
Carry        1 1 1 0              0 0 1 0              0 1 1 1
Summand 1    0 1 1 0   ( 6)       0 0 1 1   (3)        0 1 1 1   ( 7)
Summand 2    1 1 1 0   (–2)       0 0 1 0   (2)        0 1 0 1   ( 5)
Sum          0 1 0 0   ( 4)       0 1 0 1   (5)        1 1 0 0   (–4)

(In each Carry row the leftmost digit is the carry out of the MSB and the digit next to it is the carry into the MSB.)
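The carry-based overflow rule illustrated in Table 3.2 can be sketched as follows (a rough Python model; `add_twos` is our name):

```python
# 2's complement addition on n-bit words with overflow detection, following
# the carry-into-MSB vs. carry-out-of-MSB rule described above.
def add_twos(a, b, n=4):
    mask = (1 << n) - 1
    am, bm = a & mask, b & mask          # n-bit 2's complement patterns
    raw = am + bm
    s = raw & mask                       # discard the carry out of the MSB
    carry_out = (raw >> n) & 1
    carry_in_msb = ((am ^ bm ^ s) >> (n - 1)) & 1   # sum bit = a ^ b ^ carry-in
    overflow = carry_in_msb != carry_out
    signed = s - (1 << n) if s >> (n - 1) else s    # MSB carries weight -2^(n-1)
    return signed, overflow

print(add_twos(6, -2))   # (4, False)   Example 1
print(add_twos(3, 2))    # (5, False)   Example 2
print(add_twos(7, 5))    # (-4, True)   Example 3
```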
3.1.3 Codes

One of the most well-known codes is the ASCII code, which is used to represent symbols using seven bits, labeled X6 X5 X4 X3 X2 X1 X0. Table 3.3 shows the partial mapping between the numbers 0–127 and the corresponding symbols in the ASCII code. The symbols corresponding to X6X5X4 = 000 and X6X5X4 = 001 are not shown.
Table 3.3 ASCII code

X3X2X1X0 \ X6X5X4:   010   011   100   101   110   111
0000                 Sp    0     @     P     `     p
0001                 !     1     A     Q     a     q
0010                 "     2     B     R     b     r
0011                 #     3     C     S     c     s
0100                 $     4     D     T     d     t
0101                 %     5     E     U     e     u
0110                 &     6     F     V     f     v
0111                 '     7     G     W     g     w
1000                 (     8     H     X     h     x
1001                 )     9     I     Y     i     y
1010                 *     :     J     Z     j     z
1011                 +     ;     K     [     k     {
1100                 ,     <     L     \     l     |
1101                 -     =     M     ]     m     }
1110                 .     >     N     ^     n     ~
1111                 /     ?     O     _     o     DEL
The eighth bit X7 is used as a parity bit. In an odd-parity scheme the parity bit is set to 0 or 1 to ensure that the total number of 1's in the ASCII symbol is odd. Thus, the parity bit for the symbol E is set to 0, while the parity bit for the symbol 3 is set to 1.

Whereas the ASCII code encodes the alphanumeric characters, the commonly used arithmetic operations and punctuation symbols, a restricted set of codes encodes the decimal numbers into binary form. We look at two families of such codes—the weighted and unweighted codes. As an example of weighted codes, we consider the Binary Coded Decimal (BCD) representation. As examples of unweighted codes we consider the Excess-3 and Gray codes. The BCD, Excess-3 and Gray codes are described in Table 3.4 [Jain 2010].
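The odd-parity computation described above can be sketched in Python (a minimal illustration; `odd_parity_bit` is our name):

```python
# Odd-parity bit for a 7-bit ASCII code: X7 makes the total count of 1's odd.
def odd_parity_bit(ch: str) -> int:
    ones = bin(ord(ch)).count("1")   # number of 1's among X6..X0
    return 0 if ones % 2 == 1 else 1

print(odd_parity_bit("E"))  # 0  (E = 1000101 already has an odd number of 1's)
print(odd_parity_bit("3"))  # 1  (3 = 0110011 has four 1's)
```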
Table 3.4 BCD, Excess-3 and Gray codes

Digit   BCD    Excess-3   Gray
0       0000   0011       0000
1       0001   0100       0001
2       0010   0101       0011
3       0011   0110       0010
4       0100   0111       0110
5       0101   1000       0111
6       0110   1001       0101
7       0111   1010       0100
8       1000   1011       1100
9       1001   1100       1101
The Excess-3 representation of a digit is the binary representation of the number obtained by adding 3 to the digit. The Gray code has the property that the codes of two consecutive digits differ in exactly one bit. The representations of decimal numbers in the different codes are best illustrated with an example. Consider the decimal number 25. Table 3.5 shows the representations of the number in the different codes.

Table 3.5 Examples in binary, BCD, Excess-3 and Gray codes

25 in Binary Form   25 in BCD    25 in Excess-3 Code   25 in Gray Code
11001               0010 0101    0101 1000             0011 0111

The binary number 11001 = 1 × 2^4 + 1 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 25. In the BCD, Excess-3 and Gray codes the digits 2 and 5 are represented separately, four bits per digit, using the codes shown in Table 3.4:

25 in BCD code:      0010 0101   (2 → 0010, 5 → 0101)
25 in Excess-3 code: 0101 1000   (2 → 0101, 5 → 1000)
25 in Gray code:     0011 0111   (2 → 0011, 5 → 0111)
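The digit-by-digit encodings of Table 3.5 can be reproduced with a short Python sketch (our helper; it assumes the decimal Gray code of Table 3.4 is the binary-reflected code d ^ (d >> 1), which matches all ten entries):

```python
# Encode a decimal number digit by digit in BCD, Excess-3, and the decimal
# Gray code of Table 3.4 (four bits per digit).
def encode(number: int, code: str) -> str:
    groups = []
    for ch in str(number):
        d = int(ch)
        if code == "bcd":
            v = d
        elif code == "excess3":
            v = d + 3
        elif code == "gray":
            v = d ^ (d >> 1)   # binary-reflected Gray value of the digit
        groups.append(format(v, "04b"))
    return " ".join(groups)

print(encode(25, "bcd"))      # 0010 0101
print(encode(25, "excess3"))  # 0101 1000
print(encode(25, "gray"))     # 0011 0111
```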
Notice that while the binary representation of the number 25 requires only 5 bits, the BCD, Excess-3 and Gray codes require 8 bits: 4 bits per digit.

BCD is called a weighted code because the four bits in BCD—going from left to right—have associated weights 2^3, 2^2, 2^1 and 2^0. The decimal digit represented by the 4-bit BCD code can be obtained by taking the weighted sum of the four bits. The Excess-3 and Gray codes, however, have no associated weights for the four bits, and are hence called unweighted codes.

In addition to the above codes, we are also interested in the line codes, which are used in digital data transmission. The commonly used line codes are the Return-to-Zero (RZ), Non-Return-to-Zero (NRZ) and the Manchester codes.

Bipolar RZ: The characteristic feature of the bipolar RZ scheme is that the signal returns to zero in the second half of every clock period. The bits 1 and 0 correspond to the signal at high and low levels, which are symmetric about the zero level of the signal, as shown in Fig. 3.6. The advantage of the bipolar RZ code is that it is self-clocking. The disadvantage, however, is the wasted bandwidth.

Fig. 3.6 Transmission of the bit string 11001 using the bipolar RZ scheme
Bipolar NRZ: In contrast to the bipolar RZ scheme, in the bipolar NRZ scheme the signal does not return to zero in the second half of the clock period. Instead it remains at the same level as it was in the first half. A disadvantage of the bipolar NRZ code is that the signal is not self-clocking, in that the receiver may lose synchronization when there is a long run of successive 0's or 1's. The signal pattern for the bit string 11001 in the bipolar NRZ code is shown in Fig. 3.7.

Manchester NRZ: The Manchester NRZ code, illustrated in Fig. 3.8, is characterized by a mid-bit transition between the high and low levels of the signal. If the bit being transmitted is 1, then the mid-bit transition is from high to low, while the transition is from low to high if the transmitted bit is 0.
Fig. 3.7 Transmission of the bit string 11001 using the bipolar NRZ code
The Manchester code, like the RZ code, is also self-clocking, and is used in MIL-STD-1553B bus data transmission.

Fig. 3.8 Transmission of the bit string 11001 using the Manchester NRZ code
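The three line codes can be sketched as half-period signal levels in Python (an illustrative model; `encode_line` is our name, and +1, −1 and 0 stand for the high, low and zero signal levels):

```python
# Half-period signal levels for a bit string under the three line codes
# described above; each bit maps to (first half, second half) of its period.
def encode_line(bits: str, scheme: str):
    out = []
    for b in bits:
        lvl = 1 if b == "1" else -1
        if scheme == "rz":            # bipolar RZ: level, then back to zero
            out += [lvl, 0]
        elif scheme == "nrz":         # bipolar NRZ: hold for the full period
            out += [lvl, lvl]
        elif scheme == "manchester":  # 1: high->low transition, 0: low->high
            out += [lvl, -lvl]
    return out

print(encode_line("11001", "manchester"))
# [1, -1, 1, -1, -1, 1, -1, 1, 1, -1]
```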
3.2 AMPLITUDE MODULATION

Amplitude modulation is a scheme in which a signal is transported by modulating the amplitude of a carrier wave—usually a sinusoidal wave—with the signal [Bakshi 2008]. Since any signal can be decomposed as a sum of sinusoidal waves, we consider the simplest case in which the signal itself is a sinusoidal wave. Specifically, denoting the signal, the carrier wave and the modulated wave as S(t), C(t) and M(t) respectively, we have

S(t) = S0 sin(ω_s t + ϕ)
C(t) = C0 sin(ω_c t)
M(t) = (1 + S(t)) C(t)
The modulated waveform can be written as

M(t) = C(t) + (S0 C0 / 2) {cos((ω_c − ω_s)t − ϕ) − cos((ω_c + ω_s)t + ϕ)}
The modulated wave has three components—the original carrier wave and two sidebands, whose frequencies lie slightly below and above the carrier wave frequency. For the values S0 = 0.5, C0 = 10, ω_c = 10, ω_s = 0.5, ϕ = π/4, 0 ≤ t ≤ 10π, the three waveforms are shown in Fig. 3.9.

[Figure: three panels over 0 ≤ t ≤ 10π showing the carrier wave, the signal, and the amplitude-modulated waveform.]

Fig. 3.9 Amplitude modulation
The original signal S(t) can be recovered from the amplitude-modulated waveform M(t) using a waveform D(t) = sin(ω_c t) that has the same frequency and phase as the carrier wave. Specifically,

M(t) D(t) = C0/2 + (C0/2) S(t) − (C0/2) cos(2ω_c t) − (C0 S0/4) {sin((2ω_c + ω_s)t + ϕ) − sin((2ω_c − ω_s)t − ϕ)}

The first term on the right hand side is a constant dc shift, which is removed by shifting the product M(t)D(t) by that constant amount. The remaining terms, other than (C0/2)S(t), have frequencies in the range [2ω_c − ω_s, 2ω_c + ω_s] and are removed using a low pass filter. One is then left with the original signal S(t) scaled by the factor C0/2. Thus, up to a multiplicative factor, one recovers the original signal.
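The sideband decomposition above can be verified numerically (a Python sketch using the example parameter values of Fig. 3.9):

```python
# Numerical check of the sideband decomposition of M(t) = (1 + S(t)) C(t),
# with S0 = 0.5, C0 = 10, wc = 10, ws = 0.5, phi = pi/4 (the Fig. 3.9 values).
import math

S0, C0, wc, ws, phi = 0.5, 10.0, 10.0, 0.5, math.pi / 4

def S(t): return S0 * math.sin(ws * t + phi)
def C(t): return C0 * math.sin(wc * t)

for t in [0.1, 1.0, 2.5, 7.0]:
    m = (1 + S(t)) * C(t)
    sidebands = (S0 * C0 / 2) * (math.cos((wc - ws) * t - phi)
                                 - math.cos((wc + ws) * t + phi))
    assert abs(m - (C(t) + sidebands)) < 1e-9
print("sideband identity verified")
```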
3.3 PHASE SHIFT MODULATION SCHEMES

Phase shift keying is a modulation scheme in which bits are encoded as phase shifts of a carrier wave [Anttalainen 2014]. We discuss two such schemes below—binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK). The constellation diagrams for the two schemes are shown in Fig. 3.10: for BPSK, the bits 1 and 0 sit at phases 0 and π on the I axis; for QPSK, the bit pairs 11, 01, 00 and 10 sit at phases π/4, 3π/4, 5π/4 and 7π/4.

Fig. 3.10 Constellation diagrams for binary (left) and quadrature (right) phase shift keying

The real and imaginary axes of a constellation diagram are, by convention, labeled I (in-phase) and Q (quadrature) respectively. Bits or bit patterns are represented as phases—points on a unit circle in the complex plane.
3.3.1 Binary Phase Shift Keying

Binary phase shift keying (BPSK) encodes bit 1 as a phase shift ϕ and bit 0 as a phase shift π + ϕ. In Fig. 3.10 we have taken ϕ = 0.

Fig. 3.11 Binary phase shift keying: the carrier wave and the phase-shifted segments transmitted for the bit stream 1011
Figure 3.11 illustrates the BPSK scheme. The bit stream to be transmitted is taken to be 1011. Bit 1 is encoded as a phase shift of zero, and bit 0 as a phase shift of π, of the carrier wave shown at the top of Fig. 3.11. Specifically, if the carrier wave is fc(t) = sin(ωt), then bit 1 and bit 0 are transmitted as phase-shifted segments of the carrier wave:

f1(t) = sin(ωt) = fc(t)
f0(t) = sin(ωt + π) = −sin(ωt) = −fc(t)

In the bipolar NRZ encoding, bit 0 is represented as a low voltage or −1, and bit 1 as a high voltage or +1. Thus, if I(t) represents the input bit stream, the BPSK-modulated signal S(t) = I(t) fc(t) can be obtained using the simple modulator shown in Fig. 3.12.

Fig. 3.12 Binary phase shift keying modulator
The BPSK modulated signal is obtained using a multiplier circuit that multiplies the input bit stream I(t) with the carrier wave fc(t). The original bit stream encoded in the BPSK signal can be recovered by recalling the identity

sin²(ωt) = 1/2 − cos(2ωt)/2

Fig. 3.13 Binary phase shift keying demodulator
A BPSK demodulator circuit is shown in Fig. 3.13. A carrier wave recovery circuit is used to generate the carrier wave wc(t) = sin(ωt) from the incoming modulated signal. The multiplier circuit generates the signal g(t):

g(t) = 1/2 − cos(2ωt)/2,    if the input bit = 1
g(t) = −1/2 + cos(2ωt)/2,   if the input bit = 0

Passing the signal g(t) through a low pass filter (LPF) to remove the component at frequency 2ω, and amplifying the resulting signal with a gain of 2, recovers the original bit stream.
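The demodulation steps can be sketched numerically in Python (an idealized model, not the circuit itself; averaging over one bit period stands in for the low pass filter, and the names are ours):

```python
# BPSK demodulation sketch: multiply the received segment by the recovered
# carrier sin(wt), average over one bit period (idealized low pass filter),
# and apply a gain of 2; the sign gives the bit.
import math

w, N = 2 * math.pi, 1000      # carrier angular frequency; samples per bit

def demodulate_bit(segment):
    avg = sum(s * math.sin(w * k / N) for k, s in enumerate(segment)) / N
    return 1 if 2 * avg > 0 else 0

for bit in (1, 0):
    I = 1 if bit == 1 else -1                        # bipolar NRZ level
    segment = [I * math.sin(w * k / N) for k in range(N)]
    assert demodulate_bit(segment) == bit
print("both bit values recovered")
```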
3.3.2 Quadrature Phase Shift Keying

In contrast to BPSK, which encodes one bit per output symbol (a phase-shifted segment of the carrier wave, shown between dashed vertical lines in Fig. 3.14), quadrature phase shift keying (QPSK) encodes two bits per output symbol, as shown in Fig. 3.14. The phase shifts come from the constellation diagram of Fig. 3.10 (right). Thus, the output symbol corresponding to the bit pair 11 is encoded with a phase shift Δϕ = π/4, the bit pair 01 with a phase shift Δϕ = 3π/4, the bit pair 00 with a phase shift Δϕ = 5π/4, and finally the bit pair 10 with a phase shift Δϕ = 7π/4.

Fig. 3.14 Quadrature phase shift keying: the carrier wave and the modulated wave for the symbol sequence 11, 01, 00, 10
Assume that the input bit stream B2n B2n–1 … B1 is encoded in the bipolar NRZ scheme—as a stream comprising 1 and –1, with –1 representing 0—and that the carrier wave frequency is ω. Then consider the following function for a pair of consecutive bits B2k–1, B2k:

q(t; B2k–1, B2k) = (1/√2) {B2k–1 sin(ωt) + B2k cos(ωt)}

Table 3.15 shows the function q for the different combinations of B2k–1, B2k.
Table 3.15 Quadrature phase shift encoding

B2k–1   B2k   q(t; B2k–1, B2k)
  1      1    sin(ωt + π/4)
 –1      1    sin(ωt + 3π/4)
 –1     –1    sin(ωt + 5π/4)
  1     –1    sin(ωt + 7π/4)
Fig. 3.16 shows a QPSK modulator circuit, which implements the relation shown in Table 3.15. A serial to parallel converter splits the input stream … B3 B2 B1 into the odd bits … B5 B3 B1 and the even bits … B6 B4 B2; the odd bits are multiplied by sin(ωt)/√2, the even bits by cos(ωt)/√2 obtained from a 90° phase shifter, and the two products are summed to produce q(t).

Fig. 3.16 Quadrature phase shift keying modulator
The circuit for extracting the original bit stream B2n B2n–1 … B1 from the transmitted signal q(t; B2k–1, B2k) is shown in Fig. 3.17. Simple trigonometric calculations show that when q is multiplied by 2√2 sin(ωt) (alternatively, 2√2 cos(ωt)) one obtains the odd (even) bits plus a component of frequency 2ω. When the product is passed through a low pass filter, the high frequency component is eliminated. The odd and even bits are combined by a parallel to serial converter to recover the original bit stream.
[Figure: the received q(t) is multiplied in one branch by 2√2 sin(ωt), generated by a carrier wave recovery circuit, and in the other branch by 2√2 cos(ωt), obtained through a 90° phase shifter; each product passes through a low pass filter, and a parallel to serial converter reassembles the bit stream … B3 B2 B1.]

Fig. 3.17 Quadrature phase shift keying demodulator
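The QPSK relations of Table 3.15 and the demodulation argument above can be checked numerically (a Python sketch with our own names; period-averaging stands in for the low pass filter):

```python
# QPSK sketch: q(t) = (1/sqrt(2)) (B_odd sin(wt) + B_even cos(wt)); demodulate
# by multiplying with 2*sqrt(2) sin(wt) / cos(wt) and averaging over a symbol.
import math

w, N = 2 * math.pi, 1000      # carrier frequency; samples per symbol

def modulate(b_odd, b_even):
    return [(b_odd * math.sin(w * k / N) + b_even * math.cos(w * k / N))
            / math.sqrt(2) for k in range(N)]

def demodulate(samples):
    br_o = sum(s * 2 * math.sqrt(2) * math.sin(w * k / N)
               for k, s in enumerate(samples)) / N
    br_e = sum(s * 2 * math.sqrt(2) * math.cos(w * k / N)
               for k, s in enumerate(samples)) / N
    return round(br_o), round(br_e)

for pair in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
    assert demodulate(modulate(*pair)) == pair
print("all four symbols recovered")
```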
3.4 DIGITAL CIRCUITS

The building blocks of digital circuitry are the logic gates. Table 3.18 depicts some of the basic logic gates and their truth tables. A truth table specifies the output of a gate as a function of its input(s). The inputs are labeled A and B, and the output is labeled C.

Table 3.18 Basic logic gates

NOT (C = NOT A):   A = 0 → C = 1;   A = 1 → C = 0

          AND      OR        XOR        NAND          NOR
A  B      C = AB   C = A+B   C = A⊕B    C = NOT(AB)   C = NOT(A+B)
0  0      0        0         0          1             1
0  1      0        1         1          1             0
1  0      0        1         1          1             0
1  1      1        1         0          0             0
There are sixteen different logic gates with two inputs and one output. We have listed only the six most commonly used gates. Among them, the NAND and NOR gates are universal logic gates, in the sense that all sixteen possible gates can be constructed using only NAND gates or only NOR gates. The digital circuits built using logic gates can be classified as combinational logic circuits or sequential logic circuits, depending on the extent of delay between the input signal and the output signal. We will take a closer look at the two families of digital circuits in the following discussion. For a fuller discussion the reader is referred to [All About Circuits 2017].
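The universality of the NAND gate can be illustrated with a short sketch (our function names) that builds NOT, AND and OR from NAND alone and checks them against the truth tables of Table 3.18:

```python
# NAND universality: NOT, AND and OR constructed from NAND gates only.
def nand(a, b): return 1 - (a & b)

def not_(a):    return nand(a, a)           # tie both inputs together
def and_(a, b): return not_(nand(a, b))     # invert NAND
def or_(a, b):  return nand(not_(a), not_(b))   # De Morgan

for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NAND-built gates match the truth tables")
```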
3.4.1 Combinational Logic Circuits

In combinational logic circuits the output is a function of only the current inputs, and not of the history of inputs. As a result, there is practically no delay between the input and the output (a small delay of a few nanoseconds occurs due to signal propagation): the output appears almost as soon as the circuit is excited with an input. There is no need for a synchronizing pulse/signal, and the circuit does not involve flip-flops.
3.4.1.1 Half Adder

An example of a combinational logic circuit is the half-adder shown in Fig. 3.19. A half-adder accepts only two inputs, labeled A and B; it does not accept a carry as input. However, it produces a sum S = A ⊕ B as well as a carry = AB.

A  B | Sum  Carry
0  0 |  0     0
0  1 |  1     0
1  0 |  1     0
1  1 |  0     1

Fig. 3.19 Half-adder: (a) truth table, (b) logic circuit (an XOR gate for the sum, an AND gate for the carry), (c) symbolic representation
3.4.1.2 Full Adder

Another example of a combinational circuit is the full-adder, which accepts two inputs A and B as well as a carry-in Cin, and produces the sum as well as a carry-out as outputs: Sum = A ⊕ B ⊕ Cin and Carry-out = AB + B·Cin + Cin·A. The full-adder is shown in Fig. 3.20.

A  B  Cin | Sum  Carry-out
0  0  0   |  0      0
0  0  1   |  1      0
0  1  0   |  1      0
0  1  1   |  0      1
1  0  0   |  1      0
1  0  1   |  0      1
1  1  0   |  0      1
1  1  1   |  1      1

Fig. 3.20 Full-adder: (a) truth table, (b) logic circuit, (c) symbolic representation
The XOR of three inputs A ⊕ B ⊕ Cin is short-hand for two successive XORs, namely (A ⊕ B) ⊕ Cin.
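The half-adder and full-adder equations can be checked with a short Python sketch (function names are ours):

```python
# Half-adder and full-adder equations from Figs. 3.19 and 3.20.
def half_adder(a, b):
    return a ^ b, a & b                       # (sum, carry)

def full_adder(a, b, cin):
    s = a ^ b ^ cin                           # sum = A xor B xor Cin
    cout = (a & b) | (b & cin) | (cin & a)    # carry out = AB + B.Cin + Cin.A
    return s, cout

# the truth table of Fig. 3.20 reproduces binary addition: 2*Cout + Sum = A+B+Cin
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder verified")
```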
3.4.1.3 Multiplexer

As a final example of combinational logic circuits we consider a multiplexer, which is used to select one of several inputs. The simplest, 2-to-1, multiplexer is illustrated in Fig. 3.21. If S—the selector bit—is 1, then the output C = A. On the other hand, if S = 0, then C = B. Similarly, using two selector bits we can build a 4-to-1 multiplexer, and in general a 2^n-to-1 multiplexer using n selector lines.

Fig. 3.21 2-to-1 multiplexer
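The 2-to-1 selection C = (A AND S) OR (B AND NOT S) can be sketched at gate level (an illustrative helper; `mux2` is our name):

```python
# Gate-level 2-to-1 multiplexer of Fig. 3.21: S = 1 selects A, S = 0 selects B.
def mux2(a, b, s):
    return (a & s) | (b & (1 - s))

for a in (0, 1):
    for b in (0, 1):
        assert mux2(a, b, 1) == a   # S = 1 selects A
        assert mux2(a, b, 0) == b   # S = 0 selects B
```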
3.4.2 Sequential Logic

In contrast to combinational logic circuits, the output of sequential logic circuits depends on the current input as well as the history of inputs. In other words, sequential logic circuits involve memory, and there is a delay between the incidence of the inputs and the generation of the output. A sequential logic circuit requires precise timing, and uses a clock pulse to synchronize events. We look at examples of sequential logic circuits of increasing complexity, leading up to the Serial In Parallel Out (SIPO) register. The circuits are chosen to elucidate some of the key features of sequential logic.
3.4.2.1 Multivibrators

One of the key new features we encounter in going from combinational to sequential logic is the notion of feedback. A digital circuit that incorporates feedback is called a multivibrator. We look at three simple multivibrators—the astable, the monostable and the bistable multivibrators—which have, respectively, no stable state, one stable state and two stable states. Fig. 3.22 illustrates a simple astable multivibrator—an inverter whose output is fed back to its input.
Fig. 3.22 An astable multivibrator
It is easily seen that the multivibrator shown in Fig. 3.22 cannot have a stable output state. If the output state is 1, then feeding it back to the input, as shown, forces the output state to transition to 0. Similarly, if the output state is 0, then the feedback forces it to transition to 1.

In contrast to an astable multivibrator, a monostable multivibrator has essentially one stable output state, and a second output state in which it can exist for short intervals of time. An example of a monostable multivibrator is the pulse detector shown in Fig. 3.23. The AND gate in the figure has two inputs—the original input A, and B, which is an inverted and slightly delayed version of A. The delay is caused by the accumulation of the inherent propagation delays of the three NOT gates. The output C of the AND gate is thus a small square pulse every time the signal A goes from low to high. Hence, the multivibrator of Fig. 3.23 is also called a pulse detector, as it detects the rising edge of the pulse A. It is said to be monostable since it has essentially only one stable state—the low value of C. For brief periods—when the signal A rises from low to high—the output exists in the high state.

Fig. 3.23 Monostable multivibrator
Lastly, we look at the bistable multivibrators. A simple example of a bistable multivibrator is the S-R latch shown in Fig. 3.24.

 Input      Old outputs     New outputs
S    R      Q     Q̄          Q     Q̄
0    1      ×     ×          1     0     SET
1    0      ×     ×          0     1     RESET
1    1      ×     ×          0     0     Illegal
0    0      0     1          0     1     Latch
0    0      1     0          1     0     Latch
0    0      0     0          ?     ?     Unpredictable
0    0      1     1          ?     ?     Unpredictable

Fig 3.24 Bistable multivibrator (S-R Latch)
The × indicates that the value is irrelevant to the output. Q and its complement Q̄ are expected to have complementary values. Therefore, the inputs S = R = 1 are considered illegal, since they force both Q and Q̄ to go to zero.

An interesting situation arises when S = R = 0, such as would happen when the multivibrator is powered up. If Q and Q̄ are both initially in the de-energized state with value 0, then the new values at both outputs would be 1, which, in turn, makes them both zero, and the cycle repeats: the bistable multivibrator would behave like an astable multivibrator. On the other hand, if one of the gates reacts faster than the other, as is typically the case in real circuits, then Q and Q̄ settle into a consistent state. For example, if the upper gate reacts faster than the lower gate, then Q stabilizes to 1 while Q̄ remains at 0. Conversely, if the lower gate reacts faster, Q̄ stabilizes to 1 and Q remains at zero. The states that the two outputs (Q and Q̄) settle into are thus unpredictable, and are denoted with ? symbols. The same reasoning applies if the two outputs initially start out in the illegal state where both are 1. If, on the other hand, the initial states of Q and Q̄ are complementary, and S = R = 0, then the outputs remain latched in the old state; this situation is labeled 'Latch' in the table.
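The behavior of the S-R latch can be sketched with a cross-coupled NOR model iterated over gate-delay steps (a simplification that updates both gates simultaneously; the wiring is one plausible realization consistent with the text's table, in which (S, R) = (0, 1) is SET):

```python
# Cross-coupled NOR model of the S-R latch of Fig. 3.24, iterated until the
# outputs settle. (S, R) = (0, 1) is SET, (1, 0) is RESET; (1, 1) forces
# both outputs to 0, which is the illegal state described above.
def nor(a, b): return 1 - (a | b)

def sr_latch(s, r, q, qbar, steps=8):
    for _ in range(steps):            # each step models one gate delay
        q, qbar = nor(s, qbar), nor(r, q)
    return q, qbar

print(sr_latch(0, 1, 0, 1))  # SET   -> (1, 0)
print(sr_latch(1, 0, 1, 0))  # RESET -> (0, 1)
print(sr_latch(0, 0, 1, 0))  # latch: outputs held at (1, 0)
```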
3.4.2.2 Latches

If the initial output states of the S-R latch shown in Fig. 3.24 are consistent (that is, Q and Q̄ have complementary values), then the output states are maintained when S and R are set to zero. We can drive the outputs to the Q = 1, Q̄ = 0 state by sending a SET signal (S = 0, R = 1). Sending a RESET signal (S = 1, R = 0) puts the outputs in the Q = 0, Q̄ = 1 state.

A variant of the S-R latch is the gated S-R latch shown in Fig. 3.25. The gate E (for Enable) allows the SET and RESET signals to act on the S-R latch only when E = 1. When E = 0 the output values Q and Q̄ are maintained regardless of the input values at S and R. Hence, the circuit shown in Fig. 3.25 is called a gated S-R latch.

Fig. 3.25 Gated S-R latch

The illegal and unpredictable situations in the S-R latch (Fig. 3.24) arise only when S and R have the same values. One can preclude S and R from taking the same values as shown in the following circuit.
Fig. 3.26 D latch
With the gate E = 1, if D is set to zero, then the output would be Q = 0, Q̄ = 1. On the other hand, with E = 1, if D is set to 1, then the output will be Q = 1, Q̄ = 0. That is, with E = 1, the value of Q follows the value of D. Once the outputs have the desired values, if E is turned off (set to zero), then the output values are maintained regardless of the value of D. Thus, the D latch functions as a 1-bit memory into which the value of the bit D can be written when the gate is open (E = 1). D latches come as prepackaged circuits and are denoted by the symbol shown in Fig. 3.27.

Fig. 3.27 Circuit symbol for a D latch
3.4.2.3 Edge-triggering

In synchronous circuits the enable signal at gate E (see Figures 3.25-3.26) is a square wave called the clock signal, shown at the top in Fig. 3.28. A gated latch, such as the D latch (Fig. 3.26), allows the output Q to follow the input D whenever the clock signal is at 1. In edge-triggered circuits the gate E is open not for the entire duration for which the clock is at 1, but only for a short interval at the rising/falling edge of the clock signal. Limiting the time window over which the gate is open promotes stability in synchronous circuits. The circuits in which the outputs are allowed to change states only at the rising/falling edge of the clock signal are called edge-triggered circuits.

Fig. 3.28 Edge-triggering (a pulse detector, or monostable multivibrator, converts the input clock wave into a train of short output pulses at the clock edges)
The D latch shown in Fig. 3.26, and represented by the circuit symbol shown in Fig. 3.27, can be converted to an edge-triggered D latch, or D flip-flop, using the pulse detector shown in Fig. 3.28. The D flip-flop is shown in Fig. 3.29.
50 Principles of Modern Avionics
Fig. 3.29 D flip-flop
3.4.2.4 Flip-flops

An edge-triggered latch is called a flip-flop, which is a basic building block of synchronous circuits. Fig. 3.29 shows a D flip-flop. An S-R flip-flop can be similarly built by feeding the clock signal to the enable gate E through a pulse detector. The standard circuit symbols for D and S-R flip-flops are shown in Fig. 3.30.

Fig. 3.30 Circuit symbols for flip-flops (leading and trailing edge-triggered D and S-R flip-flops)
The triangular sign at the clock input indicates that the element is edge-triggered. Leading (positive) and trailing (negative) edge-triggering are distinguished with a small circle at the clock input.
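The difference between a level-sensitive latch and an edge-triggered flip-flop can be made concrete with a small behavioral model. This is an illustrative sketch, not any specific device; the edge is detected by comparing the current clock value with the previous one.

```python
# Behavioral sketch of a leading (rising) edge-triggered D flip-flop
# (Fig. 3.30): D is sampled only on a 0 -> 1 clock transition, not for
# the whole time the clock is high.
class DFlipFlop:
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def step(self, d, clk):
        if clk == 1 and self.prev_clk == 0:  # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

ff = DFlipFlop()
ff.step(d=1, clk=1)   # rising edge: Q <- 1
ff.step(d=0, clk=1)   # clock still high: D is ignored
print(ff.q)           # -> 1
```

A gated latch would have overwritten Q in the second step; the flip-flop does not, because no edge occurred.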
3.4.2.5 Serial In Parallel Out Register

We conclude the discussion on sequential logic circuits by describing a simple circuit built using flip-flops—the Serial In Parallel Out (SIPO) register. The SIPO register shown in Fig. 3.31 comprises four D flip-flops connected in series. The output Q of one flip-flop is connected to the input D of the next. At the rising edge of each clock cycle, the output of each flip-flop is transferred to the next flip-flop. After four clock cycles, the 4-bit sequence input at D is available at the output lines O1 – O4. For example, if at clock cycles 1 – 4 (t = 1, 2, 3, 4), D(t) is D(1) = 1, D(2) = 0, D(3) = 1, D(4) = 1,
then after the fourth clock cycle we have O4 = 1, O3 = 0, O2 = 1, O1 = 1. SIPO registers are used to convert serial data to parallel data. The shift operation implemented in the SIPO register is also used in the ALU (Arithmetic Logic Unit) for multiplication and division.

Fig. 3.31 Serial in parallel out (SIPO) register
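The shift behavior described above can be sketched in a few lines. This is an illustrative model of Fig. 3.31, reproducing the worked example: feeding D(1)=1, D(2)=0, D(3)=1, D(4)=1 leaves O1=1, O2=1, O3=0, O4=1 on the parallel outputs.

```python
# Sketch of the 4-bit SIPO register of Fig. 3.31: four D flip-flops in
# series; on each rising clock edge every flip-flop takes the value at
# its D input, so the serial bit stream shifts one stage along.
def sipo_shift(state, d):
    """One clock edge: d enters at O1, everything else shifts along."""
    return [d] + state[:-1]

state = [0, 0, 0, 0]              # O1, O2, O3, O4
for bit in [1, 0, 1, 1]:          # D(1)=1, D(2)=0, D(3)=1, D(4)=1
    state = sipo_shift(state, bit)

print(state)                      # -> [1, 1, 0, 1]  (O1=1, O2=1, O3=0, O4=1)
```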
3.4.2.6 Linear Feedback Shift Register

Linear Feedback Shift Registers (LFSRs) are used to generate the pseudo random noise in Global Positioning System (GPS) satellites, discussed in Chapter 4 [Helfrick 2002]. An LFSR is constructed using feedback in a SIPO register. Specifically, the outputs of a subset of flip flops in a SIPO register are XORed and fed as input to the SIPO register. Figure 3.32 illustrates an example of a 4-bit LFSR.

Fig. 3.32 4-bit linear feedback shift register (polynomial: 1 + X³ + X⁴)
It is easily verified that if the outputs O1, O2, O3, O4 are set to 1111 initially then the following sequence of 4-bit patterns is generated, with the transitions occurring at the leading edges of the clock pulses. 1111 → 0111 → 0011 → 0001 → 1000 → 0100 → 0010 → 1001 → 1100 → 0110 → 1011 → 0101 → 1010 → 1101 → 1110 → 1111 → 0111→ ….
The bit streams generated at each of the four output bits O1, O2, O3, O4 are shown in Table 3.33.

Table 3.33 Bit streams generated at the four outputs in Fig. 3.32

Output   Bit stream
O1       1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 0 0 0 ….
O2       1 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 0 0 ….
O3       1 1 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 0 ….
O4       1 1 1 1 0 0 0 1 0 0 1 1 0 1 0 1 1 1 1 ….
Table 3.33 shows that the same 15-bit sequence 100010011010111 is repeated at each of the four outputs in the LFSR shown in Fig. 3.32. The bit streams generated at O1, O2, O3, and O4 merely begin at different starting points in this repeating characteristic sequence. Thus, we could take the bit stream at any one of O1, O2, O3, and O4 to obtain the same repeating pseudo random sequence. It is pseudo random since it repeats. The LFSR is represented algebraically as

1 + X³ + X⁴

The superscripts 3 and 4 in X³ and X⁴ indicate that outputs O3 and O4 are XORed and fed as input to the first flip flop in the SIPO register. Every LFSR is represented as a polynomial such as the one shown above. A second example is shown in Fig. 3.34. Again starting with the pattern 1111, this LFSR generates the following sequence.

1111 → 0111 → 0011 → 1001 → 1100 → 1110 → 1111 → 0111 → ….

Fig. 3.34 4-bit linear feedback shift register (polynomial: 1 + X² + X⁴)
While the LFSR shown in Fig. 3.32 has a period of 15, the LFSR shown in Fig. 3.34 has a period of only 6. We will revisit LFSRs later in the discussion on GPS.
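Both LFSRs above are easy to simulate, which also verifies the two state sequences and periods quoted in the text. The function below is an illustrative sketch: taps are given as the output positions that are XORed and fed back into the first flip-flop.

```python
# Sketch of the 4-bit LFSRs in Figs. 3.32 and 3.34: the tapped outputs
# are XORed and fed back into the first flip-flop of a SIPO register.
def lfsr_sequence(taps, state):
    """Return successive states until the initial state recurs."""
    start, seq = list(state), []
    while True:
        seq.append(''.join(map(str, state)))
        fb = 0
        for t in taps:                 # XOR the tapped outputs O_t
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift; feedback enters at O1
        if state == start:
            return seq

# Polynomial 1 + X^3 + X^4: taps at O3 and O4 (Fig. 3.32)
print(len(lfsr_sequence((3, 4), [1, 1, 1, 1])))  # -> 15
# Polynomial 1 + X^2 + X^4: taps at O2 and O4 (Fig. 3.34)
print(len(lfsr_sequence((2, 4), [1, 1, 1, 1])))  # -> 6
```

Printing the first few states of the (3, 4) sequence reproduces 1111 → 0111 → 0011 → 0001 → … as listed above.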
3.5 SEMICONDUCTOR SWITCHES

Digital electronics owes its prominence in the modern world to semiconductor electronic switches, which have two important characteristics: they are small, and they can be toggled—turned on or off—using electrical signals at nanosecond time scales. In contrast, mechanical switches, such as those used in homes, are toggled through the application of a mechanical force. Mechanical switches are bulkier and several orders of magnitude slower. We discuss three archetypes of semiconductor switches—the diode, the bipolar junction transistor and the field effect transistor. Whereas the diode acts like a valve, allowing current to flow in one direction (when forward-biased), the transistors allow one to open/close a circuit electronically.
3.5.1 Diode

A diode comprises a p-type semiconductor—a semiconductor doped with a p-type impurity—and an n-type semiconductor—a semiconductor doped with an n-type impurity. For example, the semiconductor silicon can be doped with a pentavalent element like phosphorus to make an n-type semiconductor. The pentavalent impurity infuses the silicon with excess electrons, which function as mobile charge carriers. In order to make a p-type semiconductor, silicon can be doped with a trivalent impurity such as boron. A trivalent impurity infuses the silicon lattice with holes, which are positively charged and function as mobile charge carriers. Thus, flow of electric current in an n-type (p-type) semiconductor involves flow of negatively (positively) charged particles.

Fig. 3.35 Semiconductor diode (composition, circuit symbol, and I-V characteristic curve showing the forward bias, reverse bias and breakdown regions)
At the junction of a p-type semiconductor and an n-type semiconductor electrons and holes diffuse from regions of higher density to those of lower density. Thus, electrons build up on the p side of the interface and holes on the n side creating a small electric field pointing from the n side to the p side. Charge carriers—holes and electrons—that drift into the electric field are quickly swept away by the electric field to either the p side or the n side. Thus, a small region on either side of the interface is devoid of charge carriers and is called the depletion region. The voltage on the n side of the depletion region is higher than the voltage on the p side creating a barrier for holes to flow from the p side to the n side or for electrons to flow from the n side to the p side. The barrier, which is about 0.7 V in silicon, can be removed by applying a forward voltage—called forward bias—of about 0.7 V across the diode—with the externally applied voltage on the p side at higher value than the n side—as shown with the symbols + and – in the circuit symbol in Fig. 3.35. Once the barrier is removed, the diode effectively functions as a low impedance conductor. The forward bias voltage required to make a diode function as a low impedance conductor is about 0.7 V for silicon diodes and about 0.3 V for germanium diodes. On the other hand, if the voltage applied across the diode is in the opposite direction, that is, the diode is reverse biased with the n side at higher voltage than the p side, then the barrier across the p-n junction is increased and the current flow across the diode is effectively blocked, as shown in the characteristic curve in Fig. 3.35. If the reverse bias exceeds a certain threshold value, then the diode breaks down as shown in Fig. 3.35. 
The relation between the current I through a diode (taken as positive when it is flowing from the p side of the junction to the n side) and the voltage applied across the diode, denoted VD, is given by the Shockley equation [Shockley 1949]:

I = IS (e^(VD/(n·VT)) − 1)

The Shockley equation describes the behavior of the diode in the forward and reverse bias regimes. The factor n accounts for the non-ideality of a real diode; it is set to 1 for an ideal diode. VT = kT/q is the thermal voltage, where k is the Boltzmann constant, q the charge of the electron, and T the temperature in kelvin. To understand IS, consider a large negative value of VD in the reverse bias regime. Then I ≈ –IS. Thus, IS is the negative of the current flowing across the diode at large reverse bias. See [Li 2006] for a detailed discussion of semiconductor diodes.
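The asymmetry between the two bias regimes is easy to see numerically. The saturation current IS below is an illustrative value, not a datasheet figure for any particular diode.

```python
# Numerical illustration of the Shockley equation
#   I = I_S * (exp(V_D / (n * V_T)) - 1)
import math

k = 1.380649e-23        # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C

def diode_current(v_d, i_s=1e-12, n=1.0, temp=300.0):
    v_t = k * temp / q  # thermal voltage, ~25.9 mV at 300 K
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

print(diode_current(0.7))    # forward bias: current grows exponentially
print(diode_current(-1.0))   # reverse bias: I is approximately -I_S
```

At VD = −1 V the exponential term is negligible and the current saturates at −IS, consistent with the interpretation of IS given above.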
3.5.2 Bipolar Junction Transistor (BJT) A bipolar junction transistor (BJT)—so named because it has two n-p (or p-n) junctions— provides a current-controlled switch. Fig. 3.36 illustrates the two types of BJTs—the n-p-n and the p-n-p.
Fig. 3.36 Bipolar junction transistors (composition and circuit symbols of the n-p-n and p-n-p BJTs, showing collector, base and emitter)
The n-p-n transistor has a p-type semiconductor sandwiched between two pieces of n-type semiconductor. Although the arrangement looks symmetric, the two n-type semiconductor blocks are not identical. One of them—called the emitter—is doped more heavily with n-type impurities than the other, which is called the collector. The names emitter and collector serve as reminders that the n-type block called the emitter, with higher doping concentration, serves as the source of the negative charge carriers (electrons) for the current that flows from collector to emitter. The p-type semiconductor in the middle is called the base. The doping concentration of the base is also different from that of the emitter. In addition, the geometries of the emitter, base and collector—their thicknesses—are not identical. Therefore, the transistor is an asymmetric device. Analogous statements hold for the p-n-p transistor. The convention is to denote the direction of current flow between collector and emitter with an arrow on the line connecting the base and the emitter. In an n-p-n transistor the charge carriers (electrons) flow in the direction opposite to the conventional current; in a p-n-p transistor the carriers (holes) flow in the same direction as the current.

Fig. 3.37 n-p-n transistor viewed as two back-to-back diodes DBC and DBE, with depletion regions at the base-collector and base-emitter junctions, currents IC, IB and IE, and voltages VBC, VBE and VCE
The following discussion is restricted to the n-p-n transistor; the discussion of the p-n-p transistor is analogous. A transistor can be viewed as two diodes connected back-to-back as shown in Fig. 3.37. Since each of the two diodes DBC and DBE can be either forward biased or reverse biased, the transistor can operate in four modes, as shown in Table 3.38.

Table 3.38 Transistor modes

Transistor mode   VBE   VBC   Remarks
Saturation        >0    >0    Low impedance path from C to E.
Forward Active    >0    <0    Transistor functions as an amplifier.
The digitization of the analog voltage Vsignal to the binary output B2, B1, B0 is shown in Table 3.53. For voltages between 0 V and 5 V the digital output represents the floor of Vsignal—that is, the largest integer less than or equal to Vsignal—in binary representation.
3.8 DIGITAL COMPUTER

The digital computer is the core resource in modern avionics. It is responsible for data processing, generating much of the communications data, flight management, collision avoidance, blind landing and vehicle health management, to mention a few of its tasks. In this section we present a simplified overview of the digital computer. The discussion does not focus on an actual computer but rather on the key concepts that underlie most computer systems. For a more detailed discussion of the digital computer the reader is referred to [Patterson 2007; Tooley 2007].
3.8.1 Architecture

The hardware of a digital computer comprises four main subsystems—the Central Processing Unit (CPU), the main memory, the interfaces to external devices and networks, and the buses that facilitate data transfer. The organization of these subsystems is illustrated schematically in Fig. 3.54. The CPU performs the computations. The operations that a CPU can perform are quite elementary and few in number. The CPU's versatility stems from the richness of the set of composite operations that can be performed using the CPU's elementary operations. The second noteworthy aspect of the CPU is its speed. A typical CPU can perform several hundred million elementary operations every second. The sequence of operations to be performed by a CPU is specified by a program that is stored in the main memory. A program comprises a sequence of instructions written in the machine language. The CPU's hardware is capable of executing every valid instruction written in the machine language. At a coarse level the CPU functions as follows. Starting with the first instruction in a program, the CPU loops through the following four steps.
Fig. 3.54 Architecture of a typical computer (CPU, main memory, secondary storage, and interfaces to networks and peripherals, connected by the address, data and control buses)
1. Fetch the next instruction to be executed from the main memory. The address of the next instruction is stored in a register called the Instruction Pointer (IP).
2. Execute the instruction at a hardware level.
3. If the instruction that was just executed is the last instruction of the program, STOP. Else, determine the location of the next instruction to be executed.
4. Update the IP to point to the next instruction. Go to Step 1.

The CPU exits the above loop normally, after the last instruction has been executed. It could also exit the loop abnormally if an exceptional situation—such as an external interrupt or overflow (an attempt to store a larger number than can be stored in a variable)—occurs. In addition to a program, the main memory also stores the input data for every instruction, and the output that results from executing an instruction. Thus, the CPU reads an instruction and the input data for the instruction from the main memory and writes the result of the operation back into it. The flow of instructions and data between the CPU and the main memory is facilitated by three buses. The address bus is used by the CPU to communicate the address of the memory location for its read or write operation. The data bus supports the flow of data between the CPU and memory. The control bus is used for communicating control signals. Besides the CPU and the main memory, the computer also has hardware that allows it to connect to external devices—such as external drives, monitors, keyboards, printers, sensors and actuators—and networks. The interface hardware also uses the internal buses for communication with the CPU and the memory.
3.8.2 Representation of Data and Instructions Before proceeding to a discussion of the working of a CPU we briefly review the formats for data and instructions.
3.8.2.1 Data

The primitive data types are integer, real (floating point), Boolean, character and string. An integer is stored in a word, which is a collection of bytes. The size of a word in a machine depends on the design of the CPU. A typical word size is 4 bytes (32 bits). Integers can be either signed or unsigned. A word with four bytes can hold unsigned integers in the range 0 to 2³² – 1. Signed integers are represented in several formats. One of the formats is the 2's complement scheme discussed earlier. A 32-bit word can hold signed integers in the range –2³¹ to 2³¹ – 1.

Real numbers are represented in the so-called floating point format in a computer. In the floating point format a real number R is represented as a triple (σ, S, e). The value of R is

R = (–1)^σ × S × B^e

where σ is 0 or 1 (represented by a single bit), S is an integer that contains the significant digits, the base B is a design feature of the computer, and e is the exponent. The IEEE 754-2008 standard [IEEE 2008] specifies the standard for representing floating point numbers. For example, the "binary 32" format specifies the following structure for floating point numbers:

IEEE 754-2008 binary 32 format

Number of bits for sign (σ): 1
Number of bits for exponent (E): 8
Number of bits for significand (S): 23
Base (B): 2

Fig. 3.55 Details of binary 32 floating point number
A floating point number in binary 32 format is structured as follows: bit b31 holds the sign σ, bits b30 … b23 hold the exponent (E), and bits b22 … b0 hold the significand (S).

Fig. 3.56 Organization of bits in binary 32 floating point numbers
The exponent E is in a biased format. That is, if the unsigned integer represented by the 8 bits is E, then the actual exponent e is given by e = E – 127
where 127 is called the bias. The values E = 0 and E = 255 are reserved. Therefore, the actual exponent e is constrained to lie in the interval

–126 ≤ e ≤ 127

A number, represented in the format shown in Fig. 3.56, whose biased exponent E has a value between 1 and 254 is called a normal number. For normal numbers the significand S is the binary number

S = 1.b22b21 … b0

The leading bit 1 to the left of the binary point is implicit in the representation. When the biased exponent is either 0 or 255, the binary 32 format represents special values as shown in Table 3.57. Subnormal numbers are nonzero numbers whose magnitudes are smaller than the magnitude of the smallest normal number, namely 2⁻¹²⁶.

Table 3.57 Implicit leading bit in binary32 format in IEEE 754-2008

Biased Exponent E   b22 = b21 = … = b0 = 0         bi ≠ 0, for some 0 ≤ i ≤ 22
E = 0               zero                           Subnormal Number
1 ≤ E ≤ 254         Significand: 1.b22b21 … b0     Significand: 1.b22b21 … b0
E = 255             Infinity                       NaN
In summary, for a normal number R, represented in the format shown in Fig. 3.56, the value is given by

R = (–1)^b31 × (1.b22b21 … b0)₂ × 2^((b30b29 … b23)₂ – 127)

The other standard basic formats defined in IEEE 754-2008 are binary64, binary128, decimal64 and decimal128. The reader is referred to [IEEE 2008] for details. Although the Boolean data type requires just one bit to represent 0 or 1 (false or true), it is rarely implemented using just one bit, for reasons of efficiency. If a variable has a value of 0, then it is interpreted as having the Boolean value 0 (false), and 1 (true) otherwise. Data types such as character, as well as composite data types such as string and array, are higher level abstractions that are exposed by high level programming languages. Our focus here being on machine language, we restrict our discussion to the three types mentioned above.
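The field layout of Fig. 3.56 can be inspected directly with the standard library: `struct` packs a Python float into the four IEEE 754 bytes, after which the sign, biased exponent E and significand bits can be masked out and the formula above applied.

```python
# Decoding the binary32 fields of Fig. 3.56 for a normal number.
import struct

def decode_binary32(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                 # b31
    e_biased = (bits >> 23) & 0xFF    # b30..b23 (biased exponent E)
    frac = bits & 0x7FFFFF            # b22..b0  (fraction bits)
    # value of a normal number: (-1)^sign * 1.frac * 2^(E - 127)
    value = (-1) ** sign * (1 + frac / 2**23) * 2.0 ** (e_biased - 127)
    return sign, e_biased, frac, value

print(decode_binary32(1.0))    # -> (0, 127, 0, 1.0)
print(decode_binary32(-6.5))   # -> (1, 129, 5242880, -6.5)
```

For −6.5 = −1.625 × 2², the actual exponent e = 2 is stored as E = 129 = 2 + 127, and the implicit leading 1 means only the fraction 0.625 (5242880/2²³) is stored.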
3.8.2.2 Instructions

The vast diversity of tasks performed by the computer is reducible to a rather small set of elementary operations that a CPU can execute at the hardware level. The instructions that a CPU is designed to execute—together called the CPU's instruction set—are machine-dependent. A program written using the CPU's instruction set is said to be written in machine language, the only language that a CPU understands. Table 3.58 illustrates the generic structure of instructions in machine language with a few examples. The structure
illustrated in Table 3.58 is cumbersome; a refinement of the structure is discussed later. The examples shown are not taken from any specific machine.

Table 3.58 Examples of machine-level instructions

Operation Code   Address-1     Address-2      Address-3      Instruction Name   Description
0001H            A's Address   B's Address    C's Address    ADD                C ← A + B
0002H            A's Address   B's Address    C's Address    COMPARE            if (A < B), then C ← 1, else C ← 0
0003H            A's Address   I1's Address   I2's Address   BRANCH             if (A = 0), then IP ← I1, else IP ← I2
0004H            I's Address   —              —              JUMP               Jump to instruction at address I
0005H            A's Address   B's Address    C's Address    AND                C ← A & B
For concreteness we assume that the words in the computer are four bytes long. The main memory can then be viewed as an array of words, as shown in Fig. 3.59.

Word 1 → Bytes 0–3
Word 2 → Bytes 4–7
Word 3 → Bytes 8–11
Word 4 → Bytes 12–15

Fig. 3.59 Organization of memory
The address of the first word would be 0—the number of the first byte of the word. The address of the second word would be 4, that of the third and fourth words 8 and 12 respectively. The size of the address field in an instruction is determined by the size of the addressable memory. For example, a main memory that has 2¹⁶ = 65,536 = 64 K words can be fully addressed using an address field that is two bytes long. The exact size of the operation code—hereafter called opcode—field depends on the size of the machine's instruction set. The size of the address fields is determined by the size of the addressable main memory. The sizes of the opcode and address fields are hence machine-dependent features. If we take the size of the opcode field to be two bytes and assume that the addressable main memory has 65,536 words—making each address field two bytes long—then two words would be needed to represent each instruction.
Each of the instructions shown in Table 3.58 has four fields: one field for the opcode and three fields for addresses. An instruction's opcode specifies the operation to be performed by the CPU. For example, the ADD instruction specified by the opcode 0001H (hexadecimal format) signals to the CPU that it should retrieve the numbers at Address-1 and Address-2, add the numbers and write the sum into the word at Address-3. Thus, if we seek to add the numbers in words 1 and 2 and put the sum in word 3, the ADD instruction would be

0001H   0000H   0004H   0008H

Fig. 3.60 An example of an ADD instruction

The second example (opcode: 0002H) compares the two numbers at Address-1 and Address-2, and writes 1 into the word at Address-3 if the number at Address-1 is less than that at Address-2, and zero otherwise. Examples 3 and 4 implement conditional and unconditional jumps respectively. The conditional jump instruction (opcode: 0003H), named BRANCH, asks the CPU to check if the number in the word at Address-1 is zero, and if it is, jump to the instruction at address I1, or to the instruction at address I2 otherwise. The CPU gets the address of the next instruction that it needs to execute from its Instruction Pointer (IP) register. After an instruction such as ADD has been executed, the IP is incremented by the control unit inside the CPU (discussed below) as

IP ← IP + 8

assuming that each instruction is 8 bytes long. That is, the CPU proceeds to the next instruction in the program. However, after executing a BRANCH instruction, depending on the value of the number at Address-1, the control unit updates the IP as IP ← I1 or IP ← I2. Address-2 (Address-3) in the BRANCH instruction specifies the address of the word that contains the address I1 (I2) that should be put into the IP if the word at Address-1 has a value of 0 (a value other than 0). Thus, the field labeled Address-2 (Address-3) in the BRANCH instruction contains the address of an address. The JUMP instruction (opcode: 0004H) merely updates the IP as

IP ← I

The JUMP instruction forces the program to continue execution at the instruction at address I. It is used for looping. The final example—the AND instruction (opcode: 0005H)—performs a bitwise AND operation on the numbers at Address-1 and Address-2, and puts the result into the word at Address-3. The examples discussed above have a shortcoming, which is that the instructions are large and require two words per instruction. The instruction sizes can be decreased even as
the range of addressable memory is increased by breaking up a complex instruction such as ADD, discussed above, into simpler instructions. For example, the ADD instruction can be implemented using the simpler instructions shown in Table 3.61.

Table 3.61 Instructions with a single address field

Opcode   Address       Instruction Name   Description
01H      A's Address   LOADA              Accumulator ← A
02H      B's Address   ADDA               Accumulator ← Accumulator + B
03H      C's Address   STOREA             C ← Accumulator
In the simplified instruction format shown in Table 3.61 the opcode is allocated 1 byte, with three bytes allocated for the single address field. The architecture supports an addressable memory of size 16 mega words, and an instruction set comprising 256 instructions. Table 3.61 illustrates the implementation of the ADD instruction from Table 3.58. In the first instruction—opcode 01H—the number at the address specified in the address field is loaded into an accumulator within the CPU. In the second instruction—opcode 02H—the number at the address specified in the address field is added to the number in the accumulator, leaving the result in the accumulator. In the third instruction—opcode 03H—the number in the accumulator is transferred to the address specified in the instruction's address field. Holding intermediate data in a register—called the accumulator—that resides within the CPU enables the CPU to implement the ADD instruction of Table 3.58, which has three address fields, using a sequence of three instructions each with a single address field.

LOADA A; ADDA B; STOREA C  ⇔  ADD A B C

Fig. 3.62 Implementation of an instruction with multiple address fields as a sequence of instructions with a single address field
Similarly, the COMPARE, BRANCH, JUMP and AND instructions shown in Table 3.58 can also be realized as sequences of instructions with a single address field, using an accumulator. If the computer has the bandwidth on the address bus and the data bus to retrieve the data in variables A and B in parallel, then the instructions shown in Table 3.58, though longer, would execute faster than their implementation using simpler instructions, such as those shown in Table 3.61, in which A, B and C are accessed sequentially. The design of
instruction’s format in a computer architecture, thus, depends on several factors such as the size of the addressable main memory, the size of the instruction set, the bandwidths of the data bus and the address bus. In the following discussions we will continue to refer to the two instruction formats discussed above—one with three address fields and the other with a single address field.
3.8.3 Central Processing Unit (CPU)

The main function of the CPU is to fetch instructions from the main memory and execute them. Specifically, the CPU cycles through the loop shown in Fig. 3.63.

Fetch → Decode → Execute → Write

Fig. 3.63 Coarse-grained description of the CPU's functionality
The first step in the cycle is to fetch the next instruction to be executed. As mentioned earlier, the CPU gets the address of the next instruction from a register called the Instruction Pointer (IP). The second step is to decode the opcode in the fetched instruction to determine the operation that the CPU needs to perform. After decoding the instruction the CPU executes the operation specified in the instruction. Performing the operation could entail retrieving data from the main memory. The final step of the cycle is to write the result of the execution into either a location in the main memory or some register within the CPU. In the following discussion we look at a simple model of the CPU. Actual implementations of CPU hardware include modifications intended to improve the speed and efficiency of the CPU.
3.8.3.1 Architecture A CPU comprises three main subsystems—the Arithmetic Logic Unit (ALU), the Control Unit and an internal bus—in addition to miscellaneous hardware such as registers. We will discuss each of the components of the CPU, in turn, and then describe how the components inter-operate to achieve the functionality shown in Fig. 3.63. The architecture of a simple CPU is shown in Fig. 3.65. We discuss the different components of a CPU below [Tooley 2007].
Clock and Instructions The CPU operates as a synchronous circuit. A master clock, which generates a rectangular waveform of fixed frequency, provides the unit of time for a CPU as well as the computer’s memory and interface circuits. Each clock cycle is called a T-state. Each of the four steps shown in Fig. 3.63 spans an integral number of clock cycles. The exact number of clock cycles spanned by each step is machine-dependent. As an illustrative example, in Fig. 3.64 we
have shown that the instruction fetch, decode, execute and write steps of the ADD instruction use 3, 2, n and 2 clock cycles respectively. The exact number of clock cycles spanned by the execute step depends on the implementation as well as the type of instruction. While atomic instructions such as ADD require a few clock cycles, a composite instruction such as MULTIPLY, which involves repeated additions, requires a considerably larger number of clock cycles.
Fig. 3.64 Synchronization of clock and instruction execution in a generic CPU (instruction fetch: T1–T3; instruction decode: T1–T2; execute, comprising data fetch and calculate: T1–Tn; write/load address of next instruction: T1–T2)
Instruction Pointer Sometimes also called the program counter, the Instruction Pointer (IP) stores the address of the next instruction that is to be executed by the CPU. After completing the execution of an instruction the Control Unit (CU) generally increments the IP to point to the next instruction in sequence within the program. However, when a JUMP/BRANCH instruction is encountered the IP is not incremented but is loaded with the address provided in the JUMP/BRANCH instruction.
Stack Pointer The stack is a region of the main memory that holds intermediate data during computation. Data is stored in and retrieved from a stack using last-in-first-out approach. For example, a stack is used to hold intermediate data during recursive or nested calls to subroutines. That is, when a process (program) calls another process as a subroutine then the state of the calling parent process—the values of its local variables and registers—must be saved before the called child process begins execution on the CPU. The stack holds the saved state of the calling parent process while the child process runs on the CPU. After the child process finishes its computation the saved state of the parent process is retrieved from the stack enabling the parent process to resume its computation. The stack is also used to save the state of a process when the CPU receives an interrupt that has a higher priority than the process that it is currently running on the CPU (see below).
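The last-in-first-out state saving described above can be sketched in a few lines. The dictionaries standing in for a process's registers and local variables are illustrative, not from the text.

```python
# Sketch of LIFO state saving on a stack during nested subroutine
# calls: the most recently saved (child) state is restored first.
stack = []

def save_state(state):
    stack.append(dict(state))   # push a copy of the caller's state

def restore_state():
    return stack.pop()          # last saved state comes off first

save_state({'acc': 7})          # parent calls a child subroutine
save_state({'acc': 99})         # child calls a grandchild
print(restore_state())          # -> {'acc': 99}  (child restored first)
print(restore_state())          # -> {'acc': 7}   (then the parent)
```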
Fig. 3.65 Architecture of a simple CPU (control unit with instruction decoder, ALU with registers A, B and C, flags register with carry/borrow, instruction register, instruction pointer, stack pointer, address registers, cache, multiplexer, a high speed internal data bus, and buffers to the external data and address buses)

Figure 3.66 illustrates the organization of the main memory in the Von Neumann architecture [Patterson 2007]; see § 3.8.4. The lowest addresses in the main memory are reserved for the system's use and are inaccessible to user programs. The machine code of user programs resides above the reserved space. The static data for a program—such as constants and arrays of fixed size—are stored above the machine code. Dynamic data—such as linked lists, which can grow in size as the program executes—are stored in addresses above the space earmarked for static data. The memory holding the dynamic data is also called the heap, and it grows from lower to higher memory addresses. On the other hand, the stack grows from the highest address towards lower addresses. Thus, the stack and heap grow towards each other. The stack pointer holds the address of the first unused memory location next to the stack, that is, the highest unused address in the memory. If new data is to be pushed onto the stack, the data can be written starting at the address held in the stack pointer.

Highest address (FF…FH)
  Stack (grows downward; stack pointer marks its boundary)
  Free space
  Heap (dynamic data; grows upward)
  Static data
  Machine code
  Reserved
Lowest address (00…0H)

Fig. 3.66 Generic organization of main memory in Von Neumann architecture
Instruction Register The first step in the cycle shown in Fig. 3.63 is fetching the instruction to be executed. The fetched instruction is temporarily stored in the Instruction Register (IR) before it is decoded. While Fig. 3.65 illustrates a single IR, more recent architectures incorporate a set of IRs to facilitate instruction pre-fetching. If the current instruction is not a JUMP/BRANCH instruction, then the address of the next instruction is known even before the execution of the current instruction is completed. Thus, the CPU could pre-fetch the next instruction even while the execution of the current instruction is in progress. In fact, the CPU could pre-fetch several subsequent instructions while executing the current instruction, thereby creating a pipeline of pre-fetched instructions. The parallelism embodied in pre-fetching reduces the number of clock cycles needed per instruction, thereby speeding up the computation.
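The cycle savings from pre-fetching can be illustrated with a back-of-the-envelope model; the three-stage fetch/decode/execute split and the one-cycle-per-stage assumption are purely illustrative.

```python
# Cycle-count model of instruction pre-fetching, assuming a pipeline of
# `stages` steps, each taking one clock cycle.
def cycles_without_prefetch(n_instructions, stages=3):
    return n_instructions * stages  # stages run strictly in sequence

def cycles_with_prefetch(n_instructions, stages=3):
    # While one instruction executes, the next is already being fetched:
    # the pipeline fills once, then retires one instruction per cycle.
    return stages + (n_instructions - 1)

cycles_without_prefetch(100)  # 300 cycles
cycles_with_prefetch(100)     # 102 cycles
```

For long instruction streams the pre-fetched pipeline approaches one cycle per instruction, which is the reduction in clock cycles per instruction that the text describes.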
A related but different paradigm, which also implements instruction-level parallelism, is the so-called superscalar architecture. A superscalar CPU has multiple ALUs and multiple instruction and data pipelines derived from the same program. Based on an analysis of the dependencies among the instructions and data, the CPU determines which of the instructions can be executed in parallel. Instructions from different pipelines are then routed to different ALUs for execution in parallel. Superscalar architectures thus also lead to a significant reduction in the number of clock cycles needed per instruction.
External/Internal Data Bus and Address Bus Several units such as the CPU, memory and peripherals use the external data bus for transporting data. Similarly, registers within a CPU use an internal data bus for sharing data. A critical aspect of these buses is the logic that ensures that multiple users of a bus do not attempt to load data onto the bus simultaneously. The protocol used in the I2C bus, for example (see § 3.6), provides an example of how multiple masters are prevented from transporting data simultaneously on the bus.
Fig. 3.67 Address decoding logic
A second aspect of a bus is that a user of a bus needs to be isolated from the bus when the data being transported on the bus is not intended for the user. For example, multiple memory chips as well as the CPU chip connect to the external data bus. The address decoding logic (not shown in Fig. 3.65) at the interface between a chip (e.g., CPU) and the external address bus isolates the chip from the data bus when the data on the bus is not intended for the chip. Specifically, a subset of the address bus lines carry the address of the target chip. The address decoding logic of a chip compares its own address with the address of the target chip. In case of a match, the data on the data bus is made available to the chip. If
the address of the target chip does not match the chip's own address, the address decoding logic isolates the signals on the data bus lines from the chip. The address bus is used by the CPU to communicate the target address of a memory location or an external device in a read/write operation [Tooley 2007]. The address decoding logic is easily implemented. As a simple example, consider a device whose 4-bit address is 1101. The output of the circuit shown in Fig. 3.68 is 1 only when the input bits are 1101, and zero otherwise. Thus, the output bit can be used as a chip selector. If the selector bit is 1, the chip knows that it is the target for the data. On the other hand, if the selector bit is 0, the chip concludes that it is not the target and ignores the data on the data bus.

Fig. 3.68 Address selector logic for device with address A3A2A1A0 = 1101
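The gate logic of Fig. 3.68 can be checked in software. A minimal Python model follows; the function name and bit convention are illustrative (an inverter on A1 feeds a 4-input AND gate).

```python
# Address selector for a device at address A3A2A1A0 = 1101, mirroring Fig. 3.68.
def chip_select(a3: int, a2: int, a1: int, a0: int) -> int:
    """Output 1 only when the address lines carry 1, 1, 0, 1."""
    return a3 & a2 & (1 - a1) & a0

# Sweep all 16 possible addresses: only one should assert the selector.
matches = [addr for addr in range(16)
           if chip_select((addr >> 3) & 1, (addr >> 2) & 1, (addr >> 1) & 1, addr & 1)]
# matches == [13], i.e. only the binary address 1101 selects the chip
```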
Data and Address Bus Buffers The data bus buffer is a temporary register used by the CPU to interact with the external data bus. The data to be transmitted out of the CPU via the data bus is first loaded into the data bus buffer before it is made available on the data bus. Also, incoming data from the data bus is first loaded into the data bus buffer. Similarly, an address bus buffer is used to temporarily store the address of the target memory location for a read/write operation.
Control Bus The control bus is used by the CPU to communicate with the memory and the peripheral devices, and by the peripheral devices to gain the attention of the CPU. The following examples of signals communicated over the control bus illustrate its role.

Read: In reading data from a memory location, the CPU places the address of the target location on the address bus and a signal on the control bus to indicate that the data at the target location is to be read.

Write: In a write operation, the CPU places the address of the target location on the address bus, the data to be written to the target location on the data bus, and a signal on the control bus to indicate that the data on the data bus is to be written into the target location.

Interrupt Request (IRQ): An external device that seeks the CPU's attention communicates an IRQ signal to the CPU over the control bus. An interrupt may be processed by the
CPU depending on the priority of the interrupt relative to the priority of the task that the CPU is executing. A Non-Maskable Interrupt (NMI) is unique in that it has a priority that supersedes that of any process currently executing on the CPU. An NMI is processed as soon as it is received.
Instruction Decoder The instruction decoder logic extracts the operation code (opcode) of the instruction currently in the Instruction Register. The opcode is communicated to the Control Unit (CU) to enable it to select the hardware logic in the ALU that implements the current instruction. For example, the ALU has separate hardware circuits for implementing the ADD and (bitwise) AND operations. Depending on whether the current instruction is ADD or AND, the CU selects the appropriate hardware logic in the ALU to operate on the two inputs in Registers A and B. See Fig. 3.69.
Arithmetic Logic Unit (ALU) The ALU contains the circuits needed to implement the various instructions such as ADD, SUBTRACT, bitwise AND and bitwise OR. One of these circuits is selected by the CU, based on the instruction in the IR. The inputs to the ALU are obtained from Register A (the Accumulator) and Register B. The result of the computation is stored in Register C, and special situations that arise as a result of the computation are recorded in the Flags Register (discussed below). Some of the operations implemented in the ALU—such as multiplication/division—involve multiple steps; the intermediate results are stored in a cache available to the ALU. The ALU is a combinational circuit and is not controlled by a clock. Synchronization is imposed on the ALU by the CU.

Fig. 3.69 Schematic representation of an ALU (inputs A and B, carry/borrow in and the opcode; output C, carry/borrow out and the status flags)
Flags Register Special situations that arise as a result of a computation done by the ALU are signaled using bits in a dedicated register called the flags register. The following are examples of special situations. A bit in the flags register is set to 1 if the result of ALU’s computation is zero. A bit in the flags register is used to signal if the ADD operation results in a carry
bit. Another bit in the flags register is used to signal overflow. Yet another bit in the flags register could be used to signal if the outcome of ALU’s operation is negative.
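The ALU and flags behavior described above can be sketched as a toy model. The 8-bit width, the function name and the flag names are illustrative, not taken from any specific CPU, and the signed-overflow flag is omitted for brevity.

```python
# Toy model of the ALU and flags register: operate on the 8-bit contents of
# Registers A and B, returning the value for Register C and the flag bits.
def alu(opcode: str, a: int, b: int, carry_in: int = 0):
    mask = 0xFF
    if opcode == "ADD":
        raw = a + b + carry_in
    elif opcode == "SUB":
        raw = a - b
    elif opcode == "AND":
        raw = a & b
    elif opcode == "OR":
        raw = a | b
    else:
        raise ValueError(f"unknown opcode {opcode!r}")
    result = raw & mask
    flags = {
        "zero": result == 0,              # result of the computation is zero
        "carry": raw > mask or raw < 0,   # ADD produced a carry, or SUB a borrow
        "negative": bool(result & 0x80),  # sign bit of the 8-bit result
    }
    return result, flags

alu("ADD", 200, 100)  # (44, {'zero': False, 'carry': True, 'negative': False})
```

Note how 200 + 100 wraps around in 8 bits: the result 44 is accompanied by a set carry flag, exactly the kind of special situation the flags register is there to record.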
Control Unit The logic in the Control Unit (CU) is used to coordinate the operations of the CPU. The following examples illustrate the role of CU.
1. The CU communicates with the memory over the control bus to signal a read/write operation.

2. It receives the IRQ signals from external devices and initiates the interrupt processing routines. It is also responsible for (i) storing the system state prior to interrupt handling and (ii) updating the stack pointer.

3. It coordinates data transport over the internal data bus as well as data transfer between the CPU on the one hand and the memory and external devices on the other, synchronizing the data transfers with the system clock.

4. It selects the hardware logic in ALU to ensure that the ALU executes the instruction stored in the IR.

5. The CU is responsible for overseeing the flow of control during program execution by updating the instruction pointer.

Next, we take a closer look at the interactions of the different components, described above, by following one iteration of the loop shown in Fig. 3.63. As a representative example we choose the ADD instruction in Table 3.58. The following discussion is intended to describe the sequence of tasks that occur in a typical CPU. In an actual CPU the tasks would be optimized to maximize efficiency.

Fetch: In the T1 clock cycle of the fetch step the address in the IP register is loaded into the address bus buffer. The CU signals a read operation on the control bus. In the T2 clock cycle the instruction is retrieved from the memory and loaded into the data bus buffer. In the T3 clock cycle the instruction is loaded from the data bus buffer into the Instruction Register (IR).

Decode: In the T1 clock cycle of the decode step the opcode of the instruction in the IR is extracted by the instruction decoder and provided to the CU. In the T2 clock cycle of the decode step, the CU selects the relevant hardware circuit in the ALU, isolating all of the other hardware circuits in the ALU.
In the T2 cycle of an ADD instruction the CU also loads the addresses of the summands, and the memory address at which the result is to be stored, into address registers 1, 2 and 3; see Fig. 3.65.

Execution: The number of clock cycles spanned in this step depends on the instruction. For the ADD instruction that we are considering, the T1, T2 and T3 clock cycles of this step are used to retrieve the two summands and load them into Registers A and B (we are assuming that the address and data buses have sufficient bandwidth to retrieve two summands in parallel). Specifically, in the T1 clock cycle the multiplexer, under the direction
of the CU, loads the addresses of the two summands—stored in address registers 1 and 2 (Fig. 3.65)—into the address bus buffer. In the T2 cycle the values of the two summands are loaded into the data bus buffer of the CPU. Finally, in the T3 clock cycle the values of the summands are transferred from the data bus buffer to Registers A and B. The next phase of the execution step is to perform the computation (addition in our example). While atomic operations such as ADD, SUBTRACT, JUMP, AND and OR are implemented in a small number of clock cycles, composite operations such as MULTIPLICATION/DIVISION, which involve repeated ADD/SUBTRACT operations, require a considerably larger number of clock cycles. Under the coordination of the CU, the ALU performs the necessary computation and loads the result into Register C.

Write: In the T1 cycle of this step, the data in Register C is loaded into the data bus buffer and the target address, contained in Address Register 3, is loaded into the address bus buffer. The CU signals a write operation on the control bus. In the T2 cycle the data in the data bus buffer is written into the memory location at the address contained in the address bus buffer. In parallel, the IP is updated to point to the next instruction.

As mentioned before, the number of clock cycles spanned by an instruction depends on the instruction. For example, a JUMP instruction does not involve a write operation. Its execution step is also simpler and merely involves loading the address of the next instruction from the IR into the IP. Similar modifications of the above discussion for the other instructions listed in Table 3.58 are easily deduced. For details of the architecture of an actual CPU—such as the Intel 8086, the Intel Pentium or the AMD 39050—including the implementation details of the instructions in the CPU's instruction set, the reader is referred to the technical manual of the CPU and to [Patterson 2007; Tooley 2007].
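The fetch-decode-execute-write sequence traced above can be condensed into a toy interpreter. The flat-list memory and the tuple encoding (opcode, addr1, addr2, addr3) are simplifying assumptions, not a real instruction encoding.

```python
# Toy fetch-decode-execute loop for the ADD and JUMP examples discussed above.
def run(memory, program):
    ip = 0  # instruction pointer
    while ip < len(program):
        # Fetch: bring the instruction at IP into the "instruction register".
        opcode, a1, a2, a3 = program[ip]
        # Decode/Execute: select the operation and act on the operands.
        if opcode == "ADD":
            # Retrieve the summands, add, and write the result back (Write step).
            memory[a3] = memory[a1] + memory[a2]
            ip += 1
        elif opcode == "JUMP":
            ip = a1  # no write step; merely load the target address into IP
        elif opcode == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    return memory

run([7, 35, 0, 0], [("ADD", 0, 1, 2), ("HALT", 0, 0, 0)])  # memory becomes [7, 35, 42, 0]
```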
Interrupt and Exception Handling An interrupt is an unexpected request for CPU’s attention that is generated by an event that occurs outside the CPU. For example, when an external device such as a printer detects errors in the data transmitted to it, it requests CPU’s attention by sending an interrupt request to the CPU. In contrast to an interrupt, an exception is an unusual event that occurs inside the CPU. For example, when an arithmetic computation results in an overflow or involves division by zero, an exception arises [Patterson 2007]. When an interrupt request is received, the priority of the interrupt request is compared with that of the process currently running on the CPU. If the interrupt request’s priority is lower than that of the current process, the interrupt request is stored for later processing. If the priority of the interrupt is higher than that of the current process then the CPU stores its current state in a stack (described above). The interrupt processing involves executing an interrupt-handling program that depends on the type of interrupt. A section of the main memory stores a table that contains the locations of the interrupt-handling programs for the various interrupts. For example, Intel 8086 microprocessor earmarks 1 kilobyte of memory (with hexadecimal addresses 0000H to 03FFH) for storing the table. The location
of the relevant interrupt-handling program is loaded into the IP to start its execution. After the interrupt-handling program has executed, the CPU is assigned to the pending process or interrupt request that has the highest priority. An interrupt flag, which remains set as long as there are one or more active interrupt requests, is used to keep track of interrupts. (The interrupt flag is not shown in Fig. 3.65.) When an exception arises, the CPU stores the address of the instruction at which the exception occurred in the Exception Program Counter (not shown in Fig. 3.65). It then hands over control to the Operating System by loading the pointer to the appropriate systems program into the IP. The systems program, which is a part of the Operating System, does the exception handling.
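The priority comparison described above can be sketched as follows; the function and signal names are illustrative, and a larger number is assumed to mean a higher priority.

```python
# Sketch of priority-based interrupt dispatch: requests that outrank the
# running process (or are non-maskable) are handled now; the rest wait.
def dispatch(current_priority, requests):
    """Split (name, priority) interrupt requests into those handled
    immediately and those stored for later processing."""
    handled, deferred = [], []
    for name, priority in requests:
        if name == "NMI" or priority > current_priority:
            handled.append(name)   # supersedes the currently running process
        else:
            deferred.append(name)  # lower priority: stored for later
    return handled, deferred

dispatch(5, [("printer", 3), ("disk", 7), ("NMI", 0)])
# (['disk', 'NMI'], ['printer'])
```

The NMI is handled regardless of its numeric priority, mirroring the text's point that a non-maskable interrupt supersedes any running process.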
3.8.4 Main Memory The main memory is used to store both the programs that execute on the CPU and the input and output data of those programs. The two distinct main memory architectures are the Von Neumann architecture and the Harvard architecture; see Fig. 3.70. The Von Neumann architecture treats instructions and data on an equal footing. Both instructions and data are transported over a common data/instruction bus. Since data and instructions share a common bus, at any given time either data or instructions can be transported over the bus, but not both. This serialization of data and instruction flow between the CPU and the main memory leads to a significant degradation in the speed of computation. Since locations in the main memory can hold data as well as instructions, a common address space is used to address memory locations that contain data as well as memory locations that contain instructions.

Fig. 3.70 Comparison of Von Neumann and Harvard architectures (Von Neumann: a single main memory containing instructions and data, connected to the CPU by a shared data/instruction bus and an address bus; Harvard: separate instruction and data memories, each with its own address bus and its own instruction or data bus to the CPU)
In the Harvard architecture the data and instructions are stored in separate memories, respectively called the data memory and the instruction memory. The data memory, which holds the input data, the intermediate data and the output data of programs, has its own Data Address Bus, over which the address of a target location is communicated by the CPU to the data memory. The flow of data between the CPU and the data memory occurs over the Data Bus. The data memory also has a dedicated Data Control Bus (not shown), which is used by the CPU to signal to the data memory whether the CPU seeks to read or write. The programs that execute on the CPU—or, more elementally, the instructions—are stored in a separate memory called the instruction memory, which has its own dedicated address bus—the Instruction Address Bus—over which the CPU communicates the address of the next instruction. The instruction itself is transported from the instruction memory to the CPU over the Instruction Bus. Like the data memory, the instruction memory also has a dedicated control bus, the Instruction Control Bus. The separation of the main memory into data and instruction memories that have their own dedicated buses enables the transport of data and instructions to occur in parallel, speeding up the overall instruction execution. The increased speed, however, comes at the expense of additional hardware for buses. For examples of actual systems that are based on the Von Neumann and Harvard architectures see [Deshmukh 2005].
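The speed difference between the two architectures can be illustrated with a back-of-the-envelope bus-cycle count; the one-cycle-per-transfer assumption and the function names are purely illustrative.

```python
# Bus traffic for n instructions, each needing 1 instruction fetch and
# d data transfers, with every transfer taking one bus cycle.
def von_neumann_cycles(n_instr, d):
    return n_instr * (1 + d)    # a shared bus serializes all transfers

def harvard_cycles(n_instr, d):
    # Dedicated instruction and data buses let the two kinds of transfer
    # overlap, so the slower of the two streams sets the pace.
    return n_instr * max(1, d)

von_neumann_cycles(1000, 1)  # 2000 bus cycles
harvard_cycles(1000, 1)      # 1000 bus cycles
```

For the common case of one data transfer per instruction, the dedicated buses of the Harvard architecture halve the bus cycles, at the cost of the extra bus hardware noted in the text.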
3.9 INTEGRATED CIRCUITS Unparalleled miniaturization of electronics is made possible by the Integrated Circuit (IC) technology. Whereas circuits were previously built using discrete components—like individually packaged resistors, capacitors, diodes and transistors—the IC technology allows entire circuits to be realized on a single monolithic wafer of semiconductor (such as silicon). IC chips can contain analog circuits, such as operational amplifiers, or digital circuits, such as logic gates. Digital ICs are often classified by the scale of integration—that is, the number of logic gates on the chip. The following table [Tooley 2007] lists the different classes. Table 3.71 Integration scales and gate densities
Integration scale                      Number of gates on chip    Example
Small Scale Integration (SSI)          10^0 – 10^1                Flip flop
Medium Scale Integration (MSI)         10^1 – 10^2                Bus buffer
Large Scale Integration (LSI)          10^2 – 10^3                Small memory
Very Large Scale Integration (VLSI)    10^3 – 10^4                Small microprocessor
Ultra Large Scale Integration (ULSI)   10^4 – 10^5                Large microprocessor
One adverse consequence of high density of gates on chips is that the ICs generate large amounts of heat that has to be dissipated in order to keep the ICs within their operating
temperature range. Further, since circuits are distributed non-homogeneously within an IC, thermal hotspots can arise in certain regions of the chip, endangering the components and circuits within the region. Thermal simulations to identify and eliminate hotspots within an IC, deployment of cooling systems to drain the heat generated by the ICs, and thermal considerations in the design of interconnections between ICs on a motherboard are thus important issues in the design of multi-IC systems. Besides the scale of integration, the reconfigurability of its circuitry can also be used as a criterion to classify an IC as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
3.9.1 Application Specific Integrated Circuit (ASIC) and General Purpose IC The circuitry in an ASIC is not reconfigurable. The circuitry is designed for a specific application (purpose), such as a satellite, cell phone or voice recorder. Designing an ASIC is a labor-intensive process. In the design process the placement of logic gates on the chip and the routing of the interconnections among the logic gates are optimized not only to increase speed and reduce power consumption, but also to efficiently drain the heat generated by the gates. As a result of such design optimization, ASICs deliver superior performance. Their main drawback is the inflexibility of their hardware’s functionality.
3.9.2 Field Programmable Gate Array (FPGA) An FPGA has reconfigurable hardware circuitry. It comprises an array of logic blocks, and a set of interconnections among the blocks. The circuitry in a logic block can be reconfigured at any time to change the block’s functionality. The interconnections among blocks can also be reconfigured at any time to change the network topology of the blocks. The reconfigurability of the circuitry within blocks, as well as the interconnection network topology, enables an FPGA to provide a range of functionalities at the hardware level. FPGAs are useful in settings—such as prototyping—in which the hardware design changes need to be accommodated. Also, FPGAs can be used to speed up computation by customizing a portion of the hardware to run computation-intensive tasks—such as inner loops in programs. The reconfigurability of an FPGA’s hardware, however, comes at the expense of efficiency. While the array of logic blocks and the interconnections are optimized for speed, power consumption and thermal dissipation at the time of its fabrication, in general, the circuitry is sub-optimal after reconfiguration. See [Chandrasetty 2011] for a detailed discussion of ASICs and FPGAs.
3.10 FIBER OPTIC COMMUNICATIONS Fiber optic cables transmit signals using electromagnetic waves, unlike copper cables, which transmit signals using electrical current. Fiber optic communication has several advantages.
It has low thermal losses. Since the interaction between electromagnetic waves is very weak, fiber optic communications are practically immune to electromagnetic interference. Fiber optic cables are lighter than copper cables and thus yield a significant reduction in cable weight. Fiber optic communications are characterized by high bandwidth. Signals transmitted over fiber optic channels suffer relatively lower attenuation. Finally, fiber-optic circuits are free of the grounding issues that are associated with electric circuits. Fiber-optic communications do have some drawbacks, which are discussed later in the section [Tooley 2007].

Fig. 3.72 Fiber optic cable (a glass core surrounded by cladding; rays striking the core-cladding interface at angles greater than the critical angle undergo total internal reflection, and light is launched into the core from air)
The structure of a fiber-optic cable is illustrated in Fig. 3.72. It is made of a glass core surrounded by a glass cladding. The refractive index nco of the core is higher than the refractive index ncl of the cladding. If light is incident at the core-cladding interface from within the core at an angle θ > θc, where

    θc = sin⁻¹(ncl / nco),

then the incident beam is totally reflected at the interface, with no loss due to refraction. Assuming that the cable is not bent sharply, a ray incident at the core-cladding interface at θ > θc propagates through multiple total internal reflections with no refractive loss. Light is injected into the fiber from an external medium, which is usually air. At a certain angle θa, shown in the diagram at the bottom of Fig. 3.72, the refracted light in the core strikes the core-cladding interface at exactly the critical angle θc. As the angle of incidence in air is decreased below θa, the angle of incidence at the core-cladding interface increases beyond θc, and we observe an onset of transmission through repeated total internal reflection; θa thus defines the acceptance cone of the fiber.
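The two angles follow directly from the refractive indices. A short numerical check follows; the index values 1.48 (core) and 1.46 (cladding) are illustrative, and the acceptance angle is computed via the standard numerical aperture NA = √(nco² − ncl²).

```python
import math

def critical_angle(n_co, n_cl):
    """θc = arcsin(ncl / nco), measured from the interface normal."""
    return math.degrees(math.asin(n_cl / n_co))

def acceptance_angle(n_co, n_cl, n_air=1.0):
    """Maximum launch angle in air for guided rays: sin θa = NA / n_air."""
    na = math.sqrt(n_co**2 - n_cl**2)
    return math.degrees(math.asin(na / n_air))

print(round(critical_angle(1.48, 1.46), 1))    # ≈ 80.6°
print(round(acceptance_angle(1.48, 1.46), 1))  # ≈ 14.0°
```

Even a small index step between core and cladding yields a critical angle close to grazing incidence, which is why guided rays travel nearly parallel to the fiber axis.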
Fig. 3.73 Transmission of a digital signal over a fiber optic cable (a digital electrical signal drives a light emitting diode at the transmitter Tx; a photodiode at the receiver Rx recovers the digital signal)
The transmission of electrical signals over a fiber optic cable is illustrated in Fig. 3.73. A light emitting diode (LED) converts an electrical input to an electromagnetic signal, which is then injected into the optical fiber. The electromagnetic signal emerging at the other end of the optical fiber is sensed by a photodiode, which converts the optical signal back to an electrical signal.

An LED is made of a semiconductor that has a direct band gap. The energy difference between the lowest energy level in a semiconductor's conduction band and the highest energy level in its valence band is called the band gap of the semiconductor. When the momentum 3-vector of an electron in the lowest energy level of the conduction band coincides with the momentum 3-vector of an electron in the highest energy level of the valence band, the semiconductor is said to have a direct band gap, and an indirect band gap otherwise [Deshpande 2007]. In semiconductors with a direct band gap an electron can transition from the conduction band to the valence band without having to transfer any part of its momentum to phonon modes of the semiconductor lattice. Hence, an electron's transition from conduction band to valence band occurs with a high probability and photons—whose energy is equal to the difference in energies of the two electron levels—are emitted at a rapid rate. In contrast, in a semiconductor with an indirect band gap—such as silicon—an electron at the lowest energy level in the conduction band has to transfer some of its momentum, and in the process some of its energy, to phonons before it can reach the highest energy level in the valence band. An electron's transition from conduction band to valence band in a semiconductor with an indirect band gap is thus an inefficient process, and the photon emission due to such transitions occurs at an extremely low rate. Therefore, semiconductors such as silicon are not efficient emitters of photons.
Electrical current is generated in a photodiode through the inverse process. When photons—whose energy equals or exceeds the band gap—are incident on a semiconductor, electron-hole pairs are created. If the pair creation occurs close to or within the depletion region of the diode's p-n junction, then the electron and the hole are swept in opposite directions by the electric field in the depletion region, giving rise to a flow of charge (electric current). Although optical fibers have many appealing advantages, they do have some drawbacks. We mention four of them below [Tooley 2007].
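The band-gap condition for photodiode response can be checked numerically. The sketch below uses a silicon band gap of about 1.12 eV for illustration; the wavelengths 850 nm and 1550 nm are common fiber-optic operating windows.

```python
# A photodiode responds only to photons whose energy E = hc/λ meets or
# exceeds the band gap of its semiconductor.
H = 6.626e-34   # Planck constant, J·s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

def detectable(wavelength_nm, band_gap_ev=1.12):
    return photon_energy_ev(wavelength_nm) >= band_gap_ev

detectable(850)   # True: 850 nm photons (~1.46 eV) create electron-hole pairs
detectable(1550)  # False: 1550 nm photons (~0.80 eV) fall below silicon's gap
```

This is why silicon photodiodes serve short-wavelength links, while longer-wavelength systems use narrower-gap semiconductors.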
While injection of electrical current into a copper wire is relatively easy, the injection of photons into an optical fiber, and sensing the photons that emerge from it, is a more delicate matter. Efficient injection of photons from a photo-transmitter such as an LED into an optical fiber requires special connectors that ensure a precise coupling between the LED and the fiber. Similarly, an optical fiber and a photodiode also need to be coupled with a specially designed connector. Secondly, since the angle of incidence of light at the core-cladding boundary is an important factor in optical transmission, bending the optical fiber impacts the transmission of light through it. In order to avoid severe degradation in signal transmission, bends in optical fibers cannot exceed a threshold curvature, which depends on the properties of the fiber.

Fig. 3.74 Conversion of an electrical signal to an optical signal (an input electrical bit stream 10100 drives the LED transmitter; dispersion in the fiber optic channel stretches the pulses of the output electrical waveform)
Thirdly, electromagnetic waves of different frequencies travel at different speeds in the core, giving rise to a frequency-dependent dispersion of speeds. Dispersion stretches an optical pulse as it travels over a fiber optic cable, as shown in Fig. 3.74; in the figure the cable and receiver are included in the box called the fiber optic channel. Dispersion limits the data transfer rate across a fiber optic cable. Finally, the attenuation of signal traveling over a good fiber optic cable is about 2 decibels/km. The attenuation, like the speed of propagation, is also frequency-dependent. The attenuation occurs because of absorption of the electromagnetic energy by the core, the scattering at the core-cladding boundary and scattering within the core due to nonhomogeneities [Tooley 2007]. For a detailed discussion of fiber optic communications see [Downing 2005]. Boeing 777’s LAN (local area network), which supports onboard communications among its subsystems, is based on fiber optic technology and can support data transfer rates of 100 megabits per second (Mbps). Fiber optics are used in the ring laser gyroscopes (discussed in Chapter 4). The backbone of F-35 Lightning II fighter (discussed in Chapter 6) also uses fiber optic technology.
3.11 ELEMENTS OF AVIONICS SYSTEMS At a physical level the avionics infrastructure comprises the Line Replaceable Units (LRU) that house the core hardware, the data bus that connects the different avionics systems
distributed throughout the aircraft and the interface circuitry that connects the avionics infrastructure to sensors and actuators. Deferring a discussion of sensors to the next chapter, we take a closer look below at the notion of an LRU and the details of a simple data bus—ARINC 429.
3.11.1 Line-Replaceable Unit (LRU) The avionics systems embody a modular design. At a high level the hardware is organized as Line-Replaceable Units (LRUs). MIL-PRF-49506, Appendix B [MIL 2005] defines an LRU as follows: "An LRU is an essential support item which is removed and replaced at the field level to restore the end item to an operational ready condition. Conversely, a non-LRU is a part, component, or assembly used in the repair of an LRU/LLRU, when the LRU has failed and has been removed from the end item for repair." LRUs are designed and mounted with the stipulation that they should be easily replaceable. A defective LRU made by one manufacturer should be replaceable by a functional LRU made by a different manufacturer. Thus, an LRU should not only conform to the specified electrical interfaces, but also have specified dimensions, mounting brackets, weight and thermal characteristics. For example, the Integrated Core Processor (ICP)—the core computing hardware in the F-35—is organized into two racks with 23 LRUs and 8 LRUs respectively. In principle, if an LRU malfunctions, the ICP can be restored to operational status by swapping the defective LRU with a functional LRU, possibly made by a different manufacturer.
3.11.2 Data Bus A data bus enables the different electronic subsystems on an aircraft to communicate with one another. It comprises the wires that carry the digital data among subsystems, the control hardware and a common protocol used by all the subsystems to transmit messages. Separate standards have evolved for buses in military and civilian avionics. As a prototypical example, we discuss the ARINC 429 data bus, which is widely used in civilian aircraft. At a physical level a bus is somewhat analogous to a road and the data flowing on a bus corresponds to vehicular flow. A bus on which data flows only in one direction is called a unidirectional bus. In a bidirectional bus data can flow in either direction. If a bus allows the flow of one bit at a time—like a single-lane road—then it is called a serial bus, while a parallel bus—like a multi-lane road—supports simultaneous flow of multiple bits. Parallel buses support high data transfer rates. However, they involve a larger number of wires than serial buses and hence add to the weight of an aircraft. So the main bus in an aircraft is typically a serial bus. Each LRU has its own local bus, which handles signal transfers over short distances and is typically a parallel bus. The overall organization of a bus system is shown in Fig. 3.75.
Fig. 3.75 Generic avionic bus architecture (each LRU contains an avionic unit and an interface; parallel data flows within the LRU, and serial data flows over the system bus via bus couplers)
The interface unit within an LRU is responsible for converting parallel data flowing within the LRU into serial data that is transmitted over the system bus. The interface is also responsible for converting data format from that used within the LRU to the format used by the system bus. The interface is connected to the coupler through a stub cable. The coupler provides the electrical interface to the system bus. Groups of couplers are housed in coupler panels, which are situated at different locations within the aircraft. The system bus is terminated at both ends using terminators (not shown) to minimize signal reflection. See [Helfrick 2002] for a detailed discussion of the organization of a bus.
3.11.2.1 ARINC 429 Data Bus

Aeronautical Radio, Incorporated (ARINC) was an organization comprising aircraft manufacturers and airline service providers; it is currently part of Rockwell Collins. The ARINC 429 bus, proposed by ARINC, is one of the most popular data buses in civilian avionics. For example, it is used in the Airbus A310, the Boeing 737, 747, 757 and 767, and the McDonnell Douglas MD-11. We take a closer look at the electrical specifications and data transfer protocols of the ARINC 429 bus below. The following discussion is based on [Spitzer 1993, Tooley 2007, ARINC 2017]. ARINC 429 is a unidirectional bus. The bus supports data transmission at two speeds—12.5 kilobits per second (low speed) and 100 kilobits per second (high speed). Each word—a unit of transmission—is 32 bits long. Consecutive words are separated by zero voltage for four bit periods. Bit transmission uses the BPRZ (bipolar return to zero) scheme (see Fig. 3.6 and Fig. 3.76). The cable that carries the data comprises two wires between which a differential voltage of magnitude 10 V ± 1 V is maintained in the first half of each cycle, as shown in Fig. 3.76.
Fig. 3.76 Voltage signals for bit stream 11001 in the twisted pair of wires in ARINC 429 bus (wires A and B each switch among +5 V, 0 V and −5 V)
The voltage levels shown in Fig. 3.76 are with respect to ground voltage. The signals shown in Fig. 3.76 are idealized in the sense that they depict square waveforms in which the voltage levels rise and fall with infinite slope. In a real signal the rise and fall times are nonzero, as shown in Fig. 3.77. The rise (fall) time is defined as the time it takes for the signal to rise from 10% of its maximum magnitude to 90% of the maximum magnitude. The rate of rise/fall of voltage is called the slew rate.

Fig. 3.77 Slew rates in real signals (a ±5 V waveform annotated with the rise/fall time, high time and bit time)
ARINC 429 specifies that for high-speed data transfer (100 kbps) the rise/fall time must be 1.5 ± 0.5 µs, the high time 5 ± 0.25 µs and the bit time 10 ± 0.5 µs. For the low data transfer rate (12.5-14.5 kbps) the rise/fall time is required to be 10 ± 5 µs. The bit time is determined by the data transfer rate, with a tolerance of ±5%. The high time is half the bit time with, again, a tolerance of ±5%. Since ARINC 429 is a unidirectional bus, each LRU that seeks to transmit data needs to have its own bus, and may transmit to as many as twenty receivers. The unit of data transmission in ARINC 429 is a 32-bit word, which is organized as shown in Fig. 3.78.
Bit 32:     P (parity)
Bits 31-30: SSM
Bits 29-11: Data (bit 29 = MSB, bit 11 = LSB)
Bits 10-9:  SDI
Bits 8-1:   Label (bit 8 = LSB, bit 1 = MSB)

Fig. 3.78 Format of a word in ARINC 429
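The field boundaries above can be sketched in code. The following is a minimal illustration, not taken from the ARINC specification itself; the mapping of ARINC bit n onto bit n−1 of a machine integer, and the helper names, are conventions assumed here for the example:

```python
def field(word: int, lo: int, hi: int) -> int:
    """Extract ARINC 429 bits lo..hi (1-indexed, inclusive) from a 32-bit
    word, assuming ARINC bit n is stored as integer bit n-1."""
    width = hi - lo + 1
    return (word >> (lo - 1)) & ((1 << width) - 1)

def decode(word: int) -> dict:
    """Split a 32-bit ARINC 429 word into its five fields."""
    label_raw = field(word, 1, 8)             # bit 1 holds the MSB of the label
    label = int(f"{label_raw:08b}"[::-1], 2)  # reverse to conventional bit order
    return {
        "label_octal": f"{label:03o}",
        "sdi": field(word, 9, 10),
        "data": field(word, 11, 29),
        "ssm": field(word, 30, 31),
        "parity": field(word, 32, 32),
    }
```

With these conventions, a word whose low byte is 0b00010000 decodes to the octal label 010, since the label bits travel MSB first.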
The label field, which has eight bits, encodes what is being sent in the word. For example, label 010 (octal; decimal value 8) is used to send the current latitude of the aircraft. Label 064 (octal) signals that the word contains the nose tire pressure. Bits in the label field are organized with the least significant bit (LSB) in bit 8 and the most significant bit (MSB) in bit 1. The SDI, short for Source/Destination Indicator, is optional in ARINC 429. When used, it serves either as an extension of the data field (bits 11-29) to increase the resolution of the data, or as an extension of the label field to identify the receiver for which the data in the word is intended. Bits 30 and 31—called the Sign-Status Matrix (SSM) bits—are used for signaling. The interpretation of the SSM bits depends on the type of data being transmitted. The semantics of the SSM bits for three of the data types are shown in Table 3.79 [Spitzer 1993]. The data types are enumerated below.

Table 3.79 SSM bit values and their interpretation
Bit 31  Bit 30  BCD Numeric and Discrete       Ack./ISO/Maint. (AIM)   File Transfer
0       0       +/North/East/Right/To/Above    Intermediate word       Intermediate word/+/North/East/Right/To/Above
0       1       No computed data               Initial word            Initial word
1       0       Functional test                Final word              Final word
1       1       -/South/West/Left/From/Below   Control word            Intermediate word/-/South/West/Left/From/Below
The data field (bits 11-29) contains the actual data transmitted in the word. ARINC 429 allows transmission of five data types: (1) binary data (BNR), (2) binary coded decimal data (BCD), (3) discrete data, (4) maintenance/acknowledgment data and (5) file transfer using the Williamsburg/Buckhorn protocol. Binary Data: Binary (BNR) data is transmitted in the 2's complement format, with bit 29 set to 0 for positive numbers and 1 for negative numbers. Bit 28 is the MSB and bit 11 the LSB of the data. When the label field (bits 1-8) contains all 1's (binary 255 or octal 377) then
bits 11-18 encode the equipment identifier. A word with the octal 377 label can be transmitted by a source to all the receivers on a bus to inform the receivers of the equipment identifier of the source. The bits used to transmit binary data are determined by the resolution, the units and the range of the data, which are given in the ARINC 429 specification. For example, label 103 (octal) corresponds to selected airspeed, for which the following guidance is provided in the specification [ARINC429 2017]:
• Bits used: 19-29 (11 bits)
• Scale factor: 512
• Unit: knots
• Resolution: 1 knot
Bit 29 is used for the sign. Bit 28 represents half the scale factor, bit 27 a fourth of the scale factor and so on. Thus, the example shown in Table 3.80 represents a speed of 256 (bit 28; 1/2 of 512) + 64 (bit 26; 1/8 of 512) + 32 (bit 25; 1/16 of 512) = 352 knots. The 0 in bit 29 signals that it is a positive number.

Table 3.80 Example of BNR data

Bit #   29  28  27  26  25  24  23  22  21  20  19
Value    0   1   0   1   1   0   0   0   0   0   0
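The weighting rule just described can be expressed compactly. A minimal sketch (the function name and the bit-dictionary representation are illustrative conventions, not part of the standard):

```python
def bnr_value(bits: dict, msb: int = 28, lsb: int = 19,
              scale: float = 512.0) -> float:
    """Decode an ARINC 429 BNR field: the bit just below the sign bit (29)
    represents half the scale factor, the next bit a quarter, and so on.
    `bits` maps ARINC bit numbers to 0/1; missing bits are treated as 0."""
    value = 0.0
    weight = scale / 2.0
    for n in range(msb, lsb - 1, -1):
        value += bits.get(n, 0) * weight
        weight /= 2.0
    sign = -1.0 if bits.get(29, 0) else 1.0
    return sign * value

# The example of Table 3.80: bits 28, 26 and 25 set, sign bit 29 clear.
airspeed = bnr_value({28: 1, 26: 1, 25: 1})  # 256 + 64 + 32 = 352 knots
```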
The remaining bits in the data field—bits 11-18—are set to zero (that is, the data is padded on the right) or used for transmitting discrete data (see below). Binary Coded Decimal Data: If the data field uses the BCD format then bits 11-29 are partitioned to represent up to five BCD digits as follows:

Digit 1: bits 29-27 (3 bits)
Digit 2: bits 26-23
Digit 3: bits 22-19
Digit 4: bits 18-15
Digit 5: bits 14-11

Fig. 3.81 Data field format
Digit 1 is the most significant digit, while Digit 5 is the least significant. The maximum value that Digit 1 can assume is 7. If the most significant digit of the BCD data is greater than 7, then Digit 2 becomes the most significant digit and bits 27-29 are padded with zeros. The SSM field (see Table 3.79) is used to represent the sign of the data. As in the case of binary data, if the BCD data does not require all of the bits in the data field, then the unused bits are either padded with zeros or used to transmit discrete data (see below). Discrete Data: Discrete data is transmitted in two types of discrete words—general purpose discrete words and dedicated discrete words. General purpose discrete data is transmitted in words with octal labels 145-147, 155-161, 270-276 and 350-354, and is
used for communication between the aircraft's computers and for diagnostic information pertaining to maintenance. Dedicated discrete data is transmitted in the spare bits of BCD and BNR words [Spitzer 1993]. Acknowledgment, ISO or Maintenance (AIM) Data: The labels with octal codes 355-357 are reserved for AIM words. The initial, intermediate and final words in acknowledgment messages are signaled using the SSM bits as shown in Table 3.79. In the initial word, bits 9-16 are used to send the word count of the whole message. If the data sent in the message is intended for displays, then the second word is a control word, with the following format: bits 22-29 are set to zero; bit 21 is set to 1 if the display is to flash; bits 19-20 specify the character size; bits 17-18 specify the intensity of the display; bits 14-16 specify the color; and bits 9-13 specify the number of lines in the display. File Transfer: The file transfer protocol is used to transmit large amounts of data, and such communication usually occurs at the high transmission rate (100 kbps). A file may have a maximum of 127 records, each with 125 data words. Table 3.79 specifies the convention of using SSM bits to signal the initial, intermediate and final words of the transmission. The initial word itself is of one of eight types: (1) Request to Send, (2) Clear to Send, (3) Data Follows, (4) Data Received OK, (5) Data Received Not OK, (6) Synchronization Lost, (7) Header Information, and (8) Poll. The Header Information word is used to send the file size, and the Poll word is used to establish communication between the two LRUs. Further details may be found in [ARINC 1977]. Finally, bit 32 in the ARINC 429 word is used for parity. ARINC 429 uses odd parity: bit 32 is set to 1 if there is an even number of 1's in bits 1-31 of the word, and is set to 0 otherwise. The parity bit is used to detect errors in transmission.
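The odd-parity rule can be sketched in a few lines (a minimal illustration; the bit-numbering convention—ARINC bit n stored as integer bit n−1—is an assumption of this sketch):

```python
def with_odd_parity(word: int) -> int:
    """Set ARINC 429 bit 32 (integer bit 31 under the convention used here)
    so that the full 32-bit word contains an odd number of 1s."""
    ones = bin(word & 0x7FFF_FFFF).count("1")  # count 1s in bits 1-31
    parity = 1 if ones % 2 == 0 else 0         # odd parity: make the total odd
    return (word & 0x7FFF_FFFF) | (parity << 31)
```

A receiver performs the complementary check: a received word with an even number of 1s overall indicates a transmission error.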
Besides ARINC 429, several other data bus standards have been proposed. Notable among them are ARINC 717, which is used for Flight Data Recorders, and ARINC 629, which has significantly greater capability than ARINC 429. ARINC 629, which was developed in the 1990s, supports a data transfer rate twenty times higher than that of ARINC 429. Further, in contrast to ARINC 429, which allows a transmitter to communicate with twenty receivers, ARINC 629 is designed to allow up to 120 devices to connect to the bus. Also, in contrast to ARINC 429, which is a unidirectional bus, ARINC 629 is a bidirectional bus with no bus controller. It is used in the Boeing 777 and the Airbus A330/A340 aircraft. In contrast to the standards for civilian aircraft, the standard for data buses on military aircraft is provided by MIL-STD-1553B/1773B, which specifies a bidirectional bus that supports data transfers at the rate of 1 Mbps (million bits per second). The bus is controlled by a bus controller and has provision for up to 31 LRUs to connect to it. MIL-STD-1773B is the MIL-STD-1553B standard implemented using fiber optic cables [Tooley 2007].
3.12 FAULT TOLERANCE

A system is said to be fault tolerant if its performance is not affected by the failure of a limited number of its components. A hardware component could fail as a result of damage to the
component—such as burnout. For example, temperature spikes in an integrated circuit (IC) could burn out the IC, rendering it non-functional. Failure of a hardware component is also said to occur when the component continues to function, but erroneously. For example, a temperature sensor on an aircraft could return incorrect temperature readings. Analogous to the failure modes of hardware components, faults in a software module could arise as a result of runtime errors or logical errors. Runtime errors arise when a software module attempts to perform illegal operations, such as division by zero or accessing a forbidden region of memory. When a runtime error occurs, the execution of the software module is aborted. If the module has logical errors, then it continues to function, but erroneously. We note that onboard avionics hardware and software interact with the hardware and software deployed in ground stations and on satellites, and with the onboard human crew and human ATCs stationed on the ground. Faulty operation of the avionics could result from errors committed by humans, or from errors that originate in hardware/software outside the aircraft. For example, if the hardware on a GPS satellite malfunctions and transmits incorrect ephemeris data, then the onboard software would compute an erroneous position for the aircraft, even if it is functioning as it should. We exclude such extraneous faults from the discussion below, and focus only on faults that originate in the onboard avionics hardware and software. Several strategies are used to design fault tolerant hardware and software. We discuss a few prominent strategies below. The reader is referred to [Hitt & Mulcare 2007, Bartley 2001] for additional details.
Hardware

Fault tolerance in the face of failure of a component can be accomplished by providing redundancy. Fault tolerance in the case of erroneous performance of a component is accomplished using fault masking/reconfiguration.

Redundancy: One of the simplest approaches to fault tolerance is to provide redundancy for critical systems. An example is the triple-triple redundancy built into the Boeing 777's Fly-By-Wire system; see Fig. 3.82. The Boeing 777 has three Primary Flight Computers (PFC). Each PFC has three computing channels, with each computing channel comprising its own microprocessor (µP), dedicated power supply and bus interface. Even if all three computing channels of any one PFC were to malfunction, flight control would not be affected: the surviving PFCs would perform the necessary flight control functions. The PFCs communicate with the Actuator Control Electronics (ACE) in the Boeing 777. The ACE is also provided with redundancy. The Boeing 777 carries four ACEs onboard, each of which receives the flight control commands from the PFCs. If any of the ACEs fails, actuator control is unaffected, since the surviving ACEs are designed to perform the necessary control functions.
Fig. 3.82 Triple-triple redundancy for Primary Flight Computer (PFC): the Left, Center and Right PFCs each contain three computing channels (L1-L3, C1-C3, R1-R3); every channel has its own power supply, its own processor (AMD 29050, Motorola 68040 or Intel 80486—a dissimilar processor per channel) and its own ARINC 629 interface connecting it to the ARINC 629 buses
Fault Masking: Fault masking using Triple Modular Redundancy (TMR)—the approach that was used to implement fault tolerance in the guidance and control system of the Apollo spacecraft [Hitt & Mulcare 2007]—is illustrated in Fig. 3.83.

Fig. 3.83 Fault masking with triple modular redundancy (the input is fed to Replicas 1, 2 and 3, whose outputs feed a Vote Counter that produces the final output)
The hardware module, whose failure one seeks to safeguard against, is replicated threefold in TMR and twofold in Dual Modular Redundancy (DMR; not shown). The input is fed to all three replicas, each of which produces its own output (digital/analog). In the event of erroneous functioning of, say, Replica 3, its output disagrees with the outputs of the correctly functioning Replicas 1 and 2. A Vote Counter selects, as the correct output, the output produced by a majority of the replicas—in this case Replicas 1 and 2.
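The voting step can be sketched in a few lines (a hypothetical voter for illustration, not drawn from any particular avionics system):

```python
from collections import Counter

def tmr_vote(outputs: list) -> int:
    """Return the value produced by a strict majority of the replicas;
    raise if no majority exists (an unmaskable fault)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority among replicas")
    return value

# Replica 3 malfunctions (17); the voter masks its faulty output.
masked = tmr_vote([42, 42, 17])
```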
Fault masking is often also followed by automated reconfiguration of the architecture. Specifically, in the above example, after the Vote Counter determines that Replica 3 is malfunctioning, it can reconfigure the architecture to disregard future outputs from Replica 3, effectively reducing the level of redundancy from TMR to DMR.
Software

We discuss two prominent strategies used to make software fault tolerant—multi-version software and recovery blocks.

Multi-version: In all but the simplest of software modules it is nearly impossible to check the correctness of the module along all possible computation paths that it could traverse at runtime. Therefore, in general, it is nearly impossible to assure that avionics software modules are bug-free. A software module could contain bugs introduced into it during its design or implementation, and the bugs would not manifest themselves until the software module follows the computation paths impacted by them. One approach to safeguarding against faults in software is to build and deploy several versions of the software, implemented independently by different software development teams, with all of the teams starting from the same set of specifications. The multiple versions, presented with the same input data stream, concurrently process the input and generate results. The probability of all of the versions confronting faults while processing a given input stream is low. The results generated by the multiple versions can be compared and the consensus presented as the output of the multi-version software module. If some version of the software is found to yield defective results, that version can be deactivated.

Recovery Blocks: Instead of developing a multi-version software module, and incurring the overheads of running the multiple versions concurrently, an alternative is to build an acceptance module that decides whether or not the results generated by a software module are acceptable. For example, if the recommended bank angle exceeds the safety envelope under normal flying conditions, then the acceptance module could reject the results as unacceptable.
When the results are deemed unacceptable, the acceptance module could also restore the integrity of the software module by reverting to its last known good configuration and rolling back the changes made by the module. The challenge in building an acceptance module is the difficulty of embedding into it the necessary intelligence to distinguish between acceptable and unacceptable results.
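The recovery-block pattern can be sketched as follows; all names and the ±30° envelope are hypothetical, chosen only to mirror the bank-angle example above:

```python
def recovery_block(inputs, primary, alternate, acceptable):
    """Recovery-block sketch: run the primary routine, keep its result if
    the acceptance test passes, otherwise fall back to the alternate."""
    result = primary(inputs)
    if acceptable(result):
        return result
    return alternate(inputs)

# Hypothetical example: reject a recommended bank angle outside ±30 degrees.
SAFE_BANK_DEG = 30.0
cmd = recovery_block(
    {"target_heading": 90.0},
    primary=lambda _: 47.0,       # buggy routine: exceeds the envelope
    alternate=lambda _: 25.0,     # simpler, conservative fallback routine
    acceptable=lambda angle: abs(angle) <= SAFE_BANK_DEG,
)
```

Here the acceptance test rejects the primary's 47° command, so the conservative 25° fallback is used instead.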
3.13 AVIONICS PROGRAMMING

Avionics software performs many critical tasks, ranging from engine health monitoring to monitoring flight control surfaces in autopilot mode. Because the consequences of bugs in software that performs critical tasks are severe, safety is of primary concern in the development of aviation software. Barnes [2007] describes safe software as software that does not
cause harm to others, namely hardware or humans. The converse notion is security [Barnes 2007], which is the software's capability to protect itself against malicious attempts to harm (corrupt) it. Safety and security—not causing harm and not being susceptible to harm—are critical features that must be built into aviation software. The development and certification of avionics software are discussed in § 2.4. While DO-178C specifies the guidelines for the development of avionics software, it does not prescribe the programming language to be used to develop the software. Although several languages have been used to develop avionics software, one language—Ada—has emerged as the language of choice for large safety-critical embedded systems, such as avionics, largely because the language has many built-in features that help reduce the number of errors in software [Barnes 2001]. For example, the strong typing in Ada guards against a variable being assigned a value of a different type or a value outside the declared range. A second important feature of Ada is that many of the errors in an Ada program are caught during compilation, and several of the remaining errors are caught by constraint checks at run-time [Barnes 2001]. While typing rules prevent pointers from accessing data of the wrong type, accessibility rules ensure that pointers do not try to access objects that do not exist [Barnes 2007]. The packages in Ada provide a specification—an interface to other software modules—and an implementation, which is hidden from the external world [Barnes 2007]. The information hiding made possible by packages simplifies the debugging of interactions among packages. For a more detailed discussion of the advantages of Ada, see [Barnes 2001; 2007]. The current version, Ada 2012, is the international standard ISO/IEC 8652:2012 [ISO 2012].
Chapter 4: Sensors

The avionics infrastructure on an aircraft performs several critical tasks during flight. Examples include communication with external ground-based, airborne and satellite-based equipment, navigation, landing in low visibility conditions, weather monitoring, surveillance, collision avoidance with other aircraft and monitoring its own health. Activities such as these, which are critical for enhancing the safety of flight and optimizing fuel efficiency and the cost of the aircraft's maintenance, are made possible by the onboard sensors, which gather the key data. In this chapter we discuss the principles underlying the prominent sensors that are deployed on aircraft.
4.1 ANATOMY OF A CIVILIAN AIRCRAFT

Before getting into a discussion of sensors it is useful to recall the main external parts of an aircraft. Figure 4.1 shows the control surfaces—the slats, flaps, spoilers, ailerons, rudder and elevators—whose controlled movements modulate the velocity, lift, pitch, roll and yaw of the aircraft.

Fig. 4.1 Exterior diagram of a civilian aircraft, showing the vertical and horizontal stabilisers, rudder, elevator, upper fuselage spoilers, leading edge slats, trailing edge flaps, ailerons, APU inlet and exhaust, engine intake, angle of airflow sensors (2 locations), Pitot tubes (both sides), and static ports on the wing tips and radome (both sides) (Source: https://commons.wikimedia.org/wiki/File:Aircraft_Parts_eng.jpg)

The movements of the control surfaces are effected by large hydraulic actuators. The commands to the actuators are transmitted from the cockpit as electrical signals in a Fly-By-Wire system. The Actuator Control Electronics (ACE), a part of the avionics infrastructure, acts as the interface between the avionics and the actuators. The movements of the control surfaces are sensed by the displacement sensors. The figure also points to the locations of the angle of attack sensors, which sense the angle of attack of the aircraft (see § 4.5.4), and the Pitot tubes, which sense the static and dynamic pressures of the ambient air (see § 4.4.3). Pressure sensors are also deployed near the engine intake (shown) and the engine exhaust nozzle (not shown) to sense the Engine Pressure Ratio and thrust (see § 4.8). Fig. 4.1, which displays the external structures of interest, will be supplemented with descriptions of relevant internal structures of the aircraft, as needed, in the following discussion.
4.2 TYPES OF SENSORS

The sensors deployed on an aircraft can be broadly grouped into the following overlapping classes (some sensors belong to more than one class).

Displacement Sensors: The displacement sensors measure linear as well as angular displacements. In § 4.3 we discuss three widely used displacement sensors—the potentiometer, the linear variable differential transformer and the synchro.

Air Data Sensors: Air data sensors are used mainly to measure the static and dynamic pressure of the ambient air, the angle of attack of the aircraft and the temperature of the ambient air. The air data sensors, which are discussed in § 4.4, are used to calculate the velocity of the aircraft relative to the air, as well as the aircraft's altitude.

Attitude Sensors: An aircraft's attitude is the orientation of its longitudinal and lateral axes relative to the horizontal plane. See Fig. 4.2 and Fig. 4.18. Gyroscopes (§ 4.5) and synchros (§ 4.3) are used to sense an aircraft's attitude. The sensors continuously monitor the rotation of the aircraft's frame about the longitudinal and lateral axes.

Position Sensors: An aircraft's instantaneous position—its latitude, longitude and altitude—is sensed by the position sensors. The most self-contained position sensors are accelerometers mounted on gimbaled platforms. Three accelerometers are used to continuously monitor the aircraft's acceleration along three fixed, mutually orthogonal directions. The aircraft's position can then be obtained by double integration of the acceleration, a method called 'dead reckoning'. Accelerometers are discussed in § 4.6. Partial information about the position of an aircraft can also be determined with the help of ground-based beacons, such as the VOR system discussed in § 1.2.3.7. These older technologies are being replaced by the Global Positioning System, which is discussed in § 4.7.7.
Fig. 4.2 Attitude of an aircraft (pitch: nose up/nose down about the lateral axis; bank: banked left/banked right about the longitudinal axis, each measured relative to the horizontal)
Electromagnetic Sensors: In both civilian and military aircraft, electromagnetic sensors are used for navigation, blind landing, and for communications between aircraft and between an aircraft and the traffic controllers or satellites. In military aircraft, electromagnetic sensors are additionally used for surveillance of the battle space, weapons guidance and detection of missiles. In § 4.7 we discuss the following electromagnetic sensors: transponders, and primary and secondary surveillance radars. We also discuss Mode S and Automatic Dependent Surveillance-Broadcast communications, which use the secondary surveillance radar. Finally, we discuss the Global Positioning System, which plays an increasingly important role in aircraft navigation. Electromagnetic sensors are also used in the Terrain Awareness and Warning System (§ 5.2.10), the Traffic Collision Avoidance System (§ 5.2.11), the Automatic Direction Finder (§ 4.7.8), the Instrument Landing System (§ 5.2.12) and the radio altimeter (§ 5.2.13). The Active Electronically Steered Array (§ 6.2.2) is an example of a radar (radio frequency sensor) that employs state-of-the-art beam steering technology. Electromagnetic sensors use, in addition to the radio frequency (RF) band, the visible band for optical imaging and the infrared band for thermal imaging. Examples of electromagnetic sensors in the IR and visible bands of the electromagnetic spectrum include the Distributed Aperture System (§ 6.2.3) and the Electro-Optical Targeting System (§ 6.2.4).

Engine Sensors: Several parameters of an aircraft's engine are monitored to track its health during flight. Imminent abnormal behavior of an engine is usually signaled by aberrant values of the engine's parameters, making the engine sensors particularly critical for the safety of flight. Examples of engine parameters include the fuel flow rate, fuel quantity, the angular speeds of the engine's shafts, the Engine Pressure Ratio (EPR), engine temperature and thrust.
The sensors for the above engine parameters are discussed in § 4.8.
4.3 DISPLACEMENT SENSORS

Displacement sensors are used to measure linear displacement along a single axis, or the angular rotation of a system about an axis of rotation. Displacement sensors play an important role because the measurement of several parameters, such as pressure, acceleration or angle of attack, in effect reduces to a measurement of displacement. For example, some pressure sensors are effectively displacement sensors. The principle is illustrated in Fig. 4.3. Consider a chamber with a spring of spring constant k attached to its wall. The other end of the spring is attached to a moveable partition of area A. The enclosed chamber—between the fixed wall and the moveable partition—can be assumed to contain a vacuum. A fluid pressure P acting on the moveable partition, as shown, exerts a force F = PA on the partition. The force, in turn, compresses the spring by an amount ∆L, given by

F = −k∆L  ⇒  P = −(k/A)∆L

Fig. 4.3 Pressure sensor based on measurement of displacement

k and A being constant, a measurement of the displacement ∆L of the moveable partition (spring) yields the fluid pressure acting on the partition. The capacitive accelerometer, discussed in § 4.6.1.4, in effect measures the displacement of a moveable plate between two fixed plates of a parallel plate capacitor. The design of the capacitive accelerometer can be adapted to measure pressure by making the moveable partition of Fig. 4.3 a moveable plate between two fixed plates. In the following subsections we describe three widely used displacement sensors—the potentiometer and the linear variable differential transformer, which are used to measure linear displacements, and the synchro, which is used to measure angular displacement. The reader is referred to [Nyce 2004] for a more detailed discussion of linear displacement sensors.
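The relation |P| = (k/A)|∆L| can be checked numerically; the spring constant, area and compression below are illustrative values only, not those of any real sensor:

```python
def pressure_from_displacement(delta_l_m: float, k_n_per_m: float,
                               area_m2: float) -> float:
    """Magnitude of the fluid pressure inferred from spring compression:
    |P| = (k / A) * |dL|."""
    return (k_n_per_m / area_m2) * abs(delta_l_m)

# A 2 mm compression of a 5000 N/m spring behind a 1 cm^2 partition
# corresponds to a pressure of about 100000 Pa, roughly one atmosphere.
p = pressure_from_displacement(0.002, 5000.0, 1e-4)
```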
4.3.1 Potentiometer

A sensor that is commonly used to measure small linear displacements is the potentiometer, which is illustrated in Fig. 4.4. A wiper, whose displacement we seek to measure, moves along a resistor R of length L, across which a voltage V is applied. The displacement of the wiper from the left end, denoted by x, is related to the measured voltage Va as

x = L(1 − Va/V)
Since L and V are constant, a measurement of Va yields the displacement x.
Fig. 4.4 Potentiometer (a wiper at displacement x along a resistor R of length L, with supply voltage V and output voltage Va)
4.3.2 Linear Variable Differential Transformer

Yet another commonly used displacement sensor is the Linear Variable Differential Transformer (LVDT), which is illustrated in Fig. 4.5. An LVDT has a primary coil, two identical secondary coils located symmetrically on either side of the primary coil, and a magnetic core connected to a platform that can move along a single axis. The secondary coils are connected in a polarity-reversed configuration as shown in the figure. The primary coil is excited by an alternating current and the output Vout is measured.

Fig. 4.5 Linear Variable Differential Transformer (primary coil P between secondary coils S1 and S2, with a moveable magnetic core; the output Vout is taken across the polarity-reversed secondaries)
An alternating current in the primary coil creates a time-varying magnetic flux in the two secondary coils, inducing an electromotive force in each coil. When the magnetic core is exactly in the middle of the two coils, due to symmetry, MPS1 = MPS2, where MPS1 (MPS2) is the mutual inductance between the primary coil and the secondary coil S1 (S2). The electromotive forces generated in the two secondary coils are then equal in magnitude. Since the secondary coils are connected in a polarity-reversed configuration, the two emfs cancel and the net output, when the core is in the middle, is Vout = 0.
As the core moves left or right MPS1 is no longer equal to MPS2 and Vout ≠ 0. The magnitude of Vout increases with the displacement of the magnetic core from the center, while the phase of Vout relative to the phase in the primary coil indicates the direction of movement. The LVDT is calibrated to translate a measurement of Vout to the corresponding value of displacement of the magnetic core.
4.3.3 Synchro

A synchro can be used to measure an angle of rotation, for example, in the angle of attack sensor (§ 4.5.4). A synchro comprises two nearly identical arrangements called the Transmitter (Tx) and the Receiver (Rx), as shown in Fig. 4.6. The transmitter comprises three immobilized coils called stators—labeled S1, S2 and S3 in the figure—and a coil called the rotor that is free to rotate about an axis perpendicular to the plane in which the stators lie. The receiver has an identical arrangement. Assume that the rotor in the transmitter is immobilized. When an alternating current flows through the receiver's rotor it creates a time-dependent magnetic field, and hence a time-dependent magnetic flux through the receiver's stators. The magnitude of the flux through a stator, such as S1, varies as the magnitude of the cosine of the angle between the axes of the stator and the rotor. For example, as shown in Fig. 4.7, when the axes of the rotor and the stator are aligned, the magnitude of the magnetic flux through the stator is maximum. When the axes are perpendicular, the flux through the stator (due to a current in the rotor) practically vanishes.

Fig. 4.6 Synchro Transmitter/Receiver (the stator windings S1, S2 and S3 of the transmitter and receiver are wired in loops, the rotor leads are R1 and R2, and a synchro-to-digital converter reads the stator signals; Courtesy: Christian Wolff, www.radartutorial.eu/17.bauteile.en.html)
The magnitude of electromotive force (emf) induced in the stator is equal to the rate of change of magnetic flux through it. Hence, the amplitude of the emf induced in the stator also varies as the cosine of the angle between the axes of the rotor and stator, reaching its maximum when the axes of the rotor and stator are aligned, and practically vanishing when they are orthogonal. When the transmitter’s rotor is oriented along the same direction as the receiver’s rotor, the induced emf’s in the transmitter’s stators and the corresponding stators of the receiver are equal and there is no circulating current in the stator loops. As the receiver’s rotor rotates, while the transmitter’s rotor is held fixed, the induced emf’s in the receiver’s stators change relative to the induced emf’s in the receiver’s stators, generating circulating currents in the stator loops of the synchro. The exact angle of rotation of the receiver’s rotor can be determined by measuring the currents in the stator loops. Specifically, a synchroto-digital converter IC (e.g., the AD2S44 SDC manufactured by analog devices) can be used to compute the angle by which the receiver’s rotor has rotated, as shown in Fig. 4.6. Maximum Flux (maximum amplitude)
Fig. 4.7 The angle-dependent amplitude of the emf induced in the stator due to an alternating current in the rotor
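If the three stator amplitudes are assumed to follow the cosine law described above—proportional to cos θ, cos(θ − 120°) and cos(θ + 120°)—the angle extraction that a synchro-to-digital converter performs can be sketched as below. This is an illustrative reconstruction, not the AD2S44’s actual internal algorithm; the function name is made up.

```python
import math

def synchro_angle(v1, v2, v3):
    """Recover the rotor angle (radians) from three stator amplitudes.

    Assumes v1, v2, v3 are proportional to cos(theta),
    cos(theta - 120 deg) and cos(theta + 120 deg) respectively.
    """
    # Combine the three phases into cos/sin components (Clarke-style transform)
    alpha = (2.0 / 3.0) * (v1 - 0.5 * (v2 + v3))   # proportional to cos(theta)
    beta = (v2 - v3) / math.sqrt(3.0)              # proportional to sin(theta)
    return math.atan2(beta, alpha)
```

Because the transform recovers both cos θ and sin θ, the angle is unambiguous over a full revolution.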
4.4 AIR DATA SENSORS Air data sensors measure critical parameters of the air—specifically the air pressure and temperature—in the immediate vicinity of an aircraft. We begin the discussion of air data sensors by summarizing relevant details about the band of earth’s atmosphere that is of interest to aviation.
108 Principles of Modern Avionics
4.4.1 Earth’s Atmosphere Most aircraft fly inside the troposphere, the atmospheric shell extending from the earth’s surface to a height of about 8 km (above the poles) to 18 km (above the tropics) [Mohanakumar 2008]. The troposphere contains a mixture composed mostly of nitrogen (~ 78%) and oxygen (~ 21%), with small amounts of other gases such as carbon dioxide and water vapor. The International Standard Atmosphere (ISA) is an atmospheric model that has been finalized after extensive studies and research. ISA describes how bulk parameters such as the temperature, pressure, density and viscosity of the atmosphere change with altitude above the earth’s surface. In ISA, the tropopause, the interface between the troposphere and the stratosphere above it, is assumed to occur at 11 km above the mean sea level (MSL). The values of the temperature and pressure at MSL, at the tropopause and at 18 km in the ISA model are shown in Table 4.8 [Torenbeek 2013].

Table 4.8 Atmospheric temperature and pressure

Altitude →     MSL              Tropopause (11 km)   18 km
Temperature    288.15 K         216.65 K             216.65 K
Pressure       101325 Pascals   22632 Pascals        7505 Pascals
Whereas the temperature decreases linearly, at a lapse rate of 6.5°C/km, between the MSL and the tropopause in the ISA model, the pressure decreases nonlinearly. See [ICAO 1964] for further details. The actual atmospheric conditions are described as deviations from the values specified by ISA. Thus, for example, ISA predicts a temperature of –50°C at an altitude of 10 km. If the actual temperature at 10 km is found to be –40°C it is described as an ISA + 10 condition. The instruments of aircraft, such as commercial jetliners, are set to the ISA standard. The standardization is critical to maintaining safe vertical separation between aircraft.
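The ISA relations below the tropopause can be evaluated in a few lines. The sketch below uses the standard ISA constants from Table 4.8; the exponent 5.2559 is the value of gM/(R·lapse rate) for dry air. This is an illustrative sketch, not flight-qualified code.

```python
T0, P0 = 288.15, 101325.0   # ISA MSL temperature (K) and pressure (Pa)
LAPSE = 0.0065              # temperature lapse rate, K/m
EXPONENT = 5.2559           # g*M/(R*LAPSE) for dry air

def isa_temperature(h_m):
    """ISA temperature (K) at altitude h_m, valid below the 11 km tropopause."""
    return T0 - LAPSE * h_m

def isa_pressure(h_m):
    """ISA pressure (Pa) at altitude h_m, valid below the 11 km tropopause."""
    return P0 * (isa_temperature(h_m) / T0) ** EXPONENT
```

Evaluating at 11 km reproduces the tropopause values in Table 4.8 (216.65 K, ~22632 Pa).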
4.4.2 Air Temperature Measurement The temperature of ambient air is measured using a platinum resistance thermometer. We describe the principle underlying the thermometer. The temperature of the static ambient air is in general different from the temperature measured by the aircraft’s sensors, which are moving relative to the ambient air. We also describe the corrections that must be applied to the measured temperature in order to recover the actual temperature of static ambient air.
4.4.2.1 Platinum Resistance Thermometer The resistivity of some metals, including platinum, increases linearly with absolute temperature. Platinum is a preferred metal because it is relatively non-reactive and highly malleable. The resistance of a platinum element increases at a rate of approximately 0.4% of its nominal resistance per kelvin [Boyes 2010]. Thus, if the platinum resistor’s geometry is designed to
make its resistance 100 ohms at 273 K (giving a PT100 sensor) then its resistance increases to 300 ohms at a temperature of 773 K. The change in resistance can be measured using the Wheatstone’s bridge circuit shown in Fig. 4.9. Resistances R1, R2 and R3 and the voltage V of the battery are held constant. The platinum resistor is shown as having a variable resistance R in the Wheatstone’s bridge. The value of R can be determined by measuring the voltage Va. Specifically,

R = [R1 R3 (V + Va) + Va R2 R3] / [R2 (V − Va) − Va R1]

Fig. 4.9 Wheatstone’s bridge
A platinum resistance thermometer (PRT) can be used to measure temperatures in the range 73 K (–200°C) to 1273 K (1000°C). Figure 4.10 illustrates the construction of a PRT. In Fig. 4.10, the platinum resistor is embedded in a ceramic substrate. The metallic leads are used to connect the resistor to the Wheatstone’s bridge circuit.
Fig. 4.10 A platinum resistance thermometer
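As a sketch of the readout chain, the bridge relation above can be inverted in code and combined with the nominal PT100 rate of 0.4 ohm per kelvin (100 ohms at 273 K). The function names are illustrative, not from the text.

```python
def bridge_resistance(V, Va, R1, R2, R3):
    """Unknown resistance R from the Wheatstone bridge relation above."""
    return (R1 * R3 * (V + Va) + Va * R2 * R3) / (R2 * (V - Va) - Va * R1)

def pt100_temperature(R_ohm):
    """Temperature (K) of a PT100: 100 ohm at 273 K, ~0.4 ohm per kelvin."""
    return 273.0 + (R_ohm - 100.0) / 0.4
```

Note that a balanced bridge (Va = 0) gives R = R1·R3/R2, as expected from the balance condition.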
4.4.2.2 Adiabatic Correction A PRT is typically mounted on the surface of an aircraft, housed in a probe, as shown in Fig. 4.11. Part of the air is brought to rest relative to the aircraft, and its temperature is sensed by the PRT. The temperature measured by the PRT is called the Total Air Temperature (TAT).
Fig. 4.11 Externally mounted platinum resistance thermometer
When the air is brought to rest relative to the aircraft the kinetic energy of the air is converted to heat, raising its temperature. Thus, the TAT measured by the PRT is actually higher than the actual temperature of the air, called the Outside Air Temperature (OAT) or Static Air Temperature (SAT). Denoting TAT as Ttotal and OAT as Tstatic we have [Trenkle 1973],

Tstatic = Ttotal / (1 + ((γ − 1)/2) R M²)

where γ is the ratio of the specific heats of air at constant pressure and constant volume (γ = 1.4 for dry air), R is a recovery factor that reflects the efficiency with which the kinetic energy of the air is converted to heat when it is brought to rest (R = 0.7 – 1, depending on the construction of the probe) and M is the Mach number of the air speed. (See § 4.4.3.1 for a discussion of Mach number.) The difference Ttotal – Tstatic is called the Ram Rise in temperature. For low Mach numbers there is not much difference between TAT and SAT. For example, for M = 0.2, TAT ≈ (1.008) SAT, or the difference is less than 1%. For higher Mach numbers the divergence between TAT and SAT becomes more pronounced. If, in addition to measuring TAT, SAT is also measured independently then one can determine the air speed.
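The TAT-to-SAT relation above is straightforward to evaluate; a minimal sketch follows, with the recovery factor as a parameter (function and parameter names are illustrative).

```python
GAMMA = 1.4  # ratio of specific heats for dry air

def static_air_temperature(t_total_k, mach, recovery=1.0):
    """SAT (kelvin) recovered from measured TAT using the relation above."""
    return t_total_k / (1.0 + 0.5 * (GAMMA - 1.0) * recovery * mach ** 2)
```

At M = 0.2 with an ideal probe (recovery = 1) the divisor is 1.008, matching the worked example in the text.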
4.4.3 Air Velocity Measurement Measuring the velocity of the ambient air relative to the aircraft enables an aircraft to determine its own velocity through the air. We begin the discussion of air velocity sensors by reviewing the notion of the Mach number that is often used to describe the speed of an aircraft.
4.4.3.1 Mach Number The Mach number of an object, such as an aircraft, is defined as

M = v/vs

where v is the speed of the fluid flow past the object and vs the local speed of sound in the fluid. The different Mach regimes [El-Sayed 2016] are shown in Table 4.12.

Table 4.12 Mach regimes

Mach Number (M)   Regime            Example Aircraft [M]
0.0 – 0.8         Subsonic          Cessna 172 [0.2465]
0.8 – 1.2         Transonic         Boeing 787 [0.89]
1.2 – 5.0         Supersonic        F-35 Lightning II [1.6]
5.0 – 10.0        Hypersonic        X-15 [6.72]
10.0 – 25.0       High-hypersonic   ABM-3 Gazelle missile [17]
> 25.0            Re-entry          Space shuttle [25.4]
The local speed of sound—the speed at which sound travels in the vicinity of the interface between the object and the fluid—depends on the local conditions such as temperature. So two objects traveling at the same Mach number at widely different altitudes would travel at different speeds.
4.4.3.2 Indicated Air Speed Measurement Using a Pitot Tube The Pitot tube, the instrument used in aircraft to measure the relative velocity of the ambient air, was invented by the French hydraulic engineer Henri Pitot in 1735. He was primarily interested in measuring the velocity of water in rivers and canals. His invention serves to measure the ambient air velocity in subsonic aircraft. The basic principle underlying the Pitot tube is the equation for the total pressure in a fluid flow field,

Ptotal = Pstatic + (1/2) ρ v²

where Ptotal is the total pressure at a point in the flow field, Pstatic the static pressure at the point, ρ the fluid density and v the fluid velocity. The term (1/2) ρ v² is called the dynamic pressure, Pdynamic. Figure 4.13 illustrates the above equation as well as the Pitot tube, which is based on it.
Fig. 4.13 Pitot tube
Consider a fluid flowing horizontally in the tube with velocity V, as shown in Fig. 4.13. The Pitot tube is an L-shaped structure one of whose orifices, O, faces the flowing fluid. The other arm of the L is perpendicular to the direction of the flow and is positioned vertically. When the flow velocity is zero the height of the fluid in the vertical arm, denoted by h, is zero. However, as the flow velocity increases, the fluid in the vertical arm rises to a height h, which depends on the velocity of the fluid. This is the basic Pitot tube. Consider a point R that is at the same height as the orifice O, but sufficiently far away from O so that the flow velocity at R is V. The static pressure at R—the pressure exerted by the fluid independently of its motion—is Ps. At O the fluid is at rest. So the orifice O is also called the
stagnation point. Let the pressure at O be Po. Applying Bernoulli’s equation at R and O, we get

Ps + (1/2) ρ V² = Po

Hence, we get [Lan & Roskam 2003],

V = √(2gh) = √(2(Po − Ps)/ρ)
Thus, if we can measure the difference between the pressure at the stagnation point, Po, and the static pressure in the flow field, Ps, and the air density ρ, then we can compute the air velocity of the aircraft (the velocity of the aircraft relative to the ambient air). The air speed computed using the above formula and the air density at Mean Sea Level (MSL), ρ0, is called the Instrument Indicated Air Speed (IIAS). The air density at higher altitudes differs appreciably from that at MSL, necessitating corrections to the IIAS. The corrections due to variations in the air density, and a few other parameters, are discussed in the next subsection. The actual Pitot tube deployed on an aircraft is based on the same principle as that illustrated in Fig. 4.13, albeit with a slightly modified design. Fig. 4.14 illustrates a typical Pitot tube deployed on aircraft [Tandeske 1991].
Fig. 4.14 Schematic diagram of a Pitot tube deployed in aircraft
The Pitot tube shown in Fig. 4.14 has two sets of orifices. The stagnation point is an orifice of a central tube the other end of which feeds into the chamber on the left of a diaphragm. The air in the left chamber is therefore at stagnation pressure Po. The second set of orifices, called static ports, is arranged radially around the central tube. The static ports open in a direction perpendicular to the direction of fluid flow and feed into the chamber on the right within which air is at static pressure Ps. A diaphragm hermetically separates the left and right chambers. The difference in pressure across the diaphragm, namely Po – Ps, is converted by a differential pressure transducer to an electrical signal that is transmitted to the Digital Air Data Computer (DADC).
In some Pitot tubes, instead of a differential pressure transducer that measures the pressure difference Po – Ps, absolute pressure transducers are used to measure Po and Ps separately in the two chambers. The Po and Ps measurements by the absolute pressure transducers are sent directly to the DADC, which then computes the difference Po – Ps. Clogged Pitot tubes have been implicated in several air crashes, which accentuates the importance of fault-free functioning of Pitot tubes. For example, see [BEA 2009].
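The IIAS computation performed from the transducer output—differential pressure combined with the fixed MSL density—can be sketched as follows (constants and names are illustrative):

```python
import math

RHO_MSL = 1.225  # air density at mean sea level, kg/m^3

def indicated_air_speed(p_total, p_static):
    """IIAS (m/s) from stagnation and static pressures (Pa),
    using the MSL density as the basic Pitot formula does."""
    return math.sqrt(2.0 * (p_total - p_static) / RHO_MSL)
```

For example, a dynamic pressure of 6125 Pa corresponds to an IIAS of 100 m/s at the MSL density.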
4.4.3.3 True Air Speed As noted above, the measured IIAS does not correct for variations in the air density, instrument error, compressibility of air and static source error. The necessary corrections are briefly discussed below [Lan & Roskam 2003].

IIAS (Instrument Indicated Air Speed)
IAS = IIAS + ΔεInstrument (Indicated Air Speed)
CAS = IAS + ΔεPosition (Calibrated Air Speed)
EAS = CAS − ΔεCompressibility (Equivalent Air Speed)
TAS = EAS √(ρ0/ρ) (True Air Speed)

Fig. 4.15 Air speeds
The instruments measuring the differential pressure (stagnation pressure – static pressure) have an inherent error, called ΔεInstrument, which can be determined by calibration tests. The IIAS corrected for the instrument error gives the Indicated Air Speed (IAS). Another source of error is the location of the Pitot tube. The aircraft, although streamlined, presents an obstruction to the air flowing past it. Therefore, the air flow in the vicinity of the aircraft’s surface deviates from the free stream profile, and the static pressure measured by the Pitot tube differs from the actual static pressure. Further, since the aircraft’s surface geometry varies, the distortion of the air flow in the vicinity of the aircraft depends on the position on
the aircraft in whose vicinity we are looking at the air flow. Hence, all other conditions being the same, the static pressure measured by the Pitot tube varies depending on the location on the aircraft at which the Pitot tube is installed. This position-dependent error in the measurement is called ΔεPosition. The position-dependent correction is determined through calibration. The aircraft is flown, at different speeds and altitudes, alongside another airplane that has the capability to accurately measure the static pressure. The measurements on the two airplanes are used to tabulate the necessary position-dependent corrections [Lan & Roskam 2003]. Correcting the IAS for the position error yields the Calibrated Air Speed (CAS). The Bernoulli equation, which we used to derive the relation between the air speed and the differential pressure at the Pitot tube, assumed that the air is incompressible. The corrections to the measured air speed due to the compressibility of air are small up to a Mach number of approximately 0.3. However, when the Mach number exceeds about 0.5, the corrections to the air speed due to the compressibility of air are in excess of 6%. Correcting the CAS for the compressibility of air, especially at high Mach numbers, yields the Equivalent Air Speed (EAS). The final correction pertains, not to the measurement of the differential pressure, but to the other quantity—the air density—that appears in the expression for air speed. In the calculation of IIAS, IAS, CAS and EAS, we took the air density to be ρ0, the air density at mean sea level under normal atmospheric conditions. However, the air density varies with altitude. Assuming that air is a perfect dry gas, it obeys the equation

PV = nRT ⇒ ρ = nM/V = PM/(RT)

where M is the molar mass of air, R, the gas constant, P, the static pressure and n, the number of moles of air in volume V.
Since M and R are constant, the air density at any altitude can be determined by measuring the static pressure and temperature of the air. The True Air Speed (TAS) is obtained from the EAS by substituting the actual air density for the MSL value. Specifically,

TAS = EAS √(ρ0/ρ)
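Combining the ideal-gas density with the EAS-to-TAS relation above gives a short sketch (constants are standard dry-air values; function names are illustrative):

```python
import math

M_AIR = 0.0289644  # molar mass of dry air, kg/mol
R_GAS = 8.31447    # universal gas constant, J/(mol K)
RHO_MSL = 1.225    # air density at MSL, kg/m^3

def air_density(p_pa, t_kelvin):
    """rho = P M / (R T) for dry air."""
    return p_pa * M_AIR / (R_GAS * t_kelvin)

def true_air_speed(eas, p_pa, t_kelvin):
    """TAS = EAS * sqrt(rho0 / rho), using the local static pressure
    and temperature to evaluate the local density."""
    return eas * math.sqrt(RHO_MSL / air_density(p_pa, t_kelvin))
```

At MSL standard conditions the local density is essentially ρ0 and TAS reduces to EAS, as expected; at altitude, where the density is lower, TAS exceeds EAS.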
4.4.4 Altitude Measurement The term altitude, by itself, does not have a definite meaning in aviation. The following five different terms are used in aviation to refer to different indicators of altitude [Dole 2017].
1. Indicated Altitude: The reading displayed on an aircraft’s altimeter.
2. Absolute Altitude: The actual height of the aircraft above the ground that is directly below the aircraft, called the height above ground level (AGL). This is also called the Radar Altitude and is measured using a radar altimeter, which is discussed in § 5.2.13.
3. True Altitude: The vertical separation between an aircraft and the mean sea level (MSL).
4. Pressure Altitude: The height of the aircraft above MSL derived by measuring the static air pressure in the vicinity of the aircraft and using a standard value of pressure at MSL. The pressure altitude is derived using a relation that gives the decrease in static air pressure with increasing distance from the earth’s surface. The pressure altitude (AP) is related to the static pressure P in the vicinity of an aircraft by the following formula published by the National Oceanic and Atmospheric Administration [NOAAP 2017].

AP = 145366.45 [1 − (P/1013.25)^0.190284]

where the pressure P is expressed in millibars and the altitude AP is given in feet. The measurement of the static pressure, P, is discussed above.
5. Density Altitude: The height of the aircraft derived from a measurement of air density in the vicinity of the aircraft. The density altitude is derived using a standard relation [NOAAD 2017] that gives the decrease in air density with increasing separation from the earth’s surface. The density altitude is used less as a measure of altitude of the aircraft and more as an indicator of the air density.
Closely related to the notion of altitude is the convention of flight level used in aviation [Wyatt 2015]. The absolute altitude—and the distinction between absolute and true altitude—of an aircraft are important when an aircraft is close to the local terrain. For example, in instrument-aided blind landing in low visibility conditions the absolute altitude of the aircraft needs to be measured with an error of no more than a few feet. On the other hand, the difference between absolute and true altitude could be a few thousand feet.
Fig. 4.16 Altitude measurement at different heights
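The NOAA pressure-altitude relation quoted above can be evaluated directly; the sketch below takes pressure in millibars and returns feet (the function name is illustrative):

```python
def pressure_altitude_ft(p_millibar):
    """Pressure altitude (feet) from static pressure (millibars),
    per the NOAA relation quoted in the text."""
    return 145366.45 * (1.0 - (p_millibar / 1013.25) ** 0.190284)
```

At the standard MSL pressure of 1013.25 mb the formula returns zero, and at the ISA tropopause pressure of about 226.32 mb it returns roughly 36,000 feet, close to the 11 km tropopause.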
As an aircraft moves away from the earth’s surface, the distinction between absolute and true altitudes becomes progressively less significant. Closer to the earth’s surface, the altimeter readings are used for terrain avoidance. Far away from the earth’s surface the notion of altitude is used more to maintain a minimum vertical separation between nearby aircraft as explained below. QNH is the lowest forecast air pressure at MSL on a given day. On the other hand QFE is the air pressure on the earth’s surface directly below the aircraft under the present atmospheric conditions. When an aircraft is in the vicinity of an air control tower it receives the QNH and QFE data, which can be used to calibrate the onboard altimeter. An aircraft’s height calculated using the QNH setting gives an estimate of the aircraft’s altitude above the MSL. For example, if an aircraft is stationed at an airport then its QNH altitude gives the height of the airport above the MSL. On the other hand, the height calculated using QFE setting gives the altitude of the aircraft above the local terrain. With QFE setting, the altimeter reading of an aircraft stationed at an airport would be zero. See Fig. 4.16. There is a predetermined height above the MSL, called the transition altitude—18,000 feet in the U.S. and Canada—below which an aircraft flies with QNH setting at the altimeter. See Fig. 4.17. Once the aircraft goes above the transition altitude, the altimeter setting is changed from QNH to a standard pressure (SP) setting of 1013.2 millibars for MSL, regardless of the prevailing conditions at MSL on that day.
Fig. 4.17 Transition from QNH to ISA
The flight level is the pressure altitude of an aircraft calculated using SP (1013.2 millibars) at MSL, expressed in units of 100 ft. For example, FL125—that is, Flight Level 125—indicates an altitude of 12,500 feet assuming SP at MSL. Flight levels are expressed in multiples of 500 feet—as FLxyz—where xyz is a 3-digit number with z being either 0 or 5. Flight levels are used to maintain a minimum vertical separation between nearby airborne aircraft.
The lowest flight level available for use above the transition altitude is called the transition level. For example, if the lowest flight level available for use above a transition altitude of 18,000 feet is 19,500 feet then the transition level is FL195. The region between transition altitude and transition level is called the transition layer. A descending aircraft switches from SP to QNH at transition level.
4.5 ATTITUDE SENSORS The attitude of an aircraft is the orientation of its frame relative to the local horizontal plane. Two axes—the longitudinal and lateral—are used to describe the attitude of an aircraft, as shown in Fig. 4.18. The rotation about the longitudinal axis is called roll and is denoted by the angle α. Similarly, the rotations about the lateral and vertical axes are called pitch and yaw, and are denoted by the angles β and γ respectively. The attitude of an aircraft can be determined by continuously measuring and integrating its roll and pitch, starting from a known configuration.
Fig. 4.18 The three relevant axes and angles for aircraft’s motion
Two types of sensors are used to measure the roll, pitch and yaw—the gimbaled sensors and strapdown sensors. A gimbaled sensor comprises a mechanical gyroscope, which under ideal conditions points in a fixed direction regardless of the orientation of the aircraft’s frame. The deviation of the aircraft’s frame from the fixed directions of three gyroscopes—that are oriented along three mutually orthogonal directions—yields the total roll, pitch and yaw of the aircraft relative to its initial orientation. In contrast, a strapdown sensor is attached to the frame of the aircraft and comprises a laser gyroscope. As the aircraft rolls, pitches or yaws, the corresponding strapdown sensor rotates with the aircraft’s frame and measures the rotation. By integrating the rotations measured by three strapdown laser gyroscopes—mounted on three mutually orthogonal surfaces—one can determine the total roll, pitch and yaw. In the following discussion we describe both
the mechanical and laser gyroscopes. In § 4.6, we describe the inertial reference platform that integrates gyroscopes and accelerometers to yield both the location of an aircraft as well as its attitude.
4.5.1 Gimbaled (Mechanical) Gyroscopes A gimbaled gyroscope has a spinning disc mounted on an inner gimbal, which, in turn, is mounted on an outer gimbal. The outer gimbal is mounted on the gyroscope’s frame as shown in Fig. 4.19. The inner gimbal is allowed to rotate freely about the axis determined by the two contact points with the outer gimbal. The outer gimbal is also allowed to rotate freely about the axis determined by the points at which it is mounted to the frame. In an ideal gyroscope the spinning of the disc and the rotations of the two gimbals are not damped by friction at the points at which they are mounted.

Fig. 4.19 Gyroscope

When it spins the disc has a nonzero angular momentum that points along its axle. In the absence of an external torque the angular momentum is conserved. The angular momentum, being a vector, has a magnitude and direction, both of which remain unchanged over time in the absence of external torque. In particular, the direction of the axle of the spinning disc remains unchanged as long as no external torque acts on the spinning wheel. Therefore, the axis of the spinning disc of a gyroscope that is placed in an aircraft continues to point in a fixed direction as the roll, pitch and yaw angles of the aircraft change, as long as we can ignore friction and other external forces. The attitude of an aircraft—or the α and β angles—can be determined using on-board gyroscopes as the reference. A vertical-spin gyroscope—a gyroscope whose disc’s spin axis is in the vertical direction—can be used to measure roll and pitch. A horizontal-spin gyroscope, whose disc’s spin axis is pointing along the longitudinal axis (see Fig. 4.18), can be used to measure the yaw. Thus, we can measure the roll, pitch and yaw using just two gyroscopes.
4.5.1.1 Rate Gyroscopes A rate gyroscope measures the rate at which an angle (pitch, roll or yaw) changes. It is based on the phenomenon of precession. Figure 4.20 illustrates the underlying principle. Consider a disc spinning with angular momentum L that is pointing along the y-axis. The disc is mounted on an inner gimbal. The inner gimbal is mounted on an outer gimbal which is free to rotate about the z-axis. If a torque τ is applied to the outer gimbal causing it to rotate about the z-axis then the response of the inner gimbal is to rotate about the x-axis—that is, to precess. In order to understand precession we consider first the simple case of a constant external torque applied to the outer gimbal.
Fig. 4.20 Rate gyroscope
We observe that a torque τ that is perpendicular to the angular momentum L changes the direction of L but not its magnitude. (We will denote vectors with bold-faced symbols.)

τ = dL/dt ⊥ L ⇒ d|L|²/dt = 2L · dL/dt = 0

where the · (dot) represents the dot (scalar) product of the vectors. Thus, the squared magnitude of L does not change with time as long as τ ⊥ L. In contrast, a torque that is parallel to the angular momentum L changes the magnitude, but not the direction, of L. Specifically,

τ = dL/dt || L ⇒ d|L|/dt = |τ|

Consider the gyroscope shown in Fig. 4.20. Assume that the angular momentum of the spinning disc, at t = 0, is pointing along the y-axis as shown in the figure. Assume that for t > 0, a constant external torque τ is applied along the z-axis as shown. At t = 0, the instantaneous change of L is along the z-axis. In other words, at t = 0+, the inner gimbal starts rotating about the x-axis. In the interval (0, dt), assume that the inner gimbal rotates about the x-axis by an angle dθ. The change in the disc’s angular momentum in the same interval is

dL = τ dt = |L| dθ τ̂ ⇒ dθ/dt = |τ|/|L| = ωp

or the instantaneous angular velocity of precession, ωp, at t = 0+ is the magnitude of the torque divided by the magnitude of the angular momentum [Lawrence 1998]. We will use the widely used convention in the following discussion. The z-axis about which the outer gimbal rotates will be called the input axis. The axis about which the disc spins is the spin axis. And the axis about which the axle of the disc precesses is called the output axis. As the inner gimbal precesses, with the external torque τ remaining constant, the input axis and the spin axis do not remain orthogonal. Assume that at time t, the inner gimbal has
rotated by an angle θ about the output axis; see Fig. 4.21. The external torque can then be decomposed into a component τ⊥ that is perpendicular to the spin axis, and a component τ|| that is parallel to the spin axis. τ⊥ continues to drive the precession—that is, the rotation of the axle about the output axis—while τ|| changes the magnitude of L(t), the component of the disc’s angular momentum about its spin axis. When the spin axis becomes parallel to the input axis, τ⊥ = 0 and the external torque no longer drives the precession. In a rate gyroscope the outer gimbal is attached to the frame of the aircraft. The axis of rotation of the outer gimbal—the z-axis in Fig. 4.20—is the input axis about which we are interested in measuring the angular velocity of the aircraft frame. Thus, for the yaw rate the input axis would be the vertical axis; for the pitch rate it would be the lateral axis; and, finally, to measure the roll rate the input axis would be the longitudinal axis (see Fig. 4.18).
Fig. 4.21 Restrained rate gyroscope
When used as a rate gyroscope—that is, as a sensor of the angular velocity of the aircraft’s frame about the input axis—the inner gimbal is connected to a torsion spring with spring constant K and a viscous damper, as shown in Fig. 4.21. The other end of the spring is attached to the frame of the outer gimbal. The disc is spun at a constant angular speed driven by an external drive. In other words, the magnitude of the angular momentum of the disc about its spin axis—|L|—is held constant. If the aircraft’s frame turns about the input axis at angular speed ω then the inner gimbal responds to the angular velocity of the outer gimbal about the input axis by undergoing a steady state angular displacement θ given by

θ = ω|L|/K

where |L| is the magnitude of the angular momentum of the spinning disc about the spin axis; see [Wertz 1978, Lawrence 1998, Northrop 2005] for a more detailed discussion. Knowing the constant value of |L| and the spring constant K, a measurement of the angular displacement of the spin axis, namely θ, yields the rate of turn, ω, of the aircraft about the axis of interest.
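Inverting the steady-state relation above gives the rate-gyro readout in one line; a sketch with illustrative names and made-up test values:

```python
def turn_rate(theta_rad, L_mag, K):
    """Rate of turn (rad/s) about the input axis from the steady-state
    inner-gimbal deflection theta, using omega = K * theta / |L|."""
    return K * theta_rad / L_mag
```

The round trip is exact by construction: a turn rate ω deflects the gimbal by θ = ω|L|/K, and the function recovers ω from θ.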
4.5.1.2 Rate Integrating Gyroscopes A simple modification allows the restrained rate gyroscope described above to measure the total angle by which the aircraft’s frame has turned about the input axis—that is, the integral of the rate of turn over an interval of time. The spring shown in Fig. 4.21 is removed. Then the resistive torque comes only from the dashpot, which resists the axle’s precession with a torque that varies as its angular velocity of precession. The angular precession velocity of the axle initially increases until the resistive torque provided by the dashpot exactly counteracts the torque driving the precession. Then the axle reaches a terminal angular precession velocity that is proportional to the angular velocity of rotation of the aircraft about the input axis. Thereafter, the angular displacement of the axle about the output axis increases linearly with the angular displacement of the outer gimbal about the input axis. A measurement of the axle’s angular displacement about the output axis thus yields the angular displacement of the aircraft about the input axis. Since the angular displacement of the axle about the output axis can increase without bound, a servomotor is usually used to continuously rotate the axle in the opposite sense (to its angular precession velocity) to return it to the resting position. By calculating the amount of rotation that it had to induce, the servomotor senses the total amount of rotation that the axle would have undergone were it not for the counteracting servomotor; see [Wertz 1978, Lawrence 1998, Northrop 2005].
4.5.1.3 Drift Errors Although the angular momentum of a spinning wheel is conserved in an ideal gyroscope that is not subjected to other external forces, the behavior of a real gyroscope deviates from ideality. The bearings in a real gyroscope are not frictionless. As a result the spinning wheel experiences an external torque, which over time has an accumulated impact on the angular momentum of the spinning wheel. Secondly, a gyroscope that is orbiting the earth is not in an inertial frame, and the gravitational field of the earth induces deviations from ideal behavior. Due to non-idealities, such as those mentioned above, a real gyroscope displays drift of its spin axis and needs to be recalibrated periodically. The mechanical gyroscopes discussed above played a critical role in navigation until recently. With the advent of laser gyroscopes, which have no moving parts and are not plagued with the shortcomings mentioned above, the prominence of mechanical gyroscopes has declined. In the following paragraphs we discuss the laser gyroscopes.
4.5.2 Laser Gyroscopes Laser gyroscopes are based on the Sagnac effect. The next section presents a simple derivation of the Sagnac effect.
4.5.2.1 Sagnac Effect and Fiber Optic Gyroscope Consider a source of light attached to a disc of radius R, rotating counterclockwise with angular velocity ω, as shown in Fig. 4.22. Assume that the source simultaneously emits two light pulses of the same initial phase and wavelength λ. The two pulses travel along the boundary of the disc in opposite directions. The pulse moving in the clockwise direction is labeled CWP and that traveling in the counterclockwise direction is labeled CCWP in Fig. 4.22. The source is also assumed to have a receiver built into it. Assume that the CWP reaches the receiver at time τc and the CCWP reaches the receiver at time τcc. In time τc the receiver has moved a distance L = Rωτc.
Fig. 4.22 Sagnac effect
Therefore,

cτc = 2πR − Rωτc ⇒ τc = 2πR/(c + Rω) ⇒ φc = 2πcτc/λ = 4π²cR/(λ(c + Rω))

where φc is the phase shift of the CWP relative to its initial phase, when detected by the receiver. A similar analysis for CCWP shows that its phase shift φcc is

φcc = 2πcτcc/λ = 4π²cR/(λ(c − Rω))

⇒ Δφ = φcc − φc = 8π²cR²ω/(λ(c² − R²ω²)) ≈ (8π²R²/(λc))ω,  if Rω ≪ c

If ξ > 1, the system is said to be overdamped. It is critically damped when ξ = 1. In the overdamped and critically damped cases the system relaxes to equilibrium (x = 0) without oscillations. Systems are designed to relax to equilibrium rapidly and without oscillations, and are therefore usually critically damped. The accelerometer is the sensor of choice in many of the subsystems in an aircraft for the following reasons. Accelerometers can measure both static (constant) and dynamic acceleration, and have excellent response over a large frequency band. Velocity and displacement can be derived from an accelerometer's output by performing single and double integration in time, respectively. The signature of destructive events, such as collision, is large acceleration or deceleration.
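The claim that velocity and displacement follow from single and double time integration of the accelerometer output can be illustrated with a short sketch. The sample rate and the constant-acceleration signal below are assumptions for illustration only.

```python
# A minimal sketch of recovering velocity and displacement from sampled
# accelerometer output by single and double time integration.
def integrate(samples, dt):
    """Trapezoidal integration of a uniformly sampled signal."""
    out = [0.0]
    for i in range(1, len(samples)):
        out.append(out[-1] + 0.5 * (samples[i - 1] + samples[i]) * dt)
    return out

dt = 0.001                      # 1 kHz sampling (assumed)
accel = [2.0] * 1001            # constant 2 m/s^2 for 1 s (assumed)
vel = integrate(accel, dt)      # single integration -> velocity
disp = integrate(vel, dt)       # double integration -> displacement
# After 1 s: v = a*t = 2 m/s, x = a*t^2/2 = 1 m
```

The trapezoidal rule is exact for the constant and linear signals used here; for real accelerometer data, bias errors accumulate under integration, which is one reason inertial outputs are periodically corrected by other sensors.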
4.6.1.2 Piezoelectric Accelerometer A piezoelectric accelerometer is based on the piezoelectric effect. In certain crystals (such as barium titanate and quartz) application of a force along select axes induces a nonzero surface charge density, and hence a potential difference, to appear as shown in Fig. 4.31. The induced voltage is proportional to the applied force. For example, for the piezoelectric material barium titanate the charge sensitivity is 140 picocoulomb/newton. That is, if a force of 1 newton is applied across the crystal along a certain axis, then the force induces a surface charge of +140 picocoulomb on one face and –140 picocoulomb on the opposite face. Consider a cube of barium titanate whose side is 1 millimeter long. Since the permittivity of the material is 12.5 nanofarad/meter, the capacitance across the opposite faces of the cube is 12.5 picofarad. Applying 1 newton of compressive force across the crystal therefore generates 11.2 volts across the faces. See [de Silva 2005].
Fig. 4.31 Induced voltage in piezoelectric crystal
Sensors 133
Since the voltage generated is proportional to the force applied on a piezoelectric crystal, the measurement of the induced voltage yields the value of the applied force. As shown in Fig. 4.31, the polarity of the voltage depends on the direction of the applied stress. The principle of the piezoelectric accelerometer is illustrated in Fig. 4.32. Consider a piezoelectric material of mass m and a proof mass M attached to a platform. Assume that the platform has negligible mass. If an externally applied force F makes the whole system accelerate upwards with an acceleration a then

a = F/(M + m)

Therefore, the piezoelectric material exerts an upward force Ma on the proof mass, which, in reaction, exerts a downward force of –Ma on the piezoelectric material. Similarly, the platform exerts an upward force of (M + m)a on the piezoelectric material. Thus, the net compressive force on the piezoelectric material is Fnet = ma. The net compressive force Fnet on the piezoelectric material can be measured by measuring the voltage V induced by the compressive force. The acceleration of the system is obtained as Fnet/m.

Fig. 4.32 Piezoelectric accelerometer
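The barium titanate arithmetic above, and the accelerometer relation Fnet = ma, can be checked with a short script. The charge sensitivity, permittivity and cube dimensions are from the text; the element mass and the acceleration to be sensed are made-up illustrative values.

```python
# Check of the barium titanate example and the piezoelectric accelerometer relation.
q_sens = 140e-12          # charge sensitivity, coulomb/newton (from the text)
eps = 12.5e-9             # permittivity, farad/meter (from the text)
side = 1e-3               # cube side, meters

C = eps * side**2 / side  # parallel-plate capacitance C = eps*A/d = 12.5 pF
V_per_newton = q_sens / C # 11.2 volts per newton, as in the text

m = 0.010                 # piezoelectric element mass, kg (assumed)
a = 9.81                  # acceleration to sense, m/s^2 (assumed)
F_net = m * a             # net compressive force on the element, F_net = m*a
V = V_per_newton * F_net  # induced voltage for this acceleration
```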
4.6.1.3 Piezoresistive Accelerometer While a piezoelectric accelerometer is based on the piezoelectric effect, a piezoresistive accelerometer relies on the phenomenon of piezoresistivity—that is, the change of resistivity of certain semiconductors and metals in response to a mechanical stress [Bao 2005]. The principle of piezoresistive accelerometers is illustrated in Fig. 4.33.

Fig. 4.33 Piezoresistive accelerometer
As we showed above, if the system comprising the platform, the piezoresistive material and the proof mass accelerates upward with acceleration a, then the net compressive force on the piezoresistive material is Fnet = ma. The mechanical stress induced by the compressive force causes a change in the resistance of the piezoresistive material. Measuring the change in resistance yields the value of the external force, and hence the acceleration of the system. The change in resistance of the piezoresistive material is measured using the Wheatstone's bridge circuit shown in Fig. 4.34, where the piezoresistive material is denoted by the variable resistor R. The resistance R in the Wheatstone's bridge is given by

R = [R1R3(V + Va) + VaR2R3] / [R2(V − Va) − VaR1]

Fig. 4.34 Wheatstone's bridge
Knowing the voltage V of the battery, the values of resistances R1, R2 and R3, and measuring the voltage Va one can obtain the new value of the resistance R. The change in resistance R yields the acceleration of the piezoresistive material.
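The bridge relation can be evaluated directly. The resistor and voltage values below are hypothetical; the balanced case (Va = 0) provides a sanity check, since the bridge then reduces to R = R1R3/R2.

```python
# Evaluating the Wheatstone's bridge relation from the text:
# R = (R1*R3*(V + Va) + Va*R2*R3) / (R2*(V - Va) - Va*R1)
def bridge_unknown_resistance(V, Va, R1, R2, R3):
    """Solve the bridge equation for the unknown (piezoresistive) resistance R."""
    return (R1 * R3 * (V + Va) + Va * R2 * R3) / (R2 * (V - Va) - Va * R1)

# A balanced bridge (Va = 0) must give R = R1*R3/R2.
R = bridge_unknown_resistance(V=10.0, Va=0.0, R1=100.0, R2=200.0, R3=400.0)
# -> 100*400/200 = 200 ohms
```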
4.6.1.4 Capacitive Accelerometer A capacitive accelerometer senses the acceleration of a system by measuring a corresponding change in capacitance, as illustrated in Fig. 4.35.
Fig. 4.35 Capacitive accelerometer
The capacitance of a parallel-plate capacitor is given by

C = εA/d

where ε is the electric permittivity of the medium between the parallel plates, A the area of overlap of the parallel plates, and d the distance between the plates. Therefore, a change in the distance between the two plates changes the capacitance. Consider a plate of mass M suspended from a spring with spring constant k, midway between plates P and Q, as shown in Fig. 4.35. Let the capacitance between plates P and M be denoted CPM and the capacitance between plates M and Q be denoted CMQ. When M is midway between P and Q, CPM = CMQ = C0.
If the entire chamber is now accelerated upwards with acceleration a, due to an externally applied force F, then the spring stretches by an amount b, and the restoring force of the spring satisfies

−kb = Ma ⇒ a = −(k/M)b

In other words, k and M being constants, measuring the displacement b gives the acceleration a of the system. As a result of the displacement of M the new values of the capacitances are

CPM = C0/(1 + δ);  CMQ = C0/(1 − δ)

where δ = b/d. The circuit shown in Fig. 4.36 is a Wheatstone's bridge of capacitors.

Fig. 4.36 Wheatstone's bridge of capacitors

In the Wheatstone's bridge shown in Fig. 4.36,
V0/Vi = −2δ/(4 − δ²) ≈ −δ/2,  if δ ≪ 1

Measuring V0 therefore yields δ, and hence the displacement b and the acceleration a.

We can compute similar errors εA, εB, εC and εD. The time on the clock at the receiver Q is then taken to be the value that minimizes the total error. That is, the time on the receiver clock tQ is the solution to the following optimization problem:
min_{tQ} {εA + εB + εC + εD + εE}    ...(2)
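As a rough illustration of eq. (2), the sketch below models each error term as the squared mismatch between a satellite's measured time of flight and the range implied by a candidate receiver clock offset, then scans for the offset that minimizes the total error. The error model, the ranges and the clock bias are all illustrative assumptions, not the text's exact definitions.

```python
# Illustrative sketch of choosing the receiver clock correction that minimizes
# the total error, in the spirit of eq. (2). All numbers are made up.
c = 299792458.0  # speed of light, m/s

true_ranges = [21.2e6, 22.9e6, 20.4e6, 23.5e6, 24.1e6]   # meters (assumed)
clock_bias = 3.3e-4                                       # receiver clock error, s (assumed)
measured_tof = [r / c + clock_bias for r in true_ranges]  # biased time-of-flight measurements

def total_error(tq_offset):
    # Sum of squared residuals if the receiver clock is assumed off by tq_offset.
    return sum((t - tq_offset - r / c) ** 2 for t, r in zip(measured_tof, true_ranges))

# Coarse scan over candidate clock offsets in 1-microsecond steps.
candidates = [i * 1e-6 for i in range(1000)]
best = min(candidates, key=total_error)
# best recovers the assumed bias of 3.3e-4 s (330 microseconds)
```

A real receiver solves the equivalent problem jointly for position and clock bias (four unknowns) rather than scanning, but the principle is the same: the clock bias is estimated rather than requiring an atomic clock in the receiver.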
Using the above minimization scheme obviates the need for a high-precision clock on the GPS receiver. The above argument makes the assumption that the signal travels in vacuum between a satellite and the receiver. In reality, for the last 1000 km of its nearly 20,000 km journey the signal travels through a region that is populated with earth's atmospheric gases and charged particles (the ionosphere), in which the refractive index is not exactly unity. The above calculation needs to be corrected for the varying refractive index of the earth's atmosphere. Besides this correction due to the earth's atmosphere, the other more important corrections of the elapsed time are the special relativistic correction due to the satellite's motion, and the general relativistic correction due to the variation of the gravitational field. While the exact corrections involve detailed calculations, we provide the gist of the arguments below. Consider two ideal synchronized clocks. One of them is placed at rest in an inertial frame E and the other is placed on a spacecraft that travels at constant velocity v along the x-axis as seen by an observer at rest in E. Consider a bulb at rest on the spacecraft. The bulb is turned on at a certain time by an observer OS on the spacecraft. tS seconds later, as
measured by the clock on the spacecraft, the bulb is turned off. Then an observer OE in frame E measures the time interval tE for which the bulb is on to be

tE = tS / √(1 − v²/c²)

Since v < c, tE > tS; that is, a clock on the spacecraft advances more slowly than a clock at rest in E, as seen by an observer at rest in E. Of course, to an observer at rest on the spacecraft a clock at rest in E appears to advance more slowly relative to a clock at rest on the spacecraft. Unfortunately, a clock at rest on the earth is not at rest in an inertial frame, since the earth is rotating about its own axis and is in an elliptical orbit around the sun. The satellite is not at rest in an inertial frame either. However, we can assume that the clocks on the earth and on the satellite are at rest in two different momentarily co-moving inertial frames. Assuming that the clock on the earth is at rest on the equator, the velocity of the clock, due to the earth's rotation, is about vE = 463.31 m/s. Assume that the GPS satellite is orbiting in the equatorial plane (and not in the plane inclined at 55° relative to the equatorial plane) with a period of half a sidereal day. Then the satellite's orbital velocity is

v = 3873.72 m/s
vrel = v – vE = 3410.41 m/s

⇒ (1 − vrel²/c²)^(−1/2) = 1 + 6.4706 × 10^(−11)
In the above calculation we are considering the instant at which the clock on the satellite is directly above the clock on earth and moving in the same direction. The simplifying assumption is used to get an estimate, and not the exact value, of the required correction. After about 24 hours elapse on the clock on the satellite, the clock at rest in E would have advanced by

tE = 24 × 3600 × (1 + 6.4706 × 10^(−11)) s

In other words, the clock on the spacecraft falls behind the clock on the earth by about 5.59 µs every day. Considering that an uncertainty of 10 m or less requires the measurement of elapsed time to an uncertainty of 33 ns or less, a discrepancy of 5.59 µs is more than 150 times the permitted uncertainty. So the elapsed time needs to be corrected for the special relativistic time dilation due to the satellite's motion. An even bigger effect is the general relativistic correction due to the gradient in the earth's gravitational field. For an exact calculation of this effect we would need to include the angular momentum of the earth, use the Kerr metric that describes a rotating gravitational source, and finally integrate the correction for varying polar and azimuthal angles along the trajectory of the satellite. However, by making a few simplifying assumptions, which are listed below, we can get a reasonably good estimate of the magnitude of the discrepancy that would arise if general relativistic corrections are not applied.
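The special relativistic estimate above can be reproduced numerically with the speeds given in the text:

```python
# Reproducing the special-relativistic time dilation estimate from the text.
import math

c = 299792458.0        # speed of light, m/s
v_sat = 3873.72        # satellite orbital speed, m/s (from the text)
v_earth = 463.31       # equatorial speed of an earth-bound clock, m/s (from the text)
v_rel = v_sat - v_earth

gamma = 1.0 / math.sqrt(1.0 - (v_rel / c) ** 2)
excess = gamma - 1.0                  # ~6.4706e-11
lag_per_day = excess * 24 * 3600      # ~5.59e-6 s, i.e. about 5.59 microseconds
```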
The following analysis hinges on the concept of proper time. Consider a coordinate system (t, r, θ, ϕ) for spacetime, where (r, θ, ϕ) provide a spherical coordinate system for space. The coordinate t is called the book-keeper time, the time kept for the whole coordinate system by some imaginary book-keeper, to emphasize that it is not the time that an observer at rest at (r, θ, ϕ) would see on his wristwatch. t is merely the first coordinate in a certain man-made coordinate system. The book-keeper time is related to the wristwatch time, also called the proper time, by the expression for the metric of the spacetime. For example, consider a universe that contains just a stationary spherically symmetric body of mass M, whose center is at the origin of the spherical coordinate system we have chosen. Then the time interval dτ that an observer at rest at (r, θ, ϕ) would see on his wristwatch is related to the time interval dt that the book-keeper would record as

dτ² = (1 − 2GM/(c²r)) dt²
In the interval dt that the global imaginary book-keeper records, an observer at rest at (r, θ, ϕ) sees the time on his wristwatch advance by an amount dτ as shown above. The relation shown above is part of the Schwarzschild metric [Wald 1984, Taylor 2016]; G is the gravitational constant, and c the velocity of light in vacuum. The remarkable feature of the above relation is that over the same book-keeper time interval dt observers at rest at different values of r notice that their wristwatches advance by different amounts. In particular, observers closer to the center of the spherical mass M—that is, observers at smaller values of r—see their wristwatches advance less than the wristwatches of observers who are at rest at larger values of r. The wristwatch of an observer at rest at r = 2GM/c² does not advance at all; time literally stands still for an observer at rest at this special value of r, which is called the Schwarzschild radius. Consider two observers O1 and O2 who are at rest at r1 and r2. When the book-keeper's time interval advances by dt, the wristwatches of O1 and O2 advance by amounts dτ1 and dτ2 respectively. We have,

dτ1/dτ2 = √[(1 − 2GM/(c²r1)) / (1 − 2GM/(c²r2))]
If we make two simplifications about the earth's movements we can apply the above argument to the case where observer O1 is at rest on the earth's surface and observer O2 is at rest at a distance of 26,561 km—that is, at approximately the same distance from the earth's center as the radius of the orbits of GPS satellites. The first simplification is to disregard the earth's rotation about its own axis and all of the other matter in the universe. We can then use the Schwarzschild metric. The second simplification is to regard the earth as a perfect sphere. While these assumptions are not strictly valid, they allow us to get a reasonably good estimate of the general relativistic corrections that must be applied.
Using the above simplifications and taking r1 = 6371 km, r2 = 26,561 km, G = 6.67408 × 10^(−11) N·m²·kg^(−2), M = 5.972 × 10^24 kg, we get,

dτ1/dτ2 = 1 − 5.2912 × 10^(−10)
Each time the wristwatch that is at rest at r2 advances by 1 second, the wristwatch that is at rest on the surface of the earth falls behind by 0.52912 nanoseconds. After 24 hours elapse on the wristwatch at r2, the wristwatch on the surface of the earth would have fallen behind by 45.716 microseconds. A discrepancy of 45.716 microseconds translates to a position uncertainty of 13.705 km. And the error, being cumulative, grows with time. Therefore, in order to keep the positional errors in GPS to a few meters, one must apply the necessary general relativistic corrections to compensate for the earth’s gravitational gradient, which makes the clocks on the surface of the earth run slower than the clocks in satellites.
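The general relativistic estimate above can be reproduced numerically with the constants given in the text:

```python
# Reproducing the general-relativistic (Schwarzschild) estimate from the text.
import math

G = 6.67408e-11        # gravitational constant, N.m^2/kg^2
M = 5.972e24           # earth's mass, kg
c = 299792458.0        # speed of light, m/s
r1 = 6371e3            # earth's surface radius, m
r2 = 26561e3           # GPS orbital radius, m

ratio = math.sqrt((1 - 2 * G * M / (c * c * r1)) / (1 - 2 * G * M / (c * c * r2)))
deficit = 1.0 - ratio                    # ~5.2912e-10 per second
lag_per_day = deficit * 24 * 3600        # ~45.7 microseconds per day
position_error = lag_per_day * c         # ~13.7 km of range error if uncorrected
```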
4.7.7.2 Corrections to Keplerian Geosynchronous Orbits Recall that the location of a GPS receiver is computed using the locations of GPS satellites and their distances from the receiver. The precision and synchronization of clocks, discussed above, pertain to computing the distance between a satellite and a receiver. In the following discussion we focus on determining the location of a GPS satellite itself. In computing the ideal geosynchronous orbits of satellites we make several simplifying assumptions. We assume that the earth is a perfect sphere. We ignore the perturbations of the geosynchronous orbit due to the gravitational pull of the sun and the moon. We ignore the pressure exerted on the satellite by solar radiation. An orbit calculated using such simplifying assumptions—that is, by treating the earth and satellite as point objects in a universe devoid of other matter—is called a Keplerian orbit [Helfrick 2002]. In reality, the earth is not exactly spherical. It has a smaller radius at the poles than it does at the equator. As a result the earth's gravitational field is not spherically symmetric. The departure of the gravitational field from spherical symmetry along a satellite's orbit causes the real orbit to deviate from the Keplerian orbit. The earth's orbit around the sun is elliptical with an eccentricity of 0.0167, while the eccentricity of the moon's elliptical orbit around the earth is 0.0549. Thus, the gravitational pull of the sun on the satellite varies seasonally, and to a smaller extent with the motion of the satellite itself. The moon's pull on the satellite varies with the relative motion of the satellite and the moon. Finally, the de Broglie momentum p of a photon of wavelength λ is given by

p = h/λ

Photons, viewed as particles, transfer momentum to the satellite when they are reflected or absorbed by the satellite.
The amount of momentum that the solar photons transfer to the satellite per unit time per unit area is the solar radiation pressure. The radiation pressure also cumulatively perturbs the Keplerian orbit, although the perturbation is much weaker than the other perturbations mentioned above.
Left uncorrected, the perturbations discussed above can cause the satellite's orbit to change significantly over time. Given the importance of knowing the exact orbit of the satellite, two approaches to correct for the perturbations are: (1) periodically fire the satellite's built-in rockets to correct for the perturbations and keep the satellite in a known orbit, or (2) calculate the cumulative impact of the perturbations, and use the calculation to obtain the exact orbit of the satellite. In practice, both approaches are used. A GPS satellite actually transmits the information about the cumulative perturbation of its orbit to the GPS receivers. The GPS receiver then applies the necessary corrections in its calculations to determine the satellite's orbit with high accuracy. Such information, transmitted by the satellites to the GPS receivers, is called ephemeris data. When the cumulative perturbation exceeds a threshold the satellite's rockets are fired, under the guidance of the earth-based control stations, to offset the effect of accumulated perturbations.
4.7.7.3 Ionospheric Delay and Differential GPS Most of the errors discussed above are deterministic. For example, the time dilation due to the earth's gravitational field requires a correction that is calculable. Errors due to ionospheric delay—the delay experienced by the signal as it travels through the ionosphere—cannot be predicted accurately. The ionosphere changes not only from location to location, day to night and season to season, but also in response to solar activity such as solar flares. The technique of Differential GPS (DGPS) has been developed to correct for ionospheric variations. In DGPS a network of GPS receivers, called reference locators, is installed on the earth at fixed known locations. The reference locators calculate their own locations using the data received from satellites. Being at fixed known locations, and hence knowing their own exact coordinates, they can determine the discrepancy between their known coordinates and those that they compute using the data from the GPS satellites. The discrepancy, or residual error, is attributed to indeterminate factors such as the ionospheric and tropospheric delays. Each reference locator broadcasts information about the residual error, which is used by nearby GPS receivers to correct for the indeterminate factors.
4.7.7.4 Data Transmitted by a GPS Satellite A satellite in the space segment of GPS continuously transmits two types of data: 1. Pseudo-Random Noise (PRN) ranging code 2. Navigation data The PRN ranging codes that are available for civilian use include the P-code (Precision Code) and the C/A-code (Coarse/Acquisition Code). Both of the codes are described below. The navigation data from a Satellite Vehicle (SV) includes the satellite's ephemeris data, system time and almanac data, which contains information about the health and status of all the satellites in the space segment.
The PRN ranging code and navigation data are transmitted by GPS satellites over several RF data links (signals). Four of the GPS signals are available for civilian use at this time [GPS4 2017]. They are the:
1. L1 C/A signal
2. L2 C signal
3. L5 signal
4. L1 C signal
We present details of the L1 C/A signal below. A complete description of the PRN ranging codes, the characteristics of the navigation data and the different signals, including L1 C/A, can be found in the Interface Specification [GPS1 2013, GPS2 2013, GPS3 2013]. We do not cover all the details below. We restrict the following discussion to just a coarse description of the P-code, C/A-code and the L1 C/A signal.
4.7.7.5 P-code The GPS Satellite Vehicles (SVs) are given ID numbers 1 through 37. The P-code bit stream generated by the SV with ID i is denoted Pi(t), where i = 1, 2, ..., 37. Pi(t) is generated as follows.

Pi(t) = X1 ⊕ X2i,  i = 1, 2, ..., 37

where X1 is a bit stream generated by taking the modulo-2 sum of two bit streams X1A and X1B.

X1 = X1A ⊕ X1B

X1A and X1B are pseudo-random bit streams generated using 12-bit LFSRs and are described by the following polynomials (see § 3.4.2.6).

X1A: 1 + X^6 + X^8 + X^11 + X^12
X1B: 1 + X^1 + X^2 + X^5 + X^8 + X^9 + X^10 + X^11 + X^12

X1A is started with the initial vector 001001001000 and X1B with the vector 010101010100. On the other hand, the X2i sequence is generated by delaying the X2 sequence by i chips (bits), where i = 1, 2, ..., 37. The X2 sequence is generated as

X2 = X2A ⊕ X2B

where X2A and X2B are generated by 12-bit LFSRs with the following polynomials.

X2A: 1 + X^1 + X^3 + X^4 + X^5 + X^7 + X^8 + X^9 + X^10 + X^11 + X^12
X2B: 1 + X^2 + X^3 + X^4 + X^8 + X^9 + X^12

X2A is started with the initial vector 100100100101 and X2B with the vector 010101010100.
4.7.7.6 C/A-code In the SV with ID i, for i = 1, 2, …, 37, the C/A code is a Gold code Gi(t).
Gi(t) = G1 ⊕ G2i

that is obtained by taking the modulo-2 sum of two pseudo-random sequences G1 and G2i, which are generated by 10-bit LFSRs. The bit stream G1 is generated by a 10-bit LFSR (see § 3.4.2.6) described by the polynomial

G1: 1 + X^3 + X^10

The bit stream G2i, for i = 1, 2, ..., 37, is obtained by delaying the bit stream G2 by a specified number of bits as described in Table 3-Ia of [GPS1 2013]. The sequence G2 itself is generated using a 10-bit LFSR described by the polynomial

G2: 1 + X^2 + X^3 + X^6 + X^8 + X^9 + X^10

Both the G1 and G2 bit streams are started with the 10-bit pattern 1111111111.
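The Gold-code construction above can be sketched in a few lines. The LFSR tap positions and all-ones seeds follow the polynomials just given; the delay of 5 chips used below is an arbitrary example, not a value from the delay table in the interface specification.

```python
# Sketch of generating a C/A Gold code: two 10-bit LFSRs (G1: taps 3,10;
# G2: taps 2,3,6,8,9,10), both seeded with all ones, with G2 delayed by a
# per-satellite number of chips.
def lfsr_sequence(taps, n_bits=10, length=1023):
    """Fibonacci LFSR: output the last stage; feedback is XOR of tapped stages."""
    reg = [1] * n_bits                            # all-ones initial state
    out = []
    for _ in range(length):
        out.append(reg[-1])                       # output stage n_bits
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]                      # taps are 1-indexed stages
        reg = [fb] + reg[:-1]                     # shift in the feedback bit
    return out

g1 = lfsr_sequence([3, 10])
g2 = lfsr_sequence([2, 3, 6, 8, 9, 10])

delay = 5                                          # example delay (not from the spec table)
g2i = g2[-delay:] + g2[:-delay]                    # G2 delayed by `delay` chips
ca_code = [a ^ b for a, b in zip(g1, g2i)]         # Gold code: modulo-2 sum
```

Since both polynomials are primitive, each LFSR produces a maximal-length sequence of period 1023 containing 512 ones and 511 zeros.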
4.7.7.7 L1 C/A Signal The L1 C/A signal uses a carrier wave of frequency 1575.42 MHz. As shown in Fig. 4.50, a quadrature-phase (phase-shifted by 90°) carrier wave is BPSK-modulated by the bit stream DPi that contains the Pi-code and the navigation data D(t). An in-phase carrier wave (phase shifted by 0°) is BPSK-modulated by the bit stream DC/Ai that contains the C/A-code and the navigation data D(t). The modulated carriers are then summed to obtain the L1 C/A signal. The subscript i denotes the transmitting SV's ID, where i = 1, 2, …, 37.

Fig. 4.50 Generation of L1 C/A signal. ⊕ denotes XOR operation
The Pi (i = 1, 2, ..., 37) bit stream's data rate is 10.23 Mbps (megabits per second), while the C/Ai (i = 1, 2, ..., 37) bit stream's data rate is 1.023 Mbps, as seen by a stationary earth-based receiver (the data rate seen by an observer on a satellite differs from that seen by a stationary earth-based observer due to relativistic corrections). The D(t) bit stream's data rate is 50 bps. In summary, the L1 C/A signal has four separate frequencies of interest: D(t) at 50 bps, C/A at 1.023 Mbps, P at 10.23 Mbps, and finally the carrier wave at 1575.42 MHz. As described above, the Pi and C/Ai bit streams are sequences of pseudo-random numbers (PRNs). Each satellite generates its own characteristic P and C/A bit streams. The SV with ID number i, for i = 1, 2, ..., 32, uses Pi and C/Ai. The bit streams Pi and C/Ai for i = 33, 34, ..., 37 are reserved for other uses, such as ground-based transmitters.
4.7.7.8 GPS Receiver A GPS receiver operates in two modes: acquisition and tracking. In the acquisition mode, a receiver searches for satellites in its line-of-sight, and locks on to the signals transmitted by the satellites. Immediately after a receiver is turned on, for instance, it operates in the acquisition mode. Once the receiver has locked on to the signals from the satellites in its line of sight, it switches to the tracking mode. In the tracking mode, a receiver has to work to maintain its lock on the signal because of the jitters in the frequency of transmission. The main activity in the tracking mode, however, is the extraction of the navigation data contained in the signal. A GPS receiver needs to have at least four satellites in its line-of-sight. The GPS space segment ensures that at least six satellites are in the line-of-sight of every point in the troposphere. Therefore, a typical GPS receiver has at least six channels operating in parallel, with each channel capable of processing the data from a different satellite. In the following discussion, we will focus on just one channel.

Fig. 4.51 Architecture of a generic civilian GPS receiver (Antenna → Signal Preprocessing → Signal Acquisition/Tracking → Data Demodulation → Navigation)
The architecture of a generic GPS receiver is shown in Fig. 4.51. The signal-in-space (SIS)—that is, the signal transmitted by a satellite—is picked up by the receiver’s antenna. The RF signal received by the antenna then goes through a preprocessing stage, which includes down-conversion of the RF signal to Intermediate Frequency (IF) signal. The signal acquisition/tracking system enables the receiver to lock on to the satellite signal and extract the BPSK-modulated DC/Ai(t) bit stream (see Fig. 4.50) from the signal, where i is the label of the transmitting SV; hereafter we drop the subscript i with the understanding that it is implicit in the notation. The D(t) data bit stream is extracted from the BPSK-modulated DC/A(t) bit stream by a demodulator and provided to the navigation unit, which contains
the application software for GPS-based navigation. In the following discussion we take a closer look at each of the components of the receiver.
Antenna A key feature of the antenna is that it is designed to have a high gain for signals from high elevation angles and a low gain for signals from low elevation angles. Suppression of signals coming from low elevation angles enables a receiver to disregard multi-path signals—reflections of the main signal off ground-based structures—that arrive at relatively lower angles to the horizon. Signals from satellites, on the other hand, usually come from high elevation angles. When a satellite falls below a threshold elevation angle its signals are suppressed, as the receiver searches for other satellites to replace it.
Signal Preprocessing Details of the signal preprocessing unit are shown in Fig. 4.52. The input to this unit is the RF signal, and the output is a digitized version of the down-converted IF signal.

Fig. 4.52 RF filter, RF amplifier and down-converter (RF Filter → RF LNA → IF Mixer driven by the IF Oscillator → IF Filter → IF Amplifier → ADC)
RF Filter: The received signal is generally contaminated with out-of-band frequencies and the image frequencies. These undesirable frequencies are removed using appropriate RF filters. Low Noise Amplifier: The filtered low-power incoming signal is fed to a Low Noise Amplifier (LNA). While an ordinary amplifier amplifies both the signal and the noise, an LNA differentially amplifies the signal thus keeping the Signal-to-Noise Ratio (SNR) of the amplified signal close to the SNR of the input signal. Down-conversion: Following filtering and amplification, the incoming signal is converted from RF, which is a rather high frequency, to a lower Intermediate Frequency (IF). One of
the techniques used for down-conversion is heterodyning, which is illustrated in Fig. 4.52. For simplicity, assume that the incoming signal is

SIN(t) = A sin(ωRF t + φ)

A wave of frequency ωRF + ωIF, generated by the IF oscillator (IFO), that is,

SIFO(t) = A sin[(ωRF + ωIF)t]

is mixed with the input signal to get

SM(t) = SIN(t) SIFO(t) = (A²/2){cos(ωIF t − φ) − cos[(2ωRF + ωIF)t + φ]}

The mixed signal contains a component at the desired IF as well as a high frequency component. The components around the frequency 2ωRF + ωIF are removed by an IF filter. The resulting IF signal is amplified by an IF amplifier prior to further processing. Recalling that the L1 C/A signal is

SL1 C/A; RF(t) = DC/Ai(t) sin(ωRF t) + DPi(t) sin(ωRF t + π/2)
the down-converted signal is

SL1 C/A; IF(t) = DC/Ai(t) cos(ωIF t) + DPi(t) sin(ωIF t)

That is, the down-converted signal has a component modulated by the DC/Ai bit stream and another component, 90° out of phase with the first, that is modulated by the DPi bit stream. Analog-to-Digital Conversion: The signal SL1 C/A; IF is digitized by an ADC. The output of the ADC is a bit stream that is used for acquisition and tracking.
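The product-to-sum identity underlying the mixer, sin(a)·sin(b) = (1/2)[cos(a − b) − cos(a + b)], can be checked numerically. The frequencies, amplitude and phase below are arbitrary illustrative values.

```python
# Numerical check of the heterodyne mixing identity: the product of the RF
# input and the IFO wave equals a component at w_IF plus one at 2*w_RF + w_IF.
import math

A = 1.0
w_rf = 2 * math.pi * 100.0    # arbitrary RF angular frequency
w_if = 2 * math.pi * 10.0     # arbitrary IF angular frequency
phi = 0.3                     # arbitrary input phase

def s_in(t):  return A * math.sin(w_rf * t + phi)
def s_ifo(t): return A * math.sin((w_rf + w_if) * t)

def s_m(t):
    # Mixed signal written via the product-to-sum identity.
    return (A * A / 2) * (math.cos(w_if * t - phi)
                          - math.cos((2 * w_rf + w_if) * t + phi))

# The direct product s_in(t)*s_ifo(t) and the identity form s_m(t) agree
# at every t; the IF filter then keeps only the cos(w_if*t - phi) term.
```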
Signal Acquisition/Tracking The signal acquisition/tracking module is illustrated in Fig. 4.53. The main task in the signal acquisition/tracking module is to align the incoming down-converted signal from SVi—the satellite with ID i (1 ≤ i ≤ 37)—and a replica of the BPSK-modulated C/Ai-code generated locally within the receiver. The phase of the SIS fluctuates due to Doppler shift and factors such as atmospheric turbulence. The Doppler shift is continuously corrected by the Phase Lock Loop (PLL), shown in Fig. 4.53. We discuss the PLL in greater detail below. Secondly, in order to decode the transmissions sent by SVi a receiver generates its own copy of the C/Ai-code, called the replica code. The replica code, being a bit stream, needs to be aligned with the C/Ai bit stream being generated by SVi. That is, the start of SVi's bit stream needs to be synchronized with the start of the replica bit stream. The synchronization, which involves sliding the replica bit stream past SVi's bit stream until alignment is achieved, is done by the Delay Lock Loop (DLL). We discuss the modules in Fig. 4.53 in greater detail below.
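The sliding-alignment idea behind the DLL can be illustrated with a toy sketch: slide a replica of the code past the incoming stream and pick the shift with maximum correlation. The random stand-in code and the delay value are assumptions; a real DLL refines the alignment continuously using early/prompt/late correlators rather than an exhaustive search.

```python
# Toy illustration of code alignment by sliding correlation.
import random

random.seed(1)
code = [random.choice([-1, 1]) for _ in range(1023)]   # stand-in for a C/A code

true_delay = 217                                       # assumed, unknown to the receiver
incoming = code[-true_delay:] + code[:-true_delay]     # received stream, delayed

def correlation(shift):
    """Correlate the incoming stream against the replica rotated by `shift` chips."""
    shifted = code[-shift:] + code[:-shift] if shift else code
    return sum(a * b for a, b in zip(incoming, shifted))

best_shift = max(range(1023), key=correlation)
# best_shift recovers true_delay; at alignment the correlation peaks at 1023
```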
Fig. 4.53 Signal acquisition and tracking in a generic GPS receiver. The digitized SL1 C/A; IF is correlated (Early, Prompt, Late) against the replica C/A code; the Phase Lock Loop updates the numerically controlled IF oscillator, and the Delay Lock Loop updates the replica code delay; on lock, the prompt output goes to data demodulation. The µP (microprocessor) that handles the necessary corrections is not shown
Doppler Correction: The Doppler shift in frequency due to the motion of a satellite relative to the GPS receiver is given by

freceived = ftransmitted √[(c + v)/(c − v)]

where v, the (instantaneous) speed of the satellite relative to the receiver, is positive if the satellite is moving towards the receiver and negative otherwise. For a satellite orbiting the earth at a distance of about 20,000 km with a period of 12 hours, the orbital speed is about 2900 m/s. The velocity of a receiver stationed on the earth's equator, due to the earth's rotation, is approximately 460 m/s. Therefore, conservatively, the speed of the satellite relative to an earth-based observer is approximately 2440 m/s. For a carrier frequency of 1575.42 MHz (the L1 C/A signal's SIS frequency) the Doppler shift in frequency is 12.81 kHz. The preceding calculation is a simple back-of-the-envelope estimate of the size of the Doppler shift. The exact Doppler shift varies with time as the receiver and satellite follow their separate trajectories. One must also consider the transverse Doppler shift at the point of closest approach. The magnitude of the above estimate shows that the frequency of the received waves must be dynamically corrected for Doppler shift. A Phase Lock Loop (PLL), shown in Fig. 4.53, continuously tracks the difference between the frequency of the received signal and that of the IF wave generated by the local oscillator in the receiver, and corrects for the Doppler shift in a GPS satellite's transmission. The PLL is discussed below. See [Liu 1999, Kaplan 2005] for a more detailed discussion of Doppler correction in satellite communications.
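The back-of-the-envelope figure above can be reproduced with the numbers from the text:

```python
# Reproducing the Doppler-shift estimate: relative speed ~2440 m/s,
# L1 carrier 1575.42 MHz.
import math

c = 299792458.0       # speed of light, m/s
v = 2440.0            # relative speed toward the receiver, m/s (from the text)
f_tx = 1575.42e6      # L1 carrier frequency, Hz

f_rx = f_tx * math.sqrt((c + v) / (c - v))   # relativistic Doppler, approaching
shift = f_rx - f_tx                          # ~12.8 kHz, matching the text
```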
Sensors 159
Numerically Controlled Oscillator (NCO): An NCO, also called a Direct Digital Frequency Synthesizer (DDFS), can be used to generate waveforms of arbitrary shape, frequency and phase. Further, the frequency and phase of a waveform generated by an NCO can be changed dynamically through control inputs to the NCO. At its core, an NCO contains a Look-Up-Table (LUT) that stores (address, amplitude) pairs for the desired waveform. For example, for a square wave of unit amplitude the LUT stores amplitude 1 at addresses 0 through 2^(n−1) − 1 and amplitude −1 at addresses 2^(n−1) through 2^n − 1; for a saw-tooth wave of unit amplitude it stores amplitude A/2^n at address A. Both LUTs are shown in Fig. 4.54.

Fig. 4.54 Numerically controlled oscillator and look-up-tables for sample waveforms
Given an n-bit address A, the LUT outputs the amplitude at that address. The addresses encode the phase of the wave, with address A, 0 ≤ A ≤ 2^n − 1, corresponding to the phase value ϕ(A) = (360° · A)/2^n. The amplitude corresponding to address A is the amplitude of the wave at phase ϕ(A). The architecture of a simple NCO is shown on the left in Fig. 4.54. In each clock cycle the n-bit address in the accumulator is incremented by the step size, which is stored in a separate step-size register, to generate a new n-bit address A. The amplitude of the waveform stored in the LUT, corresponding to phase ϕ(A), is then output by the LUT. The addition is done modulo 2^n, so that the accumulator cycles after reaching its maximum value. n is typically taken to be 32. If the frequency of the system clock is f_c, the desired frequency of the output waveform is f_d, and the step size is s, then [Analog 2017]

f_d = (s · f_c)/2^n

f_c and n being constant, one can tune the frequency of the output waveform by loading an appropriate value into the step-size register. Secondly, the phase of the output waveform can be tuned by loading an appropriate initial address. For example, if the desired output waveform is sin(ω_d t + θ), that is, the desired phase at t = 0 is θ (in degrees), then the initial address would be
160 Principles of Modern Avionics
A_initial = (2^n · θ)/360°
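A small software model makes the accumulator-and-LUT mechanism concrete. The sketch below uses an 8-bit address (rather than the 32 bits typical in hardware) and a sine LUT; the function and variable names are illustrative, not from any real NCO driver.

```python
import math

N_BITS = 8            # address width; real NCOs typically use 32
SIZE = 1 << N_BITS
LUT = [math.sin(2 * math.pi * a / SIZE) for a in range(SIZE)]  # one cycle of sine

def nco_samples(f_clock, f_desired, n_samples, theta_deg=0.0):
    """Phase accumulator advances by the step size each clock, modulo 2^n.
    The step is chosen so that f_d = step * f_c / 2^n; the initial address
    loaded into the accumulator sets the starting phase."""
    step = round(f_desired * SIZE / f_clock)
    acc = round(SIZE * theta_deg / 360.0) % SIZE
    out = []
    for _ in range(n_samples):
        out.append(LUT[acc])
        acc = (acc + step) % SIZE
    return out

# A 1 kHz sine from a 256 kHz clock: step = 1, so one cycle spans 256 samples
samples = nco_samples(256e3, 1e3, 256)
```

Loading a different step size retunes the frequency, and loading a nonzero initial address (the theta_deg argument here) shifts the phase, exactly as described in the text.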
For a more detailed discussion of the NCO the reader is referred to [Analog 2017].

Delay Lock Loop (DLL): In order to check if the received signal is coming from SVi, where 1 ≤ i ≤ 37, the GPS receiver generates a local replica of the C/Ai code corresponding to SVi; see § 4.7.7.5 and § 4.7.7.6. The generated local replica of the C/Ai code needs to be compared with the received C/A code to determine if there is a match. A match indicates that the received ranging code is coming from SVi. A comparison between the received C/A code and the generated local replica code, however, requires that the start bits of the two code sequences be aligned. Such alignment is achieved by the DLL [GMV 2017]. The basic principle of the DLL is illustrated in Fig. 4.55. The C/A code transmitted by an SV is shown at the top. The receiver generates a local replica of the C/A code, called P(t) (P for Prompt; not to be confused with the ranging P-code), another bit stream E(t) = P(t + τ) (E for Early), obtained by shifting P to the left by 1 chip (τ), and a third bit stream L(t) = P(t − τ) (L for Late), which is P(t) shifted to the right by 1 chip.

Fig. 4.55 Tracking delay using E, P and L bit streams

Recall that the graph of f(t − α) is obtained by translating the graph of f(t) by α units to the right along the t axis. As the three bit streams (E, P and L) are translated in lock-step along the time axis, the correlation between the P and C/A bit streams is maximized when they are perfectly aligned, as shown on the right in Fig. 4.55. Let d* be the optimal delay that leads to a perfect alignment between C/A and P, that is, P(t + d*) = C/A(t). The autocorrelation of a real signal f(t) is defined as

A_f(α) = (1/T) ∫₀ᵀ f(t) f(t − α) dt

where [0, T] is the time window of observation, with T being sufficiently large. The autocorrelation function of the C/A code has the qualitative form shown in Fig. 4.56 [Grewal 2013]. A_C/A(α) is also shown at the bottom of Fig. 4.55.

Fig. 4.56 Autocorrelation function for the C/A code (qualitative plot)
A_C/A(α) is negligible when the magnitude of α is greater than the time interval of 1 chip (τ_c = 1/1.023 µs ≈ 0.9775 µs), and reaches its maximum when α = 0. As mentioned above, we take d* to be the optimal delay that leads to a perfect alignment between C/A and P, that is, P(t + d*) = C/A(t). When the E, P and L bit streams are translated by the optimal delay d*, E(t + d*) = C/A(t + τ_c). Therefore, the averaged inner product of E(t + d*) and C/A(t), called the averaged cross-correlation of E(t + d*) and C/A(t) and denoted C(E(t + d*), C/A(t)), is

C(E(t + d*), C/A(t)) = (1/T) ∫₀ᵀ E(t + d*) · C/A(t) dt = A_C/A(−τ_c) ≈ 0

Similarly, L(t + d*) = C/A(t − τ_c), and C(L(t + d*), C/A(t)) = A_C/A(τ_c) ≈ 0. At the optimal delay d*, C(P(t + d*), C/A(t)) = A_C/A(0), as shown at the bottom right in Fig. 4.55.
Thus, the signature of alignment of the P and C/A bit streams, that is, the signature of the optimal delay d*, is a maximum averaged cross-correlation between P and C/A together with nearly vanishing averaged cross-correlations between C/A and E, and between C/A and L. On the other hand, for a delay of d* − β the averaged cross-correlations of E, P and L with C/A, shown at the bottom left in Fig. 4.55, do not display the signature of the optimal delay and indicate that the E, P and L bit streams must be moved to the left in lock-step. The search for the optimal delay d* is facilitated by the so-called S-wave, defined as

S(d) = C(E(t + d), C/A(t)) − C(L(t + d), C/A(t))

A qualitative plot of the S-wave is shown in Fig. 4.57. At the optimal delay d* the S-wave displays a zero-crossing. The DLL searches for the delay that gives a zero-crossing of the S-wave. The feedback loop for the DLL is shown in Fig. 4.53.

Fig. 4.57 S-wave. The zero-crossing represents the optimal delay
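The behaviour of the prompt correlation and the S-wave can be demonstrated with a toy simulation. The sketch below substitutes a random ±1 sequence for a real C/A code and uses integer-chip delays; the names and the chosen delay are illustrative only.

```python
import random

random.seed(7)
N = 1023
code = [random.choice((-1, 1)) for _ in range(N)]       # stand-in for a C/A code
D_TRUE = 350                                            # unknown delay of the incoming code
received = [code[(t - D_TRUE) % N] for t in range(N)]

def corr(d, shift=0):
    """Averaged cross-correlation of the received code with a replica
    delayed by d chips and advanced by 'shift' chips (Early: +1, Late: -1)."""
    return sum(received[t] * code[(t - d + shift) % N] for t in range(N)) / N

def s_curve(d):
    """S(d) = C(Early) - C(Late); crosses zero near the true delay."""
    return corr(d, +1) - corr(d, -1)

best = max(range(N), key=corr)   # prompt correlation peaks at the true delay
```

At the true delay the prompt correlation is at its maximum, the early and late correlations nearly vanish, and the S-curve changes sign, which is exactly the lock signature described above.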
Fig. 4.58 Cross-correlation computation (shaded box)
Cross-Correlation Computation: The details of the cross-correlation computation unit, shown in Fig. 4.53, are presented in Fig. 4.58. The Doppler-corrected IF carrier generated by the local IF-NCO is multiplied (BPSK-modulated) by the E, P and L codes generated by the Replica C/A Code Generator. The cross-correlation unit computes the cross-correlation between the received in-phase component and the modulated IF carrier waves. For example, denoting the phase/frequency-locked (see below) replica IF carrier wave as c(t), the Early code as E(t), and the in-phase component of the down-converted satellite signal as I(t), the cross-correlation is

C(d) = ∫₀ᵀ I(t) · c(t) · E(t − d) dt
where T is a time interval contained within one chip of the 50-bps data bit stream. The cross-correlation values for the prompt code P(t) and the late code L(t) are computed similarly. The delay d is adjusted in the DLL until one reaches the optimal value d*, at which the cross-correlation C(d) is maximized and the S-wave, shown in Fig. 4.57, undergoes a zero-crossing.

Phase Lock Loop (PLL): The phase/frequency of the replica IF carrier wave generated by the receiver is synchronized with the phase/frequency of the down-converted incoming carrier wave using a PLL. First, we discuss the principle of the analog PLL. Subsequently, we will describe the PLL shown in Fig. 4.53. An analog PLL uses a feedback loop to tune the output of a Voltage Controlled Oscillator (VCO) to have the same frequency as an input signal. As shown in Fig. 4.59, the phase of the input signal is compared with that of the output of a VCO. The phase detector produces a voltage that is proportional to the phase difference between the input signal and the VCO's output. If the input signal and the VCO's output have different frequencies then the phase difference between the two waves is time-dependent. On the other hand, if their frequencies are the same then the phase difference between them is time-independent. As long as their frequencies differ, the output of the phase detector has both a time-varying component and a constant component. The loop filter is a low-pass filter that removes the high-frequency components. The filtered control voltage is applied to the VCO.

Fig. 4.59 Analog Phase Lock Loop
A VCO is essentially an LC circuit whose capacitance can be tuned by applying a suitable input voltage. As the capacitance of the LC circuit varies, so does the characteristic oscillation frequency of the circuit. The capacitor is usually provided by a reverse-biased diode, whose capacitance can be varied by changing the reverse-bias voltage. As long as the frequencies of the input signal and the VCO's output are different, the VCO's frequency is driven towards the input frequency by the feedback loop. Once their frequencies match, the output of the loop filter becomes a constant (dc) voltage, which is exactly the control voltage necessary for the VCO to produce an output with the same phase as the input. The reader is referred to [Stensby 1997] for a detailed discussion of the analog PLL.

The architecture of a Digital PLL (DPLL) is shown in Fig. 4.60. The NCO has been discussed above. Below we describe a Time-to-Digital Converter (TDC), which detects the phase difference between two signals and presents it as a digital output. For simplicity, we take the signals to be square waves.

Fig. 4.60 Digital Phase Correction
The TDC circuit shown in Fig. 4.61 uses D flip-flops and delays. The circuit determines the phase difference between the input waveforms A and R. Delays are used to generate two delayed versions of A, namely B and C. Being edge-triggered, the D flip-flops record a = 0, b = 1 and c = 1, as shown on the right in Fig. 4.61. The transition from a = 0 to b = 1 indicates that the time delay δ between the rising edges of A and R satisfies the constraint 0 ≤ δ ≤ ∆, where ∆ is the time delay between A and B. Knowing the frequency of A and R, and the time delay between the rising edges of A and R, the µP (microprocessor) that is in charge of the PLL can calculate the phase lag between the rising edges of A and R.

Fig. 4.61 Time-to-digital converter (TDC)
The above argument assumes that the frequencies of A and R are equal. If they are not, then their phase difference is time-dependent and the pattern of bits abc… changes with time. By adjusting the step size of the NCO, the µP can synchronize the frequencies of the two input waves. See [Helfrick 2002, Egan 2008] for a detailed discussion of digital phase lock loops. Test for Lock: The phase and frequency of the IF-NCO are tuned continuously in the PLL, shown in Fig. 4.53, even as the delay of the replica C/A code is tuned by the DLL. At optimal tuning in the PLL and DLL, the cross-correlation between the received signal and the replica signal is maximized; see Fig. 4.55. When the output of the cross-correlation computation unit for the prompt bit stream P(t) exceeds a pre-set threshold, the receiver is said to have locked on to the incoming DC/A code bit stream. Upon detection of a lock, the incoming bit stream is routed to a demodulator for extraction of the D(t) bit stream.
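A behavioural model clarifies the TDC's quantization. The sketch below is a simplified model (sampling conventions vary between implementations): each tap carries A delayed by a further ∆, and a flip-flop clocked by R's rising edge reads 1 if that delayed copy has already risen.

```python
def tdc_bits(delay_ar, tap_delay, n_taps):
    """Thermometer code: tap i carries A delayed by i*tap_delay; at R's rising
    edge (delay_ar after A's edge) the flip-flop on tap i samples 1 iff the
    delayed edge has already arrived, i.e. i*tap_delay <= delay_ar."""
    return [1 if i * tap_delay <= delay_ar else 0 for i in range(n_taps)]

def decode_delay(bits, tap_delay):
    """The 1 -> 0 transition brackets the A-to-R delay within one tap delay."""
    k = sum(bits) - 1            # index of the last '1'
    return (k * tap_delay, (k + 1) * tap_delay)

bits = tdc_bits(2.3, 1.0, 8)     # delay of 2.3 time units, taps of 1.0
lo, hi = decode_delay(bits, 1.0)
```

With the bracketed delay and the known period of A and R, the supervising µP can convert the delay into a phase lag, as described above.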
Data Demodulation

After a receiver locks on to the incoming GPS signal, that is, after it determines that the signal is from SVi (1 ≤ i ≤ 37) and determines the correct phase/frequency and code delay to synchronize the replica signal with the incoming signal from the satellite, the in-phase component of the S_L1C/A;IF signal, the synchronized replica code C/Ai and the phase/frequency-locked replica carrier from the IF-NCO are sent to the data demodulation unit, as shown in Figs. 4.51 and 4.53. The data bit stream D(t), encoded in the satellite's signal, is recovered by the demodulator as shown in Fig. 4.62 [Hagen 2009] and routed to the navigation unit of the GPS receiver.

Fig. 4.62 Data demodulator in a GPS receiver
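The demodulation step (multiply by the replica carrier and replica code, integrate over a bit, take the sign) can be sketched as follows. The carrier frequency, chip sequence and sample counts below are toy values for illustration only, not GPS parameters.

```python
import math

F_IF = 4.0                   # replica IF carrier frequency (arbitrary units)
SAMPLES_PER_BIT = 400
code = [1, -1, 1, 1, -1]     # short stand-in for the replica C/A chip sequence

def chip_at(t):
    """Chip value at normalized time t in [0, 1)."""
    return code[int(t * len(code)) % len(code)]

def modulate(bits):
    """BPSK: carrier * code * data bit, one code period per data bit."""
    out = []
    for b in bits:
        for n in range(SAMPLES_PER_BIT):
            t = n / SAMPLES_PER_BIT
            out.append(b * chip_at(t) * math.cos(2 * math.pi * F_IF * t))
    return out

def demodulate(signal, n_bits):
    """Wipe off the code and carrier, integrate over each bit, take the sign."""
    bits = []
    for i in range(n_bits):
        acc = sum(signal[i * SAMPLES_PER_BIT + n]
                  * chip_at(n / SAMPLES_PER_BIT)
                  * math.cos(2 * math.pi * F_IF * n / SAMPLES_PER_BIT)
                  for n in range(SAMPLES_PER_BIT))
        bits.append(1 if acc > 0 else -1)
    return bits

data = [1, -1, -1, 1]
recovered = demodulate(modulate(data), len(data))
```

Because the replica code and carrier are synchronized with the incoming signal, the product under the integral reduces to the data bit times a positive quantity, so the sign of the integral recovers D(t).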
Navigation

The navigation data is organized in the data stream D(t) as shown in Fig. 4.63 and Fig. 4.64 [Sessions 2017]. The entire navigation message comprises 25 frames, each with 1500 bits transmitted in 30 seconds at 50 bps. Thus, the total length of the message is 37,500 bits, transmitted over 12.5 minutes. Each frame is composed of 5 subframes. A subframe has 300 bits and is organized into 10 words of 30 bits each, as shown in Fig. 4.63. The structure of subframes is shown in Fig. 4.64.
Fig. 4.63 Organization of data in GPS signal (navigation message → 25 frames of 1500 bits/30 seconds each → 5 subframes of 300 bits/6 seconds each → 10 words of 30 bits/600 milliseconds each; subframes 1-3 carry satellite data, subframes 4-5 carry system data)
Fig. 4.64 Format of subframes in GPS navigation data:

Subframe 1: TLM | HOW | Words 3-10: Clock correction data, GPS week number
Subframe 2: TLM | HOW | Words 3-10: Ephemeris data 1
Subframe 3: TLM | HOW | Words 3-10: Ephemeris data 2
Subframe 4: TLM | HOW | Words 3-10: Ionospheric data, UTC, almanac for SVs 25-32
Subframe 5: TLM | HOW | Words 3-10: Almanac data for SVs 1-24, almanac reference time

Subframes 1-3 are repeated every 30 seconds; subframes 4-5 have 25 pages each and repeat every 12.5 minutes. TLM = Preamble (8 bits) + Data (16 bits) + Parity (6 bits); HOW = Time of Week (17 bits) + Data (7 bits) + Parity (6 bits).
Each subframe begins with the Telemetry (TLM) data, followed by the Hand-Over Word (HOW). The structures of the TLM and HOW fields are shown at the bottom of Fig. 4.64. Words 3-10 in subframes 1-3 are used to transmit messages that repeat in every frame; hence the messages in words 3-10 of subframes 1-3 repeat every 30 seconds. The messages transmitted in words 3-10 of subframes 4-5, however, are 200 words long, organized into 25 pages of 8 words each. Thus, the messages sent in subframes 4-5 repeat every 25 frames, or equivalently every 12.5 minutes. The collection of 25 frames, containing all 25 pages of the messages sent in subframes 4-5, is called a master frame and constitutes the navigation data. The application program running in the navigation unit of a GPS receiver uses the data described above to compute the current location of the receiver, as discussed earlier. For a detailed discussion of the GPS signals the reader is referred to [GPS1 2013; GPS2 2013; GPS3 2013]. For a detailed discussion of the hardware and software of GPS receivers the reader is referred to [Parkinson 1996, Hagen 2009, Doberstein 2012, Xu 2016].
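The timing numbers above follow directly from the frame layout; a few lines of arithmetic confirm them.

```python
BPS = 50                    # navigation data rate, bits per second
WORD_BITS = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
FRAMES_PER_MESSAGE = 25     # the master frame

subframe_bits = WORD_BITS * WORDS_PER_SUBFRAME            # 300 bits
frame_bits = subframe_bits * SUBFRAMES_PER_FRAME          # 1500 bits
message_bits = frame_bits * FRAMES_PER_MESSAGE            # 37,500 bits

frame_seconds = frame_bits / BPS                          # 30 s per frame
message_minutes = message_bits / BPS / 60                 # 12.5 min per master frame
```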
4.7.8 Automatic Direction Finder (ADF)

Although GPS technology provides the functionality offered by the older ADF technology, the ADF is worth mentioning as it is one of the oldest navigation aids. We briefly describe the electromagnetic principles underlying the ADF. For a more detailed discussion of the ADF the reader is referred to [Helfrick 2002]. A cockpit ADF display has a needle that points continuously towards the ground-based radio transmitter to which the ADF is tuned. Thus, a pilot can fly towards a transmitter guided by the direction of the needle. An ADF detects the direction to the ADF radio transmitter, which broadcasts vertically plane-polarized waves, using a loop antenna and a sense antenna.

Fig. 4.65 Sense and loop antennae
A loop antenna is used to detect the line of propagation of plane-polarized electromagnetic waves. The principle of the loop antenna can be understood using the simple square conducting loop shown in Fig. 4.65. Assume that a plane-polarized electromagnetic wave of wavelength λ is propagating along the positive x-axis with the electric field of the wave pointing along the y-axis. Then the magnetic field of the wave oscillates along the z-axis. Without loss of generality, we take the magnetic field to be

B(x, t) = B₀ cos(px − ωt) k̂

where p = 2π/λ, ω = 2πc/λ and c is the speed of propagation of electromagnetic waves in vacuum.
The fluctuating magnetic field of the electromagnetic wave induces an electric field around the loop, given by Maxwell's equation

∇ × E = −∂B/∂t = −B₀ω sin(px − ωt) k̂

The total electromotive force (emf) induced around the loop is

V_l = ∮ E · dl = ∬_A (∇ × E) · dA = −L B₀ ω ∫₀ᴸ sin(px − ωt) dx = (L B₀ ω / p) [cos(pL − ωt) − cos(ωt)]
The direction of the emf changes after half a wave traverses the loop. The average emf over half a wave is

V̄_l = (L B₀ ω / p) · (ω/π) ∫₀^(π/ω) [cos(pL − ωt) − cos(ωt)] dt = (2 L B₀ ω / πp) sin(pL)

The average induced emf in the loop over half a wave is maximized when pL = π/2, or L = λ/4.
When the plane of the loop is perpendicular to the direction of propagation of the wave, that is, when the normal to the plane of the loop is parallel or antiparallel to the direction of propagation, the magnetic field oscillates in the plane of the loop, so that B · dA = 0 and no emf is induced in the loop. When the induced emf in the loop is zero, the loop is said to be in the null configuration.

In summary, as the loop is rotated about the y-axis, the average induced emf over half a wave peaks when the plane of the loop is parallel to the direction of propagation of the electromagnetic wave, and vanishes when the plane of the loop is perpendicular to the direction of propagation. Thus, a simple conducting loop can be used to detect the line of propagation of a plane-polarized electromagnetic wave.

A loop antenna can detect the line of propagation of the electromagnetic wave, but cannot determine whether the wave is propagating along the positive or negative x-axis in Fig. 4.65. The ambiguity in the direction is resolved using a sense antenna, which is just a wire parallel to the edge UV. Assume that the sense antenna is positioned at a distance of aλ from the edge UV in the plane of the loop, as shown in Fig. 4.65, where a is a real number. The voltage induced across the sense antenna is

V_s = ∫ E · dl = E₀L cos(−paλ − ωt) = E₀L cos(ωt + 2πa)

If L = λ/4 then

V_l = (√2 L B₀ ω / p) sin(ωt − π/4)

On the other hand, if the electromagnetic wave is traveling along the negative x-axis, that is,

B = B₀ cos(px + ωt) k̂,  E = E₀ cos(px + ωt) ĵ

then

V_s = E₀L cos(ωt − 2πa)

V_l = (√2 L B₀ ω / p) sin(ωt + π/4)
Table 4.66 summarizes the above calculations.

Table 4.66 Voltages in loop and sense antennae

Direction of Propagation | Loop emf (V_l) | Sense emf (V_s) | ∆ϕ = ϕ_s − ϕ_l
Right along the x-axis | (√2 L B₀ ω / p) sin(ωt − π/4) | E₀L sin(ωt + 2πa + π/2) | 2πa + 3π/4
Left along the x-axis | (√2 L B₀ ω / p) sin(ωt + π/4) | E₀L sin(ωt − 2πa + π/2) | −2πa + π/4
Thus, the phase difference between the loop and sense emfs contains the information about the direction of propagation of the electromagnetic waves. An ADF detects the direction of propagation by measuring the phase difference between Vs and Vl.
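Table 4.66 can be turned directly into a direction test: compute the two candidate phase differences and match the measured value against them. The sketch below assumes an arbitrary sense-antenna offset a (in wavelengths) purely for illustration.

```python
import math

def expected_dphi(direction, a):
    """Phase difference (phi_s - phi_l) from Table 4.66 for propagation
    to the right (+x) or to the left (-x) along the x-axis."""
    if direction == "right":
        return 2 * math.pi * a + 3 * math.pi / 4
    return -2 * math.pi * a + math.pi / 4

def infer_direction(measured_dphi, a, tol=1e-6):
    """Resolve the loop antenna's 180-degree ambiguity by matching the
    measured loop/sense phase difference against the two candidates."""
    for direction in ("right", "left"):
        if abs(measured_dphi - expected_dphi(direction, a)) < tol:
            return direction
    return None

a = 0.1   # hypothetical sense-antenna offset, in wavelengths
```

An ADF implements essentially this comparison in hardware, which is why measuring the phase difference between V_s and V_l suffices to point the needle.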
4.8 ENGINE SENSORS

Modern aircraft engines employ sensors to measure parameters such as pressure, temperature, vibration, fuel flow rate, fuel quantity and shaft speed. We discuss some of the most commonly used sensors below. Engines deployed on commercial aircraft can be categorized into four types: reciprocating, turboprop, turbofan and turbojet [FAA 2014]. Examples of parameters of interest in the four types of engines are shown in Table 4.67 below.

Table 4.67 Parameters of interest in different types of engines in aircraft

Engine | Examples of Parameters of Interest
Reciprocating | Temperature, pressure, vibration, shaft speed, torque, fuel quantity, oil pressure and temperature, cylinder head temperature, tachometer.
Turboprop | Temperature, pressure, vibration, shaft speed, torque, fuel quantity, oil pressure and temperature, cylinder head temperature.
Turbofan | Exhaust gas temperature, engine pressure ratio, thrust, bearing vibrations, N1 and N2 speeds.
Turbojet | Temperature, pressure, vibration, shaft speed, torque, fuel quantity, fuel flow rate.
Most modern large commercial aircraft use turbofan engines. Therefore, we focus on the sensors of interest to turbofan engines.

Fig. 4.68 Two-spool turbofan engine
Figure 4.68 shows a cross-sectional view of a typical turbofan engine. The air sucked in by the fan is compressed through a sequence of low-pressure (LP) and high-pressure (HP) compressors and then mixed with an atomized spray of fuel. The compressed air-fuel mixture is ignited in the combustion chamber, creating a hot stream of gas, which powers the high-pressure (HP) and low-pressure (LP) turbines. Once the LP turbine starts running, the subsequent rotation of the LP compressor and the fan is driven by the LP turbine, which is mechanically coupled to them through the LP shaft. Similarly, once the HP turbine starts running it drives the subsequent rotation of the HP compressor, which is coupled to the HP turbine through the HP shaft. The LP and HP shafts rotate independently. The engine shown in Fig. 4.68 is called a two-spool engine because it has two shafts, an LP shaft and an HP shaft [Boeing2 2017]. The rate of rotation of the LP shaft, expressed as a percentage of some nominal (e.g., maximum) rotation rate, is called N1. Similarly, the rotation rate of the HP shaft, expressed as a percentage of some nominal value, is called N2. The engine is started using a starter motor to ramp up the rotation speed of the HP spool (the compressor-turbine-shaft assembly). When the HP spool reaches about 20% of its maximum speed [Linke 2008] it generates sufficient airflow into the combustion chamber for an adequate fuel-air mixture to build up, and the mixture is ignited. The starter motor continues to provide torque to increase the HP spool speed even after combustion has begun, until the HP spool reaches a starter cutout speed at which the combustion is sufficient to accelerate the HP spool to idle speed, about 60% of the maximum rotation speed of the engine. Once the HP spool reaches the starter cutout speed, the starter motor is turned off. Thereafter, the HP and LP spools are driven by the combustion.
See [Linke 2008] for a more detailed discussion of the engine start system. Part of the air sucked in by the fan is not routed through the compressors → combustion chamber → turbines assembly, which is called the engine core, but is instead routed through the bypass duct. The momentum imparted to the cold bypass air by the fan provides some of the propulsion generated by the engine. The cold bypass air also serves a secondary function of cooling the engine; the increased reliability of turbofan engines is attributed in large part to this cooling. The bypass ratio of an engine is the ratio of the masses of air flowing per unit time through the bypass duct and the engine core. In a high (low) bypass turbofan engine a large (small) fraction of the air sucked in by the fan is routed through the bypass duct. Another parameter of interest is the Engine Pressure Ratio (EPR), which is the ratio of the total pressure of the high-velocity exhaust gas (Pout in Fig. 4.68) to the total pressure of the intake air (Pin in Fig. 4.68) [Boeing2 2017]. For a detailed discussion of jet engines the reader is referred to [Hunecke 2010, Wild 2013, Royce 2015]. In the following subsections we describe the techniques used to sense some of the engine parameters.
4.8.1 Temperature

There are several temperatures of interest in aircraft engines: the inlet air temperature, compressor inlet temperature, turbine inlet temperature and the exhaust gas temperature (EGT). The sensor of choice for the measurement of temperature in an aircraft engine is the thermocouple. The principle underlying a thermocouple is the Seebeck effect: an electromotive force (emf) is generated within a closed loop formed by connecting two dissimilar metals/alloys end-to-end at two junctions that are maintained at different temperatures. Fig. 4.69 illustrates a thermocouple. The emf generated is proportional to the temperature difference between the two junctions (T1 − T2 in Fig. 4.69).

Fig. 4.69 Chromel-Alumel thermocouple
The temperature that we seek to measure is that at the hot junction, while the temperature of the cold junction is maintained at a constant value (e.g., 0°C or 20°C). The temperature of the cold junction does drift with the changing ambient temperature, giving rise to a source of error. Several techniques have been developed to compensate for the temperature variation at the cold junction; see [Pallett 1992, Nagabhushana 2010] for further details. The Chromel-Alumel thermocouple, shown in Fig. 4.69, is best suited for measurement of EGT in a turbofan engine. Some of the common thermocouples used to sense temperatures in aircraft engines are listed in Table 4.70.

Table 4.70 Examples of thermocouples used in aircraft

Material A | Material B | Sensitivity (µV/°C) | Max. Temp (°C) | Application
Copper | Constantan (Ni: 40%, Cu: 40%) | 42.8 | 400 | Cylinder head temp. (piston engine)
Iron | Constantan (Ni: 80%, Cu: 20%) | 54.0 | 850 | Inlet air temp., turbine inlet temp.
Chromel | Alumel | 41.5 | 1100 | Exhaust gas temp., turbine inlet temp.
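As a first approximation, the sensitivities in Table 4.70 let one convert a measured emf into a hot-junction temperature. Real thermocouples are nonlinear, so avionics units use calibration tables; the sketch below uses the linear model only for illustration, and the function name is hypothetical.

```python
# Sensitivities from Table 4.70, in microvolts per degree C (linear approximation)
SENSITIVITY_UV_PER_C = {
    "copper-constantan": 42.8,
    "iron-constantan": 54.0,
    "chromel-alumel": 41.5,
}

def hot_junction_temp(emf_volts, pair, cold_junction_c=0.0):
    """Invert emf = S * (T_hot - T_cold) for the hot-junction temperature."""
    s = SENSITIVITY_UV_PER_C[pair] * 1e-6    # volts per degree C
    return cold_junction_c + emf_volts / s

# 41.5 mV from a Chromel-Alumel probe -> 1000 deg C above a 0 deg C cold junction
t_egt = hot_junction_temp(0.0415, "chromel-alumel")
```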
4.8.2 Role of Engine Pressure Ratio (EPR) in Thrust Measurement

The thrust of jet engines is measured in two ways: measurement of the Engine Pressure Ratio (EPR), or measurement of N1, the rotational speed of the LP compressor expressed as a fraction of its nominal value. We focus on EPR.

Fig. 4.71 Forward thrust in a turbofan engine
One can understand the thrust generated by a turbofan engine by considering the simplified model of the engine shown in Fig. 4.71. In steady state the amount of air entering at the inlet on the left must equal the amount of gas exiting at the outlet on the right. Since fuel is added to the air in the combustion chamber, the mass exiting the outlet includes the mass of the added fuel. Assuming that the ambient air is still and the aircraft is moving at velocity Vac to the left, the relative velocity with which air enters the inlet is Vac. We take the ambient air pressure to be P1, and assume that the hot gas exits the exhaust at velocity Ve and pressure P2. In steady state, let the mass inflow rate of air at the inlet be µa kg/s. Then the rate at which the engine adds momentum to the air, or in other words the force that the engine exerts on the air to the right, is

F_air = d(momentum of air)/dt = µa(Ve − Vac)

By Newton's third law, F_air is also the magnitude of the force that the air exerts on the engine/aircraft to the left. Further, let the engine add µf kg/s of fuel in the combustion chamber. Since the engine accelerates the fuel from rest to the exhaust velocity Ve, the rate at which it adds momentum to the fuel is

F_fuel = d(momentum of fuel)/dt = µf Ve

Again, by Newton's third law, F_fuel is the magnitude of the force that the fuel exerts on the engine to the left (forward direction). If the hot gas exiting the exhaust were expelled into vacuum at the rear of the engine, then the total forward thrust would be F_air + F_fuel. However, the hot gas at pressure P2 is expelled into the atmospheric air, which is at ambient pressure P1. The exiting gas thus exerts a force

F_EPR = A(P2 − P1) = A P1 (EPR − 1),  where EPR = P2/P1
on the ambient air, where A is the cross-sectional area of the outlet and EPR stands for the Engine Pressure Ratio. Therefore, the ambient air exerts a leftward (forward) force of F_EPR on the engine. The net forward thrust on the engine/aircraft is [Boeing2 2017]

F_net = F_air + F_fuel + F_EPR = µa(Ve − Vac) + µf Ve + A P1 (EPR − 1)

Thus, the measurement of the total thrust acting on the aircraft involves measurement of the EPR, or in other words, measurement of the ambient air pressure and the pressure of the escaping gas at the engine's nozzle. See § 4.3 for a discussion of the pressure sensors.
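The net-thrust expression is easy to evaluate numerically. The figures below are hypothetical, chosen only to illustrate the relative sizes of the three terms, and do not describe any particular engine.

```python
def net_thrust(mu_a, mu_f, v_e, v_ac, area, p1, epr):
    """F_net = mu_a*(Ve - Vac) + mu_f*Ve + A*P1*(EPR - 1), per the model above."""
    return mu_a * (v_e - v_ac) + mu_f * v_e + area * p1 * (epr - 1.0)

# Hypothetical values: 500 kg/s air, 10 kg/s fuel, exhaust at 400 m/s,
# aircraft at 250 m/s, 1.5 m^2 nozzle, sea-level pressure, EPR of 1.2
f_newtons = net_thrust(500.0, 10.0, 400.0, 250.0, 1.5, 101325.0, 1.2)
```

Note that with EPR = 1 the pressure term vanishes, leaving only the momentum flux terms, which is the vacuum-exhaust case discussed in the text.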
4.8.3 Engine Vibration, Acceleration and Shock

Unusual vibrations, especially in an aircraft's engines, often provide early warning of critical failures. Hence, vibration sensors are deployed at selected locations in the engine, especially at the bearings. The vibration sensors are accelerometers, and they measure both vibration and shock events in engines. The sensor of choice for vibration measurement is the piezoelectric accelerometer discussed in § 4.6.1.2. When high sensitivity is desired, servo-balance accelerometers (see § 4.6.1.5) can also be used. See [Nagabhushana 2010] for a detailed discussion of vibration sensors.
4.8.4 Fuel Quantity

A widely used Fuel Quantity Indicator (FQI) relies on the measurement of the capacitance between two cylindrical conductors, as shown in Fig. 4.72 [FAA14 2014]. Consider an inner solid cylinder of radius a and an outer concentric hollow cylinder of radius b. Assume that the arrangement is suspended in the fuel tank and fuel can well up from below into the region between the two cylinders. If the relative dielectric constant of the fuel is k and the height of the fuel column is Lf, then the capacitance between the two terminals A and B, denoted CAB, is

C_AB = 2πε₀ [(k − 1)Lf + L] / ln(b/a)

Since all quantities on the right-hand side other than Lf are constant, measuring C_AB gives the height of the fuel column Lf, and hence the quantity of fuel left in the fuel tank. The capacitance probe of the fuel quantity indicator has two shortcomings [Nagabhushana 2010].

1. The relative dielectric constant k could vary from one fuel filling to another. The k value of the fuel can be determined by measuring the capacitance of a small compensating capacitor in the fuel tank that is fully submerged when the fuel tank is filled. The capacitance of the compensating capacitor can be used to correct for the variations in k.
Fig. 4.72 Capacitive fuel quantity indicator
2. The energy content of fuel depends on the weight of the fuel rather than its volume. Therefore, the density of the fuel also needs to be sensed to determine the total energy content of the fuel.

In the Boeing 777 a different method, ultrasonic sensing, is used to measure the fuel quantity. An ultrasonic pulse is transmitted from the top of the fuel tank. The pulse reflects off the fuel surface and is sensed by a receiver at the top of the tank. The time elapsed between the transmission of the pulse and the reception of the echo gives the distance from the top of the tank to the fuel surface [Langton 2009]. Interestingly, although the Boeing 777 uses ultrasonic sensing, Boeing reverted to capacitive sensing in the Boeing 787 [Langton 2009].
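The capacitance relation inverts directly for the fuel height. The probe dimensions and dielectric constant below are hypothetical, for illustration only.

```python
import math

EPS0 = 8.854e-12    # permittivity of free space, F/m

def capacitance(l_fuel, l_total, k, a, b):
    """C_AB = 2*pi*eps0*((k - 1)*Lf + L) / ln(b/a), per the formula above."""
    return 2 * math.pi * EPS0 * ((k - 1.0) * l_fuel + l_total) / math.log(b / a)

def fuel_height(c_measured, l_total, k, a, b):
    """Invert the capacitance formula for the fuel-column height Lf."""
    return (c_measured * math.log(b / a) / (2 * math.pi * EPS0) - l_total) / (k - 1.0)

# Hypothetical probe: 1 m tall, inner radius 10 mm, outer radius 12 mm, fuel with k ~ 2.1
c = capacitance(0.4, 1.0, 2.1, 0.010, 0.012)
lf = fuel_height(c, 1.0, 2.1, 0.010, 0.012)
```

In practice k would come from the compensating capacitor described in shortcoming 1, rather than being assumed constant.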
4.8.5 Fuel Flow Rate

The principle used to measure the fuel flow rate is illustrated in Fig. 4.73. A shaft is rotated at a constant angular velocity by a motor. The shaft is connected to an impeller by a torsion spring with spring constant k. The impeller is a hollow rotating drum with long blades attached to it. One end of the spring is attached to the shaft, and the other end to the impeller's drum. Assume that the impeller and the shaft have no mechanical coupling other than through the torsion spring. Now consider the situation where fuel is flowing through the region between the impeller and the stationary drum at the rate of M kg/s. The separation between the stationary drum and the impeller is exaggerated in Fig. 4.73 for clarity. Assume that the impeller has nearly the same radius as the stationary drum, that is, the region between the impeller and the stationary drum is negligibly small. Let the outer radius of the impeller, which is approximately equal to the inner radius of the stationary drum, be R.
176 Principles of Modern Avionics
Fig. 4.73 Fuel flowmeter (motor-driven shaft coupled by a torsion spring to a bladed impeller rotating inside a stationary drum; fuel flows through the region between them)
Fuel that enters the stationary drum has no angular velocity. When the fuel exits the stationary drum, it has an angular velocity of ω imparted to it by the rotating impeller. In steady state, the impeller imparts angular momentum to the fuel at the rate of MR²ω kg·m²·radians/sec². In other words, the impeller exerts a torque T = MR²ω on the fuel. The torque T is provided by the torsion spring by undergoing an angular deflection θ, such that

MR²ω = kθ ⇒ θ = MR²ω/k

In other words, measuring the additional deflection θ of the torsion spring yields the fuel flow rate M, since R, ω and k are constants. The angular deflection θ can be measured by installing two magnets—one on the shaft and the other on the impeller drum. A search coil, at rest relative to the stationary drum, records pulses whenever the magnets rotate past the coil. If the pulses in the search coil are separated by time Δt, then the angular displacement is given by

θ = ω(Δt)

Measuring the time interval Δt between the pulses enables one to determine the angular displacement between the magnets. See [Baker 2002, Nagabhushana 2010] for a detailed discussion of fuel flowmeters.
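The relations above reduce to M = kθ/(R²ω) = kΔt/R², which the following minimal sketch implements (the numeric values in the usage below are illustrative, not from a real flowmeter):

```python
def fuel_flow_rate(k_spring, R, omega, dt):
    """Mass flow rate M (kg/s) from the measured pulse separation dt.
    theta = omega*dt and M*R^2*omega = k*theta  =>  M = k*dt/R^2."""
    theta = omega * dt                      # angular deflection of the spring
    return k_spring * theta / (R ** 2 * omega)
```

For example, with k = 0.5 N·m/rad, R = 0.05 m, ω = 20 rad/s and Δt = 0.01 s, the computed flow rate is 2 kg/s.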
4.8.6 Engine’s Shaft Speed

The rotational speed of an aircraft’s engine is measured using a variable reluctance tachometer (VRT). As discussed above, a two-spool engine has two rotating shafts—the low pressure (LP) shaft and the high pressure (HP) shaft. A VRT can measure the angular speed of either shaft. The VRT is based on Hopkinson’s Law, which is an analogue of Ohm’s Law in electricity [Veltman 2016]. Hopkinson’s law states that the magnetomotive force F and the magnetic flux Φ in a segment of a magnetic circuit are proportional, and the constant of proportionality in that segment is the reluctance R of the medium in that segment. Reluctance in a magnetic circuit is analogous to electrical resistance in an electrical circuit.
A magnetomotive force is the force felt by a unit magnetic charge (magnetic monopole), and arises due to a nonzero magnetic field acting on the charge. A magnetomotive force performs work on a magnetic charge, increasing its kinetic energy. It is important to note that a magnetomotive force—much like the electromotive force generated by a changing magnetic flux—is a non-conservative force. That is, one cannot derive the magnetomotive force as the gradient of a scalar field. Magnetomotive force is expressed in units of ampere-turns. The reluctance R of a segment of a magnetic circuit is given by

R = L/(μ0 μr A)
where L is the length of the segment, A its cross-sectional area, μ0 the magnetic permeability of vacuum, and μr the relative permeability of the medium in the segment. The geometry of a segment remaining unchanged, as the relative permeability of the medium in the segment increases, its reluctance decreases. Thus, a material such as iron, which has a higher relative permeability than air, has a lower reluctance. The magnetomotive force F remaining constant, Hopkinson’s law implies that replacing the air in a segment with iron increases the magnetic flux in the region. This phenomenon is exploited by a variable reluctance tachometer, as illustrated in Fig. 4.74. A VRT has a permanent magnet connected to a bar of iron called the pole piece. A current carrying conductor is wound around the pole piece. The assembly comprising the magnet and the pole piece is positioned close to a rotating gear, which is made of a material with high magnetic permeability.

Fig. 4.74 Variable reluctance tachometer (top: pole piece between gear teeth, high reluctance/low flux; bottom: gear tooth facing the pole piece, low reluctance/high flux; output voltage at Vout)
As the gear rotates, the reluctance in the region between the gear and the pole piece changes. As shown at the top in Fig. 4.74, when the pole piece is between the teeth of the gear, the closed loops made of magnetic lines of force—the magnetic circuit—pass through air in the vicinity of the pole piece. As the reluctance in the vicinity of the pole piece is high, the magnetic flux through the pole piece is low. When a tooth of the gear is close to the pole piece—as shown at the bottom—the reluctance in the vicinity of the pole piece is low, since the gear is made of a material with high permeability. As a result, the flux through the pole piece increases, giving rise to a voltage pulse at Vout. Since the separation between the teeth of a gear is known, measuring the frequency of pulses at Vout yields the angular speed of the gear. Specifically, if the gear has n equally spaced teeth, and the frequency of pulses at Vout is f pulses/second, then the angular speed of the gear, ω, is given by

ω = 2πf/n radians/sec

The LP and HP shafts of a turbofan engine, shown in Fig. 4.68, are mechanically coupled to gears whose rotations are driven by the rotations of the shafts. Using a VRT to measure the angular speed of a gear coupled to a shaft, one can determine the angular speed of the shaft in Rotations Per Minute (RPM). The rotation speeds of the LP shaft (N1) and of the HP shaft (N2) are monitored continuously to track the health of the engine. See [Honeywell2 2012] for a detailed discussion of the VRT. Besides the VRT described above, optical tachometers and sensors based on the Hall effect are also used to measure a shaft’s RPM [Ghosh 2012].
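The pulse-frequency-to-speed conversion above can be sketched directly:

```python
import math

def shaft_rpm(f_pulses, n_teeth):
    """Shaft speed from VRT pulse frequency: omega = 2*pi*f/n rad/s,
    converted to Rotations Per Minute."""
    omega = 2.0 * math.pi * f_pulses / n_teeth
    return omega * 60.0 / (2.0 * math.pi)   # simplifies to 60*f/n
```

For instance, a 60-tooth gear producing 1000 pulses/second corresponds to a shaft speed of 1000 RPM.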
Chapter 5: Avionics in Civilian Aircraft

Following a general discussion of the principles and building blocks of avionics in previous chapters, in this chapter and the next we discuss actual avionics systems that are deployed on civilian and military aircraft. The discussion in this chapter revolves around a case study—Boeing 787—that represents the state-of-the-art in civilian avionics. In § 5.2.1 we discuss selected avionics systems in Boeing 787. In later subsections we describe some generic avionics systems found in civilian aircraft. The discussion in this chapter is largely skewed towards avionics in large commercial aircraft. We begin by summarizing some of the key features of Boeing 787 that represent a departure from the design orthodoxy of the past.
5.1 OVERVIEW OF BOEING 787 DREAMLINER

One of the significant features of Boeing 787 is its mostly electrical architecture, and the resultant gains in fuel efficiency. Previously, aircraft used bleed air—highly compressed air bled from the compressor stages of the engines—for such tasks as cabin pressurization, starting engines, airframe de-icing, braking and gust alleviation [Sinnett 2007]. The high pressure bleed air, which is also at a high temperature, needs to be cooled in heat exchangers, using external cold air, to render it suitable for use in cabin pressurization. Draining heat from bleed air leads to a reduction in fuel efficiency. Secondly, the bleed air is used to operate heavy pneumatic equipment, which adds to the gross weight of the aircraft. Boeing 787 has a bleedless architecture. That is, it does not divert any compressed air inside the engines for other uses. Instead, it uses the electrical energy produced by the onboard generators to perform the tasks that were earlier done by bleed air. The electrically driven equipment that replaces the pneumatic equipment is significantly lighter and reduces the gross weight of the aircraft. Although Boeing 787 generates 1.45 megawatts, which is about five times the power generated in comparable aircraft that use bleed air, the bleedless architecture enables Boeing 787 to extract 35% less power from the engines for other tasks (such as cabin pressurization), thereby increasing fuel efficiency [Sinnett 2007].

The second significant departure in Boeing 787 concerns the composition of its frame. The aircraft has about 80% of its frame, by volume, built from composite material [IndustryWeek 2007]. By weight, 50% of its body is made of composite materials, 20% of aluminum, 15% of titanium, 10% of steel and 5% of other materials [Hawk 2007]. Boeing 787 has an all-composite fuselage and wings [Marsh 2014]. About 35 tons out of the total weight of 119.95 tons
of the aircraft is made of carbon fiber reinforced polymer (CFRP) [Onishi 2005]. The higher strength-to-weight ratio of CFRP leads to a reduction in the aircraft’s weight and hence a gain in fuel efficiency.

The avionics in Boeing 787 is based on the Integrated Modular Architecture (see § 1.3.4) and is designed to achieve substantial integration. According to Mike Sinnett, Chief Systems Engineer of Boeing’s 787 program, over a hundred LRUs (Line Replaceable Units) used in the avionics of the predecessor Boeing aircraft were eliminated by the integrated Common Core System (see below). The elimination of black boxes and the interconnection wiring among them, made possible by increased integration, is estimated to have reduced the overall weight of the avionics system on Boeing 787 by about 2000 pounds [Ramsey 2005]. Examples of integrated avionics in Boeing 787 are the Integrated Surveillance System (ISS), which houses the TCAS, Mode S transponders, TAWS and weather radar in a single unit, and the Integrated Navigation Receiver (INR), which houses the ILS, marker beacons, VOR, GPS and GPS Landing System (GLS) in one unit [Ramsey 2005].

Boeing 787’s design achieves further weight reduction by using a Fly-By-Wire (FBW) control system (see § 5.2.4.6), which eliminates heavy cables and other mechanical components. Although the FBW technology is old, it has been exploited in Boeing 787 to reduce fuel consumption and for active gust suppression. Specifically, the FBW system changes the aerodynamic profiles of the wings during flight to both optimize fuel consumption and counteract turbulence.

The design of Boeing 787 also embodies significant strides in the pilot-aircraft interface and in the riding comfort of passengers. The five 15” MultiFunction Displays and the two Head-Up Displays (HUDs) in the cockpit, which provide an unprecedented amount of area for displaying navigation as well as flight information, enhance situational awareness during flight.
The two Electronic Flight Bags (EFB) in the cockpit eliminate the need to carry bulky manuals. Passengers’ comfort is enhanced by a Vertical Gust Suppression system that counteracts atmospheric turbulence [Dodt 2011] to make the ride smoother. The noise-reducing chevrons on its engine nacelles and deployment of sound-absorbing materials at the air inlet of its engine reduce the noise output of the engine [Zaman et al 2011]. The cabin windows in Boeing 787 are not only electronically dimmable, but are also the largest among all commercial jets that are currently in operation [Boeing 2017].
5.2 AVIONICS IN CIVILIAN AIRCRAFT

We begin the discussion of civilian avionics by reviewing the avionics architecture and systems in Boeing 787. The discussion is supplemented by descriptions of avionics systems found in other civilian aircraft, as appropriate.
5.2.1 Avionics in Boeing 787

Starting with the overall avionics architecture—the Common Core System—the avionics systems of Boeing 787 are described in the following paragraphs.
5.2.1.1 Common Core System (CCS)

The nervous system of Boeing 787, the CCS comprises three main subsystems, which are illustrated in Fig. 5.1 [Ramsey 2005].

1. Common Computing Resource (CCR): A CCR is a collection of computing resources that support the computations performed by the avionics systems. Boeing 787 has two CCRs. The CCR cabinets house LRMs for power control, data processing and network switches. The architecture is based on open standards, making it possible for the LRMs, with specific functionalities, to be designed by third parties. The CCR was developed by G.E. Aerospace.

Fig. 5.1 Topology of the Common Core System in Boeing 787 (cockpit displays and RDCs connected to the CCRs through the CDN)
2. Common Data Network (CDN): The CDN, developed by Rockwell Collins, comprises a network of fiber-optic and copper cables running along the length of the aircraft on either side. The CDN provides interconnectivity among the sensors, actuators and avionics subsystems. It is based on the ARINC 664 protocol. The CDN uses the Avionics Full DupleX (AFDX) technology, which is based on the Ethernet protocol. The AFDX, originally deployed in Airbus A380, is an implementation of the ARINC 664 Part 7 protocol, which is about three orders of magnitude faster than its predecessor ARINC 429 (see Section 3.12.2.1). Whereas ARINC 429 is a unidirectional bus with one transmitter and up to twenty receivers, AFDX uses the notion of virtual links (VLs), with each VL operating as an ARINC 429 network having one transmitter and several receivers. Bidirectional communication between subsystems A and B requires two VLs—one in which A is the transmitter and another in which B transmits. A VL is implemented by attaching a 16-bit VL identifier to each 32-bit word transmitted on AFDX. The AFDX switches, which are loaded with a pre-determined VL configuration table, use the VL identifier attached to a word to route it to the proper
destinations. Each VL is assigned a certain bandwidth to assure a VL of a minimum Quality of Service. The AFDX services the VLs in a round-robin manner among all the VLs that are attempting to transmit data at a given time. In addition to Boeing 787, the AFDX technology is also used in Airbus A380, Airbus A400M, Airbus A350, Sukhoi Superjet 100, ATR 42, ATR 72, AgustaWestland AW101, AW189, AW169 and AW149 [Heise 2015].

3. Remote Data Concentrators (RDC): Distributed along the CDN cables, the 21 RDCs serve to reduce the total wiring in Boeing 787. The RDCs, which interface to the sensors and actuators, have the capability to do local processing of input, and send processed signals to the CCR. They serve as the gateways through which a wide variety of sensors, actuators and subsystems, including legacy hardware, can connect to the CCS.
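The VL-based routing described above can be sketched as a lookup in a static configuration table. The VL identifiers and port names below are hypothetical, and a real AFDX switch additionally enforces the per-VL bandwidth allocation; this is only an illustration of the routing idea.

```python
# Static VL configuration table, pre-loaded into the switch
# (illustrative identifiers and port names).
VL_CONFIG = {
    0x0010: ["port_fms", "port_display_left"],   # e.g. an air-data VL
    0x0011: ["port_autopilot"],
}

def route(vl_id, payload):
    """Forward a transmission to every destination port of its virtual
    link; traffic on an unconfigured VL is dropped."""
    return [(port, payload) for port in VL_CONFIG.get(vl_id, [])]
```

The one-transmitter, many-receiver semantics of a VL falls out naturally: one entry in the table fans a single transmission out to all of its configured destinations.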
5.2.1.2 Flight Management System

A Flight Management System (FMS) is tasked to determine a flight path prior to takeoff and then guide the aircraft along the pre-determined flight path [McHale 2011, Miller 2012]. Guidance of a flight involves two subtasks—navigation of the flight along the flight path and optimization of the performance of the aircraft during flight. An FMS relies on the real-time data from the navigation sensors—such as accelerometers, gyroscopes, VOR equipment, DME equipment and GPS sensors—to provide the pilots or the autopilot system the necessary guidance. A generic FMS is illustrated schematically in Fig. 5.2.

Fig. 5.2 Data flow diagram for the Flight Management System (sensor data, the flight plan and the Navigation Database (NDB) feed the Flight Management Computer (FMC); the FMS interfaces with the Control Display Unit (CDU), the autopilot, and the Electronic Flight Instrument System (EFIS) comprising the PFDs, NDs and MFDs)
An FMS comprises a Flight Management Computer and a Navigational Data Base (NDB). The NDB, which is updated approximately every month, contains the latest information about all of the airports, runways, airways, waypoints and locations of the land-based
radio navigation aids—such as DME and VOR. Prior to takeoff the flight plan—comprising information such as the source airport, destination airport, estimated flying time, number of people on board, the gross weight of the plane and the fuel weight—is loaded into the FMS. Using the given flight plan, and the information in the NDB, the FMS determines an optimal vertical as well as lateral flight path for the entire journey. During flight, an FMS senses the current 3-dimensional location of the aircraft using data from three types of sensors:

1. GPS receivers, which sense GPS signals from satellites.
2. Equipment such as DME and VOR, which sense signals from land-based navigation aids.
3. On-board navigational equipment, such as accelerometers and gyroscopes, that determine the location of the aircraft through ‘dead reckoning’.

Using the flight plan and the position data about the aircraft, the FMS guides the pilot or autopilot system both laterally as well as vertically.

1. The Lateral Navigation (LNAV) function of the FMS steers the aircraft along a specified lateral flight path.
2. The Vertical Navigation (VNAV) function of the FMS steers the aircraft along a specified vertical flight path, particularly during landing.

The pilots interact with the FMS through two interfaces.

1. The Control Display Unit (CDU), which is a small panel comprising a display and a keyboard that interfaces a pilot to the Flight Management Computer (FMC).
2. The Electronic Flight Instrument System (EFIS), which includes the PFDs, the NDs and the MFDs. The FMS uses the displays in the EFIS to provide pilots the necessary information during flight.

In the early years of aviation, aircraft were guided by ground-based navigation aids (navaids) such as the VOR and DME stations. Aircraft would fly from one navaid to another in a zig-zag path en route from source to destination, as shown (old route: dashed lines) in Fig. 5.3.
With the deployment of computers onboard, Area Navigation (RNAV), which allows a more direct flight path between source and destination, became feasible [RNAV 2017]. In RNAV, a computer (or a pilot) is allowed to generate virtual waypoints—such as those shown in Fig. 5.3—between the source and destination. A waypoint can be created by specifying its displacement (say (r, θ) in polar coordinates) from one of the ground-based navaid stations along the route. Alternatively, a waypoint can also be created by specifying its longitude and latitude. The FMS would then navigate the aircraft along the RNAV route—guiding it from one waypoint along the path to the next—as shown (with bold lines) in Fig. 5.3. By enabling computers to design routes that are not constrained to go through navaid stations, RNAV made it possible for aircraft to follow more direct and hence more efficient flight paths than was previously possible. The advent of GPS technology has strengthened the interest in RNAV.
Fig. 5.3 Illustration of Area Navigation (RNAV): the old route (dashed) zig-zags between ground-based navaid stations, while the RNAV route (bold) passes through computer-generated virtual waypoints V1, V2 and V3
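Creating a virtual waypoint from a polar displacement (r, θ) relative to a navaid, as described above, can be sketched as follows. This hypothetical helper uses a flat-earth approximation (one arc-minute of latitude per nautical mile), which is adequate only for short ranges; a real FMS uses geodesic computations.

```python
import math

def waypoint_from_navaid(lat_deg, lon_deg, r_nm, bearing_deg):
    """Virtual waypoint at range r_nm (nautical miles) and true bearing
    bearing_deg from a navaid at (lat_deg, lon_deg). One arc-minute of
    latitude is one nautical mile; longitude spacing shrinks with
    cos(latitude)."""
    d_lat = r_nm * math.cos(math.radians(bearing_deg)) / 60.0
    d_lon = (r_nm * math.sin(math.radians(bearing_deg)) /
             (60.0 * math.cos(math.radians(lat_deg))))
    return lat_deg + d_lat, lon_deg + d_lon
```

A waypoint specified directly by latitude and longitude needs no such conversion, which is one reason GPS strengthened the interest in RNAV.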
Closely related to RNAV are the concepts of Required Navigation Performance (RNP) and Actual Navigation Performance (ANP). The RNP specifies a tolerance band around a pre-determined flight path. An aircraft is required to fly within the tolerance band. For example, at RNP level 4—applicable to oceanic or remote areas in which the recommended inter-aircraft separation is 30 nautical miles—RNP requires an aircraft to remain within 4 nautical miles of the RNAV flight path [RNAV 2017]. The ANP is a measure of the actual deviation of an aircraft’s course from its RNAV path. When the ANP exceeds the RNP, alerts are generated to warn the pilots. In Performance Based Navigation, the RNP constraints on an RNAV flight path are determined by, and take full advantage of, the performance capabilities of the navigational equipment onboard the aircraft and the performance capabilities of the pilots [PBN 2017].

The design of an RNAV flight path is subject to several external constraints, such as those imposed by the Air Traffic Controllers (ATCs) and by the inter-aircraft vertical and lateral separation guidelines. For example, fuel consumption can be optimized by starting idle descent—a descent mode in which the engines consume minimal fuel—at an appropriate altitude and distance from the runway. However, the high traffic density near airports could constrain ATCs to disallow idle descent for all aircraft that are attempting to land. In such circumstances the VNAV implemented by an FMS is modulated by the constraints imposed by ATCs.

The above discussion provides a description of the functionalities of a generic FMS. The FMS on Boeing 787 is provided by Honeywell Aerospace and has the following capabilities: GPS Landing System (GLS), Controller-Pilot Data Link Communication (CPDLC), Future Air Navigation System 1/A (FANS 1/A), Automatic Dependent Surveillance-A (ADS-A) [McHale 2011]; see § 5.2.1.3 and § 4.7.6.
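The ANP-versus-RNP comparison described above can be sketched as a simple threshold check. The RNP 4 oceanic value comes from the text; the other levels in the table are illustrative assumptions, not a normative list.

```python
# RNP tolerance bands in nautical miles; "oceanic" (RNP 4) is from the
# text, the other entries are illustrative.
RNP_LEVELS = {"oceanic": 4.0, "enroute": 2.0, "approach": 0.3}

def check_navigation(anp_nm, phase):
    """Compare Actual Navigation Performance against the Required
    Navigation Performance tolerance for the current airspace."""
    if anp_nm > RNP_LEVELS[phase]:
        return "ALERT: ANP exceeds RNP"
    return "within tolerance"
```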
By automating several of the flight management tasks that would otherwise be performed by the flight crew, the FMS significantly lowers the load on pilots. Besides reduction
of pilots’ fatigue, the FMS has also been credited with improvement of fuel efficiency, reduction in the number of delays, turn-backs and diversions, reduction in maintenance costs, and improvement of situational awareness and reliability.
5.2.1.3 Communication, Navigation, Surveillance and Identification (CNSI)

The avionics in Boeing 787 represents a level of integration that is unprecedented in commercial aircraft. Figure 5.4 illustrates the increased integration in Boeing 787 over its predecessor, Boeing 777, especially in the CNSI subsystems. The CNSI subsystems are described briefly below, deferring more detailed discussion of selected subsystems to later in the chapter [Dodt 2011].
1. Communication

As shown in Fig. 5.4, the communication avionics on Boeing 787 comprises a VHF (Very High Frequency) radio for short range line-of-sight communications, an HF (High Frequency) radio for long-range communications and a SATCOM (SATellite COMmunications) system for satellite-based communication. The following paragraphs provide additional details about the communications systems deployed on Boeing 787.

FANS 1/A: In the past, ATCs have managed air traffic using, largely, voice-based radio communications with pilots. With increasing air traffic density, especially near airports, the load on ATCs has approached saturation levels, motivating the need for an alternative paradigm for air traffic management. The Controller-Pilot Data Link Communication (CPDLC) was devised to replace voice-based analog communications with digital communication over a data link. The often-used messages between the ATCs and the pilots are compiled, in CPDLC, into a set of standard messages that the ATCs and pilots can send each other. For example, an ATC can choose from a set of standard messages for level assignments and radio frequency assignments to pilots. Similarly, the pilots can use a repertoire of standard messages available to them to transmit standard communications such as requesting clearance or reporting an emergency. In addition to the set of standard messages, the CPDLC also provides ATCs and pilots with the capability to send text messages that are outside the standard repertoire. An implementation of CPDLC is the Future Air Navigation System 1 (FANS 1), developed by Boeing, and an analogous FANS A system developed by Airbus. The two systems are together known as FANS 1/A, which is available on both Boeing 787 and Airbus A380. The FANS 1/A is based on the ARINC 622 protocol [FANS 2017].

VHF 2100: The VHF 2100, designed by Rockwell Collins, is a radio deployed on Boeing 787.
It is designed to support VDLM2 communications, which are about ten times faster than VHF communications. It supports both digital voice and data communications, as well as the more traditional analog communications. It is designed to support the CPDLC protocol and operates in the frequency range of 118.00 MHz to 136.99167 MHz [Rockwell Collins 2017].
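The CPDLC message model described above, a fixed repertoire of standard messages plus free text, can be sketched as follows. The numeric identifiers and message wordings here are illustrative stand-ins, not the actual ARINC 622 message set.

```python
# A tiny stand-in for the standard message repertoire (illustrative
# identifiers and wordings).
STANDARD_UPLINKS = {
    20: "CLIMB TO AND MAINTAIN {level}",    # level assignment
    117: "CONTACT {unit} {frequency}",      # frequency assignment
}

def build_uplink(msg_id, **fields):
    """Instantiate a standard controller-to-pilot message."""
    return STANDARD_UPLINKS[msg_id].format(**fields)

def free_text(text):
    """CPDLC also allows messages outside the standard repertoire."""
    return {"type": "free_text", "text": text}
```

Encoding the common exchanges as parameterized standard messages is what lets a data link replace much of the saturated voice channel.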
Fig. 5.4 Boeing 777 avionics and Boeing 787 avionics (in Boeing 777, functions such as the TCAS, ATC transponder, EGPWS (TAWS), weather radar, DME, VOR, MB and MMR are housed in separate units around AIMS; in Boeing 787 they are consolidated into the Integrated Surveillance System (ISS), the Integrated Navigation Radio (INR) and the Common Core Resource (CCR))
SAT-2100: The SAT-2100 system, designed by Rockwell Collins, is a satellite communication system deployed on Boeing 787. It provides both voice and data communications capability through INMARSAT satellites. INMARSAT was started by the International Maritime Organization, and provides global satellite-based connectivity to the aviation industry through a constellation of about eleven geostationary satellites. SATCOM—short for SATellite COMmunication—provides the following operational configurations: Aero-H/H+, Aero-I, Aero-L, Aero-C and Aero mini-M. The details of Aero-H/H+, Aero-I, and Aero-L are shown in Table 5.5. For details of the Aero-C and Aero mini-M configurations the reader is referred to [INMARSAT 2017].
SAT-2100 supports essential data communications required by avionics systems such as ACARS (Aircraft Communications Addressing and Reporting System) and AFIS (Aircraft Flight Information System). SAT-2100 is also used for high-speed internet connectivity during flight, thereby providing a flying office environment.

Table 5.5 SATCOM configurations for use in aviation
Aero-H: Telephony, fax and data communications. Voice @ 9.6 kbps, fax @ 4.8 kbps, data @ 2.4 kbps, cockpit data @ 10.5 kbps. Suited for large commercial/government/business aircraft. High gain antenna on aircraft; could be top or side mounted phased array, or tail mounted mechanical array.

Aero-H+: Telephony, fax and data communications. Voice @ 4.8 kbps, fax @ 2.4 kbps, data @ 2.4 kbps, cockpit data @ 1.2 kbps. Suited for large commercial/government/business aircraft. High gain antenna on aircraft; could be top or side mounted phased array, or tail mounted mechanical array.

Aero-I: Telephony, fax and data communications. Voice @ 4.8 kbps, fax @ 2.4 kbps, data @ 2.4 kbps, cockpit data @ 1.2 kbps. Suited for short to medium-haul, narrow-bodied commercial/government/business aircraft. Uses INMARSAT’s spot beams for voice, fax and data transmissions, and global beams for packet data transmission.

Aero-L: Packet data communications. Data link operates @ 1.2 kbps. Used as a reliable backup channel for transmission of cockpit data and real-time flight monitoring. Uses a low-gain antenna.
3. Navigation

The navigation avionics on Boeing 787 was developed by Honeywell and comprises the following systems.

(a) An Inertial Reference System (IRS).
(b) An Air Data System (ADS).
(c) Two Integrated Navigation Receivers (INRs). Each INR contains equipment for interacting with the ground-based Instrument Landing System (ILS), Marker Beacons (MB), VHF Omnidirectional Range (VOR), the satellite-based Global Positioning System (GPS), and instrumentation for the GPS-based Landing System (GLS).
(d) Two Distance Measuring Equipment (DME) transmitters.
(e) Two Radio Altimeters (RA).
As shown in Fig. 5.4, the VOR, MB and MultiMode Receiver (MMR)—which are separate systems in Boeing 777—are integrated into the INR in Boeing 787.

Fig. 5.6 Voted Air Data and Inertial Reference Systems in Boeing 787 (the Air Data System, Inertial Reference System and Standby Display feed a Fault Detection and Isolation (FDI) voting system, which drives the left and right Primary Function Displays)
Inertial Reference System

The IRS contains two μIRSs—Micro Inertial Reference Systems. Each μIRS contains three ring laser gyroscopes and three accelerometers strapped down to the frame of the aircraft. The gyroscopes and accelerometers measure the acceleration and rotation of the frame to provide both the location and orientation of the frame through dead reckoning. The ARINC 429 output from the μIRS is provided to the FMS, PFDs, HUDs and EGPWS (Enhanced Ground Proximity Warning System; see § 5.2.10). Further, the system is capable of automatic initialization and alignment-in-motion using GPS inputs. Accelerometers and gyroscopes are discussed in Chapter 4. For technical details of the μIRS see [Honeywell 2012].
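The dead-reckoning computation performed by an inertial system can be illustrated with a minimal one-dimensional sketch. This is illustrative only: a real strapdown μIRS integrates three-axis gyroscope and accelerometer outputs and resolves the frame's orientation before integrating.

```python
def dead_reckon(p0, v0, accels, dt):
    """One-dimensional dead reckoning: integrate sensed acceleration once
    for velocity and again for position, at a fixed time step dt."""
    p, v = p0, v0
    for a in accels:
        v += a * dt
        p += v * dt
    return p, v
```

Because each step integrates the previous estimate, sensor errors accumulate over time, which is why alignment updates from GPS inputs are valuable.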
Air Data System

The ADS comprises six Air Data Modules (ADMs) that collect data from pitot tubes and other air data sensors mounted at different locations on the aircraft’s frame. The processed data from the ADMs is sent to the ADS software running on the CCR. The data from the ADMs is compared with the corresponding data from the IRS, as shown in Fig. 5.6. The data displayed on the PFDs is the result of voting on the ADS and IRS data by the Fault Detection and Isolation (FDI) voting system.
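One common way to implement such a voting stage is mid-value selection among redundant measurements. The sketch below assumes three inputs; the actual Boeing 787 voting algorithm is not public, so this is an illustration of the general technique, not the deployed logic.

```python
def mid_value_select(ads_value, irs_value, standby_value):
    """Mid-value selection: return the median of three redundant
    measurements, which rejects a single wildly erroneous source."""
    return sorted([ads_value, irs_value, standby_value])[1]
```

If one source fails badly (say, a blocked pitot tube), its outlier value ends up at an extreme of the sorted list and is ignored.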
Integrated Navigation Receiver

The components of the INR are discussed separately in other sections. ILS and MB are described in § 5.2.12. VOR is described in § 1.2.3.7. GPS is described in § 4.7.7. Additionally, DME is described in § 1.2.3.8 and RA in § 5.2.13.
3. Surveillance

As shown in Fig. 5.4, the weather radar, Traffic Collision Avoidance System (TCAS), Enhanced Ground Proximity Warning System (EGPWS) and Air Traffic Control communication (ATC) systems of Boeing 777 are integrated into a single Integrated Surveillance System (ISS) in Boeing 787. Weather radar is described in § 5.2.9, TCAS in § 5.2.11 and TAWS (EGPWS) in § 5.2.10. The ATC communications use the FANS 1/A protocol, and the VHF 2100 and SAT-2100 systems described above.
4. Identification

The avionics equipment used for the identification functionality includes the ADS-A and ADS-B systems (see § 4.7.6) and the IFF system (see § 6.2.5.1).
5.2.1.4 Display Systems

The Boeing 787 cockpit has four types of displays [Neville and Dey 2012, Norris and Wagner 2009]:

1. Five 15-inch-diagonal MultiFunction Displays (MFDs). Four of the displays are located symmetrically in front of the pilot/co-pilot while the fifth display is located below, in the central pedestal, in front of the throttle. While the two outer MFDs—called the Primary Function Displays (PFDs)—display information in a fixed format, the two inner MFDs—called the Navigation Displays (NDs)—allow the crew to choose the information to be displayed. The PFDs display flight data such as pitch and roll attitude, outside air temperature and radio altitude. The PFDs also display, in the upper outer corner, flight details such as the flight number and its transponder code. In the lower outer corner the PFDs display the digital messages between the air traffic controllers and the pilots. The NDs, on the other hand, can be configured to display information that is specific to the current phase of the flight. For example, during the taxiing phase the NDs can be configured to display the airport’s map. They can also display the cross-sectional view of the approaching terrain as well as the optimal calculated trajectory for landing.

2. Two Head Up Displays (HUDs) are mounted directly ahead of the two pilots and display the critical information from the flight instruments. The HUDs enable pilots to review the critical data from instruments while looking out of the cockpit. The HUDs play a particularly important role during low visibility landing and takeoff.

3. Two Electronic Flight Bags (EFBs) are located on the two sides of the cockpit. The EFBs enable paperless operation of the Boeing 787, and are able to display information—such as maps and manuals. The EFBs also permit communication with the flight management computers [Neville and Dey 2012].
Pilots used to carry the reference documents, including equipment instruction manuals, nav-charts and checklists, in flight bags, which weighed in excess of 15 kg.
The Electronic Flight Bag (EFB), which provides pilots with electronic documents, creates a paperless cockpit. Boeing 787 provides two EFBs—one for each of the two pilots. The EFBs enable pilots to perform several flight management tasks—such as takeoff calculations—more efficiently. Each EFB in the cockpit of Boeing 787 comes equipped with • a dual-processor • touch screen with control buttons, and • dual hard disk drives. Further, they are designed to be highly reliable making it unnecessary to carry backup paper documents. 4. One Integrated Standby Flight Display (ISFD), located at the center on the front Main Instrument Panel (MIP). The ISFD, which operates on a separate battery power supply and uses its own accelerometers, gyroscopes and sensors, provides the necessary flight data—such as aircraft altitude, pitch and roll angles, artificial horizon and airspeed—to guide the pilots in the event of failure of all the MFDs. The ISFD is very reliable with a Mean Time Between Failure of over 50,000 hours. CREW INFORMATION SYSTEM Electronic Flight Bag Crew Wireless LAN
Fig. 5.7 Crew Information System/Management System on Boeing 787
5.2.1.5 Crew Information System/Management System Boeing 787 is equipped with a Crew Information System/Management System (CIS/MS), shown in Fig. 5.7 [Ramsey 2005]. The CIS provides a secure Crew Wireless LAN (CWLAN)—that is,
Avionics in Civilian Aircraft 191
an intranet—to facilitate onboard wireless communication among the crew and systems. The CWLAN is hosted on an onboard server and interconnects the onboard avionics systems. A wireless LAN interface also allows airport personnel to connect wirelessly to the CWLAN when the aircraft is sufficiently close to the gate (about 400 feet). The wireless connectivity enables technicians to download maintenance data about the aircraft, and airport personnel to upload data such as passenger information, the flight plan and the cabin inventory prior to flight. The CIS also includes the Electronic Flight Bag, described above. The MS in Boeing 787 performs the central maintenance and condition monitoring functions. The onboard systems send the MS the relevant data, enabling it to identify and isolate faults and issues that require the attention of the maintenance staff. When new data or software is to be loaded onto the aircraft's systems, the data loader module of the MS acts as the conduit through which the data/software is deployed on the target avionics system. Thus, the MS maintains central awareness of the data and software resident on the aircraft's systems. The configuration manager module of the MS keeps track of the versions of data/software on the systems.
5.2.1.6 Engine-Indicating and Crew-Alerting System (EICAS) In addition to the usual controls for starting engines and adjusting the throttle, the pilots of Boeing 787 are also provided with the option of viewing the engine data on the PFD (Primary Flight Display) by choosing the Engine-Indicating and Crew-Alerting System (EICAS) display mode [B787 2012]. The EICAS can be viewed in three settings:
• Normal Display: both the primary and secondary engine parameters of both the engines are displayed.
• Normal Display with Alerts: in addition to parameters displayed in the normal display setting, alert messages to crew are also displayed.
• Compact Display: only the primary engine parameters of the two engines are displayed.
5.2.1.7 Enhanced Airborne Flight Recorder (EAFR) Boeing 787 is equipped with two flight recorders—one in the front and the other in the aft section of the aircraft. The recorders are capable of retaining more than 25 hours of the most recent flight data and 2 hours of voice data. The standard for EAFR is specified in ARINC 767-1, which outlines the format for recording flight data, voice and video. The recorded data is stored as a Flight Recorder Electronic Documentation (FRED) file, whose format is specified by ARINC 647 [Jesse 2014, Dodt 2011].
5.2.1.8 Electrical Power in Boeing 787 One of the distinguishing features of Boeing 787 is that it is a more electrical aircraft. Many of the non-propulsion tasks such as cabin pressurization and wing de-icing, which were previously performed using bleed air (see § 5.2.5), are done using electrical power in Boeing
787 [Sinnett 2007]. Consequently, Boeing 787 has a higher demand for electrical power than previous commercial aircraft, and yet it is more fuel efficient. As shown in Fig. 5.8, the aircraft has six generators—two 250 kVA generators coupled to each of the main engines and two 225 kVA generators coupled to the APU. Altogether the generators can produce 1.45 megawatts of power. The generators coupled to the main engines are driven by the engines' rotation. The APU is a smaller engine that serves to power the aircraft on the ground before the main engines are started. During flight, the generators coupled to the APU serve as a backup source of power in the event of failure of both of the main engines. The generators coupled to the APU are driven by the APU's engine. Boeing 787 uses the APS 5000 unit manufactured by Hamilton Sundstrand.
Fig. 5.8 Electrical power generation and distribution in Boeing 787
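As a quick sanity check, the 1.45 MW total quoted above follows directly from the generator complement; a minimal sketch (illustrative only, not flight software):

```python
# Generator complement described in the text: two 250 kVA generators
# per main engine (two engines) and two 225 kVA generators on the APU.
ENGINE_GEN_COUNT = 4      # two per main engine
ENGINE_GEN_KVA = 250
APU_GEN_COUNT = 2
APU_GEN_KVA = 225

def total_generation_kva() -> int:
    """Total installed generation capacity in kVA."""
    return ENGINE_GEN_COUNT * ENGINE_GEN_KVA + APU_GEN_COUNT * APU_GEN_KVA

print(total_generation_kva())  # 1450 kVA, i.e. about 1.45 MW
```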
The generators coupled to the main engines produce AC power during flight. Since the engine's rotational speed varies, the generators, which are directly coupled to the engine's gearbox, produce power of variable frequency. The frequency of the power generated by the 250 kVA generators ranges from 360 Hz to 800 Hz [Sinnett 2007]. The six generators produce AC power at 235 V. The electrical systems in Boeing 787 operate at two AC voltages—235 V and 115 V—and two DC voltages—28 V and ±270 V [Sinnett 2007]. The power produced by the generators at 235 VAC is stepped down to 115 VAC by Auto Transformer Units (ATUs), and converted to a 28 VDC supply by the Transformer Rectifier Units (TRUs). The ±270 VDC power is obtained from the 235 VAC supply using Auto Transformer Rectifier Units (ATRUs). The electrical power is distributed to the electrical systems through two Electrical/Electronics (E/E) bays—one located at the front of the aircraft and the other at the back, as shown in Fig. 5.8. In addition, Remote Power Distribution Units (RPDUs) are positioned at several locations in the aircraft. The aft E/E bay supplies 235 VAC power to selected electrical systems. The forward E/E bay and the RPDUs provide the 115 VAC and 28 VDC
power, which are used by most electrical systems. The ±270 VDC is used by heavy duty motors such as those used for cabin pressurization. In addition to the power generated onboard, the aircraft also has 115 VAC receptacles for the supply of power from external sources. The six generators are called starter generators, for they are also used to start the engines. Before the main engines are started on the ground, the APU is started by running its generators as motors, with power supplied either by a battery or by a power source at the airport terminal. If the APU needs to be started during flight, then its generators are powered by batteries—if the engines are not operational—or by the generators coupled to the engines. Similarly, the main engines are started by running their generators as motors, with power derived from the APU's generators or from external power at the airport terminal. During flight, an engine can be restarted using power derived from the generators of the engine on the other wing, if they are operational, or using the power derived from the APU's generators [Sinnett 2007]. In the remainder of this chapter we discuss avionics systems found in civilian aircraft in general. Some of the avionics systems discussed below are also deployed in Boeing 787.
5.2.2 Full Authority Digital Engine Control (FADEC)
In the early days of aviation pilots would manually monitor engine parameters such as fuel flow, engine pressure ratio and engine temperature. Now the control and monitoring of engine parameters is handled by the Full Authority Digital Engine Control (FADEC) computer. In a typical civilian aircraft the pilot enters pertinent flight data—such as runway length, airport temperature and pressure, and cruise altitude—into the Flight Management System (FMS) computer. Using the data, the FMS computer determines the power settings for the different phases of the flight—taxiing, takeoff, cruising and landing. FADEC then takes over the control of the engine parameters consistent with the phase-dependent settings. Except in an emergency, when a pilot can push the engine to provide full thrust, the engine is controlled completely by the FADEC. There are many advantages to having FADEC, rather than a pilot, monitor the engine.
1. The overall fuel efficiency of the engine is enhanced, lowering fuel consumption.
2. FADEC decreases the load on the pilot.
3. Automating engine control facilitates the integration of engine control functionality with the other avionics systems.
4. FADEC ensures safe operation of the engine by restricting the engine parameters to remain within the safety envelope.
5. FADEC can gather data during engine operation that helps with engine diagnostics, health monitoring and maintenance.
Not surprisingly, handing over the engine control completely to a computer also has downsides.
1. If the FADEC fails, so does the engine. Even if the FADEC does not fail fully but only malfunctions partially, it is not possible for a pilot to manually override it. Therefore, the safety of a flight hinges on fault-free functioning of the FADEC. To provide fault-tolerance, a second (and sometimes third) FADEC is provided as backup.
2. The complexity of design, development and testing of an aircraft is increased owing to the addition of a FADEC system to the avionics.
The downsides of FADEC are dwarfed by the benefits of automated control of an aircraft's engines, and modern aircraft, such as Boeing 787, come equipped with an onboard FADEC. Boeing 787 runs on either the GEnx-1B jet engine manufactured by General Electric or the Trent 1000 engine manufactured by Rolls-Royce. The engine control for the GEnx engine is provided by the FADEC 3 system developed by FADEC International in Colombes, France [FADEC 2016].
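The envelope-limiting behavior described in point 4 above can be sketched as a simple clamp. The parameter names and redline values below are illustrative assumptions for the sketch, not GEnx or Trent figures:

```python
def fadec_limit(requested_fuel_flow: float, egt_c: float, n1_pct: float) -> float:
    """Clamp a requested (normalized) fuel flow so the engine stays
    inside its safety envelope.  Redline values are illustrative only."""
    EGT_REDLINE_C = 900.0   # exhaust gas temperature limit (assumed)
    N1_REDLINE_PCT = 104.0  # fan speed limit (assumed)

    # Keep the command within the physically meaningful range [0, 1].
    fuel = min(max(requested_fuel_flow, 0.0), 1.0)
    # Pull back fuel whenever a monitored parameter reaches its redline.
    if egt_c >= EGT_REDLINE_C or n1_pct >= N1_REDLINE_PCT:
        fuel *= 0.9
    return fuel
```

A request beyond full thrust is clamped to 1.0, and the same request with an over-temperature reading is further reduced.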
5.2.3 Engine Health Monitoring and Management System (EHMMS)
Manufacturers of aircraft engines embed engine monitoring technology in the engine and provide associated software to monitor the health of the engine. For example, Rolls-Royce provides an EHMMS that monitors the health of the engine during flight and after landing. Several of the engine's parameters—such as the shaft speed, turbine gas temperature, pressure, flow rate and vibration levels—are measured by sensors deployed at critical points in the engine [Rolls-Royce 2017]. The measurements are monitored to determine if the parameters are within the specified tolerances. Unusual wear and tear of bearings and gears is monitored by deploying detectors that sense the debris in the engine oil that would result from such wear. Sensors are also deployed in the oil system, the fuel system, the cooling air system and the nacelle ventilation systems to ensure that the systems' parameters are within the safety ranges [Rolls-Royce 2017]. Sensor data pertaining to the engine's health is routed to the Integrated Vehicle Health Monitoring (IVHM) system, discussed below. The IVHM consolidates similar health data from all of the aircraft's systems, making the data available to the on-ground crew for analysis as well as necessary maintenance and repair.
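The tolerance checking described above reduces to comparing each sampled parameter against its band. A minimal sketch; the parameter names and ranges here are assumptions for illustration, not Rolls-Royce values:

```python
# Illustrative tolerance bands for a few monitored engine parameters.
TOLERANCES = {
    "shaft_speed_rpm":    (0.0, 12_000.0),
    "turbine_gas_temp_c": (0.0, 900.0),
    "oil_pressure_psi":   (25.0, 90.0),
    "vibration_ips":      (0.0, 1.2),
}

def out_of_tolerance(sample: dict) -> list:
    """Return the names of sampled parameters outside their band."""
    return [name for name, value in sample.items()
            if not (TOLERANCES[name][0] <= value <= TOLERANCES[name][1])]

# A low oil-pressure reading is flagged for the maintenance crew.
print(out_of_tolerance({"oil_pressure_psi": 18.0, "vibration_ips": 0.4}))
```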
5.2.4 Digital Fly By Wire System (DFBWS)
The DFBWS uses a digital computer and an Electronic Flight Control System (EFCS) to control, stabilize and maneuver the aircraft. The control surfaces of a typical aircraft are shown in Fig. 4.1. Boeing 777 has 2 ailerons, 2 flaps, 2 elevators, 1 rudder and 7 spoilers on each wing—21 control surfaces in all. The control surfaces are used to change the direction of the aircraft, lift it, stabilize it and decelerate it. For example, the rudder is used to make the aircraft turn about the vertical axis. Hydraulic actuators are used to move the control surfaces in response to commands from a pilot. Previously, the mechanical control apparatus in cockpits—such as the steering
wheel and pedals—was mechanically coupled to the actuators using cables and pulleys that conveyed a pilot's commands to the actuators. The DFBWS converts a command from a pilot—such as the movement of the steering wheel—into an electrical signal, which is transmitted over an electrical wire to the target actuator. The DFBWS has the following advantages.
• By eliminating heavy cables and pulleys, the DFBWS significantly reduces an aircraft’s weight. • It improves the safety of the aircraft. If the control signals issued by a pilot push the parameters outside the safety envelope, then the DFBWS can be programmed to either alert the pilot or even override the manual command.
Airbus and Boeing have important differences in their implementations of fly-by-wire technology. Whereas Boeing allows a pilot to manually override the FBW system, Airbus does not allow manual override of its FBW system. Also, Boeing uses the traditional wheel, column and rudder pedals to control the aileron, elevator and rudder respectively. Airbus, on the other hand, uses side-stick control. The fly-by-wire system in Boeing 777 comprises a Flight Control Computer (FCC) and electronics for controlling the actuators. The Boeing 777 FBW system offers the following functionalities [Bartley 2001].
• Stall protection
• Over-yaw protection
• Overspeed protection
• Fin load alleviation
• Bank angle protection
• Flap load alleviation
• Tail strike protection
• Gust suppression
• Thrust asymmetry compensation
• Modal suppression
For instance, thrust asymmetry compensation is activated when one of the engines either fails or is shut down. The new feature in Boeing 787—gust suppression—suppresses the impact of turbulence through negative feedback to make the ride more comfortable for passengers [Dodt 2011].
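Gust suppression through negative feedback can be illustrated with a toy discrete loop: each cycle the controller measures the residual disturbance and strengthens its opposing command, so a sustained gust is felt progressively less. This is a minimal proportional loop for illustration, not Boeing's control law:

```python
def suppress(disturbance, gain=0.5):
    """Return the residual acceleration felt on each cycle when a
    proportional negative-feedback command opposes the disturbance."""
    command = 0.0
    residuals = []
    for d in disturbance:
        residual = d - command      # what the airframe (and passengers) feel
        residuals.append(residual)
        command += gain * residual  # negative feedback: oppose the residual
    return residuals

# A sustained unit gust is damped geometrically cycle by cycle.
print(suppress([1.0, 1.0, 1.0, 1.0]))  # [1.0, 0.5, 0.25, 0.125]
```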
5.2.5 Bleed Air versus Bleedless ‘More Electrical’ Aircraft As shown in Fig. 4.68, the air sucked in by a turbofan is compressed in stages as it passes first through a low pressure compressor and subsequently through a high pressure compressor. Part of the high-pressure air—that is, the air prior to its entry into the combustion chamber—is diverted as bleed air. In bleed air aircraft, bleed air is used for non-propulsion tasks such as cabin pressurization, operation of hydraulic actuators, engine cooling, airframe anti-icing, operating air-driven motors, and pressurizing water and waste storage tanks [Sinnett 2007]. The bleed air is not only at high pressure but also at high temperature (up to 250°C). So if it is to be used inside the cabin—for example, for cabin pressurization—then it needs to
be cooled. It is generally cooled using air-to-air heat exchangers, in which the external cold air is used to drain away the heat from the bleed air. In some tasks—such as de-icing the airframe or heating the engine's intake air to ensure that ice does not form on the engine's fan blades—the hot bleed air is not cooled before use. Boeing 787 was a departure from earlier designs that used bleed air. It is designed to be a bleedless more electrical aircraft. That is, Boeing 787 does not use any bleed air. Instead, it uses the electricity that is generated onboard for performing the tasks that earlier relied on bleed air. As mentioned earlier (in § 5.2.1.8), Boeing 787 has two 250 kVA generators tied to each of its engines and two 225 kVA generators tied to the Auxiliary Power Unit (APU) [Ogando 2007]. The generators tied to the engines are also used to start the engines. The total power available on the aircraft is about 1.45 MW (at 24 kW/home, it is sufficient to power sixty homes). The electrical power is used to perform tasks such as cabin pressurization, engine start, braking, gust alleviation and airframe de-icing. The Boeing 787 design represents an advance towards a ‘more electrical’ aircraft. By replacing the heavy pneumatic systems with electrical systems the aircraft's dead weight is reduced. Boeing predicts about a 3% reduction in fuel costs due to the bleedless, more electrical design. The greater reliance on electrical power also reduces the maintenance costs tied to the maintenance-intensive bleed air systems, while improving reliability. The bleedless design in Boeing 787 is estimated to draw 35% less power from the engines, owing to the greater efficiency of electrical power [Sinnett 2016].
5.2.6 Integrated Vehicle Health Monitoring (IVHM)
The safety of flights and the overall cost of operations are greatly improved through continuous monitoring of the health of onboard systems, including avionics. With the advances in sensor technology, repair and maintenance decisions are increasingly based on the performance data of the aircraft's systems collected during flight. For example, as discussed in § 5.2.1.5, the data from Boeing 787's systems is collected by the CIS/MS during flight and is available for wireless download by the on-ground maintenance crew. Boeing's IVHM strategy spans four objectives [Stephenson 2006]: diagnostics, prognostics, condition-based maintenance and adaptive control. Diagnostics involves the collection and analysis of performance data gathered during flight to identify and isolate faults. Diagnostics and repair based on analysis of performance data about the aircraft's systems make it possible to detect and correct faults sooner than is possible with the schedule-based maintenance programs that were used previously. Boeing offers its customers the Airplane Health Management service program, which tracks an aircraft's health continuously. Whereas diagnostics attempt to detect faults that have already occurred, prognostics seek to anticipate the occurrence of faults in the future. Prognostics are based on analysis of the
current performance data viewed against a knowledge base of warning signs of imminent faults. Historical performance data is used to build the knowledge base of warning signs. The condition-based maintenance strategy is a departure from previous approaches to maintenance. Previously, maintenance was performed based on the number of hours of operation logged by a system between maintenance operations. Such schedule-based maintenance is suboptimal for two reasons. A system malfunction between maintenance operations undermines the safety of flight; on the other hand, a system in good health is nevertheless subjected to maintenance work, leading to unnecessary maintenance overhead. Condition-based maintenance eliminates both of these inefficiencies by customizing the maintenance schedule to the actual condition of a system rather than to guidelines derived from statistical analysis of historical data. The condition of hardware is monitored using the data derived from sensors, while the condition of software is monitored using diagnostic software. For example, as described above in the discussion on EHMMS, embedded sensors in engines measure critical engine parameters such as temperature, pressure and vibration at selected points in the engine. The data transmitted by the embedded sensors enable the analysis software to reconstruct the state of health of the engine. Similarly, if the results computed by one of the three PFCs in Boeing 777 are determined to be faulty then that PFC and the software running on it are earmarked for repair, and the subsequent calculations of the faulty PFC are disregarded. Adaptive control pertains to the strategy of dynamically optimizing the performance of the aircraft in the event of a mid-air malfunction of one or more systems. It involves taking stock of the available functioning resources, and charting a course to safe landing.
It is a disaster recovery response mechanism, rather than a strategy for sustaining the functional health of the aircraft.
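The contrast between schedule-based and condition-based maintenance reduces to a one-line decision rule. In this sketch the interval and flag names are illustrative assumptions; a fixed interval is retained only as a backstop:

```python
def maintenance_due(hours_since_service: float,
                    condition_flags: list,
                    backstop_interval_h: float = 500.0) -> bool:
    """Condition-based rule: service when sensor analysis flags
    degradation, with a schedule interval kept only as a backstop."""
    return bool(condition_flags) or hours_since_service >= backstop_interval_h
```

A healthy system at 100 hours is left alone, while a bearing-debris flag triggers service immediately regardless of hours logged.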
5.2.7 Air Data Inertial Reference Unit (ADIRU)
The Air Data and Inertial Reference Unit (ADIRU) of Boeing 777 integrates the functionalities of the Air Data Computer (ADC) and the Inertial Reference Unit (IRU). The ADIRU supplies air data—such as air speed, angle of attack and altitude—and inertial data—such as attitude, heading and position—to the aircraft's instruments, including the Primary Flight Display [Stewart 2014, Croucher 2015]. Static and total pressure readings from a Pitot tube are converted to electrical signals by an Air Data Module (ADM), located close to the Pitot tube, and sent to the ADIRU and its backup, the Standby Air Data and Attitude Reference Unit (SAARU), over the ARINC 629 bus (Fig. 5.9). The ADIRU receives 3 total pressure readings and 6 static pressure readings. While the ADIRU uses the input data from sensors to compute the air data, the inertial data is computed using the gyroscopes and accelerometers housed in the ADIRU. Specifically, it contains 6 ring laser gyroscopes and 6 accelerometers, in addition to three power supplies and four processors [Stewart 2014].
Fig. 5.9 Air Data and Inertial Reference Unit
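With redundant pressure inputs (three total-pressure and six static-pressure readings), one common way to mask a single faulty sensor is mid-value selection. The sketch below illustrates the idea; it is not the ADIRU's actual voting logic:

```python
def mid_value_select(readings: list) -> float:
    """Return the median of redundant sensor readings, so that a single
    wild value cannot corrupt the computed air data."""
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]

# One failed total-pressure sensor reporting a wild value is voted out.
print(mid_value_select([101.2, 101.3, 250.0]))  # 101.3
```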
5.2.8 Standby Air Data and Attitude Reference Unit (SAARU) The SAARU in Boeing 777 serves as a backup unit that takes over if and when the ADIRU fails. It receives total pitot pressure from 3 sensors and static pressure from 6 sensors, and contains 4 fiber optic gyroscopes and 4 accelerometers. In case of an ADIRU failure, the SAARU provides the attitude, heading and air data to the aircraft's instruments [Stewart 2014]. A pilot can select either the ADIRU or the SAARU as the data source for the PFD.
5.2.9 Weather Radar (WxR) Boeing 787 is fitted with MultiScan ThreatTrack (MSTT) weather radar made by Rockwell Collins. The MSTT has the following features [RockwellCollins 2016].
• It can determine weather threats within 320 nautical miles by scanning—both vertically and horizontally—the convective cells that have either already formed or are forming rapidly. • It can identify dangerous areas—such as areas of high lightning activity or hail— within and around a thunderstorm and displays such areas by color-coding the regions of interest. • It can not only detect thunderstorms, but also predict their formation by tracking the rate of growth of storm cells. This capability ensures that the aircraft does not unknowingly wade into the top of a developing thunderstorm. • It can detect severe turbulence within 40 nautical miles on the aircraft’s forward trajectory. • It eliminates the clutter of the weather on the ground, presenting pilots with only the relevant weather data.
The above features of MSTT represent some of the current capabilities of weather radars. An interested reader is referred to [RockwellCollins 2016] for additional details.
5.2.10 Terrain Awareness and Warning System (TAWS) TAWS, earlier called the Enhanced Ground Proximity Warning System (EGPWS), is designed to issue audio and video alerts when there is an imminent threat of Flight Into Terrain (FIT). The FAA has mandated the installation of TAWS in turbine-powered airplanes. Specifically, the FAA mandate is that “no person may operate a turbine-powered airplane unless that airplane is equipped with an approved terrain awareness and warning system that meets the requirements for Class A equipment in Technical Standard Order (TSO)-C151. The airplane must also include an approved terrain situational awareness display.” [TAWS 2002].
Fig. 5.10 TAWS illustrative architecture
Controlled Flight Into Terrain (CFIT) occurs when an airworthy aircraft collides with the terrain, due to poor visibility, a pilot's inattention, incorrect terrain awareness or miscommunication with ATC. FIT could also occur due to the malfunctioning of equipment such as the altimeter or an engine. If critical equipment, such as the engines, fails then the aircraft can no longer be regarded as airworthy and the flight into terrain is not controlled. As shown in Fig. 5.10, a typical TAWS system receives data from a combination of sensors such as GPS, the altimeter, the air data computer, the AHRS, the AOA sensor and the ILS. The TAWS system also has a stored global database of terrain, obstacles and runways. The current position and velocity data is then analyzed in the context of the stored terrain, obstacles
and airports information to determine if the aircraft is flying into terrain. Imminent FIT threats are communicated to pilots in both audio and video formats.
Table 5.11 Minimum set of audio alerts that the TAWS should annunciate
Condition                                                      Class A   Class B
Reduced required terrain clearance                                ✓         ✓
Imminent terrain impact                                           ✓         ✓
Premature descent                                                 ✓         ✓
Excessive rates of descent                                        ✓         ✓
Negative climb rate or altitude loss after takeoff                ✓         ✓
Descent to 500 feet above terrain/runway in a
  non-precision approach                                          ✓         ✓
Excessive closure rate to terrain                                 ✓
Flight into terrain when not in landing configuration             ✓
Excessive downward deviation from an ILS glideslope               ✓
The TAWS systems are classified as Class A and Class B systems. Class A(B) systems are deployed on larger (smaller) aircraft. The minimum set of alerts for Class A and Class B systems are shown in Table 5.11 [FAA 2016].
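The core forward-looking computation sketched in Fig. 5.10 compares the projected flight path against the terrain database ahead of the aircraft. A deliberately simplified one-dimensional sketch, with an assumed clearance margin (not the logic of any certified TAWS):

```python
def fit_alert(altitude_m: float,
              vertical_speed_mps: float,
              terrain_ahead_m: list,
              clearance_m: float = 150.0):
    """Return the look-ahead time (s) at which the projected flight path
    first comes within clearance_m of the terrain, or None if clear.
    terrain_ahead_m[t] is the terrain elevation t seconds ahead along
    the current ground track (a simplified model for illustration)."""
    for t, terrain in enumerate(terrain_ahead_m):
        projected = altitude_m + vertical_speed_mps * t
        if projected - terrain < clearance_m:
            return t
    return None

# Descending at 10 m/s toward rising terrain triggers an alert 2 s out.
print(fit_alert(1000.0, -10.0, [500.0, 700.0, 850.0, 900.0]))  # 2
```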
5.2.11 Traffic Collision Avoidance System (TCAS) The Traffic Collision Avoidance System (TCAS)—called the Airborne Collision Avoidance System (ACAS) internationally—is deployed on aircraft to prevent mid-air collisions. The incident that accentuated the need for collision avoidance technology was a mid-air collision over the Grand Canyon between two commercial aircraft at an altitude of around 20,000 ft [Collision 1956]. Mid-air collisions are not limited to commercial aircraft. As recently as 2015, a mid-air collision occurred between a military aircraft (an F-16) and a small civilian plane (a Cessna 150M) [Schafer 2016]. Collision avoidance systems operate without relying on the ground-based air traffic control systems. The basic principle of a collision avoidance system is quite straightforward [Tooley 2007]. Collision avoidance hardware is installed on an aircraft, which is called the hardware's own aircraft. The hardware uses radio frequency sensing to detect the presence of other airborne aircraft—called target aircraft—within the protected volume of the own aircraft. The protected volume of an aircraft is a region around the aircraft, typically with a radius of a few nautical miles. If some other aircraft enters the protected volume of the own aircraft, then the intruder aircraft is regarded as a potential threat.
Following detection of an intruder aircraft, the collision avoidance hardware tracks the intruder. The threat level of the tracked intruder is continuously adjusted to reflect the probability of collision. If the threat of collision escalates to a dangerous level, then the hardware—possibly in collaboration with the ATC and/or the other aircraft's collision avoidance hardware—advises the pilot about the optimal escape plan to avoid collision. In the following we illustrate the implementation of the above principles in the context of a concrete collision avoidance standard called the Traffic Collision Avoidance System-II (TCAS-II), which was developed by the FAA. Rockwell Collins' TCAS-II Traffic Computer TTR 2100 houses a concrete implementation of the TCAS-II standard [TCAS 2011]. First we discuss the details of threat detection and tracking, and subsequently the details of threat mitigation. The reader is referred to [TCAS 2011] for additional details.
Threat Detection and Tracking
The first step in collision avoidance is the detection of other aircraft whose trajectories have a high probability of leading to collision with the own aircraft. While providing fail-safe collision avoidance protection, TCAS also needs to ensure that unnecessary advisories are not issued. TCAS strikes the balance between maximizing the protection and minimizing the number of unnecessary advisories using three concepts: tau, protected volume and sensitivity level.
Tau: Tau is defined as the time to the Closest Point of Approach (CPA) between the own aircraft (on which TCAS is installed) and the target aircraft that is a potential threat. Two types of tau are used in TCAS—range tau and vertical tau. The above definition—the time to CPA—actually gives the range tau. Vertical tau is the time to co-altitude—that is, the time it takes for the two aircraft to reach the same altitude. Vertical tau can be calculated as the vertical separation divided by the relative vertical velocity. Assuming that both aircraft are equipped with Mode S transponders, computing the range tau and vertical tau is straightforward. Range tau can be calculated using ADS-B squitters, which are broadcast at least once every second (see § 4.7.6). The vertical tau can be calculated using UF4-DF4 interrogation-reply communication between the two aircraft. Recall that a UF4 uplink interrogation asks for the altitude of the target aircraft, in response to which the target aircraft transmits its altitude in a DF4 downlink reply.
Protected Volume: An aircraft has two nested regions around it, as shown in Fig. 5.12. If another aircraft enters the outer region then TCAS issues a Traffic Advisory (TA) alert. On the other hand, if another aircraft enters the inner region then TCAS issues a more urgent Resolution Advisory (RA) alert.
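The two tau quantities defined above are simple ratios; a sketch with the unit conversions made explicit:

```python
def range_tau_s(range_nm: float, closure_rate_kt: float) -> float:
    """Range tau in seconds: slant range divided by closure rate."""
    return range_nm / closure_rate_kt * 3600.0

def vertical_tau_s(vertical_sep_ft: float, vertical_rate_fpm: float) -> float:
    """Vertical tau in seconds: vertical separation divided by the
    relative vertical rate."""
    return vertical_sep_ft / vertical_rate_fpm * 60.0

# An intruder 5 nm away closing at 600 kt is 30 s from the CPA.
print(range_tau_s(5.0, 600.0))        # 30.0
# 2000 ft of separation eroding at 2000 ft/min gives 60 s to co-altitude.
print(vertical_tau_s(2000.0, 2000.0))  # 60.0
```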
The geometry of these two regions depends on the altitudes of the two aircraft, the range tau, the vertical tau and the sensitivity level (discussed below). While a TA alert is advisory and does not call for any action by the pilot, an RA alert calls for a collision avoidance maneuver by the pilot, as discussed below. Sensitivity Level: The sensitivity level (SL) sets the thresholds for tau that determine the TA and RA regions. If the SL is higher—that is, the sensitivity of TCAS is high—it senses
intrusions at longer ranges and the TA and RA regions are larger. The downside of a higher SL, however, is that it could increase the number of aircraft that need to be monitored. TCAS allows the SL to be set in two ways.
1. A pilot can set the SL. Setting the SL to SL1 puts the TCAS in standby mode. In this mode, which is usually selected when the aircraft is on the ground or the TCAS has failed, the TCAS does not transmit any interrogations. If a pilot sets the TCAS to SL2 mode—also called TA-only mode—then it performs all of its functions except issuing RA alerts. The third setting is called TA-RA. If a pilot selects the TA-RA setting, then the SL is selected by TCAS depending on the altitude of the aircraft; see Table 2 in [TCAS 2011]. In the TA-RA mode the TCAS performs all of its functions.
2. The SL of an aircraft can be set by a ground station. While this is provided as an option, it is not currently implemented in U.S. airspace.
Fig. 5.12 Traffic Advisory and Resolution Advisory regions. Left (right): horizontal (vertical) cross section
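The way the tau thresholds scale with sensitivity level can be sketched as a lookup followed by a comparison. The threshold values below are representative of the kind tabulated in [TCAS 2011] but should be treated as illustrative rather than authoritative:

```python
# Representative (TA tau, RA tau) thresholds in seconds per sensitivity
# level; illustrative values, not a substitute for the TCAS tables.
TAU_THRESHOLDS_S = {3: (25, 15), 5: (40, 25), 7: (48, 35)}

def advisory(sensitivity_level: int, tau_s: float) -> str:
    """Classify a tracked intruder as 'RA', 'TA' or 'none' from its tau."""
    ta, ra = TAU_THRESHOLDS_S[sensitivity_level]
    if tau_s <= ra:
        return "RA"
    if tau_s <= ta:
        return "TA"
    return "none"
```

With these illustrative numbers, at SL7 an intruder 40 s from the CPA draws a TA, while one 30 s out draws an RA.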
The architecture of the TCAS-II system is shown in Fig. 5.13 [TCAS 2011]. The Mode S transponder provides the gateway for communicating with the ground-based ATC system and with the Mode S transponders on other aircraft. The Mode S transponder broadcasts ADS-B squitters, which update other aircraft with the own aircraft's position and altitude data. The squitters are used by other aircraft to activate TA and RA alerts. The Mode S transponder is also used to coordinate RA maneuvers with other TCAS-II-equipped aircraft. The control panel enables the crew to control the TCAS computer, the transponder and the displays. Although two sets of antennas are shown in Fig. 5.13, modern TCAS systems have only one pair of antennas, which is shared by the TCAS computer and the Mode S transponder. A directional antenna is installed on top of the aircraft and an omni-directional antenna at the bottom. Optionally, an additional directional antenna is also installed at the bottom. The antennas transmit interrogations at 1030 MHz and receive messages at 1090 MHz.
Fig. 5.13 TCAS architecture
The TA display is used to show the positions of nearby aircraft relative to the own aircraft, vertical speed indications and the TA status. The display also allows the pilot to filter the traffic to be displayed by range or altitude. The RA display provides the guidance to execute the avoidance maneuver. One RA display is provided per pilot.
The TCAS hardware is required to maintain situational awareness of the traffic within 14 nautical miles and up to a traffic density of 0.3 aircraft per square nautical mile. TCAS II is capable of tracking up to 30 aircraft within a range of 30 nautical miles. The situational awareness is maintained by listening to the ADS-B squitters broadcast by the Mode S transponders on other aircraft. The ADS-B squitters contain the transmitting aircraft's address, which is used to send targeted interrogations to the aircraft. As a target aircraft approaches the own aircraft, the Mode S transponder interrogates the target aircraft at a higher rate for position and velocity data. The data obtained by the transponder is provided to the collision avoidance software for threat mitigation.
Before proceeding to a discussion of threat mitigation we mention two challenges that TCAS systems face: garble and multipath transmissions. Owing to multiple concurrent interrogation-reply interactions between the ground-based ATC and airborne TCASs, as well as among the airborne TCASs, the Mode S antennae could have several replies, all at 1090 MHz, incident on them. The incidence of irrelevant FRUIT (False Replies Unsynchronized with Interrogator Transmissions) gives rise to garble. The TCAS system is required to filter out the FRUIT. The interference caused by transmissions from multiple TCAS systems becomes a particularly serious problem when they overload the ground-based ATC or thwart the critical collision avoidance maneuvers of aircraft.
204 Principles of Modern Avionics
Interference Limiting protocols are incorporated into TCAS surveillance to address the problems posed by garble. The TCAS standard requires that no transponder be suppressed by garble for more than 2% of its operating time. The TCAS design reduces garble by restricting the interrogation rate and the power that TCASs are allowed to emit during interrogation as the Number of TCAS Aircraft (NTA) increases. The NTA is computed using the number of distinct aircraft addresses received by the TCAS's Mode S transponder. Recent enhancements consider, in addition to the NTA, additional contextual information, such as whether the nearby aircraft are on the ground in an airport or are airborne. A high traffic density on the ground in airports is significantly less restrictive than a high traffic density in the air. While interference limiting algorithms work to minimize the overall garble, restrictions are placed on the algorithms to ensure that they do not cause TCAS to undershoot the minimum surveillance necessary for collision avoidance.
The other strategy for minimizing garble is hybrid surveillance. When the intruder aircraft is sufficiently far away, the TCAS uses passive surveillance to track the intruder. That is, it merely listens to the intruder's squitters without crowding the airspace by sending its own interrogations to the intruder. When the probability of collision with an intruder exceeds a certain threshold, however, the TCAS switches to active surveillance and starts a bidirectional communication by transmitting interrogation messages. Specifically, if the intruder crosses a proximity threshold in either range or altitude, but not both, the surveillance remains passive but the surveillance status is confirmed every ten seconds with an interrogation. However, if the intruder crosses the proximity thresholds in both range and altitude, then the surveillance switches to active status and the interrogation rate increases to once per second.
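The passive/active decision rule of hybrid surveillance can be sketched as follows; the function and its arguments are hypothetical, with the ten-second and one-second intervals taken from the description above:

```python
# Illustrative sketch of the hybrid surveillance decision rule. The inputs
# indicate whether the intruder has crossed the proximity threshold in range
# and/or in altitude; the interval is the interrogation period in seconds.

def surveillance_policy(crossed_range, crossed_altitude):
    """Return (mode, interrogation_interval_s); None means no interrogations."""
    if crossed_range and crossed_altitude:
        return ("active", 1.0)             # interrogate once per second
    if crossed_range or crossed_altitude:
        return ("passive-confirmed", 10.0) # confirm status every ten seconds
    return ("passive", None)               # listen to squitters only
```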
The hybrid surveillance strategy is intended to minimize the number of interrogation transmissions without jeopardizing the protection provided by the collision avoidance procedure.
The second challenge in threat mitigation is multipath transmissions. A message from an aircraft could travel to a target aircraft through multiple paths owing to reflections off the ground. The messages that are reflected off the ground are discernible since they arrive with lower power. The TCAS filters out the reflected messages by setting a Dynamic Minimum Threshold Level (DMTL) for detection. Messages whose incident power is lower than the DMTL are disregarded.
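The DMTL filtering described above amounts to a power threshold test; a minimal sketch, assuming each reply carries a measured received power in dBm:

```python
# Illustrative multipath filter: replies below the Dynamic Minimum Threshold
# Level (DMTL) are presumed to be ground reflections and are discarded.
# The reply representation (dicts with a "power_dbm" key) is an assumption.

def filter_multipath(replies, dmtl_dbm):
    """Keep only replies whose received power is at or above the DMTL."""
    return [r for r in replies if r["power_dbm"] >= dmtl_dbm]
```

In the real system the DMTL is dynamic, adjusted with the power of the direct-path replies; here it is simply a parameter.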
Threat Mitigation: Broadly, threat mitigation can be partitioned into three phases. Listed in order of increasing criticality, the phases are tracking, threat escalation and the collision avoidance maneuver.
Tracking: Once a target aircraft has been detected within the range of the TCAS, the TCAS initiates tracking of the target aircraft (as described under Threat Detection). The position of the target aircraft is tracked by listening to its squitters. The position of the own aircraft is obtained from its GPS, inertial and pressure sensors, enabling the TCAS to compute the relative position of the target aircraft. Continuous monitoring of the relative position allows the TCAS to compute the relative range velocity as well as the relative vertical velocity of the target aircraft. The relative position and velocity data is supplied to the threat estimation software in the TCAS.
Threat Escalation: TA and RA alerts are issued based on altitude-dependent thresholds for both tau and separation. The thresholds are listed in [TCAS 2011; Table 2]. Part of the table is reproduced below.

Table 5.14 Thresholds for TA/RA alerts

Own Altitude (feet)    Tau (seconds)    Range Separation (nmi)    Altitude Separation (feet)
                       TA      RA       TA        RA              TA        RA
< 1000*                20      N/A      0.30      N/A             850       N/A
1000 - 2350*           25      15       0.33      0.20            850       600
2350 - 5000            30      20       0.48      0.35            850       600
5000 - 10000           40      25       0.75      0.55            850       600
10000 - 20000          45      30       1.00      0.80            850       600
20000 - 42000          48      35       1.30      1.10            850       700
> 42000                48      35       1.30      1.10            1200      800

*Above Ground Level
Consider the altitude band of 5000-10000 ft. A TA alert is issued if the estimated range tau falls below 40 seconds. When the estimated range tau falls below 25 seconds, the threat level is escalated to RA status. The tau concept, by itself, is inadequate to estimate the threat. For example, two aircraft flying almost, but not quite, parallel to each other, as shown in Fig. 5.15, would have a very large range tau. Yet, if they are unacceptably close to each other, large range tau notwithstanding, TA/RA alerts need to be issued. Thus, in the 5000-10000 ft altitude band, if the range separation from the target aircraft falls below 0.75 nautical miles, or the altitude separation falls below 850 feet, then a TA alert is triggered in the own aircraft. Similarly, if the range separation falls below 0.55 nautical miles or the altitude separation falls below 600 feet, then the threat level is escalated to RA alert status.

Fig. 5.15 Large range tau with small range or altitude separation
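The escalation test for one altitude band can be sketched directly from Table 5.14. The threshold dictionaries below hold the 5000-10000 ft row; the function name and return values are illustrative:

```python
# TA/RA escalation test for the 5000-10000 ft band of Table 5.14.
# An alert fires if the range tau OR the range separation OR the altitude
# separation falls below the corresponding threshold for that alert level.

TA = {"tau_s": 40, "range_nm": 0.75, "alt_ft": 850}
RA = {"tau_s": 25, "range_nm": 0.55, "alt_ft": 600}

def alert_level(range_tau_s, range_sep_nm, alt_sep_ft):
    """Return 'RA', 'TA' or 'CLEAR' for the 5000-10000 ft altitude band."""
    def triggered(t):
        return (range_tau_s < t["tau_s"]
                or range_sep_nm < t["range_nm"]
                or alt_sep_ft < t["alt_ft"])
    if triggered(RA):
        return "RA"
    if triggered(TA):
        return "TA"
    return "CLEAR"
```

The full system would first select the threshold row from the own aircraft's altitude, then apply the same test.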
At low altitudes it is important to use the Above Ground Level (AGL) altitude, that is, the radar altitude (measured by bouncing radio waves off the ground and measuring the total travel time). The pressure altitude (measured by pressure sensors) gives the altitude above mean sea level (MSL) and would deviate significantly from the AGL altitude if the ground itself is significantly above the MSL. The distinction between pressure altitude and AGL becomes less important as the altitude increases.
Collision Avoidance Maneuver: If the aircraft that poses a threat is also equipped with TCAS-II, then the two TCASs engage in a cooperative maneuver (discussed later). If the other aircraft is not equipped with TCAS-II, however, the collision avoidance maneuver is initiated in the own aircraft as soon as an RA alert is issued, and it involves two decisions: the RA sense and the strength of the advisory.
RA Sense: The first decision is to choose the RA sense, that is, whether the own aircraft should climb or descend to avoid the threat aircraft. The current trajectory of the threat aircraft is extrapolated to the CPA for the two options (up sense and down sense) and the RA sense that provides the greatest separation at CPA is chosen. The extrapolation assumes that the pilot of the own aircraft will start a 0.25 g acceleration within five seconds and reach a vertical speed of 1500 feet per minute. If the threat is not eliminated and further RAs are issued, then the pilot is expected to start a 0.35 g maneuver within 2.5 seconds after the new RA alert is issued. Under certain conditions, however, restrictions may be placed on the choice of RA sense. For example, if the own aircraft is flying at its upper altitude limit, a decision to make it gain further altitude may be prohibited. Or, if it is flying below 1100 feet AGL, a down sense RA would be prohibited.
Strength of Advisory: An RA is said to be positive if it places a lower limit on a rate, and negative if it places an upper limit on a rate. For example, an advisory to ascend/descend at a rate of at least 500 feet per minute is a positive RA. On the other hand, an advisory that the rate of ascent/descent should be restricted to at most 500 feet per minute is a negative RA. The strength of an RA is the severity of the restriction it places. For positive RAs, a stronger RA imposes a higher lower limit; for negative RAs, a stronger RA imposes a lower upper limit.
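A greatly simplified version of the sense-selection computation can be sketched as follows. It assumes the own aircraft is initially level and models only the pilot-delay, constant-acceleration and constant-vertical-speed phases described above, with the intruder extrapolated at a constant vertical speed; all function and parameter names are hypothetical:

```python
# Simplified sketch of RA sense selection: extrapolate both trajectories to
# the closest point of approach (CPA) for the up and down senses and choose
# the sense giving the larger vertical separation. Assumes the own aircraft
# is initially level; real TCAS logic handles many more cases.

def separation_at_cpa(own_alt_ft, intruder_alt_ft, intruder_vs_fpm,
                      t_cpa_s, sense, delay_s=5.0, accel_g=0.25,
                      target_vs_fpm=1500.0):
    """Vertical separation (ft) at CPA for sense = +1 (climb) or -1 (descend)."""
    G_FPS2 = 32.17                            # ft/s^2
    a = accel_g * G_FPS2 * sense              # signed acceleration, ft/s^2
    v_target = target_vs_fpm / 60.0 * sense   # signed target VS, ft/s
    t_man = max(t_cpa_s - delay_s, 0.0)       # time spent maneuvering
    t_ramp = abs(v_target / a)                # time to reach target VS
    if t_man <= t_ramp:
        own_dz = 0.5 * a * t_man ** 2
    else:
        own_dz = 0.5 * a * t_ramp ** 2 + v_target * (t_man - t_ramp)
    intr_dz = intruder_vs_fpm / 60.0 * t_cpa_s
    return abs((own_alt_ft + own_dz) - (intruder_alt_ft + intr_dz))

def choose_sense(own_alt_ft, intruder_alt_ft, intruder_vs_fpm, t_cpa_s, agl_ft):
    """Pick the sense with the larger CPA separation; no down sense below 1100 ft AGL."""
    senses = [+1] if agl_ft < 1100 else [+1, -1]
    return max(senses, key=lambda s: separation_at_cpa(
        own_alt_ft, intruder_alt_ft, intruder_vs_fpm, t_cpa_s, s))
```

For an intruder climbing through the own altitude, the sketch chooses the down sense; with the aircraft below 1100 ft AGL, only the up sense is permitted.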
The TCAS chooses a strength that causes the least disruption to the aircraft's current trajectory while achieving the necessary separation at CPA for safe collision avoidance. Various cases arise in the interactions between the own aircraft and the threat; some of the possibilities are discussed in [TCAS 2011]. The strength of an initial RA may be revised depending on whether the independent collision avoidance maneuver of the other aircraft amplifies or counteracts the collision avoidance maneuver of the own aircraft. Even the sense of an issued RA could be reversed if the other aircraft maneuvers to thwart the RA sense chosen by the own aircraft. Whereas the above discussion pertains to collision avoidance between two aircraft, TCAS is designed to be capable of collision avoidance maneuvers even in the presence of multiple threats.
TCAS-TCAS Coordination: If both the own aircraft and the other aircraft are equipped with TCAS, then the two TCASs enter into a coordinated collision avoidance maneuver by choosing complementary RA senses. If one of the aircraft (say aircraft A) chooses an RA sense, then the sense is communicated to the other aircraft (say aircraft B), inducing B to make its sense selection based on the intention of A. If the two aircraft announce their sense selections simultaneously, and the selections conflict, then the aircraft with the higher Mode S address revises its sense selection.
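The tie-breaking rule for simultaneous conflicting announcements can be sketched as below; senses are encoded as +1 for climb and -1 for descend, and a conflict here means both aircraft chose the same sense (complementary senses are the goal):

```python
# Illustrative sketch of the TCAS-TCAS coordination tie-break: when both
# aircraft simultaneously announce the SAME sense, the aircraft with the
# higher Mode S address reverses its selection.

def resolve_simultaneous(sense_a, addr_a, sense_b, addr_b):
    """Return the (sense_a, sense_b) pair after conflict resolution."""
    if sense_a == sense_b:            # conflict: both chose the same sense
        if addr_a > addr_b:
            sense_a = -sense_a        # higher address yields
        else:
            sense_b = -sense_b
    return sense_a, sense_b
```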
The above discussion touches upon the salient aspects of threat detection and avoidance embodied in TCAS-II. The reader is referred to [TCAS 2011] for a more detailed discussion.
5.2.12 Instrument Landing System (ILS)
The Instrument Landing System (ILS) comprises three separate systems (the localizer, the glide slope and the marker beacons) that are installed on the ground to provide guidance to aircraft as they approach a runway for landing [Helfrick 2002]. The three systems continuously broadcast guidance signals at radio frequencies. The localizer guides a landing aircraft to ensure that its trajectory is aligned with the centerline of the runway. The glide slope guides an aircraft to ensure that it descends at the right slope as it approaches the runway. The marker beacons provide audio warnings to pilots that they are approaching the runway. The ILS is particularly important when an aircraft needs to land 'blind', that is, when visibility is low or absent due to adverse weather conditions.

Fig. 5.16 Beams transmitted by the localizer transmitters: the left transmitters (as seen by an approaching aircraft) radiate a 90 Hz modulated beam and the right transmitters a 150 Hz modulated beam, arranged symmetrically about the runway centerline
Figure 5.16 illustrates the localizer transmitters. They use directional antennae and are located beyond the end of a runway, arranged symmetrically on either side of the centerline. All of the transmitters broadcast signals with a carrier frequency in the range 108.1 MHz – 111.95 MHz. The localizer signals have a range of more than 20 nautical miles.
The beam transmitted by the antennae on the left of the centerline, as seen by an approaching aircraft, is amplitude-modulated with a 90 Hz sine wave. The beam from the antennae on the right is amplitude-modulated with a 150 Hz sine wave. If an aircraft's path, projected onto the earth's surface, is aligned with the centerline of the runway, then the aircraft receives equal amounts of radiation with 90 Hz and 150 Hz modulation. On the other hand, if the aircraft is to the left of the centerline, then it receives more of the 90 Hz modulated beam. Conversely, it receives more of the 150 Hz modulated beam if it is to the right of the centerline. The onboard radio receiver and associated electronics translate the received signal into a visual display, as shown in Fig. 5.17. If the aircraft is approaching to the right of the runway and needs to move to the left, then the display marker is to the left, signaling that the pilot should move the aircraft to the left (shown in the figure on the left in Fig. 5.17). If the aircraft's descent path is aligned with the centerline, then the display marker is centered (as shown in the middle of Fig. 5.17). And finally, the figure on the right shows the display seen by the pilot if the aircraft needs to be moved to the right.

Fig. 5.17 Localizer display (in cockpit). Left: fly to the left; middle: aligned; right: fly to the right
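The localizer logic above reduces to comparing the received depths of the 90 Hz and 150 Hz modulations; a minimal sketch, with an assumed dead-band tolerance for the "aligned" indication:

```python
# Illustrative localizer steering indication. More 90 Hz than 150 Hz means
# the aircraft is LEFT of the centerline, so the pilot should fly right, and
# vice versa. The tolerance value is an assumption, not from the ILS spec.

def localizer_indication(depth_90, depth_150, tol=0.005):
    """Return 'aligned', 'fly right' or 'fly left' from modulation depths."""
    ddm = depth_90 - depth_150        # difference in depth of modulation
    if abs(ddm) <= tol:
        return "aligned"
    return "fly right" if ddm > 0 else "fly left"
```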
Whereas the localizer provides guidance to steer the aircraft in the horizontal plane, the glide slope system guides the aircraft in the vertical plane, as shown in Fig. 5.18. The glide slope transmitter is a vertical tower with two antennae at heights of 4.5 m and 8.5 m from the ground. The tower itself (not shown in Fig. 5.18) is located either on the left or the right of the runway. The glide slope transmitter radiates two angularly separated beams, as shown in Fig. 5.18. The carrier frequency for both beams is in the range of 329.15 MHz to 335.0 MHz. The actual carrier frequency used is determined by the carrier frequency used by the associated localizer. The upper beam is amplitude-modulated with a 90 Hz sine wave, while the lower beam is amplitude-modulated with a 150 Hz wave. An aircraft flying on the optimal descent path would receive equal amounts of the waves with 90 Hz and 150 Hz modulation. Flying above (below) the optimal descent path, the aircraft's instruments would receive more of the 90 Hz (150 Hz) modulated wave. The relative strengths of the 90 Hz and 150 Hz modulated waves received by the aircraft are displayed visually, as shown in Fig. 5.19.
Fig. 5.18 Beams transmitted by the glide slope transmitter: the upper beam carries 90 Hz modulation and the lower beam 150 Hz modulation, straddling the optimal descent path to the runway
Fig. 5.19 Glide slope display (in cockpit). Left: fly upwards; middle: aligned; right: fly downwards
If the aircraft is flying below (above) the optimal descent path and the pilot needs to gain (lose) altitude, the pilot sees the display shown on the left (right) in Fig. 5.19. If the aircraft's trajectory is along the optimal descent path, then the pilot sees the display shown in the middle.

Fig. 5.20 Marker beacons (inner marker ~0.3 km, middle marker ~2 km, outer marker ~7-10 km from the beginning of the runway)
In addition to the localizer and glide slope the ILS has three marker beacons situated at different distances from the beginning of the runway, as shown in Fig. 5.20. The directional
antennae of the beacons transmit a narrow 75 MHz beam vertically upwards. The beams are intentionally designed to have low power output. In addition, the receivers mounted on the underbelly of an aircraft to sense the beams are intentionally designed to have low sensitivity. The low power of the transmitters and the low sensitivity of the receivers ensure that an aircraft senses a marker beacon beam only when it is approximately directly above the beacon. When an aircraft passes above a beam, audio and visual alerts are triggered. The outer marker is amplitude modulated with a 400 Hz signal that encodes 2 dashes per second (Morse code). The middle marker is amplitude modulated at 1300 Hz and has an encoded audio signal with alternating dashes and dots. The inner marker is modulated at 3000 Hz and carries an encoded signal of 6 dots per second. The signals are fed to the speakers or the pilot's headset in the cockpit. In addition, the ILS protocol requires that passing over the outer marker be signaled to the pilot by lighting a blue lamp in the cockpit. Similarly, the middle and inner markers are signaled by lighting amber and white lamps respectively [Helfrick 2002].
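The tone, keying and lamp assignments above can be collected into a small lookup table; the dictionary layout is illustrative:

```python
# Marker-beacon tone/keying/lamp pairings, as described in the text.

MARKERS = {
    "outer":  {"tone_hz": 400,  "keying": "2 dashes per second",    "lamp": "blue"},
    "middle": {"tone_hz": 1300, "keying": "alternating dashes/dots", "lamp": "amber"},
    "inner":  {"tone_hz": 3000, "keying": "6 dots per second",      "lamp": "white"},
}

def identify_marker(tone_hz):
    """Return the marker name whose audio tone matches, or None."""
    for name, marker in MARKERS.items():
        if marker["tone_hz"] == tone_hz:
            return name
    return None
```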
5.2.13 Radar Altimeter
A radar altimeter, also called a radio altimeter, is used to sense the aircraft's altitude above the ground [Helfrick 2002]. An altimeter is a short-range radar and comprises a transmitter and a receiver. It can sense altitude with sub-meter accuracy for altitudes up to about 2500 feet. The high accuracy at low altitudes makes an altimeter a useful supplement to the ILS as a landing aid. An altimeter is helpful during landing at stages as late as the flare, the stage between the glide slope and touchdown, when the aircraft noses up as it decelerates.
Fig. 5.21 Radio altimeter: an interrogating RF wave is transmitted towards the ground and the reflected wave is received
Fig. 5.21 illustrates the operating principle of a radio altimeter. The altimeter's transmitter sends an interrogating radio wave towards the ground. The receiver senses the reflected wave and the total travel time. Since aircraft speeds are orders of magnitude smaller than the speed of light, the altitude is given, to a good approximation, by the speed of light in air times half of the time between the emission of the interrogating wave and the reception of the reflected wave. Whereas altimeters previously used radio waves in the L band (1-2 GHz), current altimeters use radio waves in the S band (2-4 GHz).
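The altitude computation described above is a one-line application of h = c * t / 2; a minimal sketch:

```python
# Radar altitude from the round-trip time of the pulse: the wave travels
# down and back, covering 2*h, so h = c * t / 2. The speed of light in air
# is within about 0.03% of the vacuum value, so the vacuum value is used.

C_M_PER_S = 299_792_458.0

def radar_altitude_m(round_trip_s):
    """Altitude above ground (m) from the measured round-trip time (s)."""
    return C_M_PER_S * round_trip_s / 2.0
```

A round trip of 2 microseconds, for instance, corresponds to an altitude of roughly 300 m.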
Radio altimeters are used in large aircraft. For example, Boeing 787 has two radio altimeters, which are manufactured by Honeywell. They are less useful in aircraft that fly in Visual Meteorological Conditions (VMC) and rely on Visual Flight Rules (VFR).
5.2.14 Aircraft Communications Addressing and Reporting System (ACARS)
ACARS is a network comprising onboard, ground-based and satellite-based resources that facilitates textual communications between aircraft and the relevant ground stations (see Fig. 5.22). The messages could pertain to ATC, such as guidance about the aircraft's flight path, reporting of the aircraft's current position, requesting and granting of clearances, and weather information. Alternatively, ACARS communications could pertain to Airline Operational Control (AOC), including confirmation of take-off or landing, gate information, or maintenance data about the health of the onboard systems. AOC message traffic currently amounts to 80% of ACARS messages, with ATC accounting for the remaining 20% [RockwellCollins 2017].
Fig. 5.22 Air-Satellite-Ground network supporting ACARS: messages flow between the aircraft and the Central Message Processor (CMP) via satellite, VHF ground stations or HFDL ground stations
The ground stations in the ACARS network are of two types.
1. High Frequency Data Link (HFDL) ground stations that support HF communications.
2. VHF Data Link Mode 2 (VDLM2) ground stations that support VDLM2 communications, which are ten times faster than VHF communications.
The satellites used in ACARS communications currently belong to one of the following two satellite communications networks.
Fig. 5.23 Onboard interactions of the ACARS Management Unit: it connects to the SATCOM and VHF/HF transceivers on one side, and to the FMS, EHMMS and MCDU on the other
1. Iridium Global Satellite Network.
2. INMARSAT Global Satellite Network.
Specifically, when the aircraft has line-of-sight connectivity to a VDLM2 ground station, the ACARS communications are routed through the VDLM2 station. If a VDLM2 station is not within line of sight, for instance when the aircraft is flying over an ocean, then ACARS communications are routed through HFDL stations, which do not require line-of-sight connectivity, or through a satellite from the INMARSAT or Iridium networks. When the aircraft is flying over the North or South pole, the communications are routed through either HFDL ground stations or Iridium satellites.
The ACARS network is implemented with a star topology [Oishi 2007]. The central node of the network is the Central Message Processor (CMP). All of the messages either originate or terminate at the CMP, as shown in Fig. 5.22. The necessary support for ACARS communications is currently provided by two service providers: ARINC and Societe Internationale de Telecommunications Aeronautiques (SITA).
The onboard ACARS Management Unit interfaces to the equipment that communicates with the ground stations and satellites on one side, and to the FMS, EHMMS and MCDU on the other. It receives data from avionics systems such as the EHMMS. The primary interface between the ACARS equipment and the pilots is the Multifunction Control Display Unit (MCDU) in the cockpit, as shown in Fig. 5.23 [Oishi 2007].
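The link-selection logic described above can be sketched as a priority rule; the function is hypothetical and ignores provider- and contract-specific details that would influence the choice among the fallback links:

```python
# Illustrative ACARS link selection: prefer VDLM2 when a ground station is in
# line of sight; otherwise fall back to HFDL or satellite, with only HFDL and
# Iridium usable over the poles (INMARSAT coverage does not extend to them).

def route_acars(vdlm2_in_sight, over_polar_region):
    """Return the candidate links for the current flight situation."""
    if vdlm2_in_sight:
        return ["VDLM2"]
    if over_polar_region:
        return ["HFDL", "Iridium"]
    return ["HFDL", "INMARSAT", "Iridium"]
```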
5.2.15 Crash-Safe Flight Recorders (CSFRs)
The CSFRs comprise equipment that records flight parameters and voice data in the cockpit for use in investigation in the event of a disaster. The CSFRs are required to survive a crash.
So they are housed in a crash-safe enclosure and located in the area that is likely to bear the least brunt of a crash: the aft of the aircraft.

Fig. 5.24 Flight data recording system: sensor and performance data, image data and data link traffic are routed through the Flight Data Acquisition Unit (FDAU) to the Flight Data Recorder (FDR); cockpit voice data is routed to the Cockpit Voice Recorder (CVR)
The CSFRs comprise a Flight Data Recorder (FDR) and a Cockpit Voice Recorder (CVR) [ICAO 2010, FDR 2017, NTSB 2017]. The FDR records a minimum of 88, and in some aircraft over 1000, flight parameters pertaining to flight path, speed, attitude, engine and configuration of flight control surfaces. The FDR could also record the images displayed to pilots on the flight deck displays and the textual communications occurring over data links. The data from the various sensors and instruments, some of which could be in analog format, the image data and the datalink data are routed to the FDR through the Flight Data Acquisition Unit (FDAU). The FDAU ensures that the data streamed into it is presented to the FDR in a suitable digital format. The FDR is required to have the capacity to hold 25 hours of data, and overwrites data that is older than 25 hours.
The Cockpit Voice Recorder (CVR) records the conversations in the cockpit, including the communications that pilots have with the cabin crew and the ATC. It also records the audio warnings issued by the instruments in the cockpit, engine noises and any other unusual sounds in the cockpit. The CVR is required to retain recorded voice data for thirty minutes, after which the data is overwritten.
Whereas in the early days the recorders used magnetic tape as the storage medium, recent recorders use solid state devices. In addition to having no moving parts, solid state storage devices, being compact, can hold more data within the volume of the crash-safe boxes that house the recorders. The CSFRs are housed in boxes that can withstand water pressure at depths of about 20,000 feet, temperatures up to 1100°C for up to half an hour, and accelerations of up to 3400 g for up to 6.5 milliseconds. The boxes also contain an Underwater Locator Beacon (ULB) that is activated when submerged in water. The ULB transmits a signal at 37.5 kHz for up to 30 days after activation.
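The 25-hour and 30-minute overwrite behavior is essentially a ring buffer; a minimal sketch using a bounded deque, with an assumed uniform sample rate:

```python
import collections

class RecorderBuffer:
    """Sketch of recorder overwrite behavior: a bounded ring buffer that keeps
    only the most recent capacity_s seconds of samples (25 h for the FDR,
    30 min for the CVR), silently discarding older data as new data arrives.
    The uniform 1 Hz default sample rate is an assumption for illustration."""

    def __init__(self, capacity_s, sample_rate_hz=1):
        self.samples = collections.deque(maxlen=int(capacity_s * sample_rate_hz))

    def record(self, sample):
        self.samples.append(sample)   # oldest sample drops off when full

fdr = RecorderBuffer(capacity_s=25 * 3600)   # 25 hours of data at 1 Hz
cvr = RecorderBuffer(capacity_s=30 * 60)     # 30 minutes of data at 1 Hz
```

The bounded `deque` makes the overwrite implicit: appending to a full buffer evicts the oldest sample, mirroring how the recorders overwrite their oldest data.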
Chapter 6: Avionics in Military Aircraft

Military aircraft perform a wider array of tasks than commercial aircraft. For example, Lockheed Martin's F-35C Lightning II flies at supersonic speeds (1930 km/hour), has a relatively limited range (2200 km without refueling), is designed to handle extreme accelerations (maximum g-load of 7.5 g), carries just a pilot and relatively light payloads (approximately 8160 kg), and is equipped to launch as well as evade missiles [F35Facts 2017]. The F-35A, a variant of the F-35, is shown in Fig. 6.1.
Fig. 6.1 F-35A Lightning II fighter aircraft (Courtesy: U.S. Air Force photo by Master Sgt. Donald R. Allen en.wikipedia.org/wiki/Lockheed_ Martin_F-35_Lightning_II#/media/File:F-35A_flight_(cropped).jpg)
On the other hand, a military transport aircraft such as Lockheed Martin's C-5B Galaxy, shown in Fig. 6.2, has a maximum airspeed of 932 km/hour, a range of 4400 km, carries a crew of about 7 and a maximum payload of about 122,500 kg. The maximum takeoff weight of the C-5B is 381,000 kg, which is to be contrasted with the maximum takeoff
weight of 227,930 kg of Boeing 787 Dreamliner [C5B 2014, B787 2017]. The C-5B is used for intercontinental transportation of oversize loads (such as heavy military equipment).
Fig. 6.2 Lockheed Martin’s C-5B Galaxy military transport aircraft (Courtesy: U.S. Air Force photo by Brett Snow en.wikipedia.org/wiki/Lockheed_C-5_Galaxy#/media/File:Usaf.c5.galaxy.750pix.jpg)
Fig. 6.3 General Atomics MQ-1 Predator unmanned aerial vehicle (Courtesy: U.S. Air Force Photo/Lt. Col. Leslie Pratt. en.wikipedia.org/wiki/General_Atomics_MQ-1_Predator#/media/File:MQ1_Predator_unmanned_aircraft.jpg)
In contrast to human-operated military aircraft, Unmanned Aerial Vehicles (UAVs), such as the MQ-1 Predator drone manufactured by General Atomics, carry no onboard crew. Rather, they are operated remotely. The MQ-1 drone has a maximum speed of
217 km/hour while cruising and a range of 1100 km. After reaching its destination it can loiter at a speed of about 100 km/hour for several hours. It can carry a payload of about 500 kg, such as two AGM-114 Hellfire missiles [MQ1 2017].
The operational requirements of military aircraft and the specifications for their avionics are, not surprisingly, markedly different from the specifications for the avionics in commercial aircraft. Whereas transport aircraft such as the C-5B bear a greater resemblance to commercial aircraft, the avionics deployed in fighter aircraft and UAVs involve functionalities that are not seen in commercial aircraft. Fighter aircraft, such as the F-35, contain cutting-edge avionics technologies that are significantly more sophisticated than those in UAVs. Therefore, in the following discussion we take an in-depth look at the avionics in fighter aircraft, specifically the F-35 Lightning II, arguably the most sophisticated fighter aircraft ever built. The discussion of the avionics of the F-35 requires a prior understanding of the aircraft's main characteristics, which are summarized in the next subsection.
6.1 F-35 LIGHTNING II
The F-35 is a fifth generation multirole supersonic stealth aircraft. It is optimized to excel in air-to-air as well as air-to-ground combat and reconnaissance. The highlights of its design features are summarized below and discussed at greater length subsequently.
The advanced sensor intelligence deployed on an F-35 gives it unprecedented visibility into a battlefield. The Distributed Aperture System (DAS), comprising six infrared (IR) sensors, captures real-time video spherically around the aircraft, with no blind spots. The data from the DAS is fed to an advanced Helmet Mounted Display System (HMDS), which tracks the pilot's head movements and displays the real-time video of the battlefield in the direction in which the pilot is looking. The DAS, working with the HMDS, effectively makes the entire aircraft transparent to the pilot. The pilot could look vertically down and would see the battlefield as if the bottom of the cockpit were transparent. In addition, a powerful Active Electronically Steered Array (AESA) radar mounted on the aircraft's nose continuously detects and tracks a large number of moving airborne and ground-based targets. The DAS and AESA, working together with the Electro-Optical Targeting System (EOTS), can display high resolution imagery of the targets and augment the imagery with details about the targets. The EOTS, working with the Electronic Warfare System (EWS), can steer laser-guided missiles to achieve pinpoint accuracy in strikes.
A distinguishing feature of the avionics in the F-35 is the synergy it achieves among the intra-aircraft and inter-aircraft sensor systems. A fusion engine deployed on each F-35 seamlessly fuses the data from all of its sensor systems to display battle-critical information on the main display, the Multi-Function Display System (MFDS), and also selected information on the HMDS.
Beyond fusion of data from sensors on the aircraft, the fusion engines on the different F-35s engaged in combat cooperate with each other over wireless data link—the Multifunction Airborne Data Link—to ensure that data from all of the F-35s that are engaged in battle are fused to present the same composite view to all of the F-35
pilots. The digital cooperation among the F-35s, which happens autonomously (without pilots' involvement), gives rise to unprecedented situational awareness of the airborne and ground-based threats.
In addition to such enhanced situational awareness, the F-35s are also designed to be stealth fighters. The stealth of an aircraft, that is, its ability to evade detection, can be measured along four dimensions [Stealth 2013]: radar stealth (its ability to evade detection in the radio frequency band of the electromagnetic spectrum), infrared stealth (evasion in the infrared band), visual stealth (evasion in the visible band) and acoustic stealth (its ability to evade detection through its noise). The following design features of the F-35 are instrumental in its advanced stealth capability.
1. A novel material is used for its skin to reduce its Radar Cross Section (RCS). About 42% of the aircraft, by weight, is made of composite material [Butler 2010]. In addition, a classified material called fiber mat is embedded in the skin of the aircraft to minimize reflection of radio waves, and thereby reduce the RCS of the aircraft.
2. The thermal cross section of the aircraft, its cross section in the infrared band, is greatly reduced by using a Low-Observable Asymmetric Nozzle (LOAN) for extruding the hot gas from the engine.
The F-35 comes in three models: F-35A, F-35B and F-35C. The F-35A is designed for Conventional TakeOff and Landing (CTOL). The F-35B is designed for Short TakeOff and Vertical Landing (STOVL). The F-35C, usually deployed on aircraft carriers (denoted CV for Carrier Version), is designed for Catapult Assisted TakeOff But Arrested Recovery (CATOBAR). The F-35B is the first aircraft that can cruise at supersonic speeds and land vertically. The F-35C, the most expensive member of the family, has a bigger wingspan (43 ft) than the F-35A and F-35B (both of which have a wingspan of 35 ft).
At 34,800 lbs empty weight, the F-35C is also the heaviest aircraft in the family. The F-35B has the least range, approximately 900 nm, while the other two models have a range of 1200 nm [F35s 2017]. In the following discussion the term F-35 will be used to refer to all three models.
The F-35 has a single turbofan engine, the F-135, made by Pratt and Whitney [PW 2017]. The STOVL engine is illustrated schematically in Fig. 6.4. In normal cruise mode the main engine in the STOVL F-135 drives just the main fan at the engine inlet to draw air in for combustion and bypass. In the hover/vertical landing mode the three-bearing swivel duct bends the tailpipe to point the exhaust nozzle vertically down (Fig. 6.4 and Fig. 6.15) and also couples the engine to a shaft that drives the lift fan, which is located behind the cockpit. In the hover/vertical landing mode the engine redirects some of the bypass air to the two roll post ducts, which, under the control of a FADEC, provide roll stability. The hot gas exiting the downward-pointing exhaust nozzle provides approximately half the upward thrust needed to make the aircraft hover. The lift fan provides approximately the other half by forcing air vertically downward through its outlet. The air forced down through the roll post ducts also provides upward thrust, although the main purpose of the roll post ducts is to balance the aircraft laterally.
218 Principles of Modern Avionics
Fig. 6.4 STOVL F-135 Pratt & Whitney engine (labels: engine inlet, lift fan inlet, lift fan outlet, roll post ducts, three-bearing swivel duct, engine exhaust nozzle)
The F-135 engine is designed to provide extra thrust—called wet thrust—by activating its afterburner. An afterburner injects extra fuel into the hot gas exiting the engine’s turbine to reheat the gas before it streams out of the exhaust nozzle. The reheating increases the velocity with which the gas exits at the nozzle, providing additional forward thrust. The technical details of the two engine types are summarized in Table 6.5 and Table 6.6 [PW 2017]. Table 6.5 Technical details of the F-135 Pratt & Whitney engine.
                         CTOL/CV F-135      STOVL F-135
                         Main Engine        Main Engine            Lift Fan
Maximum thrust           191.3 kN           182.4 kN               —
Short takeoff thrust     —                  181.2 kN               —
Intermediate thrust      128.1 kN           120.1 kN               —
Length                   5.59 m             9.37 m                 —
Inlet diameter           1.09 m             1.09 m                 1.30 m
Maximum diameter         1.17 m             1.17 m                 1.34 m
Bypass ratio             0.57               0.56 (conventional)    —
                                            0.51 (powered lift)
Overall pressure ratio   28                 28                     —
Avionics in Military Aircraft 219
Table 6.6 STOVL F-135 engine in hover/vertical landing mode

                 Main Engine   Lift Fan   Roll Post   Total
Maximum thrust   83.1 kN       83.1 kN    14.6 kN     180.8 kN
Pressure ratio   29            —          —           —
Fig. 6.7 provides a view of the internal structure of the F-35B. The image on the top left is a view of the cockpit. The images on the top right and the bottom left show the structure from below and from above the aircraft. The F-35 aircraft is manufactured by Lockheed Martin in collaboration mainly with Pratt & Whitney, Northrop Grumman, Rolls Royce and BAE Systems. The engine, as noted above, is made by Pratt & Whitney. The STOVL technology for the F-35B is provided by Rolls Royce. The Electronic Warfare System is provided by BAE Systems. Northrop Grumman provides the AESA radar, the Communications, Navigation and Identification System and the Distributed Aperture System. The F-35 was developed as part of the Joint Strike Fighter program—supported by the U.S. Air Force, Navy and Marine Corps.
6.2 AVIONICS IN F-35 LIGHTNING II
As illustrated in Fig. 6.8, F-35 avionics is based on an Integrated-Federated Architecture [Wroble 2006]. The different avionics systems of the F-35 are interconnected by a fiber optic bus [Adams 2003]. Figure 6.8 displays the prominent avionics systems, which are described in greater detail in the following sections.
6.2.1 Integrated Core Processor
The computing resources in the F-35 avionics—its intelligence—are contained in its Integrated Core Processor (ICP), which handles all of the intensive computing (the sensor systems perform light tasks such as analog-to-digital conversion). Physical Arrangement: The ICP is physically organized into two racks—one with 23 slots and another with 8 slots [Adams 2003]. Rack 1 houses 22 Line Replaceable Modules (LRMs). The LRMs include
• 4 general purpose (GP) processing modules
• 2 general purpose input/output modules
• 2 signal processing modules
• 5 signal processing input/output modules
Fig. 6.7 The internal structure of F-35B, with the lift fan outlet and engine exhaust nozzle labeled (Courtesy: www.f-16.net/g3/f-35-photos/F-35-graphics-and-art/Lockheed-Martin-F-35)
Fig. 6.8 Avionics in F-35 Lightning II (the fiber optic backbone interconnects the Active Electronically Steered Array (AESA radar), Fusion Engine (FE), Electro-Optical Targeting System (EOTS), Full Authority Digital Engine Control (FADEC), Automatic Logistics Information System (ALIS) interface, Integrated Core Processor (ICP), Distributed Aperture System (DAS), Vehicle Management System (VMS) and the Communication, Navigation & Identification (CNI) system)
• 2 image processor modules
• 2 switch modules and
• 5 power supply modules.
The racks are designed to permit expansion by eight more processing modules and an additional power supply module. Hardware: The processing power in the ICP is provided by Motorola G4 PowerPC microprocessors, which have a 32-bit architecture. The data processors can perform 40.8 Giga Operations Per Second (GOPS), the signal processors 75.6 Giga FLoating-point Operations Per Second (GFLOPS), and the image processor modules can perform 225.6 Giga Multiply/ACcumulate per Second (GMACS). The two switch modules interconnect the modules within the ICP (and also the CNI, sensors and display management computer) at a bandwidth of 400 Megabits per second (Mbps). The ICP uses Commercial Off-The-Shelf (COTS) components, such as COTS Field Programmable Gate Arrays (FPGAs) in its image processor.
Software: The ICP uses the Integrity Real Time Operating System (RTOS), developed by Green Hills Software Inc., for data processing, display management and the CNI computer. It uses the Multi-Computing Operating System (MCOS), developed by Mercury Computer Systems, for signal processing [Adams 2003].
6.2.2 Active Electronically Scanned Array (AESA) Radar (AN/APG-81)
The AESA radar is based on the principle of beam-steering through phase shift, illustrated in Fig. 6.9. Consider two emitters E1 and E2, separated by a distance d, that emit electromagnetic waves of the same frequency. Assume that the waves emitted at E1 lag the waves emitted at E2 in phase by an amount Δϕ. Consider an arbitrary direction θ, as shown in Fig. 6.9. In traveling from E1 to the point A, a wave-front travels a distance d cos(θ), or equivalently, its phase increases by an amount
Fig. 6.9 Beam steering through phase shift (two emitters E1 and E2 separated by a distance d)
α(θ) = 2πd cos(θ)/λ

where λ is the wavelength of the emitted wave. Let θ* be the special angle at which α(θ*) = Δϕ. Then, the wave-front that reaches A from E1 along the direction θ* would have increased in phase by an amount exactly equal to the phase deficit Δϕ. Therefore, at the special angle θ* the waves from E1 and E2 would be in phase when they reach a distant observer. Being in phase, the waves reaching a distant observer at angle θ* would interfere constructively and would give rise to a beam of high intensity along the direction θ*. At angles other than θ* the intensity of the net radiation would be significantly less at large distances. The result is that a distant observer at the angle

θ* = cos⁻¹(λΔϕ/(2πd))
would see a strong beam while distant observers at other angles would see significantly weaker electromagnetic radiation.
Fig. 6.10 Planar array of T/R modules in AESA radar (vertical and azimuthal axes)
Since λ and d are constant, by varying the relative phase Δϕ between the two emitters E1 and E2, one can vary θ*, that is, steer the beam along different directions. The foregoing argument is the principle underlying beam-steering through phase shift. The AESA radar used in F-35 has a planar array of transmitter/receiver (T/R) modules, as shown in Fig. 6.10. The planar array is mounted on the nose of the aircraft. Each small shaded square in Fig. 6.10 represents a T/R module that can both emit and receive electromagnetic radiation. The circuit for a T/R module is illustrated in Fig. 6.11. In order to direct the electromagnetic beam along an azimuthal (vertical) angle θ*, the phase difference between transmitters in adjacent columns (rows) is set to

Δϕ = 2πd cos(θ*)/λ
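As a quick numerical check of the relation above, the following sketch computes θ* for a two-emitter pair and verifies that the summed far-field phasors peak there. The wavelength, spacing and phase lag are illustrative values for the sketch, not actual AN/APG-81 parameters.

```python
import cmath
import math

# Illustrative two-emitter beam-steering check; lam, d and dphi are
# made-up values, not actual AN/APG-81 parameters.
lam = 0.03          # wavelength (m), X-band scale
d = lam / 2         # emitter spacing
dphi = math.pi / 3  # phase lag of E1 behind E2

# Steering angle predicted by theta* = arccos(lam * dphi / (2*pi*d))
theta_star = math.acos(lam * dphi / (2 * math.pi * d))

def intensity(theta):
    """Far-field intensity of the E1 + E2 pair along direction theta."""
    # Extra phase E1's wave-front picks up traveling d*cos(theta) to A
    alpha = 2 * math.pi * d * math.cos(theta) / lam
    # E1 starts dphi behind E2; the two unit phasors add at the observer
    return abs(cmath.exp(1j * (alpha - dphi)) + 1) ** 2

# At theta* the phasors align: intensity reaches the maximum |1 + 1|^2 = 4
assert abs(intensity(theta_star) - 4.0) < 1e-9
# Away from theta* the interference is only partially constructive
assert intensity(theta_star + 0.5) < 4.0
```

Varying dphi in the sketch moves theta_star, which is exactly the electronic steering the planar array performs with its per-module phase shifters.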
A T/R module can also sense incident electromagnetic radiation, as well as the direction from which the beam is coming, using the converse of the above argument. The transmitter and receiver capabilities are bundled into a single T/R module as shown in Fig. 6.11 [Wang 2014]. In the transmit mode the switches in the circuit are in the Tx position. The waveform generated by an exciter E is routed through a phase shifter—which shifts the phase by an amount that depends on the location of the T/R module in the planar array and the desired direction of the beam—then through a high power amplifier (HPA) to the radiating antenna. The duplexer D serves to switch the circuit between Tx and Rx modes.
Fig. 6.11 T/R module of AESA radar (exciter E, phase shifter, high power amplifier (HPA), duplexer D, radiator/receiver, protection circuit, low noise amplifier (LNA) and receiver R)
When operating in the receive mode the electrical signal from the incident waves is routed first through a protection circuit, which protects downstream circuitry from spikes that could arise from the radar jamming activities of an adversary. The signal is then routed through a low noise amplifier (LNA) to a phase shifter. By tuning the phase shifters in the individual T/R modules of the array to maximize the strength of the overall signal received at the array, one can deduce the direction of the incoming beam. The AN/APG-81 AESA radar deployed on F-35 is made by Northrop Grumman [NG1 2017]. It has both air-to-air and air-to-ground surveillance capabilities. Specifically, its capabilities include the following.
• It can detect and track multiple moving targets, on the ground and in the air, simultaneously [Deagel 2006].
• When operated in the air combat mode it allows the pilot to start the radar scan in the direction along which the pilot is looking. For example, if the pilot is looking vertically down, the radar begins scanning and tracking in the vertically down direction. This enables the radar to respond to areas of heightened current interest to the pilot.
• It can image the ground with high resolution, using its Synthetic Aperture Radar (SAR) capabilities.
• It allows the pilot to select the radar functionality from a menu of options on the aircraft’s MultiFunction Display System (MFDS). For example, a pilot could instruct the radar to track airborne/ground-based/marine targets, identify airborne/ground-based/marine targets or perform ground mapping using the SAR functionality [Jensen 2005].
The AESA radar has the sensitivity to detect targets that have a radar cross section of 1 square meter from a distance of nearly 150 kilometers. The AN/APG-81 has around 1000 T/R modules [Deagel 2006]. Built with solid state technology, it has no moving parts and is, hence, very reliable. It can operate at multiple frequencies and has beam-shaping capabilities that can be adapted for operations such as terrain mapping/following. The beam shaping and steering are controlled by a computer.
6.2.3 Distributed Aperture System (AN/AAQ-37)
The Distributed Aperture System (DAS) comprises six electro-optical (EO) sensors installed around the aircraft. The sensors continuously image the surroundings, and the images obtained by the sensors are fused in real time to construct a video sphere of the region around the aircraft, both during day and night. The video sphere serves several functions. The DAS imagery enables missile detection, tracking and targeting. More importantly, it can also detect missile launch pads in the vicinity. The data acquired by the EO sensors is shared autonomously, that is, without the pilot’s involvement, with other friendly F-35 aircraft in the combat zone to enhance the overall visibility [NG2 2017]. The video sphere projected onto the display of the pilot’s Helmet Mounted Display System (see below) provides a pilot the capability to see in every direction, without being obstructed by the aircraft’s body [Adams 2003]. In a sense the DAS makes the aircraft function as a transparent glass object. Further, operating in the IR range, the DAS is able to present a pilot high resolution imagery during night as well. In addition, the DAS sensors enable tracking of friendly aircraft during combat.
6.2.4 Electro-Optical Targeting System (AN/AAQ-40)
The Electro-Optical Targeting System (EOTS) enables high-precision air-to-air as well as air-to-surface targeting. The EOTS hardware comprises a Forward-Looking InfraRed (FLIR) tracker that provides air-to-surface or air-to-air tracking capability. It also comprises an InfraRed Search-and-Track (IRST) system, which can simultaneously search for and track several airborne threats as well as friendly aircraft. The hardware is connected to the ICP through a high-speed fiber-optic link. The hardware is recessed into the fuselage, thereby presenting a low radar cross section and low drag. The cameras are provided visibility through a sapphire window. Working together, the EOTS hardware and software provide the pilot high-resolution images of targets, and perform long-range automatic target recognition, identification and tracking. The EOTS also has the capability to accurately determine the geo-coordinates of a target for high precision strikes [LM1 2016].
6.2.5 Communication, Navigation and Identification System
The Communication, Navigation and Identification (CNI) system handles intra-flight, inter-flight and air-to-ground communications, navigation and friend-or-foe identification. The CNI system in F-35 provides the following functionalities [Adams 2003].
• VHF/UHF voice transceiver
• Instrument Landing System (ILS)
• Microwave Landing System (MLS)
• Tactical Air Navigation System (TACAN)
• Tactical Data Links (TDLs): Link 16, MADL
• Satellite communication equipment
• Identify Friend/Foe (IFF) transponder
• IFF Interrogator
• 3-D Audio
• Automatic Dependent Surveillance—Broadcast (ADS-B)
• Air Force Applications Program Development (AFAPD)
The radio frequency bands UHF and VHF are described in § 4.7.1. An example of Satellite Communications—the SAT-2100—is discussed in § 5.2.1.3. ILS is discussed in § 5.2.12. ADS-B is described in § 4.7.6. TACAN is described in § 1.2.3.9. We discuss IFF, TDL, Link 16 and MADL below.
6.2.5.1 Identify Friend or Foe (IFF)
Identify Friend or Foe (IFF) is a communication system that makes it possible to determine whether an airborne aircraft or a land-based vehicle is a friend. It comprises an interrogator that transmits an electromagnetic signal in the radio frequency band. Upon receiving the signal, the IFF transponder on a friendly aircraft/vehicle responds with a reply signal, which identifies the aircraft as a friend to the IFF interrogator. From the response transmitted by the transponder, the IFF interrogator can also determine the bearing (direction) of the friendly aircraft/vehicle. A non-response from the target aircraft/vehicle, on the other hand, suggests that it is either an adversary or that its transponder is malfunctioning [IFF 2017].
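The interrogate/respond idea can be sketched as a challenge-response exchange. The sketch below is purely illustrative: the waveforms and cryptography of real IFF modes are classified, and the shared key and HMAC scheme here are stand-ins, not the actual mechanism.

```python
import hashlib
import hmac
import os

# Illustrative IFF-style challenge-response; the key and HMAC scheme are
# stand-ins for the classified cryptography of real IFF modes.
SHARED_KEY = b"friendly-forces-key"  # assumed pre-loaded into friendly transponders

def interrogate():
    """Interrogator transmits a fresh random challenge."""
    return os.urandom(16)

def transponder_reply(challenge, key):
    """A friendly transponder answers with a keyed digest of the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def classify(challenge, reply):
    """Interrogator's verdict on the reply (None models a non-response)."""
    if reply is None:
        return "adversary or malfunctioning transponder"
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return "friend" if hmac.compare_digest(reply, expected) else "foe"

challenge = interrogate()
assert classify(challenge, transponder_reply(challenge, SHARED_KEY)) == "friend"
assert classify(challenge, None) == "adversary or malfunctioning transponder"
```

Note that, as in the text, a silent target is ambiguous: the sketch cannot distinguish an adversary from a friend whose transponder has failed.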
6.2.5.2 Tactical Data Link (TDL): Link 16
Tactical Data Link (TDL) is a secure link that enables an aircraft to exchange tactical data—both text and image data—in real time with other aircraft and with ground control. Each TDL is specified by a military standard that specifies physical details such as the frequency band of radio waves used for the link as well as the format and semantics of the messages. An example of a TDL is Link 16, which is described in greater detail below. The endpoints of the Link 16 TDL are either Joint Tactical Information Distribution System (JTIDS) terminals or Multifunctional Information Distribution System (MIDS) terminals [Northrop 2014]. A JTIDS/MIDS terminal has the necessary hardware to transmit and receive the waveforms that carry the Link 16 messages. The software in the terminals is designed to encode the Link 16 messages for transmission and also decode the incoming Link 16 messages.
The Link 16 network uses the Time Division Multiple Access (TDMA) protocol. Specifically, each second is divided into 128 equal intervals. The 128 time slots in each second are allocated to different Network Participation Groups (NPGs), which are groups of JTIDS/MIDS terminals communicating over Link 16; see Fig. 6.12. Each NPG uses one or more of the 128 time slots that are allocated to it every second for communication. Link 16 enables the communicating terminals to use frequency hopping—a technique in which the sender and receiver switch the frequency used for communication using a pseudorandom number generator known to both sender and receiver. Link 16 also incorporates the necessary safeguards to ensure jam resistance of the network. The contents of the Link 16 messages that need to be transmitted are provided by the host computer onboard an aircraft/vehicle.
Fig. 6.12 Architecture of Link 16 Tactical Data Link (JTIDS/MIDS terminals, each attached to a host computer, communicating over the Link 16 network)
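The two mechanisms just described—per-second slot allocation to NPGs and frequency hopping from a shared pseudorandom sequence—can be sketched as follows. The 128 slots per second match the text; the NPG names, the slot split and the channel list are illustrative assumptions, not the actual Link 16 network design.

```python
import random

SLOTS_PER_SECOND = 128  # each second is divided into 128 equal time slots

# Illustrative allocation of the 128 slots among Network Participation Groups
NPG_ALLOCATION = {
    "surveillance": range(0, 64),
    "air-control":  range(64, 96),
    "voice":        range(96, 128),
}

def npg_for_slot(slot):
    """Which NPG owns a given time slot within the current second."""
    for npg, slots in NPG_ALLOCATION.items():
        if slot in slots:
            return npg
    raise ValueError("slot outside 0..127")

# Illustrative channel set (not the actual Link 16 frequency plan)
CHANNELS_MHZ = [960 + 3 * k for k in range(51)]

def hop_sequence(shared_key, n_slots):
    """Sender and receiver seed identical PRNGs with a shared key, so each
    derives the same frequency for every slot without ever transmitting it."""
    rng = random.Random(shared_key)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_slots)]

tx = hop_sequence("shared-secret", SLOTS_PER_SECOND)
rx = hop_sequence("shared-secret", SLOTS_PER_SECOND)
assert tx == rx  # both terminals agree on the entire hop pattern
```

A jammer that does not know the shared key sees only an apparently random sequence of frequencies, which is the intuition behind the jam resistance mentioned above.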
6.2.5.3 Multifunction Advanced Data Link (MADL)
Multifunction Advanced Data Link (MADL), deployed on F-35, was designed to provide a secure data link among F-35 aircraft during active combat, to help the aircraft synchronize their collective situational awareness. (The corresponding data link for the F-22 Raptor aircraft is the Inter/Intra Flight Data Link (IFDL).) It is a directional link and uses high-data-rate, narrow-beam wireless communication. Part of the AN/ASQ-242 Communication, Navigation and Identification (CNI) system developed by Northrop Grumman for F-35, MADL complements the Link 16 communication capability built into the F-35’s CNI system. Unlike Link 16, however, MADL presents a low observational cross-section to the enemy [Northrop 2014].
6.2.6 Pilot Vehicle Interface
A pilot’s interface with the aircraft comprises the displays that present the relevant real-time information, the touch screen, and a control stick called the HOTAS that the pilot can use to communicate commands to the various sensors and actuators of the aircraft.
6.2.6.1 Displays
The F-35 cockpit has an 8” × 20” panoramic Multi-Function Display System (MFDS), divided into two 8” × 10” displays, each with 1280 × 1024 pixels. The two 8” × 10” displays can function independently, providing redundancy in case one of them malfunctions. Each of the 8” × 10” displays has its own integrated display processor, which performs the necessary computations. The screens are Active Matrix Liquid Crystal Displays (AMLCD) onto which images are projected [Adams 2003]. The MFDS is supplemented with the Helmet-Mounted Display System (HMDS). The HMDS displays critical information about the flight and combat in an easily discernible manner on an AMLCD display mounted on the helmet [Adams 2003]. Examples of information displayed by the HMDS include:
• Airspeed
• Heading
• Altitude
• Target information
• Enemy missile and its launch site
• Warnings
For example, the six sensors of the DAS mounted at strategic locations on the aircraft provide a complete spherical view around the aircraft. As the pilot turns his/her head, the images from the camera pointing along the direction in which he/she is looking can be streamed onto the HMDS screen, allowing the pilot to ‘see through’ the aircraft’s body in any direction. Additionally, the Night Vision Goggles (NVG) and an integrated camera can provide a redundant view of the battle environment. The HMDS comprises electro-optics to display both symbols and images on an AMLCD illuminated by a high-intensity back light. The HMDS display provides a binocular image with a 50° horizontal span and a 30° vertical span. In addition, the HMDS has head position and orientation tracking software to ensure that the image displayed on the HMDS screen corresponds to the orientation of the pilot’s head. The helmet weighs about 4.2 pounds [Adams 2003].
6.2.6.2 Hand On Throttle And Stick (HOTAS)
The HOTAS stick, shown in Fig. 6.13, consolidates the critical and frequently used controls. By putting the critical controls at a pilot’s fingertips, HOTAS eliminates the need to search for controls in a rapidly changing combat situation. The HOTAS technology was pioneered by Ferranti, UK. HOTAS is used in both military (F-16, F-18, F-22 and F-35) and civilian (Airbus A-320, A-330, A-380) aircraft.
Fig. 6.13 HOTAS stick (controls include: communication transmit, air-to-ground weapon select, guided missile trigger, air-to-air weapon select, trim, chaff/flare dispenser, event marker, raid mode, auto throttle, sensor control, speed brake, nose-wheel steer, autopilot and g-limit override)
The functions that a pilot can execute using the HOTAS stick include the following:
• Trimming the control surfaces
• Air-to-air weapons release
• Guided missile launch
• Target designation
• Throttle control
• Engaging autopilot
• Radar elevation
• Raid mode selection
• Dispensing chaff/flare
• Control of sensors
• Air-to-ground weapons release
• Nose-wheel steering
• Speed brake engagement
• Reconnaissance event marking
• g-limit override
• External lights
• Communication control
The MFDS, HMDS and the HOTAS together constitute the Pilot Vehicle Interface (PVI). The PVI allows a pilot to control the aerodynamics of the aircraft, its fuel system, sensors, weapons, combat operations and communications. The commands given by the pilot through the HOTAS/MFDS are transmitted to the appropriate sensors and actuators by the Vehicle Management System (VMS). The data from the sensors is routed to the MFDS and the HMDS through the VMS, as shown in Fig. 6.14.
Fig. 6.14 Pilot vehicle interface (the HOTAS, MFDS and HMD exchange commands and data with the sensors and actuators through the Vehicle Management System)
6.2.7 Full Authority Digital Engine Control (FADEC)
The Full Authority Digital Engine Control (FADEC) system assumes full monitoring and control of the aircraft engine, without assistance from the pilot, once the engine is started. The F-35 aircraft has a single F-135 engine made by Pratt & Whitney. The F-35A (CTOL) and the F-35C (CV) aircraft have two FADEC systems onboard for redundancy. The F-35B (STOVL) aircraft has four FADEC systems—two main FADEC systems as in the F-35A and F-35C, and two additional FADEC systems to control the lift fan (Rolls Royce) during hovering and/or vertical landing. The hardware for the FADEC systems was built by BAE Systems. The software for the main FADEC systems was developed by Pratt & Whitney [Kjelgaard 2007].
Fig. 6.15 F-35B during hovering or vertical landing (labels: lift fan door, lift fan, roll post ducts, rear exhaust nozzle during flight and during hover) (Courtesy: en.wikipedia.org/wiki/LOckheed_Martin_F-35_Lightning_II#/media/File:F-35B_Joint_Strike_ Fighter_(thrust_vectoring_nozzle_and_lift_fan).PNG)
The main FADEC system monitors and regulates engine parameters such as the engine fuel flow rate and temperature. For example, when the aircraft is hovering the FADEC produces the necessary thrust as dry thrust—that is, without using the afterburner—by raising the engine’s temperature.
F-35B, which has the capability to hover and also land vertically, has two additional FADECs to control the lift fan. The thrust needed for hovering or vertical landing is produced using two maneuvers:
1. directing the engine’s exhaust downward by swiveling the rear three-bearing swivel duct; and
2. switching on the lift fan located behind the cockpit.
The rear exhaust nozzle, which provides the thrust needed to accelerate the aircraft to supersonic speeds during flight, can be directed downward using an assembly called the “three-bearing swivel”. Directing the rear exhaust nozzle downward, as shown in Fig. 6.15, provides nearly half of the upward thrust needed for hovering or vertical landing. When turned on, the 2-stage lift fan, located behind the cockpit, provides approximately the remaining half of the upward thrust needed. A critical aspect during hovering or vertical landing is pitch and roll stability. Pitch stability is achieved by adjusting the upward thrust provided by the rear exhaust nozzle to balance the upward thrust provided by the lift fan at the front of the aircraft. Roll stability is provided by adjusting the upward thrusts provided by the two roll post ducts on the sides, shown in Fig. 6.15. Part of the air drawn in by the F-135 turbofan engine is not used by the engine for burning fuel, but is instead redirected to the roll post ducts, as bypass air, to provide the corrective upward thrust for achieving roll stability. The two additional FADECs in F-35B are responsible for independently adjusting the upward thrusts of the two roll post ducts—in real time—to achieve roll stability during hovering and/or vertical landing. The lift fan, roll post ducts and the swivel duct in F-35B are manufactured by Rolls Royce. The FADEC software for the F-135 Pratt & Whitney engine is provided by Pratt & Whitney.
The FADEC for the F-135 engine has its own dedicated high speed data bus that connects the computer running the software to the sensors and actuators in the propulsion system. The FADEC data bus is also linked to the main data bus designed by Lockheed Martin.
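The roll-stability task of the two additional FADECs can be caricatured as a differential-thrust loop: trim the two roll posts in opposite directions so that the total lift is unchanged while a net rolling moment rights the aircraft. Everything below is an illustrative sketch; the proportional law, the gain and the per-duct nominal figure (half of the 14.6 kN roll-post total in Table 6.6) are assumptions, not the actual FADEC control law.

```python
# Illustrative sketch of differential roll-post thrust trimming; the gain
# and the simple proportional law are assumptions, not the real control law.
NOMINAL_KN = 7.3   # per roll post, half of the 14.6 kN total in Table 6.6
K_P = 2.0          # illustrative proportional gain, kN per radian of roll

def roll_post_thrusts(roll_rad):
    """roll_rad > 0 means left wing down; push harder on the dropping side."""
    delta = K_P * roll_rad
    left = NOMINAL_KN + delta    # extra lift under the dropping wing
    right = NOMINAL_KN - delta   # less lift under the rising wing
    return left, right

left, right = roll_post_thrusts(0.05)   # small left-wing-down disturbance
assert left > right                      # net moment rolls the aircraft level
# Differential trimming leaves the total roll-post lift unchanged
assert abs((left + right) - 2 * NOMINAL_KN) < 1e-9
```

Keeping the sum constant is the point of the differential scheme: the roll correction does not disturb the vertical force balance that the main engine, lift fan and roll posts maintain during hover.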
6.2.8 Electronic Warfare System (AN/ASQ-239)
The Electronic Warfare System (EWS) of an F-35 draws upon the resources of both the parent F-35 and those of other F-35s engaged in combat to enhance the pilot’s capabilities in electronic surveillance, electronic countermeasures and electronic attack measures. The EWS deployed on F-35, called Barracuda AN/ASQ-239, is designed and constructed by BAE Systems. In keeping with F-35’s IMA paradigm, the EWS uses shared resources, such as the AESA radar. In addition, the EWS has ten dedicated apertures of its own—six on the wings’ leading edges, two on the wings’ trailing edges and two on the trailing edge of the horizontal stabilizer [Adams 2003, F35 2016, ANASQ239 2017]. In addition to sharing the resources of the other subsystems, the EWS has its own hardware, which weighs about 200 pounds. We summarize the main capabilities provided by the EWS below.
Enhanced Electronic Surveillance: The EWS synthesizes the Signal Intelligence (SIGINT) about ground-based threats in different directions, collected by multiple F-35 aircraft, into a composite threat landscape. Examples of such threats include ground-based radars, missile launchers and anti-aircraft artillery. Based on the images of the threats, obtained through the AESA, DAS and/or EOTS, the EWS interrogates the database of known threat sources to obtain details about the threat, if they are available. If the details of the threat are not available, the EWS archives it for future classification. Thus, the EWS learns with experience. Over the Multifunction Advanced Data Link (MADL) that networks the airborne fleet of friendly F-35s in combat, the EWS also sends/receives data, status information and/or warnings to/from the EWS running on other F-35s, and to/from control stations on the ground or at sea. Such a capability enables the EWS on an F-35 to warn other F-35s not only about ground-based threats such as radars, but also about airborne threats such as incoming missiles or enemy aircraft. Enhanced Electronic Counter Measures: The EWS has the built-in capability to deploy multispectral countermeasures, including a towed radio frequency decoy and an infrared MJU-68/B flare. In addition, the EWS can generate false target information to confuse the enemy’s air defense systems. The EWS also has the capability to jam radars by transmitting high energy radiation to overwhelm the enemy radar systems. The EWS can jam not only the radar systems, but also the enemy’s digital command and control systems by injecting them with harmful digital inputs. Electronic Attack Measures: Using the composite picture of the battlefield, obtained by synthesizing data from the entire fleet of F-35s, the EWS facilitates the firing of laser-guided as well as GPS-guided missiles. The EWS’s role in jamming radars and command and control systems can also be viewed as an attack measure.
6.2.9 Integrated Sensor System
The number of sensors deployed on F-35 and the amount of data streaming from the sensors are so large that they would overwhelm a pilot, were it not for the Fusion Engine (FE). The Fusion Engine, illustrated in Fig. 6.16, fuses the multimodal data streaming not only from the aircraft’s own sensors but also from other F-35 aircraft operating in the battlefield, to present battle-critical information to an F-35 pilot in a user-friendly format. Fig. 6.16 schematically illustrates the FE and the flow of data from the sensors to the displays (MFDS and HMDS). The FE provides several features, including the following [SLD 2016].
• The FE integrates the data from several sensor subsystems to present a composite picture to the pilot. The composite picture provides an unprecedented situational awareness.
Fig. 6.16 Integrated sensor fusion engine (sensor data from the DAS, EOTS, EWS, AESA and CNI subsystems flows into the Fusion Engine, which feeds the MFDS and HMDS)
For example, the AESA radar, which is mounted on the nose of the aircraft, can direct the RF beam anywhere along the hemisphere ahead of the aircraft. If the AESA radar detects a target of interest, either on the ground or in the air, then the DAS, which has complete spherical visibility around the aircraft, is alerted about the target. If the target moves out of the range of the AESA radar, and the AESA radar is no longer able to track the target, the FE tasks the DAS to fill in the missing data from the AESA radar feed and continue to track the target. The tracking of the target is presented to the pilot in a seamless manner even as the task is handed off from the AESA radar to the DAS by the FE. Another remarkable feature of the FE is the so-called Ground Target Launch capability. The FE extrapolates the DAS data about a missile’s track back to the launch site to provide F-35 pilots an awareness of the ground-based missile launch sites. Such extrapolation is shared with all of the other F-35s in combat to provide the pilots instant awareness of detected launch sites. The FE also provides a pilot the option of displaying only the data from a selected sensor subsystem—say the EOTS of a different F-35—on the MFDS. The data from other sensor subsystems is then blocked to provide selective visibility to the pilot.
• Secondly, the FE shares the information it assembles not only with the pilot of the F-35 aircraft in which the FE is running, but also with the FEs running on all the other F-35 aircraft participating in the battle, via the Multifunction Advanced Data Link (MADL), as shown in Fig. 6.17. The result of such inter-aircraft data fusion is that all of the F-35 pilots participating in the battle have the same situational awareness. For example, if a sensor subsystem—say the AESA radar—on one of the F-35 aircraft is either damaged or malfunctions during combat, the data gathered by the AESA radars of the other F-35s in combat—say data about the locations of missile launch sites—is shared in real time over wireless links with the FE of the aircraft with the damaged AESA radar. The pilot of the F-35 without a functioning AESA radar would then see the same picture of the battlefield as the other F-35s. The real time networking of the FEs on different aircraft thus enables an unprecedented level of cooperation among the F-35s. The data sharing among the FEs happens autonomously, without the participation of the pilot. The differential advantage a pilot has over an enemy combatant is measured by the speed with which the pilot can execute the so-called OODA (Observe → Orient → Decide → Act) loop. A pilot who can execute the OODA loop faster than the adversary has a tactical advantage. The situational awareness provided to an F-35 pilot by the enormous number of sensors deployed on the F-35s, and the cooperation among the sensors of all the F-35s in battle made possible by the FE, greatly speeds up the OODA loop of F-35 pilots. The sensor fusion capability provided by the FE is largely due to the real time software running in the FE. Thus, the F-35 represents significant progress towards the objective of building a so-called software-defined airplane.
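The AESA-to-DAS handoff described earlier in this section can be sketched as a simple tracker-selection rule: while a target stays within the nose-mounted radar's forward field of regard the FE uses the radar track, and once it leaves, the FE tasks the all-around DAS. The 90° half-angle, the function name and the rule itself are illustrative assumptions for the sketch, not the FE's actual tasking logic.

```python
import math

# Illustrative tracker-selection rule for the FE's AESA-to-DAS handoff;
# the 90-degree forward field of regard is an assumption for the sketch.
AESA_HALF_ANGLE = math.radians(90)  # radar covers the forward hemisphere

def select_tracker(bearing_off_nose):
    """Pick the sensor that can currently see a target at this bearing
    (radians off the nose). The DAS has full spherical coverage, so it
    serves as the fallback whenever the radar loses the geometry."""
    if abs(bearing_off_nose) <= AESA_HALF_ANGLE:
        return "AESA"
    return "DAS"

# Target ahead of the aircraft: the radar keeps the track
assert select_tracker(math.radians(30)) == "AESA"
# Target drifts behind the wing line: the FE hands the track to the DAS
assert select_tracker(math.radians(150)) == "DAS"
```

The seamlessness described in the text amounts to the FE applying a rule of this kind continuously, so the pilot sees one uninterrupted track rather than two sensor feeds.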
6.2.10 Vehicle Management System (VMS)
The Vehicle Management Computer (VMC) of the F-35, developed by BAE Systems, provides the digital flight control capability as well as the capability to control F-35 systems such as the fuel system, the electrical system and the hydraulic system. The VMC is based on an Open-System Architecture (OSA). The OSA, which is based on standard interfaces, enables the use of COTS components in system design, greatly lowering the cost of the system and easing the interchange of components from different manufacturers. Each VMC is housed in a unit about the size of a shoebox. An F-35 houses three VMCs for triple redundancy. The results of the computations performed by the VMCs are compared through a voting system to identify faulty operation of a VMC. The triple redundancy enables the VMS to remain operational even if one or two of the three VMCs malfunction [Adams 2003, VMS 2003].
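The voting idea behind the triple redundancy can be sketched as a 2-of-3 majority comparison: any VMC whose output disagrees with the other two is flagged as faulty and its result is masked. The tolerance and interface below are illustrative assumptions, not the actual BAE voting logic.

```python
# Illustrative 2-of-3 voter for three redundant VMC outputs; the tolerance
# and interface are assumptions, not the actual BAE voting implementation.
def vote(results, tol=1e-6):
    """Return (agreed_value, faulty_indices) for three redundant outputs."""
    a, b, c = results
    if abs(a - b) <= tol:
        agreed = (a + b) / 2
        faulty = [] if abs(c - agreed) <= tol else [2]
    elif abs(a - c) <= tol:
        agreed, faulty = (a + c) / 2, [1]
    elif abs(b - c) <= tol:
        agreed, faulty = (b + c) / 2, [0]
    else:
        # No two VMCs agree: the voter cannot mask the fault
        raise RuntimeError("no majority: all three VMCs disagree")
    return agreed, faulty

# All three agree: no fault flagged
assert vote([10.0, 10.0, 10.0]) == (10.0, [])
# One VMC drifts: the voter masks it and flags index 2 as faulty
assert vote([10.0, 10.0, 10.7]) == (10.0, [2])
```

Masking a single dissenter is why the VMS survives one failure outright; surviving a second failure requires detecting and excluding the first faulty VMC so that the remaining pair can still be cross-checked.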
Avionics in Military Aircraft 235
Fig. 6.17 Real time networking of F-35 fighters in combat: the Fusion Engines of the participating aircraft exchange data over the Multifunction Airborne Data Link (MADL)
6.2.11 Autonomic Logistics Information System (ALIS)
Whereas the health of previous fighter aircraft was managed by a preventive maintenance schedule, the health of the F-35 is maintained by an elaborate, worldwide information technology infrastructure called the Autonomic Logistics Information System (ALIS) [ALIS 2017]. ALIS provides a distributed web-based infrastructure that can be accessed by the operations staff, the maintenance crew, the technical crew and customer support, facilitating their work to sustain the health of the aircraft by giving them visibility into data about the health prognostics of the onboard systems and the status of supply chains related to repair and maintenance. ALIS enables efficient communication of action items pertaining to the health and maintenance of an F-35 to the appropriate personnel across the globe, greatly expediting monitoring, maintenance and repair operations. In addition to lowering the maintenance costs of an F-35 over its lifespan, ALIS also helps reduce the aircraft's downtime.
CHAPTER 7
Future of Avionics
As Niels Bohr remarked, “Prediction is very difficult, especially if it’s about the future” [Orrell 2007]. Against the backdrop of Bohr’s remark, we attempt a cautious extrapolation of the state of the art. Unforeseen disruptive technologies are, by definition, outside the scope of our discussion.
7.1 FUTURE OF AVIATION
Before delving into the future of avionics, we survey the expected advances in aviation technology itself, many of which would require attendant advances in avionics as well.
7.1.1 Self-healing Materials
Composite materials are being used increasingly as a substitute for metal in the construction of aircraft, especially in the military sector. About 42% of the F-35 aircraft (by weight) is made of composite material [Butler 2010]. The Boeing 787 is touted to be about 20% lighter owing to the use of carbon fiber reinforced plastic (CFRP) in the construction of its fuselage and wings [Japan 2016]. An exciting prospect is a new breed of composite materials called self-healing materials. Age-related fractures or damage to critical structures such as wings, fuselage, rudder and elevators can lead to catastrophic consequences. The onset of cracks in structures is hard to detect and necessitates significant human effort for periodic inspection. Self-healing materials, which seek to mimic the healing ability of biological systems, could provide an elegant solution to the problem of preserving an aircraft’s structural integrity. Several approaches are being explored to design self-healing materials. For example, small capsules containing sealing liquids could be embedded in a material; when the material cracks, the capsules bleed and seal the crack [Ghosh 2009]. Even if the liquid in the microcapsules does not heal the crack, maintenance is facilitated if it draws attention to the damaged area. Whereas embedded microcapsules that rupture and heal are an example of autonomous healing, approaches for non-autonomous healing are also being explored. For example, since the local electrical resistance around cracks is higher, an electrical current passed through the material would differentially heat the regions near cracks. The excess heat could activate a healing agent to fix the crack [Zwaag 2007, Ghosh 2009].
7.1.2 Alternative Energy Sources
Currently, aviation accounts for about 2% of all carbon dioxide emissions, which stem from the fossil fuels that aircraft burn. Over the last forty years the aviation industry has reduced its emissions by about 75%, owing to concerted efforts and advances in technology [Airbus2 2016]. The concerns about the environmental impact of emissions from fossil fuels are compounded by the uncertainty in the geopolitics that affects their prices, and by the fact that fossil fuels are exhaustible. As a result, the aviation industry, in keeping with the other transportation industries, is looking to alternative fuels as the energy sources of the future.
7.1.2.1 Biomass fuel
One of the energy sources showing considerable promise is the so-called second generation sustainable aviation fuel derived from biomass [Jansen 2012]. Currently, aircraft engines are powered by kerosene-like fuels, which are preferred for their ability to sustain stable temperatures. Fuels derived from biomass appear to offer the same advantages, and may be usable without significant modifications to the design of current propulsion systems. Examples of biomass include algae, which grow in salt water, and woodchip waste. The distinguishing advantage of biomass is that it does not compete with agricultural products for land and water. Further, the ubiquitous availability of biomass reduces the fuel consumption involved in transporting aviation fuel from production sites to airports. The aircraft engines of the future are expected to obtain their fuels from local biomass processing plants, greatly reducing fuel transportation overheads.
7.1.2.2 Solar Energy
Photovoltaic cells (solar cells), which convert the energy in the sun’s electromagnetic radiation into electrical energy, are already being deployed both on land and on spacecraft. Their widespread use is currently hindered by their relatively low efficiency and high costs. However, rapid strides are being made in boosting their efficiency. An example is the recent development of solar cells that are able to harvest energy with 46% efficiency [France-Germany 2014]. Even if the efficiency of solar cells increases and their prices drop substantially, absent fuel cells that can store prodigious amounts of harvested energy, solar energy will be insufficient to power the takeoff and flight of regular commercial aircraft. At best, solar energy can be used to run low-power devices such as those used in cabins. Solar energy, however, is finding other innovative applications in aviation. An example is the solar-powered light unmanned aircraft, called Aquila, that Facebook is attempting to build to provide wireless internet connectivity to the regions of the world that currently do not have access to the internet [Zuckerberg 2016]. Aquila is expected to fly at an altitude of about 60,000 feet and remain airborne for up to 90 days at a time. Each Aquila aircraft
is expected to provide internet connectivity to an area with a diameter of about 60 miles. Further, the fleet of Aquilas is expected to form an airborne network using lasers for inter-aircraft communications. While the aircraft is expected to have a wingspan larger than that of a Boeing 737, it is expected to weigh around 1000 pounds or less, and to have power requirements of 5000 W or less. Given the large wings, on which photovoltaic cells can be deployed, and the rather low power requirements, the power derived from solar radiation is expected to be adequate to run Aquila and similar light unmanned aircraft.
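Whether solar power can plausibly meet a 5000 W budget can be sanity-checked with a rough calculation. The wing dimensions, irradiance and cell efficiency below are our illustrative assumptions, not Aquila's published specifications.

```python
wingspan_m = 42.0         # assumed: somewhat wider than a Boeing 737 (~36 m)
mean_chord_m = 1.0        # assumed slender high-aspect-ratio wing
irradiance_w_m2 = 1000.0  # approximate solar flux above the clouds
cell_efficiency = 0.20    # plausible for lightweight photovoltaic panels

panel_area_m2 = wingspan_m * mean_chord_m
harvested_w = panel_area_m2 * irradiance_w_m2 * cell_efficiency
print(f"Harvested: {harvested_w:.0f} W vs. required: 5000 W")
```

Under these assumptions the wing harvests 8400 W in full sun, comfortably above the stated 5000 W requirement, with margin left over for charging batteries for night flight.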
7.1.2.3 Hydrogen Fuel Cells
Hydrogen fuel cells convert the chemical energy stored in hydrogen into electrical energy, in the process combining hydrogen and oxygen to form water. It is a form of combustion at low temperatures. The byproduct of this cold combustion is water, which has no carbon footprint and can be used for in-cabin services during commercial flights. In tests performed on an Airbus A320, hydrogen fuel cells produced 25 kW of electrical power, which was sufficient to control the actuators of the ailerons, spoilers, elevator and back-up hydraulic circuit [Airbus2 2016]. Airbus expects to generate about 100 kW of electrical power and possibly even power the Auxiliary Power Unit (APU) using fuel cells. The fuel cell itself is expected to be stationed in the aircraft’s cargo hold, while its fuel (liquid hydrogen), heat exchangers and fans are positioned in the tail section of the aircraft. Once again, fuel cells are not expected to power the aircraft’s engines during takeoff and flight. However, they have no carbon footprint and produce a useful byproduct; they remain a promising source of power in civil aviation.
7.1.2.4 Harvested Energy
The Boeing 787-9, one of the modern commercial aircraft, has a landing weight of about 200,000 kg [Boeing 3 2016]. Its landing speed is about 150 knots, or about 275 km/hr (77 m/s). When the aircraft decelerates from its landing speed to rest it loses about 595 megajoules of energy, which is sufficient to satisfy the electricity needs of an average home for about 5.5 days; the electricity consumption of an average home in the U.S. is about 901 kWh per month [EIA 2016]. Assuming that the above figures represent average values for commercial aircraft, and that 30 million landings happen per year, the kinetic energy lost during landing every year is sufficient to satisfy the electricity needs of about 450,000 homes all year round. Harvesting the kinetic energy during landing through a regenerative braking scheme would contribute to cost savings. It is expected that regenerative braking will be used in aircraft in the future.
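The figures above can be checked with a short calculation; the 901 kWh figure is taken as a monthly average, and the small discrepancy from the quoted 595 MJ comes from rounding the landing speed to 77 m/s.

```python
mass_kg = 200_000           # Boeing 787-9 landing weight (from the text)
speed_m_s = 77              # ~150 knots landing speed
home_kwh_per_month = 901    # average U.S. home consumption (from the text)
landings_per_year = 30e6    # assumed worldwide annual landings (from the text)

kinetic_j = 0.5 * mass_kg * speed_m_s ** 2        # (1/2) m v^2
kinetic_kwh = kinetic_j / 3.6e6                   # 1 kWh = 3.6 MJ
days = kinetic_kwh / (home_kwh_per_month / 30)    # days of home use per landing

homes_per_year = landings_per_year * kinetic_kwh / (home_kwh_per_month * 12)

print(f"{kinetic_j / 1e6:.0f} MJ = {kinetic_kwh:.0f} kWh "
      f"~= {days:.1f} days of home use")
print(f"~{homes_per_year:,.0f} homes powered year-round")
```

The result, roughly 593 MJ per landing and about 457,000 homes, matches the rounded figures quoted above.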
7.1.3 Spaceplanes
Novel materials are also of interest for the so-called spaceplanes that are being explored. A spaceplane is envisioned to take off as a normal airplane, soar into a low earth orbit, in which it reaches speeds of up to Mach 25, and descend from the low earth orbit to land at a conventional airport. Such a spaceplane would circle the globe in about an hour and a half.
The nose and wingtips of such spaceplanes would have to endure extreme temperatures, especially during re-entry into the earth’s atmosphere, and would need to be constructed from materials that withstand high temperatures. Also, if spaceplanes are to run on air-breathing engines, then a new breed of air-breathing engines needs to be developed to achieve higher speeds and operate at very high altitudes. Conventional air-breathing turbofan engines can propel aircraft up to speeds of Mach 2.2, and ramjet engines operate only above Mach 4 [Trimble 2013].
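The "hour and a half" circumnavigation figure follows directly from the Mach 25 claim. The stratospheric speed of sound used below is an approximation (it varies with temperature and altitude).

```python
earth_circumference_km = 40_075  # equatorial circumference
sound_speed_m_s = 295            # approximate speed of sound at high altitude
mach = 25                        # top speed envisioned for a spaceplane

speed_m_s = mach * sound_speed_m_s
hours = earth_circumference_km * 1000 / speed_m_s / 3600
print(f"Mach {mach} -> {speed_m_s:.0f} m/s; circumnavigation in {hours:.2f} h")
```

The estimate comes out at roughly 1.5 hours, consistent with the claim above; it is also close to the ~90 minute orbital period of spacecraft in low earth orbit, as one would expect at near-orbital speed.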
7.1.4 Flying Cars
Whereas spaceplanes seek to fly at very high altitudes, at the other end of the spectrum efforts are underway to develop flying cars, which fly at extremely low altitudes and effectively serve as flying versions of today’s automobiles. A flying car would function as a regular automobile on the roads. Its foldable wings, when deployed, would also enable it to take off from and land at airports, providing truly multi-modal transport capability, on road and in air. While a commercially viable flying car has not appeared on the market yet, one is expected within a few years [Mack 2015].
7.1.5 Terrestrial Hypersonic Flight
NASA’s unmanned X-43A is one of the fastest aircraft on record, having achieved a speed of Mach 9.6. It was accelerated to near-hypersonic speed by a booster rocket, which was subsequently discarded. The aircraft’s engine, a scramjet, takes over once the vehicle reaches speeds in excess of Mach 4.5. But the X-43 was a single-use vehicle—that is, it was designed to destroy itself after a few minutes of flight. The record for the fastest aircraft is probably held by Lockheed Martin’s HTV-2 (Hypersonic Technology Vehicle 2), which, in tests conducted in 2011, reached speeds between Mach 17 and Mach 22 [Bennett 2016]. Other notable unmanned hypersonic test aircraft include Boeing’s X-51 (Mach 5), Lockheed Martin’s HTV-3X (Mach 6) and SR-72 (Mach 6). Boeing is also developing a reusable unmanned hypersonic aircraft—the XS-1—which will be used to deliver satellites into orbit [Boeing 2015]. Among manned flights the fastest aircraft on record is the North American X-15, which flew at Mach 6.72. The X-15 is noteworthy for having achieved spaceflight as well, having flown to an altitude in excess of 100 km [Bentley 2009]. Given this history, unmanned hypersonic aircraft appear to be within reach in a few years. Attempts are also underway to develop manned hypersonic aircraft [Batchelor 2016], and it is conceivable that they will be built in a few decades. Beyond aircraft stability at hypersonic speeds, the development of hypersonic aircraft also necessitates advances in scramjet engine technology; see [Segal 2009] for a discussion of scramjet (supersonic combustion ramjet) engines. Hypersonic fighter aircraft would be able to outpace hostile missiles, and escape to the edge of the earth’s atmosphere during combat, if needed.
7.2 AVIONICS: THE ROAD AHEAD
The preceding paragraphs focused on the expected advances in aviation, which may, in turn, drive associated advances in avionics. In the following paragraphs we discuss, more directly, the expected advances in avionics. Whereas military aircraft are likely to witness relatively more dramatic changes and advances in avionics systems, the progress in civilian aircraft is likely to be relatively unglamorous, but nonetheless immensely consequential. It is estimated that about 30 million commercial flights occur every year, with the world’s population currently around 7.4 billion people [World 2017]. The world’s population is expected to reach about 10 billion people by 2056 [World 2017]—that is, to increase by more than 30% over the next forty years. Assuming a proportionate increase in air traffic, one would anticipate that the airports and the skies would witness about forty million commercial flights per year around 2056. Given the scale of commercial aviation, even small improvements in efficiency translate to amplified gains. The two primary drivers for the evolution of civilian avionics are expected to be revenue growth and minimization of ecological impact. In the following discussion we examine some of the opportunities and challenges that are likely to drive advances in avionics in the years ahead.
7.2.1 Reduction of Flight Time
The typical flight time of a commercial aircraft—the duration for which the aircraft is airborne—is significantly longer today than it needs to be. Advances in avionics are expected to reduce the inefficiencies in flight time and hence the amount of greenhouse gases that aircraft pump into the atmosphere. The routes that aircraft follow are constrained by two main factors: geopolitical boundaries and weather conditions. As a result of the interplay between these two factors, aircraft actually follow sub-optimal routes. While geopolitical boundaries may continue to impose hard constraints, advanced avionics that can sense changing weather and air traffic conditions and dynamically re-route the aircraft would nudge flight paths closer to optimality. Presently, commercial aircraft are required to maintain a separation of about four nautical miles while flying. If the onboard avionics of nearby aircraft are designed to work cooperatively with each other, then the stipulated inter-aircraft separation can be reduced significantly. Such a capability—for avionics systems to dynamically form and communicate over cooperative ad hoc networks—already exists in military aircraft (e.g., the F-35) and is likely to percolate into civilian aviation in the years ahead. The second source of inefficiency in flight time pertains to Air Traffic Management (ATM). It is estimated that, on average, flight time can be shortened by about thirteen minutes per flight if the ATM and onboard avionics are optimized [Airbus 2016]. Eliminating this inefficiency translates to 9 million tons of fuel savings per year, a reduction in carbon dioxide emissions of about 27 million tons per year, and savings of about 500 million man-hours of travel time per year by passengers
and crew [Airbus 2016]. The optimization may not even require deployment of additional hardware; it comes down to dynamic optimization of flight schedules. The scheduling problem, unfortunately, is computationally intractable. Optimizing the ATM requires the development of more efficient scheduling algorithms. Given the magnitude of the potential savings, it is expected that the trapped inefficiencies will be reduced in the years ahead, even if optimality cannot be achieved.
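The quoted savings figures can be cross-checked against each other. The calculation below uses only the numbers cited above; the implied occupancy per flight is our inference, not an [Airbus 2016] figure.

```python
flights_per_year = 30e6          # annual commercial flights (from the text)
minutes_saved_per_flight = 13    # ATM optimization estimate [Airbus 2016]
man_hours_saved = 500e6          # quoted passenger + crew savings per year

flight_hours_saved = flights_per_year * minutes_saved_per_flight / 60
people_per_flight = man_hours_saved / flight_hours_saved

print(f"{flight_hours_saved / 1e6:.1f}M flight-hours saved per year; "
      f"the 500M man-hour figure implies ~{people_per_flight:.0f} "
      f"occupants per flight")
```

The figures are mutually consistent: 6.5 million flight-hours saved, multiplied by an average of roughly 77 occupants (crew plus passengers) per flight, reproduces the quoted 500 million man-hours.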
7.2.2 Reduction in Fuel Consumption
Closely related to the objective of minimizing flight time is the parallel objective of minimizing fuel consumption. U.S. air carriers consumed over 16 billion gallons of fuel in 2015, at a cost of over 30 billion dollars [USDOT 2016]. Technological advances are expected to reduce fuel consumption in all four phases of flight—takeoff, cruising, landing and taxiing—as discussed below.
7.2.2.1 Takeoff Phase
The power requirements of an aircraft are greatest at takeoff. The power needed depends on the speed of the head/tail wind, the weight of the aircraft at takeoff, the length of the runway and the geometry of the wing. The climb to cruising altitude consumes a considerable amount of energy. Once the aircraft starts cruising, power is consumed mainly to overcome aerodynamic resistance; hence, the power requirements in the cruising phase are considerably lower. The peak power to be delivered by an engine, and therefore the size of the engine, are determined by the power requirements at takeoff. One technological advance being considered is a ground-based mechanism for accelerating an aircraft to takeoff speed. For example, the aircraft could be accelerated by an electromagnetic propulsion system, or carried on a vehicle that accelerates it to takeoff speed. In either case, the aircraft does not rely on its engines for pre-takeoff acceleration. Hence, the engine size and its power rating can be reduced significantly. Reducing the engine size and power rating decreases the weight of the aircraft and the fuel consumption during flight. On the other hand, assisted takeoff requires the deployment of new ground-based mechanisms at airports. Assisted takeoff does not require significant advances over the avionics currently in use [Airbus 2016].
7.2.2.2 Cruise Phase
Advances in avionics can significantly enhance fuel efficiency during flight. Previously, we discussed some of the possible advances that can reduce flight time. In the following discussion, we look at another aspect of cooperative flying that can reduce fuel consumption. Studies have shown that when 25 birds fly in a V-formation, on average they achieve about a 65% reduction in drag and can extend their range by about 7%. Simulations show that aircraft too can benefit by flying in formation. However, to realize the benefits of flying in formation the inter-aircraft separation needs to be about 20 wingspans. On the
other hand, the current recommended inter-aircraft separation during flight is about 4 nautical miles [Airbus3, 2016]. A new generation of sensors and avionics needs to be deployed on aircraft in order to enable them to fly in close proximity. The aircraft need to be equipped with sensors—such as Light Detection and Ranging (LIDAR) and InfraRed (IR) cameras—to detect the wake of the aircraft ahead. Maintaining a stable inter-aircraft separation would be possible if the avionics systems in nearby aircraft are able to communicate and cooperate with each other autonomously. Such cooperation between the avionics in different aircraft would be possible if international standards for inter-aircraft autonomous communication and real-time coordination between avionics systems are stipulated and enforced. The benefits of flying in formation can be reaped in air corridors that have high traffic. If flight scheduling is coordinated globally to synchronize flight schedules, then a flock of aircraft could rendezvous and fly in formation over long distances. A secondary benefit of having aircraft fly in formation is the reduced load on ATM systems, which can regard a flock of aircraft as a single super-aircraft.
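The gap between today's separation standard and the separation needed for formation benefits can be quantified. The 737-class wingspan below is our illustrative assumption.

```python
wingspan_m = 36                       # roughly a Boeing 737's wingspan (assumed)
formation_sep_m = 20 * wingspan_m     # ~20 wingspans for formation benefits
current_sep_m = 4 * 1852              # 4 nautical miles (1 nmi = 1852 m)

print(f"Formation separation: {formation_sep_m} m; "
      f"current standard: {current_sep_m} m; "
      f"ratio ~{current_sep_m / formation_sep_m:.1f}x")
```

For a 737-class aircraft the required formation separation is about 720 m, roughly a tenth of today's 4 nautical mile standard, which is why autonomous inter-aircraft cooperation is a prerequisite for formation flight.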
7.2.2.3 Landing Phase
Currently, aircraft descend from cruising altitude to a runway in a stepwise path as shown in Fig. 7.1. Segments in which the aircraft loses altitude are punctuated by stretches in which it is expected to hold its altitude. The pilot is expected to request permission from the control tower each time the aircraft seeks to descend from one altitude to the next lower altitude. The stepwise approach is implemented to ensure that the aircraft’s descent progresses through a series of predetermined altitudes, to avoid mid-air collisions in the vicinity of airports, where air traffic density tends to be high. The drawback of the stepwise
Fig. 7.1 Continuous descent approach versus conventional stepwise descent
descent approach is that each time the aircraft seeks to level off at an altitude, it has to throttle up the engines, burning a considerable amount of fuel in the process. Throttling up the engines also adds both chemical and noise pollution in close proximity to the ground. If an airport is located near residential areas, the chemical and noise pollution become even more important factors. In contrast, in the Continuous Descent Approach (CDA), illustrated in Fig. 7.1, an aircraft can idle its engines and glide in a continuous descent path from its cruising altitude to the runway. In tests conducted by United Parcel Service (UPS), the CDA was found to save about 50 gallons of fuel per landing over the conventional stepwise descent. Assuming that commercial aircraft execute about 30 million landings per year, the CDA has the potential of saving 1.5 billion gallons of fuel per year, or a more than 9% reduction from the current fuel consumption level of 16 billion gallons. Idling the engines while landing, without throttling them up intermittently, also reduces noise and chemical pollution. Absent predetermined altitudes, the locations of aircraft that make CDA landings are tracked using GPS. In order to implement CDA landings, the necessary GPS hardware needs to be installed on aircraft and on the ground, and new ATM protocols need to be deployed. The avionics on old aircraft must also be reprogrammed to make them capable of executing CDA landings. Given that GPS technologies are already deployed on several aircraft, the expectation is that CDA landing protocols will be operational in the near future [Airbus4, 2016].
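The CDA savings estimate can be reproduced from the cited figures. Note that the text combines a worldwide landing count with a U.S.-only consumption figure, so the percentage is a rough upper bound rather than a precise fraction.

```python
gallons_saved_per_landing = 50    # UPS trial figure (from the text)
landings_per_year = 30e6          # assumed worldwide annual landings
annual_consumption_gal = 16e9     # U.S. carriers, 2015 [USDOT 2016]

saved_gal = gallons_saved_per_landing * landings_per_year
fraction = 100 * saved_gal / annual_consumption_gal
print(f"{saved_gal / 1e9:.1f}B gallons saved per year "
      f"(~{fraction:.1f}% of the quoted annual consumption)")
```

The calculation reproduces the quoted 1.5 billion gallons and a reduction of a little over 9%.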
7.2.2.4 Taxiing Phase
Considering aircraft engines’ fuel burn rate, shutting the engines off as early as possible after landing would greatly reduce fuel consumption. The International Air Transport Association (IATA) estimates that about six million tons of carbon dioxide emissions can be eliminated every year by shutting off engines soon after landing [Airbus5, 2016]. The delay in shutting off the engines can be minimized through several enhancements. First, the traffic congestion at gates and on runways can be minimized through dynamic optimization of flight schedules. Secondly, vehicles powered by electromagnetic propulsion systems can be used, instead of the aircraft’s engines, to taxi the aircraft to and from gates. Assisted taxiing will make it possible for aircraft to turn off their engines as soon as they land. The taxiing vehicles could even be powered by the energy harvested as the aircraft decelerates to rest after touch-down. The technologies needed to optimize fuel consumption after landing will likely be deployed over the next few decades.
7.2.3 Next Generation Air Transportation System (NextGen)
Whereas the previous discussion is applicable to worldwide aviation, concrete steps are being taken in the U.S. to implement the enhancements discussed above. The Federal Aviation Administration (FAA) of the U.S. has deployed the Next Generation Air Transportation System (NextGen), which is a new National Airspace System. The airline carriers have committed to upgrading the avionics equipment on their aircraft to be compatible with
NextGen by January 1, 2020. The main objective of NextGen is to “optimize all phases of flight using improved data collection, analysis and communications technologies” [NextGen 2016]. Deployment of NextGen is expected to reduce fuel consumption, reduce flight delays, and expand air traffic capacity in and around airports and in the skyways. The main features of NextGen are presented below; the interested reader is referred to [NextGen 2016] for additional details.
7.2.3.1 Infrastructure
At the heart of NextGen is a new technology called Automatic Dependent Surveillance-Broadcast (ADS-B). ADS-B is a satellite-based GPS technology that will replace ground-based radars for tracking and managing air traffic. ADS-B enables air traffic controllers, as well as aircraft, to gain unprecedented real time situational awareness of the air traffic in the skyways and at airports. The precise GPS-based tracking of aircraft will also enable NextGen to reduce the minimum inter-aircraft separation and thus support higher traffic density. An added enhancement of ADS-B is that, being satellite-based, it provides coverage in areas that were previously out of the range of ground-based radars. The national NextGen infrastructure has already been deployed and is operational. As mentioned above, the airline carriers have committed to upgrading the avionics systems in their aircraft to complete the migration of the entire airspace system in the U.S. to NextGen.
7.2.3.2 Air Traffic Control
A new air traffic control system called the En Route Automation Modernization (ERAM) system has already been deployed in the Air Route Traffic Control Centers (ARTCCs) at Albuquerque, Atlanta, Boston, Chicago, Cleveland, Denver, Fort Worth, Houston, Indianapolis, Jacksonville, Kansas City, Los Angeles, Memphis, Miami, Minneapolis, New York, Oakland, Salt Lake City, Seattle and Washington D.C. Controllers at the ARTCCs manage high altitude traffic. The ERAM systems deployed at these centers receive and process flight and radar data, manage communications and display relevant information to the human controllers. Also, the overall air traffic control is defragmented, since the same ERAM system is deployed at all twenty centers, enabling them to interoperate seamlessly. The ERAM system has enabled air traffic controllers to track 1900 aircraft at a time, in contrast to the previous legacy system, which could track only 1100 aircraft at a time. The improvement represents over a 70% expansion in capacity.
7.2.3.3 Performance Based Navigation
The new Performance Based Navigation (PBN) procedures, implemented as part of NextGen, enable aircraft to fly more direct routes between airports and reduce flight delays at airports. Since satellites, and not ground-based navaids, are used by aircraft for navigation, their routes are no longer constrained by the locations of ground-based navigation stations. Secondly, PBN provides aircraft with real time situational awareness—such as runway
closures and consequent congestion—enabling them to dynamically alter their flight plans and thereby reduce flight delays. (PBN is discussed in Chapter 5.)
7.2.3.4 Metroplex
NextGen is also implementing the concept of a Metroplex, which is a collection of airports that work cooperatively to minimize air traffic congestion. Dynamic load balancing across the airports in a Metroplex helps decongest the air space and thus minimize flight delays, especially in adverse weather conditions. The FAA plans to create a dozen Metroplexes near major metropolitan areas.
7.2.3.5 Equivalent Lateral Spacing Operations
The new procedure called Equivalent Lateral Spacing Operations (ELSO), made possible by NextGen, improves the duty cycle of a runway by allowing more flights to take off from it. The flights using the same runway are put on an enlarged set of non-interfering flight paths. ELSO has enabled an airport in Atlanta to increase the number of takeoffs per hour by about 8 to 12, leading to savings of about $26 million per year in fuel costs [NextGen 2016].
7.2.3.6 Wake Recategorization
The inter-aircraft separation during takeoffs and landings is determined by the wake turbulence created by the aircraft ahead. Previous estimates of the wake turbulence created by an aircraft were based mainly on the weight of the aircraft. New standards use data such as the wingspan and stability of an aircraft, in addition to its weight, to estimate its wake turbulence. The improved standards decrease the mandated separation, thereby reducing taxiing times.
7.2.3.7 Data Communication (DataComm)
At present, air traffic controllers communicate with the crew in a cockpit mainly by voice. The NextGen Data Communication (DataComm) supplements the voice-based communication with the capability for enhanced data communication, making routine communications—such as clearance for taxiing/landing/takeoff, route revisions, change-of-course instructions, and weather/runway advisories—more efficient.
7.2.3.8 Network Enabled Weather (NEW)
It is estimated that about 70% of all aircraft delays are weather-related. The NextGen system seeks to fuse weather data from multiple sensors—deployed on the ground, on satellites and in the air—to obtain a real time weather map across the entire national air space. The real time decision-making made possible by the fused data from an elaborate network of sensors is expected to reduce weather-related delays significantly.
7.2.3.9 National Airspace System Voice Switch (NVS)
Currently, the air-to-ground and ground-to-ground voice communications used for air traffic control rely on disparate equipment (seventeen different types). Secondly, the communication between a cockpit and air traffic control is restricted to the Air Traffic Controllers (ATCs) in the vicinity of the aircraft. Such geographic restriction generates unsustainable loads for some ATCs even as other ATCs, elsewhere in the nation, idle. The National Airspace System Voice Switch (NVS) seeks to exploit the common platform of NextGen to perform nationwide load balancing. Specifically, if an ATC is experiencing load spikes, then communications from aircraft to that ATC are rerouted to ATCs that have a relatively lighter load. Such rerouting will be made possible by NextGen, which will ensure that all ATCs and pilots see the same picture of the national air space.
7.2.4 Military Avionics
Whereas the civil aviation industry is relatively transparent about its state of the art and emerging trends, visibility into the trends in military aircraft is considerably limited. We mention the following trends that are evident from publicly available information. The functionalities envisioned in the futuristic aircraft described below will necessitate corresponding advances in avionics.
7.2.4.1 Split-and-Join Aircraft
The Transformer paradigm described by BAE Systems takes formation flying to a new level [BAE 2017]. Whereas aircraft often fly in formation to reduce aerodynamic drag and hence fuel consumption, the Transformer concept is to have multiple aircraft join together into one super-aircraft for long distance flights. As needed, or upon reaching the destination, the super-aircraft splits into a number of smaller aircraft. Besides the overall gain in fuel efficiency, the split-and-join technology will make it possible for larger aircraft to transport smaller fighter aircraft to distant battlefields.
7.2.4.2 Directed Energy Beams
Interception and strike capabilities in air combat will be significantly enhanced if aircraft are able to deliver lethal quantities of energy to targets—such as incoming missiles, enemy aircraft, vehicles and installations—at the speed of light. While directed energy systems are currently being deployed on the ground, the expectation is that they will be deployed on aircraft in the future to enhance the speed of defensive and offensive strikes [BAE 2017].
7.2.4.3 Unmanned War Planes Unmanned Aerial Vehicles (drones)—such as the MQ-1 Predator produced by General Atomics (see Chapter 6)—are already being used for military operations. While drones such as the MQ-1 reach speeds of over 200 miles per hour, the Unmanned Combat Aerial Vehicle (UCAV) named Taranis, being developed by BAE Systems for military use, is expected to be a supersonic unmanned vehicle equipped with advanced stealth technology and a significantly greater payload capacity than current drones [Allison 2014].
7.2.4.4 Hypersonic Aerial Vehicles Hypersonic vehicles achieve speeds between Mach 5 and Mach 10. Lockheed Martin is exploring the design of the SR-72, the successor to the SR-71, which is expected to reach speeds of Mach 6 [Trimble 2013]. One of the many challenges in the construction of hypersonic aircraft is that a turbojet engine can propel an aircraft only up to speeds of about Mach 2.2, while ramjet engines cannot operate below Mach 4, leaving a vast divide between the operating ranges of turbojets and ramjets [Trimble 2013]. To propel the SR-72 from rest to Mach 6.0, Lockheed Martin is working on a novel breed of engine that draws upon both turbojet and scramjet technologies [Norris 2013].
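The operating-range gap described above can be made concrete with a small sketch. The Mach boundaries are simply those quoted in the text; the mode names and the treatment of the boundaries are illustrative assumptions, not a description of Lockheed Martin's design.

```python
# Toy illustration of the turbojet/ramjet divide quoted above:
# turbojets work up to about Mach 2.2, ramjets only above about Mach 4.
# A combined-cycle engine must bridge the Mach 2.2-4 gap.

def propulsion_mode(mach):
    """Return which conventional engine type covers a given Mach number."""
    if mach <= 2.2:
        return "turbojet"
    if mach < 4.0:
        return "gap (combined-cycle technology needed)"
    return "ramjet/scramjet"

for m in (0.9, 2.0, 3.0, 5.0, 6.0):
    print(m, propulsion_mode(m))
```

The sketch shows why acceleration from rest to Mach 6 cannot be handled by either engine type alone: every flight profile must pass through the Mach 2.2–4 band that neither covers.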
APPENDICES
APPENDIX I
Abbreviations

A
ACARS Aircraft Communications Addressing and Reporting System
AC Advisory Circular; also Alternating Current
ACE Actuator Control Electronics
ACF AutoCorrelation Function
ACI All-Call Interrogation
ACK Acknowledgment
ADC Analog-to-Digital Converter; also Air Data Computer
ADF Automatic Direction Finder
ADIRU Air Data Inertial Reference Unit
ADM Air Data Module
ADS Air Data System
ADS-B Automatic Dependent Surveillance-Broadcast
AESA Active Electronically Steered Array
AFDX Avionics Full DupleX
AFIS Aircraft Flight Information System
AGL Above Ground Level
AHRS Attitude and Heading Reference System
ALTI Altitude Indicator
ALU Arithmetic Logic Unit
AMLCD Active Matrix Liquid Crystal Display
AOA Angle Of Attack
AOC Airline Operational Control
API Application Program Interface
APU Auxiliary Power Unit
ARINC Aeronautical Radio INCorporated
ARP Aerospace Recommended Practice
ARTCC Air Route Traffic Control Center
ASI Air Speed Indicator
ASIC Application Specific Integrated Circuit
ATC Air Traffic Control(ler)
ATCRBS Air Traffic Control Radar Beacon System
ATM Air Traffic Management
ATRU Auto Transformer Rectifier Unit
ATU Auto Transformer Unit

B
BCD Binary Coded Decimal
BJT Bipolar Junction Transistor
BPF Band Pass Filter
BPRZ BiPolar Return to Zero
BPSK Binary Phase Shift Keying

C
CAA Civil Aviation Authority
CAS Calibrated Air Speed
CATOBAR Catapult-Assisted TakeOff But Arrested Recovery
CCA Code Coverage Analysis
CCS Common Core System
CDA Continuous Descent Approach
CDMA Code Division Multiple Access
CDU Control Display Unit
CFIT Controlled Flight Into Terrain
CFRP Carbon Fiber Reinforced Polymer
CIP Central Integrated Processor
CIS/MS Crew Information System/Management System
CMOS Complementary Metal Oxide Semiconductor
COTS Commercial Off The Shelf
CPA Closest Point of Approach
CPDLC Controller-Pilot Data Link Communication
CPU Central Processing Unit
CTOL Conventional Take Off and Landing
CU Control Unit
CV Carrier Version
CWLAN Crew Wireless Local Area Network

D
DAC Digital-to-Analog Converter
DAL Design Assurance Level
DADC Digital Air Data Computer
DAS Distributed Aperture System
DDFS Direct Digital Frequency Synthesizer
DER Designated Engineering Representative
DFBWS Digital Fly-By-Wire System
DGCA Directorate General of Civil Aviation
DGPS Differential Global Positioning System
DLL Delay Lock Loop
DME Distance Measuring Equipment
DMR Dual Modular Redundancy
DMTL Dynamic Minimum Threshold Level

E
EADI Electronic Attitude Direction Indicator
EAS Estimated Air Speed; also Equivalent Air Speed
EAFR Enhanced Airborne Flight Recorder
EFIS Electronic Flight Instrumentation System
EFB Electronic Flight Bag
EFCS Electronic Flight Control System
EGPWS Enhanced Ground Proximity Warning System
EGT Exhaust Gas Temperature
EHMMS Engine Health Monitoring and Management System
EHSI Electronic Horizontal Situation Indicator
EICAS Engine-Indicating and Crew-Alerting System
ELSO Equivalent Lateral Spacing Operations
EMF ElectroMagnetic Force
EMI ElectroMagnetic Interference
EOTS Electro-Optical Targeting System
EPR Engine Pressure Ratio
ERAM En Route Automation Modernization
EWS Electronic Warfare System

F
FAA Federal Aviation Administration
FADEC Full Authority Digital Engine Control
FANS Future Air Navigation System
FBW Fly-By-Wire
FCC Flight Control Computer
FDR Flight Data Recorder
FE Fusion Engine
FET Field Effect Transistor
FIS-B Flight Information Service-Broadcast
FIT Flight Into Terrain
FLIR Forward-Looking InfraRed
FLL Frequency Lock Loop
FMC Flight Management Computer
FMS Flight Management System
FOG Fiber Optic Gyroscope
FPGA Field Programmable Gate Array
FQI Fuel Quantity Indicator
FRED Flight Recorder Electronic Documentation
FRUIT False Replies Unsynchronized with Interrogator Transmissions

G
GFLOPS Giga FLoating-point Operations Per Second
GIP Gimballed Inertial Platform
GLS GPS Landing System
GMACS Giga Multiply/ACcumulate per Second
GNSS Global Navigation Satellite System
GNLU GNSS Landing Unit
GOPS Giga Operations Per Second
GPS Global Positioning System

H
HDD Head Down Display
HF High Frequency
HFDL High Frequency Data Link
HLR High Level Requirement
HMDS Helmet-Mounted Display System
HOTAS Hand On Throttle And Stick
HP High Pressure
HPA High Power Amplifier
HTV-2 Hypersonic Transport Vehicle-2
HUD Head Up Display

I
I2C Inter-Integrated Circuit
IAS Indicated Air Speed
IATA International Air Transport Association
ICAO International Civil Aviation Organization
IC Integrated Circuit
ICP Integrated Core Processor
IFCS Intelligent Flight Control System
IEEE Institute of Electrical and Electronics Engineers
IF Intermediate Frequency
IFF Identify Friend and Foe
IIAS Instrument Indicated Air Speed
ILS Instrument Landing System
IMA Integrated Modular Avionics
INR Integrated Navigation Receiver
INS Inertial Navigation System
IP Instruction Pointer
IR InfraRed; also Instruction Register
IRST InfraRed Search and Track
IRU Inertial Reference Unit
ISA International Standard Atmosphere
ISFD Integrated Standby Flight Display
ISS Integrated Surveillance System
ITU International Telecommunication Union
IVHM Integrated Vehicle Health Monitoring

J
JTIDS Joint Tactical Information Distribution System

L
LAN Local Area Network
LED Light Emitting Diode
LFSR Linear Feedback Shift Register
LIDAR LIght Detection And Ranging
LLR Low Level Requirement
LNA Low Noise Amplifier
LNAV Lateral NAVigation
LO Local Oscillator
LP Low Pressure
LPF Low Pass Filter
LRM Line Replaceable Module
LRU Line Replaceable Unit
LSB Least Significant Bit
LSI Large Scale Integration
LVDT Linear Variable Differential Transformer

M
μP MicroProcessor
MADL Multifunction Advanced Data Link
MB Marker Beacon
MCDU Multifunction Control Display Unit
MEMS MicroElectroMechanical System
MFD MultiFunction Display
MIDS Multifunctional Information Distribution System
MIP Main Instrument Panel
MLS Microwave Landing System
MMR MultiMode Receiver
MOSFET Metal-Oxide-Semiconductor Field Effect Transistor
MSB Most Significant Bit
MSI Medium Scale Integration
MSL Mean Sea Level
MSTT MultiScan ThreatTrack
MTBF Mean Time Between Failure

N
NACK No ACKnowledgment
NAS National Airspace System
NCO Numerically Controlled Oscillator
ND Navigation Display
NDB Non-Directional Beacon; also Navigation DataBase
NEW Network Enabled Weather
NMI Non-Maskable Interrupt
NPG Network Participation Group
NRZ No Return to Zero
NTA Number of TCAS Aircraft
NVG Night Vision Goggles
NVS National Airspace System (NAS) Voice Switch

O
OAT Outside Air Temperature
OODA Observe, Orient, Decide, Act

P
PAO Poly Alpha Olefin
PBN Performance Based Navigation
PFC Primary Flight Computer
PFD Primary Function Display
PHAC Plan for Hardware Aspects of Certification
PLL Phase Lock Loop
PMR Performance Monitoring Recorder
PPI Plan Position Indicator
PRN Pseudo Random Number
PRT Platinum Resistance Thermometer
PSR Primary Surveillance Radar
PSU Power Supply Unit
PVI Pilot-Vehicle Interface

Q
QPSK Quadrature Phase Shift Keying

R
RA Radio Altimeter; also Resolution Advisory
RADAR RAdio Detection And Ranging
RCI Roll-Call Interrogation
RCS Reaction Control System; also Radar Cross Section
RDC Remote Data Concentrator
RF Radio Frequency
RFID Radio Frequency IDentification
RHI Range-Height Indicator
RLG Ring Laser Gyroscope
RPDU Remote Power Distribution Unit
RTCA Radio Technical Commission for Aeronautics
RTL Register Transfer Level
RVSM Reduced Vertical Separation Minima
Rx Receiver
RZ Return to Zero

S
SAE Society of Automotive Engineers
SAARU Standby Air Data and Attitude Reference Unit
SAR Synthetic Aperture Radar
SAS Software Accomplishments Summary
SAT Static Air Temperature
SATCOM SATellite COMmunication
SCI Software Configuration Index
SCMP Software Configuration Management Plan
SCR Software Conformity Review
SDC Synchro to Digital Converter
SDD Software Design Description
SDP Software Development Plan
SECI Software Environment Configuration Index
SIGINT SIGnal INTelligence
SIPO Serial In Parallel Out
SIS Signal In Space
SL Sensitivity Level
SPOF Single Point Of Failure
SQAP Software Quality Assurance Plan
SQAR Software Quality Assurance Records
SP Stack Pointer
SRD Software Requirements Data
SRS System Requirements Specifications
SSI Small Scale Integration
SSR Secondary Surveillance Radar
STC Supplemental Type Certificate
SV Satellite Vehicle
SVP Software Verification Plan
SVR Software Verification Results

T
TA Traffic Advisory
TACAN TACtical Air Navigation system
TAS True Air Speed
TAT Total Air Temperature
TAWS Terrain Awareness Warning System
TC Type Certificate
TCAS Traffic Collision Avoidance System
TDL Tactical Data Link
TDMA Time Division Multiple Access
TFG Tuning Fork Gyroscope
TIS-B Traffic Information Service-Broadcast
TM/TC TeleMetry/TeleCommand
TMR Triple Modular Redundancy
TRU Transformer Rectifier Unit
TSO Technical Standard Order
Tx Transmitter

U
UAV Unmanned Aerial Vehicle
UHF Ultra High Frequency
ULSI Ultra Large Scale Integration

V
VDLM2 VHF Data Link Mode 2
VFR Visual Flight Rule
VHF Very High Frequency
VL Virtual Link
VLSI Very Large Scale Integration
VMC Visual Meteorological Conditions
VMS Vehicle Management System
VNAV Vertical NAVigation
VOR Very High Frequency (VHF) Omnidirectional Range
VSI Vertical Speed Indicator

W
WxR Weather Radar
APPENDIX II
RTCA Documents

Radio Technical Commission for Aeronautics (RTCA) generates five types of documents for the aviation industry in response to requests by the Federal Aviation Administration [RTCA 2017]:
1. Safety Performance Requirements (SPR)
2. Operational Services and Environment Definitions (OSED)
3. Interoperability Requirements (INTEROP)
4. Minimum Aviation System Performance Standards (MASPS) and
5. Minimum Operational Performance Standards (MOPS).
The documents, especially MASPS and MOPS, provide guidance to both aircraft manufacturers and the regulatory agencies in the aircraft certification process. The complete list of available RTCA documents, as of December 2016, can be found at [RTCA 2016]. In Table II.1, we list selected RTCA documents pertaining to avionics, as examples. Table II.1 Selected RTCA documents.
Doc. ID  Title
DO-143  MOPS—Airborne Radio Marker Receiving Equipment Operating on 75 MHz
DO-152  MOPS—Vertical Guidance Equipment used in Airborne Volumetric Navigational Systems
DO-155  MOPS—Airborne Low-Range Radar Altimeters
DO-158  MASPS—Airborne Doppler Radar Navigation Equipment
DO-163  MOPS—Airborne HF Radio Communications Transmitting and Receiving Equipment Operating within the Radio Frequency Range of 1.5 to 30 Megahertz
DO-169  VHF Air-Ground Communication Technology and Spectrum Utilization
DO-177  MOPS—Microwave Landing System (MLS) Airborne Receiving Equipment
DO-178  Software Considerations in Airborne Systems and Equipment Certification
DO-179  MOPS—Automatic Direction Finding (ADF) Equipment
DO-181  MOPS—Air Traffic Control Radar Beacon System/Mode Select Airborne Equipment
DO-185  MOPS—Traffic Alert and Collision Avoidance System II (TCAS II)
DO-189  MOPS—Airborne Distance Measuring Equipment (DME) Operating within the Radio Frequency Range of 960-1215 MHz
DO-191  MOPS—Airborne Thunderstorm Detection Equipment
DO-192  MOPS—Airborne ILS Glide Slope Receiving Equipment Operating within the Radio Frequency Range of 328.6-335.4 Megahertz
DO-195  MOPS—Airborne ILS Localizer Receiving Equipment Operating within the Radio Frequency Range of 108-112 Megahertz
DO-212  MOPS—Airborne Automatic Dependent Surveillance (ADS) Equipment
DO-224  MASPS—Signal-in-Space Advanced VHF Digital Data Communications Including Compatibility with Digital Voice Techniques
DO-228  MOPS—Global Navigation Satellite Systems (GNSS) Airborne Antenna Equipment
DO-242  MASPS—Automatic Dependent Surveillance-Broadcast (ADS-B)
DO-254  Design Assurance Guidance for Airborne Electronic Hardware
DO-262  MOPS—Avionics Supporting Next Generation Satellite Systems (NGSS)
DO-265  MOPS—Aeronautical Mobile High Frequency Data Link (HFDL)
DO-281  MOPS—Aircraft VDL Mode 2 Physical Link and Network Layer
DO-284  SPR—Next Generation Air/Ground Communication System (NEXCOM)
DO-290  SPR—Air Traffic Data Link Services in Continental Airspace
DO-297  Integrated Modular Avionics (IMA) Development Guidance and Certification Considerations
DO-334  MOPS—Strapdown Attitude and Heading Reference Systems (AHRS)
DO-337  Recommendations for Future Collision Avoidance Systems
DO-352  INTEROP—Baseline 2 ATS Data Communications FANS 1/A Accommodation
APPENDIX III
ARINC Standards

The ARINC standards are organized into five series as shown in Table III.1 [ARINCStore 2017].

Table III.1 Series of ARINC standards.

Series  Description
400  Standards that provide the foundation for design of equipment (e.g., wiring, data bus) that conform to guidelines specified in the 500 and 700 series standards.
500  Standards for analog avionics.
600  Standards that provide the foundation for design of equipment that conform to guidelines specified in the 700 series standards.
700  Standards for digital avionics.
800  Standards for networked communications in aviation.
A comprehensive list of the ARINC standards can be found at [ARINCStore 2017]. In Table III.2, we list a few prominent ARINC standards. Table III.2 Selected ARINC standards.
Standard  Title
ARINC 429  Digital Information Transfer System
ARINC 453  Very High Speed Bus
ARINC 485  Cabin Equipment Interfaces
ARINC 542  Digital Flight Data Recorder
ARINC 561  Air Transport Inertial Navigation System (INS)
ARINC 562  Terrain Awareness and Warning System (TAWS)
ARINC 565  Mark 2 Subsonic Air Data System
ARINC 566  Mark 3 VHF Communications Transceiver
ARINC 569  Heading and Attitude Sensor
ARINC 578  Airborne ILS Receiver
ARINC 579  Airborne VOR Receiver
ARINC 594  Ground Proximity Warning System
ARINC 600  Air Transport Avionics Equipment Interfaces
ARINC 607  Design Guidance for Avionics Equipment
ARINC 613  Guidance for Using the ADA Programming Language in Avionics Systems
ARINC 619  ACARS Protocols for Avionics End Systems
ARINC 622  ATS Data Link Applications over ACARS Air-Ground Network
ARINC 628  Cabin Equipment Interfaces
ARINC 629  Multi-transmitter Data Bus
ARINC 631  VHF Data Link Mode 2
ARINC 633  AOC Air-Ground Data and Message Exchange Format
ARINC 635  HF Data Link Protocols
ARINC 647  Flight Recorder Electronic Documentation
ARINC 651  Design Guidance for Integrated Modular Avionics
ARINC 652  Guidance for Avionics Software Management
ARINC 655  Remote Data Concentrator (RDC) Generic Description
ARINC 701  Flight Control Computer System (FCCS)
ARINC 702  Flight Management Computer System (FMCS)
ARINC 703  Thrust Control Computer System (TCCS)
ARINC 704  Inertial Reference System (IRS)
ARINC 705  Attitude and Heading Reference System (AHRS)
ARINC 706  Mark 5 Subsonic Air Data System
ARINC 707  Radio Altimeter
ARINC 708  Airborne Weather Radar
ARINC 709  Airborne Distance Measuring Equipment
ARINC 710  Mark 2 Airborne ILS Receiver
ARINC 711  Mark 2 Airborne VOR ILS Receiver
ARINC 712  Airborne ADF System
ARINC 717  Flight Data Acquisition and Recording System
ARINC 718  Mark 3 Air Traffic Control Transponder
ARINC 723  Ground Proximity Warning System
ARINC 724  Aircraft Communications Addressing and Reporting System
ARINC 726  Flight Warning Computer System
ARINC 735  Traffic Alert and Collision Avoidance System (TCAS)
ARINC 738  Air Data and Inertial Reference System
ARINC 739  Multi-purpose Control and Display Unit
ARINC 741  Aviation Satellite Communication System
ARINC 743  Airborne Global Positioning System Receiver
ARINC 745  Automatic Dependent Surveillance
ARINC 747  Flight Data Recorder
ARINC 753  HF Data Link System
ARINC 756  GNSS Navigation and Landing Unit (GNLU)
ARINC 757  Cockpit Voice Recorder
ARINC 760  GNSS Navigation Unit
ARINC 762  Terrain Awareness and Warning System (TAWS)
ARINC 764  Head-Up Display System
ARINC 767  Enhanced Airborne Flight Recorder
ARINC 768  Integrated Surveillance System
ARINC 777  Recorder Independent Power Supply (RIPS)
ARINC 781  Mark 3 Aviation Satellite Communications Systems
ARINC 803  Fiber Optic Design Guidelines
ARINC 821  Aircraft Network Server System Functional Definition
ARINC 822  On-Ground Aircraft Wireless Communication
ARINC 828  Electronic Flight Bag Standard Interface
ARINC 831  Electromagnetic Compatibility Recommended Practice
BIBLIOGRAPHY
I
[AC-20-152 2005] FAA, www.faa.gov/regulations_policies/advisory_circulars/index. cfm/go/document.information/documentID/22211, retrieved on April 2, 2017. [AC-20-174 2011] FAA, www.faa.gov/documentLibrary/media/Advisory_Circular/ AC-20-174.pdf, , retrieved on April 2, 2017. [Acar 2009] Acar C and Shkel A, MEMS Vibratory Gyroscopes, Springer, 2009. [Adams 2003] Adams C, JSF: Integrated Avionics Par Excellence, www.aviationtoday.com/ av/military/JSF-Integrated-Avionics-Par-Excellence_1067.html#.ViztOrfhCuk, accessed on October 1, 2016. [ADS-B 2012] Automatic Dependent Surveillance-Broadcast (ADS-B), Advisory Circular No. 90-114, US Department of Transportation, Federal Aviation Administration, 2012. [Allison 2014] Allison G, Taranis Stealth Drone Test Flights Successful, ukdefencejournal.org. uk/taranis-stealth-drone-test-flights-successful/, retrieved on March 5, 2017. [Airbus 2016] www.airbus.com/innovation/future-by-airbus, accessed on April 2, 2017. [Airbus2 2016] Future energy sources, www.airbus.com/innovation/future-by-airbus/ future-energy-sources/, retrieved on October 26, 2016. [Airbus3 2016] Express Skyways, www.airbus.com/innovation/future-by-airbus/smarterskies/aircraft-in-free-flight-and-formation-along-express-skyways/, retrieved on April 2, 2017. [Airbus4 2016] Free-Glide Approaches and Landings, www.airbus.com/innovation/futureby-airbus/smarter-skies/low-noise-free-glide-approaches-and-landings/, retrieved on April 2, 2017. [Airbus5 2016] Ground Operations, www.airbus.com/innovation/future-by-airbus/ smarter-skies/low-emission-ground-operations/, retrieved on April 2, 2017. [Airiau 1994] Airiau R, Berge J-M, Olive V, Circuit Synthesis with VHDL, Kluwer Academic Publishers, 1994. [ALIS 2017] Automatic Logistics Information System, www.lockheedmartin.com/us/ products/ALIS.html, retrieved on March 12, 2017. [AllAboutCircuits 2017] Lessons in Electric Circuits: Vol. IV-Digital, www.allaboutcircuits. com/textbook, last accessed on March 11, 2017.
265
266 Principles of Modern Avionics
[Analog 2017] Fundamentals of Direct Digital Synthesis (DDS), MT-085 Tutorial, Analog Devices, www.analog.com/media/en/training-seminars/tutorials/MT-085.pdf, retrieved on March 29, 2017. [ANASQ239 2017] BAE Systems’ AN/ASQ 239 Electronic Warfare System for the F-35, www. youtube.com/watch?v=AujAzP1ChaQ, accessed on March 12, 2017. [Anttalainen 2014] Anttalainen T and Jaaskelainen V, Introduction to Communication Networks, Artech House, 2014. [ARINC 1977] ARINC Specification 429: Mark 33 Digital Information Transfer System, Aeronautical Radio Inc., 1977. [ARINCStore 2017] ARINC Standards Products, store.aviation-ia.com/cf/store/category. fm?prod_group_id=1, retrieved on March 4, 2017. [ARINC429 2017] ARINC Protocol Tutorial, Condor Engineering, leonardodaga.insyde.it/ Corsi/AD/Documenti/ARINCTutorial.pdf, retrieved on March 18, 2017. [B787 2012] 787 Propulsion System, www.boeing.com/commercial/aeromagazine/ articles/2012_q3/2/, retrieved on February 20, 2017. [B787 2017] Boeing 787 Dreamliner Specs, www.modernairliners.com/boeing-787dreamliner/boeing-787-dreamliner-specs/, retrieved on March 12, 2017. [BAE 2017] Aircraft Technologies of the Future, www.baesystems.com/en/feature/aircrafttechnologies-of-the-future, retrieved on March 5, 2017. [Baker 2002] Baker RC, An Introductory Guide to Flow Measurement, Professional Engineering Publishing Limited, UK, 2002. [Bakshi 2008] Bakshi UA and Godse AP, Communication Engineering, 6th edition, Technical Publications, Pune, India, 2008. [Balle 2016] Balle JKO, About the F-35 Lightning II, www.fi-aeroweb.com/Defense/F-35Lightning-II-JSF.html, retrieved on March 8, 2017. [Bao 2005] Bao M, Analysis and Design Principles of MEMS Devices, Elsevier, 2005. [Barnes 2001] Barnes JGP, Ada, in TheAvionics Handbook, ed. by CR Spitzer, CRC Press, 2001. [Barnes 2007] Barnes JGP, Ada, in Digital Avionics Handbook, 2nd edition, Avionics: Elements, Software and Functions, ed. by CR Spitzer, CRC Press, 2007. 
[Bartley 2001] Bartley GF, Boeing B-777: Fly-By-Wire Flight Controls, in The Avionics Handbook, ed. Spitzer CR, CRC Press, 2001. [BEA 2009] Final Report On the Accident on 1st June 2009 to the Airbus A330-203 Registered F-GZCP Operated by Air France Flight AF 447 Rio de Janeiro-Paris, www.bea.aero/ docspa/2009/f-cp090601/pdf/f-cp090601.en.pdf, retrieved on March 5, 2017. [Beizer 1990] Beizer B, Software Testing Techniques, International Thomson Computer Press, 1990.
Bibliography 267
[Bennett 2016] Bennet J, Lockheed announces it’s going to build a Mach 6 warplane, www. popularmechanics.com/military/research/a19961/lockheed-is-planning-mach-6warplane/, accessed on October 26, 2016. [Bentley 2009] Bentley MA, Spaceplanes: From Airport to Spaceport, Springer, 2009. [Boeing 2015] Boeing continues development on XS-1 reusable hypersonic unmanned aircraft, www.intelligent –aerospace.com/articles/2016/08/ia-boeing-xs1.html, accessed on October 26, 2016. [Boeing 2017] 787 Dreamliner by Design: Unrivaled Passenger Experience, www.boeing. com/commercial/787/by-design/#/unrivaled-passenger-experience, retrieved on February 8, 2017. [Boeing2 2017] Propulsion (1): Jet Engine Basics, Flight Operations Engineering, Boeing, kimerius.com/app/download/5781578351/Jet+engines+basics.pdf, retrieved on April 1, 2017. [Boeing3 2016] FAA Reference Code and Approach Speeds for Boeing Aircraft, 30 March 2016, www.boeing.com/assets/pdf/commercial/airports/faqs/arcandapproachspeeds. pdf, retrieved on April 2, 2017. [Boyes 2010] Boyes W, Instrumentation Reference Book, Fourth Edition, Elsevier Inc., 2010. [Brower 2001] Brower RW, Lockheed F-22 Raptor, The Avionics Handbook, Spitzer CR (editor), CRC Press, 2001. [Butler 2010] Butler A, New Stealth Concept Could Affect JSF Cost, aviationweek.com/ awin/new-classified-stealth-concept-could-affect-jsf-maintenance-cost, retrieved on October 12, 2016. [C5B 2014] C-5 A/B/C Galaxy and C-5M Super Galaxy, www.af.mil/AboutUs/FactSheets/ Display/tabid/224/Article/104492/c-5-abc-galaxy-c-5m-super-galaxy.aspx, retrieved on March 12, 2017. [Chandrasetty 2011] Chandrasetty VA, VLSI Design, A Practical Guide for FPGA and ASIC Implementations, Springer Science+Business Media LLC, 2011. [Chang 2015] Chang K, U.S. Military Space Plane Begins a Fourth (Mostly) Secret Mission, www.nytimes.com/2015/05/20/science/space/secret-vessel-to-test-durability-ofmaterials-in-space-nasa-says.html, accessed on February 27, 2017. 
[Chow 1985] Chow WW, Gea-Banacloche J, Pedrotti LM, Sanders VE, Schleich W, Scully MO, The Ring Laser Gyro, Reviews of Modern Physics, Vol. 57, No. 1, pp. 61-104, 1985. [Clayton & Winder 2003] Clayton GB and Winder S, Operational Amplifiers, Newnes, 2003. [Collinson 2011] Collinson RPG, Introduction to Avionics Systems, Springer, 2011. [Collision 1956] 1956 Grand Canyon TWA-United Airlines Aviation Accident Site, www. nps.gov/nhl/news/LC/spring2011/GrandCanyonREDACTED.pdf, accessed on November 2, 2016.
268 Principles of Modern Avionics
[Connelly 1996] Connelly JH, Gilmore JP and Weinberg MS, Micro-electromechanical Instrument and Systems Development at the Charles Stark Draper Laboratory, ntrs.nasa. gov/archive/nasa/casi.ntrs.nasa.gov/19960054119.pdf, retrieved on March 5, 2017. [Croucher 2015] Croucher P, Avionics in Plain English, Electrocution Technical Publishers, 2015. [da Silva 2012] da Silva MM, Multimedia Communications and Networking, CRC Press, 2012. [Davis 2013] Davis B, Mastering Software Project Requirements: A Framework for Successful Planning Development and Alignment, J. Ross Publishing, 2013. [Davis 2014] Davis MD, Weyuker EJ and Rheinboldt W, Computability, Complexity and Languages: Fundamentals of Theoretical Computer Science, Academic Press, 1983. [De 2010] De D, Basic Electronics, Dorling Kindersley (India) Private Limited, 2010. [Deagel 2006] AN/APG-81, www.deagel.com/Aircraft-Warners-and-Sensors/ANAPG81-a001381001.aspx, retrieved on March 12, 2017. [Deshmukh 2005] Deshmukh AV, Microcontrollers, Theory and Applications, Tata McGrawHill, New Delhi, 2005. [Deshpande 2007] Deshpande NP, Electronic Devices and Circuits, Principles and Applications, Tata McGraw Hill, 2007. [de Silva 2005] de Silva CW (editor), Vibration Monitoring, Testing and Instrumentation, CRC Press, 2005. [Doberstein 2012] Doberstein D, Fundamentals of GPS Receivers, A Hardware Approach, Springer, 2012 [Dodt 2011] Dodt, T Introducing the 787, www.isasi.org/Documents/library/technicalpapers/2011/Introducing -787.pdf, retrieved on February 6, 2017. [Dole 2017] Dole CE, Lewis JE, Badick JR and Johnson BA, Flight Theory and Aerodynamics, a Practical Guide for Operational Safety, Wiley, 2017. [Downing 2005] Downing J, Fiber-Optic Communications, Thomson Delmar Learning, 2005. [EETimes 2014] French-German collaborators claim solar cell efficiency world record, EE Times Europe, December 2, 2014, retrieved October 26, 2016. [Egan 2008] Egan WF, Phase-Lock Basics, John Wiley & Sons, Hoboken, New Jersey, 2008. 
[EIA 2016] www.eia.gov/tools/faqs/faq.php?id=97&t=3, retrieved on April 2, 2017. [El-Sayed 2016] El-Sayed AF, Fundamentals of Aircraft and Rocket Propulsion, Springer-Verlag, London 2016. [F22 2017] www.f22fighter.com/avionics.htm, retrieved on April 2, 2017. [F35 2016] F-35 Information, https://comprehensiveinformation.wordpress.com/, accessed on October 10, 2016. [F35Facts 2017] F-35C Carrier Variant, web.archive.org/web/20150511095816/http:// www.lockheedmartin.com/us/products/f35/f-35c-carrier-variant.html, retrieved on March 12, 2017.
Bibliography 269
[F35s 2017] media.defenceindustrydaily.com/images/DATA_F-35_Variants.gif, retrieved on March 12, 2017. [FANS 2017] Understanding Data Comm Systems with FANS 1/A+ CPDLC DCL and ATN B1, White Paper, Universal Avionics Systems Corporation, www.uasc.com/docs/defaultsource/documents/whitepapers/uasc_fans_whitepaper.pdf?sfvrsn=4, retrieved on March 13, 2017. [FAA14 2014] Aircraft Fuel System, Aviation Maintenance Technician Handbook— Airframe, Volume 2, Chapter 14, www.faa.gov/regulations_policies/handbooks_ manuals/aircraft/ amt_airframe_handbook, accessed on April 2, 2017. [FAA 2014] Pilot’s Handbook of Aeronautical Knowledge, U.S. Department of Transportation, Federal Aviation Adminstration, Skyhorse Publishing Inc., 2014. [FAA 2016.10] NextGen Works, www.faa.gov/nextgen/works/, accessed on October 28, 2016. [FAA 2016] www.faa.gov/documentLibrary/media/Advisory_Circular/AC_23-18.pdf, accessed on November 1, 2016. [FAA 2017] Pilot’s Handbook of Aeronautical Knowledge, www.faa.gov/regulations_policies/handbooks_manuals/aviation/phak/, retrieved on March 10, 2017. [FADEC 2016] FADEC 3, fadecinternational.com/products/fadec-3.php, accessed on October 29, 2016. [FDR 2017] Flight Data Recorder, www.skybrary.aero/index.php/Flight_Data_recorder_(FDR), retrieved on March 11, 2017. [Ferrara 1989] Ferrara JM, Avionics Vol. 1: Every Pilot’s Guide to Aviation Electronics, Air and Space Co., 1989. [Ford 2010] Ford D, A Vision So Noble: John Boyd, the OODA Loop and America’s War on Terror, CreateSpace Independent Publishing Platform, 2010. [Fulton 2014] Fulton R, RTCA DO-254/EUROCAE ED-80, in Digital Avionics Handbook, Spitzer C, Ferrell U and Ferrell T (editors), CRC Press, 2014 [Ghosh 2009] Ghosh SK (editor), Self-healing Materials: Fundamentals, Design Strategies, and Applications, Wiley-VCH, 2009. [Ghosh 2012] Ghosh AK, Introduction to Measurements and Instrumentation, PHI Learning Private Limited, New Delhi, 2012. 
[GMV 2017] GMV, www.navipedia.net/index.php/GPS_Receivers, accessed on March 25, 2017. [Godse 2008] Godse AP and Bakshi UA, Electronic Devices and Circuits – I, Technical Publications, Pune, 2008. [Golio 2007] Golio M and Golio J (editors), RF and Microwave Applications and Systems, CRC Press, 2007.
270 Principles of Modern Avionics
[GPS 2016] The Global Positioning System, www.gps.gov/systems/gps, accessed on November 13, 2016. [GPS 2017] New Civil Signals, www.gps.gov/systems/gps/modernization/civilsignals/, retrieved on March 21, 2017. [GPS1 2013] Interface Specification, Navstar GPS Space Segment/Navigation User Interfaces, IS-GPS-200H, www.gps.gov/technical/icwg/IS-GPS-200H.pdf, accessed on November 13, 2016. [GPS2 2013] Interface Specification, Navstar GPS Space Segment/Navigation User Interfaces, IS-GPS-705D, www.gps.gov/technical/icwg/IS-GPS-200H.pdf, accessed on November 13, 2016. [GPS3 2013] Interface Specification, Navstar GPS Space Segment/Navigation User Interfaces, IS-GPS-800H, www.gps.gov/technical/icwg/IS-GPS-200H.pdf, accessed on November 13, 2016. [GPWS 2017] ARINC 597 [Grewal 2013] Grewal MS, Andrews AP and Bartone CG, Global Navigation Satellite Systems, Inertial Navigation, and Integration, John Wiley and Sons, Hoboken, New Jersey, 2013. [Gunter 2005] Gunter L, All Decked Out. Similarities between 777, 787 help Airlines, Passengers and Boeing, Boeing Frontiers, Vol. 4, Issue 6, October 2005. www.boeing.com/news/ frontiers/archive/2005/october/i_ca1.html, retrieved on March 8, 2017. [Gupta 2014] Gupta PC, Data Communications and Computer Networks, 2nd edition, PHL Learning Private Limited, 2014. [Hagen 2009] Hagen JB, Radio-Frequency Electronics: Circuits and Applications, Cambridge University Press, 2009. [Hawk 2005] Hawk JL, The Boeing 787 Dreamliner: More than an Airplane, www.scsi-inc.com/ Boeing%20Presentation%20%by%20Jeff%20Hawk.pdf, retrieved on February 8, 2017. [Heise etal 2015] Haise P, Gaillardet I, Rahman H and Mannur V, Avionics Full Duplex Ethernet and the Time Sensitive Networking Standard, www.ieee802.org/1/files/public/ docs2015/TSN-Schneele-AFDX-0515-v01.pdf, retrieved on February 5, 2017. [Helfrick 2002] Helfrick A, Principles of Avionics, Avionics Communications Inc., Virgina, USA, 2002. 
[Hilderman 2011] Hilderman V and Baghi T, Avionics Certification: A Complete Guide to DO-178 (Software), DO-254 (Hardware), Avionics Communications Inc., 2011. [Hitt & Mulcare 2007] Hitt EF and Mulcare D, Fault-tolerant Avionics, chapter 8 in Avionics: Development and Implementation, ed. Spitzer CR, second edition, CRC Press, 2007. [Honeywell 2012] Micro Inertial Reference System: Product Description, aerocontent. honeywell.com/aero/common/documents/myaerospacecatalog-documents/ Laseref_VI_FINAL.pdf, retrieved on February 7, 2017.
Bibliography 271
[Honeywell2 2012] VRS Industrial Magnetic Speed Sensors, Application Note, Honeywell, sensing.honeywell.com/vrs-app-note-005934-2-en-final-26jun12.pdf, accessed on December 4, 2016.
[Hunecke 2010] Hunecke K, Jet Engines: Fundamentals of Theory, Design and Operation, The Crowood Press, UK, 2010.
[I2C 2014] I2C-Bus Specification and User Manual, www.nxp.com/documents/user_manual/UM10204.pdf, retrieved on March 15, 2017. Also see www.i2c-bus.org/i2c-primer/, accessed on March 15, 2017.
[ICAO 1964] ICAO Document 7488/2, Manual of the ICAO Standard Atmosphere, 1964.
[ICAO 2010] Operation of Aircraft, Annex 6 to the Convention on International Civil Aviation, code7700.com/pdfs/icao_annex_6_part_i.pdf, retrieved on March 11, 2017.
[IEEE 2002] IEEE Standard 521-2002, IEEE Standard Letter Designations for Radar-Frequency Bands, standards.ieee.org/findstds/standard/521-2002.html, accessed on March 20, 2017.
[IEEE 2008] 754-2008 – IEEE Standard for Floating-Point Arithmetic, https://standards.ieee.org/findstds/standard/754-2008.html, accessed on March 16, 2017. Also see www.csee.umbc.edu/~tsimo1/CMSC455/IEEE-754-2008.pdf, retrieved on March 16, 2017.
[IFF 2017] General IFF Principles, maritime.org/doc/radar/part2.htm, retrieved on March 12, 2017.
[IndustryWeek 2007] Boeing 787: A Matter of Materials—Special Report: Anatomy of a Supply Chain, www.industryweek.com/companies-amp-executives/boeing-787-matter-materials-special-report-anatomy-supply-chain, December 1, 2007, retrieved on February 8, 2017.
[INMARSAT 2017] www.stratosglobal.com/Products/Inmarsat/Classic%20Aeronautical.aspx, retrieved on February 7, 2017.
[ISO 2012] ISO/IEC 8652:2012, Information Technology—Programming Languages—Ada, www.iso.org/standard/61507.html, accessed on March 5, 2017.
[Jain 2010] Jain RP, Modern Digital Electronics, 4th edition, Tata McGraw Hill, New Delhi, 2010.
[Jansen 2012] Jansen RA, Second Generation Biofuels and Biomass: Essential Guide for Investors, Scientists and Decision Makers, Wiley-VCH, 2012.
[Japan 2016] Society of Fiber Science and Technology, Japan, High-performance and Specialty Fibers: Concepts, Technology and Modern Applications of Man-made Fibers for the Future, Springer, Japan, 2016.
[Jasinski 2016] Jasinski R, Effective Coding with VHDL: Principles and Best Practice, MIT Press, 2016.
[Jensen 2005] Jensen D, F-35 Integrated Sensor Suite: Lethal Combination, www.aviationtoday.com/2005/10/01/f-35-integrated-sensor-suite-lethal-combination, retrieved on October 1, 2016.
[Kaplan 2005] Kaplan E and Hegarty C (editors), Understanding GPS: Principles and Applications, Second Edition, Artech House, 2005.
[Kendal 1993] Kendal B, Manual of Avionics, Wiley-Blackwell, 1993.
[Kjelgaard 2007] Kjelgaard C, From Supersonic to Hover: How the F-35 Flies, www.space.com/4778-supersonic-hover-35-flies.html, accessed on October 7, 2016.
[King 2010] King AD, Inertial Navigation—Forty Years of Evolution, deanspacedrive.org/wp-content/uploads/2010/07/inertial_navigation_introduction.pdf, retrieved on March 6, 2017.
[Kingsley 1999] Kingsley S and Quegan S, Understanding Radar Systems, Scitech Publishing Inc., 1999.
[Kingsbury 2008] Kingsbury A, New Landings Save Airplane Fuel, www.usnews.com/news/national/articles/2008/07/02/new-landings-save-airplane-fuel/, accessed on October 27, 2016.
[Lan&Roskam 2003] Lan C-T.E. and Roskam J, Airplane Aerodynamics and Performance, DAR Corporation, 2003.
[Langton 2009] Langton R, Clark C, Hewitt M and Richards L, Aircraft Fuel Systems, Wiley, 2009.
[Lawrence 1998] Lawrence A, Modern Inertial Technology: Navigation, Guidance and Control, 2nd edition, Springer, 1998.
[Leach 2016] Leach RJ, Introduction to Software Engineering, Second Edition, CRC Press, 2016.
[Li 2006] Li SS, Semiconductor Physical Electronics, Springer Science+Business Media LLC, 2006.
[Linke 2008] Linke-Diesinger A, Systems of Commercial Turbofan Engines: An Introduction to Systems Functions, Springer-Verlag, 2008.
[Lipson 2011] Lipson A, Lipson SG and Lipson H, Optical Physics, 4th Edition, Cambridge University Press, 2011.
[LM1 2016] F-35 Lightning II EOTS, Superior Targeting Capability, www.lockheedmartin.com/content/dam/lockheed/data/mfc/pc/f-35-lightiningii--electro-optical-targeting-system-etos/mfc-f35-eots-pc.pdf, retrieved on October 4, 2016.
[Liu 1999] Liu Q, Doppler Measurement and Compensation in Mobile Satellite Communications Systems, Military Communications Conference Proceedings/MILCOM, 1: pp. 316–320, 1999.
[Lu 2004] Lu Mi, Arithmetic and Logic in Computer Systems, Wiley-Interscience, 2004.
[Mack 2016] Mack E, Finally! A Flying Car Could Go on Sale in 2017, www.forbes.com/sites/ericmack/2015/03/16/finally-a-flying-car-could-go-on-sale-as-soon-as-2017/#326fb1c7219e, accessed on October 26, 2016.
[Malvino 2015] Malvino A and Bates D, Electronic Principles, 8th Edition, McGraw-Hill Education, 2015.
[Marriott 2009] Marriott P and Stone AD, Understanding DO-254 Compliance for the Verification of Airborne Digital Hardware, White Paper, www.synopsys.com/solutions/industrysegmentsolutions/milaero/capsulemodule/do-254_wp.pdf, accessed on September 3, 2016.
[Marsh 2014] Marsh G, Composites Fly High (Part 1), Materials Today, www.materialstoday.com/composite-applications/features/composites-flying-high-part-1/, retrieved on February 8, 2017.
[Mehta 2009] Mehta N, A Flexible Jet Fighter, Ingenia Online, Issue 41, December 2009, www.ingenia.org.uk/Ingenia/Articles/576, retrieved on March 12, 2017.
[McHale 2011] McHale J, Boeing 787 Avionics Review, www.militaryaerospace.com/articles/2011/06/boeing-787-avionics.html, retrieved on March 14, 2017.
[McLean 2013] McLean D, Understanding Aerodynamics: Arguing from the Real Physics, John Wiley & Sons Ltd, 2013.
[Meikle 2008] Meikle H, Modern Radar Systems, Artech House, 2008.
[Middleton 1989] Middleton D, Avionic Systems, Longman Scientific & Technical, 1989.
[MIL-STD-882C 1993] Military Standard: System Safety Program Requirements, www.system-safety.org/Documents/MIL-STD-882C.pdf, retrieved on April 2, 2017.
[Miller 2012] Miller S, Contribution of Flight Systems to Performance-Based Navigation, www.boeing.com/commercial/aeromagazine/articles/qtr_02_09/pdfs/AERO_Q209_article05.pdf, retrieved on February 6, 2017.
[ModeS 1983] Mode Select Beacon System, Order 6365.1A, www.faa.gov/documentLibrary/media/Order/FAA_Order_6365.1A.pdf, accessed on November 7, 2016.
[Mohanakumar 2008] Mohanakumar K, Stratosphere Troposphere Interactions: An Introduction, Springer, 2008.
[Moir 2006] Moir I, Seabridge A and Jukes M, Military Avionics Systems, Wiley, 2006.
[MQ1 2015] MQ-1B Predator, www.af.mil/AboutUs/FactSheets/Display/tabid/224/Article/104469/mq-1b-predator.aspx, retrieved on March 12, 2017.
[Myers 2011] Myers GJ, The Art of Software Testing, Wiley, 2011.
[Nagabhushana 2010] Nagabhushana S and Sudha LK, Aircraft Instrumentation and Systems, I.K. International Publishing House, 2010.
[Neville and Dey 2012] Neville R and Dey M, Innovative 787 Flight Deck Designed for Efficiency, Comfort and Commonality, www.boeing.com/commercial/aeromagazine/articles/2012_q1/3/, retrieved on February 5, 2017.
[NG1 2017] APG-81 AESA Radar for the F-35 JSF, www.youtube.com/watch?v=wlwOupjMeM, accessed on March 12, 2017.
[NG2 2017] AN/AAQ-37 Distributed Aperture System (DAS) for the F-35, www.northropgrumman.com/Capabilities/ANAAQ37F35/Pages/default.aspx, retrieved on March 12, 2017.
[NOAAD 2017] Density Altitude, www.crh.noaa.gov/images/epz/wxcalc/densityAltitude.pdf, retrieved on March 19, 2017.
[NOAAP 2017] Pressure Altitude, www.crh.noaa.gov/images/epz/wxcalc/pressureAltitude.pdf, retrieved on March 19, 2017.
[Norris and Wagner 2009] Norris G and Wagner M, Boeing 787 Dreamliner, Zenith Press, 2009.
[Northrop 2005] Northrop RB, Introduction to Instrumentation and Measurements, Taylor and Francis, Boca Raton, Florida, USA, 2005.
[Northrop 2014] Understanding Voice and Data Link Networking, Northrop Grumman's Guide to Secure Tactical Data Links, December 2014, www.northropgrumman.com/Capabilities/DataLinkProcessingAndManagement/Documents/Understanding_Voice+Data_Link_Networking.pdf, retrieved on March 1, 2017.
[Novacek 2016] Novacek P, Terrain Awareness and Warning Systems—TAWS, Buyer's Guide, www.aeapilotsguide.com/pdf/05-06_Archive/TAWSPG05.pdf, accessed on November 1, 2016.
[NTSB 2017] Cockpit Voice Recorders (CVR) and Flight Data Recorders (FDR), www.ntsb.gov/news/Pages/cvr_fdr.aspx, retrieved on March 11, 2017.
[Nyce 2004] Nyce DS, Linear Position Sensors: Theory and Application, John Wiley and Sons, Hoboken, New Jersey, 2004.
[Ogando 2007] Ogando J, Boeing's 'More Electric' 787 Dreamliner Spurs Engine Evolution, www.designnews.com/document.asp?doc_id=222308, accessed on October 30, 2016.
[Oishi 2007] Oishi RT, Communications, in Digital Avionics Handbook, 2nd Edition, Avionics: Elements, Software and Functions, ed. by Spitzer CR, CRC Press, 2007.
[Onishi 2012] Onishi M, Toray's Business Strategy for Carbon Fiber Composite Materials, www.toray.com/ir/pdf/lib/lib_a136.pdf, retrieved on February 8, 2017.
[Orrell 2007] Orrell D, The Future of Everything: The Science of Prediction, Thunder's Mouth Press, 2007.
[Pallett 1992] Pallett EHJ, Aircraft Instruments and Integrated Systems, Longman, 1992.
[Parkinson 1996] Parkinson BW and Spilker JJ (editors), Global Positioning System: Theory and Applications, Vol. I, American Institute of Aeronautics and Astronautics, 1996.
[Patterson 2007] Patterson DA and Hennessy JL, Computer Organization and Design: The Hardware/Software Interface, 3rd Edition, Revised Printing, Morgan Kaufmann, 2007.
[PBN 2017] NextGen Performance Based Navigation, www.faa.gov/nextgen/update/progress_and_plans/pbn/, retrieved on March 14, 2017.
[Peled 2013] Peled DA, Software Reliability Methods, Springer, 2013.
[Pelgrom 2013] Pelgrom MJM, Analog-to-Digital Conversion, 2nd Edition, Springer Science+Business Media LLC, 2013.
[Pizzocaro 2009] Pizzocaro M, Development of a Ring Laser Gyro: Active Stabilization and Sensitivity Analysis, Università di Pisa, 2009.
[PW 2017] F135 Specs Charts, www.pw.utc.com/Content/F135_Engine/pdf/B-2-4_F135_SpecsChart.pdf, retrieved on October 11, 2016.
[Radar 2016] www.radartutorial.eu, accessed on November 5, 2016.
[Ramsey 2005] Ramsey JW, Boeing 787: Integration's Next Step, www.aviationtoday.com/2005/06/01/boeing-787-integrations-next-step/, retrieved on February 7, 2017.
[Richards 2010] Richards WR, O'Brien K and Miller DC, New Air Traffic Surveillance Technology, www.boeing.com/commercial/aeromagazine/articles/qtr_02_10/pdfs/AERO_Q2-10_article02.pdf, accessed on November 4, 2016.
[Rierson 2013] Rierson L, Developing Safety-Critical Software: A Practical Guide for Aviation Software and DO-178C Compliance, CRC Press, 2013.
[RockwellCollins 2017] VHF 2100 High Speed Multi-Mode Data Radio, www.rockwellcollins.com/-/media/Files/Products/Product_Brochures/Communication_and_Networks/Communication/Radios/VHF-2100_Data_Sheet.ashx, retrieved on February 7, 2017.
[RockwellCollins 2016] rockwellcollins.com/~/media/Files/Unsecure/Products/Product_brochures/Radar_and_Surveillance/Weather_Radar/WXR-2100/WXR-2100_MutliScan_ThreatTrack.ashx, accessed on October 30, 2016.
[RockwellCollins 2017] Aircraft Communications Addressing and Reporting System (ACARS), www.rockwellcollins.com/Services_and_Support/Information_Management/~/media/DA843DB0792946C58740F613328E5022.ashx.
[Royce 2015] Rolls-Royce, The Jet Engine, Wiley, 2015.
[RNAV 2017] Section 2: Area Navigation (RNAV) and Required Navigation Performance (RNP), tfmlearning.fly.faa.gov/publications/atpubs/aim/Chap1/aim0102.html, retrieved on March 14, 2017.
[Rolls-Royce 2017] Engine Health Management, www.rolls-royce.com/about/our-technology/enabling-technologies/engine-health-management.aspx#sense, retrieved on February 19, 2017.
[RTCA 2016] List of Available Documents, www.rtca.org/Files/ListofAvailableDocsMarch2013.pdf, retrieved on March 4, 2017.
[RTCA 2017] Standards and Guidance Materials, www.rtca.org/content.asp?pl=145&contentid=145, accessed on March 4, 2017.
[SAE4754 1996] Certification Considerations for Highly-Integrated or Complex Aircraft Systems, standards.sae.org/arp4754, accessed on April 2, 2017.
[SAE4761 1996] Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment, standards.sae.org/arp4761, accessed on April 2, 2017.
[Schafer 2016] Schafer SM, F-16 Pilot: Cessna in Sight Less Than 1 Second Before Crash, abcnews.go.com/US/wireStory/16-pilot-cessna-sight-crash-43223837, accessed on November 2, 2016.
[Segal 2009] Segal C, The Scramjet Engine: Processes and Characteristics, Cambridge University Press, 2009.
[Sessions 2017] Sessions D, GPS Tutorial #2: Signals and Messages, www.theairlinepilots.com/forumarchive/rnav/gps2.pdf, retrieved on March 31, 2017.
[Shockley 1949] Shockley W, The Theory of p-n Junctions in Semiconductors and p-n Junction Transistors, The Bell System Technical Journal, 28 (3), pp. 435-489, 1949.
[Sinnett 2007] Sinnett M, 787 No-Bleed Systems: Saving Fuel and Enhancing Operational Efficiencies, www.boeing.com/commercial/aeromagazine/articles/qtr_4_07/AERO_Q407_article2.pdf, accessed on October 30, 2016.
[Skybrary 2016] Terrain Avoidance and Warning System (TAWS), www.skybrary.aero/index.php/Terrain_Avoidance_and_Warning_System_(TAWS), accessed on November 1, 2016.
[Skyradar 2016] ADS-B Technology (TIS-B, FIS-B), www.skyradar.net/skyradar-system/adsbtechnology.html, accessed on November 5, 2016.
[SLD 2016] The F-35 and Advanced Sensor Fusion, a white paper by Lockheed Martin, www.sldinfo.com/whitepapers/the-f-35-and-advanced-sensor-fusion/, accessed on October 9, 2016.
[Spitzer 1993] Spitzer CR, Digital Avionics Systems: Principles and Practice, 2nd edition, McGraw-Hill Inc., 1993.
[Stealth 2013] How Stealthy is the F-35?, https://defenseissues.net/2013/10/19/how-stealthy-is-the-f-35, retrieved on March 12, 2017.
[Stensby 1997] Stensby JL, Phase-Locked Loops: Theory and Applications, CRC Press, 1997.
[Stephenson 2006] Stephenson D, The Airplane Doctors—Boeing, www.boeing.com/news/frontiers/archive/2006/august/ts_sf09.pdf, retrieved on February 19, 2017.
[Stewart 2014] Stewart S and Edwards J, Flying the Big Jets, 4th edition, Airlife Publishing, 2014.
[Tandeske 1991] Tandeske D, Pressure Sensors: Selection and Application, Marcel Dekker Inc., 1991.
[TAWS 2002] DOT, Terrain Awareness and Warning System, 14 CFR Parts 91, 125, 135, www.gpo.gov/fdsys/pkg/FR-2000-03-29/pdf/00-7595.pdf#page=21, accessed on October 30, 2016.
[Taylor 2016] Taylor EF, Wheeler JA and Bertschinger E, Exploring Black Holes: Introduction to General Relativity, 2nd Edition, Pearson, 2016.
[TCAS 2011] Introduction to TCAS II, Version 7.1, www.faa.gov/documentLibrary/Advisory_Circular/TCAS II V7.1 Intro booklet.pdf, accessed on November 9, 2016.
[Tooley 2007] Tooley M, Aircraft Digital Electronic and Computer Systems, Elsevier, 2007.
[Torenbeek 2013] Torenbeek E, Advanced Aircraft Design: Conceptual Design, Analysis and Optimization of Subsonic Civil Airplanes, Wiley, 2013.
[Trenkle 1973] Trenkle F and Reinhardt M, In-Flight Temperature Measurements, AGARDograph No. 160, AGARD Flight Test Instrumentation Series, Volume 2, www.dtic.mil/dtic/tr/fulltext/u2/758589.pdf, retrieved on March 19, 2017.
[Trimble 2013] Trimble S, PICTURES: Skunk Works Reveals Mach 6.0 SR-72 Concept, www.flightglobal.com/news/articles/picture-skunk-works-reveals-mach-60-sr-72-concept-392481/, retrieved on March 5, 2017.
[USDOT 2012] Automatic Dependent Surveillance-Broadcast (ADS-B) Operations, Advisory Circular No. 90-114, www.faa.gov/documentLibrary/media/Advisory_Circular/AC%2090-114.pdf, accessed on November 4, 2016.
[USDOT 2016] United States Department of Transportation, Airline Fuel Cost and Consumption (U.S. Carriers—Scheduled), www.transtats.bts.gov/fuel.asp, accessed on October 28, 2016.
[Uyemara 2001] Uyemura JP, CMOS Logic Circuit Design, Kluwer Academic Publishers, 2001.
[Veltman 2016] Veltman A, Pulle DWJ and de Doncker RW, Fundamentals of Electrical Drives, Springer, 2016.
[VMS 2003] First Lockheed Martin F-35 Joint Strike Fighter Vehicle Management Computer Delivered, www.prnewswire.com/news-releases/first-lockheed-martin-f-35-joint-strike-fighter-vehicle-management-computer-delivered-55707452.html, retrieved on March 12, 2017.
[Wald 1984] Wald RM, General Relativity, The University of Chicago Press, 1984.
[Wang 2014] Wang W, Zhong W, Xu W, Mu M and Yuan J, Low-Noise Balun Aids Phased-Array Radars, mwrf.com/active-components/low-noise-balun-aids-phased-array-radars, retrieved on March 12, 2017.
[Watkins 2006] Watkins CB, Integrated Modular Avionics: Managing the Allocation of Shared Intersystem Resources, 25th Digital Avionics Systems Conference, IEEE/AIAA, 15–19 October 2006.
[Weather 2016] Pressure Altitude, www.weather.gov/media/epz/wxcalc/pressureAltitude.pdf, accessed on December 25, 2016.
[Wertz 1978] Wertz JR (editor), Spacecraft Attitude Determination and Control, D. Reidel Publishing Company, 1978.
[Wild 2013] Wild TW and Kroes MJ, Aircraft Powerplants, Eighth Edition, McGraw-Hill Education, 2013.
[World 2017] www.worldometers.info/world-population/, retrieved on April 2, 2017.
[Wroble 2006] Wroble M, Kreska J and Dungan L, SAE MIL-1394 for Military and Aerospace Vehicle Applications, ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20060009084.pdf, retrieved on March 12, 2017.
[Wyatt2015] Wyatt D, Aircraft Flight Instruments and Guidance Systems: Principles, Operations and Maintenance, Routledge, Taylor & Francis Group, 2015.
[Xu 2016] Xu G and Xu Y, GPS: Theory, Algorithms and Applications, Third Edition, Springer, 2016.
[Young 2004] Young RR, The Requirements Engineering Handbook, Artech House, 2004.
[Zaman et al 2011] Zaman KBMQ, Bridges JE and Huff DL, Evolution from 'Tabs' to 'Chevron Technology' – a Review, Aeroacoustics, vol. 10, no. 5&6, pp. 685-710, 2011.
[Zuckerberg 2016] Zuckerberg M, The Technology Behind Aquila, www.facebook.com/notes/mark-zuckerberg/the-technology-behind-aquila/10153916136506634, accessed on October 26, 2016.
[Zwag 2007] Zwaag SVD (editor), Self Healing Materials: An Alternative Approach to 20 Centuries of Materials Science, Springer, 2007.
Index

A
Acceleration 174
Acceleration sensors 4
Accelerometers 4, 130
Acknowledgment, ISO or maintenance (AIM) data 96
Active electronically scanned array (AESA) radar 222
Actual navigation performance 184
Adaptive control 197
Add instruction 74
Address bus 80
Address bus buffers 81
Address decoding logic 80
Adiabatic correction 109
Air data inertial reference unit 197
Air data sensors 3, 102, 107
Air data system 188
Air temperature measurement 108
Air traffic control 244
Air velocity measurement 110
Aircraft communication addressing and reporting system 5, 211
Aircraft development, stakeholders in 18
Air-to-ground communication system 5
All-call interrogation 142
All-call reply 141
Alternative energy sources 237
Altitude measurement 114
Amplifiers 63
Amplitude modulation 37
Analog to digital converter (ADC) 67
Analog-digital data conversion 63
Angle-of-attack sensor 128
Application specific integrated circuit 87
Area navigation 184
ARINC 429 data bus 92
Arithmetic logic unit 82
ASCII code 34
Astable multivibrator 46
Attitude and heading reference system 3
Attitude sensor 3, 102, 117
Automatic dependent surveillance-broadcast 143
Automatic direction finder 167
Autonomic logistics information system 235
Autopilot 8
Aviation, future of 236
Avionics
  architecture 11
  classification 3
  elements 90
  equipment 189
  functions 1
  hardware development process 22
  history 14
  in F-35 Lightning 219, 221
  in Boeing 787 Dreamliner 120
  in military aircraft 214
  in civilian aircraft 179
  metrics for evaluating 17
  process flow for design and development 19
  programming 99

B
Basic accelerometer 131
Basic logic gates 43
BCD 35
Binary code 35
Binary coded decimal data 95
Binary phase shift keying 39
Binary phase shift keying demodulator 40
Binary phase shift keying modulator 40
Biomass fuel 237
Bipolar junction transistor 54, 55
Bipolar NRZ 36
Bipolar RZ 36
Bistable multivibrator (S-R latch) 47
BNR data 95
Boeing 787 Dreamliner
  avionics in 180
  communication avionics in 185
  electrical power in 191
  electrical power generation and distribution in 192
  navigation avionics on 187
  overview of 179
C C/A signal 154 C/A-code 153 Capacitive accelerometer 134 Capacitive fuel quantity indicator 175 Central processing unit 76 Certification of aircraft 20, 21, 26 Chromel-Alumel thermocouple 172 Civilian aircraft, anatomy of 101 Cockpit displays and recorders 10 Codes 33 Collision avoidance maneuver 206 Combinational logic circuits 45 Common computing resource 181 Common core system 181 Common data network 181 Communication, navigation and identification system 225 Communication, navigation, identification and surveillance (CNIS) system 4, 185 Complementary metal-oxide-semiconductor (CMOS) logic 59 Not and nand gates in 60 Compressor inlet temperature 172 Condition-based maintenance 197 Control and management system 6 Control bus 81
Control segment 146 Control unit 83 Controlled flight into terrain 199 Coriolis force 124 on an oscillator 126 CPU architecture 78 Crash-safe flight recorders 212 Crew information system 190 Cross correlation computation 162
D D latch 48 Data bus 6, 91 Data bus buffer 81 Data communication 245 Data demodulation 165 Data in GPS signal 166 Data representation 71 Dead reckoning 4 Delay lock loop 160 Design assurance levels 21 Digital circuits 43 Digital computer 69 architecture 70 Digital fly by wire system 194 Digital phase correction 164 Diode 53 Directed energy beams 246 Discrete data 95 Displacement sensors 102, 104 Display systems 189 Distance measuring equipment 9 Distributed aperture system 225 Doppler correction 158 Drift errors 121
E Earth’s atmosphere 108 Edge-triggering 49 Electromagnetic sensors 103, 136 Electronic attack measures 232 Electronic flight instrument system 7 Electronic warfare system 231 Electro-optical targeting system 225
Encapsulation 12
Engine health monitoring and management system 194
Engine pressure ratio 173
Engine sensors 4, 103, 170
Engine vibration 174
Engine's shaft speed 176
Engine-indicating and crew-alerting system 7, 191
Enhanced airborne flight recorder 191
Enhanced electronic counter measures 232
Enhanced electronic surveillance 232
Equivalent lateral spacing operations 245
Exception handling 84
Excess-3 35
Exhaust gas temperature 172
External/internal data bus 80

F
F-135 Pratt & Whitney engine 218
F-35 Lightning II 216
Fault masking 98
Fault tolerance 96
Federal Aviation Authority 17
Federated architecture 12
Fiber optic cable 88
Fiber optic communications 87
Fiber optic gyroscope 122
Field programmable gate array 87
File transfer protocol 96
Flags register 82
Flash analog to digital converter 68
  output 69
Flight data recording system 213
Flight management system 7, 182
Flight time, reduction of 240
Flip-flops 50
Fly by wire 7
Flying cars 239
Fuel consumption 241
Fuel flow rate 175
Fuel flowmeter 176
Fuel quantity 174
Full authority digital engine control 193, 230
Full-adder 45

G
Gate densities 86
Gated S-R latch 48
General purpose IC 87
Generic avionic bus architecture 92
Gimbaled (mechanical) gyroscopes 118
Gimbaled inertial platform 130
Glide slope display 209
Global positioning system 4, 5, 145
Global positioning system receiver 155
Global positioning system satellite 152
Gray codes 35

H
Half-adder 45
Hands on throttle and stick 228
Hardware design 21
Hardware design flow 24
Harvard architecture 85
Harvested energy 238
Heading sensors 3
HOTAS stick 229
Hydrogen fuel cells 238
Hypersonic aerial vehicles 247

I
Identification functionality 189
Indicated air speed measurement 111
Inertial navigation system 5
Inertial reference system 188
Inlet air temperature 172
Instruction decoder 82
Instruction pointer 77
Instruction register 79
Instructions 72
  representation 71
Instrument landing system 8, 207
Integrated circuits (IC) 86
Integrated modular architecture 13
Integrated navigation receiver 188
Integrated sensor fusion engine 233
Integrated sensor system 232
Integrated vehicle health monitoring 196
Integration scales 86
Interference limiting protocols 204
Inter-integrated circuit (I2C) bus 60
  hardware 63
Interrupt 84
Inverting summer and R/2nR DAC 66
K
Keplerian geosynchronous orbit corrections 151

L
Laser gyroscopes 121
Latches 48
Linear feedback shift register 51
Linear variable differential transformer 105
Line-replaceable unit 91
Localizer display 208
Long downlink format 141
Loop and sense antennae voltages 169

M
Mach number 110
Machine-level instructions 73
Main memory 85
Management system 190
Manchester NRZ 36
Marker beacons 209
Memory organization 73
MEMS gyroscope 126
Metal-oxide-semiconductor field effect transistor 57
Metroplex 245
Micro-electro mechanical systems 126
Military aircraft, avionics in 214, 246
Mode S communication 141
Mode S downlink format 143
Monostable multivibrator 47
Multifunction advanced data link 227
Multivibrators 46

N
National airspace system voice switch 246
Navigation 165
Negative integers and binary arithmetic 32
Network enabled weather 245
Next generation air transportation system 243
  infrastructure 244
nMOSFET 58
Non-inverting summer 65
n-p-n transistor 55
Number systems 31
Numerically controlled oscillator 159

O
Onboard communication system 5

P
P-code 153
Performance based navigation 184, 244
Phase lock loop 163
Phase shift modulation schemes 39
Piezoelectric accelerometer 132
Piezoresistive accelerometer 133
Pilot vehicle interface 227
Pitot tube 111
Plan position indicator 139
Platinum resistance thermometer 108
pMOSFET 59
Position sensors 102, 129
Potentiometer 104
Power supply 9
PR codes 142
Primary surveillance radars 4
Process flow 18

Q
Quadrature phase shift encoding 42
Quadrature phase shift keying 41
  demodulator 43
  modulator 42

R
R/2nR DAC
  output 67
  inverting summer and 66
RA sense 206
Radar altimeter 210
Radar frequency bands 137
Radio frequency bands 136
Rate gyroscopes 118
Rate integrating gyroscopes 121
Redundancy 97
Remote data concentrators 182
Required navigation performance 184
Restrained rate gyroscope 120
Ring laser gyroscopes 123
RTCA documents 261

S
Sagnac effect 122
SATCOM configurations 187
Second generation sustainable aviation fuel 237
Secondary surveillance radar 140
Selected ARINC standards 263
Self-healing materials 236
Semiconductor switches 53
Sensors 3
  types of 102
Sequential logic 46
Serial-in parallel-out register 50
Series of ARINC standards 263
Servo-balance accelerometer 135
Shock 174
Short downlink format 141
Signal acquisition/tracking 157
Signal propagation time corrections 147
Software design 26
Software development process 28
  tasks in 27
  waterfall model 27
Solar energy 237
Space segment 146
Spaceplanes 238
Split-and-join aircraft 246
Stack pointer 77
Standby air data and attitude reference unit 198
Subframes in GPS navigation data 166
Surveillance system 6, 189
Synchro transmitter/receiver 106

T
T/R module 224
Tactical air navigation system 9
Tactical data link 226
Tau 201
TCAS architecture 203
TCAS-TCAS coordination 206
Terrain awareness and warning system 199
Terrestrial hypersonic flight 239
Thermal considerations 25
Thermocouples 172
Threat detection and tracking 201
Time-to-digital converter 164
Traffic alert and collision avoidance system 6
Traffic collision avoidance system 200
Transistor modes 56
Transponders 137
Travel and resolution advisory 202
Triple-triple redundancy 98
True air speed 113
Turbine inlet temperature 172
Two-spool turbofan engine 170
2-to-1 multiplexer 46

U
Unmanned war planes 246
User segment 147

V
Variable reluctance tachometer 177
Vehicle management system 234
Very high frequency omnidirectional range 8
Vibratory gyroscopes 124
Von Neumann architecture 79, 85

W
Wake recategorization 245
Weather radar 198