COMMUNICATION AND INFORMATION SYSTEMS SECURITY

Edited by JOHN G. VOELLER
Black & Veatch


Copyright © 2014 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

ePDF: 9781118651704
ePub: 9781118651834

Printed in the United States of America


CONTENTS

Preface  vii
1. Telecommunication: Critical Infrastructure Protection  1
2. Strategies for Protecting the Telecommunications Sector  19
3. Wireless Security  37
4. Detection of Hidden Information, Covert Channels and Information Flows  53
5. Inherently Secure Next-Generation Computing and Communication Networks for Reducing Cascading Impacts  69
6. The Sensor Web: Advanced Technology for Situational Awareness  83
7. Consequence Mitigation: Current Research and Future Trends  97
Index  111


PREFACE

Adapted from the Wiley Handbook of Science and Technology for Homeland Security.

The topic of homeland security did not begin with the World Trade Center or the IRA or the dissidents of past empires, but began when the concept of a nation versus a tribe took root and allegiance to a people was a choice, not a mandate. The concept of terrorism is part of homeland security, but there are other risks to homeland security, such as those that come from Mother Nature or negligence of infrastructure maintenance. Indeed, these factors have much higher probabilities of causing substantial damage and loss of life than any group of terrorists could ever conceive. Hence, the focus here is on situations that put humans at risk and can disrupt and damage infrastructure, businesses, and the environment, and on scientific and technological tools that can assist in detecting, preventing, mitigating, recovering from, and repairing the effects of such situations.

The number of science and technology (S&T) related topics involved in the physical, cyber, and social areas of homeland security includes thousands of specialties in hundreds of disciplines, so no single collection could hope to cover even a majority of them. Instead, our intention is to discuss selected topics in ways that will allow readers to acquire basic knowledge and awareness and encourage them to continue developing their understanding of the subjects.

Naturally, in the context of homeland security and counterterrorism, some work has to be classified so as not to "communicate our punches" to our adversaries, and this is especially true in a military setting. However, homeland security is concerned with solutions to domestic situations, and these must be communicated to officials, law enforcement, and the public. Moreover, having experts speak in an open channel is important for informing researchers, academics, and students so that they can work together and increase our collective knowledge.

There are many ways to address homeland security concerns and needs, and many different disciplines and specialties. An ongoing open conversation among experts, one that allows them to connect with others and promotes collaboration, shared learning, and new relationships, is needed. Certainly, creating a forum in which theories, approaches, solutions, and implications could be discussed and compared would be beneficial. In addition, reliable sources from which experts and lay persons alike could learn about various facets of homeland security are needed. It is equally important that policy and decision makers get the full picture of how much has been done and how much still needs to be done in related areas.

Even in places that have dealt with terrorism for over a century, there are no strong, cost-effective solutions to some of the most pressing problems. For example, from a distance we have very limited ability to spot a bomb in a car moving toward a building, so as to allow a decision on whether to destroy or divert the car before it can damage the target. Even simpler, the ability to spot a personnel-borne improvised explosive device (IED) in a crowd entering a busy venue is still beyond our collective capability. Therefore, the bounds of what we know and do not know need to be documented.

Finding additional uses for technologies developed originally to solve a homeland security problem is one of the most important aspects of the economics involved. An inescapable issue in many areas of homeland security S&T is that even a successful solution, when applied to only a small market, will likely fail because of insufficient returns. For example, building a few hundred detectors for specific pathogens is likely to fail because of limited demand, or it may never even receive funding in the first place. The solution to this issue is finding multiple uses for such devices. In such a case, a chemical detector for contraband or dangerous materials could also be used to detect specific air pollutants in a building, thus helping allergy sufferers. In this way, capabilities developed for homeland security may benefit other, more frequently needed uses, thereby making the invention more viable.

The editors of this work have done a superb job of assembling authors and topics and ensuring a good balance between fundamentals and details in the chapters. The authors were asked to contribute material that is instructional, discusses a specific threat and a solution, or provides a case study on different ways a problem could be addressed and what was found to be effective. We wanted new material where possible. The authors have produced valuable content and worked hard to enhance the quality and clarity of the chapters. And finally, the Wiley staff has taken on the management of contributors with patience and energy beyond measure.

John G. Voeller
Senior Editor


1

TELECOMMUNICATION: CRITICAL INFRASTRUCTURE PROTECTION

Ted G. Lewis
Center for Homeland Defense and Security, Naval Postgraduate School, Monterey, California

1.1 INTRODUCTION

The telecommunications sector is a complex adaptive system exhibiting self-organized criticality (SOC), which suggests vulnerability to systemic failure. Over its 100-year history, the architecture of the telecommunications sector in the United States has evolved into a scale-free network with critical nodes located in a small number of major telecom hotels, that is, buildings containing a high concentration of switching equipment, storage, and interdependent connections. These hubs were formed by economic, regulatory, and technical forces operating over four historical periods: an unregulated beginning, the telecom war years, the regulated vertical monopoly period, and the current deregulated competitive era. This chapter briefly traces the evolution of telecommunications in general and telephony in particular. Using network science theory, we show that hubs and betweener nodes are the most critical components in the national system. Furthermore, these critical nodes are the direct result of the regulatory forces shaping the industry, which have had a major impact on telecommunications. Because of the economic, regulatory, and technical forces ever present in the industry, the telecom sector has evolved into a state of SOC. Although the industry has not experienced a calamity on a scale similar to the 2003 Eastern Power Grid Blackout, I believe that the two networks have evolved to a similar state.

1.2 OVERVIEW

Telecommunications infrastructure became the first critical infrastructure sector in the United States following the Cuban Missile Crisis of 1962. The confrontation between President Kennedy and Premier Khrushchev of the former Soviet Union eventually led to the NCS (National Communications System) and the formation of the NSTAC (National Security Telecommunications Advisory Committee), created by President Reagan (EO 12382, 1982) to advise the President of the United States on matters pertaining to the security and well-being of telecommunications. The first critical infrastructure sector was renamed the Communications Sector by the Department of Homeland Security in 2009 [1]. Regardless of its name, telecommunications security has always been at the forefront of homeland security, even before the creation of the US Department of Homeland Security in 2003.

The communications sector, like many other fundamental infrastructure sectors, has a long and rich history of evolutionary change. Generally speaking, power, energy, telecommunications, and other interstate commerce sectors such as transportation have emerged as complex adaptive systems, becoming very large, complex, and malleable networks with both strengths and weaknesses. These strengths and weaknesses are a by-product of the technical, economic, social, and regulatory policies of the United States, which have shaped these industries for over 100 years. Generally, it is believed that these forces are responsible for the current state of SOC of this sector.

This chapter develops a framework for understanding this complex adaptive system. Note that it is a system, which is more than merely a collection of components. Network modeling is a natural way to simplify and understand the rudiments of complex systems, and in particular the architecture of such systems. A network represents a system as a collection of nodes connected by links. For example, a social network is a collection of people, represented by nodes, and their associations, represented by links. A telecommunications network may be modeled at several levels: a physical-level model uses nodes to represent switches and links to represent wires. A telecommunications network model might also equate nodes with buildings containing thousands of switches, and links with thousands of fiber-optic cables. Networks are abstract mathematical objects that serve only to represent what is of most interest to the study of a system.

Determination of individual asset security has very little payoff for homeland security. For one thing, it is too expensive to protect every asset, and for another, it is not necessary. Knowing that a transformer in a power grid or a telephone pole in a telecommunication system is vulnerable to a terrorist attack or a natural disaster such as a hurricane tells us very little about the vulnerability of the larger system. Because of the complexity, interdependencies, and varying criticalities of large and complex systems that span the entire nation, or major portions of it, we must understand the system's architecture. Network models allow us to study large and complex system architectures as evolving systems so we can understand system vulnerabilities and derive strategies to deal with them. The immediate objective, then, is to protect the entire telecommunications infrastructure by judicious selection of critical components. A longer term, more ambitious objective is to suggest measures that cause wholesale restructuring of the telecommunications sector such that it is intrinsically more secure.
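To make the modeling idea concrete, here is a minimal sketch (an editorial illustration, not the author's model; the node names are invented and the networkx library is assumed) that represents buildings as nodes and fiber routes as links:

    import networkx as nx

    # Nodes stand for buildings full of switching equipment; links stand for
    # the fiber routes between them. Everything except connectivity is ignored.
    G = nx.Graph()
    G.add_edges_from([
        ("Chicago hub", "New York office"),
        ("Chicago hub", "Denver office"),
        ("Chicago hub", "Dallas office"),
        ("New York office", "Boston office"),
    ])

    # Degree (number of links) is the first clue to criticality.
    print(sorted(G.degree, key=lambda pair: -pair[1]))
    # [('Chicago hub', 3), ('New York office', 2), ...]

The point of such an abstraction is that questions about the whole system, such as which node's failure matters most, become questions about the graph rather than about individual assets.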

1.3 EVOLUTIONARY FORCES THAT SHAPE THE SECTOR

In some sense the communications sector has come full circle: from digital to analog, and then back to digital. Its creators envisioned a system somewhat like today's Internet, that is, a global broadcast network that connected everyone to everyone else. But their vision was limited by the technology available nearly 200 years ago. Samuel Morse (1791–1872) perfected the first commercially successful digital system, the telegraph, and demonstrated it in 1844 by transmitting Morse code (dots and dashes equivalent to the binary 1s and 0s of today) from Washington, D.C. to Baltimore. Western Union transmitted digital messages much like today's e-mail from coast to coast in 1861. Thus was born the first electronic communication network for transmitting disembodied messages between a pair of humans. Technology was the limiting factor, but profitability would soon motivate rapid advances in technology, rendering digital telegraphy obsolete almost immediately.

Western Union enjoyed a brief monopoly of the electronic communication business until 1876, when Alexander Graham Bell (1847–1922) successfully transmitted his voice over an analog channel to Mr. Watson, his assistant. Bell filed his "telephone patent" only a few hours ahead of Elisha Gray (1835–1901) of Western Electric. (Western Electric made the telephones and switching equipment for exchanges.) Bell founded the Bell Telephone Company and quickly built the first telephone exchange network in Hartford, Connecticut in 1877. Bell later purchased Western Electric from Elisha Gray in 1882, and proceeded to create one of the largest of the many vertical monopolies of the twentieth century. Bell Telephone linked two cities (New York and Boston) together in 1883, but it would take the company nearly another 60 years to subscribe 50% of the population. The first mobile telephone did not appear until 1946! Compare this to the rate of adoption of the videotape recorder in the 1980s (12 years to reach 50%), and the rapid adoption of most new technologies today, such as the iPod, Internet e-mail, and cellular handsets. Historically and politically, it is important to note that the first cellular telephone network (1979) was built in Japan rather than the United States.

Through most of the twentieth century, communications in the United States consisted mainly of the public switched telephone network (PSTN) owned and operated by AT&T. The slow pace of technological innovation under a vertical monopoly eventually led to radical changes in the industry, aimed primarily at stimulating innovation and rapid adoption of new technologies. Slow adoption of new technology would become one of the factors leading to deregulation of the communications industry in 1996.

Patent litigation in 1879 separated voice and data: the settlement allowed Bell Telephone to operate voice networks and Western Union to operate data networks (basically stock market quotes). This artificial separation between voice and data would become a barrier to the advancement of telephony until the invention of the Internet in 1969 and its commercialization in 1998. Even today, the network neutrality movement is about content, that is, whether voice, data, pictures, etc. should be priced separately or not. The network neutrality advocates rightly claim that all information is digital, so how can telephone and telecommunications companies charge separate rates for different encodings of ones and zeros? So far, the network neutrality advocates have won, and this has not become a force shaping the sector.

1.3.1 Unregulated Beginnings

Table 1.1 lists major events in the evolution and shaping of the telecommunications sector. This timeline has four distinct periods: Unregulated, Telecom War, Regulated, and Deregulated. From 1877 to 1898, the industry was mainly unregulated. During this period, a large number of local companies emerged to serve local customers. This produced a large number of isolated and heavily connected networks characterized by dense wiring clustered around a small number of central switching offices. Even after consolidation set in, the resulting networks were clustered and highly focused on local calls. (Highly clustered networks are more resilient and less vulnerable to failures and attacks than today's national-scale communication networks.)

TABLE 1.1  Major Events in the Evolution of Telecommunications Regulation

1837–1873   Telegraphy was the first digital communication system
1866        Western Union becomes first telecom monopoly
1876        Bell demonstrates first operating telephone
1878        5600 telephones in use
1882        109,000 telephones in use
1885        AT&T incorporated for long-distance service
1894        Bell's patents expire
1898        Telecom War begins . . . through 1934
1899        AT&T reorganized as an IP holding company
1903        Telephone industry dominated by independents
1907        AT&T reorganized and controlled by J. P. Morgan
1911        AT&T vertically integrated: Western Electric, Long Lines
1913        US DOJ sues AT&T claiming violation of Sherman Antitrust Act
1924        AT&T owns 223 of 234 independents!
1934        Telecommunications Act of 1934
1934–1974   Vertical Monopoly Period
1974–1984   DOJ suit leads to breakup of AT&T
1996        Telecommunications Act of 1996
1996        LECs win court battle establishing states' right to set retail prices

1.3.2 The Telecom War

A "telecom war" broke out among competitors after Bell's patents expired, prompting the U.S. Department of Justice to step in and enforce the Sherman Antitrust Act of 1890. While the case against AT&T was very similar to the Sherman Antitrust Act case against Microsoft, the action against AT&T was much more severe. The government forced AT&T to stop buying independent telephone companies without their permission; it required AT&T to interoperate with its competitors [the local exchange carriers (LECs), in today's language]; and it required AT&T to divest its control of the Western Electric Manufacturing Company, whereas Microsoft was merely fined and allowed to remain intact after being found guilty of violating the 1890 Act. Yet however severe the action may have seemed, AT&T owned 223 of the 224 independent companies within a decade of the 1913 ruling!

This illustrates one of the primary factors affecting and shaping many infrastructure systems: increasing returns. Increasing returns in economics says that the more a certain commodity exists, the more valuable it becomes. In this case, increasing returns drove AT&T toward a monopoly: the more customers connected by the AT&T network, the more valuable the network became. The more valuable it became, the more customers wanted to subscribe. This spiral ended with AT&T in the monopoly catbird seat. Increasing returns accelerates the adoption of one technology and service over another, because it standardizes the user interface, exhibits the compounding network effect of being able to communicate with more people over a large network versus a small one, and motivates the owner-operator to amortize fixed costs over an ever larger customer base.

In network terms, increasing returns is a kind of preferential attachment effect. It works like this: as customers randomly select one of many vendors to provide telephone service, they eventually realize that more people are subscribing to one service than another. The benefits of the "more popular" service may be intangible, but if only a few customers decide to switch to the more popular service, this begins a process of preferential attachment. Think of the vendor (competitor) as a node in a network, and the links as subscriptions. The more heavily connected node is preferred over the less connected nodes, which accelerates adoption of the heavily connected node. As more consumers subscribe to the preferred vendor node, more decide to also subscribe, which snowballs into an avalanche of connections. Thus, the popular node becomes a major hub of the network. (A small simulation of this process appears at the end of this section.)

Fundamental infrastructure systems such as electric power, water and sewer, and telecommunications systems all exhibit the network effect known as preferential attachment. In the end, the preferred node becomes a monopoly, as Microsoft did in the 1990s. The emergence of a monopoly has little to do with the performance of a certain company or leader; rather, it is an intrinsic property of fundamental infrastructure systems. Increasing returns sets in, and the infrastructure becomes the only player in the field, that is, a monopoly.

Although preferential attachment, increasing returns, and network effects were not well understood in the 1930s, the results were clear: as a consequence of monopolies like AT&T, most customers were happy with their service, but not all. In particular, people living in rural or thinly populated areas were without service. The network effect was inaccessible to them because it was not cost-effective for the owner or operator of the telephone company to serve an isolated customer. The Congress, in its wisdom, bartered universal access for a natural monopoly over the major means of long-distance communication when it passed the 1934 Telecommunications Act. Universal access means that everyone gains access at a (somewhat) flat fee, regardless of his or her geographical location. Instead of nationalizing AT&T, the Congress provided monopolistic protections for the private enterprise in exchange for universal access. AT&T was required to amortize connection charges across all customers. But this privilege came at a price, because innovation would slow to a halt without the incentives provided by competition.
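The snowball dynamic described above is easy to reproduce. The following minimal sketch is an editorial illustration under stated assumptions (the vendor count, subscriber count, and seed are invented; only Python's standard library is used): each new subscriber picks a vendor with probability proportional to the vendor's current subscriber count.

    import random

    # Each vendor starts with one subscriber; every new customer then picks a
    # vendor with probability proportional to the vendor's current subscriber
    # count (preferential attachment, i.e. increasing returns).
    def simulate_market(n_vendors=5, n_customers=10_000, seed=42):
        rng = random.Random(seed)
        subscribers = [1] * n_vendors
        for _ in range(n_customers):
            pick = rng.choices(range(n_vendors), weights=subscribers)[0]
            subscribers[pick] += 1
        return subscribers

    print(simulate_market())
    # One vendor typically captures most of the market, even though all
    # vendors start out identical: the hub emerges from the attachment rule
    # alone, not from any difference in quality.

Run it with different seeds: a different vendor may win each time, but a near-monopoly almost always emerges, which is the point of the increasing-returns argument.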

1.3.3 Regulatory Period

The 1934 Telecommunications Act initiated the regulatory period of evolution. It provided for the regulation of telecom through the FCC (Federal Communications Commission), which answers only to the Congress. It declared the electromagnetic spectrum within the United States to be public property that only the Congress had the right to regulate. A license and a hefty licensing fee were imposed on commercial broadcasters. Finally, the Congress required broadcasters to operate in the best interest of the public.

The 1934 Act had a huge impact on shaping the communications network of the United States. For example, the so-called "long lines" established by AT&T during this period remain a major component of the communications infrastructure today. The protocols and standards of operation established by AT&T remain as legacy systems. Specifically, the SS (switching system) computers designed and built by AT&T/Bell Labs established a global standard for how telecommunications systems operate. (They are legacy systems in the sense that the old analog protocols no longer work with the new digital protocols of the Internet, and in some cases the older digital protocols have to be made to interoperate with newer digital protocols.)

Because AT&T was a closely regulated vertical monopoly, the system operated seamlessly from end to end. Equipment interoperated, networks interfaced with one another, and universal access guaranteed safe and secure operation. In many ways the old AT&T system was more resilient and rugged than the cobbled-together networks of today. According to Richard Kuhn, "For several decades AT&T has expected its switches to experience not more than two hours of failure in 40 years, a failure rate of 5.7 × 10⁻⁶" (two hours out of the roughly 350,000 hours in 40 years). The old regulated system was extremely reliable!

1.3.4 Deregulated Oligopolies

The deregulated period began in 1974 and continues today. It took 10 years for the US Department of Justice to break up AT&T this time. The breakup led to the "Baby Bells", or LECs, regional telephone companies licensed to operate as monopolies within geographical areas of the United States. A total of 22 companies were granted seven operating regions, but the infrastructure remained much the same as during the regulated period. The Congress also sought to establish pricing, which was challenged by the LECs, leading in 1996 to the states winning the right to set prices for their citizens. Nonetheless, pricing restrictions on both the wholesaler (LEC and IEC, or interexchange carrier) and the retailer (consumers) had a major impact on the performance and reliability of the network. Why build out newer, faster, and more reliable networks when profitability is constrained by regulation? The Baby Bells continued to be slow to adopt new technology and provide better service.

This period of telecom evolution might have persisted for some time if the Congress and the NTIA (the National Telecommunications and Information Administration within the Department of Commerce, created in 1978 by EO 12046) had not commercialized the Internet in 1993–1998. The commissioner of the FCC and the Congress justified their major revision of the 1934 Telecommunications Act mainly on the grounds that competition would spur innovation. One goal of the 1996 Telecommunications Act was to encourage universal Internet access for everyone's home and office. The Baby Bells, saddled with their legacy analog systems and voice-only mentality, were taking too long to transition to the new digital age. This theme continues to resonate today. The 1996 Act unbundled services; in other words, no more extra fees for extra services like digital data transmission (VoIP, or voice over Internet protocol). It motivated the deployment of faster networks (xDSL), and limited ownership of Cable TV, TV, and radio stations. But most importantly, the 1996 Telecom Act established peering as a way of life for competitors. This has led to the number one vulnerability in the communications sector.

Peering is the practice of sharing networks with competitors. Company A may need Company B's network to provide long-distance connections for local customers. Conversely, Company B needs Company A's local connections to gain access to the "last mile", the household or office consumer. LECs typically own and operate local exchanges, while IECs ("long-haul" carriers) typically own and operate long lines (Fig. 1.1). In today's market, both local access to get onto the long lines and long lines to make long-distance calls are required.

FIGURE 1.1 Structure of the communications sector: tree-structured LEC local loops (end offices, central offices, and tandem switches) feeding the graph-structured IEC backbone through POPs and satellite and cellular gateways [2].

Peering radically restructured the industry, because it not only allowed, but motivated, competitors to co-locate their switching equipment in close proximity to one another. The consequences were dramatic and unanticipated: a small number (30–40) of major switching hubs emerged as the preferred nodes. That is, the law of increasing returns appeared again. Only this time the result was the creation of the number one vulnerability of the telecommunications sector: the carrier or telecom hotel. This phenomenon is explored later.

The telecommunications sector contains more than the wired-only network left over from the days of vertical monopolies and regulation. Even so, all communication networks depend on it. Satellite communication (with small exceptions), cellular telephony, the Internet, and to a lesser degree broadcast TV, Cable TV, and GPS navigation depend on the core capabilities of the wired network. Therefore, a potential threat to the wired network is also a potential threat to the wireless and associated networks. These systems are interdependent, as shown in Figure 1.1.

1.4 MAJOR COMPONENTS OF THE SECTOR

A full discourse on the U.S. telecommunications system could easily fill an entire book, so the discussion here focuses on the top level. Figure 1.1 is a gross simplification of telecommunications in the United States. It consists of three major layers: the LECs and their customers; the long-distance interexchange carriers (IECs); and the various devices and services feeding into the points-of-presence (POP) gateways provided by the IECs. IECs operate long lines that connect cities and countries to one another.


Figure 1.1, reading from left to right, shows how a telephone call or e-mail transmission makes its way from one person to another. Suppose an e-mail sender is transmitting from one of the houses shown under the LEC column. The message travels to a headend switch, typically located in the neighborhood and serving up to 1,000 homes. The headend connects to a central office in the LEC, which forwards the message locally or through the long-haul IEC network. If the recipient is located within the same network, the message is routed back through the tandem switches to a headend switch and then into the recipient's home.

Suppose instead that the recipient uses a cell phone 1,200 miles away. The LEC must forward the message to a POP switch within an IEC network, as shown in the middle column of Figure 1.1. The IEC's gateway also acts as a bridge across networks that operate at different speeds and protocols (rules for exchanging messages). The sender's message may make its way across several IECs before finding a gateway that connects the sender with the receiver. When the recipient answers his or her cell phone, the nearest cellular tower transmits the "connection signal" to a gateway within the wired IEC network. A roaming cell phone registers with local towers, so the IEC and LEC switches can find the cell phone regardless of where it is. Eventually, the network makes a connection between the sender and the recipient's local tower. This all happens at nearly the speed of light, so the consumer does not notice a long delay.

The most interesting feature of Figure 1.1 is this: to work as one integrated communications system, cellular, satellite, and other means of communication depend entirely on the wired landlines. The old AT&T long-haul lines (and others) are the backbone of the national system. Cellular transmission has a range of about 3 miles, so cell phones actually connect to landlines via a nearby tower. Transcontinental cell phone calls are not possible without the long lines maintained by the wired IECs. Even satellites depend on the terrestrial wired lines. Ground stations often provide feeds to television and other broadcast media, but they need a stable and reliable wired landline network to operate. These systems are interdependent, and together they form a very large and complex system.

The logical structure of Figure 1.1 shows that the telecommunications sector is a system, rather than a loose collection of local components and unrelated assets. As such, this system's resiliency depends on its "architecture" more than on the resiliency of individual components. For example, the highly redundant IEC layer has many POP and gateway switches, so the failure of one has minor implications for the reliability of the overall system. It begs the question, however, of which components are most critical, where criticality means that failure of a critical component can lead to failure of the entire system.

Security analysts often overlook system analysis of infrastructure, preferring to perform asset analysis because it is easier. Analysts tend to pay more attention to local incidents involving a single asset such as a bridge, building, or computer. In reality, simple components frequently fail without bringing down an entire system. It is well known that natural disasters are distributed according to a power law, whereby inconsequential failures occur with high frequency and high-consequence failures occur with much less frequency [3]. However, the telecommunication system is a product of human engineering and Congressional regulation, and the forces that have shaped the sector have also made it more vulnerable to targeted attacks than it would be if it were a product of nature. Therefore, it is important to ask, "What is critical in critical infrastructure protection, and in particular, telecom infrastructure?" Criticality is at the core of system analysis. The answer to this question is found by investigating the architecture of the system, which in turn leads to understanding the critical components whose failure might bring down the entire system. Because we cannot afford to protect everything, we opt to protect the most critical components. This is the strategy called network analysis, and it borrows heavily from the network science literature [4].
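The layered journey described above can be made concrete with a toy graph model. This is an illustrative topology of the editor's invention (networkx assumed), not an actual carrier network:

    import networkx as nx

    # A message's path from a home to a distant cell phone, in the spirit of
    # Figure 1.1: up through the LEC, across the IEC backbone, and down
    # through a cellular gateway and tower.
    G = nx.Graph()
    G.add_edges_from([
        ("home", "headend"),
        ("headend", "end office"),
        ("end office", "central office"),
        ("central office", "POP east"),
        ("POP east", "POP west"),            # IEC long-haul backbone
        ("POP west", "cellular gateway"),
        ("cellular gateway", "cell tower"),
        ("cell tower", "cell phone"),
    ])

    print(nx.shortest_path(G, "home", "cell phone"))
    # ['home', 'headend', 'end office', 'central office', 'POP east',
    #  'POP west', 'cellular gateway', 'cell tower', 'cell phone']

Every path in this toy model crosses the backbone nodes, which previews the chapter's argument: the architecture, not any single asset, determines where the system is fragile.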

1.5 RESILIENCY OF NETWORKS

To understand the criticality of telecommunication system components, we need to understand the fundamentals of network theory, because the structure of large and complex networks has a major impact on their survivability under stress. Ideally, we want networks to continue to operate under less-than-perfect conditions. A plausible goal is to operate any infrastructure system as a robust network that is difficult to bring down even when purposely attacked. This analysis is limited to single-node attacks.

The telecommunications network is a bidirectional network because messages flow in both directions. A failure of one switch may or may not halt bidirectional flow because of redundant or alternate paths. However, failure of a switch may reduce the overall network capacity or speed. Resilience can have several definitions: as a measure of the impact of an attack on the overall structural integrity of the network, or as a measure of the decline of output from the network (in terms of number of messages per unit time). We study both: the structural integrity of the telecom network under cascade failure, which is a single-node outage that ripples through the network taking down affected nodes; and the decrease of output flow from a network in which messages flow from source to sink node. Cascade failure analysis determines the resilience of the static structure (architecture) of the network, while flow failure analysis determines the resilience of commodity flow through a network.

More formally, cascade failure resilience is defined as the fraction of nodes that continue to operate following the failure of a single node:

    Rc = 1 − (Ic/n)

where Rc is cascade resilience, Ic is the number of "infected" or damaged nodes, and n is the number of nodes in the network. Flow resilience, in contrast, is defined as the fraction of commodity flowing into sink nodes after a single node is attacked, compared to beforehand:

    Rf = F1/F0

where Rf is flow resilience, F1 is the flow after an attack, and F0 is the flow before the attack. We can express both measures as a percentage in the interval (0, 100%).

The cascade resilience parameter Ic is estimated by simulating an epidemic or contagion that spreads from the attacked node to adjacent nodes with probability γ, the infection rate; infected nodes are repaired τ time steps after failure. Once repaired, a node stays repaired, simulating an SIR (Susceptible-Infected-Recovered) epidemic. Ic is obtained by merely counting the total number of nodes infected from onset to recovery of all nodes. (We could also use the peak number infected, but this is left as an exercise for the reader.)

The flow resilience parameters F0 and F1 are estimated by simulating flows emanating from a single source node to a single sink node. A single source is constructed by aggregating all nodes with zero inputs; similarly, a single sink is constructed by aggregating all nodes with zero outputs. In simulations conducted by the author, link capacities are all the same and flows out of a given node are evenly divided among the outgoing links. Flows into a node are summed; it is possible for nodes to overflow or underflow. We ignore overflows and limit the amount of message flow through a link to the link's maximum capacity. In a telecommunication network, capacity is equivalent to bandwidth.
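The cascade measure is straightforward to estimate in code. The sketch below illustrates the idea under stated assumptions (networkx, a Barabási–Albert graph standing in for "scale-free", and the chapter's γ = 0.25 and τ = 5); it is an editorial illustration, not the simulator used for the results reported later:

    import random
    import networkx as nx

    def cascade_resilience(G, attacked, gamma=0.25, tau=5, seed=1):
        """Estimate Rc = 1 - Ic/n for a single-node attack on graph G.

        gamma is the per-step probability that an infected node infects a
        neighbor; tau is the number of time steps until an infected node is
        repaired. Repaired nodes stay repaired (SIR dynamics).
        """
        rng = random.Random(seed)
        infected = {attacked: 0}     # node -> time step at which it was infected
        recovered = set()
        ever_infected = {attacked}
        t = 0
        while infected:
            t += 1
            newly = set()
            for node, t0 in list(infected.items()):
                for nbr in G.neighbors(node):
                    if nbr not in infected and nbr not in recovered \
                            and rng.random() < gamma:
                        newly.add(nbr)
                if t - t0 >= tau:    # repaired, and immune thereafter
                    recovered.add(node)
                    del infected[node]
            for node in newly:
                infected[node] = t
                ever_infected.add(node)
        Ic = len(ever_infected)      # total "infected" from onset to recovery
        return 1 - Ic / G.number_of_nodes()

    # Attack the hub of a 50-node scale-free network (the sizes follow the
    # text's 50-node, 100-link experiments only approximately).
    G = nx.barabasi_albert_graph(50, 2, seed=7)
    hub = max(G.nodes, key=G.degree)
    print("cascade resilience after a hub attack:", cascade_resilience(G, hub))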

1.5.1 Hubs, Clusters, and Betweeners

We compare the resiliency of three fundamental network structures: random, clustered, and scale-free. A random network is one in which nodes are connected to one another by randomly selecting pairs of nodes and linking them. Clustered networks contain highly connected neighborhoods in which a node's neighbors also tend to be connected to one another. That is, if node A is connected to nodes B and C, then connecting nodes B and C forms a triangle; A, B, and C form the simplest possible fully connected network. The cluster coefficient of a single node is the number of triangles it is a part of, relative to the maximum number possible. A network-wide metric, also called the cluster coefficient, quantifies the amount of clustering in a network by averaging the node cluster coefficients across all nodes.

Figure 1.2 illustrates hubs, clusters, and betweenness. Figure 1.2a shows an unclustered random network. Figure 1.2b shows a highly clustered network. The central node of Figure 1.2b has k = 4 links connecting it to 4 neighbors. Thus, there are potentially k(k − 1)/2 = 4(3)/2 = 6 triangles that can be formed among its four neighbors. The central node belongs to 4 of these triangles, so its cluster coefficient is 4/6 = 0.67.

A scale-free network has a very definite structure: most nodes have only a few link connections, while one or two nodes have a very large number of connections. A node with five connections (and therefore five links) has a degree of five. A network's hub is the node with the highest degree. Figure 1.2c illustrates a scale-free network containing a hub of degree five.
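These definitions map directly onto standard library calls. A small sketch follows (networkx assumed; the wiring is an editorial toy reconstruction of the Figure 1.2b idea, not the figure itself):

    import networkx as nx

    # Central node 0 with k = 4 neighbors, and 4 of the 6 possible links
    # among those neighbors, so node 0 sits in 4 of its k(k-1)/2 = 6
    # possible triangles.
    G = nx.Graph()
    G.add_edges_from([(0, 1), (0, 2), (0, 3), (0, 4)])  # hub-and-spoke links
    G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1)])  # neighbor-to-neighbor links

    print(nx.clustering(G, 0))       # 0.666..., i.e. 4/6, node cluster coefficient
    print(nx.average_clustering(G))  # network-wide cluster coefficient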

1.5.2 Betweenness

Figure 1.2d illustrates another metric that will become useful later in the discussion of networks. Betweenness is a measure of intermediary power in a network. It is simple, but laborious, to calculate. Betweenness measures how many paths pass through a node (or link) from all other nodes. To compute it, tally the number of shortest paths passing through a node or link, from and to all other nodes, for all nodes and links. Then scale the tallies to the unit interval (0, 1) by dividing them by the largest tally. The normalized number is the node or link betweenness. Figure 1.2d shows three of the nine paths passing through node 3, from and to all other nodes. Node 3 is a kind of gatekeeper because it indirectly connects other nodes to each other.

FIGURE 1.2 Illustration of network topologies and the betweenness metric: (a) random; (b) clustered; (c) scale-free, with its hub marked; (d) betweener.

A network's betweener is the node or link with the highest betweenness value. In a sense, the betweener is a critical traffic node, because more messages can pass through the betweener node or link than through any other node or link. This measure is useful in modeling network flows, because the betweener can control the majority of flow through a network. Therefore, betweenness is an important metric for measuring resilience in flow networks such as pipeline, transmission, or traffic networks. We want to simulate the impact of taking out the betweener on the flow resiliency of a telecom network.
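In practice, the hub and the betweener of a network can be found with a few lines of code. A hedged sketch (networkx assumed; note that networkx normalizes betweenness by the number of node pairs rather than by the largest tally as in the text, but the ranking, and hence the betweener, is the same):

    import networkx as nx

    # Identify the hub and the betweener of an illustrative 50-node network.
    G = nx.barabasi_albert_graph(50, 2, seed=7)

    hub = max(G.nodes, key=G.degree)                    # highest-degree node
    bc = nx.betweenness_centrality(G, normalized=True)  # shortest-path tallies
    betweener = max(bc, key=bc.get)

    print("hub:", hub, "with degree", G.degree(hub))
    print("betweener:", betweener, "with betweenness", round(bc[betweener], 3))
    # In scale-free networks the hub and the betweener often coincide, which
    # is why hub attacks and betweener attacks have similar consequences.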

1.6 RESILIENCE RESULTS

Now we turn to the subject of the resiliency of a network. What constitutes "failure" in a network? If a single node or link fails, does it mean the entire network fails? If a "critical node or link" fails, does this equate to network failure? The 2003 Blackout of the Eastern Power Grid, triggered by a single link failure, propagated across the entire network, disabling a large portion of it. The 2003 cascade failure acted much like an epidemic sweeping through a population. Therefore, experience tells us that epidemics are one form of failure that can bring down an entire network. Are telecommunication networks subject to epidemics? Yes. The spread of computer viruses and worms behaves very much like the spread of a virus in human populations. On a physical level, the outage of one node affects adjacent nodes, because continuity depends on predecessor and successor nodes.

1.6.1 Cascade Resiliency

Network epidemics or network storms are observed in many infrastructures, in such forms as traffic jams, floods, power outages, communication outages, and e-mail congestion. In social network theory, well-known contagions such as the H1N1 flu spread through human contact (links), and either persist (never go away) or die out. Similarly, an important question for homeland security is, "Do infrastructure epidemics live forever or eventually die out, once initiated within a network?"

Simulation of attacks on random, cluster, and scale-free networks shows how vulnerable networks are to targeted attacks. We generated networks with 50 nodes and 100 links, and randomly connected node-pairs to obtain a random network. Similarly, we generated cluster networks from random networks by repeatedly rewiring links such that the cluster coefficient of the entire network increased. This emergent process terminated when the overall network cluster coefficient reached 50% [4]. Finally, starting with a random network, we repeatedly rewired links chosen at random, but linked by preferential attachment to a randomly chosen node, until the largest node had a degree at least four times the average. This produces a scale-free network with a degree sequence distribution that obeys a power law.

In each case, an attack on a single node initiates an epidemic, which spreads to its neighbors with an infection rate of γ = 25%. Infected nodes are repaired in τ = 5 time steps, and once repaired, remain immune to subsequent infection. This simulates a susceptible-infected-recovered (SIR) epidemic, producing epidemics that follow a very simple Kermack–McKendrick model [5] of logistic growth of the infection followed by a rapid decline as nodes mend and return to an operative state. As it turns out, the number of nodes infected over the life of the epidemic is a measure of resiliency. Moreover, this number is different for random, clustered, and scale-free networks.

Figure 1.3 summarizes the results. Attacks on hubs are the most consequential (89% of the nodes are "infected"), while attacks on high cluster coefficient nodes are the least consequential (a minimum of 17% for cluster attacks, and only 23% for hub attacks on cluster networks!). Clearly, attack strategy and network topology both bear on resiliency. Cluster networks are more resilient than scale-free or random networks!

A number of investigators have shown that scale-free networks are more tolerant of random attacks, but highly vulnerable to targeted attacks on the network's hub [6]. This makes sense, because hub removal also removes the maximum number of links. It has also been shown that scale-free networks are more likely to sustain persistent epidemics, which is an important result for the telecommunication sector because computer viruses spread like epidemics. In fact, Wang and others confirmed this finding in an elegant and convincing network model that relates topology to persistence [7]. The Wang et al. model says that network connectivity, as defined by the spectral radius ρ of the network, relates to persistence as follows:

    ργ − δ > 0

If this relation holds, the contagion persists forever! Otherwise, it eventually dies out. Here ρ is the spectral radius, γ is the infection rate, and δ is the repair rate. The higher ρ, the more likely the epidemic never disappears from the network. And scale-free networks have higher ρ than equivalent random and cluster networks.
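The persistence condition is easy to check for a given topology. A hedged sketch follows (networkx and numpy assumed; it reuses the chapter's γ = 0.25 and takes the repair rate to be δ ≈ 1/τ = 0.2, an assumption, with standard generator graphs standing in for the rewired topologies described above):

    import networkx as nx
    import numpy as np

    # Evaluate the persistence condition rho*gamma - delta for two topologies.
    gamma = 0.25   # infection rate, as in the chapter's simulations
    delta = 1 / 5  # repair rate; assumed here to be roughly 1/tau

    for name, G in [
        ("random", nx.gnm_random_graph(50, 100, seed=3)),
        ("scale-free", nx.barabasi_albert_graph(50, 2, seed=3)),
    ]:
        A = nx.to_numpy_array(G)
        rho = max(abs(np.linalg.eigvals(A)))  # spectral radius of adjacency matrix
        margin = rho * gamma - delta
        print(f"{name}: rho = {rho:.2f}, rho*gamma - delta = {margin:+.2f}"
              f" -> contagion {'persists' if margin > 0 else 'dies out'}")

The scale-free graph typically shows the larger spectral radius, consistent with the claim that hub-dominated topologies are the ones most likely to sustain persistent epidemics.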

FIGURE 1.3 Cascade failure in random, cluster, and scale-free networks: consequence versus attack type (hub, cluster, random) versus network structure.

The opposite is true for cluster networks: they are tolerant of random and targeted attacks (because they have no hubs), and they are less likely to sustain recurring epidemics. Telecommunication networks are never clustered; rather, they tend to be scale-free, so they are more susceptible to persistent epidemics. Figure 1.3 suggests that clustering improves resiliency against cascade failures by a factor of 3–4 relative to targeted attacks on scale-free networks. Cluster networks are tolerant of both random and targeted attacks because they have more link redundancy than either scale-free or random networks. Interestingly, the Western US Electric Power Grid is a high cluster coefficient network, which means it should be more resilient than the Eastern power grid. According to Kendall [8], the reverse is true: Western grid outages exceeded Eastern and ERCOT outages during the period 1993–1999. This unexpected result is perhaps due to SOC, a topic described in more detail below.

Hub attacks are more consequential than any other kind of attack. The lesson is clear: protect the most critical nodes and links as defined by degree. An economical strategy with maximal return is to protect hubs and perhaps a few other high-degree nodes within the telecom sector.

1.6.2 Flow Resiliency

Another form of "failure" in networks has to do with networks that supply some sort of commodity. A supply chain network such as a gas and oil pipeline network, a water supply system, or an airplane supply chain provides a commodity that flows from source to sink. It is important for sink nodes to keep up with demand, and to do so, flow must be continuous and not interrupted by the failure of a node or link along the way. In a telecommunication network, the commodity is voice, data, e-mail, etc. Thus another kind of failure is denial of service due to reduction or termination of the flow of messages through a network. An arbitrary network likely has many alternative paths from source to sink along which a commodity can flow, so removal of a single node or link may or may not have much impact. This begs the question, "Does network structure matter to networks that supply a commodity such as e-mail messages or voice streams?"

If we treat the telecommunications sector as a flow network, where e-mail, voice mail, and other digital information such as photographs flow from sender to receiver, what constitutes a threat to continuity of flow? Simulation of attacks on directional random, cluster, and scale-free networks with a single source and a single destination node produces much the same results as shown in Figure 1.3, except that the consequences are measured in terms of flow resiliency, Rf = F1/F0. Given an input of, say, 100 units, and a demand of 100 units, what fraction actually reaches the destination node? If a node failure reduces output from 100 to 70 units, then Rf = 70/100 = 0.7 = 70%.

FIGURE 1.4 Simulation results for flow networks: consequence versus attack type (betweenness, hub, cluster, random) versus network structure.

Simulation results confirm that hubs are more critical than all other nodes except for high-betweenness nodes (Fig. 1.4). Random and cluster networks are more resilient, but still suffer relatively high consequences when their hubs and betweeners are attacked. Scale-free networks suffer even higher consequences because their hubs are larger. In many cases, the hub is also the betweener; this is why the consequences are similar. (Betweenness is a measure of the number of paths running through a node.) Therefore, telecommunication networks are more vulnerable to hub and betweener attacks than to random or cluster attacks.
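A max-flow toy model illustrates the Rf calculation. This is a miniature network of the editor's invention (networkx assumed); note that the chapter's simulator divides flow evenly among outgoing links rather than solving max-flow, so this is an analogy, not a reproduction:

    import networkx as nx

    # Flow resilience Rf = F1/F0 on a toy directed network with equal link
    # capacities. Node "c" is the high-betweenness intermediary.
    G = nx.DiGraph()
    G.add_edges_from(
        [("s", "a"), ("s", "b"), ("a", "c"), ("b", "c"), ("a", "d"),
         ("c", "t"), ("d", "t")],
        capacity=100,
    )

    F0, _ = nx.maximum_flow(G, "s", "t")  # flow before the attack

    H = G.copy()
    H.remove_node("c")                    # attack the betweener
    F1, _ = nx.maximum_flow(H, "s", "t")  # flow after the attack

    print("Rf =", F1 / F0)                # here 100/200 = 0.5, i.e. 50%

Removing the betweener halves the deliverable flow even though most of the network is intact, which is the flow-network version of the criticality argument.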

1.7 TELECOMMUNICATIONS CRITICALITY

Has the telecommunications sector reached a state of resilience, or the opposite? After 100 years of evolution, do we have a more or less secure and resilient communication network? This author claims that the sector has become less resilient and, therefore, more likely to fail than ever before. The argument for loss of resiliency is based on two main observations: (i) after one hundred years of economic, regulatory, and technical pressure to be efficient, profitable, and politically acceptable to the Congress, the system has evolved into a weakened state, culminating with the 1996 Telecommunications Act; and (ii) like many massively large and complex infrastructure networks with a long history of evolution, the telecom industry has reached a state of SOC. This claim is supported by the existence of critical nodes in the sector's scale-free structure. These two ideas are partners: evolution from randomness to a state of SOC, resulting in a network with hubs and betweeners.

1.7.1 The Hubs: Telecom Hotels

NSTAC identified telecom hotels as the number one vulnerability in the communications sector in 2003 [9] and recommended their protection as the top priority of homeland security. The report defines a telecom hotel: "Telecom Hotel: Conditioned floor space owned and operated by a commercial Landlord for the purpose of hosting multiple service providers. Tenants may include the incumbent ILEC, competitive local exchange carriers (CLEC), Internet service providers (ISP), competitive access providers (CAP), Web hosting operations, or any other non-telecommunications commercial enterprises in need of floor space to support their electronic equipment" [9].

For example, One Wilshire Boulevard, a 30-story, 656,000 square foot building in downtown Los Angeles, is the single most important point of connectivity in the Western United States. It houses over 240 telecommunications companies, provides up to 75 watts of power per square foot, and connects the United States with most of Asia. Backup power of 8 megawatts and 11,000 gallons of diesel provide resilience in case of power outages.

Details of telecom hotels, along with their security and resiliency provisions, are openly advertised on the Web (www.carrierhotels.com). Thus they are not only critical, but in the open. Their vulnerability, however, is perhaps low, because their importance to the entire sector is widely known, leading to exceptional security measures. Nonetheless, the high concentration of LEC and IEC switching equipment in one location raises concern. The first line of the NSTAC report says, "The Administration has expressed concern that the concentration of multiple entities' telecommunications assets in specific locations may have implications for the security and reliability of the telecommunications infrastructure" [9].

1.7.2 Self-Organized Criticality

The mere existence of telecom hotels is no accident. In fact, these highly concentrated assets are a direct consequence of the evolutionary forces described above, leading up to and including the 1996 Telecommunications Act, which created telecom hotels in its wake. Specifically, the 1996 Act promotes peering and allows for the sharing of facilities among competitors. Technically, it is better to peer with one's competitors by co-locating switches and routers in the same building, because doing so reduces time delays in transmission. Economically, the costs of housing, powering, and servicing an aggregated collection of equipment are lower, because they can be amortized over many companies. Politically, the Congress sought to increase innovation by encouraging collaboration between competitors. These forces set in motion the familiar preferential attachment, leading to highly connected nodes housed under one roof. Telecom hotels are the most connected hubs of the communications sector.


Regulation leads to increasing returns, which leads to hubs, which in turn leads to SOC. SOC is a property of a complex adaptive system that operates near or at its tipping point [10]. Originally studied by Turcotte and colleagues [11, 12], definitions of SOC vary across fields of study. More formally, SOC is a property of a system that operates at or near a critical point dividing chaos and stability. Consequently, a slight change in system state tips the system in either direction, from stability into instability or the reverse. Asset concentration in the telecommunications sector has been building for over 100 years. Initially, telephone companies were local and isolated. The telecom wars produced consolidation under AT&T. The breakup of AT&T into Baby Bells had little effect on the communications network, because the separate companies used the same telephone lines and switches. During this period of asset consolidation the communication sector was perhaps a cluster network, rather resilient against hub and betweener attacks. Then the 1996 Telecom Act accelerated increasing returns; leading to fewer than 20 major hubs among the top 30 routes (Fig.1.5). Note that Chicago is both the hub and the betweener of this network. A similar analysis (not shown here) confirms this theory for the top dozen AS-level Internet service providers, the Tier-1 ISP network . These major hubs handle e-mail, web, and voice-over-IP traffic and inevitably process everyone’s Internet transactions. Their Autonomous System number identifies them, such as AS#1234. The large ISP located in Reston, VA. (AS#1239) is both betweener and hub of the top-level peers. Montreal (AS#6453) and Broomfield, CO (AS#3356) rank second. No one has studied in detail the system-wide consequences of a targeted attack on these ASs, so it is unknown how vulnerable the Internet is to removal of these critical nodes. For a variety of reasons, the national telecommunications sector is optimized for high performance and low cost. Peering and the 1996 Telecom Act have moved the sector

FIGURE 1.5 Network of the top 30 telecom routes. [2002 Telegeography Research, www.telegeography.com].


For a variety of reasons, the national telecommunications sector is optimized for high performance and low cost. Peering and the 1996 Telecom Act have moved the sector closer to its SOC, as evidenced by a few heavily connected hubs, highly concentrated in a handful of large telecom hotels, connected by thousands of links. As a result, vulnerability to cascade and perhaps flow attacks has increased since deregulation. Removal of a small proportion of highly connected nodes could result in major telecommunication outages. While the conclusion drawn here has not been verified in practice, because we have no historical precedent for such a calamity, a similar SOC state exists for the electric power grid [13]. Dobson et al. extensively analyzed electrical power grids from around the world and made a strong case for SOC. Historical data supports Dobson's results, but such data is lacking in the telecommunications sector. Clearly, the power grid is different from the telecommunications network, but it suffers from similar regulatory forces. Without similar analysis, SOC of the communications infrastructure remains speculative.
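A back-of-the-envelope cascade model makes the concern concrete. The sketch below is an illustration added here, loosely following the capacity-and-load style of cascade models used in the blackout literature; the topology, initial loads, and 20 percent headroom are all assumptions, not sector data. It fails a single hub and lets overloaded neighbors fail in turn.

```python
# Illustrative load-redistribution cascade on a stand-in scale-free
# graph (all parameters assumed). Assumes the networkx library.
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=2, seed=7)
alpha = 0.2  # 20% spare capacity: an arbitrary illustrative tolerance
load = {v: float(d) for v, d in G.degree}
capacity = {v: (1 + alpha) * l for v, l in load.items()}

# Seed the cascade by failing the single most-connected node (the hub).
failed = {max(load, key=load.get)}
frontier = set(failed)
while frontier:
    overloaded = set()
    for v in frontier:
        alive = [u for u in G.neighbors(v) if u not in failed]
        if not alive:
            continue
        share = load[v] / len(alive)  # shed the failed node's load evenly
        for u in alive:
            load[u] += share
            if load[u] > capacity[u]:
                overloaded.add(u)
    failed |= overloaded
    frontier = overloaded

print(f"One hub failure cascaded to {len(failed)} of {G.number_of_nodes()} nodes")
```

Near the critical tolerance, one hub failure can propagate far beyond its neighborhood; with generous headroom it stays local. That sensitivity to a small parameter change is exactly the tipping-point behavior SOC describes.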

1.8 FINAL ANALYSIS

The telecommunications sector is an example of a scale-free network containing hubs (telecom hotels). This topology is a direct result of economic, regulatory, and technical forces that have shaped the sector over a period of 100 years. Perhaps the most dramatic step toward SOC occurred after the 1996 Telecommunications Act, which created the telecom hotels. Whether or not these will turn out to be the Achilles' heel of the sector remains to be seen. Currently, there is no historical data to support the conjecture that telecom hotel failure will lead to a telecommunications Pearl Harbor. The components of the telecommunications sector in need of protection are the telecom hotels, major points of presence (POP), network access points (NAP), and root servers. This adds up to perhaps 100–200 sites at the high end, and as few as 26 sites at the low end. While the 13 root servers of the Internet have gone into hiding, there are 26 widely known NAP servers around the world that are critical to the security of the Internet. A full simulation of the effects of a cascade failure caused by a virus or physical attack on one of these critical nodes has yet to be carried out. This remains an open research question. On a longer time scale, the telecommunications sector may evolve even closer to its SOC, unless policies are changed. Congressional regulation and economic forces have shaped the sector, so they can also be employed to reshape it. If Congress were to enact legislation that resulted in the dispersal of telecom hotels, fewer betweeners, and increased path redundancy, the sector would move away from its tipping point. Under this scenario, the network architecture would revert to a clustered or random network. In other words, an alternative strategy is to restructure the network instead of hardening critical nodes.

REFERENCES

1. The National Infrastructure Protection Plan, Department of Homeland Security. (2006). www.dhs.gov.
2. Lewis, T. G. (2006). Critical Infrastructure Protection: Defending a Networked Nation. John Wiley & Sons, Hoboken, NJ, p. 500.
3. Malamud, B. D., and Turcotte, D. L. (2006). The applicability of power-law frequency statistics to floods. J. Hydrol. 322, 168–180. www.elsevier.com/jhydrol.
4. Lewis, T. G. (2009). Network Science: Theory and Applications. John Wiley & Sons, Hoboken, NJ, p. 450.
5. Kermack, W. O., and McKendrick, A. G. (1927). A contribution to the mathematical theory of epidemics. Proc. R. Soc. London, Ser. A 115, 700–721.
6. Albert, R., Jeong, H., and Barabasi, A. (2000). The Internet's Achilles' heel: error and attack tolerance of complex networks. Nature 406, 378–382.
7. Wang, Z., Chakrabarti, D., Wang, C., and Faloutsos, C. (2003). Epidemic spreading in real networks: an eigenvalue viewpoint. Proceedings of the 22nd International Symposium on Reliable Distributed Systems, ISSN 1060-9857.
8. Kendall, G. (2001). Power outages during market deregulation. IEEE Control Syst. Mag. 21, 33–39.
9. NSTAC Task Force on Concentration of Assets: Telecom Hotels, February 12, 2003, National Security Telecommunications Advisory Committee, www.ncs.gov/nstac/reports/2003/Telecom%20Hotels.pdf.
10. Gladwell, M. (2002). The Tipping Point. Little, Brown and Company, New York, p. 301.
11. Turcotte, D. L., Smalley, R. F., and Solla, S. A. (1985). Collapse of loaded fractal "trees". Nature 313(6004).
12. Turcotte, D. L. (1999). Self-organized criticality. Rep. Prog. Phys. 62, 1377–1429. DOI: 10.1088/0034-4885/62/10/201.
13. Dobson, I., Carreras, B. A., Lynch, V. E., and Newman, D. E. (2007). Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization. Chaos 17(2).

FURTHER READING

Barabasi, A.-L. (2003). Scale-free networks. Sci. Am. 288(5), 60–69.
Kuhn, D. R. (1997). Sources of failure in the public switched telephone network. IEEE Comput. 30(4), 31–36.


2 STRATEGIES FOR PROTECTING THE TELECOMMUNICATIONS SECTOR

John Sullivant
S3 E–Sisters Three Entrepreneurs Security Consultants Company, West Hollywood, California and Magallanes Associates International (MAI), Thousand Oaks, California

2.1 INTRODUCTION

To provide for the economic and national security of America, it is essential that we establish and maintain a telecommunications capability adequate to satisfy the needs of the nation during and after any national emergency. We now live in a world that is increasingly dependent on information and the technology that allows us to communicate and do business globally, at the speed of light. Information has always been time-dependent, but it is more so today than ever. The composition of the telecommunications sector evolves continuously due to technology advances, business and competitive pressures, and changes in the marketplace and regulatory environment. Despite its dynamic nature, the sector has consistently provided robust and reliable communications and processes to meet the needs of businesses and governments [1].

2.2 BACKGROUND

2.2.1 A Historical Perspective

The Telecommunications Sector has evolved from a predominantly closed and secure wire-line telecommunications network focused on providing equipment and voice services into a diverse, open, technologically sophisticated, highly interconnected, and complex industry with a wide array of infrastructure that spans critical aspects of the US government, economy, and society. Three distinct policy events have shaped the course of the modern-day telecommunications industry. The first event was the 1984 court-ordered breakup of AT&T [2], a company that controlled the majority of the local and long distance markets. The second event was the passage of the Telecommunications Act of 1996 [3], which opened local PSTN (public switched telephone network) service to competition.


It required incumbent carriers to allow their competitors open access to their networks. As a result, carriers began to concentrate their assets in collection facilities and other buildings known as telecom hotels, collection sites, or peering points instead of laying down new cable. Internet Service Providers (ISPs) also gravitated to these facilities to reduce the costs of exchanging traffic with other ISPs. Open competition has caused the operation of the PSTN and the Internet to become increasingly interconnected, software-driven, and remotely managed, while the industry's physical assets are increasingly concentrated in shared facilities. The third event was the horrific attacks of 2001, which led to a sweeping reorganization of the US government: the creation of the US Department of Homeland Security (DHS), the realignment of security responsibilities of several government agencies under this new entity, and a wave of laws and national actions aimed at redefining the importance of the telecommunications and other sectors and their threat environment. Homeland security is a concerted national effort to prevent terrorist attacks within the United States, reduce America's vulnerability to terrorism, and minimize the damage and recover from attacks that do occur [4]. In response to the attacks of 2001, the DHS issued a National Infrastructure Protection Plan (NIPP) in June 2006. This plan establishes a comprehensive, unifying structure and overall framework for the integration of national critical infrastructure and key resources protection efforts into a single national security program. It requires designated lead federal agencies to work with the private sector to address how sector stakeholders should implement the national strategy and protection measures, and how they can improve the security of their assets, systems, networks, and functions [5]. The telecommunications sector, the theme of this chapter, is one of the 18 major industries identified in the NIPP as a national critical infrastructure. These sectors are diverse, operate in every state, and affect every citizen, private and public entity, and the government at every level. The National Telecommunications Sector-Specific Plan augments the NIPP and describes the collaborative efforts among state, local, and tribal governments; nongovernmental organizations; and the federal government to secure the sector from a terrorist attack or other disaster. The plan offers a road map for prioritizing protection initiatives within and across the sector to ensure risk mitigation by lowering vulnerabilities, deterring threats, and minimizing the consequences of attacks and other incidents [1].

2.2.2 What Makes Up the Telecommunications Sector?

The Communications Sector is integrally linked with the Information Technology (IT) Sector. In general usage, the two are often referred to together under the common name "Telecommunications Sector." Driven by twenty-first-century technology transformation and convergence, the Communications and IT Sectors are becoming more closely aligned and eventually will merge into one entity. For the purposes of this chapter, we will consider that merger to have taken place. The Telecommunications Infrastructure Sector [6] is a complex system of systems that incorporates multiple technologies and services with diverse ownership. More than 85 percent of telecommunications-related assets, systems, and networks are owned and operated by the private sector. Some owners and operators are government or quasi-government entities.


The infrastructure includes wireline, wireless, satellite, cable and broadcasting, and the transport networks that support the Internet and other information systems. The sector provides voice and data service to public and private users through a complex and diverse public network infrastructure encompassing the PSTN, the Internet, and private enterprise networks. The PSTN provides switched circuits for telephone, data, and leased point-to-point services. It consists of physical facilities, access tandems, and other equipment. The physical PSTN remains the backbone of the infrastructure, with cellular, microwave, and satellite technologies providing extended gateways to the wireline network for mobile users. The Internet and private enterprise networks are key resources, comprising the domestic and international assets within both the IT and Communications Sectors, and are used by all other sectors to varying degrees.

2.2.3 How Do We Secure the Telecommunications Sector?

Much of the expertise required for planning and taking action to protect telecommunications assets lies outside the federal government, including precise knowledge of what needs to be protected [7]. The sector has historically factored natural disasters and accidental disruptions into its network resiliency architecture, business continuity, and emergency preparedness planning strategies. The interconnected and interdependent nature of these service provider networks has fostered information sharing, cooperative response, and recovery relationships for decades. Since a problem in one service provider's network nearly always impacts networks owned and operated by other providers, the community has a long-standing tradition of cooperation and trust, even in today's highly competitive business environment. Owners and operators have always been responsible for protecting their physical assets against unauthorized intruders. These measures, however effective in the past, generally have not been designed to cope with significant military or terrorist threats or the cascading economic and psychological impacts they may entail [8]. Planning to defend against a terrorist attack is a relatively new phenomenon for the industry. With the wide range of owners and operators, companies, technologies, and government interests that make up the telecommunications community, it is important to find common ground in establishing sector security goals. Despite initial variances in agreeing on a single strategic security vision, much headway has been made, and new security enhancement initiatives continue to emerge as new technologies are developed and employed. Moreover, the telecommunications sector recognizes that other critical infrastructures are highly dependent on its services for basic operations. In this respect, interconnection, interoperability, and security are achieved through technology standards, regulations, carrier agreements, and inter-carrier cooperation, enabling the infrastructure to operate effectively and to rapidly restore networks after a disaster. Resiliency is achieved through the technology, redundancy, and diversity employed in the network design, and by customers who plan for and employ diverse and resilient primary and backup communications capabilities [9]. Although industry partners maintain and protect the core backbone of the network, the shared assets and systems, and the facilities connecting these assets to the customer premises, customers are largely responsible for developing and employing mitigation strategies for access to their portion of the network through continuity of operations planning [10].


2.2.4 What Are Critical Telecommunications Infrastructure Systems and Assets?

The U.S. Patriot Act defines critical infrastructure as "systems and assets, physical or virtual, so vital to the U.S. that the incapacity or destruction of such systems and assets would have a debilitating impact on national security, national economic security, national public health and safety, or any combination of these matters." Telecommunications, IT, and cyberspace systems, functions, and assets fall under this definition [11].

2.2.5 What is the U.S. Policy on Protecting National Critical Telecommunications Services and Computer-Driven Systems?

The Telecommunication Infrastructure and its computer-driven systems are essential to the nation's security, public health and safety, economic vitality, and way of life. Within the government, these systems process and communicate classified national security information concerning the vital interests of the United States. This is necessary to gather intelligence, conduct diplomacy, command and control military forces, provide continuity of essential functions of government, and reconstitute the political, economic, and social structure of the nation. A survivable communications system is a vital component of our deterrent posture for national defense. Within the private sector, these systems process and communicate sensitive business, financial, and other competitive information vital to the interests of specific enterprises. Computer-driven systems monitor, operate, and maintain prime systems in all commerce activity: Banking and Finance; Busing and Ocean Liners; Chemical; Civil Aviation; Commercial Real Estate; Defense Industrial Base; Drinking Water and Water Treatment; Education and Research; Emergency Services; Energy (electric, oil, gas, and nuclear); Food and Agriculture; Manufacturing and Construction; Postal and Shipping; Public Health and Healthcare; Seaports and Staging Areas; and Trucking and Distribution Terminals. These services provide for the economic stability and survivability of the nation. The national policy [12] to protect telecommunications has evolved from experiences and events stemming from previous national security incidents, natural disasters, and industrial mishaps. A family of policy directives and federal laws now formulates the policy of the United States to protect these systems. The overarching goals [13] of the US policy are as follows:

1. Build a safer, more secure, and more resilient America by enhancing the protection of telecommunication assets, operations, functions, and systems.
2. Prevent, deter, neutralize, or mitigate the effects of deliberate efforts by terrorists to destroy, incapacitate, or exploit them.
3. Ensure that disruptions, interruptions, or manipulations of critical functions that do occur are brief, infrequent, manageable, geographically isolated, and minimally detrimental to the welfare of the United States.
4. Strengthen national preparedness, timely response, and rapid recovery in the event of an attack, natural disaster, or industrial mishap.

The national policy to secure cyberspace [14] further articulates five national priorities: (i) the establishment of a security response system, (ii) a threat and vulnerability reduction program, (iii) an awareness and training program, (iv) efforts to secure government cyberspace, and (v) international cooperation. The policy emphasizes that cyber elements should have the following characteristics:
• Be robust enough to withstand attacks without incurring catastrophic damage;
• Be resilient enough to sustain nationally critical operations;
• Be responsive enough to recover from attacks in a timely manner.

The US policy focuses on protection measures that ensure essential government operations, public services, and economic functions are maintained in the event of a terrorist attack, natural disaster, or other type of incident, and that elements of the telecommunication sector are not exploited for use as weapons of mass destruction.

2.3 THREATS, CHALLENGES, AND CONTINUOUS IMPROVEMENT

2.3.1 The General Threat Assessment

The threat to the telecommunications sector did not emerge on September 11, 2001. Long before that day, the nation's electronic systems had been targets of interception and disruption during times of war and armed conflict, and from those who engaged in economic espionage and other criminal activity. In less than one generation, the information revolution and the introduction of the computer into virtually every dimension of our society have changed how our economy works, how we provide for our national security, and how we structure our everyday lives. In the future, computer-related technologies will continue to open new vistas of opportunity for the American people. Yet this new age of promise carries peril within it. All computer-driven systems are vulnerable to intrusion and destruction. A concerted attack on the computers of any one of our critical infrastructures could have catastrophic effects on the economy [15]. The telecommunications sector faces both natural and man-made threats. It is susceptible to cyber threats such as interception, unauthorized electronic access, and related forms of technical exploitation, as well as the hostile intelligence threat. The technology to exploit these electronic systems is widespread and used extensively by foreign nations, terrorist groups, and criminal elements. Other threats confronting the sector include natural threats; cyber threats; workforce threats; explosive, chemical, and biological terrorist threats; and supply chain threats. The sector is also vulnerable to unintentional human error because of its high reliance on human interaction. It is important to mention that the threat assessment performed by the Homeland Infrastructure Threat and Risk Analysis Center (HITRAC) on the Telecommunications Sector identified only a few direct threats, giving the sector a low threat rating. However, the risk for the sector as a residual target is moderate to high, owing to its interdependencies with other critical infrastructure and the significant consequences of loss that could materialize from collateral damage sustained by sector assets and services as a result of a direct attack upon the assets of another sector [16].

2.3.2 The Threat to America's Telecommunications Components

Attacks on telecommunications assets can produce cascading collateral damage for other sectors, far beyond the targeted asset and physical location of the incident. Natural, man-made, or technological hazards could produce catastrophic losses in terms of human casualties, property destruction, and economic effects, as well as profound damage to public morale and confidence.


Conversely, an attack on another sector could affect telecommunications assets. During the 2001 attacks, the telecommunications sector was not a direct target; nonetheless, significant damage impaired telecommunications facilities, lines, and equipment in the northeastern part of the nation. Much of the disruption to voice and data communications services throughout lower Manhattan occurred when a building at the World Trade Center complex collapsed into an adjacent Verizon communications center, which served as a major local communications hub within the public network. Approximately 34,000 businesses and residences in the surrounding area lost services for extended periods. Several critical government operations were affected, among them the Federal Aviation Administration, New York City services, the NY/NJ Port Authority, US Customs, the FBI, and US Secret Service activities. Disruptions to other government and private customers in other service areas also occurred, as various carriers had equipment collocated at the site that linked their networks to Verizon. In addition, a considerable amount of telecommunications traffic that originated and terminated in other areas also passed through this location and was disrupted. AT&T's local network service in lower Manhattan was significantly disrupted following the attacks, which directly affected the operations of the World Financial District, the Stock Exchange, and international banking operations. Conversely, the electric system [17] in New York City remained operational for the island of Manhattan outside of the World Trade Center complex. Furthermore, needed electric service at Ground Zero was quickly and efficiently restored to support rescue and recovery operations. Although not directly targeted, the telecommunications sector became a victim of significant cascading collateral damage. The loss of telecommunications service, as well as the damage sustained by the energy, water, banking and financial, public health, and transportation infrastructures, created significant management challenges for the city of New York during the crisis. Despite this, the telecommunications sector demonstrated remarkable resiliency, as the direct damage to its assets and operations was offset by diverse, redundant, and multifaceted communications capabilities, a key aspect of security for this infrastructure [18].

2.3.3 The Threat to America's Internet Networks

The Internet threat is not only physical damage to its assets; it also stems from an intruder's ability to hack into the system from the outside and cause disruption or destruction from within. The most widely reported Internet security problems of the past few years are "denial-of-service" attacks, which are designed to "crash" systems. With the explosive growth in broadband services, high-speed Internet access for home users makes it likely that future denial-of-service attacks on home computers will emerge. The misuse of the on-line environment through spam, identity theft, fake websites, and other means threatens to undermine the potential economic and social benefits of the on-line environment by eroding trust and confidence in its safety and security [19]. Another concern is the potential for an intruder to hijack a user's computer, establishing a "backdoor" that can be activated anytime the computer is on-line, giving the intruder control over the user's system. Ironically, as government and corporate organizations have hardened their networks and become more sophisticated at protecting their systems, they have driven adversaries to pursue other targets of opportunity. Home users with broadband connections are these new targets of opportunity, both for their own computing resources and as an alternative method for attacking and gaining access to government and corporate networks [14].
Every day, in America, thousands of unauthorized attempts are made to intrude into the computer systems that control key defense facilities, power grids, banks, government agencies, telephone systems, nuclear power plants, water treatment plants, and public health and transportation systems. Some attacks are the equivalent of car-thief "joy rides," a felony committed for the thrill of it. Others are committed for industrial espionage, theft, revenge-seeking vandalism, or extortion. Still others are committed for intelligence collection, reconnaissance, or the creation of a future attack capability. The perpetrators range from juveniles to thieves, from organized crime groups to terrorists, and from hostile militaries to intelligence services of foreign governments [20]. What has emerged in the last several years is an increase in the seriousness of the threat and a broadening of infrastructure sector targeting. Cyber crime costs have reached $100 billion a year in business disruptions and damage to systems. As a result, the Justice Department's Computer Crime and Intellectual Property Section, the FBI's Cyber Division, and the US Secret Service Electronic Crimes Task Force all play a central role in tracking crime and offenders, and in apprehending and bringing to justice the responsible individuals and organizations [21].

2.4 TELECOMMUNICATIONS CHALLENGES AND CONTINUOUS IMPROVEMENT

Managing threat and reducing vulnerability in the telecommunications sector is a particularly complex challenge. While industry partners have been successful in protecting the core backbone of the telecommunications network, gaps exist in customer user responsibility. Not all customer critical systems and operations are supported by diverse primary and backup telecommunications capabilities or address the consequences of single points of failure. Customer efforts to develop and employ mitigation strategies for their portion of the network, and to mitigate their own risk through continuity of operations planning, require strengthening [22]. Table 2.1 summarizes a sampling of these challenges and potential solutions. The listing is by no means all-inclusive. My goal is to provide a top-level overview of some of the most significant complexities facing the telecommunications sector and what is being done, or should be considered, to narrow the vulnerability gap.

2.5 CONCLUSIONS

2.5.1 Performance, Reliability, and Efficiency

Three national program elements contribute to the stability and survivability of the telecommunications sector. The first is the advancement in microelectronics technology, which has stimulated an unprecedented growth in the supply of telecommunications and information processing services. The second is the resiliency built into the telecommunications infrastructure, which has increased the availability of services to its customers and reduced the impact of outages. The third is the priority service programs that have contributed to the continuance of critical telecommunications services and functions during an emergency [31]. Collectively, these initiatives have directly advanced the US policy on protecting national critical telecommunications services and computer-driven systems.

TABLE 2.1 Telecommunications Sector Challenges and Continuous Improvements

1. Vagueness and ambiguity of planning. In 2005, the Government Accountability Office (GAO) [23] reported that sector-specific plans varied significantly in addressing security protection measures. None of the 18 sector-specific plans fully addressed all 30 cyber security-related criteria. The Telecommunications and Information Technology Sector plans fully addressed many of the criteria, but the remaining 16 sector plans were less comprehensive.
Continuous improvement:
• Enterprises that fell short of achieving expectations must adapt to the changing threat and hazard environment to address weaknesses and gaps in protection.
• Enterprises must continuously evaluate program weaknesses to reduce risk and introduce reasonable and prudent mitigation solutions.
• The emerging terrorist threat requires new approaches focused on intelligence-driven analyses, information sharing, and unprecedented partnership between the government and the private sector.

2. Identifying and prioritizing critical assets. The national security strategy for protecting critical resources recognizes that it is not practical or feasible to protect all assets, systems, and networks against every possible terrorist attack all the time [24].
Continuous improvement:
• Protection planning must address the full range of plausible threats and hazards, not just those most frequently reported or considered to be the most likely to occur.
• A proactive, performance-based approach is required to enhance decision-making processes, provide advance warning to potentially targeted assets, and assist owners and operators in taking steps to protect assets in an all-hazards environment.
• The approach requires a coordinated and focused effort from our entire society.

3. Protecting privacy and civil liberties. Technologies designed to protect information and systems, if not carefully utilized, could inadvertently undermine civil liberties. Even with the best of intentions, technology that protects against intrusions, when cast too broadly, might profile innocent activity [25].
Continuous improvement:
• The U.S. Government must continue to do the following: protect the private information of its citizens that resides on its computer systems; partner with the private sector to protect personal information within their systems; consult with communities to define acceptable solutions; conduct legal reviews and commit to comply with statutory and privacy solutions; and develop robust intelligence and law enforcement capabilities to protect information systems, consistent with the law.
• Ensure the protection of American citizens' civil liberties, their rights to privacy, and the protection of proprietary data.

4. Internet architecture and working remotely. With many corporations allowing employees to work at home, computer systems used for business purposes outside the official work environment create a challenge for enterprises to safeguard such systems from attack. As organizations increasingly demand remote connectivity to corporate and government networks, the security of these remote endpoints becomes increasingly critical to the overall protection of the network. This increase in interconnectivity exposes the on-line environment to a growing number, and a wider variety, of threats and vulnerabilities [26].
Continuous improvement:
• A need exists to redesign the national information infrastructure. Initially it was built quickly and without concern for security, without thought that a sophisticated enemy might attack it. Now we must fix it to protect against, guard against, or reduce the existing vulnerabilities.
• Government and industry must continue to work together to understand vulnerabilities, develop countermeasures, establish policies and procedures, and raise the awareness necessary to mitigate risks. This includes defining an appropriate threshold for security; expanding infrastructure diverse-routing capability; and understanding the risks associated with vulnerabilities of the telecommunications infrastructure.

5. Improving system performance, reliability, and efficiency. Significant challenges are to ensure the diversity of telecommunications services; to improve the reliability and efficiency of networks, telecommunications carriers, and the physical network facilities they use to route circuits; and to support critical government and industry operations to withstand the effects of wide-scale network disruptions [27].
Continuous improvement:
• Promote collaboration to advance performance, technology, and security. Partnerships and organizations currently available to address security include the President's National Critical Infrastructure Board, the Government Network Security Information Exchanges, the Telecommunications Information Sharing and Analysis Centers (ISAC), and the Network Reliability and Interoperability Council of the Federal Communications Commission (FCC). Other programs include the National Security Telecommunications Advisory Committee, Government Emergency Telecommunications Service, Telecommunications Service Priority and Wireless Priority Service, Special Routing Arrangement Service, Next Generation Priority Service, and the Hotline System. The continued success of these partnerships and programs assures the reliability and interoperability of the government's owned or commercially provided national security and emergency preparedness communications resources.
• Redundancy within the infrastructure is also critical to ensure that single points of failure in one location will not adversely impact others. Security assessment teams need to characterize this state of diversity and collaborate to understand the topography of the physical components of the architecture in order to establish a foundation for defining a strategy that ensures physical and logical protection diversity.
• Improving business protocols, hardening facilities, building resiliency, incorporating hazard resistance into facility designs, initiating countermeasures, installing security systems, leveraging "self-healing" technologies, promoting workforce surety programs, and implementing cyber security measures will enhance program performance effectiveness.

6. Consistency in applying evaluation measurement standards. Although stakeholders share similar objectives, they have different perspectives on what constitutes acceptable risk and how to achieve security and reliability. Therefore, agreement on a sustainable security threshold and corresponding security requirements remains elusive [28].
Continuous improvement:
• Adapting the measurement and evaluation criteria described in the NIPP will provide uniformity to the analysis process.

7. Government and private sector cooperation. There is a growing awareness that confidence and security in the use of the on-line environment cannot be addressed solely as a technology matter; all users must be aware of the security risks and preventive measures, assume responsibility, and take steps to strengthen the security and reliability of information and the networks it travels on [29].
Continuous improvement:
• There is a need for strong leadership between government and the private sector, for outreach to users, and for cooperation across borders.
• Reducing risk requires an unprecedented partnership between our country and our global partners. This includes agency involvement by the National Coordinating Center for Telecommunications, the National Counterterrorism Center, the National Joint Terrorism Task Force, the US Coast Guard Intelligence Coordination Center, the National Maritime Intelligence Center, and the International Maritime Intelligence Center.
• No single strategy can eliminate cyberspace vulnerabilities and their associated threats. The nation must continue to work with industry to manage risk and to enhance its ability to minimize the damage that results from attacks that do occur. This includes the following strategies: develop cohesive domestic strategies to ensure a trusted, secure, and sustainable on-line environment; address the threat posed by the misuse, malicious use, and criminal use of the on-line environment; develop watch, warning, and incident response and recovery capabilities to prevent cyber attacks and minimize damage and recovery time; encourage the establishment of mutual assistance programs for cyber security emergencies; raise awareness about the removal of impediments to information sharing about cyber security vulnerabilities between the public and private sectors; encourage the software industry to consider promoting more secure "out-of-the-box" installation and implementation of its products; and facilitate a national effort to promulgate best practices and methodologies that promote integrity and reliability in software code development, and processes and procedures that diminish the possibility of erroneous code, malicious code, or trapdoors being introduced during development.

8. International protection. Telecommunications networks are global in scope and extend beyond US borders. The federal government and private sector corporations also have a significant number of critical facilities located outside the United States. Providing adequate protection measures for these assets and services presents unique challenges [11].
Continuous improvement:
• Industry and the US State Department should continue their involvement in international organizations such as the G8, the United Nations, NATO, the European Union, the Organization of American States, Asia-Pacific Economic Cooperation, the Organization for Economic Cooperation and Development, the International Maritime Organization, and International Watch and Warning.
• The US State Department should continue to collaborate with other federal agencies and international partners to do the following: identify and prioritize the nation's critical foreign telecommunications dependencies; build and strengthen international partnerships; implement a comprehensive, integrated, international risk management program; implement protection programs and resiliency strategies; share information with international entities; perform outreach to enhance information exchange and management of international agreements; protect assets, systems, and networks that operate across or near the borders of foreign countries or rely on international aspects to enable critical functionality, which requires continuing coordination and planning with all stakeholders; interact with foreign governments and international organizations to enhance the confidentiality, integrity, and availability of cyber-based infrastructure having an international or global dimension; coordinate the protection of physical assets located on, near, or extending across the borders with Canada and Mexico, or those with important economic supply chain implications; and coordinate protection needs where the service originates from outside the United States, to include US government and corporate facilities located overseas.

9. The technology revolution and its impact on law enforcement activities. Rapid changes in the telecommunications and computer industries have blurred the traditional gaps that separated these technologies. While these changes have improved our capability, many of them make it difficult for law enforcement agencies to detect and prevent terrorist acts, and they inhibit lawfully authorized electronic surveillance. Some advanced technologies that form the backbone of the information superhighway also nullify the effectiveness of traditional methods of carrying out court-authorized wiretaps. Encryption technologies used to protect data are now available to lawbreakers, preventing the government from obtaining the contents of information it is authorized to intercept [30].
Continuous improvement:
• Adapt legislation and appropriations to support lawful programs.
• Encourage and fund research and development to create capabilities to respond to lawful searches.
• Develop new encryption technologies to counter the capabilities of lawbreakers.
• Continue to improve surveillance, monitoring, and detection capabilities to discover threats in the making rather than responding to an attack after the fact.

10. Trained people. America is short on training the IT specialists it needs to operate, improve, and secure its new IT-based economy.
Continuous improvement:
• A nationwide initiative to recruit, employ, train, and retain adequate numbers of information security specialists to meet the demands of the industry is in order.


2.5.2 The Threat to America's Telecommunications Sector

The pace of the technological drive to install computer controls in every one of our nation's critical infrastructures far outstrips our capacity to design computer security software, train IT security personnel, or develop and promulgate computer security practices and standards. We have created a gaping vulnerability in our national security and economic stability that affects not only computer-controlled systems but also the vital databases maintained by public health centers, law enforcement, legal and judicial institutions, educational and research institutions, and the proprietary data managed and operated by other sector functions and systems. We are vulnerable to mischief-making hackers, hardware and software failures, cyber criminals, and, most alarmingly, deliberate attack from nation states and terrorists. To complicate matters further, most critical infrastructures interconnect and therefore depend on the continued availability and operation of other systems and functions. This interconnectivity is provided by the IT and telecommunications sectors, which increasingly control the operations and productivity of the other critical infrastructures. Given the dynamic nature of these interdependent infrastructures and the extent to which our daily lives rely on them, a successful terrorist attack to disrupt or destroy IT and telecommunications critical assets could have a tremendous impact beyond the immediate target and continue to reverberate long after the immediate damage is done [7].

2.5.3 Future Research Direction

The unique characteristics of the telecommunications infrastructure sector, the rapid change in technology, and the significant impediments complicating protection require an unprecedented level of public and private sector cooperation and government coordination. The challenge ahead is to develop a coordinated and complementary system that reinforces protection efforts rather than duplicates them, and that meets mutually identified essential requirements. In addition, many telecommunications assets, systems, and functions span national borders and, therefore, must be protected within the context of international cooperation [32]. Several government initiatives [33] have helped keep America safe since the 2001 attacks. The first initiative is the development of partnerships among government, industry, academics, and others to ensure a trusted, secure, and sustainable telecommunications environment through the development, implementation, and review of guidelines and best practices. These federal programs augment the extensive state, local, tribal, territorial, and private sector protection programs that constitute important efforts already being implemented. The second initiative is the creation of Homeland Security Centers of Expertise, which challenges us to marshal our nation's advantages in science and technology. These Centers are combating terrorism across a wide range of research and development activity and are studying the following:

• the interactions of networks and the need to use models and simulations;
• risk analysis related to the economic consequences of terrorist threats and events;
• potential threats to animals and agriculture, and agro-security issues related to post-harvest food protection;
• improvement and promotion of the design, development, and implementation of usable security measures in existing and new technologies.

The third initiative is the removal of structural and legal impediments that prevented the collection and sharing of information by our law enforcement and intelligence agencies. The fourth initiative concerns the government's diplomatic outreach and operational capabilities to build global partnerships to combat cyber crime wherever it originates.

2.5.4 The Prospects for the Future

Despite all the hard work accomplished to create new laws and the means to combat telecommunications terrorism, our work is far from complete in both planning and execution. We can expect to see more frequent and escalated planning activity by adversarial groups seeking to attack our critical assets. Future attempts to disrupt or damage our communications capabilities are all but inevitable, probably only a matter of time. The threat is that in a future crisis a criminal cartel, terrorist group, or hostile nation could seek to inflict economic damage, debilitating disruption and death, and degradation of our national response by attacking critical networks. Terrorists are relentless and patient. They are also opportunistic and flexible. Terrorists are inventive and resourceful in target selection, as well as in the selection and use of specific instruments of violence and intimidation at the time and location of their choosing [34]. In focusing our protection efforts, we must acknowledge three key factors [35]: (i) given the immense size and scope of potential target sets within the telecommunications sector, we cannot completely protect all things, at all times, against all conceivable threats; (ii) the assets, systems, and functions that comprise the telecommunications infrastructure are not uniformly "critical" in nature, particularly in a national or regional context; and (iii) given the dynamics of the sector's size, scope, and diversity of mission, we must prioritize protection strategies to reduce vulnerability and threat conditions; to maximize program effectiveness, efficiency, resources, and funding allocations; and to maximize returns on our investment. Despite the tremendous effort we are putting into our planning, no protection strategy can succeed in isolation. It must be part of a larger strategy that is responsive to economic and national security considerations [36]. We must move forward with the understanding that there are enemies who seek to inflict damage on our way of life. They have attacked us on our own soil, and they have shown a willingness to use unconventional means to execute those attacks. The attack tools and methodologies they employ are becoming widely available, and the technical capability and sophistication of users bent on causing havoc or disruption are improving. We now have an opportunity and an obligation to take action to prevent, deter, neutralize, or mitigate the effects of deliberate efforts to disrupt, interrupt, manipulate, destroy, incapacitate, or exploit telecommunications assets. We must renew our resolve to embark upon a program of self-assessment to identify program weaknesses; mitigation analysis and problem-solving to strengthen our security status; advanced research and development to improve technology performance and survivability; and enhanced practices whereby we can
objectively reduce the risk to telecommunications resources, assets, facilities, systems, and functions.

REFERENCES

1. Sullivant, J. (2007). Strategies for Protecting National Critical Infrastructure Assets: A Focus on Problem-Solving. John Wiley & Sons, Hoboken, NJ, p. 500.
2. Communications Infrastructure Plan (2007), p. 9, U.S. Department of Homeland Security, Washington, DC.
3. National Strategy for the Physical Protection of Critical Infrastructure & Key Assets (2003), p. 3, U.S. Department of Homeland Security, Washington, DC.
4. The National Infrastructure Protection Plan (2009), p. 8, U.S. Department of Homeland Security, Washington, DC.
5. Communications Infrastructure Plan (2007), p. 1, U.S. Department of Homeland Security, Washington, DC.
6. National Strategy for the Physical Protection of Critical Infrastructure & Key Assets (2003), p. 8, U.S. Department of Homeland Security, Washington, DC.
7. Ibid. (2003), p. 8, U.S. Department of Homeland Security, Washington, DC.
8. Communications Infrastructure Plan (2007), p. 2, U.S. Department of Homeland Security, Washington, DC.
9. Ibid. pp. 9–11; 21.
10. National Strategy for the Physical Protection of Critical Infrastructure & Key Assets (2003), p. 6, U.S. Department of Homeland Security, Washington, DC.
11. Presidential National Security Directive (NSD): National Policy for the Security of National Security Telecommunications and Information Systems (1990), pp. 1–3, Office of the President of the United States, Washington, DC.
12. The National Infrastructure Protection Plan (2009), p. 9, U.S. Department of Homeland Security, Washington, DC.
13. National Strategy to Secure Cyberspace (2003), pp. 13–14, U.S. Department of Homeland Security, Washington, DC.
14. Defending America's Cyberspace–National Plan for Information Systems Protection (2006), pp. 6–9, U.S. Department of Homeland Security, Washington, DC.
15. Communications Infrastructure Plan (2007), pp. 35–37, U.S. Department of Homeland Security, Washington, DC.
16. National Strategy for the Physical Protection of Critical Infrastructure & Key Assets (2003), p. 9, U.S. Department of Homeland Security, Washington, DC.
17. Sullivant, J. (2007). Strategies for Protecting National Critical Infrastructure Assets: A Focus on Problem-Solving. John Wiley & Sons, Hoboken, NJ, p. 502.
18. Ibid. pp. 502–503.
19. Ibid. p. 503.
20. The National Strategy to Secure Cyberspace (2003), p. 28.
21. Ibid. p. 7.
22. GAO. (2005). Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cyber Security Responsibilities. GAO-05-434, Washington, DC, May 25, 2005.
23. The National Infrastructure Protection Plan (2009), pp. 40–42.
24. Defending America's Cyberspace–National Plan for Information Systems Protection (2006), pp. 11–15.
25. Ibid. (2006), pp. 4–5.
26. Sullivant, J. (2007). Strategies for Protecting National Critical Infrastructure Assets: A Focus on Problem-Solving. John Wiley & Sons, Hoboken, NJ, p. 523.
27. Ibid. p. 506.
28. Defending America's Cyberspace–National Plan for Information Systems Protection (2006), pp. 21–80.
29. Communications Infrastructure Plan (2007), pp. 11–19.
30. Communications Infrastructure Plan (2007), p. 4.
31. National Strategy for the Physical Protection of Critical Infrastructure & Key Assets (2003), pp. 6–7.
32. The National Infrastructure Protection Plan (2009), pp. 25–26; 49–70; 149–153.
33. National Strategy for the Physical Protection of Critical Infrastructure & Key Assets (2003), p. 7.
34. Ibid. (2003), pp. 2–3.
35. National Strategy to Secure Cyberspace (2003), pp. 5–11.
36. Ibid. pp. 5–11.

FURTHER READING

Executive Order 13231: Critical Infrastructure Protection in the Information Age (October 16, 2001).
Executive Order 12382: National Security Telecommunications Advisory Committee (February 28, 2003).
Executive Order 12472: Assignment of Emergency Preparedness Telecommunications Functions (April 3, 1984).
Information Technology Infrastructure Plan (2007).
Lewis, T. G. (2006). Critical Infrastructure Protection in Homeland Security: Defending a Networked Nation. John Wiley & Sons, New York.
National Response Plan (2008).
National Strategy for Combating Terrorism (2006).
National Strategy for Homeland Security (2002).
National Strategy to Combat Weapons of Mass Destruction (2002).
NSDD 145: National Policy on Telecommunications and Information Systems Security (1984).


3 WIRELESS SECURITY

Matthew Sexton, Edward Smith, and Bernie Eydt
Booz Allen Hamilton, McLean, Virginia

3.1 SCIENTIFIC OVERVIEW

Wireless networks have proliferated into many aspects of daily life, both personal and professional. Although many kinds of wireless networks exist, ranging from cellular networks to Wireless Fidelity (Wi-Fi), they can be divided into two primary categories: data-centric and voice-centric. Data-centric wireless networks primarily provide extensions of existing Internet Protocol (IP)-centric networks; they evolved from wired IP-centric networks and provide security similar to that of wired networks. Voice-centric wireless networks evolved as extensions of the legacy wireline telephony networks and focus security on end user issues, such as cloning, billing fraud, and other forms of subscriber-based attacks. This focus on end user fraud protection, although valid for an extension of the wireline voice network, results in a weaker security architecture for data services added to the voice network as enhancements to user services.

3.1.1 Voice-Centric Networks

Security within today's voice-centric networks has advanced well beyond the rudimentary security provided by first-generation mobile phone services, which were mere extensions of the public switched telephone network (PSTN). User authentication, voice and data privacy, and assurance of transmitted data are becoming the norm in wireless voice services. Security technologies have evolved as voice and data networks have converged with modern cellular networks, which have a mixture of security technologies. Third-generation (3G) cellular networks are poised to enhance security based on lessons learned from current networks and the application of technologies from data-centric networks. The following is a brief summary of the advances and issues with current and future voice-centric wireless networks.

3.1.1.1 Network Evolution Path. Cellular security is the result of two evolutionary paths taken from first-generation cellular services, which were simple extensions of the PSTN. In the United States, wireless security evolved from the first-generation Advanced Mobile Phone Service (AMPS), which provided limited security.
The United States deployment of second-generation cellular (time division multiple access technology and code division multiple access [CDMA] technology) took a more limited approach to security and focused on addressing issues identified with the AMPS service, such as billing fraud resulting from the ability to clone phones and off-air voice intercept. The first cellular enhancement addressed in the United States was billing fraud resulting from thieves collecting user information from off-air signaling and using that information to clone phones. The solution was to employ more robust user authentication through a challenge-response method known as the cellular authentication and voice encryption (CAVE) algorithm. CAVE uses network and user parameters to develop shared secret data (SSD), which forms the basis of over-the-air parameter exchanges. The method uses cryptography to create a dynamic authentication response that verifies the identity of the user, greatly reducing the possibility of successful phone cloning and fraudulent access. Another US enhancement was increased voice privacy. This enhancement was made possible by increasing the complexity of the signal modulation and voice coding techniques employed in the digital transmissions, and by adding the optional application of a voice privacy mask to the digital voice. The digital modulation made it extremely unlikely that publicly available radio frequency (RF) intercept systems, such as police scanners, could intercept and decode the voice. The voice privacy mask added protection in the event the signal was intercepted. Today, 3G cellular communications face many of the same design challenges and constraints as their 2G/2.5G ancestors. Two standards bodies govern the cellular industry: the Universal Mobile Telecommunications System (UMTS) is specified by the Third Generation Partnership Project (3GPP), and the CDMA2000 architecture is specified by 3GPP2 [1]. Both systems face many of the same risks and threats, and therefore share a number of common security elements or traits [2]. Subscriber authentication, session confidentiality, and signaling integrity are the primary goals of secure communications. UMTS and CDMA2000 security protocols, such as the authentication and key agreement (AKA) protocol, are more robust mechanisms designed to limit the vulnerability of and improve the security stance of cellular communications. A central concept for all wireless communications is access security. Restricting subscriber access to authenticated and authorized devices, coupled with voice and data encryption, will reduce an organization's network vulnerability. The Internet Protocol Multimedia Subsystem (IMS), originally designed by the 3GPP, is a widely accepted open-standard effort to define an all-IP-based wireless network. Its adoption has been slow, but the technology may help promote a convergent future that will bring cellular networks together with other wireless technologies. IMS promotes the use of established, open protocols and commodity equipment. This approach facilitates application access and promotes interoperability between disparate networks by bridging the gap in access technology; that is, IMS networks operate with any endpoint that supports IP capabilities. However, since IMS requires new network infrastructure, many carriers have been slow to implement their rollout plans because of costs and complexity.
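The challenge-response pattern underlying CAVE and AKA can be sketched generically. The toy example below uses HMAC as a stand-in keyed function (the real CAVE and AKA algorithms are not reproduced here); the point it illustrates is that the shared secret never crosses the air interface, only a fresh challenge and a response derived from it.

```python
# Generic challenge-response sketch using Python's standard library.
# HMAC-SHA256 stands in for the actual CAVE/AKA functions, which this
# example does not implement.
import hashlib
import hmac
import secrets

ssd = secrets.token_bytes(16)  # shared secret, provisioned out of band

def auth_response(shared_secret: bytes, challenge: bytes) -> bytes:
    """Derive a one-time response that binds the secret to the challenge."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()[:8]

challenge = secrets.token_bytes(4)        # network: fresh random challenge
response = auth_response(ssd, challenge)  # handset: computed from its SSD

# Network: recompute from its own copy of the SSD and compare safely.
ok = hmac.compare_digest(response, auth_response(ssd, challenge))
print("access granted" if ok else "access denied")
```

Because each challenge is random and the response depends on both the challenge and the secret, a thief who records the exchange off the air cannot replay it to clone the phone.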
3.1.1.2 Project 25 Digital Radio. Project 25 (P25) is a standard for the manufacture of interoperable digital two-way wireless communication products. Formed in 1989, P25 began as a joint effort between the National Association of State Telecommunications Directors (NASTD), the Association of Public-Safety Communications Officials-International (APCO), and a group of federal agencies to develop a series of standards defining a digital radio system (conventional and trunked) [3].

The Telecommunications Industry Association (TIA)-102.AAAB standard provides an overview of the security services available in land mobile radio (LMR) [4]. It generalizes security requirements into three categories: (i) confidentiality, (ii) authentication and integrity, and (iii) key management. Two additional security-related TIA standard documents detail block encryption [5] and over-the-air rekeying (OTAR) [6]. Either the Data Encryption Standard (DES) or the Advanced Encryption Standard (AES) can provide confidentiality. Currently, AES support is required for compliance with the P25 block encryption standard, and DES is optional. Although DES (Federal Information Processing Standard [FIPS] 46-3) was officially withdrawn by the Secretary of Commerce, many governmental organizations, including state and local governments, still implement this algorithm. Encryption, chronological message numbers, and message authentication codes provide integrity, as does a device's electronic serial number (ESN).

A key management facility (KMF) or a key variable loader (KVL) provides key management. A KMF uses OTAR and is therefore much more scalable than the manual use of KVLs to load subscriber device encryption keys. OTAR distributes traffic encryption keys (TEKs) using key encryption keys (KEKs, also known as shadow keys) to ensure the confidentiality of cryptographic secrets.

A critical challenge for today's public safety personnel is secure and interoperable communications. Implementing a KMF-to-KMF interface standard (i.e., a shared interoperability key environment) poses many challenges, and key generation and distribution between multiple agencies remains difficult for a number of reasons. For example, agencies that employ OTAR may share the same interoperability keys, yet subscriber unit storage location numbers may not map to the proper key identifier required for voice communications. This has been a problem for many years and is discussed in greater depth later in this chapter. It is also important to note that the key rotation schedule for P25 devices usually exceeds 30 days, whereas other wireless technologies may rotate their communication key each session. While P25 devices can connect to laptops or other data networks [7], their transmission rates are limited and they cannot support broadband data applications.
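
The KEK/TEK relationship used by OTAR can be illustrated with a generic key-wrapping primitive. The sketch below uses the AES Key Wrap (RFC 3394) implementation from the third-party Python cryptography package purely as an analogy; P25 OTAR defines its own key management messages and algorithms, so none of this code reflects the actual TIA-102.AACA formats.

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

# Illustrative analogy for OTAR key distribution: a traffic encryption key
# (TEK) is wrapped under a key encryption key (KEK) before being sent over
# the air, so the cryptographic secret is never exposed in transit.

kek = os.urandom(32)  # key encryption key, pre-placed in the radio
tek = os.urandom(32)  # traffic encryption key to be distributed

wrapped = aes_key_wrap(kek, tek)            # safe to transmit over the air
assert aes_key_unwrap(kek, wrapped) == tek  # radio recovers the TEK
```
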

3.1.2 Data-Centric Networks

3.1.2.1 Radio Frequency Identification Technologies. Radio Frequency Identification (RFID) technologies are a form of automatic identification and data capture technology that uses electric or magnetic fields at radio frequencies to transmit information [8]. RFID tags are tiny microchip transponders that constantly listen for a radio signal sent by transceivers. The majority of RFID tags are "passive": they do not require a power source but instead draw power from the interrogating radio signal itself. Common uses of RFID include asset tracking, manufacturing, retailing, payment systems, security and access control, and even health care patient tracking.

RFID has two groups of technical controls: controls to protect the tag and controls to protect RF communications. Security controls vary by tag generation and by the power supply and processing capability of the tag. Security is focused on the system as a whole, rather than on the RFID tag alone. A comprehensive review of RFID technical, managerial, and operational controls is provided in National Institute of Standards and Technology (NIST) SP 800-98, Guidelines for Securing Radio Frequency Identification (RFID) Systems [8].


3.1.2.2 IEEE 802.11 Wireless Local Area Networks. Wireless local area network (WLAN) communications offer organizations and users many benefits, such as portability, flexibility, increased productivity, and lower installation costs. The IEEE 802.11 standards working group developed the medium access control (MAC) and physical layer (PHY) specifications for wireless connectivity for fixed and mobile devices within a local area. The original confidentiality protocol, Wired Equivalent Privacy (WEP), was intended to provide security comparable to that of wired local area networks (LANs) [9]. Unfortunately, WEP turned out to be susceptible to a variety of attacks [10], and it suffered even more from poor vendor implementations, such as the reuse of initialization vectors [11] and the use of all-zero initialization vectors [12].

To address WEP's problems, IEEE established a security enhancements working group that published the IEEE 802.11i specification. The Wi-Fi Alliance, an industry group, also promoted this effort through interoperability testing of equipment designed to the IEEE 802.11i specification. The Wi-Fi Alliance refers to the IEEE 802.11i enhancements as Wi-Fi Protected Access (WPA). The initial WPA specification addressed only a subset of IEEE 802.11i (the IEEE security amendment to the MAC layer) because the standard was not yet complete when WPA testing began. WPA employs the Temporal Key Integrity Protocol (TKIP) for data encryption, which uses the same RC4 algorithm as WEP but adds improved key management and message integrity checking. Of WEP, TKIP, and the Counter Mode cipher block chaining-message authentication code (CBC-MAC) Protocol (CCMP), only CCMP uses AES [13], which is FIPS-validated [14]; both WEP and TKIP are based on Rivest's Cipher 4 (RC4), which is not. Following ratification of IEEE 802.11i, the Wi-Fi Alliance introduced WPA2, which identifies equipment capable of supporting all 802.11i requirements. In addition, two wired standards were adopted to provide network access control: IEEE 802.1X [15] and the Extensible Authentication Protocol (EAP) [16]. A comparison of the IEEE 802.11 security protocols is provided in Table 3.1.

TABLE 3.1 A Comparison of the 802.11 Security Protocols

                            WEP (RC4)      WPA (TKIP/RC4)    802.11i (AES-CCMP)
  Cipher                    RC4            RC4               AES
  Key size (bits)           40/104         64/128            128*, 192, 256
  Key life                  24-bit IV      48-bit IV         48-bit IV
  Packet key                Concatenated   Mixing function   Not needed
  Data integrity (MSDU)     CRC-32         Michael           CCM
  Header integrity (MPDU)   None           Michael           CCM
  Replay attack             None           Use IV            Use IV
  Key management            None           EAP-based         EAP-based

  *NIST only specifies this key size. MSDU, Media Access Control Service Data Unit; MPDU, Media Access Control Protocol Data Unit.

3.1.2.3 IEEE 802.16 Wireless Metropolitan Area Networks. As cellular services race toward 3G deployments, Mobile Worldwide Interoperability for Microwave Access (WiMAX) is emerging as a complementary technology to existing telecommunication and fixed-data technologies. The industry trade group WiMAX Forum trademarked the WiMAX name and promotes the interoperability of broadband wireless products based on the IEEE 802.16 standard. WiMAX provides yet another wireless service, and further motivation to develop seamless interoperability standards, such as IMS, spanning other wireless services such as cellular and Wi-Fi.

WiMAX technology was initially envisioned as a fixed wireless point-to-multipoint service that would provide backhaul services from homes and small businesses. It has evolved beyond this concept into a technology that can provide not only last-mile access to homes and businesses but also mobile user access to broadband services. Not only does the technology support coverage of a large number of users in a geographical area from a single base station, but, with the addition of technologies to address mobility, the system is also frequency efficient: a frequency reuse factor of one can be employed, resulting in a single carrier frequency used throughout an entire system. In addition, WiMAX provides high quality of service (QoS) for time-critical services, such as Voice over Internet Protocol (VoIP).

3.1.2.4 WiMAX Security. The end-to-end WiMAX network architecture is based on a security framework that is service-application agnostic and gives users a strong suite of security tools. In particular, the framework supports the following: strong mutual device authentication between a mobile subscriber (MS) and the WiMAX network; all commonly deployed authentication mechanisms, and authentication in home and visited operator network scenarios, based on a consistent and extensible authentication framework; data integrity, replay protection, confidentiality, and nonrepudiation using applicable key lengths; use of MS-initiated and MS-terminated security mechanisms, such as virtual private networks (VPNs); and standard secure IP address management mechanisms between the MS/subscriber station (SS) and its home or visited network service provider (NSP).

Mobile WiMAX supports these security features by adopting the best technologies available today. Support exists for mutual device and user authentication, a flexible key management protocol, strong traffic encryption, control and management plane message protection, and security protocol optimizations for fast handovers. Table 3.2 details the usage aspects of WiMAX's security features.
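
Both CCMP in IEEE 802.11i (Table 3.1) and Mobile WiMAX traffic encryption (Table 3.2) are built on AES in CCM mode, which provides confidentiality and integrity protection in a single authenticated-encryption operation. The sketch below shows the basic operation using the AESCCM class from the third-party Python cryptography package; the nonce construction and header contents here are placeholders, since each standard defines its own per-frame formats.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# AES-CCM provides authenticated encryption: the tag appended to the
# ciphertext protects both the payload and the associated header data.
key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key)

nonce = os.urandom(13)            # standards derive this from packet counters
header = b"frame-header-fields"   # authenticated but not encrypted
payload = b"user data"

ciphertext = aead.encrypt(nonce, payload, header)
assert aead.decrypt(nonce, ciphertext, header) == payload

# Any modification to the ciphertext or header raises InvalidTag on decrypt,
# which is how tampered frames are rejected by the receiver.
```
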

TABLE 3.2 Usage Aspects of WiMAX Security Features

  Key Management Protocol: Privacy and Key Management Protocol Version 2 (PKMv2) is the basis of Mobile WiMAX security as defined in 802.16e-2005.
  Device/User Authentication: Mobile WiMAX supports device and user authentication using RSA or the IETF EAP protocol, with support for credentials that are SIM based, Universal SIM (USIM) based, digital certificate based, or username/password based.
  Traffic Encryption: AES-Counter with CBC-MAC (CCM) is the cipher used for protecting all user data over the Mobile WiMAX MAC interface. The keys driving the cipher are generated from the EAP authentication. A traffic encryption state machine with a periodic traffic encryption key (TEK) refresh mechanism enables sustained transition of keys to further improve protection.
  Control Message Protection/Secure Key Exchange: Control data is protected using AES-based cipher-based message authentication code (CMAC) or MD5-based hash message authentication code (HMAC) schemes.
  Fast Handover Support: Mobile WiMAX supports a three-way handshake scheme to optimize the reauthentication mechanisms supporting fast handovers. This mechanism is also useful in preventing man-in-the-middle attacks.

  RSA, Rivest-Shamir-Adleman; SIM, Subscriber Identity Module.

3.2 MOBILE AND WIRELESS SECURITY LANDSCAPE

Today, the wireless landscape continues to evolve. Security remains a concern both in the commercial or public space and for the handling of sensitive or classified information in government or private organizations. Federal data handling requirements are much more complex and restrictive than those in the commercial market; therefore, wireless systems in the government tend to remain independent of one another and commonly fail to implement an enterprise or holistic approach to information security. The commercial market's security stance is easier to understand than the government market's; as a result, the standards bodies that define and develop future technologies work to address commercial issues without addressing the added security needs of the federal government. To help close this gap, the federal government has become a greater participant in wireless standards development and now interacts directly with industry standards bodies.

Standards bodies promote regulatory compliance by providing the reference model for layered security. The International Organization for Standardization (ISO) Open System Interconnection (OSI) standard is a worldwide communications framework for implementing protocols in seven layers. Wireless security technologies operate at Layer 1 (physical) and Layer 2 (data link) of the OSI model and may be tightly integrated with higher-layer (Layers 3-7) security mechanisms to provide a robust security solution.

3.2.1 Federal Legislation and Regulation

Although wireless security is not specifically addressed in federal law, the requirement to protect information affects the implementation of wireless technologies. Title III of the E-Government Act (Public Law 107-347), the Federal Information Security Management Act (FISMA), requires each federal agency to develop, document, and implement an agency-wide information security program. The government has also passed legislation to better regulate commercial industry and protect the privacy of American citizens, including the Health Insurance Portability and Accountability Act of 1996, the Gramm-Leach-Bliley Act of 1999, and the Sarbanes-Oxley Act of 2002. The 9/11 Commission Act of 2007 also mandates a Federal Communications Commission (FCC) vulnerability assessment of the nation's critical communications and systems infrastructure and the creation of a backup emergency communications system that includes next-generation and advanced communications technologies [17]. In April 2007, the FCC chartered the Communications Security, Reliability, and Interoperability Council (CSRIC) to recommend best practices to ensure the security, reliability, operability, and interoperability of public safety communications systems [18].


Legislation focused on the lawful intercept of confidential communications includes the Communications Assistance for Law Enforcement Act (CALEA), a US wiretapping law passed in 1994 that amends Title 18 of the United States Code to make clear a telecommunications carrier's duty to cooperate in the interception of communications for law enforcement and other purposes. CALEA aids law enforcement in its efforts to conduct surveillance of citizens via digital telephone networks within the United States.

3.2.2 Federal Standards and Guidance Publications

This category of information security guidance is primarily provided by NIST [19] publications in accordance with NIST's statutory responsibilities under FISMA. Federal agencies are required to use FIPS-validated cryptographic algorithms verified by the Cryptographic Module Validation Program (CMVP). Products validated as conforming to FIPS 140-2 [20] are accepted by federal agencies for the protection of sensitive information. Table 3.3 presents a list of NIST publications whose topics touch on wireless technologies.

TABLE 3.3 NIST Wireless Security-Related Special Publications

  800-48 Rev. 1  Guide to Securing Legacy IEEE 802.11 Wireless Networks, July 2008
  800-52         Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations, June 2005
  800-57         Recommendation on Key Management, March 2007
  800-58         Security Considerations for Voice Over IP Systems, January 2005
  800-77         Guide to IPsec VPNs, December 2005
  800-88         Guidelines for Media Sanitization, September 2006
  800-94         Guide to Intrusion Detection and Prevention (IDP) Systems, February 2007
  800-97         Establishing Wireless Robust Security Networks: A Guide to IEEE 802.11i, February 2007
  800-98         Guidelines for Securing Radio Frequency Identification (RFID) Systems, April 2007
  800-101        Guidelines on Cell Phone Forensics, May 2007
  800-111        Guide to Storage Encryption Technologies for End User Devices, November 2007
  800-121        Draft Guide to Bluetooth Security, July 2008
  800-124        Draft Guidelines on Cell Phone and PDA Security, July 2008

3.2.3 Industry Standards and Guidance Organizations

Standards and guidance organizations play an important role in information security and interoperability. Industry promotes technical progress within a standard, but it can also pose a barrier to interoperable communications. Fortunately, there are nonprofit groups that help drive the adoption of wireless communications standards. The major worldwide communications standards organizations periodically meet at an event called the Global Standards Collaboration [21].


These organizations address many of industry's technical and operational issues to promote interoperable end-to-end communication solutions. The global standards organizations include the International Telecommunications Union (ITU), the Telecommunications Industry Association (TIA), the Internet Engineering Task Force (IETF), the Institute of Electrical and Electronics Engineers (IEEE), the Alliance for Public Technology (APT), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC).

Whereas the Department of Defense (DoD) has departments focused on the development of standards and their associated testing to ensure interoperability (e.g., the Joint Interoperability Test Command), civilian agencies lack such standards and verification groups for tactical or mission-critical communications. Fortunately, several standards organizations are focused on this work. The Telecommunications Industry Association, the Project 25 Technology Interest Group, and the Association of Public-Safety Communications Officials-International and its subsidiaries promote the development of public safety communications, that is, the Project 25 communications standard. The IEEE 802 LAN/Metropolitan Area Network (MAN) Standards Committee has also been instrumental in the development of data-centric wireless standards, including IEEE 802.11 (Wi-Fi), 802.15.1-2005 (Bluetooth), and 802.16-2004/802.16e-2005 (WiMAX). (Following the publication of 802.15.1-2005, IEEE Study Group 1b voted 90-0 to discontinue its relationship with the Bluetooth SIG, so later versions of Bluetooth will not become IEEE standards; Bluetooth standards are developed through the Bluetooth Special Interest Group, http://www.bluetooth.com/Bluetooth/SIG/.) IEEE is focused on standards development, not interoperability; however, several industry organizations have formed groups (e.g., the WiMAX Forum and the Wi-Fi Alliance) to promote the interoperability of several prevalent wireless technologies based on the IEEE 802 family of standards.

In addition, perhaps the most widely used wireless technology, cellular communications, is governed through two international partnership projects. The Third Generation Partnership Project (3GPP) focuses on the evolution of the GSM family of technologies to Wideband Code Division Multiple Access (W-CDMA) technologies, whereas a parallel partnership project, 3GPP2, was established as a collaborative specification-setting project for 3G technologies derived from North American Code Division Multiple Access (CDMA) systems (IS-95). To facilitate the convergence of wireless technologies, the IEEE 802.21 standards body and the 3GPP Unlicensed Mobile Access (UMA) [25] partnerships formed a working group to standardize subscriber access to mobile circuit, packet, and IMS-based services over IP-based access networks, including the Internet.

3.2.4 Governmental Wireless Communications Initiatives

The American National Standards Institute (ANSI) Homeland Security Standards Panel (HSSP) helps identify existing consensus standards. If no standard exists, HSSP assists the U.S. Department of Homeland Security (DHS) and those entities requesting assistance to accelerate the development and adoption of consensus standards critical to homeland security [21]. To date, two workshop meetings have been held; their focus was to identify existing standards, standards under development, and gap areas in standardization for emergency communications. However, government-to-government communications were not addressed by HSSP, as public safety interoperability still remains the responsibility of several Federal agencies and working groups, which include the Federal Communications Commission (FCC), the National Public Safety Telecommunications Council (NPSTC), DHS, the National Telecommunications and Information Administration (NTIA), the Institute for Telecommunication Sciences (ITS), and the National Security Telecommunications Advisory Committee (NSTAC). These organizations promote the advancement of governmental P25 technology and serve as a resource for addressing the telecommunications challenges of Federal agencies and state and local governments.

DHS relies heavily on wireless communications for mission success. The DHS Science and Technology Directorate, the primary research and development component of DHS, pilots and evaluates emerging wireless technologies in support of homeland security. The Office of Emergency Communications (OEC) works across Federal, state, and local entities to ensure, accelerate, and attain interoperable and operable emergency communications nationwide. OEC has engaged the Federal Partnership for Interoperable Communications Committee to actively address technical and operational interoperability activities within the federal wireless communications community [22]. In addition, DHS has other committees, working groups, subcomponents, and agencies focused on wireless communications and emergency response communications, including the Federal Emergency Management Agency (FEMA), the National Protection and Programs Directorate (NPPD), the Office of Cybersecurity and Communications, the Emergency Communications Preparedness Center (ECPC), and the National Communications System (NCS).

Federal government partnerships have also been launched to help facilitate the development of secure and interoperable communications. Originally developed by a DoD/National Security Agency (NSA) program, the Secure Mobile Environment Portable Electronic Device (SME PED) is a Type 1-capable mobile and wireless device designed to enhance cross communication and support the missions of US military, intelligence, and homeland security organizations. The SME PED offers converged secure Type 1 and non-Type 1 wireless voice and data capabilities and is the first portable device to offer multiple independent levels of secure communication in both unclassified and classified modes.

3.3 CRITICAL NEEDS ANALYSIS

The protection, response, and recovery capabilities of federal, state, and local authorities rely on guaranteed secure wireless communications to extend their disparate wired infrastructures to the first responder. DHS's dynamic business requirements and distributed wired infrastructure require complex solutions that are more difficult to secure. All connections to the wireless network must be authenticated, authorized, and protected. Guaranteed communications also require that the wireless communications infrastructure provide a high level of availability.

Wireless security must address the areas where the wireless network is vulnerable: the device level and the network level (wireless and wireline). Wireless devices represent the edge of the extended infrastructure and should be thoroughly secured in case of loss or theft. Unfortunately, a negative trade-off in performance exists when implementing device security: mobile or portable devices have limited processing power and battery capacity, so implementing additional device security controls will adversely affect responsiveness and battery life.


Wireless access points are the gateways to an organization's trusted wired network. Although the air interface may be properly secured, a common shortfall of wireless architectures is their infrastructure. Generally, attacks fall into one of two categories: active or passive. Active attacks require actions on the part of the attacker to penetrate or disrupt the network; passive attacks are used primarily for information gathering and surveillance. Intrusion prevention systems (IPSs) provide a layer of defense and help protect a wireless system against threat consequences, including unauthorized disclosure, disruption, deception, and corruption [21].

3.3.1 Intrusion Prevention Systems

Wireless IPSs operate at Layer 2 of the OSI model, compared to Layer 3 and higher for wireline systems. IPSs are used to detect the presence of rogue and misconfigured devices, scan the air medium for denial of service (DoS) and other forms of attack, and take defensive actions to protect the network. A key characteristic of wireless IPSs is their responsibility to ensure that only authorized devices participate in an enterprise's wireless network. In addition, wireless IPSs address a variety of wireless-only attacks and respond with predefined steps that usually involve the wireless infrastructure. For example, an IPS must be able to identify and respond to known vendor implementation weaknesses, such as the vulnerability of the Lightweight Extensible Authentication Protocol (LEAP) to dictionary attacks.

3.3.2 Internet-Based Security Protocols

Wireless technologies must leverage the security capabilities of Internet-based standards. Wireless technologies that implement their own unique security protocols hinder secure and interoperable communications. Security should not be a barrier to communications but rather a catalyst for interoperability.

Federal wireless communications require encryption, authentication, and authorization, which in turn depend on symmetric and asymmetric keys and on the generation, distribution, storage, and destruction of cryptographic key material. Public safety technologies should employ key management solutions that are standards based and promote interoperability.

Wireless technologies can leverage the mature security protocols implemented at higher layers in the protocol stack. For example, Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are Layer 4, certificate-based security protocols used extensively over the Internet; Internet Protocol Security (IPSec) is a Layer 3 security protocol that secures Layer 4 and higher. It is important to note that encryption protects only the layers above its implementation; for example, masking an Internet address requires Layer 1 or 2 security, which is where wireless security resides. Secure Multipurpose Internet Mail Extensions (S/MIME) is a security protocol that adds asymmetric encryption for secure text messaging and is most commonly used for Internet e-mail. Table 3.4 details some common OSI-layer security considerations.

TABLE 3.4 OSI Layers and Corresponding Security Protocols

  Application: Detect and prevent malicious code, viruses, and other malware. Mitigation tools include firewalls, antivirus applications, and intrusion detection applications.
  Presentation: Protect data files by cryptography (e.g., file password encryption).
  Session: Protect the system from port exploits and validate digital certificates. SSL operates between the session and transport layers.
  Transport: Provide authentication and secure end-to-end communications. Encryption protocols include Secure Shell (SSH-2) and Simple Key Management for Internet Protocols (SKIP).
  Network: Protect the routing and forwarding protocols. The IPSec standard provides multiple simultaneous tunnels, versus the single-connection limit of the lower-layer encryption standards.
  Data Link (wireless security): Protect the MAC layer from masquerade, DoS, impersonation, and Address Resolution Protocol (ARP) threats. Common encryption protocols include the Point-to-Point Tunneling Protocol (PPTP) and the Layer 2 Tunneling Protocol (L2TP).
  Physical (wireless security): Prevent jamming and DoS attacks in the air medium through frequency hopping and similar techniques.
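
As a concrete illustration of the transport-layer row in Table 3.4, the following sketch opens a certificate-validated TLS connection with Python's standard ssl module. The hostname is a placeholder, and a real deployment would layer its own trust-store and policy configuration on top of this minimal setup.

```python
import socket
import ssl

# Minimal TLS client: the default context validates the server certificate
# chain and hostname, providing the authenticated, encrypted end-to-end
# channel described for the transport layer in Table 3.4.
context = ssl.create_default_context()

hostname = "example.org"  # placeholder host
with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version())                 # negotiated protocol version
        print(tls_sock.getpeercert()["subject"])  # validated server identity
```
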

Federal agencies would benefit from leveraging DoD best practices and technology research. For example, NIST and NSA established the National Information Assurance Partnership (NIAP) to evaluate Information Technology (IT) product conformance to international standards. The program, officially known as the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS) for IT Security, is a partnership between the public and private sectors to help organizations select commercial off-the-shelf IT products that meet international security requirements and to help manufacturers of those products gain acceptance in the global marketplace. Twenty countries now recognize the Common Criteria as the official third-party evaluation criteria for IT security products. Civilian agencies may wish to explore NIAP in their product selection to promote security and interoperability throughout the federal government.

First responders and other federal, state, and local personnel will increasingly need to carry radios with multiple air interfaces to interoperate with other agents and with civilians in need. The ideal DHS handset would support public safety communication using P25; cellular telecommunications using CDMA, GSM, and evolving 3G protocols; and personal area communication using Bluetooth, ZigBee, or ultra-wideband techniques. The future will inevitably bring even more capabilities and IP-based services. First responders need enhanced mobile devices that leverage industry standards to provide secure, scalable, and interoperable communication architectures. In addition, devices need to have a small form factor and be upgradeable, preferably over the air. Approaches that rely on hardware alone do not meet these requirements, because adding hardware-based interfaces increases size and weight, and hardware cannot be modified over the air. Consequently, radio systems will rely increasingly on software. An ideal software radio will be able to switch among all of the air interface protocols discussed on a single platform. Such devices might also employ adaptive or cognitive radio technology that senses the spectrum environment and automatically makes choices to optimize the user's communications objectives (e.g., associating with a local Wi-Fi access point when a WiMAX connection is no longer available).

The flexibility of Software Defined Radios (SDRs) also introduces a significant security risk. In particular, when radio systems are software defined, they are vulnerable to software virus attacks. To modify a hardware radio, an attacker must physically touch each device and must have some level of radio expertise.


In SDR systems, the potential exists for an attacker to disable thousands of radios over the air using automated tools acquired from publicly accessible web sites. To counter this threat, SDRs can be designed to authenticate mobile code using digital signature technology. DHS personnel, for example, might only download and execute radio software that has been digitally signed by a DHS authority that vouches for its reliability. SDR platforms can also benefit from isolation kernels and operating systems that can partition computing resources to limit the impact of malicious code. Finally, trusted computing technologies, such as platform attestation and sealed storage, allow network managers to ensure that only terminals with approved software loads are operating on the network.
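
A minimal sketch of this mobile-code authentication idea follows, assuming an Ed25519 signing key held by the software authority and using the third-party Python cryptography package for the primitives. A real SDR platform would anchor the verification key in tamper-resistant hardware and define its own image and signature formats; everything here is illustrative.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The authority (e.g., a hypothetical DHS signing office) signs each approved
# radio software image; every radio holds only the public verification key.
authority_key = Ed25519PrivateKey.generate()
public_key = authority_key.public_key()

radio_image = b"...waveform software image bytes..."
signature = authority_key.sign(radio_image)

def load_radio_software(image: bytes, sig: bytes) -> None:
    """Radio refuses to install any image whose signature does not verify."""
    try:
        public_key.verify(sig, image)  # raises InvalidSignature on failure
    except InvalidSignature:
        raise RuntimeError("unsigned or tampered radio software rejected")
    # ...proceed to install and execute the verified image...

load_radio_software(radio_image, signature)
```
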

3.4 RESEARCH DIRECTIONS

The Internet is the largest system ever built, and leveraging the Internet's security protocols not only improves interoperability but also improves the public's ability to react to and protect against new threats. If a transport medium employs its own unique security protocols, adversaries will quickly adapt and attack the network where it is most vulnerable. Strong physical security or wired encryption is ineffective if the extended wireless network employs untested or weak security controls. Research and development in the wireless security industry is ongoing and advancing. Grants and loans by federal agencies encourage technological advances, help promote economic growth, and encourage international trade.

3.4.1 Intrusion Prevention Systems

As vendors continue to offer all-in-one security solutions, the gap between wired and wireless IPS solutions should narrow. Monitoring the wireless medium alone will not properly safeguard an infrastructure from threats. One current trend is that wireless hardware vendors are partnering with wireless IPS software providers to help meet the NIAP Common Criteria certification required by the DoD.

3.4.2 Internet-Based Security Protocols

Wireless standards that do not support IP hinder progress and may not provide holistic security or interoperability. Even the well-entrenched LMR (P25) market has started to embrace IP communications. Key and identity management will not be ubiquitous until wireless technologies agree on a common approach or leverage an existing solution, such as the Internet public key infrastructure (PKI) or the DoD PKI infrastructure. Key generation and distribution between multiple organizations remains a challenge for a number of reasons; it is constrained by both technology and bureaucracy. In addition, an addressing scheme (i.e., a subscriber number) may better enable first responders and law enforcement personnel (federal, state, local, and tribal) to intercommunicate. For example, an IP (or VoIP) Private Branch Exchange (PBX) can incorporate cellular and other IP-enabled devices into a department's in-house telephone switching system, thereby increasing information sharing.


3.4.3 Overlaying Security Services over IP-based Access Networks

When Transmission Control Protocol (TCP)/IP is used as the ubiquitous transport medium, wired and wireless devices can communicate securely if all devices employ the same effective security protocol. For example, personal digital assistants (PDAs) loaded with a secure application can automatically detect and communicate securely with instances of the same application on other devices in the network.

IMS is a standardized networking architecture that provides IP-based services through mobile and fixed networks. As mentioned earlier, IMS is agnostic to access technology and can therefore operate with any endpoint that supports IP. The service-oriented architecture of IMS facilitates the development of new services while supporting legacy systems. IMS uses open standard IP protocols defined by the Internet Engineering Task Force (IETF); therefore, sessions between two IMS users, between an IMS user and a user on the Internet, and between two users on the Internet are established using exactly the same protocol. IMS bridges the interoperability gap between disparate wireless networks by operating with any endpoint that supports IP capabilities.

3.4.4 Mobile Device Security

Improvements in PED processing power, coupled with decreased power requirements, should be exploited to provide more advanced device security. Mobile devices face four general categories of risk: device access, data storage, data transmission, and data access. Technological advances should be leveraged to comply with Homeland Security Presidential Directive 12 (HSPD-12), and personal identity verification (PIV) solutions that already exist should be explored and implemented where appropriate, such as the DoD PKI used for the Common Access Card program. Greater cooperation and information sharing between civilian and defense agencies can help satisfy many legislative requirements focused on information security and empower the country to react better to natural and man-made disasters.

REFERENCES

1. 3GPP. 3GPP Specification Series. http://www.3gpp.org/ftp/Specs/html-info/33-series.htm.
2. Rose, G. (2004). Access security in CDMA2000, including a comparison with UMTS access security. IEEE Wireless Communications, February.
3. Hart, J. W., Imel, K. J., Powell, J., Tom, T., and Funk, D. (2003). Understanding Wireless Communications in Public Safety.
4. TIA (2002). Standard: Digital Land Mobile Radio, Security Services Overview, TIA-102.AAAB, Telecommunications Industry Association, Virginia, August 2002.
5. TIA/EIA (2002). Standard: Project 25, Block Encryption Protocol, TIA-102.AAAD, Telecommunications Industry Association, Virginia, July 2002.
6. TIA/EIA (2001). Standard: Project 25, Digital Radio Over-the-Air Rekeying (OTAR) Protocol, TIA-102.AACA, Telecommunications Industry Association, Virginia, April 2001.
7. TIA/EIA (2000). Standard: Data Overview—New Technology Standards Project—Digital Radio Technical Standards, TIA-102.BAEA-2000, Telecommunications Industry Association, Virginia, March 2001.


8. Barber, G., Bunn, L., Eydt, B., Karygiannis, T., and Phillips, T. (2007). NIST SP 800-98, Guidelines for Securing Radio Frequency Identification (RFID) Systems.
9. ANSI/IEEE (1999). Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, ANSI/IEEE Std 802.11, 1999 Edition.
10. Arbaugh, W. A., Shankar, N., Wan, Y. C. J., and Zhang, K. (2002). Your 802.11 wireless network has no clothes. IEEE Wireless Communications 9(6), 44-51.
11. Aime, M. D., Calandriello, G., and Lioy, A. (2007). Dependability in wireless networks: can we rely on WiFi? IEEE Security & Privacy 5(1), 23-29.
12. Edney, J., and Arbaugh, W. A. (2003). Real 802.11 Security, Addison-Wesley Longman Publishing Co., Inc., Boston, MA.
13. Frankel, S., Eydt, B., Owens, L., and Kent, K. (2006). Guide to IEEE 802.11i: Establishing Robust Security Networks, National Institute of Standards and Technology Draft Special Publication 800-97, June, Gaithersburg, MD.
14. National Institute of Standards and Technology (2001). Announcing the Advanced Encryption Standard (AES), Federal Information Processing Standards Publication 197, Gaithersburg, MD.
15. IEEE (2004). IEEE Standards for Local and Metropolitan Area Networks—Port-Based Network Access Control, IEEE Std. 802.1X, 2004 Edition.
16. Aboba, B., Blunk, L., Vollbrecht, J., Carlson, J., and Levkowetz, H. (2004). Extensible Authentication Protocol, Network Working Group, Request for Comments 3748.
17. Koien, G. M. (2007). Public Information Collection Requirement Submitted to OMB for Emergency Review and Approval, August 30, 2007. http://a257.g.akamaitech.net/7/257/2422/01jan20071800/edocket.access.gpo.gov/2007/E7-17507.htm.
18. American National Standards Institute (2008). Emergency Communications Standardization, ANSI Homeland Security Standards Panel, April 2008. http://publicaa.ansi.org/sites/apdl/Documents/Standards%20Activities/Homeland%20Security%20Standards%20Panel/Workshop%20Reports/Emergency%20Communications.pdf.
19. National Institute of Standards and Technology (2008). NIST Special Publications, Computer Security Division: Computer Security Resource Center. http://csrc.nist.gov/publications/nistpubs/.
20. National Institute of Standards and Technology (2008). Validated FIPS 140-1 and FIPS 140-2 Cryptographic Modules, Cryptographic Module Validation Program. http://csrc.nist.gov/cryptval/140-1/140val-all.htm.
21. (2006). IP multimedia subsystems (IMS): a standardized approach to all-IP converged networks. Bechtel Communications Technical Journal 4(1), 13-36.
22. Shirey, R. (1997). Internet Security Glossary, Network Working Group, Request for Comments 2828.

FURTHER READING

Ahson, S. A., and Ilyas, M. (2007). WiMAX Handbook, CRC Press, New York.
American National Standards Institute (2008). Emergency Communications Standardization, ANSI Homeland Security Standards Panel, April 2008. http://publicaa.ansi.org/sites/apdl/Documents/Standards%20Activities/Homeland%20Security%20Standards%20Panel/Workshop%20Reports/Final%20Report%20from%20ANSI-HSSP%20EC%20Workshop.pdf.
FPIC website. http://www.dhs.gov/xprepresp/committees/gc_1170097478666.shtm.
Harte, L. GSM Overview: Introduction to GSM: Physical Channels, Logical Channels, Network, and Operation. USA.
Heine, G. GSM Networks: Protocols, Terminology, and Implementation. USA.
IEEE 802 LAN/MAN Standards Committee, http://ieee802.org/.


IEEE 802.11 Working Group for WLAN Standards, http://grouper.ieee.org/groups/802/11/.
IEEE 802.16 Working Group on Broadband Wireless Access Standards, http://grouper.ieee.org/groups/802/16/.
IEEE 802.20 Mobile Broadband Wireless Access (MBWA), http://grouper.ieee.org/groups/802/20/.
IEEE 802.21 Media Independent Handover Services, http://grouper.ieee.org/groups/802/21/.
National Institute of Standards and Technology. Federal Information Security Management Act Implementation Project, Computer Security Division: Computer Security Resource Center. http://csrc.nist.gov/sec-cert/.
Potter, B. (2003). Wireless security's future. IEEE Security & Privacy Magazine 1(4), 68-72.
Project 25, Project 25 Technology Interest Group, http://www.project25.org/modules.php?name=Content&file=viewarticle&id=2.
The Global Standards Collaboration, http://www.gsc.etsi.org/.
Wi-Fi Alliance, http://www.wi-fi.org.


4

DETECTION OF HIDDEN INFORMATION, COVERT CHANNELS AND INFORMATION FLOWS

Neil F. Johnson and Phil A. Sallee
Booz Allen Hamilton, McLean, Virginia

4.1 INTRODUCTION

The oldest form of protecting information is to conceal its existence: hide it. Numerous methods for hiding information have been developed through the years, ranging from physical concealment of objects that are "hidden in plain sight" to high-tech methods for covert communications in digital media. Examples of hiding information include hidden tattoos, covered writing, invisible inks, microdots, grille and null ciphers, code words, digital signatures, covert channels, and spread-spectrum communications, to name but a few [1-5].

Steganography is used to conceal the existence of hidden messages within seemingly innocuous carriers. Common techniques in digital steganography camouflage the intended message within another object or media, referred to as either the cover or the carrier. By far, the most common steganography tools embed information within image files. However, hidden information can be embedded within nearly any type of digital media or information flow, and research also extends beyond digital media to DNA [6-8], chemical compounds [9], and circuit boards [10-12]. The focus of this chapter is digital steganography: hiding in electronic files and media.

Covert channels, though not necessarily designed for communication, provide a means to communicate through the misuse of another mechanism, typically a shared resource. Such a mechanism can be exploited to convey information from a higher (more secure) environment to a lower (less secure) environment [13]. Human ingenuity and the availability of shared resources provide a variety of avenues for communication. For example, convicted spy Robert Hanssen would signal to his counterparts that information could be picked up by placing a chalk mark on a park sign: no mark, no information. In a digital world, signaling takes place in 0s and 1s. Any organization or individual requiring secret communication will employ whatever technology is at their disposal to achieve the perceived secrecy. As a result, there is practically no limit to the variety of steganography implementations that may be developed.



[FIGURE 4.1 The observed distribution of media types supported as carriers for hidden information by steganography-related software developed from 1992 through 2007: images 56%, audio 15%, disk space 14%, text 8%, video 3%, other files 3%, network 1%.]

Inherent redundancy in many digital media formats provides ample storage space for hidden information. Examples of potential carriers include text [14-16], audio [17-19], images (the majority of steganography research), video [20-22], and hidden file systems [23, 24]. Research and techniques also exist for exploiting network packets and protocols to establish covert channels for communication in information flows [25-38]. Figure 4.1 illustrates the general distribution of media types used as carriers for hidden information by publicly available steganography software.

With the growth of the Internet, the introduction of new digital media formats, and increased concerns over privacy and security, tools and methods for protecting individuals and their information continue to expand. Each year reveals growing interest in steganography as new tools are developed, new research papers are published, and the number of academic research programs and conferences dedicated to the subject of hiding digital information continues to increase. Research efforts on the subject span the globe, and hundreds of new steganography-related software programs are released on the Internet each year. Some of these tools are academic in nature, such as student projects; others are malicious utilities designed to circumvent security mechanisms; and others are commercially marketed. Some steganographic products are marketed to the public with claims of protecting personal information, or they encourage the use of steganography where cryptography might attract scrutiny.

4.2 SCIENTIFIC OVERVIEW

While many steganography tools and academic papers still focus primarily on the imperceptibility of the hidden information, the most current art in steganography takes into account the potential for statistical steganalysis. Likewise, research efforts in steganalysis quickly respond to new steganography methods and techniques as they are presented. Steganography and steganalysis research efforts form a cat-and-mouse game that continually advances the state of the art, similar to that of cryptography and cryptanalysis.

Both cryptography and steganography may be used to secure information; however, the two address differing requirements. Cryptography secures information by scrambling it, thus rendering a message unintelligible. Yet cryptographic data may still be observed, at rest or in transmission: if an encrypted message is intercepted, the interceptor knows the text is an encrypted message. Steganographic security is obtained by keeping the very existence of the embedded message from being discovered, essentially camouflaging information so that any observable data or communication does not appear out of the ordinary. The steganography has failed if an observer can determine that a hidden message exists. Like cryptography, however, truly good steganography is challenging to design. Detectable signatures can easily be introduced during the embedding process. Consequently, relying solely on "security by obscurity" (concealing the details of the encoding/decoding algorithm) is considered highly inadvisable. Strong steganography, like cryptography, relies on academic peer review to identify potential weaknesses.

4.2.1 Hiding Information

Assume a digital carrier $C_{\{p,h\}}$. $C_p$ is the perceptual portion of the carrier, and any manipulation of this portion will be readily noticeable. $C_h$ is the portion of the carrier that falls below the perceptual threshold, and manipulation of this portion will not be readily noticed. If the construct $a + b$ denotes a composition of $a$ and $b$, then the carrier may be represented as $C_{\{p,h\}} = C_p + C_h$. The size of $C_h$ depends upon the properties of the carrier, the complexity of the data hiding process, and the need to balance constraints of imperceptibility versus robustness (survivability of the embedded data under some level of distortion). If the operation $e \rightarrow f$ represents a process for hiding data $e$ in $f$, then the embedding process may be represented as $m \rightarrow C_h = C'_h$, where the message $m$ is hidden in the imperceptible component $C_h$ of the carrier $C$. The resulting modified carrier takes the form $C'_{\{p,h\}} = C_p + C'_h$ and is perceptually indistinguishable from $C_{\{p,h\}}$. The resulting $C'_{\{p,h\}}$ contains the hidden message and is referred to as the steganogram or stego-media (images may be referred to as stego-images, audio as stego-audio, etc.).

A common method for hiding data in a digital carrier is to manipulate the bits that have the least impact on the observable carrier if changed. In 24-bit images such as bitmaps (BMP) or audio files such as PCM WAV, the least significant bits (LSBs) provide an instance of the $C_h$ portion of these carriers. Table 4.1 illustrates the impact of changing the LSBs of a color cell: changing three bits of data has no visual impact, and even changing the lowest four bits of the color block has little visible impact on the color. Some bits can therefore be used to embed arbitrary data without significantly changing the image [39]. Research examining image color reduction based on human vision limitations [40] inspired others to experiment with data hiding techniques and develop steganography tools [41].


TABLE 4.1 The Impact of Changing the LSBs of a Color Cell

                                 Red              Blue             Green
  Original                       11000000 (192)   11000000 (192)   11000000 (192)
  Only LSBs changed              11000001 (193)   11000001 (193)   11000001 (193)
  Half the image data changed    11001111 (207)   11001111 (207)   11001111 (207)

A 24-bit image that is 1024 by 768 pixels yields 2,359,296 (1024 × 768 × 3) bytes of image data in its pixels. Changing only the LSBs permits a storage capacity of 294,912 bytes. This chapter, with its figures and tables, would fit with room to spare in such an image (escaping detection is an altogether different matter). Many steganography tools that embed in BMP images or WAV audio files hide data using this type of technique.

The compressed formats GIF and JPEG are more prevalent on the Internet than BMP files. GIF images have at most 256 colors and use only one byte to represent an image pixel. This byte is a pointer into a color table (palette) that is used to paint the colors of the image on the screen. Changing the LSB of one of these pointer bytes changes the position being pointed to in the color palette; if the neighboring palette colors are distant, changing the LSB will produce a visible difference in the resulting stego-image [39, 42]. To avoid this, steganography developers have investigated ways of arranging image palettes so that near colors are closer [43], or of reconstructing new palettes [41], so that LSB manipulation is less visible. Because JPEG images are highly compressed, the pixel data is not as readily available for manipulation; manipulation of JPEG images therefore occurs most commonly in the discrete cosine transform (DCT) compression coefficients. However, given a process to decode and re-encode coefficients within the JPEG format, hiding data within DCT coefficients can be performed much like changing the LSBs of pixels in a BMP file.

Early methods for steganography follow the simple LSB embedding technique described above. As research progressed and methods for detecting hidden data were published, hiding techniques became more elaborate. Researchers began combining cryptographic techniques, scrambling and diffusing the hidden messages across the cover media to make message recovery more difficult if detected. Later techniques consider properties of the cover media and try to mimic the expected statistics of the bits that are manipulated within the cover, or attempt to model other characteristics of the cover media so that embedded messages are more difficult to detect [44-52]. Each of these improvements in data hiding prompted related research in detection [42, 50, 53-65].
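
A minimal sketch of this LSB technique over a raw byte buffer (for instance, the pixel bytes of a 24-bit BMP) appears below. It assumes the cover is large enough and ignores file headers, encryption of the payload, and any resistance to the detection methods discussed later.

```python
def embed_lsb(cover: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("message too large for cover")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # replace the LSB only
    return stego

def extract_lsb(stego: bytes, num_bytes: int) -> bytes:
    """Recover num_bytes of hidden data from the cover's LSBs."""
    out = bytearray()
    for i in range(num_bytes):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

# A 1024 x 768, 24-bit image has 1024 * 768 * 3 = 2,359,296 cover bytes,
# so one LSB per byte yields 2,359,296 / 8 = 294,912 bytes of capacity.
cover = bytearray(2_359_296)
stego = embed_lsb(cover, b"hidden message")
assert extract_lsb(stego, 14) == b"hidden message"
```
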

4.3 COUNTERMEASURES

Countermeasures to hidden communications fall into two general categories: detection and disruption. Steganalysis involves analyzing steganography algorithms, techniques, and their output to devise methods to detect the presence of hidden data and, if possible, extract the data to reveal the hidden message. Alternatively, one may be more interested in protecting legitimate communications and disrupting potential covert communications by rendering the hidden data unusable.

4.3.1 Countermeasures: Detection

The basis of detecting hidden information involves making observations similar to those of forensic analysis of files and systems. These observations aid in determining whether applications were used to hide information, or whether any media contains hidden data. They include examining media for indicators of manipulation by various steganographic tools, and may lead to the development of new steganalysis techniques. Two types of signatures emerge when investigating hidden information. System signatures are residual artifacts on computer systems that result from the installation, execution, or removal of tools that hide information [39]. Steganographic signatures are detectable distortions that occur in the carrier media when the carrier is manipulated to conceal the hidden data [39, 42].

The identification of steganographic signatures generally requires extensive experimentation with various products that provide information hiding capabilities. Stego-media is compared with the original carrier media to determine what properties or characteristics change when data is hidden. Such experimentation is also useful for determining the capabilities and limitations of data hiding methods (i.e., their breaking points), and for discovering signatures that may be leveraged for more rapid detection. Distortions that take the form of repeatable patterns and provide reliable indication that a steganography tool has been used are often referred to as hard signatures. The Steganalysis Research Center (SARC) produces forensic and steganalysis tools that look for system signatures and hard signatures in stego-media [66].

Hard signatures do not provide a complete solution to the steganalysis problem, however. Developing a database of signatures requires time, and the growth in the number of steganography applications exceeds the current rate of discovery of steganographic signatures in forensic and steganalysis tools. Additionally, not all hiding techniques produce such patterns. In some cases, distortions caused by a steganographic method may simply violate the expected structure of the carrier media to a degree that allows some uncertainty as to the cause of the distortion. Here, steganalysts must rely on soft signatures, based on statistical anomalies in the stego-media, to identify potential data hiding.

To address the rapid growth in steganography tools and techniques, other detection methods need to be employed. Research is progressing in the area of blind detection, which does not rely on specific knowledge of a steganography technique and holds the promise of detecting even previously unknown steganography applications. Most academic research in steganalysis currently pursues approaches for statistical and blind detection [50, 53-58, 61-65]. Wetstone Technologies produces a steganalysis suite for investigations that detects system signatures (Gargoyle) and assists investigators in discovering stego-media based on statistical analysis [67]. Other researchers have also made attempts at producing steganography detection techniques, with mixed results [64].
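
The soft-signature idea can be made concrete with a simplified version of the chi-square test on pairs of values, an early statistical attack on sequential LSB embedding discussed in Section 4.4. The sketch below computes the test over a byte sample (the third-party SciPy package supplies the chi-square distribution); thresholds, sampling windows, and the exact statistic vary across published formulations, so treat this as illustrative.

```python
from collections import Counter
from scipy.stats import chi2  # third-party SciPy, assumed available

def lsb_embedding_probability(samples: bytes) -> float:
    """Simplified chi-square test on pairs of values (PoVs).

    Sequential LSB replacement tends to equalize the counts of each
    value pair (2k, 2k + 1). A small statistic, and therefore a
    probability near 1, suggests the LSB plane carries a message.
    """
    counts = Counter(samples)
    statistic = 0.0
    dof = 0
    for k in range(128):
        expected = (counts[2 * k] + counts[2 * k + 1]) / 2.0
        if expected > 5:  # skip sparse pairs so the test stays valid
            statistic += (counts[2 * k] - expected) ** 2 / expected
            dof += 1
    if dof < 2:
        return 0.0
    # Probability that the counts are at least this equalized by chance.
    return 1.0 - chi2.cdf(statistic, dof - 1)
```
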


researchers have also made attempts at producing steganography detection techniques with mixed results [64]. Both hard and soft signature approaches to detecting stego-media have merit. However, the volume of emerging steganography methods and tools is a hindrance in producing hard signatures for them all. Blind and Statistical detection methods are generally limited in accuracy, due to the intrinsic variability and unpredictability of media content as well as variations due to lossy compression, transcoding, and processing. Even relatively small false alarm rates can overwhelm analysts working with large data sets. Combining features and signatures may help investigators to make better determinations and reduce false alarms. 4.3.2

4.3.2 Countermeasures: Disruption

In some cases, preventing data leakage may be of greater importance than detecting the presence of hidden data. In this case an active warden may manipulate suspect media or traffic in an attempt to render any embedded data unusable. Revisiting the steganography system C_{p,h} = C_p + C_h, in which the stego-media C_{p,h} combines a perceptible component C_p and a hidden component C_h, the warden may manipulate C_h to change, overwrite, or remove any embedded data without noticeable distortion to C_{p,h}. From the warden's viewpoint, as long as the manipulation of C_h remains below the perceptual level, a C_h of the warden's choosing exists that prevents the hidden information from being passed [3]. Researchers have investigated using such approaches in multimedia (images, audio, video) [42, 61, 69–73] and as countermeasures to hiding in network traffic and covert channels [31, 38, 74–80]. Some disruption attacks can be defeated. The Stirmark tool was developed to test the robustness of watermarking algorithms by applying various distortions to images [71]. In [81], however, the authors described a generalized countermeasure against the distortion attacks executed by Stirmark, demonstrating the ability to recover previously unreadable watermarks from distorted images.
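As a concrete illustration of active-warden disruption, the minimal sketch below (Python with NumPy) re-randomizes the least significant bit of every pixel, overwriting any naive LSB payload while changing each pixel by at most one gray level. This is an illustrative toy of our own construction, not one of the cited systems; real wardens, such as the one described in [77], normalize many media and protocol fields rather than only LSBs.

    import numpy as np

    def scrub_lsbs(image, rng=None):
        """Return a copy of `image` (uint8 array) with every LSB randomized,
        destroying naive LSB payloads at a cost of at most 1 gray level."""
        rng = rng or np.random.default_rng()
        noise = rng.integers(0, 2, image.shape, dtype=np.uint8)
        return (image & 0xFE) | noise  # clear each LSB, then set it randomly

    suspect = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    clean = scrub_lsbs(suspect)
    # Perceptual cost is bounded: no pixel moves by more than one level.
    assert np.all(np.abs(clean.astype(int) - suspect.astype(int)) <= 1)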

4.4 RESEARCH AND DEVELOPMENT TRENDS

4.4.1 Research Trends

Application development and academic research in digital steganography and steganalysis started gaining momentum during the 1990s and has steadily increased in depth and breadth since. The first academic conference on the subject, the International Information Hiding Workshop, was held in Cambridge, UK, in 1996 [82]. Conferences in information hiding share venues and ideas with research in related areas, including anonymity and privacy, cryptology, computer security, and forensics. Continued research in information hiding takes inspiration from many other fields as well, including signal detection theory, information theory, signal processing, computer vision, and machine learning. Although it has different end goals, digital watermarking is closely related to steganography as a type of information hiding, and the two have developed alongside one another with many shared ideas and techniques. Research in both areas has focused on expanding information hiding concepts to address many different types of digital media and signals, and on increasing the sophistication of the techniques employed and their ability to conceal and reveal information using statistical methods.

The earliest academic research in digital steganography focused primarily on hiding from view, that is, from a human observer, in a variety of digital formats. Common techniques of this era included modifying LSBs of a carrier signal, or hiding in specific bit-planes of binary values [39, 83]. Images provided an attractive carrier, with a relatively large capacity for hiding information, and BMPs were an easy target for manipulation due to their simple format. Later, academic research in image steganography shifted to hiding in DCT coefficients in the more popular JPEG format [47, 51, 52], and eventually also in the wavelet transform domain [84]. Hiding in text gained early interest before images were prevalent in e-mail attachments, and while the Internet was still in its infancy. Text embedding techniques included whitespace manipulation, such as line and word spacing [16], and synonym replacement [14, 15]. Other media formats were also exploited, including audio [17, 19] and internet protocols [28, 30, 35].

Early approaches to digital steganalysis included visualization techniques and histogram tests such as the Chi-squared attack, which demonstrated weaknesses in LSB steganography [42, 65]. In response, steganography algorithms were designed to avoid detection by these specific attacks, and the focus shifted toward resisting statistical detection rather than avoiding human observation. For example, LSB methods were replaced by ±1 additive embedding [85], and hiding in DCT coefficients by reduction in magnitude [51] and histogram preservation [47]. As steganography improved in sophistication, research in steganalysis quickly responded, focusing primarily on specific attacks to counter known steganography algorithms [59, 60].

Theoretical research has focused on the possibility of provably secure steganography. Taking inspiration from cryptology and information theory, Cachin introduced the concept of epsilon-security for steganography based on the similarity between the true cover and embedded cover distributions in terms of information-theoretic divergence measures [86]. A link was recognized between compression and steganography that later served as inspiration for model-based techniques [49, 50, 87]. Some work has also considered the maximum capacity of secure steganographic embedding [49, 88]. Drawing from the computer vision and natural scenes communities, other research has compared the statistics of image steganography with the statistics of natural scenes [89], with obvious implications for steganalysis.

Out of these early starts in digital steganography, some recent trends have emerged. Steganography methods began to focus less on avoiding specific attacks and more on preserving the statistical properties of the cover media and reducing the number of required changes to the media. For example, matrix-embedding techniques provide a means to reduce the number of changes required to hide information by trading off capacity for a given cover size (a minimal illustration follows below). Also, model-based techniques use statistical models of the cover media to preserve the modeled cover statistics more accurately and efficiently [49, 50, 90, 91]. Some more recent steganography techniques dynamically adapt to the characteristics of the cover media; for example, they may identify locations with higher variability that are considered more suitable for embedding data.
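To illustrate the capacity-for-changes trade-off of matrix embedding, the sketch below uses the parity-check matrix of the [7,4] binary Hamming code to carry 3 message bits in 7 cover LSBs while changing at most one of them, as in the matrix-embedding scheme popularized by F5 [51]. This is a bare-bones illustration under simplifying assumptions; deployed systems add permutation, encryption, and special handling of problematic coefficients.

    import numpy as np

    # Parity-check matrix of the [7,4] Hamming code. Columns are the binary
    # numbers 1..7, so a nonzero syndrome directly names a position to flip.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

    def embed(cover_lsbs, msg_bits):
        """Embed 3 message bits into 7 cover LSBs, changing at most one bit."""
        x = np.array(cover_lsbs, dtype=np.uint8)
        syndrome = (H @ x) % 2 ^ np.array(msg_bits, dtype=np.uint8)
        if syndrome.any():
            # Flip the position whose column of H equals the syndrome.
            pos = int(syndrome[0]) * 4 + int(syndrome[1]) * 2 + int(syndrome[2]) - 1
            x[pos] ^= 1
        return x

    def extract(stego_lsbs):
        """Recover the 3 embedded bits as the syndrome of the stego LSBs."""
        return (H @ np.array(stego_lsbs, dtype=np.uint8)) % 2

    cover = [0, 1, 1, 0, 1, 0, 0]   # LSBs taken from 7 cover samples
    message = [1, 0, 1]
    stego = embed(cover, message)
    assert list(extract(stego)) == message
    print(sum(c != s for c, s in zip(cover, stego)), "bit(s) changed")

Plain LSB replacement would spend roughly one change per message bit on average; here 3 bits cost at most 1 change, at the price of consuming 7 cover samples.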
A significant advancement along these lines is known as informed embedding, which uses side information about the media that is not preserved or passed to the receiver in order to improve the quality and undetectability of the resulting stego-object [44, 92]. Similarly, steganalysis methods began to focus more on modeling the properties of cover media, as well as the results of steganographic embedding. Most notable here is the advancement of general, or universal, steganalysis methods, which use machine learning and classification techniques such as Support Vector Machines (SVMs)
to detect steganography based on exemplars rather than on specific knowledge of the algorithms being detected. Taking inspiration from computer vision, SVMs were trained using image wavelet statistics [55–57] and, later, calibrated features computed in the DCT domain [61, 62]. Ongoing work in this area continues to improve the features that are useful for detecting steganography and to explore classification techniques.
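The sketch below illustrates the universal-steganalysis workflow in miniature: extract a feature vector per image, label cover and stego exemplars, and train an SVM classifier (here with scikit-learn). The feature extractor and the synthetic data are simplified stand-ins of our own devising, not the published wavelet or calibrated DCT feature sets cited above.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def toy_cover():
        # Smooth synthetic "image": a row-wise random walk, so neighboring
        # pixels are correlated (very loosely imitating natural images).
        img = np.cumsum(rng.normal(0, 2, (64, 64)), axis=1) + 128
        return np.clip(img, 0, 255).astype(np.uint8)

    def toy_stego(img):
        # Randomize every LSB: a crude stand-in for naive LSB embedding.
        return img ^ rng.integers(0, 2, img.shape, dtype=np.uint8)

    def features(image):
        # Toy per-image features: moments of pixel values and of horizontal
        # differences (real systems use wavelet or calibrated DCT features).
        img = image.astype(np.float64)
        diff = np.diff(img, axis=1)
        return np.array([img.mean(), img.std(),
                         diff.mean(), diff.std(), np.abs(diff).mean()])

    covers = [toy_cover() for _ in range(200)]
    stegos = [toy_stego(c) for c in covers]
    X = np.array([features(i) for i in covers + stegos])
    y = np.array([0] * len(covers) + [1] * len(stegos))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
    print("holdout accuracy:", clf.score(X_te, y_te))

Because the classifier learns from exemplars, the same pipeline can flag embedding methods it was never explicitly coded against, which is the appeal of the blind approach.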

4.4.2 Development Trends

Relatively few developers of steganography-related tools appear to incorporate advances published in related academic research. Those that do introduce such methods in an attempt to reduce the detectability of the hidden content. Thus, as research and development in the academic arena continues, the sophistication of available steganography software continues to improve. For example, many applications now incorporate cryptographic methods to encrypt data prior to embedding, or select pseudo-random locations within the digital carriers to hide their data.

Between 1992 and 2004, interest in steganography exploded at a near exponential rate. The most significant jump in software releases that claim to provide steganographic capabilities appears from 2001 to 2002. This growth may be due, in part, to the security and threat frenzy following the 9/11 attacks in 2001. Shortly after the attacks, some authors of steganography tools abandoned their wares, while others were driven by curiosity, growing markets, and the desire for privacy protection in a seemingly ever more pervasive world. Figure 4.2 illustrates the trend of steganography-related tools released each year from 1992 through 2007 [93]. Due to the increasing volume of tools being released each year, and the fact that some products are available for only a limited time, it can be difficult to accurately depict the overall size of this market. As of the writing of this text, the authors have identified over 3000 tools representing over 1500 software titles.

FIGURE 4.2 Chart illustrates the overall observed trend of steganography software development from 1992 through 2007 [93].

Keeping track of steganography-related tools can be complicated when multiple authors select the same name for their application. For example, over 40 software titles from more than 30 authors are variants of the words "steganography" or "steganographie". Other popular steganography tool names include variants of steg, stego, stegano, camouflage, hide, invisible, and stealth. Steganography research and application development takes place around the globe. Observing research publications and software releases, the top 10 countries with active steganography-related research are (in alphabetical order) Canada, China, France, Germany, India, Italy, Japan, Russia, the United Kingdom, and the United States. A number of steganography applications are multinational, meaning that collaborative software development occurs between individuals from multiple countries. International academic researchers and hacker organizations typically pool talent from multiple countries.

4.5 CRITICAL NEEDS ANALYSIS

Information hiding techniques pose a threat to national security and law enforcement through the potential loss of secrets or intellectual property, and as possible distribution and communication channels for coordinating illicit activities. An investigator or analyst must find ways to analyze information that is not readily apparent, and must seek subtleties that may suggest hidden information. It is not sufficient for investigators to have
tools and techniques for handling password-protected files; they must also be involved in locating and recovering data hidden within seemingly innocuous carriers [69, 70]. Traditional perimeter security mechanisms such as firewalls, intrusion detection systems, and virus scanners are ill-equipped to handle media that may contain hidden information. Some tools available to forensic investigators and analysts are useful for examining slack space on storage devices or matching hash sets for known applications. However, many common forensic tools do not specifically detect the presence of steganographic content in potential carriers. Hash sets are available as part of the National Software Reference Library (NSRL) Project. This project collects and compiles digital signatures of software to produce a reference data set (RDS). The RDS includes signatures of applications including steganography tools and hacking scripts [94].
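A minimal sketch of how such hash sets are applied in practice follows: every file in an evidence tree is hashed and the digest compared against a set of digests for known steganography applications. The digest values below are placeholders, not actual NSRL RDS entries, and the evidence path is illustrative.

    import hashlib
    from pathlib import Path

    # HYPOTHETICAL digests standing in for RDS entries that identify
    # known steganography applications.
    KNOWN_STEGO_TOOL_HASHES = {
        "3f786850e387550fdab836ed7e6dc881de23001b",
        "89e6c98d92887913cadf06b2adb97f26cde4849b",
    }

    def sha1_of(path, chunk=1 << 20):
        """Hash a file in chunks so large evidence files fit in memory."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def scan(root):
        """Yield every file under `root` whose digest matches the hash set."""
        for p in Path(root).rglob("*"):
            if p.is_file() and sha1_of(p) in KNOWN_STEGO_TOOL_HASHES:
                yield p

    for hit in scan("/mnt/evidence"):
        print("known steganography application artifact:", hit)

Note that this detects system signatures (installed tools), not stego-media itself; the two kinds of evidence complement each other in an investigation.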

4.6 RESEARCH DIRECTIONS

The ease of use and abundant availability of steganography tools has authorities concerned about the trafficking of illicit material, or the coordination of terrorist plots, via web page images, audio, video, and other transmissions over the Internet. Methods of message detection, and the understanding of the thresholds of current technology, are continually under investigation. The success of steganography is dependent upon selecting the proper mechanisms; however, a stego-medium that seems innocent may actually broadcast the existence of embedded information upon further investigation. As long as the need exists for covert communications, the development of information hiding techniques will continue. Systems to recover seemingly destroyed information, together with steganalysis techniques, will be useful to authorities in computer forensics, digital traffic analysis, cyber-warfare, and counterterrorism.

Steganographic implementations, through the development of open source or commercial products, generally lag behind the ideas and methods published in the research community. Many tools continue to use only basic methods of attempting to hide information. However, research continues to improve, and hundreds of applications are released each year. The sheer volume of emerging steganography methods and tools requires innovative and accurate approaches to steganalysis. Being able to detect unforeseen techniques requires continued research. Research efforts continue to push the technological envelope, and staying abreast of such research is increasingly important. Technical publications and conferences provide the best window into the evolution of information hiding techniques from the open source community and around the world. For a list of relevant proceedings from the International Information Hiding Workshop, see the Further Reading list.

REFERENCES

1. Kahn, D. (1967, 1996). The Codebreakers. The Macmillan Company, New York.
2. Wrixon, F. B. (1998). Codes, Ciphers and Other Cryptic and Clandestine Communication. Black Dog & Leventhal Publishers, New York.
3. Johnson, N. F., Duric, Z., and Jajodia, S. (2000). Information Hiding: Steganography and Watermarking—Attacks, and Countermeasures. Kluwer Academic Press, Norwell, MA.
4. Katzenbeisser, S., and Petitcolas, F. A. P., Eds. (2000). Information Hiding Techniques for Steganography and Digital Watermarking. Artech House Books, Norwood, MA.
5. Bender, W., Gruhl, D., Morimoto, N., and Lu, A. (1996). Techniques for data hiding. IBM Syst. J. 35(3&4), 313–336.
6. Clelland, C. T., Risca, V., and Bancroft, C. (1999). Hiding messages in DNA microdots. Nature 399(6736), 533–534.
7. Saeb, M., El-Abd, E., and El-Zanaty, M. E. (2007). On covert data communication channels employing DNA recombinant and mutagenesis-based steganographic techniques. WSEAS Trans. Comput. Res. 2(1), 50–56.
8. Shimanovsky, B., Feng, J., and Potkonjak, M. (2003). Hiding data in DNA. In Proceedings of the 2003 Information Hiding 5th International Workshop, Springer Lecture Notes in Computer Science, Vol. 2578, Springer-Verlag, Berlin/Heidelberg, pp. 373–386.
9. Eggers, J. J., Ihlenfeldt, W., and Girod, B. (2001). Digital watermarking of chemical structure sets. In Proceedings of the Information Hiding 4th International Workshop, Springer Lecture Notes in Computer Science, Vol. 2137, Springer-Verlag, Berlin/Heidelberg, pp. 200–214.
10. Lach, J., Mangione-Smith, W. H., and Potkonjak, M. (1998). Fingerprinting digital circuits on programmable hardware. In Proceedings of the Information Hiding Second International Workshop, Springer Lecture Notes in Computer Science, Vol. 1525, Springer-Verlag, Berlin/Heidelberg, pp. 16–31.
11. Lach, J., Mangione-Smith, W. H., and Potkonjak, M. (1999). Enhanced intellectual property protection for digital circuits on programmable hardware. Proceedings of the Information Hiding: Third International Workshop, Vol. 1768, Springer-Verlag, Berlin/Heidelberg, pp. 286–301.
12. Jain, A. K., Yuan, L., Pari, P. R., and Qu, G. (2003). Zero overhead watermarking technique for FPGA designs. Proceedings of the 13th ACM Great Lakes Symposium on VLSI, 28–29 April, 2003, pp. 147–152.
13. Lampson, B. (1973). A note on the confinement problem. Commun. ACM 16(10), 613–615.
14. Chapman, M., Davida, G. I., and Rennhard, M. (2001). A practical and effective approach to large-scale automated linguistic steganography. In Proceedings of the 4th International Conference on Information Security, Springer Lecture Notes in Computer Science, Vol. 2200, Springer-Verlag, Berlin/Heidelberg, pp. 156–165.
15. Bolshakov, I. A. (2005). A method of linguistic steganography based on collocationally verified synonymy. In Proceedings of the Information Hiding 6th International Workshop, Springer Lecture Notes in Computer Science, Vol. 3200, Springer-Verlag, Berlin/Heidelberg, pp. 180–191.
16. Low, S. H., Maxemchuk, N. F., Brassil, J. T., and O'Gorman, L. (1995). Document marking and identification using both line and word shifting. Proceedings of the Fourteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Vol. 2, Boston, Massachusetts, pp. 853–860. DOI: 10.1109/INFCOM.1995.515956.
17. Petitcolas, F. A. P. (1998). MP3Stego software. http://www.petitcolas.net/fabien/steganography/mp3stego/.
18. Franz, E., Jerichow, A., Möller, S., Pfitzmann, A., and Stierand, I. (1996). Computer based steganography: how it works and why therefore any restrictions on cryptography are nonsense, at best. In Proceedings of the Information Hiding First International Workshop, Springer Lecture Notes in Computer Science, Vol. 1174, Springer-Verlag, Berlin/Heidelberg, pp. 7–21.
19. Gruhl, D., Bender, W., and Lu, A. (1996). Echo hiding. In Proceedings of the Information Hiding First International Workshop, Springer Lecture Notes in Computer Science, Vol. 1174, Springer-Verlag, Berlin/Heidelberg, pp. 295–315.
20. Westfeld, A., and Wolf, G. (1998). Steganography in a video conferencing system. In Proceedings of the Information Hiding Second International Workshop, Springer Lecture Notes in Computer Science, Vol. 1525, Springer-Verlag, Berlin/Heidelberg, pp. 32–47.
21. Robie, D. L., and Mersereau, R. M. (2002). Video error correction using steganography. EURASIP J. Appl. Signal Process. 2002(1), 164–173.
22. Xu, C., Ping, X., and Zhang, T. (2006). Steganography in compressed video stream. Proc. IEEE Comput. Soc. First Int. Conf. Innovative Comput. Inf. Control 2006(1), 269–272.
23. Anderson, R. J., Needham, R., and Shamir, A. (1998). The steganographic file system. Proceedings of the Information Hiding Second International Workshop, Vol. 1525, Springer-Verlag, Berlin/Heidelberg, pp. 73–82.
24. McDonald, A. D., and Kuhn, M. G. (1999). StegFS: a steganographic file system for Linux. Proceedings of the Information Hiding: Third International Workshop, Vol. 1768, Springer-Verlag, Berlin/Heidelberg, pp. 454–468.
25. daemon9 (1997). LOKI2: the implementation. Phrack Mag. 7(51), September 01, 1997, article 06 of 17.
26. Dittmann, J., Vogel, T., and Hillert, R. (2006). Design and evaluation of steganography for voice-over-IP. Proceedings of the IEEE Circuits and Systems Society (ISCAS), Kos, Greece.
27. Giffin, J., Greenstadt, R., Litwack, P., and Tibbetts, R. (2002). Covert messaging through TCP timestamps. Proceedings of the Privacy Enhancing Technologies Workshop (PET), Vol. 2482, Springer-Verlag, Berlin/Heidelberg, pp. 194–208.
28. Handel, T. G., and Sandford, M. T. II (1996). Hiding data in the OSI network model. Proceedings of the Information Hiding: First International Workshop, Vol. 1174, Springer-Verlag, Berlin/Heidelberg, pp. 23–38.
29. Jones, E., Robert, J., and Moigne, O. L. E. (2005). IP Time to Live (TTL) Field Used as a Covert Channel. European Patent No. EP1517517, CIT ALCATEL (FR).
30. Llamas, D., Allison, C., and Miller, A. (2005). Covert channels in internet protocols: a survey. Proceedings of the 6th Annual Postgraduate Symposium about the Convergence of Telecommunications, Networking and Broadcasting (PGNET).

31. Llamas, D., Miller, A., and Allison, C. (2005). An evaluation framework for the analysis of covert channels in the TCP/IP protocol suite. In Proceedings of the 4th European Conference on Information Warfare and Security (ECIW 2005), University of Glamorgan, Pontypridd, Vol. 4, pp. 205–214.
32. Lucena, N. B., Lewandowski, G., and Chapin, S. J. (2005). Covert channels in IPv6. Proceedings of the Privacy Enhancing Technologies (PET), Vol. 3856, Springer-Verlag, Berlin/Heidelberg, pp. 147–166.
33. Mazurczyk, W., and Kotulski, Z. (2006). Covert channel for improving VoIP security. Proceedings of the Multiconference on Advanced Computer Systems (ACS), Vol. 2006, Miedzyzdroje, Poland, pp. 311–320.
34. Moskowitz, I. S., Newman, R. E., Crepeau, D. P., and Miller, A. R. (2003). Covert channels and anonymizing networks. Proceedings of the Workshop on Privacy in the Electronic Society (WPES), Washington, DC.
35. Rowland, C. H. (1997). Covert channels in the TCP/IP protocol suite (Covert-TCP). First Monday, Peer Reviewed Journal on the Internet, July 1997.
36. Simple Nomad (2003). Covering your tracks: NCrypt and NCovert. Proceedings of Black Hat USA.
37. Zander, S., Armitage, G., and Branch, P. (2006). Covert channels in the IP time to live field. Proceedings of the Australian Telecommunication Networks and Applications Conference (ATNAC), Melbourne, Australia.
38. Zander, S., Armitage, G., and Branch, P. (2007). A survey of covert channels and countermeasures in computer network protocols. IEEE Commun. Surv. Tutorials 9(3), 44–57.
39. Johnson, N. F., and Jajodia, S. (1998). Exploring steganography: seeing the unseen. IEEE Comput. 31(2), 26–34.
40. Heckbert, P. (1982). Color image quantization for frame buffer display. Proceedings of the 9th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), Vol. 1982, Boston, Massachusetts, pp. 297–307.
41. Brown, A. (1998). S-Tools: Steganography Tools for Windows.
42. Johnson, N. F., and Jajodia, S. (1998). Steganalysis of images created using current steganography software. Proceedings of the Information Hiding: Second International Workshop, Vol. 1525, Springer-Verlag, Berlin/Heidelberg, pp. 32–47.
43. Machado, R. (1996). Stego and EzStego for Hiding in PICT and GIF Images.
44. Fridrich, J., Goljan, M., and Soukal, D. (2004). Perturbed quantization steganography with wet paper codes. Proceedings of the Workshop on Multimedia and Security (MM&Sec), pp. 4–15.
45. Fridrich, J., Pevny, T., and Kodovsky, J. (2007). Statistically undetectable JPEG steganography: dead ends, challenges, and opportunities. Proceedings of the Workshop on Multimedia and Security (MM&Sec), Vol. 9, Dallas, TX, pp. 3–14.
46. Gul, G., Dirik, A. E., and Avcibas, S. (2007). Steganalytic features for JPEG compression-based perturbed quantization. IEEE Signal Process. Lett. 14(3), 205–208.
47. Provos, N. (2001). Defending against statistical steganalysis. Proceedings of the USENIX Security Symposium, Vol. 10, pp. 323–335.
48. Provos, N. (1998). OutGuess—Universal Steganography. http://www.outguess.org/, August 1998.
49. Sallee, P. (2004). Model-based steganography. In Proceedings of the International Workshop on Digital Watermarking, Springer Lecture Notes in Computer Science, Springer-Verlag, Berlin/Heidelberg, pp. 154–167.
50. Sallee, P. (2005). Model-based methods for steganography and steganalysis. Int. J. Image Graph. Spec. Issue: Image Data Hiding 3304, 167–189.

51. Westfeld, A. (2001). F5—a steganographic algorithm. Proceedings of the Information Hiding: 4th International Workshop, Vol. 2135, Springer-Verlag, Berlin/Heidelberg, pp. 289–302.
52. Wong, K., Qi, X., and Tanaka, T. (2007). A DCT-based mod4 steganographic method. ACM Signal Process. 87(6), 1251–1263.
53. Avcibas, I., Memon, N., and Sankur, B. (2003). Steganalysis using image quality metrics. IEEE Trans. Image Process. 12(2), 221–229.
54. Dumitrescu, S., and Wu, X. (2005). LSB steganalysis based on high-order statistics. Proceedings of the International Multimedia Conference Workshop on Multimedia and Security, Vol. 7, New York, NY, pp. 25–32.
55. Farid, H. (2002). Detecting hidden messages using higher-order statistical models. Proceedings of the International Conference on Image Processing, Vol. 2, Rochester, NY, pp. 905–908.
56. Farid, H., and Lyu, S. (2003). Higher-order wavelet statistics and their application to digital forensics. Proceedings of the IEEE Workshop on Statistical Analysis in Computer Vision, Conference on Computer Vision and Pattern Recognition Workshop, Vol. 8, Madison, Wisconsin, p. 94.
57. Lyu, S., and Farid, H. (2003). Detecting hidden messages using higher-order statistics and support vector machines. Proceedings of the Information Hiding: Fifth International Workshop, Vol. 2578, Springer-Verlag, Berlin/Heidelberg, pp. 340–354.
58. Lyu, S., and Farid, H. (2006). Steganalysis using higher-order image statistics. IEEE Trans. Inf. Forensics Secur. 1(1), 111–119.
59. Fridrich, J., Goljan, M., and Hogea, D. (2002). Attacking the OutGuess. Proceedings of the ACM Workshop on Multimedia and Security, Juan-les-Pins, France, December 6, 2002.
60. Fridrich, J., Goljan, M., and Hogea, D. (2003). Steganalysis of JPEG images: breaking the F5 algorithm. In Proceedings of the Information Hiding 5th International Workshop, Springer Lecture Notes in Computer Science, Vol. 2578, Springer-Verlag, Berlin/Heidelberg, pp. 310–323.
61. Fridrich, J. (2004). Feature-based steganalysis for JPEG and its implications for future design of steganographic schemes. In Proceedings of the Information Hiding: 6th International Workshop, Springer Lecture Notes in Computer Science, Vol. 3200, Springer-Verlag, Berlin/Heidelberg, pp. 67–81.
62. Fridrich, J., and Pevny, T. (2006). Multiclass blind steganalysis for JPEG images. In Proceedings of the SPIE, Electronic Imaging, Photonics West, Security, Steganography, and Watermarking of Multimedia Contents VIII, Vol. 6072, pp. 257–269.
63. Ji, R., Yao, H., Liu, S., Wang, L., and Sun, J. (2006). A new steganalysis method for adaptive spread spectrum steganography. In Proceedings of the International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), December 18–20, 2006, IEEE Computer Society, pp. 365–368.
64. Provos, N., and Honeyman, P. (2002). Detecting steganographic content on the internet. In Proceedings of the ISOC Network and Distributed System Security Symposium (NDSS'02). Also CITI Technical Report 01-11, 2001, University of Michigan.
65. Westfeld, A., and Pfitzmann, A. (2000). Attacks on steganographic systems. Proceedings of the Information Hiding: Third International Workshop, Vol. 1768, Springer-Verlag, Berlin/Heidelberg, pp. 61–76.
66. Steganalysis Research Center (SARC)/Backbone Security. Developer of StegAlyzerAS and StegAlyzerSS for detecting steganography applications and steganographic data embedded within various carrier files by numerous steganography applications.
67. WetStone Technologies. Developer of Gargoyle and StegWatch for digital forensics and steganography detection.

68. Ker, A. D. (2008). A fusion of maximum likelihood and structural steganalysis. Proceedings of the Information Hiding: 9th International Workshop, IH 2007, Saint Malo, Vol. 4567, pp. 204–219.
69. Johnson, N. F., Giordano, J., and Jajodia, S. (1999). Steganography and Computer Forensics: The Investigation of Hidden Information. Technical Report CSIS-TR-99-10-NFJ, George Mason University, Center for Secure Information Systems, October 1999.
70. Johnson, N. F., and Kong, E. G. (2002). Investigating Hidden Information: Steganography and Computer Forensics. American Academy of Forensic Sciences (AAFS), Atlanta, GA, February 11–16.
71. Petitcolas, F. A. P., Anderson, R. J., and Kuhn, M. G. (1998). Attacks on copyright marking systems. Proceedings of the Information Hiding Second International Workshop, Vol. 1525, Springer-Verlag, Berlin/Heidelberg, pp. 219–239.
72. Petitcolas, F. A. P. (2000). Watermarking schemes evaluation. IEEE Signal Process. 17(5), 58–64.
73. Francia III, G. A., and Gomez, T. S. (2006). Steganography obliterator: an attack on the least significant bits. In Proceedings of the 3rd Annual Conference on Information Security Curriculum Development (InfoSecCD'06), ACM, Kennesaw, Georgia, pp. 85–91. DOI: 10.1145/1231047.1231066.
74. Borders, K., and Prakash, A. (2004). Web tap: detecting covert web traffic. Proceedings of the 11th ACM Conference on Computer and Communications Security (CCS), Vol. 11, pp. 110–120.
75. Cabuk, S., Brodley, C. E., and Shields, C. (2004). IP covert timing channels: design and detection. Proceedings of the 11th ACM Conference on Computer and Communications Security (CCS), Vol. 11, Washington, DC, pp. 178–187.
76. Department of Defense National Computer Security Center (NCSC) (1993). A Guide to Understanding Covert Channel Analysis of Trusted Systems, NCSC-TG-030.
77. Fisk, G., Fisk, M., Papadopoulos, P., and Neil, J. (2003). Eliminating steganography in internet traffic with active wardens. In Proceedings of the Information Hiding 5th International Workshop, Springer Lecture Notes in Computer Science, Vol. 2578, Springer-Verlag, Berlin/Heidelberg, pp. 340–354.
78. Gianvecchio, S., and Wang, H. (2007). Detecting covert timing channels: an entropy-based approach. Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS), Vol. 14, Alexandria, VA, pp. 307–316.
79. Kemmerer, R. (1983). Shared resource matrix methodology: an approach to identifying storage and timing channels. ACM Trans. Comput. Syst. (TOCS) 1(3), 256–277.
80. Sohn, T., Seo, J., and Moon, J. (2003). A study on the covert channel detection of TCP/IP header using support vector machine. Proceedings of the 5th International Conference on Information and Communications Security, Vol. 2578, Springer-Verlag, Berlin/Heidelberg, pp. 313–324.
81. Johnson, N. F., Duric, Z., and Jajodia, S. (2000). Recovery of watermarks from distorted images. In Proceedings of the Third International Workshop on Information Hiding (September 29–October 01, 1999), Lecture Notes in Computer Science, Vol. 1768, Springer-Verlag, Berlin/Heidelberg, pp. 318–332.
82. Anderson, R. J., Ed. (1996). Information Hiding: First International Workshop, Cambridge, UK. Lecture Notes in Computer Science, Vol. 1174, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/3-540-61996-8.
83. Kawaguchi, E., and Eason, R. O. (1998). Principle and applications of BPCS steganography. Proceedings of the SPIE International Symposium on Voice, Video, and Data Communications, Vol. 3528, Boston, Massachusetts, pp. 464–473. DOI: 10.1117/12.337436.
84. Su, P., and Kuo, C. J. (2003). Steganography in JPEG2000 compressed images. IEEE Trans. Consum. Electron. 49(4), 824–832.

85. Sharp, T. (2001). An implementation of key-based digital signal steganography. In Proceedings of the Information Hiding 4th International Workshop, Springer Lecture Notes in Computer Science, Vol. 2137, Springer-Verlag, Berlin/Heidelberg, pp. 13–26.
86. Cachin, C. (1998). An information-theoretic model for steganography. In Proceedings of the Information Hiding: 2nd International Workshop, Springer Lecture Notes in Computer Science, Vol. 1525, Springer-Verlag, Berlin/Heidelberg, pp. 306–318.
87. Anderson, R. J., and Petitcolas, F. A. P. (1998). On the limits of steganography. IEEE J. Sel. Areas Commun. 16, 474–481.
88. Cox, I. J., Kalker, T., Pakura, G., and Scheel, M. (2005). Information transmission and steganography. In Proceedings of the International Workshop on Digital Watermarking, Springer Lecture Notes in Computer Science, Vol. 3710, Springer-Verlag, Berlin/Heidelberg, pp. 15–29.
89. Martín, A., Sapiro, G., and Seroussi, G. (2005). Is image steganography natural? IEEE Trans. Image Process. 14(12), 2040–2050.
90. Eggers, J. J., Bäuml, R., and Girod, B. (2002). A communications approach to image steganography. Proceedings of the SPIE Electronic Imaging, Security and Watermarking of Multimedia Contents IV, Vol. 4675, San Jose, CA, pp. 26–37.
91. Fridrich, J., and Goljan, M. (2003). Digital image steganography using stochastic modulation. Proceedings of the SPIE Electronic Imaging, Security, Steganography, and Watermarking of Multimedia Contents V, Vol. 5020, Santa Clara, CA, pp. 191–202.
92. Fridrich, J., Goljan, M., and Soukal, D. (2005). Efficient wet paper codes. In Proceedings of the Information Hiding 7th International Workshop, Springer Lecture Notes in Computer Science, Vol. 3727, Springer-Verlag, Berlin/Heidelberg, pp. 204–218.
93. Johnson, N. F. (2008). Observations and Trends in Open Source and Commercial Steganography Tools. Booz Allen Hamilton Technical Report.
94. National Software Reference Library (NSRL) (2008). http://www.nsrl.nist.gov.
95. Radhakrishnan, R., Kharrazi, M., and Memon, N. (2005). Data masking: a new approach for steganography. J. VLSI Signal Process. Syst. 41(3), 293–303.

FURTHER READING

Additional proceedings from the International Information Hiding Workshops/Conferences:

Aucsmith, D., Ed. (1998). Information Hiding: Second International Workshop, IH'98, Portland, Oregon, USA. Lecture Notes in Computer Science, Vol. 1525, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/3-540-49380-8.
Barni, M., Herrera-Joancomartí, J., Katzenbeisser, S., and Pérez-González, F., Eds. (2005). Information Hiding: 7th International Workshop, IH 2005, Barcelona, Spain, June 6–8, 2005. Lecture Notes in Computer Science, Vol. 3727, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/11558859.
Camenisch, J. L., Collberg, C. S., Johnson, N. F., and Sallee, P., Eds. (2006). Information Hiding: 8th International Workshop, IH 2006, Alexandria, VA, USA, July 10–12, 2006. Lecture Notes in Computer Science, Vol. 4437, Springer, Berlin/Heidelberg. DOI: 10.1007/978-3-540-74124-4.
Fridrich, J., Ed. (2004). Information Hiding: 6th International Workshop, IH 2004, Toronto, Canada, May 23–25, 2004, Revised Selected Papers. Lecture Notes in Computer Science, Vol. 3200, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/b104759.
Furon, T., Cayre, F., Doërr, G., and Bas, P., Eds. (2007). Information Hiding: 9th International Workshop, IH 2007, Saint Malo, France, June 11–13, 2007. Lecture Notes in Computer Science, Vol. 4567, Springer, Berlin/Heidelberg. DOI: 10.1007/978-3-540-77370-2.

Moskowitz, I. S., Ed. (2001). Information Hiding: 4th International Workshop, Pittsburgh, Pennsylvania, USA, April 2001. Lecture Notes in Computer Science, Vol. 2135, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/3-540-45496-9.
Petitcolas, F. A. P., Ed. (2003). Information Hiding: 5th International Workshop, Noordwijkerhout, The Netherlands, 7–9 October 2002. Lecture Notes in Computer Science, Vol. 2578, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/3-540-36415-3.
Pfitzmann, A., Ed. (2000). Information Hiding: Third International Workshop, Dresden, Germany, 29 September–1 October 1999. Lecture Notes in Computer Science, Vol. 1768, Springer-Verlag, Berlin/Heidelberg. DOI: 10.1007/10719724.

OTHER SUGGESTED READING

Cole, E. (2003). Hiding in Plain Sight: Steganography and the Art of Covert Communication. John Wiley & Sons, Inc., Indianapolis, Indiana.
Cox, I., Miller, M. L., Bloom, J. A., Fridrich, J., and Kalker, T. (2007). Digital Watermarking and Steganography, 2nd ed. Morgan Kaufmann Publishers, Burlington, MA. ISBN 0123725852.
Wayner, P. (2002). Disappearing Cryptography: Information Hiding: Steganography and Watermarking, 2nd ed. Morgan Kaufmann Publishers, San Francisco, CA.

5

INHERENTLY SECURE NEXT-GENERATION COMPUTING AND COMMUNICATION NETWORKS FOR REDUCING CASCADING IMPACTS

Robert P. Evans
Idaho National Laboratory, Idaho Falls, Idaho

Virgil B. Hammond and Shabbir A. Shamsuddin
Argonne National Laboratory, Argonne, Illinois

5.1 INTRODUCTION

Security is of vital interest to all participants in the control system sphere. These include governmental agencies, vendors, users, and consultants, as well as industry advisory groups. This chapter explores some of the efforts being made by these participants to identify and mitigate security exposures using risk management methodologies, technology tools, and standards.

5.2 STANDARDS, GUIDELINES, AND BEST PRACTICES

Standardization has a major impact on each of us, yet most of us do not understand what it means or how it affects our lives. Standardization is the process of establishing a technical benchmark, which may be defined by written documents that lay out the criteria for the standardized measure. This technical benchmark document may take one of several forms, depending on its level of acceptance, and can be described as a set of mandatory criteria, voluntary guidelines, and/or best practices.

5.3 STANDARDS

Standards are an important part of the total effort to achieve control system cyber security. As rules or requirements that define accepted operational criteria, they provide a measure
of consistency and a means for quantifying quality and reliability. Standards provide a performance framework for hardware and software vendors who build the components for a control system. Standards provide a similar service for the personnel who operate and maintain the control system once it becomes operational. Standards are most effective when the engineers and operators using the standards understand the capabilities and limitations of each standard and its history. A standard, as defined by the National Standards Policy Advisory Committee, is: "A prescribed set of rules, conditions, or requirements concerning definitions of terms; classification of components; specification of materials, performance, or operations; delineation of procedures; or measurement of quantity and quality in describing materials, products, systems, services, or practices" [1].

Standards are sets of rules or requirements, which define the accepted criteria for a component, procedure, system, and so on. Standards are developed by a consensus of the judgment of volunteers, who pool their knowledge base and experience.

5.3.1 Guidelines

Guidelines are tools that attempt to streamline a process or procedure. They may consist of rules or suggestions that, when applied, may simplify the process or procedure and provide a level of quality and consistency. Guidelines may be issued by any organization to make its processes more uniform and, ideally, of higher quality. By definition, guidelines are not mandatory but attempt to provide a body of knowledge that can be applied [2, 3].

5.4 BEST PRACTICE

Best practices, sometimes referred to as recommended practices, are a management tool asserting that there is a technique, method, process, and so on, that is more effective at delivering a particular result than any other. As with standards and guidelines, best practices may consist of a set of good and practical industry practices or suggestions which, when followed, will produce superior performance. As with guidelines, best practices are not mandatory, unless they become a standard and are imposed by a particular organization as a requirement [4, 5].

5.4.1 Cyber and Control Systems Security Standards in Common Use

The use of cyber security standards (including standards, guidelines, and best practices) can greatly assist in the protection of critical infrastructure by providing requirements, guidelines, and requisite imperatives in the implementation and maintenance of computer-controlled systems. Standards are most effective when the decision-makers, engineers, and operators using the standards understand what each addresses and does not address. There is a link between cyber vulnerabilities and the standards that are intended to provide mitigation opportunities. For example, standards for equipment design and operation offer direction for vendors to use in bringing usable and compatible products to
market, while providing companies the specifications required to select and implement the appropriate equipment and procedures. Most of all, these standards help ensure that equipment is operated and maintained efficiently [6]. Standards organizations are, for the most part, public organizations that have little or no enforcement ability. They rely on educating users as to the importance of security and the potential benefits that standards can add to their operations. Where cyber security standards are implemented, they provide reliable direction toward achieving an acceptable level of cyber security by providing a framework on which to construct a viable and rational security policy. They also provide an important frame of reference when performing risk analysis of an operating control system. The cyber security standards issued by these organizations are frequently referred to as either sector-specific or cross-sector in their focus. Sector-specific standards include standards and associated documents that address cyber security considerations specific to operators within the issuing industry. Cross-sector standards are developed and issued by organizations whose focus extends across several discrete and dissimilar operating arenas, whose only common interest may be the prevention and mitigation of cyber attacks upon their facilities. These standards address security issues that are of universal concern to infrastructure operators, without regard to the particular industry that may be implementing the standard. Certain of these standards, such as those issued by the Federal Energy Regulatory Commission (FERC) and those mandated under the Health Insurance Portability and Accountability Act (HIPAA), come from the Federal government and have the driving force of public law. Most others are issued by private and/or public industry organizations and depend upon voluntary compliance.

5.5 MEASURE AND ASSESS SECURITY POSTURE

5.5.1 Risk Assessment Factors

Managing the security risks associated with the industry's growing reliance on control system and information technology (IT) is a continuing challenge. In particular, many private organizations have struggled to find efficient ways to ensure that they fully understand the cyber security risks affecting their operations, and can implement appropriate controls to mitigate these risks. A principal challenge that many companies face is identifying and ranking the cyber and control systems' security risks to their operations, which is the first step in developing and managing an effective security program. Taking this step helps ensure that organizations identify the most significant risks and determine what actions are appropriate to mitigate them [7]. The General Accounting Office, in its white paper titled "Information Security Risk Assessment: Practices of Leading Organizations" [8], has identified a set of common critical success factors that are important to the efficient and effective implementation of the organizations' information security risk assessment programs. These factors help ensure that the organizations benefit fully from the expertise and experience of their senior managers and staff, that risk assessments are conducted efficiently, and that the assessment results lead to appropriate remedial actions. The critical risk assessment success factors include the following:

1. Obtain senior management commitment, support, approval, and involvement to ensure that the resources are available to implement the program, and that assessment findings result in implementation of appropriate changes to policies and controls.
2. Designate individuals or groups as focal points to oversee and guide the overall risk assessment processes.
3. Define documented procedures for conducting risk assessments, and develop tools to facilitate and standardize the process.
4. Involve business and technical experts, including a variety of individuals from the business unit having expertise in business operations, business processes, security, information resource management, IT, and system operations.
5. Hold business units responsible for initiating and conducting risk assessments, as well as evaluating and implementing the resulting recommendations.
6. Limit the scope of individual assessments by conducting a series of narrower assessments on various individual segments of the business and operations.
7. Document and maintain results so that managers can be held accountable for the decisions made, and a permanent record is established that can be used by auditors for compliance with the security policy [8].

5.5.2 Risk Measurement

The challenge in measuring risk is determining what to measure and how it should be measured. To measure the security posture of a control system, the organization needs to follow a set of rules that focuses the company's security goals by applying the risk assessment factors described earlier. When assessing vulnerability, it is worthwhile to be aware of certain qualitative terms. Exposure is about possibility. Risk is about probability. And impact is about consequence. The following equation is sometimes used to express these mathematically [9]:

Expected loss × threat × vulnerability = exposure = risk

Exposure measurements can be used as a relative comparison within an environment or across companies. If one can assume that risk is constant for like-sized companies (even if we do not know the number itself), this exposure measure can act as a "risk proxy" to measure the relative difference in risk levels. The Department of Homeland Security (DHS), under the FY2007 Homeland Security Grant Guidance, describes the DHS approach to risk assessment as follows: risk will be evaluated at the federal level using a risk analysis model developed by DHS in conjunction with other federal entities. Risk is defined as the product of three principal variables:

• Threat (T)—the likelihood of an attack occurring.
• Vulnerability and consequence (V&C)—the relative exposure and expected impact of an attack [10].

Risk (R) = T × V × C
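The multiplicative model lends itself to simple asset ranking. The sketch below applies R = T × V × C to a few hypothetical control-system assets; the asset names and scores are illustrative inventions, and real programs derive the three factors from structured assessments rather than fixed numbers.

    # Rank hypothetical assets by R = T x V x C. All names and 0-10 scores
    # are illustrative placeholders.
    assets = {
        # asset: (threat likelihood, vulnerability, consequence)
        "historian server":   (6, 7, 5),
        "engineering laptop": (5, 8, 4),
        "plant SCADA master": (4, 6, 9),
    }

    risk = {name: t * v * c for name, (t, v, c) in assets.items()}
    for name, r in sorted(risk.items(), key=lambda kv: -kv[1]):
        print(f"{name:20s} R = T x V x C = {r}")

Because the scores are only relative, such a ranking is most useful as the "risk proxy" described above: it orders assets for attention rather than predicting absolute loss.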

5.5.3 Security Metrics

Metrics and measurement are two vastly different concepts. Measurements are generated by counting, and provide specific views of discrete factors. Metrics, on the other hand, are generated through analysis. They are derived from measurements to which contextual information has been added for comparison to a predetermined baseline, or by comparing two or more measurements taken over time [11]. The measurement of security policies, processes, and products is the much-sought-after solution to this conundrum. Security managers in industry look for a magic formula that calculates risk and effectiveness in reducing risk, but the reality is that security metrics are not that simple. Measuring security is about using common sense. An organization needs to determine what to measure, and to organize the variables in a way that makes them manageable and meaningful. It needs to build repeatable formulas that show the snapshot status of security and how it changes over time. Truly useful metrics indicate the degree to which goals are being met, and then drive actions taken to improve organizational processes. When applied to control system security performance, the metric is the expression of the state and/or quality of a critical aspect of the control system infrastructure. It is the basis for directing investments to areas of high risk, as well as a forum for communication to stakeholders both inside and outside the organization. Applying regular, repeatable metrics to a security performance initiative can benefit organizations in a number of ways (a minimal worked example follows the list). They:

1. provide a measurement of the effectiveness of controls;
2. identify and target areas for improvement;
3. communicate the effectiveness of risk management programs;
4. drive proper actions in focused areas and extend accountability;
5. provide hard evidence of compliance for internal and external use; and,
6. provide actionable views across the enterprise, lines of business, or specific areas of IT and control systems infrastructures [11].
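The measurement/metric distinction can be made concrete: raw counts of patched and total hosts are measurements, while the compliance percentage compared against a predetermined baseline over time is the metric. A minimal sketch, with invented numbers:

    # Turn raw measurements (patched vs. total hosts) into a metric: a
    # compliance percentage tracked against a baseline over time.
    # All figures are illustrative.
    quarterly = {
        # quarter: (hosts patched, hosts total)
        "2007Q1": (112, 180),
        "2007Q2": (131, 182),
        "2007Q3": (155, 185),
    }
    BASELINE = 0.80  # predetermined target for comparison

    for quarter, (patched, total) in quarterly.items():
        compliance = patched / total
        status = "meets" if compliance >= BASELINE else "below"
        print(f"{quarter}: {compliance:.0%} ({status} the {BASELINE:.0%} baseline)")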

5.6 CYBER SECURITY THREATS AND VULNERABILITIES

Many companies have conducted, and continue to conduct, security vulnerability analyses to evaluate the risks of physical attacks on their facilities, and many of these facilities have been hardened since 9/11. However, the importance of cyber security for manufacturing and control systems has only recently been recognized, and therefore has not yet been fully addressed by most industrial companies. Appropriate security measures must be taken to avoid events that could have cascading impacts on other critical infrastructures (Figure 5.1) [12]. Lesser cyber attacks have occurred and are occurring every day. Actions are needed now to deal with this threat. Companies must conduct cyber security vulnerability analyses to identify threats to their control and support systems, to determine whether vulnerabilities are present, and to evaluate existing countermeasures to determine whether they need to be strengthened or new ones implemented. Control systems, and their support systems, are subject to threats from adversaries who may wish to disable or manipulate them by cyber or physical means, or who may want to obtain, corrupt, damage, destroy, or prohibit access to valuable information. The organization should evaluate the risk of these threats in
order to decide what protective measures should be taken to protect systems from disruption.

FIGURE 5.1 Illustrative infrastructure interdependencies. (Peerenboom, Fisher, and Whitfield, 2001.)

The vulnerabilities typically observed in the course of conducting vulnerability assessments are grouped in the following five categories: data, security administration, architecture, network, and platforms. Any given control system will usually exhibit a subset of these vulnerabilities, but may also have some unique additional problems [13].

The Federal government has played an irreplaceable role in providing support for fundamental, long-term IT research and development (R&D), generating technologies that gave rise to the multibillion-dollar IT industry. The President's Information Technology Advisory Committee (PITAC) review of current federally supported R&D in cyber security finds an imbalance, however, in the current cyber security R&D portfolio. Most support is for short-term, defense-oriented research; there is relatively little support for fundamental research to address the larger security vulnerabilities of the civilian IT infrastructure, which supports defense systems as well. In its 2005 report to the President, PITAC urged changes in the Federal government's cyber security R&D portfolio: increase federal support for fundamental research in civilian cyber security; intensify federal efforts to promote recruitment and retention of cyber security researchers and students at research universities; provide increased support for the rapid transfer of federally developed cutting-edge cyber security technologies to the private sector; and strengthen the coordination of the Interagency Working Group on Critical Information Infrastructure Protection and integrate it under the Networking and Information Technology Research and Development Program [14]. The Homeland
Security Department has teamed with 13 organizations on a 12-month project to secure the process control systems of the nation's oil and gas industries against cyber security threats. A cyber attack on the control and data systems that operate electric power plants, oil refineries, and gas pipelines, which are pieces of the nation's 18 critical infrastructure sectors, could potentially bring the country to a halt. The problem is compounded because private companies control more than 85% of the country's critical infrastructure, leaving the government few avenues to ensure that IT and control systems are secure. The potential costs of an infrastructure attack are significant. The Northeast Blackout of August 14, 2003, left 50 million customers in parts of eight states and Canada without power. According to a report by an electricity consumers research council, the outage caused an estimated $7–10 billion in financial losses and shut down parts of a 2 million barrel-per-day pipeline, as well as airports in 13 cities. To combat the cyber threats, the government, industry, research labs, security vendors, and process control technology vendors embarked on the project, "Linking the Oil and Gas Industry to Improve Cyber Security", to come up with technology that could reduce and fix vulnerabilities in infrastructure systems. The potential solution to such cyber threats is a strong cyber security posture by the entities that may be vulnerable to such attacks. A major challenge in preserving system protection is that system architectures change, technology changes, and threats change, all of which means that defenses must change.

5.7 CASCADING FAILURE

A cascading failure occurs when a disruption in one infrastructure causes a disruption in a second infrastructure (e.g. the August 2003 blackout led to communications and water-supply outages, air traffic disruptions, chemical plant shutdowns, and other interdependency-related impacts) [12]. The complexity of multiple infrastructure linkages and the implications of multiple contingency events that may affect the infrastructures are apparent even in the highly simplified representation shown in Figure 5.1. The security, economic prosperity, and social well-being of the nation depend on the reliable functioning of our increasingly complex and interdependent infrastructures. These include energy systems (electric power, oil, and natural gas), telecommunications, water-supply systems, transportation (road, rail, air, and water), banking and finance, and emergency and government services. In the new economy, these interconnected infrastructures have become increasingly fragile and subject to disruptions that can have broad regional, national, and global consequences. A disruption in one infrastructure would be magnified by the codependencies in supervisory control and data acquisition (SCADA) systems. An example might be a power loss that affects telecommunication systems upon which banking transactions rely. Vulnerability to these cascading effects was seen during Hurricanes Katrina and Rita in 2005, when a major American city came to a virtual standstill. As we are now seeing, it will take years to rebuild. Failure nodes are repeatedly created at the intersections of our tightly coupled, highly sophisticated transportation, electric power, and telecommunications systems. These failure potentials are compounded by the infrastructures' reliance on information and control systems' hardware and software. Understanding, analyzing, and sustaining the robustness and resilience of these infrastructures require multiple viewpoints and a broad set of interdisciplinary skills. For example, engineers (civil, electrical, industrial, mechanical, systems, etc.) are needed to
understand the technological underpinnings of the infrastructures, as well as the complex physical architectures and dynamic feedback mechanisms that govern their operation and response (e.g. response to stresses and disruptions). Computer scientists, IT specialists, and network/telecommunication experts are needed to understand the electronic and informational (cyber) linkages among the infrastructures. IT security and information assurance professionals, together with control engineers, are needed to ensure information and control system security [15].

5.8 LEGACY SYSTEMS

The term legacy control system is used variously to refer to old mainframe, dumb-terminal applications from the 1970s and 1980s; to client/server systems of the 1990s; and even to first-generation web-based business applications developed in the late 1990s [16]. In this section we will refer to legacy systems in the context of the first two examples. Legacy control systems were originally designed to be free-standing networks without Internet access. These control systems monitored and controlled critical infrastructure processes. They were operated in an isolated or stand-alone environment where computer systems and devices communicated with each other exclusively, and typically did not communicate or share information with systems not directly connected to the control system network. These control systems typically comprised proprietary hardware, software, and protocols designed specifically for control system operations. Knowledge of these proprietary applications and protocols was limited to a small population. Proprietary control system protocols and data were not readily available to the general population, and significant effort and resources would have been required to acquire the proprietary information, understand the control system, discover vulnerabilities in the control system, develop the tools to exploit the identified vulnerabilities, and gain sufficient access to the control system so that vulnerabilities could be exploited to carry out unauthorized or malicious activities. For these reasons, and in particular because access to control systems was greatly limited, critical infrastructure control system security efforts were primarily focused on protecting control systems from physical attacks. More recently, with the vast IT expansion and the drive toward having information readily available from any location, many previously stand-alone control systems are being transitioned to the "always connected" world, where real-time control system information can be readily and easily accessed remotely by vendors, engineers, maintenance personnel, business managers, and others via corporate networks, the Internet, telephone lines, and various wireless devices. Legacy systems that have been retrofitted to incorporate Internet accessibility may be especially vulnerable to attack due to the ad hoc manner of their integration with the network. This imperfect fit between the different software applications could generate more vulnerable code aspects than would be found in a single piece of software. It may be possible, for example, through a poorly defined variable, to force a software program to behave in a way not expected by the author. When two programs are brought together, the potential program weaknesses are multiplied. Thus, legacy systems with network access added may be more prone to security flaws and weaknesses than systems that use a single piece of software for both functions [17]. To reduce operational costs and improve performance, control system vendors and critical infrastructure owners and
operators have been transitioning from proprietary systems to less expensive standardized technologies, operating systems, and protocols currently prevalent on the Internet. These widely accepted technologies, protocols, and operating systems, such as Ethernet, Internet Protocol, Microsoft Windows, and web technologies, have a large number of known cyber vulnerabilities, and new vulnerabilities are reported on a daily basis. Exploitation tools, malware, and how-to papers are often readily available shortly after the announcement of a new vulnerability. Significant information on control systems is now publicly available, including design and maintenance documents, technical standards for the component interconnections, and standards for communicating between devices. In addition, control system security concerns are elevated because control systems are typically not up-to-date with the latest security patches, fixes, and best practices due to concerns with taking real-time systems off-line and concerns over making system modifications, which might affect the time sensitive operations of the control system or potentially affect existing agreements with control system vendors or others [18]. Legacy system operators must be aware of the vulnerabilities inherent with upgrading to meet today’s networking capabilities, and implement appropriate protection options. Some examples of “best practice” options (that are applicable to all systems, from legacy to state-of-the-art) include: disabling unused ports; encryption; dual authentication; and working with both private sector and government agencies to identify and put into use more robust security measures.
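To make the "poorly defined variable" point concrete, here is a hedged sketch, not drawn from any real product: a hypothetical bridge parses frames from a proprietary control protocol before forwarding them onto a corporate network. The frame layout, field names, and process limits are invented for illustration.

```python
# Hedged sketch only: a hypothetical gateway that forwards frames from a
# proprietary control protocol onto a corporate network. The frame layout
# ("1-byte count, then 4-byte floats"), names, and limits are invented.
import struct

def parse_setpoint(frame: bytes) -> float:
    """Naive parser: trusts a count byte supplied by the sender."""
    count = frame[0]                     # attacker-controlled, never validated
    payload = frame[1:1 + 4 * count]     # may be shorter than promised
    # struct.unpack raises struct.error on a short buffer, crashing the
    # bridge loop -- behavior the isolated protocol never had to consider.
    (setpoint,) = struct.unpack(">f", payload[:4])
    return setpoint

def parse_setpoint_checked(frame: bytes) -> float:
    """Defensive version: reject malformed frames instead of crashing."""
    if len(frame) < 5 or frame[0] < 1:
        raise ValueError("malformed frame")
    (setpoint,) = struct.unpack(">f", frame[1:5])
    if not 0.0 <= setpoint <= 100.0:     # range check against process limits
        raise ValueError("setpoint out of range")
    return setpoint

print(parse_setpoint_checked(b"\x01" + struct.pack(">f", 42.5)))  # 42.5
```

The naive version inherits an assumption (well-formed frames from trusted devices) that was valid on an isolated network but not on a connected one; the checked version makes that assumption explicit.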

5.9 INTRUSION DETECTION AND RESPONSE TECHNOLOGY

The increasing speed of attacks against IT and control systems demands comparably timely responses. Threats such as malware and scripted exploits often leave only minutes or even seconds in which to respond, which effectively rules out manual intervention and makes automated approaches a necessity. Existing security technologies, however, are often insufficient for this role. Although intrusion detection systems (IDS) can identify potential incidents, they tend to produce high volumes of false alarms and consequently cannot be trusted to issue automated responses without risking disruption of legitimate activity.

Intrusion detection has been at the center of intense research over the last decade, owing to the rapid increase in sophisticated attacks on computer systems. Typically, intrusion detection refers to a variety of techniques for detecting attacks in the form of malicious and unauthorized activity. When intrusive behavior is detected, it is desirable to take evasive and/or corrective actions to thwart the attack and ensure the safety of the computing environment; such countermeasures are referred to as intrusion response. Although the response component is often integrated with the IDS, it receives considerably less attention than detection research, owing to the inherent complexity of developing and deploying responses automatically. Designing an effective response mechanism is difficult because a number of "unknown" factors must be analyzed along several dimensions: intrusion cause and effect, identification of the optimal response, the state of the system, maintainability, and so on. A complete understanding of these problems is a prerequisite for building a smart and effective response system.


Considerable research has focused on intrusion response specifications that address countermeasures to sophisticated attacks on control and computer support systems. For example, the following are being considered as requirements for an ideal intrusion response system:

1. Automatic. The volume and intensity of today's intrusions require rapid, automated response. The system must be reliable enough to run without human intervention: human supervision often introduces significant delay into intrusion handling, so the response system alone should be able to contain damage and prevent harmful activity. Although complete automation may not be achievable in practice because of novel, intractable intrusions, a significant reduction of human effort and required expert knowledge is desirable.

2. Proactive. Modern software systems are built from multiple, heterogeneously developed components with complex interactions. Because of these interactions, intrusions are likely to spread rapidly and cause more damage; a proactive approach to response is the most practical means of intrusion containment.

3. Adaptable. The same multiplicity of interacting components creates a dynamic environment, so intrusive behavior can affect systems in unpredictable ways. The response system should be equipped to recognize and react to changes in this dynamic environment.

4. Cost-sensitive. Responding to intrusions in dynamic, complex systems requires careful consideration of the trade-offs between cost and benefit. A simple response action triggered every time certain symptoms are observed may waste effort or even cause more damage than it prevents [19] (see the sketch following this list).
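As an illustration of the cost-sensitive requirement, the sketch below weighs candidate responses by expected net benefit. The response names, costs, and alert-confidence figure are assumptions invented for this example rather than values from the cited taxonomy [19].

```python
# Hedged sketch of requirement 4 (cost-sensitive response). All numbers
# and response names below are invented assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Response:
    name: str
    cost: float            # expected cost of acting (downtime, lost service)
    damage_avoided: float  # expected damage prevented if the alert is real

def choose_response(p_real, responses):
    """Return the response with the best expected net benefit, or None.

    p_real is the estimated probability that the alert is a true intrusion
    (e.g. an IDS confidence score). None means "log and watch": every
    candidate action is expected to cost more than it saves.
    """
    best, best_net = None, 0.0
    for r in responses:
        net = p_real * r.damage_avoided - r.cost
        if net > best_net:
            best, best_net = r, net
    return best

candidates = [
    Response("isolate host", cost=5.0, damage_avoided=100.0),
    Response("block source address", cost=1.0, damage_avoided=30.0),
    Response("shut down segment", cost=80.0, damage_avoided=150.0),
]
print(choose_response(0.10, candidates))  # -> isolate host (net benefit 5.0)
```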

5.10 RESEARCH DIRECTION

Because the threats to control systems are constantly changing, and because these systems remain vulnerable to cyber attack, multiple approaches to security should be pursued. First, continued research is needed to develop security policies, guidelines, and standards for control system security, covering topics such as authentication methods and the use of networks. The results of this research should then be incorporated into standards so that all stakeholders can benefit from it; continued development of strong standards is key to securing control systems against cyber intrusion.

A second approach is the use of vulnerability assessments. An organization must be able to conduct a comprehensive vulnerability assessment if it intends to measure the security posture of its control systems. A key step in this process is to learn and apply the seven critical risk assessment success factors listed earlier in the chapter; these factors are important to the efficient and effective implementation of an organization's information security risk assessment program. The Federal government must also continue to lead programs supporting fundamental research in civilian cyber security.

Organizations should implement effective security management programs that include control system security. To measure the security posture of its control systems, an organization needs a set of rules, or metrics, that quantify its achievement in terms of the company's security goals. Vulnerability should be determined in terms of exposure to attack, probability of attack, and the consequences of an attack. The goal should always be to identify vulnerabilities and then implement mitigation strategies, including developing or improving the organization's security policy; adherence to one or more recognized security standards should always be part of that policy.

Cascading failures can have broad regional, national, and global consequences. Control systems must be carefully designed to reduce the interdependence of multiple infrastructures and to mitigate the effects when a failure occurs. Legacy control systems no longer profit from "security through obscurity" [20]; indeed, those retrofitted for Internet accessibility may be especially vulnerable to attack because of imperfect matchups between software applications. Legacy system operators must be aware of the vulnerabilities inherent in upgrading to today's networking capabilities and implement all appropriate protections. Finally, to cope with the speed and frequency of today's cyber attacks, intrusion detection and response systems must react with comparable speed. Current research and development efforts on new technologies and tools to counter such attacks point to a need for automated, proactive responses that adapt to changing situations and technology and that remain cost-effective.
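As a concrete, hedged illustration of framing vulnerability in terms of exposure, attack probability, and consequence, the following sketch ranks a few hypothetical control system components. The names, scales, and numbers are assumptions for illustration; an actual program would calibrate them against its own security goals and metrics.

```python
# Hedged sketch: score vulnerability as exposure x probability x consequence.
# Component names, scales, and numbers are invented for illustration.
components = {
    # name: (exposure 0-1, attack probability 0-1, consequence in $1000s)
    "Internet-reachable historian server": (0.9, 0.4, 500),
    "PLC on isolated segment": (0.2, 0.1, 2000),
    "engineering workstation": (0.6, 0.3, 800),
}

def risk_score(exposure, probability, consequence):
    return exposure * probability * consequence

# Rank components so mitigation effort goes to the highest scores first.
for name, factors in sorted(components.items(),
                            key=lambda kv: -risk_score(*kv[1])):
    print(f"{name}: risk score {risk_score(*factors):.0f}")
```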

REFERENCES

1. National Standards Policy Advisory Committee (1978). National Policy on Standards for the United States and a Recommended Implementation Plan, National Standards Policy Advisory Committee, Washington, DC, p. 6.
2. CPM Resource Center (2007). How to Write Practice Guidelines, CPM Resource Center, http://www.cpmrc.com/events/workshop_17.shtml, accessed 01/22/2007.
3. Guideline, Wikipedia, the Free Encyclopedia (2007). http://en.wikipedia.org/wiki/Guideline, accessed 01/22/2007.
4. Definition of Best Practices (2007). Walden 3-D, Inc., http://www.walden3d.com/og1/bp.html, accessed 01/22/2007.
5. Best Practice, Wikipedia, the Free Encyclopedia (2007). http://en.wikipedia.org/wiki/Best_practice, accessed 01/21/2007.
6. Weiss, J., P.E., Ed. (2003). IEEE Task Force Revising Equipment Standards to Protect Against Cyber Attacks, Electric Energy T&D Magazine, http://realtimeacs.com/?page_id=13.
7. U.S. General Accounting Office (1999). Information Security Risk Assessment: Practices of Leading Organizations, Exposure Draft, GAO/AIMD-99-139, August 1999, http://www.gao.gov/special.pubs/ai00033.pdf.
8. U.S. General Accounting Office (1999). Information Security Risk Assessment: Practices of Leading Organizations, Exposure Draft, GAO/AIMD-99-139, August 1999, http://www.gao.gov/special.pubs/ai00033.pdf.
9. Lindstrom, P. (2005). "Risk Management Strategies," Security: Measuring Up, CISSP, February 18, 2005.
10. The Department of Homeland Security's Risk Assessment Methodology: Evolution, Issues, and Options for Congress, CRS Report for Congress, February 2, 2007.
11. Seven Steps to Security Metrics Success, white paper by ClearPoint Metrics, http://www.dreamingcode.com/dc_ecomm/DocumentManage/DocumentManagement/56_82doc.pdf, 2008.


12. Peerenboom, J. P., and Fisher, R. E. (2007). Analyzing Cross-Sector Interdependencies, Infrastructure Assurance Center, Argonne National Laboratory, http://ieeexplore.ieee.org/iel5/4076361/4076362/04076595.pdf.
13. Stamp, J., Dillinger, J., and Young, W. (2003). Common Vulnerabilities in Critical Infrastructure Control Systems, Sandia National Laboratories, May 22, http://www.oe.netl.doe.gov/docs/prepare/vulnerabilities.pdf.
14. President's Information Technology Advisory Committee (PITAC) (2005). Report to the President, Cyber Security: A Crisis of Prioritization, February.
15. Rinaldi, S. M., Peerenboom, J. P., and Kelly, T. K. (2001). Complex Networks: Identifying, Understanding, and Analyzing Critical Infrastructure Interdependencies; Infrastructure Interdependencies: Overview of Concepts and Terminology, Infrastructure Assurance Center, Argonne National Laboratory, http://www.ce.cmu.edu/~hsm/im2004/readings/CII-Rinaldi.pdf.
16. Weber, C. (2006). Assessing Security Risk in Legacy Systems, Cigital, Inc., https://buildsecurityin.us-cert.gov/daisy/bsi/articles/best-practices/legacy/624-BSI.pdf.
17. Shea, D. A. (2003). Critical Infrastructure: Control Systems and the Terrorist Threat, Report for Congress (updated February 21, 2003), Consultant, Resources, Science, and Industry Division.
18. Dacey, R. F. (2003). Critical Infrastructure Protection: Challenges in Securing Control Systems, Information Security Issues, US General Accounting Office, October 10.
19. Stakhanova, N., Basu, S., and Wong, J. (2006). A Taxonomy of Intrusion Response Systems, Department of Computer Science, Iowa State University, Iowa, USA, February.
20. Furnell, S., and Papadaki, M. (2005). Automated Intrusion Response, Network Research Group, School of Computing, Communications & Electronics, University of Plymouth, for Business Briefing: Data Management, Storage, & Security Review, http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VJC-4HDWHP7-4&_user=1722207&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_version=1&_urlVersion=0&_userid=1722207&md5=b8a685ed03dfeadde206a5e355f4f2dd.

FURTHER READING

Carlson, R. E., Dagle, J. E., Shamsuddin, S. A., and Idaho, P. E. (2005). A Summary of Control System Security Standards Activities in the Energy Sector, prepared for the Department of Energy Office of Electricity Delivery and Energy Reliability under the National SCADA Testbed, October 2005.
Balepin, I., Maltsev, S., Rowe, J., and Levitt, K. (2003). Using specification-based intrusion detection for automated response. Proceedings of the 6th International Symposium on Recent Advances in Intrusion Detection, Pittsburgh, PA.
Chiles, J. R. (2001). Inviting Disaster: Lessons from the Edge of Technology, HarperCollins Publishers, New York.
Critical Foundations: Protecting America's Infrastructures, The Report of the President's Commission on Critical Infrastructure Protection, October 1997.
Instrumentation, Systems, and Automation Society (2004). ISA-TR99.00.02-2004, Integrating Electronic Security into the Manufacturing and Control Systems Environment, ISBN 1-55617-889-1, Research Triangle Park, NC.
Instrumentation, Systems, and Automation Society (2004). ISA-TR99.00.01-2004, Security Technologies for Manufacturing and Control Systems, ISBN 1-55617-886-7, Research Triangle Park, NC.


Kabiri, P., and Ghorbani, A. A. (2005). Research on intrusion detection and response: a survey. Int. J. Netw. Secur. 1.
Perrow, C. (1999). Normal Accidents: Living with High-Risk Technologies, Princeton University Press, Princeton, NJ.
Petroski, H. (1992). To Engineer Is Human: The Role of Failure in Successful Design, Vintage Books, New York.
Petroski, H. (1994). Design Paradigms: Case Histories of Error and Judgment in Engineering, Cambridge University Press, Cambridge.
Rinaldi, S., Peerenboom, J., and Kelly, T. (2001). Complexities in Identifying, Understanding, and Analyzing Critical Infrastructure Interdependencies, invited paper for the special issue of IEEE Control Systems Magazine on "Complex Interactive Networks," December. (Provides a more complete description of infrastructure interdependencies.)
United States Computer Emergency Readiness Team (2005). Control Systems Cyber Security Awareness, US-CERT Informational Focus Paper, July 7.

6 THE SENSOR WEB: ADVANCED TECHNOLOGY FOR SITUATIONAL AWARENESS

Kevin A. Delin
SensorWare Systems, Inc., Pasadena, California

Edward Small
Sacramento Metropolitan Fire District, Sacramento, California, and FEMA Urban Search and Rescue Team, CA Task Force 7, Sacramento, California

6.1 INTRODUCTION

The need for situational awareness in the dynamic environment of emergency and rescue operations is well understood. Data must be continually collected, analyzed, assimilated, and disseminated to both local operational personnel and remote commanders. The basic principles of "Facts, Probabilities, Own Situation, Decision, and Plan of Operation" for fire and rescue strategies are just as relevant today as when they were originally described in 1953 [1]. Simply stated, situational awareness informs decision making and decreases reaction time to changing conditions, in some instances even allowing events to be anticipated. Failure to effectively collect, synthesize, and distribute facts to personnel at all levels of a field operation will result in service delays or, worse, death.

Emergency services personnel cannot begin operations without the ability to monitor for hazards and account for personnel. Because emergency and rescue operations are labor intensive, however, continuous and effective monitoring for hazardous conditions often becomes a low priority, or disappears entirely. It is therefore critically important to find a technological means of generating situational awareness for personnel working in the hazardous area, both to speed the course of the operation and to protect the personnel from danger. Here, one such new piece of equipment, the Sensor Web, is examined. This technology can aid and substitute for human efforts in understanding the changing, and often chaotic, conditions during emergency service operations. First, the Sensor Web technology is briefly described. Then, a series of representative field applications, including actual operations, illustrates the unique capabilities of the Sensor Web as applied to emergency services. Finally, future directions of the technology are considered.

6.2 SENSOR WEB TECHNOLOGY

The Sensor Web is an embedded, intelligent infrastructure for sensors. Physically, it consists of spatially distributed sensor/actuator platforms (called "pods") that wirelessly communicate with one another (Fig. 6.1). Originally developed at the National Aeronautics and Space Administration (NASA) for planetary exploration of unknown environments, the Sensor Web is also well suited to providing situational awareness in the chaotic and unpredictable environments associated with emergency and rescue operations. Despite its sophistication, such a system costs no more than traditional, less capable wireless solutions and can actually reduce total operational costs by providing continual, automated in-field analysis, thereby freeing rescue personnel for other, more demanding tasks.

6.2.1 Sensor Web Protocols

The wireless communication between pods should be thought of as an information bus, in the same way that buses connect the individual components (hard drive, optical drive, memory, logic units, etc.) of a computer. Consequently, while the individual pods are certainly networked with each other, the Sensor Web is not, in and of itself, a network but rather a spatially distributed macroinstrument [2]. The distinction is crucial: a network consists of components that route information along communication paths to specific destination points, while a macroinstrument consists of components that share information with each and every other piece at all times without any intervening routing. The Sensor Web’s applicability to situational awareness, in fact, derives from having its sensor measurements taken, distributed, and interpreted collectively over this unique, massively redundant, communication architecture.

FIGURE 6.1 Schematic representation of a Sensor Web. Sensors are connected to the pods (white boxes). The pods communicate wirelessly to form an amorphous network where all pods are equivalent and any pod can be a portal to the outside world.


Stated another way, every pod pushes its data out onto the Sensor Web in an omni-directional manner. There is never a purposeful routing of data toward a specific pod or a special portal or gateway. In this way, every pod is made aware of conditions throughout the entire Sensor Web during each measurement cycle. While the computation hardware in a pod can be quite sophisticated, it is the sharing of information among the pods that gives the Sensor Web its macrointelligence. This is similar to how perception is created in the brain by a complex, interacting set of neurons that share electrochemical signals [3], rather than by individual intelligence at each neuron.

The Sensor Web's data-sharing architecture is distinct from both hub-and-spoke and mesh network types. In hub-and-spoke networks, individual spoke nodes can be synchronized to the hub, but information must always be routed through the hub to reach other points. Mesh networks are typically based on asynchronous, Internet-like (TCP/IP) protocols and require information routing as well. In marked contrast to these two network types, the Sensor Web communication architecture is by design synchronous (all measurements across the system are taken at the same time) and requires no routing.

The Sensor Web communication protocols are simple and robust. Each measurement cycle begins with the pods taking in sensor data. After a measurement is taken, each pod broadcasts its information (data it has taken or received from others) omni-directionally to all pods in communication range. Each pod then processes and analyzes the information it has received, and the cycle repeats. In this way, information is hopped pod-to-pod and spread throughout the entire Sensor Web. The system becomes a coordinated whole by possessing this internal, continuous data stream, drawing knowledge from it, and reacting to that knowledge.
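The following hedged sketch models this broadcast-and-merge behavior as a simple flood over an invented four-pod topology, with each rebroadcast round treated as one hop, and checks that every pod ends up holding the same system snapshot. The topology, hop count, and readings are assumptions for illustration, not a description of the actual firmware.

```python
# Hedged sketch: modeling the broadcast-and-merge cycle as a flood over an
# invented four-pod chain. Each round, every pod rebroadcasts everything it
# knows to the pods in radio range; no routing, no designated gateway.
import random

neighbors = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}  # assumed radio topology

def measurement_cycle(readings, hops):
    """Return each pod's view of the system after `hops` broadcast rounds."""
    views = {pod: {pod: readings[pod]} for pod in neighbors}  # own reading
    for _ in range(hops):
        snapshot = {pod: dict(view) for pod, view in views.items()}
        for pod, view in snapshot.items():      # omni-directional broadcast
            for nbr in neighbors[pod]:
                views[nbr].update(view)         # receiver merges; no routing
    return views

readings = {pod: round(random.uniform(18.0, 22.0), 1) for pod in neighbors}
views = measurement_cycle(readings, hops=3)     # 3 hops span this chain
assert all(view == views[1] for view in views.values())  # shared picture
print(views[1])
```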

6.2.2 Sensor Web Pods

A key feature of the Sensor Web is that its component parts, the sensor platforms or pods, are all alike (Fig. 6.2). In general, they differ only in the sensors attached to them. A single pod, known as the mother, holds the single, system-wide clock that synchronizes all pods. The mother, however, holds no special hardware; indeed, any pod may be designated as the mother simply by labeling it as such. Unlike hub-and-spoke and mesh networks, the Sensor Web is a truly amorphous network with no central point and no specially designated portals or gateways. Each Sensor Web pod consists of five basic modules:

1. The radio. Although any radio frequency can be chosen, the 900-MHz license-free Industrial, Scientific, and Medical (ISM) band has been used in manufactured Sensor Web systems to date. This frequency requires no licensing of end users and does not compete with the more common frequencies found at emergency sites (minimizing jamming). In addition, radios operating at this frequency do not require line-of-sight communication (even going through concrete walls) and have an upper range of about 200 m (compliant with government power regulations). Because each pod essentially functions as a repeater and retransmits data it receives from other pods, the effective radio range extends far beyond the limits imposed by the specifications and regulations associated with a single radio.


2. The microprocessor. This component contains the system's protocols, communicates with the attached sensors, and carries out data analysis as needed.

3. The power system. The combination of solar panels, rechargeable batteries, and micropower electronic design has kept Sensor Web pods operating in the field for years without requiring maintenance.

4. The pod packaging. The package is lightweight, durable, inexpensive, and sealed against such elements as rain, standing water, water sprays, dust storms, and caustic chemicals. In addition, it enables easy and rapid mounting.

5. The sensor suite. This module is completely determined by the specific application. It is the ability to accommodate a wide range of sensor types that makes the Sensor Web so versatile. For the types of operations discussed here, typical sensors include those for monitoring gases and environmental conditions such as air temperature and humidity.

FIGURE 6.2 Sensor Web pods. (a) A standard pod is in the background and shows both the 900-MHz antenna and a solar panel to harvest additional energy for the pod's rechargeable batteries. In the foreground is a special display pod where a flat panel display has replaced the solar panel and reveals the conditions at other pods. This allows mobile personnel to monitor the Sensor Web without having to access a computer. Labels have been attached to the pods to clearly mark their software-assigned identification. (b) A responder deploys a Sensor Web pod. The pods can be mounted using a variety of hardware including stands, magnetic bases, and spikes.

6.2.3 Sensor Web Properties

As just described, the Sensor Web is a distributed macroinstrument based on its unique protocols that allow for data sharing via nonrouted, synchronous inter-pod communication. These characteristics create valuable Sensor Web properties when the technology is applied to emergency and rescue operations (Fig. 6.3).


FIGURE 6.3 A confined space training facility for Urban Search and Rescue. The Sensor Web system was deployed inside the structure as indicated schematically by the dots. The size of the structure is indicated by the size of the firefighter on top of the structure near pod 7. Lines connecting dots indicate physical paths, not pod connectivity. When deployed, the pods were able to wirelessly communicate through the concrete barriers; for example, pod 4 was in direct communication with pods 1, 2, and 6. When a chain saw was intentionally left running between pods 5 and 6, personnel were able to observe the migration of carbon monoxide through the structure with one front toward pods 6, 1, and 2 and a secondary front toward pod 7. All pods in the system went into alarm together when the gas first reached threshold levels at pod 6.

1. The redundant, nonrouted data sharing allows any pod to be a portal to the outside world. Since every piece of the Sensor Web contains the same information, anyone with access to a single pod is provided with the same situational awareness picture. This means that the first responder at the rescue scene looking at the flat panel display of a pod, the incident commander directing operations who has plugged a laptop computer into another pod, and remote government personnel examining the information of a third pod sent out over the Internet will all simultaneously have a common, unified picture of operations.

2. A single, system-wide clock provides an immediate synchronous snapshot that can be intuitively understood by field personnel. Because the Sensor Web is synchronous, all the measurements are taken at the same time and each, in essence, becomes a pixel in the overall picture taken by the Sensor Web. This picture is an immediate integration, valuable for in-field personnel trying to understand plume motion, for example, without time-consuming post hoc analysis. Moreover, when combined with the massively redundant, nonrouted communication paths, this snapshot is known to all pods in the Sensor Web, which allows for anticipatory warnings and alarms. In this way, field personnel are alerted to dangerous conditions anywhere in the work zone immediately by the Sensor Web rather than by a remote incident commander.


3. Synchronous system behavior reduces latency between data taking and data reporting. Each pod has the same situational awareness picture at the end of every measurement cycle. Because each pod can act as a portal into the system, all end users are kept up-to-date on every measurement cycle.

4. The massively redundant, nonrouted communication paths provide a highly robust structure with no central point of system failure. Since all pods are potential portals, the loss of any particular pod does not affect the entire operation. Since any pod can be designated as the single, system-wide clock, the worst-case scenario of a damaged mother pod is rapidly recoverable simply by assigning (via a software label) the designation of "mother" to another pod. This worst-case failure can even be corrected automatically, without human intervention, if, after a predetermined amount of time without receiving a signal from the mother, the pod with the current lowest serial number in the Sensor Web promotes itself to mother (a minimal sketch of this self-promotion logic follows this list).

5. The redundant, nonrouted data sharing allows for recoverable single-point sensor failure. Because every pod contains a microprocessor and knows all sensor measurements across the system at each measurement cycle, each pod can locally analyze global conditions across the entire Sensor Web. It is therefore possible for the system to immediately evaluate a seemingly anomalous measurement against the background of neighboring measurements to determine, on a statistical basis, false positives, or to distinguish a merely worsening trend from a true critical situation requiring evacuation. It is also possible for the system to "suggest", within the same measurement cycle, a missing sensor measurement by combining spatial interpolation of neighboring measurements with recent local measurement trends.

6. The massively redundant, nonrouted communication paths allow the system to be easily and rapidly deployed. No special skill is required to set up a Sensor Web, as all the pods are essentially the same (in contrast to other networking schemes, which require special hardware gateways or router tables). As a result, once the mother pod is switched on to provide a clock for the system, the pods may be dispersed as dictated by the needs of the situation. Data taking is immediate, and the pods may even be reshuffled on the fly and, as long as they stay within communication range, maintain the overall Sensor Web macroinstrument. This is particularly valuable if pods are assigned to specific rescue squads moving independently through a building. Because emergency service operations do not allow time to leisurely recall complex equipment functions, the simplicity and speed of deployment may be among the Sensor Web's most compelling and important features.
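A minimal sketch of the automatic mother recovery in property 4 follows, under the stated rule that, after a timeout without the mother's clock signal, the live pod with the lowest serial number promotes itself. The timeout value and bookkeeping details are assumptions for illustration.

```python
# Hedged sketch of automatic mother-pod recovery (property 4). Each pod
# tracks when it last heard the mother's clock and each peer; after a
# timeout, the lowest-numbered live pod promotes itself. The 90-second
# timeout and the serial numbers below are invented assumptions.
TIMEOUT = 90.0  # seconds without a mother signal before promotion

def elect_mother(pods, now, last_mother_signal):
    """Return the serial number that should act as mother, or None.

    pods maps serial number -> last time that pod was heard from; only
    recently heard pods are candidates. None means the current mother is
    still broadcasting and no promotion is needed.
    """
    if now - last_mother_signal < TIMEOUT:
        return None
    alive = [serial for serial, heard in pods.items() if now - heard < TIMEOUT]
    return min(alive) if alive else None

# Example: the mother went silent at t=100 s; it is now t=200 s.
pods = {7: 195.0, 12: 198.0, 9: 190.0}
print(elect_mother(pods, now=200.0, last_mother_signal=100.0))  # -> 7
```

Because every pod holds the same snapshot, each pod evaluates this rule independently and reaches the same answer, so no coordination messages are needed beyond the ordinary broadcasts.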

6.3 SENSOR WEBS APPLIED TO FIELD OPERATIONS

To date, there have been over 30 Sensor Web field deployments, with systems spanning distances up to 6 miles and running continuously for over 3 years. The systems have been tested extensively in numerous challenging environments, from the remote ice slopes of Antarctica to the searing heat of the central New Mexico desert to the corrosive salt air of the Florida coast [4–6]. Real-time, streaming output of some of these systems may be viewed over the Internet using a variety of user interface displays [6]. Here, the Sensor Web capabilities as applied specifically to emergency and rescue operations are examined. As will be shown, the Sensor Web's properties and physical robustness allow it to efficiently bring key environmental parameters together in a continuous operational picture and to disseminate this picture to in-field and remote personnel.

A critical issue for any type of deployment is determining a pod's location. For extended, outdoor operations, this is most easily accomplished by using an external Global Positioning System (GPS) unit during deployment, noting pod placement coordinates, and entering these coordinates into the pod's memory as it is deployed. Pods can then share their individual coordinates with each other just as they do with sensor data. If pod power usage, size, and cost are not an issue, it is also possible to place a GPS unit inside each pod. Sometimes, however, GPS coordinates may not be a practical option because (i) pods may be shielded from strong GPS signals (as when placed inside a building), (ii) building geometry provides a more transparent understanding of pod placement (e.g. "pod at the west end of the first floor corridor"), or (iii) typical GPS resolution may not be accurate enough (as when a pod is placed against a wall and the question is which room it is in). In such cases, simple hand mapping by in-field personnel has been found to be effective and easily performed, even under rapid deployment circumstances. In addition, these hand-mapped "coordinates" typically provide rescue personnel with a clearer, more intuitive picture of how the Sensor Web is deployed in the area. Pods can still autonomously perform spatial data analysis in these cases because a relative pod placement map can be formed by each pod through shared knowledge of every individual pod's nearest neighbors.
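One way to read "relative pod placement map" is as hop distances computed over the shared neighbor lists; the hedged sketch below takes that interpretation, with an invented neighbor table and hand-mapped pod names. The actual pod firmware may form its map differently.

```python
# Hedged sketch: derive a relative placement map from shared neighbor
# lists via breadth-first search. The neighbor table is invented; names
# mimic the hand-mapped "coordinates" described above.
from collections import deque

neighbor_table = {  # pod -> pods it hears directly (known system-wide)
    "west-corridor": ["stairwell"],
    "stairwell": ["west-corridor", "atrium"],
    "atrium": ["stairwell", "east-corridor", "basement"],
    "east-corridor": ["atrium"],
    "basement": ["atrium"],
}

def hop_distances(origin):
    """Relative placement of every pod as a hop count from `origin`."""
    dist, queue = {origin: 0}, deque([origin])
    while queue:
        pod = queue.popleft()
        for nbr in neighbor_table[pod]:
            if nbr not in dist:
                dist[nbr] = dist[pod] + 1
                queue.append(nbr)
    return dist

# Any pod can run this locally, e.g. to weight a neighbor's gas reading
# by how near that neighbor is:
print(hop_distances("basement"))
```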

6.3.1 Atmospheric Monitoring

The predominant cause of death in a confined space incident is a hazardous atmosphere. Typically, in a confined space or structural collapse operation, one highly competent person is dedicated to the position of "environmental officer". This person continuously monitors the atmosphere by way of a gas sensor with remote sampling capability, by periodically requesting a reading from the entrant carrying a gas sensor, or both. This person also directs the forced ventilation efforts to enhance victim survivability and to maintain as tenable, and explosion-free, an atmosphere as possible.

There are several difficulties associated with the present technique. First, because the gases monitored have different vapor densities, they tend to stratify at different levels in the space, or may become trapped in dead spaces and fail to diffuse into the atmosphere. For this reason, atmospheric sampling must occur at 4-ft intervals, both vertically and horizontally, which significantly slows personnel as they move into or even exit a confined space operation. Second, while a sampling pump and tubing are a common method of obtaining remote gas samples within the confined space, drawing the atmospheric sample through the tubing takes time. Since this may take up to 3 sec/ft of tubing, and 50-ft sections of tubing are not uncommon, gas monitoring can require 1.5–2.5 min of delay per sample. Third, with the portable devices typically used by rescue personnel, evacuation of the space is often at the discretion of those working. Experience has shown that most in-field personnel consider an alarm an annoyance and remain focused on completing their task; the atmosphere is only a "little hazardous". Finally, documentation of the atmospheric monitoring is required by state/federal law for most types of confined space operations, yet maintaining this documentation during the operation takes away from the actual emergency or rescue services.

Sensor Webs have been built and successfully used for monitoring confined space atmospheres. Here, the pods are equipped with the four gas sensors necessary for this application (e.g. oxygen, carbon monoxide, hydrogen sulfide, and explosive limits). The pods also contain sensors for air temperature and humidity. The measurement cycle for this Sensor Web is programmed for 30 sec. These systems have been used for several years now in confined space operations and collapsed structure training exercises. It has been found that rescue personnel readily adapt to the new technology and have no difficulty infusing it into standard procedures (Fig. 6.4).

FIGURE 6.4 Sensor Web used for atmospheric monitoring. (a) A 10-pod Sensor Web system, including laptop and several varieties of pod mounting hardware, fits compactly in a case, ready for rapid field deployment. (b) A pod being lowered into a confined space to determine atmospheric conditions. The pod will be left in this space for the duration of the operation, allowing personnel to move freely into and out of the area.

Benefits of using the Sensor Web for atmospheric monitoring are as follows:

1. Providing a permanent sensing infrastructure that frees rescue personnel from having to take measurements. As a squad penetrates the confined space or collapsed structure, it deploys pods along the ingress path, effectively growing the Sensor Web. Once in place, the Sensor Web allows personnel to move freely into and out of the operations area. This is especially important during lengthy operations where squads are replaced periodically. Moreover, since all pods alarm when any pod detects a hazard, responders working in other portions of the building will be aware of a gas leak that could affect them as well. As a result, responders will quickly know when to exit and how to modify egress paths by detecting remote atmospheric changes due to gas leaks.

2. Reducing the latency of obtaining measurements compared to drawing gas through a tube. Measurements are now available to the environmental officer at the sampling frequency (here, every 30 sec) throughout the field of operation.

3. Providing the environmental officer and incident commanders with a full picture of atmospheric conditions without diverting the rescuers' attention from other crucial tasks (Fig. 6.5). During an actual collapsed structure training operation, the Sensor Web revealed trends of oxygen displacement from the expired carbon dioxide of rescuers in confined areas, as well as increases in temperature and humidity from rescuers and equipment. This allowed the incident commander to move ventilation fans from other parts of the operation into the affected confined areas so rescuers could proceed without stopping. While the Sensor Web system provides warning of imminent hazardous conditions, just like single-station gas detectors, the greater value may be in its ability to display trends in environmental conditions and disseminate that information to commanders with the authority to act.

4. Providing an accurate and immediate record of conditions, recorded every 30 sec by the Sensor Web and output to a laptop connected to the mother pod.

FIGURE 6.5 A screen capture of the user interface for the Sensor Web. Time-based trending data are immediately available, as are current readings from the system. The interface also serves as a command portal into the system. Note the spike in CO concentration at hour 15; this was the result of an acetylene torch being ignited inside the collapsed structure.
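The shared alarm behavior in benefit 1 can be sketched as follows: every pod applies the same four-gas thresholds to the same shared snapshot, so all pods reach the same alarm decision in the same cycle. The threshold values and readings are illustrative assumptions, not operational guidance.

```python
# Hedged sketch of system-wide alarm logic: because every pod holds the
# full snapshot, each pod evaluates the same thresholds locally and all
# pods alarm together. Threshold values are illustrative assumptions only.
THRESHOLDS = {
    "O2_pct_min": 19.5,   # oxygen deficiency
    "CO_ppm_max": 35.0,   # carbon monoxide
    "H2S_ppm_max": 10.0,  # hydrogen sulfide
    "LEL_pct_max": 10.0,  # percent of lower explosive limit
}

def hazardous(r):
    return (r["O2"] < THRESHOLDS["O2_pct_min"]
            or r["CO"] > THRESHOLDS["CO_ppm_max"]
            or r["H2S"] > THRESHOLDS["H2S_ppm_max"]
            or r["LEL"] > THRESHOLDS["LEL_pct_max"])

def system_alarm(snapshot):
    """Run on every pod against the shared snapshot; lists pods in hazard."""
    return [pod for pod, reading in snapshot.items() if hazardous(reading)]

snapshot = {
    1: {"O2": 20.9, "CO": 3.0, "H2S": 0.0, "LEL": 0.0},
    6: {"O2": 20.5, "CO": 88.0, "H2S": 0.0, "LEL": 0.0},  # CO front arrives
}
alarmed = system_alarm(snapshot)
if alarmed:  # identical result on every pod, so all pods alarm together
    print("ALARM at pods:", alarmed)
```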


FIGURE 6.6 Pods are attached to shoring.

6.3.2 Structural Integrity Monitoring

The dangers present in and around structures compromised by natural or man-made disasters are rarely static. Building conditions can continue to deteriorate through earthquake aftershocks; wind, rain, and snow loading; and the intentional or unintentional actions of rescue workers. The four-gas Sensor Web pods described above also have an accelerometer built into them. The accelerometer functions as a tiltmeter to detect a change of state in the pod's orientation. A pod attached to shoring or a building wall can monitor changes as they occur and warn of impending failure. Such pods therefore perform double duty, monitoring both atmosphere and structural integrity, with the attendant reporting benefits for both parameters (Fig. 6.6). Shoring stress tests revealed that the Sensor Web provided warning 60 sec (two measurement cycles) before shoring failure. Such advance warning, distributed throughout the entire space of operations, would greatly reduce the risk of personnel being caught in a further collapse.
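A hedged sketch of the tiltmeter idea: compare the current gravity vector against the orientation captured when the pod was mounted and warn when the angle between them drifts. The alarm threshold and readings are invented for illustration.

```python
# Hedged sketch of the accelerometer-as-tiltmeter idea. The pod compares
# its current gravity vector with the baseline captured at installation;
# the 2-degree threshold and the vectors below are invented assumptions.
import math

TILT_ALARM_DEG = 2.0

def tilt_angle_deg(baseline, current):
    """Angle in degrees between two acceleration (gravity) vectors."""
    dot = sum(b * c for b, c in zip(baseline, current))
    norm = (math.sqrt(sum(b * b for b in baseline))
            * math.sqrt(sum(c * c for c in current)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

baseline = (0.02, 0.01, 0.99)  # recorded when the pod was mounted on shoring
current = (0.07, 0.01, 0.99)   # the shoring is beginning to lean
angle = tilt_angle_deg(baseline, current)
if angle > TILT_ALARM_DEG:
    print(f"orientation shift of {angle:.1f} degrees: warn all pods")
```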

6.3.3 Decontamination Monitoring

During biological decontamination operations, a chemical agent is introduced at the proper temperature and humidity to destroy the intended pathogen. Such operations are typically very complex logistically and labor intensive. The target structure to be decontaminated is first sealed with tarps that must hold the caustic decontaminant gas for many hours. A typical operation has the caustic gas pumped into and out of the structure continually, with small fans placed throughout the structure to ensure even distribution of the gas. The entire building's atmosphere needs to be monitored during the decontamination procedure to ensure that the proper conditions exist (decontaminant concentration, temperature, and humidity) to kill the pathogen.


Presently, wet chemistry techniques are used to monitor the operation. Long plastic tubing is distributed throughout the structure from a central hub. Atmospheric samples are pumped out of the hub at regular intervals and examined chemically to maintain appropriate conditions in the building. There are several disadvantages to this technique. First, distributing the plastic tubes can take nearly a full day, even in a modest-sized home. Second, sampling in this manner means that the chemical analysis yields only the average conditions of the building, because of gas mixing in the central hub. As a result, there may still be pockets in the structure where an appropriate concentration of decontaminant is not reached, allowing the pathogen to survive. Lastly, water can collect in the plastic tubing because of the high humidity typically needed during operations. This water can absorb the gas (ClO2 in the case of anthrax decontamination), creating spurious results during chemical analysis. The tubes can also fill with ice during winter-time decontaminations, making sample retrieval impossible and forcing operators to abort the entire operation.

A Sensor Web with ClO2 sensors attached to each pod has been successfully used to alleviate these problems (Fig. 6.7). Inclusion of the technology into operations was easy, and personnel were able to learn the system in literally under 15 min. At least a dozen actual decontaminations have been performed using this system. In addition to the tremendous labor reduction from eliminating the tube system for gas sample retrieval, the Sensor Web allows operators to follow the ClO2 gas plume throughout the structure in real time. On more than one occasion, this enabled operators to immediately find leaks in the building's tarps as well as pockets where ClO2 concentrations were lower than expected.
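A hedged sketch of the per-pod condition check implied here: each pod's local ClO2 concentration, temperature, and humidity are compared against the operation's setpoints, and out-of-spec pods flag candidate tarp leaks or stagnant pockets. All setpoint values below are illustrative assumptions, not decontamination guidance.

```python
# Hedged sketch of per-pod decontamination condition checks. The setpoints
# and readings are invented for illustration; a real operation would take
# its hold conditions from the approved decontamination protocol.
SETPOINTS = {"clo2_ppm": 500.0, "temp_c": 24.0, "rh_pct": 65.0}

def out_of_spec(snapshot):
    """Return pods whose local conditions will not kill the pathogen,
    i.e. candidate tarp leaks or stagnant pockets needing a fan moved."""
    bad = []
    for pod, r in snapshot.items():
        if (r["clo2_ppm"] < SETPOINTS["clo2_ppm"]
                or r["temp_c"] < SETPOINTS["temp_c"]
                or r["rh_pct"] < SETPOINTS["rh_pct"]):
            bad.append(pod)
    return bad

snapshot = {
    "kitchen": {"clo2_ppm": 640.0, "temp_c": 25.1, "rh_pct": 71.0},
    "crawlspace": {"clo2_ppm": 210.0, "temp_c": 24.8, "rh_pct": 70.0},
}
print(out_of_spec(snapshot))  # -> ['crawlspace'], a low-concentration pocket
```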

6.4 FUTURE DIRECTIONS

The Sensor Web is a general information infrastructure for sensors. It collects, analyzes, and reacts to whatever conditions are important to the end user, and can be coupled with existing methods of obtaining the necessary data. Any low-bandwidth sensor can be attached to the Sensor Web to enhance the situational awareness properties of the system. Initial experiments using a Sensor Web to track in-field personnel during operations show promise. Such a capability would help locate firefighters lost in buildings and would provide an accurate count of personnel going into and out of a rescue operation area. Other applications for the Sensor Web immediately present themselves, especially involving infrastructure protection. The properties of the Sensor Web make it ideal for sentinel security systems where a disturbance at one pod will be known by all. Moreover, the massively redundant connectivity of the macroinstrument makes the Sensor Web amenable to over-the-horizon monitoring of rail lines and highways. Finally, the reactive capabilities of the Sensor Web in dynamic environments are only now being explored. Future systems may, for example, control ventilation during rescue operations based on changing atmospheric conditions, freeing up additional labor that can be better applied to the actual rescue tasks at hand. Clearly, the situational awareness capability inherent in the Sensor Web can only increase with advances in the underlying technology, at no additional cost compared to more traditional, less capable wireless solutions.


FIGURE 6.7 Sensor Web used for decontamination operations. (a) Environmental Protection Agency (EPA) decontamination team member deploying a Sensor Web pod. The black box attached to the pod is the ClO2 sensor. (b) A Sensor Web is deployed in a decontamination area that also uses the traditional sampling method to monitor ClO2 concentration. Note the traditional method requires plastic tubing that runs through the entire house and connects to a central hub where atmosphere samples will be pumped out for chemical analysis.


REFERENCES

1. Layman, L. (1953). Fire Fighting Tactics, National Fire Protection Association, Quincy, MA.
2. Delin, K. A. (2002). The Sensor Web: a macro-instrument for coordinated sensing. Sensors 2, 270–285.
3. Koch, C., and Laurent, G. (1999). Complexity and the nervous system. Science 284, 96–98.
4. Delin, K. A. (2005). Sensor webs in the wild. In Wireless Sensor Networks: A Systems Perspective, N. Bulusu and S. Jha, Eds. Artech House, Norwood, MA, pp. 259–272.
5. Delin, K. A., Jackson, S. P., Johnson, D. W., Burleigh, S. C., Woodrow, R. R., McAuley, J. M., Dohm, J. M., Ip, F., Ferré, T. P. A., Rucker, D. F., and Baker, V. R. (2005). Environmental studies with the sensor web: principles and practice. Sensors 5, 103–117.
6. See links at www.sensorwaresystems.com.

7 CONSEQUENCE MITIGATION: CURRENT RESEARCH AND FUTURE TRENDS

Po-Ching DeLaurentis, Mark Lawley, and Dulcy M. Abraham
Purdue University, West Lafayette, Indiana

7.1 INTRODUCTION

The President's Commission on Critical Infrastructure Protection, in its 1997 report, identified telecommunications, energy systems, water supply systems, transportation, banking and finance, and emergency and government services as essential core infrastructures for modern society. A later national plan for critical infrastructure protection, developed jointly by the Executive Office of the President, Office of Science and Technology Policy (OSTP) and the Department of Homeland Security (DHS), identified other key infrastructures, such as agriculture and food, the defense industrial base, national monuments and icons, dams, commercial facilities, and nuclear reactors, materials, and waste [1].

The increase in terrorist activities around the world, the possible use of weapons of mass destruction, and acts of nature such as earthquakes and floods pose threats to the security of civil infrastructure systems. Further, increasing population concentrations have significantly stressed many infrastructure systems [2]. These factors, combined with the cascading effects of system failures in power grids, telecommunication networks, transportation systems, and so on, intensify the need for effective disaster response planning and mitigation strategies.

Many definitions have been proposed for disaster/hazard/consequence mitigation. In this chapter, consequence mitigation refers to response strategies performed on existing infrastructures after an attack or disaster has occurred. It uses existing resources to take effective actions that minimize damage, the propagation of damage, and the loss of life and property. Consequence mitigation strategies include contingency plans, rapid recovery plans, and operational tactics; examples include damage control, toxic agent confinement, first responder deployment, and resource reallocation [3]. Figure 7.1 illustrates the distinction between mitigation strategies for pre- and postdisaster planning and response.


FIGURE 7.1 Disaster response and planning for infrastructure systems (adapted from Qiao et al. [3]). The figure contrasts measures applied to the existing system before a disaster (security improvements: identification of critical components, security investments; system modification: network topology change, redundancy enhancement, resiliency enhancement) with consequence mitigation after a disaster (contingency plans, rapid recovery plans, operational tactics).

7.2 SCIENTIFIC OVERVIEW

There is only limited research in consequence mitigation. Most existing work focuses on public policy and government planning issues, with little emphasis on operational issues. Researchers distinguish between the goals of disaster prevention and consequence mitigation. Prevention focuses on preparation activities that reduce the likelihood of adverse events; these may include security improvements, upgrades in topology, added redundancy, and/or resiliency enhancements. In contrast, mitigation involves alleviating disaster effects, responding to their impacts, and recovering from the consequences. Recovery may involve reconstruction of damaged facilities, restoration of physical and social networks, rehabilitation (including psychological recovery), and restitution (such as return to a previous physical or societal state). Both prevention and mitigation depend on predisaster planning and preparation.

The following sections provide an overview of recent mitigation research, mitigation tactics and technologies, and effectiveness measures for mitigation efforts.

7.2.1 Active Research Work and Recent Accomplishments

The National Critical Infrastructure Protection Research and Development (NCIP R&D) Plan was established in 2004 by the US DHS and the OSTP [1]. This plan identifies the critical R&D needs in securing the nation’s infrastructures and key resources, and outlines several science, technology, and engineering themes that support all aspects of infrastructure protection. The NCIP R&D Plan summarizes recent accomplishments in the development of strategies for consequence mitigation. It includes ongoing activities regarding protection of critical infrastructures undertaken by different agencies in the United States. Key highlights of consequence mitigation related efforts are listed in Table 7.1.


TABLE 7.1 Ongoing and Recent Efforts of Consequence Mitigation by National Agencies in the United States

Department of Defense (DOD): Development of technology for detecting unexploded ordnance and dangerous materials inside assets and underground facilities.

Department of Energy (DOE): Development and deployment of a real-time global positioning system (GPS) with synchronized wide-area sensor network systems for electric grid monitoring and control; development of decontamination foam, which neutralizes chemical and biological agents in minutes.

Department of Labor (DOL): Developing protection, decontamination, and training guidance for hospital-based first receivers of victims of biological/chemical weapons of mass destruction.

National Science Foundation (NSF): Supporting research in nano- and biotechnology applications in protective materials and devices, new architectures for secure and resilient cyber and physical infrastructures, and sensors and sensor networks.

US Department of Homeland Security, Federal Emergency Management Agency (FEMA): Best practice portfolio, a collection of ideas, activities/projects, and funding sources that can help reduce or prevent the impacts of disasters, searchable by state, county, sector type, hazard, category/activity/project, and keywords [4].

US Environmental Protection Agency (EPA) and National Homeland Security Research Center (NHSRC): Homeland security scientific research and technology development activities consisting of (i) threat and consequence assessment of human exposure to hazardous materials; (ii) decontamination and consequence management; (iii) water infrastructure protection; (iv) emergency response capability enhancement; and (v) technology testing and evaluation [5].

7.2.2 Mitigation Technologies and Tactics

There are several ways of mitigating disaster damage and potential hazards. One tactic aims at early detection and monitoring of the progression of a disaster so that damage control can be implemented as early as possible. Another is the application of technologies that help minimize the damage a disaster causes. Proper training for the emergency workers called to respond to disasters is also necessary for effective disaster mitigation.


7.2.2.1 Communication. A commonly suggested means of consequence mitigation is a reliable wireless information technology (IT) infrastructure [6] that can be deployed in various scenarios as an independent and secured emergency communication system [7]. Emergency management and control personnel and responders may rely heavily on such infrastructure to communicate during disasters, particularly when the public communications infrastructure is destroyed or severely damaged. Developing frameworks for coordinated chains of communication is essential to provide quick response to victims as well as to ensure the safety of the responders.

7.2.2.2 Hazard Detection. Chemical/biological (C/B) or radiological attacks are, in general, difficult to detect and control, especially in public spaces. The Program for Response Options and Technology Enhancements for Chemical/Biological Terrorism in Subways (PROTECTS) is a Department of Energy initiative aimed at developing and applying technologies for dealing with C/B terrorism. It covers both the emergency planning and response phases of an incident and focuses on modeling and analyzing responses using engineering technologies [8]. A key technology developed by the PROTECTS program is detection of the release of a C/B or radiological agent. Detection can be made by agent sensors, artificial intelligence, and video technologies. An artificial intelligence algorithm is used to recognize patterns of motion and sound characteristic of a panicking crowd; confirmation of a true incident (not a false alarm) is then done by inspecting closed-circuit TV images. This technology helps keep casualties low, since the situation can be observed remotely.

Another frequently mentioned technology is sensor networks [9–11]. A sensor network is a collection of small, low-cost, low-power devices with limited memory storage and wireless communication capabilities. One application is the personal digital assistant (PDA)-based multiple-patient triage device that can be used for on-scene patient triage, tracking, data recording, and monitoring of the physical environment. These data can help a medical facility prepare and appropriately allocate required resources (e.g. medical staff, beds, and operating rooms) before patients arrive, especially in a mass casualty scenario. Another sensor network application is vital-sign sensors that can be worn by disaster victims and first responders. These sensors monitor a patient's physical condition and relay data to emergency medical personnel, enabling them to manage multiple patients simultaneously and be alerted to sudden changes in a patient's physiologic status [9]. Bioscience can also be used to detect hazards; for instance, the Sandia National Laboratories have been developing state-of-the-art technology that uses special proteins to accurately and quickly detect specific bioterror threat agents [12].

7.2.2.3 Mitigating Technologies. Several technologies can be applied to mitigate the hazard from a C/B attack: (i) inflatable barriers for blocking the spread of an agent; (ii) water curtains, air curtains, and water or foam sprays for containing and detoxifying contaminated air; (iii) support tools for first responders and incident commanders, such as hand-held devices that can receive information on-site; and (iv) training and exercises [8].
The Sandia National Laboratories have been active in developing effective tools to counter C/B attacks. One technology is a containment foam that can be rapidly applied around an explosive device to reduce blast effects and to capture hazardous material that might otherwise spread [13]. Special types of coating materials can be used to contain radioactive materials by binding them, thereby preventing them from


spreading. The lab has also developed bomb disablement tools that can be deployed by first responders to disable improvised explosive devices (IEDs) safely and remotely while preserving forensic evidence [12]. Successful technology applications are adopted by various US government agencies such as the DHS, the National Institute for Occupations Safety and Health (NIOSH), and the US Army. 7.2.2.4 Training. Emergency response/recovery technologies will not be effective without well-trained personnel [7]. Thus, organizations responsible for emergency or disaster mitigation tasks should provide proper training for first responders and incident management teams. Federal Emergency Management Agency (FEMA)’s Emergency Management Institute (EMI) provides a comprehensive list of training courses, both on-line and on-site, for emergency management officials [14]. In addition to educational and knowledge-based training, scenario-based physical exercises, such as simulated fires in a subway system, can also be an effective way for first responders and the incident commander to respond and implement optimal ventilation control strategies [8]. The Technical Support Working Group (TSWG), a US interagency forum, dedicates itself to developing technologies and equipment for the needs of the combating terrorism community. TSWG’s training efforts include the development of “delivery architectures” (e.g. knowledge management systems and software architectures), “advanced distributed learning” (e.g. tools and guidelines for developing standard training materials), along with “training aids, devices and simulations” (e.g. virtual reality and computer-based simulations) [15]. 7.2.2.5 Modeling and Simulation. Modeling and simulation tools can be used to understand what happens in disasters and thus what needs to be done in the mitigating and recovery stages. These tools can also be used as training exercises for the first responders, as well as the command and control personnel of Emergency Operations Centers (EOC) at all levels. In 2003 and 2004, the National Institute of Standards and Technology (NIST) held workshops on Modeling and Simulation for Emergency Response aimed at utilizing modeling and simulation technologies to better prepare for, mitigate, respond to, and recover from emergencies. Table 7.2 summarizes the range of emergency response modeling and simulation application tools discussed in the workshop. Details on the standard and related tools available can be found in Appendices A and B in NIST’s report [16]. Bryson et al. [17] pointed out that there have been few studies in disaster recovery plan modeling in the management science (MS) and operations research (OR) community. They demonstrated an application of mathematical modeling techniques for decision-making support for disaster recovery planning, suggesting the need for dedicated efforts in integrating technical and MS/OR knowledge and techniques so that mitigation tactics and strategies can be more efficient and effective. 7.2.3

7.2.3 Effectiveness Measures

In addition to pursuing state-of-the-art technologies and techniques for planning and implementing disaster consequence mitigation, it is very important to have precise and accurate methods for evaluating the effectiveness and efficiency of mitigation actions. Effectiveness measures can serve as a real-time feedback mechanism during the mitigation of a disaster, so that decision makers and responders can adjust plans and tactics accordingly.


TABLE 7.2 Modeling and Simulation Applications for Emergency Response Presented at the NIST 2003 Workshop

Planning applications:

Virtual reality (VR) modeling and simulation, in which geographic information system (GIS) data, drawings, building plans, and city maps are processed using terrain generation software. Developer: Institute for Defense Analyses (IDA).

JWFC simulation toolbox, providing multiechelon simulation across federal, state, and local government agencies. Developer: Joint Warfighting Center (JWFC).

3D digital modeling, simulation, communication, and emergency response database (including facility models of high-risk sites and libraries of "best practice" processes). Developers: Dassault Systems; Data Systems & Solutions (DS&S); Science Applications International Corporation (SAIC); and SafirRosetti.

Mass prophylaxis planning using ARENA simulation and queueing analysis in Excel. Developer: Department of Health and Human Services (DHHS), Agency for Healthcare Research and Quality (AHRQ).

Training:

Top Officials (TOPOFF), a national-level "real-time" weapons of mass destruction (WMD) response exercise. Developers: the Department of Justice, the Department of State, and the Federal Emergency Management Agency (FEMA).

2D simulations for command and control exercises of fire incidents. Developer: National Fire Programs, US Fire Administration, FEMA.

Automated exercise and assessment system (AEAS), for emergency response and emergency management practitioners from the infrastructure owner/operator and local and state jurisdictional levels, specific to WMD terrorist attacks. Developer: National Guard Bureau (NGB).

Real-time response:

Tools and services for atmospheric plume predictions, delivered in time for an emergency manager to decide whether protective action is necessary to protect the health and safety of people in affected areas. Developer: National Atmospheric Release Advisory Center (NARAC).

Examples of effectiveness measures include (i) public management—evacuation, restriction of entry, and quarantine; (ii) damage control—damage evaluation and pinpointing, agent confinement and elimination (e.g. of a toxic substance), and provision of alternative sources of service; and (iii) recovery and rebuilding. However, the real challenge lies in defining and determining these effectiveness measures. Methods that can help achieve the goals of effectiveness measures for consequence mitigation, drawn from general emergency management, include the following:


1. Effectiveness measures developed from past experience. After Hurricane Katrina devastated the New Orleans region in 2005, emergency response agencies began to evaluate strategies for better evacuating and sheltering residents and reducing loss of life and property. The actions and plans taken during that specific incident can serve as a "least effective" baseline; future mitigation plans and strategies for hurricane response can then be improved by benchmarking against it.

2. Use of risk and vulnerability assessment/mapping. Risk and vulnerability assessment/mapping can be used for designing mitigation tasks and for examining whether current mitigation strategies and tactics are sufficient or effective. For example, matrices of multiple hazards can be used to analyze the relevance of different mitigation tools for responding to various hazards. Risk and vulnerability mapping can also be used to show the probabilities of disaster occurrences as well as the possible damage to physical property, community infrastructure systems, and human lives [18].

3. Simulation models. Even though simulation may not provide an optimal solution, it can be useful as an effectiveness measure for disaster/hazard mitigation by integrating predisaster and postdisaster management and action plans and tactics to validate their effectiveness in a virtual environment. For example, computer simulation for assessing the benefits of new technologies for disaster mitigation can reveal significant insight into such applications in a cost- and time-efficient manner [19]. Discrete-event simulation and agent-based modeling, combined with human-in-the-loop and live simulation, can provide significant insight into the effectiveness and efficiency of disaster responses.

4. Use of detection devices and remote sensing tools. Detection devices, such as sensors that monitor levels of C/B agents, may be used after a hazardous material has been released. From these sensor readings, responders can determine how quickly the agent is spreading, or whether it remains confined to the release site, and plan their response accordingly. Real-time sensing then provides feedback on whether their actions have been effective (a simple illustration follows this list).
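
As a minimal illustration of item 4, the sketch below infers whether a release is still spreading by examining concentration trends at sensors located at known distances from the release point. The distances, readings, units, and trend threshold are all invented for the example.

```python
# Illustrative check of whether an agent release is spreading outward,
# based on time series from sensors at known distances from the release
# point. Distances, readings, and the threshold are hypothetical.

# concentration readings (ppm) per sensor, one value per minute
readings = {
    10.0: [5.0, 4.6, 4.1, 3.8],   # 10 m from release point: decaying
    50.0: [0.0, 0.4, 1.1, 1.9],   # 50 m away: rising -> agent arriving
    100.0: [0.0, 0.0, 0.0, 0.1],  # 100 m away: barely detectable
}

def trend(series, window=3):
    """Mean per-minute change over the last `window` readings."""
    recent = series[-window:]
    return (recent[-1] - recent[0]) / (len(recent) - 1)

# Sensors whose concentration is still rising mark the spreading front.
spreading_front = [d for d, s in readings.items() if trend(s) > 0.1]
if spreading_front:
    print(f"agent still spreading; rising at {max(spreading_front):.0f} m")
else:
    print("readings stable or falling: release appears confined")
```
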

7.3 ACTIVE RESEARCH AND FUNDING

Funding for research efforts in consequence mitigation has surged since September 11, 2001. Table 7.3 lists several such funding sources.

7.4 CRITICAL NEEDS ANALYSIS

Consequence mitigation focuses on recovery after an attack or disaster has occurred. Technologies such as those discussed above support disaster mitigation efforts by providing more precise postdisaster information, such as pinpointing the location and amount of a toxic chemical released.


TABLE 7.3 Research Funding Available in the United States and Other Countries

US Sources of Funding

National Science Foundation (NSF)—Division of Civil, Mechanical and Manufacturing Innovation, Infrastructure Management and Extreme Events (IMEE) [20]. Funding interest: scientists, engineers, and educators focusing on large-scale hazards to civil infrastructure and society, and on issues of preparedness, response, mitigation, and recovery.

National Science Foundation (NSF)—Hazards Mitigation and Structural Engineering (HMSE) [21]. Funding interest: scientists, engineers, and educators working on fundamental research such as the design and performance of structural systems, and on new technologies for improving the behavior, safety, and reliability of structural systems and their resistance to natural hazards.

Centers for Disease Control and Prevention (CDC)—engaging state and local emergency management agencies to improve states' ability to prepare for and respond to bioterrorism (funding opportunity number CDC-RFA-TP06-601) [22]. Funding interest: preparing the Nation's public health systems to minimize the consequences of natural or man-made, intentional or unintentional disasters; funding is available only to the National Emergency Management Association (NEMA).

National Institutes of Health—Small Business Innovation Research (SBIR), E-learning for HAZMAT and emergency response (SBIR [R43/R44]) [23]. Funding interest: small businesses developing advanced technology training (ATT) products for the health and safety training of hazardous materials (HAZMAT) workers, emergency responders, and skilled support personnel.

US Department of Homeland Security—FEMA Hazard Mitigation Grant Program (HMGP) [24]. Funding interest: states and local governments implementing long-term hazard mitigation measures after a major disaster declaration.

US Department of Homeland Security—Predisaster Mitigation Grant Program [24]. Funding interest: states, territories, Indian tribal governments, and communities for hazard mitigation planning and the implementation of predisaster mitigation projects.

US Department of Homeland Security, Office of Grants and Training—2006 Homeland Security Grant Program (HSGP) [25]. Funding interest: states and urban areas obtaining critical resources to achieve the interim national preparedness goal and implement homeland security strategies.

US Department of Homeland Security, Office of Grants and Training—Emergency Management Performance Grant (EMPG) [25]. Funding interest: helping states develop effective emergency management systems that encourage cooperation and partnership among government, business, volunteer, and community organizations, thus strengthening their preparedness for catastrophic events and their emergency management capabilities.

International Sources of Funding

The European Mediterranean Disaster Information Network (EU-MEDIN)—Applied Multi-Risk Mapping of Natural Hazards for Impact Assessment (ARMONIA) [26]. Funding interest: providing the European Union (EU) with harmonized methodologies for producing integrated risk maps to achieve effective spatial planning in areas prone to natural disasters in Europe.

Disaster Hazard Mitigation Project, Kyrgyz Republic [27]. Funding interest: (i) minimizing the exposure of humans, livestock, and riverine flora and fauna to radionuclides associated with abandoned uranium mine tailings and waste rock dumps in the Mailuu-Suu area; (ii) improving the effectiveness of emergency management and response by national and local government agencies and local communities; and (iii) reducing the loss of life and property in key landslide areas of the country.

Australian Government—the Local Grants Scheme (LGS) and the National Emergency Volunteer Support Fund (NEVSF) [28]. Funding interest: local governments helping communities develop and implement emergency management initiatives and enhance critical infrastructure protective measures, and providing security-awareness training for local government staff.

Nevertheless, there is little discussion of the operational aspects of the disaster mitigation effort itself, that is, of the mitigation decision-making processes, triggering events, responder coordination, and so forth. Thus, research contributions in the following areas are essential for effective operational response after a disaster.

7.4.1 Multiorganizational Coordination

In any disaster scenario, it is likely that multiple agencies across different jurisdictions in the affected area will participate in the mitigation effort. For example, if a C/B attack were to occur at a subway station in a city, the local jurisdiction levels involved would include the subway management team (e.g., the city transportation administration), the city government, and the county in which the city is located. Public, private, and nonprofit organizations would need to coordinate their actions with each other and across jurisdictions to create a dynamic emergency response system that ensures effective mitigation of and response to a disaster. Since the most effective response processes and plans will transcend political and response-agency boundaries, coordinated response plans must be developed through the collaborative efforts of all key responders.

7.4.2 Management Framework

A disaster response management framework is needed to support decision makers in a disaster scenario.


Such an emergency management framework should include the use of real-time monitoring of the situation (covering the general environment, infrastructure systems, first responders, and victims) and real-time information and communication tools, both vertically in the organizational structure and horizontally among responders. Coordination of effective reporting from multiple sites (for instance, in an earthquake rescue effort) should also be incorporated into the framework so that mitigation decisions can be made to achieve effective and efficient actions that minimize the negative impacts of a disaster [29]. Research on technology integration for real-time information exchange, decision-making support systems, and cross-infrastructure recovery efforts is much needed. Furthermore, since response resources are likely to be limited or overwhelmed, it is important to prioritize response activities. Scarce-resource allocation decisions need to be modeled, analyzed, and optimized beforehand to ensure that the most effective priorities and trade-offs are realized during response.
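
To illustrate the kind of scarce-resource analysis meant here, the following sketch ranks invented response tasks by estimated benefit per responder-hour and fills a fixed hour budget greedily. The greedy rule is only a heuristic for the underlying knapsack-type problem; a real planning model would use validated data and an exact optimization formulation.

```python
# Hypothetical greedy allocation of scarce responder-hours to tasks,
# ranked by estimated benefit per hour. All data are invented.
tasks = [
    # (name, hours required, estimated benefit score)
    ("search collapsed school", 40, 120),
    ("restore hospital power", 10, 80),
    ("clear main supply route", 25, 60),
    ("shelter setup", 15, 30),
]
BUDGET_HOURS = 60

plan, remaining = [], BUDGET_HOURS
# Consider tasks in decreasing order of benefit per hour.
for name, hours, benefit in sorted(tasks, key=lambda t: t[2] / t[1],
                                   reverse=True):
    if hours <= remaining:
        plan.append(name)
        remaining -= hours
print("selected:", plan, "| unused hours:", remaining)
```
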

7.5 RESEARCH DIRECTIONS

Four major themes can be identified for future research.

7.5.1 Computing Hardware, Software, and Research Tools

In the current environment, networks of computers and devices embedded in infrastructure systems are susceptible to crippling cyber attacks. Thus, there is a significant need for next-generation Internet architectures designed with security and inherent protection features at all levels of hardware and software [1]. The NCIP R&D plan proposes designing cyber infrastructures that are resilient, self-diagnosing, and self-healing. In addition, future research will need to focus on developing integrated systems architectures and frameworks so that managers and responders across jurisdictions and hierarchical response organizations can be well coordinated.
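
As a down-to-earth illustration of the self-diagnosing idea, the sketch below implements a heartbeat monitor that declares a component failed when its heartbeats stop arriving, which is the sensing half of any self-healing loop. The component names and timeout are invented, and a real architecture would add authenticated heartbeats and automated recovery actions.

```python
# Toy heartbeat-based failure detector, the sensing half of a
# self-diagnosing/self-healing loop. Names and timeouts are invented.
import time

TIMEOUT = 2.0          # seconds without a heartbeat => declared failed
last_seen = {"router-A": 0.0, "scada-gw": 0.0, "hist-db": 0.0}

def heartbeat(component):
    last_seen[component] = time.monotonic()

def check_failures():
    now = time.monotonic()
    return [c for c, t in last_seen.items() if now - t > TIMEOUT]

# Simulate: two components keep reporting, one goes silent.
heartbeat("router-A"); heartbeat("scada-gw"); heartbeat("hist-db")
time.sleep(1.0)
heartbeat("router-A"); heartbeat("scada-gw")
time.sleep(1.5)        # hist-db was last seen 2.5 s ago
for failed in check_failures():
    print(f"{failed}: no heartbeat, trigger restart / failover")
```
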

7.5.2 Interdependencies between Civil Infrastructures

Since modern civilization relies heavily on basic civil infrastructures that are interdependent, improving coordination of the security precautions taken by different utility systems is also an important task. In other words, it is necessary to look beyond the effects of an incident on a single system. Instead, future technologies and practice should seek to understand the perturbed behaviors of a complex "system of systems" [30], including the unexpected cascading effects of infrastructure damage and failure. A good example is the concept of "resilient cities," which focuses on strengthening the physical and social elements of an urban area and on bridging natural-hazard and antiterrorism predisaster mitigation and postdisaster response strategies [31]. More effective use of technology transfer and broader industry/government support are needed to achieve effective consequence mitigation for industry and society as a whole [7].
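
The "system of systems" point can be illustrated with a toy dependency graph: when one infrastructure node fails, everything that depends on it fails in turn. The graph below is invented, and the model deliberately ignores redundancy, partial degradation, and restoration, all of which real interdependency models must capture.

```python
# Toy cascade over an invented infrastructure dependency graph:
# an edge A -> B means "B depends on A", so B fails when A fails.
from collections import deque

depends_on = {                       # downstream dependents of each node
    "power": ["water pumps", "telecom", "traffic signals"],
    "telecom": ["911 dispatch", "SCADA links"],
    "water pumps": ["hospital water"],
    "SCADA links": [],
    "911 dispatch": [], "traffic signals": [], "hospital water": [],
}

def cascade(initial_failure):
    """Breadth-first propagation of a single-node failure."""
    failed, frontier = {initial_failure}, deque([initial_failure])
    while frontier:
        node = frontier.popleft()
        for dependent in depends_on.get(node, []):
            if dependent not in failed:
                failed.add(dependent)
                frontier.append(dependent)
    return failed

print(sorted(cascade("power")))   # prints all 7 affected nodes
```
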

7.5.3 Sensing Technologies for Assessment

Advanced sensing technologies are essential for assessing the causes and effects of disaster events. Future infrastructures must be designed with more embedded sensing, diagnostic, and predictive capabilities.


Integrated sensor networks with powerful computational and communication capabilities are now becoming a reality, and infrastructure planners, engineers, managers, and operators must develop a fuller understanding of how these can be incorporated into design and decision processes. For instance, a real-time building damage sensing network could be used to assess the condition of a building after an extreme event, such as an earthquake. This type of information would greatly help disaster responders responsible for evacuation, search, rescue, and building reinforcement efforts. Further, remote sensing technologies, such as satellite imagery, are especially helpful in surveying a large affected area [32].

7.5.4 Modeling and Analysis of Disaster Scenarios, Mitigation Plans, and Response Efforts

Simulation modeling can be very useful in assessing the effectiveness of mitigation plans for different disaster scenarios. For example, scenario analyses of various types and severities of terrorist attacks can be performed using simulation. This type of "what if" analysis serves as a virtual exercise test bed for identifying critical mitigation resources and improving mitigation plans and decisions. Furthermore, optimization techniques are very useful for analyzing decision problems where priorities must be considered and trade-offs must be made. For example, reconfiguring damaged infrastructures and prioritizing repair tasks to optimally meet a population's demand during repair can be addressed using advanced optimization techniques from operations research and management science. Technical challenges associated with developing these types of simulation and optimization models include model abstraction, validation, data scarcity, computational requirements, identification of appropriate objective functions, and interpretation of results, among others. A significant practical challenge is engaging laypeople in the development of the models and ensuring that users have adequate confidence in the results the models produce. Although this type of analysis has a long history in other large-scale planning and operation efforts, such as manufacturing and transportation, it is in its infancy in disaster mitigation and response planning. Thus, there are significant opportunities for researchers to make seminal contributions in this area. One example is a computer interface that integrates an agent-based discrete-event simulation model with a geographic information system so that real-time data exchange and communication can coordinate and facilitate large-scale disaster response efforts [33]. Stochastic programming techniques can be applied to the problem of transporting first-aid commodities and response personnel in an earthquake scenario [34]. Other examples of this type of work can be found in [35–37].
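
As a small worked instance of the repair-prioritization problem just mentioned, suppose a single crew must sequence repairs to minimize total customer-days of outage. For this simplified objective, the weighted-shortest-processing-time ordering (Smith's rule from scheduling theory) is optimal; the sketch below, with invented repair times and customer counts, checks the rule against brute-force enumeration.

```python
# Single-crew repair sequencing: minimize total customer-days of outage.
# Ordering by customers per repair-day (Smith's rule) is optimal for
# this objective. Repair times and customer counts are invented.
from itertools import permutations

repairs = [  # (site, repair days, customers without service)
    ("north feeder", 4, 8000),
    ("downtown substation", 2, 12000),
    ("rural line", 6, 1500),
]

def customer_days(order):
    t, total = 0, 0
    for _, days, customers in order:
        t += days                # crew finishes this repair at time t
        total += customers * t   # these customers were out for t days
    return total

smith = sorted(repairs, key=lambda r: r[2] / r[1], reverse=True)
best = min(permutations(repairs), key=customer_days)
print("Smith's rule order:", [r[0] for r in smith], customer_days(smith))
print("brute-force best  :", [r[0] for r in best], customer_days(best))
```

Real restoration problems add multiple crews, precedence constraints, and network effects, which is where the advanced optimization techniques cited above come in.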

REFERENCES

1. The Executive Office of the President, Office of Science and Technology Policy (OSTP), and the Department of Homeland Security Science and Technology Directorate. (2004). The National Plan for Research and Development in Support of Critical Infrastructure Protection. US Department of Homeland Security, Washington, DC.
2. Hamilton, R. M. (2000). Science and technology for natural disaster reduction. Nat. Hazard. Rev. 1(1), 56–60.
3. Qiao, J., Jeong, H. S., Lawley, M. A., and Abraham, D. M. (2006). Physical security aspects in water infrastructure. In Advances in Homeland Security Volume 1, The Science of Homeland Security, S. F. Amass, A. K. Bhunia, A. R. Chaturvedi, D. R. Dolk, S. Peeta, and M. Atallah, Eds. Purdue University Press, pp. 37–62.
4. FEMA. (2008). FEMA Mitigation Best Practices Search. http://www.fema.gov/mitigationbp/index.jsp.
5. U.S. Environmental Protection Agency. (2008). Homeland Security Research. http://www.epa.gov/nhsrc/index.htm.
6. Midkiff, S. F., and Bostian, C. W. (2002). Rapidly-deployable broadband wireless networks for disaster and emergency response. The First IEEE Workshop on Disaster Recovery Networks (DIREN '02). June 24, 2002, New York.
7. Schainker, R., Douglas, J., and Kropp, T. (2006). Electric utility responses to grid security issues. IEEE Power and Energy Magazine, March/April, 31–37.
8. Policastro, A. J., and Gordon, S. P. (1999). The use of technology in preparing subway systems for chemical/biological terrorism. APTA 1999 Rapid Transit Conference. Toronto, ON.
9. Lorincz, K., Malan, D. J., Fulford-Jones, T. R. F., Nawoj, A., Clavel, A., Shnayder, V., Mainland, G., Welsh, M., and Moulton, S. (2004). Sensor networks for emergency response: challenges and opportunities. IEEE Pervasive Computing 3(4), 16–23.
10. ARC Research Network on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) Applications. (2007). http://www.ee.unimelb.edu.au/ISSNIP/apps/index.html.
11. Zussman, G., and Segall, A. (2003). Energy efficient routing in ad hoc disaster recovery networks. Proceedings of the IEEE INFOCOM 2003 Conference. San Francisco, CA.
12. Sandia National Laboratories. (2006). Defense Against Chemical and Biological Threats. http://www.sandia.gov/mission/homeland/chembio/development/biotechnology/index.html.
13. Sandia National Laboratories. (2008). Explosive Countermeasures. http://www.sandia.gov/mission/homeland/programs/explosives/index.html/.
14. FEMA Emergency Management Institute. (2008). http://training.fema.gov/.
15. The Technical Support Working Group. (2008). http://www.tswg.gov/.
16. National Institute of Standards and Technology. (2008). Modeling and Simulation for Emergency Response Workshops. http://www.mel.nist.gov/div826/msid/sima/simconf/mns4er.htm.
17. Bryson, K.-M., Millar, H., Joseph, A., and Mobolurin, A. (2002). Using formal MS/OR modeling to support disaster recovery planning. Eur. J. Oper. Res. 141, 679–688.
18. Godschalk, D. R., and Brower, D. (1985). Mitigation strategies and integrated emergency management. Public Adm. Rev. 45 (Special Issue), 64–71.
19. Robinson, C. D., and Brown, D. E. (2005). First responder information flow simulation: a tool for technology assessment. Proceedings of the 37th Conference on Winter Simulation. Orlando, FL.
20. NSF. (2008). Infrastructure Management and Extreme Events (IMEE). http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=13353&org=CMMI.
21. NSF. (2008). Hazard Mitigation and Structural Engineering (HMSE). http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=13358&org=CMMI&sel_org=CMMI&from=fund.
22. CDC Grant. (2007). http://www.grants.gov/search/search.do?oppId=10597&mode=VIEW.
23. Department of Health and Human Services. (2005). http://grants.nih.gov/grants/guide/rfa-files/RFA-ES-05-003.html.
24. FEMA. (2008). FEMA Hazard Mitigation Grant Programs. http://www.fema.gov/government/grant/hmgp/.
25. U.S. Department of Homeland Security, Office of Grants and Training (G&T). (2008). http://www.ojp.usdoj.gov/odp/about/overview.htm.
26. ARMONIA – Applied Multi-Risk Mapping of Natural Hazards for Impact Assessment. (2008). http://www.ist-world.org/ProjectDetails.aspx?ProjectId=46f1c981ed3b4821947cd3624b820fb4&SourceDatabaseId=7cff9226e582440894200b751bab883f.


27. Disaster Hazard Mitigation Project, Kyrgyz Republic. (2004). http://web.worldbank.org/external/projects/main?pagePK=64283627&piPK=73230&theSitePK=40941&menuPK=228424&Projectid=P083235.
28. Australian Government. National Emergency Volunteer Support Fund (NEVSF). (2008). http://www.ema.gov.au/communitydevelopment.
29. Comfort, L. K., Dunn, M., Johnson, D., Skertich, R., and Zagorecki, A. (2004). Coordination in complex systems: increasing efficiency in disaster mitigation and response. Int. J. Emerg. Manage. 2(1–2), 62–80.
30. Little, R. G. (2003). Toward more robust infrastructure: observations on improving the resilience and reliability of critical systems. Proceedings of the 36th Annual Hawaii International Conference on System Sciences. Big Island, HI.
31. Godschalk, D. R. (2003). Urban hazard mitigation: creating resilient cities. Nat. Hazard. Rev. 4(3), 136–143.
32. Adams, B. J., Huyck, C. K., Gusella, L., Wabnitz, C., Ghosh, S., and Eguchi, R. T. (2006). Remote sensing technology for post-earthquake damage assessment: a coming of age. Proceedings of the 8th U.S. National Conference on Earthquake Engineering. April 18–22, San Francisco, CA.
33. Wu, S., Shuman, L., Bidanda, B., Kelly, M., Sochats, K., and Balaban, C. (2007). Disaster policy optimization: a simulation based approach. Proceedings of the 2007 Industrial Engineering Research Conference. Nashville, TN.
34. Barbarosoglu, G., and Arda, Y. (2004). A two-stage stochastic programming framework for transportation planning in disaster response. J. Oper. Res. Soc. 55, 43–53.
35. Fiedrich, F., Gehbauer, F., and Rickers, U. (2000). Optimized resource allocation for emergency response after earthquake disasters. Safety Science 35.
36. Altay, N., and Green, W. G., III. (2006). OR/MS research in disaster operations management. Eur. J. Oper. Res. 175, 475–493.
37. Lee, E., Maheshwary, S., Mason, J., and Glisson, W. (2006). Large-scale dispensing for emergency response to bioterrorism and infectious-disease outbreak. Interfaces 36(6), 591–607.

FURTHER READING

Disaster Planning and Public Policy

Berke, P. R., Kartez, J., and Wenger, D. (1993). Recovery after disaster: achieving sustainable development, mitigation and equity. Disasters 17(2), 93–109.
Comfort, L., Wisner, B., Cutter, S., Pulwarty, R., Hewitt, K., Oliver-Smith, A., Wiener, J., Fordham, M., Peacock, W., and Krimgold, F. (1999). Reframing disaster policy: the global evolution of vulnerable communities. Env. Hazard 1, 39–44.
Godschalk, D., Beatley, T., Berke, P., Brower, D. J., and Kaiser, E. J. (1999). Natural Hazard Mitigation: Recasting Disaster Policy and Planning. Island Press, Washington, DC.
Freeman, P. K., Martin, L. A., Linnerooth-Bayer, J., Mechler, R., Pflug, G., and Warner, K. (2003). Disaster Risk Management: National Systems for the Comprehensive Management of Disaster Risk and Financial Strategies for Natural Disaster Reconstruction. Inter-American Development Bank, Sustainable Development Department, Environment Division, Integration and Regional Programs Department, Regional Policy Dialogue, Washington, DC.

Technical Research

Jacobson, S. H., Kobza, J. E., and Pohl, E., Eds. (2007). IIE Trans. 39(1), Special Issue on Homeland Security.


General Research for Natural Disasters

Cruz, A. M. (2008). Engineering Contribution to the Field of Emergency Management. http://training.fema.gov/EMIWeb/edu/docs/Engineering%20Contribution.pdf.
Heaney, J. P., Peterka, J., and Wright, L. T. (2000). Research needs for engineering aspects of natural disasters. J. Infrastruct. Syst. 6(1), 4–14.


INDEX

9/11 Commission Act (2007), 42 active warden, 58 Advanced Encryption Standard (AES), 39 Advanced Mobile Phone Service (AMPS), 37–38 Alliance for Public Technology (APT), 44 American National Standards Institute (ANSI) Homeland Security Standards Panel (HSSP), 44 artificial intelligence algorithm, 100 Association of Public-Safety Communications Officials International (APCO), 38–39 AT&T, 4–6, 16 atmospheric monitoring and Sensor Webs, 89–91 authentication and key agreement (AKA) protocol, 38 Baby Bells, 6 betweenness, 10–11 bioscience, 100 bitmap (BMP), 55, 56, 59 blind detection, 57, 58 cascade failure analysis, 9, 11 cascade failure resilience, 9 cascade resiliency, 12–13 cascading failure, 75–76 cellular authentication and voice encryption (CAVE) algorithm, 38 cluster coefficient, 10 cluster networks, 10, 11, 13 code division multiple access (CDMA), 38

Communications Assistance for Law Enforcement Act (CALEA), 43 Communications Security, Reliability, and Interoperability Council (CSRIC), 42 consequence mitigation, 97 active research and funding, 103, 104 critical needs analysis, 103, 105 management framework, 105–106 multiorganizational coordination, 105 research directions computing hardware, software, and research tools, 106 interdependencies between civil infrastructures, 106 modeling and analysis of disaster scenarios, mitigation plans, and response efforts, 107 sensing technologies for assessment, 106–107 scientific overview, 98 active research work and recent accomplishments, 98–99 effectiveness measures, 101–103 mitigation technologies and tactics, 99–101 Counter Mode cipher block chaining-message authentication code (CBC-MAC) Protocol (CCMP), 40 Cryptographic Module Validation Program (CVMP), 43 cryptography, 54, 55 data-centric networks IEEE 802.11 wireless local area networks, 40


data-centric networks (Continued) IEEE 802.16 wireless metropolitan area networks, 40 radio frequency identification technologies (RFID), 39 WiMAX security, 41 Data Encryption Standard (DES), 39 decontamination monitoring and Sensor Webs, 92–93, 94 denial of service, 13 Department of Defense (DOD), 44 Department of Homeland Security (DHS), 20, 44, 48, 72 Science and Technology Directorate, 45 deregulated oligopolies, 6–7 digital steganography, 58 electronic serial number (ESN), 39 Emergency Communications Preparedness Center (ECPC), 45 epsilon-security, 59 Federal Communications Commission (FCC), 42, 45 Federal Emergency Management Agency (FEMA), 45 Emergency Management Institute (EMI), 101 Federal Energy Regulatory Commission (FERC), 71 Federal Information Processing Standards (FIPS), 39 Federal Information Security Management Act (FISMA), 42, 43 flow failure analysis, 9 flow resilience, 9, 13–14 General Accounting Office, 71 GIF, 56 Global Positioning System (GPS), 89 Global Standards Collaboration, 43 governmental wireless communication initiatives, 44–45 Gramm-Leach-Bliley Act (1999), 42 hard signatures, 57, 58 Health Insurance Portability and Accountability Act (HIPAA), 42, 71 hidden information, covert channels, and information flows detection, 53 countermeasures, 57 detection, 57–58 disruption, 58 critical needs analysis, 60–61 research directions, 61–62 research trends, 58–60 development trends, 60 scientific overview, 54–55 information hiding, 55–56 Homeland Infrastructure Threat and Risk Analysis Center (HITRAC), 23

Homeland Security Presidential Directive (HSPD-12), 49 hub, 10, 11 attacks, 12, 13 and telecom hotels, 15 IEEE 802.11 wireless local area networks, 40 IEEE 802.16 wireless metropolitan area networks, 40 increasing returns, 4 informed embedding, 59 Institute for Telecommunication Sciences (ITS), 45 Institute of Electrical and Electronic Engineers (IEEE), 44 Interagency Working Group on Critical Information Infrastructure Protection, 74 interexchange carrier (IEC), 6, 7–8, 15 International Electrotechnical Commission (IEC), 44 International Information Hiding Workshop, 58 International Organization for Standardization (ISO), 44 International Organization for Standardization (ISO) Open System Interconnection (OSI), 42 layers, and corresponding security protocols, 47 International Telecommunications Union (ITU), 44 Internet-based security protocols, 46–48 Internet Engineering Task Force (ITEF), 44 Internet Protocol Multimedia Subsystem (IMS), 38, 41, 49 Internet Protocol Security (IPSec), 46 Internet Service Providers (ISPs), 20 intrusion detection system (IDS), 77–78 intrusion prevention systems, 46, 48 JPEG, 56, 59 key encryption keys (KEK), 39 key management facility (KMF), 39 key variable loader (KVL), 39 land mobile radio (LMR), 39 least significant bits (LSBs), 55, 56, 59 Lightweight Extensible Authentication Protocol (LEAP), 46 local area networks (LANs), 40 local exchange carriers (LECs), 6, 7–8, 15 matrix-embedding techniques, 59 medium access control (MAC), 40 mobile and wireless security landscape, 41–42 federal legislation and regulation, 42–43 federal standards and guidance publications, 43 governmental wireless communication initiatives, 44–45 industry standards and guidance organizations, 43–44 mobile device security, 49 model-based techniques, 59 Morse Code, 2


National Association of State Telecommunications Directors (NASTD), 38 National Communications System (NCS), 45 National Critical Infrastructure Protection Research and Development (NCIP R&D), 98 National Information Assurance Partnership (NIAP), 47 Common Criteria Evaluation and Validation Scheme (CCEVS), 46 National Infrastructure Protection Plan (NIPP), 20 National Institute of Standards and Technology (NIST), 39, 43, 101 National Protection and Programs Directorate (NPPD), 45 National Public Safety Telecommunications Council (NPSTC), 45 National Security Telecommunications Advisory Committee (NSTAC), 1, 15, 45 National Software Reference Library (NSRL) Project, 61 National Standards Policy Advisory Committee, 70 National Telecommunications and Information Administration (NTIA), 6, 45 network access points (NAP), 17 network analysis, 9 Networking and Information Technology Research and Development Program, 74 network neutrality, 3 networks resiliency, 9–10 betweenness, 10–11 hubs and clusters, 10 next-generation computing and communication networks, for cascading impacts reduction, 69 best practices, 70 cyber and control systems security standards in common use, 70–71 cascading failure, 75–76 cyber security threats and vulnerabilities, 73–75 guidelines, 70 intrusion detection and response technology, 77–78 legacy systems, 76–77 research direction, 78–79 security posture measurement and assessment risk assessment factors, 71–72 risk management, 72 security metrics, 73 standards, 69–70 Northeast Blackout (2003), 75 Office of Cybersecurity and Communications, 45 Office of Emergency Communications (OEC), 45 One Wilshire Boulevard, 15 over-the-air-rekeying (OTAR), 39 peering, 6–7, 16 personal digital assistants (PDAs), 49, 100 personal identity verification (PIV), 49


physical layer (PHY), 40 power law, 8, 12 preferential attachment effect, 5, 15 President’s Commission on Critical Infrastructure Protection (1998), 97 President’s Information Technology Advisory Committee (PITAC), 74 Program for Response Options and Technology Enhancements for Chemical/Biological Terrorism in Subways (PROTECTS), 100 Project 25 digital radio, 38–39 protection strategies, 19 background critical telecommunications infrastructure systems and assets, 22 historical perspective, 19–20 sector meaning, 20–21 sector securing, 21 U.S. policy on protecting national critical telecommunications services and computer-driven systems, 22–23 future prospects, 33–34 future research direction, 32–33 performance, reliability, and efficiency, 25, 32 sector challenges and continuous improvement, 25, 26–31 threat to America’s Internet networks, 24–25 to America’s telecommunications components, 23–24 to America’s telecommunications sector, 32 assessment, 23 public key infrastructure (PKI), 48, 49 public switched telephone network (PSTN), 3, 19, 20, 21, 37 radio frequency identification technologies (RFID), 39 random networks, 10, 11, 12 reference data set (RDS), 61 regulatory period, 5–6 Sandia National Laboratories, 100 Sarbanes-Oxley Act (2002), 42 scale-free network, 10, 11 Secure Mobile Environment Portable Electronic Device (SME PED), 45 Secure Multipurpose Internet Mail Extensions (S/MIME), 46 Secure Socket Layer (SSL), 46 self-organized criticality (SOC), 1, 2, 15–17 Sensor Web, 83 applied to field operations, 88–93, 94 pods, 85–86 properties, 86–88 protocols, 84–85 shadow keys. See key encryption keys (KEK) Sherman Antitrust Act (1890), 4


situational awareness, 83–84 future directions, 93 Sensor Web technology, 84 applied to field operations, 88–93, 94 pods, 85–86 properties, 86–88 protocols, 84–85 soft signatures, 57, 58 Software Defined Radios (SDRs), 47–48 spectral radius, of network, 12 steganalysis, 57 Steganalysis Research Center (SARC), 57 steganogram (stego-media), 55 steganography, 53, 55, 59–62 Stirmark tool, 58 structural integrity monitoring and Sensor Webs, 92 supervisory control and data acquisition (SCADA) systems, 75 Support Vector Machines (SVMs), 59–60 susceptible-infected-susceptible (SIS) epidemic, 12 system signatures, 57

to America’s telecommunications components, 23–24 to America’s telecommunications sector, 32 assessment, 23 Tier-1 ISP network, 16 traffic encryption keys (TEK), 39 Transmission Control Protocol (TCP)/IP, 49 Transport Layer Security (TLS), 46

Technical Support Working Group (TSWG), 101 telecom hotels, 15 telecommunication and critical infrastructure protection, 1 components of sector, 7–9 criticality, 14–15 hubs and telecom hotels, 15 self-organized criticality, 15–17 evolutionary forces shaping, of sector, 2–3 deregulated oligopolies, 6–7 regulatory period, 5–6 telecom war, 4–5 unregulated beginnings, 3–4 final analysis, 17 networks resiliency, 9–10 betweenness, 10–11 hubs and clusters overview, 1–2 resilience results, 11–12 cascade resiliency, 12–13 flow resiliency, 13–14 telecommunications Act 1934, 5 1996, 6, 15, 16 Telecommunications Industry Association (TIA), 39, 44 telecom war, 4–5 Temporal Key Integrity Protocol (TKIP), 40 third-generation (3G) cellular networks, 37, 38 Third Generation Partnership Project (3GPP), 38 threat to America’s Internet networks, 24–25

Western US Electric Power Grid, 13 wet chemistry techniques and Sensor Webs, 93 Wetstone Technologies, 57 Wi-Fi Alliance, 40 Wired Equivalent Privacy (WEP), 40 Wireless-Fidelity Protected Access (WPA), 40 wireless local area network (WLAN), 40 wireless security critical needs analysis, 45–46 Internet-based security protocols, 46–48 intrusion prevention systems, 46 mobile and wireless security landscape, 41–42 federal legislation and regulation, 42–43 federal standards and guidance publications, 43 governmental wireless communication initiatives, 44–45 industry standards and guidance organizations, 43–44 research directions, 48 Internet-based security protocols, 48 intrusion prevention systems, 48 mobile device security, 49 overlaying security services over IP -based access networks, 49 scientific overview, 37 data-centric networks, 39–41 voice-centric networks, 37–39 Worldwide Interoperability for Microwave Access (WiMAX), 40–41 security features usage aspects, 42

U. S. Patriot Act, 22 U.S. policy, on protecting national critical telecommunications services and computer-driven systems, 22–23 Universal Mobile Telecommunications System (UMTS), 38 Virtual Private Networks (VPN), 41 voice-centric networks, 37 network evolution path, 37–38 Project 25 digital radio, 38–39 voice privacy, 38 VoIP (voice over Internet protocol), 6

xDSL, 6