December 2019 
ComputingEdge


> Visualization > Robotics > Autonomous Vehicles > Edge Computing

DECEMBER 2019

www.computer.org

COMPSAC 2020
Madrid, Spain | July 13-17, 2020

COMPSAC is the IEEE Computer Society Signature Conference on Computers, Software and Applications. It is a major international forum for academia, industry, and government to discuss research results and advancements, emerging challenges, and future trends in computer and software technologies and applications. The theme of COMPSAC 2020 is "Driving Intelligent Transformation of the Digital World".

Staying relevant in a constantly evolving digital landscape is a challenge faced by researchers, developers, and producers in virtually every industry and area of study. Once limited to software-enabled devices, the ubiquity of digitally enabled systems makes this challenge a universal issue. Furthermore, as relevance fuels change, many influencers will offer solutions that benefit their own priorities. Fortunately, history has shown that the building blocks of digital change are forged by those conducting foundational research and development of digital systems and human interactions. Artificial intelligence is not new, but it is much more utilized in everyday computing now that data and processing resources are more economically viable, hence widely available. The opportunity to drive the use of this powerful tool in transforming the digital world is yours. Will your results help define the path ahead, or will you relegate those decisions to those with different priorities for utilizing intelligence in digital systems?

COMPSAC has been and continues to be a highly respected venue for the dissemination of key research on computer and software systems and applications, and it has influenced fundamental developments in these fields for over 40 years. COMPSAC 2020 is your opportunity to add your mark to this ongoing journey, and we highly encourage your submission!

COMPSAC 2020, organized as a tightly integrated union of symposia, will focus on technical aspects of issues relevant to intelligent transformation of the digital world. The technical program will include keynote addresses, research papers, industrial case studies, fast abstracts, a doctoral symposium, poster sessions, and workshops and tutorials on emerging and important topics related to the conference theme. Highlights of the conference will include plenary and specialized panels that will address the technical challenges facing researchers and practitioners who are driving fundamental changes in intelligent systems and applications. Panels will also address cultural and societal challenges for a society whose members must continue to learn to live, work, and play in the environments the technologies produce.

Authors are invited to submit original, unpublished research work, as well as industrial practice reports. Simultaneous submission to other publication venues is not permitted except as highlighted in the COMPSAC 2020 J1C2 & C1J2 program. All submissions must adhere to IEEE Publishing Policies, and all will be vetted through the IEEE CrossCheck portal. Further information is available at www.compsac.org.

Organizers
Standing Committee Chair: Sorel Reisman (California State University, USA)
Steering Committee Chair: Sheikh Iqbal Ahamed (Marquette University, USA)
General Chairs: Mohammad Zulkernine (Queen's University, Canada), Edmundo Tovar (Universidad Politécnica de Madrid, Spain), Hironori Kasahara (Waseda University, Japan)
Program Chairs in Chief: W. K. Chan (City University of Hong Kong, Hong Kong), Bill Claycomb (Carnegie Mellon University, USA), Hiroki Takakura (National Institute of Informatics, Japan)
Workshop Chairs: Ji-Jiang Yang (Tsinghua University, China), Yuuichi Teranishi (National Institute of Information and Communications Technology, Japan), Dave Towey (University of Nottingham Ningbo China, China), Sergio Segura (University of Seville, Spain)
Local Chairs: Sergio Martin (UNED, Spain), Manuel Castro (UNED, Spain)

Important Dates
Workshop proposals due: 15 November 2019
Workshop acceptance notification: 15 December 2019
Main conference papers due: 20 January 2020
Paper notification: 3 April 2020
Workshop papers due: 9 April 2020
Workshop paper notifications: 1 May 2020
Camera-ready and registration due: 15 May 2020

Photo: King Felipe III in Major Square, Madrid Photo credit: Iria Castro - Photographer (Instagram @iriacastrophoto)

IEEE COMPUTER SOCIETY computer.org • +1 714 821 8380

STAFF

Editor: Cathy Martin

Publications Portfolio Managers: Carrie Clark, Kimberly Sperka

Publications Operations Project Specialist: Christine Anthony

Publisher: Robin Baldwin

Production & Design: Carmen Flores-Garvey

Senior Advertising Coordinator: Debbie Sims

Circulation: ComputingEdge (ISSN 2469-7087) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 2001 L Street NW, Suite 700, Washington, DC 20036. Postmaster: Send address changes to ComputingEdge, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Printed in USA.

Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author's or firm's opinion. Inclusion in ComputingEdge does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.

Reuse Rights and Reprint Permissions: Educational or personal use of this material is permitted without fee, provided such use: 1) is not made for profit; 2) includes this notice and a full citation to the original work on the first page of the copy; and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of IEEE-copyrighted material on their own web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version which has been revised by the author to incorporate review suggestions, but not the published version with copy-editing, proofreading, and formatting added by IEEE. For more information, please go to: http://www.ieee.org/publications_standards/publications/rights/paperversionpolicy.html. Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ 08854-4141 or [email protected]. Copyright © 2019 IEEE. All rights reserved.

Abstracting and Library Use: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

Unsubscribe: If you no longer wish to receive this ComputingEdge mailing, please email IEEE Computer Society Customer Service at [email protected] and type "unsubscribe ComputingEdge" in your subject line.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

IEEE Computer Society Magazine Editors in Chief

Computer: David Alan Grier (Interim), Djaghe LLC
IEEE Software: Ipek Ozkaya, Software Engineering Institute
IEEE Internet Computing: George Pallis, University of Cyprus
IT Professional: Irena Bojanova, NIST
IEEE Security & Privacy: David Nicol, University of Illinois at Urbana-Champaign
IEEE Micro: Lizy Kurian John, University of Texas, Austin
IEEE Computer Graphics and Applications: Torsten Möller, University of Vienna
IEEE Pervasive Computing: Marc Langheinrich, University of Lugano
Computing in Science & Engineering: Jim X. Chen, George Mason University
IEEE Intelligent Systems: V.S. Subrahmanian, Dartmouth College
IEEE MultiMedia: Shu-Ching Chen, Florida International University
IEEE Annals of the History of Computing: Gerardo Con Diaz, University of California, Davis

DECEMBER 2019 • VOLUME 5, NUMBER 12


Visualization

8 Exploranation: A New Science Communication Paradigm

ANDERS YNNERMAN, JONAS LÖWGREN, AND LENA TIBELL

16 Advances in Visualization Recommender Systems KLAUS MUELLER

Robotics

18 Privacy and Civilian Drone Use: The Need for Further Regulation

STEPHANIE WINKLER, SHERALI ZEADALLY, AND KATRINE EVANS

27 The Robotization of Extreme Automation: The Balance between Fear and Courage

JINAN FIAIDHI, SABAH MOHAMMED, AND SAMI MOHAMMED

Autonomous Vehicles

34 Vehicle Telematics: The Good, Bad, and Ugly HAL BERGHEL

39 Can We Trust Smart Cameras? BERNHARD RINNER

Edge Computing

43 A Disaggregated Cloud Architecture for Edge Computing

RAFAEL MORENO-VOZMEDIANO, EDUARDO HUEDO, RUBEN S. MONTERO, AND IGNACIO M. LLORENTE

49 Oscillation VINTON G. CERF


Departments

4 Magazine Roundup
7 Editor's Note: Engaging and Effective Visualizations
72 Conference Calendar


Subscribe to ComputingEdge for free at www.computer.org/computingedge.

CS FOCUS

Magazine Roundup

The IEEE Computer Society's lineup of 12 peer-reviewed technical magazines covers cutting-edge topics ranging from software design and computer graphics to Internet computing and security, from scientific applications and machine intelligence to visualization and microchip design. Here are highlights from recent issues.

Computer
Creating a Resilient IoT with Edge Computing

Denial-of-service (DoS) attacks on the safety-critical Internet of Things (IoT) can lead to life-threatening consequences, and the risk of these attacks is increasing. The authors of this article from the August 2019 issue of Computer propose levels of context awareness to address availability threats and illustrate how context-aware edge computing enhances the IoT's resilience to DoS attacks through their edge-computing-based security solution.

Computing in Science & Engineering
Toward a Methodology and Framework for Workflow-Driven Team Science

Scientific workflows are powerful tools for the management of scalable experiments, often composed of complex tasks running on distributed resources. Existing cyberinfrastructure provides components that can be utilized within repeatable workflows. However, data and computing advances continuously change the way scientific workflows get developed and executed, pushing scientific activity to be more data-driven, heterogeneous, and collaborative. Workflow development today depends on the effective collaboration and communication of a cross-disciplinary team, not only with humans but also with analytical systems and infrastructure. This article from the July/August 2019 issue of Computing in Science & Engineering presents a collaboration-centered reference architecture to extend workflow systems with dynamic, predictable, and programmable interfaces to systems and infrastructure while bridging the exploratory and scalable activities in the scientific process. The authors present a conceptual design toward the development of methodologies and services for effective workflow-driven collaborations, namely the PPoDS methodology for collaborative workflow development and the SmartFlows services for smart execution in a rapidly evolving cyberinfrastructure ecosystem.

IEEE Annals of the History of Computing
TeX: A Branch in Desktop Publishing Evolution, Part 2

Donald Knuth began the development of TeX in 1977 and had an initial version running in 1978, with the aim of typesetting mathematical documents with the highest quality and with archival reproducibility far into the future. Its usage spread, and today the TeX system remains in use by a vibrant user community. However, the world of TeX has always been somewhat out of sync with the commercial desktop publishing systems that began to come out in the mid-1980s and are now ubiquitous in the worlds of publishing and printing. Read more in the April–June 2019 issue of IEEE Annals of the History of Computing.

IEEE Computer Graphics and Applications
Flow Field Reduction via Reconstructing Vector Data from 3D Streamlines Using Deep Learning

The authors of this article from the July/August 2019 issue of IEEE Computer Graphics and Applications present a new approach for streamline-based flow field representation and reduction. Their method can work in the in situ visualization setting by tracing streamlines from each time step of the simulation and storing compressed streamlines for post hoc analysis, where users can afford longer reconstruction time for higher reconstruction quality using decompressed streamlines. At the heart of the approach is a deep-learning method for vector field reconstruction that takes the streamlines traced from the original vector fields as input and applies a two-stage process to reconstruct high-quality vector fields. To demonstrate the effectiveness of the approach, the authors show qualitative and quantitative results with several datasets and compare the method against the de facto method of gradient vector flow in terms of speed and quality tradeoff.

IEEE Intelligent Systems
Sentiment and Sarcasm Classification with Multitask Learning

Sentiment classification and sarcasm detection are both important natural language processing tasks. Sentiment is often coupled with sarcasm, where intense emotion is expressed. Nevertheless, most literature considers them as two separate tasks. The authors of this article from the May/June 2019 issue of IEEE Intelligent Systems argue that knowledge in sarcasm detection can also be beneficial to sentiment classification and vice versa. They show that these two tasks are correlated, and they present a multitask learning framework using a deep neural network that models this correlation to improve the performance of both tasks. Their method outperforms the state of the art by 3–4% on the benchmark dataset.

IEEE Internet Computing
Integration of Wireless Sensor Networks and Smart UAVs for Precision Viticulture

Precision viticulture (PV) aims to improve grapevine production efficiency, quality, and profitability, while reducing the environmental impact. The promises of PV are realized only if large areas are monitored with high spatial and temporal resolutions. This article from the May/June 2019 issue of IEEE Internet Computing considers the integration of a wireless sensor network and a smart unmanned aerial vehicle platform. To this end, local variations of factors that influence grape yield and quality are measured, and site-specific management practices are applied. This approach achieves real-time, uninterrupted monitoring of the vine growth environment, as well as on-demand imaging and high-resolution data collection from any specific location, thereby optimizing production efficiencies and the application of inputs in a cost-effective way.

IEEE Micro
Leveraging Cache Management Hardware for Practical Defense against Cache Timing Channel Attacks

Sensitive information leakage through shared hardware structures is becoming a growing security concern. In this article from the July/August 2019 issue of IEEE Micro, the authors propose a practical protection framework against cache timing channel attacks by leveraging commercial off-the-shelf hardware support in last-level caches for cache monitoring and partitioning.

IEEE MultiMedia
Compact Descriptors for Video Analysis: The Emerging MPEG Standard

This article from the April–June 2019 issue of IEEE MultiMedia provides an overview of the ongoing Compact Descriptors for Video Analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG). MPEG-CDVA aims to define a standardized bitstream syntax to enable interoperability in the context of video analysis applications. During the development of MPEG-CDVA, a series of techniques aiming to reduce the descriptor size and improve the video representation ability have been proposed. This article describes the new standard that is being developed and reports the performance of these key technical contributions.

IEEE Pervasive Computing
Accelerating Conversational Agents Built with Off-the-Shelf Modularized Services

Today's common practice in developing conversational agents is pipelining off-the-shelf modularized services as ready-made building blocks. However, the discrete and sequential nature of the modules yields long response latency. This article from the April–June 2019 issue of IEEE Pervasive Computing introduces Sci-Fii, a speculative inference framework for accelerating conversational agent systems built with off-the-shelf modules.

IEEE Security & Privacy
Applied Digital Threat Modeling: It Works

Threat modeling, a structured process for identifying risks and developing mitigation strategies, has never been systematically evaluated in a real environment. This article from the July/August 2019 issue of IEEE Security & Privacy provides a case study at the New York City Cyber Command—the primary digital defense organization for the most populous city in the United States—which found tangible benefits.

IEEE Software
Automated Testing of Simulation Software in the Aviation Industry: An Experience Report

An industry-academia collaboration developed a test automation framework for aviation simulation software. The technology has been successfully deployed in several test teams. Read more in the July/August 2019 issue of IEEE Software.

IT Professional
Using Blockchain to Rein in the New Post-Truth World and Check the Spread of Fake News

In recent years, "fake news" has become a global issue that raises unprecedented challenges for human society and democracy. This problem has arisen due to the emergence of various concomitant phenomena, such as 1) the digitization of human life and the ease of disseminating news through social networking applications (such as Facebook and WhatsApp); 2) the availability of "big data" that allows customization of news feeds and the creation of polarized so-called "filter bubbles"; and 3) the rapid progress made by generative machine-learning and deep-learning algorithms in creating realistic-looking yet fake digital content (such as text, images, and videos). There is a crucial need to combat the rampant rise of fake news and disinformation. In this article from the July/August 2019 issue of IT Professional, the authors provide a high-level overview of a blockchain-based framework for fake news prevention and highlight the various design issues and considerations of such a framework.


EDITOR’S NOTE

Engaging and Effective Visualizations

In myriad industries and fields, visualization is a way to analyze data and disseminate insights. For a visualization to convey information successfully, it must be clear, user-friendly, and visually appealing. This issue of ComputingEdge shares methods and recommendations for creating visualizations that make an impact on audiences.

IEEE Computer Graphics and Applications' "Exploranation: A New Science Communication Paradigm" provides examples of popular museum installations that make use of data and visualization software from research (exploratory visualization) to educate the public about scientific topics (explanatory visualization). The authors also present design principles for compelling interactive visualizations. Computer's "Advances in Visualization Recommender Systems" describes a tool that aims to help people decide on the best graphical design for their data and analytical goals.

Another technology that elicits reactions from the public is robotics. The increasing presence of robots in our lives fosters new concerns, with privacy and job loss being chief among them. In IEEE Security & Privacy's "Privacy and Civilian Drone Use: The Need for Further Regulation," the authors advocate for more protection against drone surveillance and intrusion. IT Professional's "The Robotization of Extreme Automation: The Balance between Fear and Courage" argues that we should embrace the benefits that robots bring to manufacturing while working to mitigate job loss.

Autonomous vehicles also earn their share of concerns. Computer's "Vehicle Telematics: The Good, Bad, and Ugly" discusses security and privacy vulnerabilities in most high-tech cars. Also from Computer, "Can We Trust Smart Cameras?" questions whether the cameras in self-driving cars are reliable enough for real traffic scenarios.

Finally, two IEEE Internet Computing articles cover different aspects of edge computing. "A Disaggregated Cloud Architecture for Edge Computing" proposes a model that manages both on-premises cloud infrastructure and third-party dispersed edge nodes, and "Oscillation" highlights the evolution of edge computing and predicts where it will lead.


DEPARTMENT: Education

Exploranation: A New Science Communication Paradigm

Anders Ynnerman, Linköping University
Jonas Löwgren, Linköping University
Lena Tibell, Linköping University

Editors: Beatriz Sousa Santos, [email protected]; Ginger Alford, [email protected]

Science communication is facing a paradigm shift, based on the convergence of exploratory and explanatory visualization. In this article, we coin the term exploranation to denote the way in which visualization methods from scientific exploration can be used to communicate results and how methods in explanatory visualization can enrich exploration.

The visual language spans borders of knowledge, experience, age, gender, and culture. This arguably makes visualization the most effective form of expression in science communication. Visual science communication is facing a paradigm shift caused by the rapid pace of digitalization and widespread availability of visual data analysis tools and computing resources. Until now there has been a clear division between visualization enabling effective data analysis leading to scientific discovery (exploratory visualization) and visual representations used to explain and communicate science to a general audience (explanatory visualization). To denote the confluence of exploration and explanation, we coin the neologism exploranation, as a combination of the two terms. The underpinning trend behind the paradigm shift is based on the emerging strong synergy between exploratory and explanatory visualization driven by two connected facts:

• Science communication can now be directly based on real, large-scale and/or complex scientific data—i.e., the same data that researchers have gathered and explored for scientific discovery.
• Advanced visualization and interaction methodology has reached a high level of maturity and can be used on standard GPUs, making it possible to use the same visualization approaches in both exploratory and explanatory visualization.

The use of visual exploranation approaches in science communication is still in its infancy. At the Norrköping Visualization Center in Sweden, a center with research units and public facilities with 150,000 visitors per year, we have, over the past decade, worked on developing a range of visualization experiences for the general public and in particular for schoolchildren. To do this, we've been using various technical platforms and storytelling approaches in interactive data-driven visualization.

It is through this work that we have identified the disruptive change in science communication that we are facing, and we have started to systematically approach the challenges involved in bringing interactive data visualization to the public. These challenges reach into the foundation of not only technical areas such as data access and representations, real-time data analysis, visual representations, and rendering and interaction paradigms, but also research areas such as storytelling in interactive media and design for visual learning. On a fundamental level, there is also a need to develop a theory and understanding of how visual literacy is established in interactive exploration. Additionally, we need ways to measure user engagement or involvement and the corresponding effect on learning outcomes in exploranation scenarios.

On a general note, it can be added that over the past decades, visualization has proven itself to be a highly versatile tool, being deployed in a range of applications spanning academic and commercial domains, enabling scientific breakthroughs, improving industrial workflows, creating new business models, etc. However, one question often asked is why the visualization research community has not been more actively involved in the development of some of the most widespread visualization services, such as Google Earth, services that reach a large-scale general public. One of the answers to this question may be found in the emphasis visualization research has had on exploratory visualization used by domain experts. It is our hope that the convergence of exploratory and explanatory approaches will lead to the increased and widespread impact of interactive data visualization approaches.

In this article we feature three scenarios, providing examples and showcasing different flavors of how we have worked with exploranation applications. We then conclude by providing a set of empirically verified design recommendations and a summary from the visual-learning perspective.

EXPLORANATION SCENARIOS

One of the underpinning strategies in our work to bring interactive data visualization to public spaces is based on the need to produce functioning installations that are robust enough to endure interactions with large numbers of visitors per year. This entails going the extra mile in terms of robustness, reliability, and extracting feedback from user experiences of the software. We have seen this as an enabling factor in gaining insights into the special demands that science communication places on interactive data visualization.

From Medical Researchers to Visitors at the British Museum

Medical-imaging modalities are generating increasingly large amounts of data. In a single full-body scan from a computerized-tomography scanner, approximately 20,000 high-resolution data slices can be generated in just a few seconds. Interactive direct volume rendering (DVR) of this data has become one of the main analysis tools. In our research on DVR, we developed a range of methods to deal with data sizes1 and methods to improve the quality and speed of volumetric lighting.2 We also implemented our software on large-scale multitouch tables.3 All this work was targeting medical-domain experts, clinicians, and medical students. However, we soon realized that the interactive exploration of medical data on touch tables raised a tremendous amount of interest among the general public and, in particular, visitors to the Visualization Center. At the Center, we had placed one touch table with sample datasets and a simplified interface with presets and limited interaction, together with a narrative evolving with the help of information hotspots.

In essence, this insight was the starting point for the concept of exploranation—that the same data and underlying methods can be used both by experts in professional tasks and by lay audiences in public settings. Interacting at a large touch table affords the formation of temporary social structures around objects of interest. Interfaces combining overall narrative structures with scaffolded, self-directed examination facilitate the integration of explanatory and exploratory modes of inquiry (more on this to follow). Formal evaluations of our public installations in museums and science centers show that audience engagement is consistently very high. More importantly, the installation of touch tables to explore the insides of interesting artifacts in museums (for an example, see Figure 1) in fact increases the proportion of visitors studying the actual artifacts. For a comprehensive example, refer to our work with the Gebelein Man mummy at the British Museum.4

Figure 1. Children at the Mediterranean Museum in Stockholm using our visualization table to interactively explore a data visualization of an ancient Egyptian mummy.

Perceiving the Forces in the Microcosmos

For 30 to 40 years, visualization has been one of the most common communication methods in cell- and molecular-biology research to analyze large amounts of data—for example, the coordinates of thousands of atoms in large dynamic molecules and the rate constants for their interactions. The objects are very small (nm), their shapes do not have a direct resemblance to macroscopic visual metaphors, and the underlying processes are complex. Interactions at the molecular level are of fundamental importance for all aspects of life and involve the participation of many weak forces. These interactions are dynamic, complex, and stochastic in nature but lead to highly regulated processes. In fact, they are highly counterintuitive,5,6 especially when described in text.

To explore the added value of multimodal interaction in visual learning and to enable improved understanding of molecular interactions, we created a haptic application named MolDock (see Figure 2). Molecular coordinates for a protein are imported from the Protein Data Bank (PDB files), and a 3D model of a protein and a corresponding ligand is created. The protein–ligand interaction is then mapped to force feedback on the haptic device, using our toolkit for proxy-based volumetric haptic rendering.7 The application is implemented on a Phantom device with collocated stereoscopic visual rendering.
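To give a rough feel for this kind of mapping, the sketch below derives a feedback force from a Lennard-Jones-style potential between a ligand probe and protein atoms. This is an illustrative stand-in under invented coordinates and parameters, not the MolDock force model or the proxy-based rendering toolkit cited above:

```python
import numpy as np

# Invented stand-in coordinates; a real application would parse them
# from a PDB file (e.g., with Biopython) as described in the article.
protein_atoms = np.random.rand(500, 3) * 5.0   # positions, nm
ligand_probe = np.array([2.5, 2.5, 2.5])       # haptic proxy position

def haptic_force(probe, atoms, epsilon=1.0, sigma=0.35, f_max=3.0):
    """Map a probe position to a clamped feedback force vector.

    Sums the gradient of a Lennard-Jones-style potential over all
    atoms: repulsive up close, weakly attractive farther out.
    """
    d = probe - atoms                     # vectors from atoms to probe
    r = np.linalg.norm(d, axis=1)         # distances to each atom
    r = np.clip(r, 0.05, None)            # avoid the r -> 0 singularity
    # -dV/dr for V = 4*eps*((sigma/r)**12 - (sigma/r)**6)
    mag = 24 * epsilon * (2 * (sigma / r)**12 - (sigma / r)**6) / r
    force = ((d / r[:, None]) * mag[:, None]).sum(axis=0)
    norm = np.linalg.norm(force)
    if norm > f_max:                      # clamp to a safe device range
        force *= f_max / norm
    return force

# A haptic loop (often ~1 kHz) would send this vector to the device API.
print(haptic_force(ligand_probe, protein_atoms))
```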

Figure 2. Haptic exploration of protein docking. The docking “force” is perceived through the haptic interaction and is collocated with visual stereoscopic images of the involved molecules.


This application was tested in several learning environments and at the Visualization Center. Through extensive evaluation, it was concluded that the haptic system helped students to learn more about the process of protein–ligand recognition. It also changed the way they reasoned about molecules to include more force-based explanations. When the system was used by upper secondary students and the audience at the Visualization Center, they seemed to grasp the principles of the process. From an information-processing standpoint, visual and haptic coordination was seen to offload the visual pathway by placing less strain on visual working memory. From an embodied-cognition perspective, visual and tactile sensorimotor interactions may promote the construction of knowledge. The system may also have prevented students from drawing erroneous conclusions about the process. The results have cognitive and practical implications for the use of multimodal VR technologies in educational contexts.8,9

Bringing an Audience to Space

Following a knowledgeable guide on an interactive exploration of space is the dominant form of planetarium shows, from historical planetariums with analog "star ball" projectors to the introduction of digital planetariums in the 1990s. The development of computer graphics and visualization has paved the way for interactive and immersive experiences. An early example of such software was Uniview,10 which has been developed since 2003 and is now widely used in the planetarium community.

A more recent example is OpenSpace,11 a NASA-funded initiative in open source interactive data visualization software designed to visualize the entire known universe and portray our ongoing efforts to investigate the cosmos. It supports interactive, contextualized presentations of dynamic data from observations, simulations, and space mission planning and operations. The versatility of OpenSpace has been used to create science communication events such as the one related to the passage of Pluto by the New Horizons spacecraft in 2015 (see Figure 3). In a live "domecast," 11 sites were connected. Visualization of the position and workings of the spacecraft was produced using OpenSpace, effectively letting the audience fly with the spacecraft toward Pluto. At the same time, mission experts were connected using videoconferencing to discuss the workings of the instruments, creating a "Science Live" experience. Another recent feature of OpenSpace is the ability to visualize planetary surfaces with extreme resolution. It is now possible to bring the audience to the surface of Mars in interactive exploration of the topography at 25-cm resolution.12

Figure 3. OpenSpace was used to visualize how the spacecraft New Horizons captured images of Pluto on 14 June 2015. In a live event, sites from all over the world were connected. The image shows the New York site with host Neil deGrasse Tyson.

With OpenSpace, our work has progressed in a new direction of mediated exploranation in large-scale immersive environments. We have found that the levels of audience engagement can be remarkably high, due mainly to the immediacy of the mediated, nonlinear exploration of more or less real-time data (as opposed to a preproduced movie). However, this requires significant skills and knowledge on behalf of the guide navigating space for the audience.


The OpenSpace collaboration will progressively include more features and data. In addition to advanced navigation interfaces, our current research agenda comprises implementation of tools to analyze data from the ongoing Gaia mission and improved support for dynamic, large-scale simulation data such as space weather and galaxy formation.

DESIGN PRINCIPLES FOR EXPLORANATION

Our work on exploranation in various settings has been largely explorative in itself. We have been feeling our way, designing visualizations and interactions based on a general sense of the data and the audience, identifying and elaborating the most successful ideas. In this kind of extended process, experience accumulates over time and eventually forms the basis for articulation and abstraction. This section summarizes our main insights in four design principles for exploranation that may be of value for other designers, engineers, and storytellers involved in science communication.

In interaction design in general, the task of designing explanations entails knowing the topic, the audience, and the available media. The responsibility lies with the designer to construct an explanation that fits the communicative situation. Designing for exploration, on the other hand, is a matter of setting the stage and providing the props for users to explore the topic at hand. The designer exerts indirect influence by defining the extent of the subject matter to be explored and the properties of the exploration tools provided. Designing for exploranation, where explanation and exploration converge, becomes largely a question of accommodating mixed-initiative interaction, as the following four principles demonstrate.

Interleaving Explorative Microenvironments and Signposted Narratives

An overall structure we find conducive to exploranation includes making curiosity-arousing parts of the data available for the users' self-directed exploration. Within those parts, annotations and other techniques are used to provide local explanatory information, also under the users' control. The explorative parts are linked together into an overall narrative structure, which is crafted in advance to tell a compelling story.

In our interactive multitouch-table productions, for example, annotation callouts are used to focus the user's attention on the parts of the 3D volume that are the most rewarding ones for deeper exploration. On a higher narrative level, different sections of the interactive experience are presented within an overall structure where explanatory information is presented in a navigation interface to convey the big picture.

The guided space explorations exhibit similar narrative structures combining big-picture views (in a very literal sense!) and sweeping movement over great distances with close-up examinations of particular points of interest, such as the most detailed images of the Mars surface to ever be shown to the public. The framing of the presentation as a live event under the control of a knowledgeable guide makes the combination of big stories and in-depth examinations significantly more engaging than a prerecorded movie using the same narrative structure. From the audience point of view, a skilled performer can make the experience seem remarkably explorative even though there is no actual control of navigation or focus of attention.

Constraining Explorative Interactions while Leveraging Pliability

In each explorative microenvironment, the explorative tools are carefully constrained to convey a sense of expressiveness and agency to the users while at the same time ensuring that views resulting from manipulations are never disappointing or difficult to interpret. The interaction is highly responsive, with a tight connection between user actions and resulting outputs, thus forming a pliable user experience that is known to facilitate explorative behavior in interactive visualizations.13


The tools to rotate, crop, and dynamically light the 3D volume in our interactive multitouch-table productions illustrate the combination of productive constraints and pliability. They do not offer complete freedom of manipulating orientation and perspective, but rather nudge the user toward staying within visually interesting views of the most topical content. The MolDock example illustrates the didactic benefits of extending this principle to multimodal interaction. Similarly, the need for semantic-level navigation tools in the guided space explorations instantiates the principle of productive interaction constraints in an immersive environment (more on this later).

Foregrounding the Topic through Perceptual Layering

Exploranation is facilitated by keeping the object of interest in focus in a very concrete sense. Exploration tool palettes are relegated to the visual periphery. Annotations and other complementary information are visually subordinated to the topical objects. Icons and other parts of the supporting interface employ a subdued form and color scheme. This is a general principle in interaction and information design, and it is used consistently in the examples provided above.

Supporting Performative Interaction

Engaging with an interactive installation in a public space entails the tacit taking on of different roles by audience members. These roles can include a performer navigating the interaction, a number of viewers engaging in dialogue with the performer, and a number of bystanders gaining serendipitous insights and balancing between engaging more deeply and moving on. This is quite different from designing for generic multiuser interaction.14

In the case of interactive DVR, the interaction paradigm employed for the interactive touch tables epitomizes this principle with tools enabling an impromptu performer to take control through highly visible physical actions. It is commonplace in museums and galleries to observe a hotspot of interest and energy forming around one of our touch tables, where a visitor is exploring a dataset in the manner of performing for his or her group. The visibility and social inclusiveness of the explorative actions invite strangers into a temporary social structure forming around objects of mutual interest.

In the guided space explorations, the role of the guide is performative in a more conventional and formalized sense, taking the audience through the story. What is more, the visually immersive environment poses unique requirements on the performer, in the visceral sense of creating a pleasant trajectory through space. Camera movement and other interaction needs to be slow, smooth, and steady in order to avoid motion sickness and other discomfort. More research is needed to create semantic-level "virtual pilots" supporting the performer in such settings.

CONCLUSION

Exploranation methodology is in its infancy, and further research is needed on technology, communicative strategies, and application domains to understand the possibilities and limitations. From a learning perspective, data-driven interactive visualization is a relatively new medium, with opportunities as well as challenges. It poses new and different requirements for the visualization tools and for how to guide the experience. In both cases, the aim is to instill in the user or audience a sense of curiosity and a desire to understand, while considering the diverse levels of previous knowledge in heterogeneous audiences ranging from schoolchildren, to the general public, to domain experts.

It is essential to provide not only interactive modes of self-directed exploration but also intuitive and correct feedback. This requires adaptive interfaces as well as knowledgeable and sensitive guidance.15–17 However, it is the story and how it is told that make the difference. Furthermore, among the skills needed for the guidance are the ability to simultaneously handle a wide range of preconceptions, adaptive storytelling skills, and knowledge of specific design principles for education, without causing cognitive overload. This article points to the need for research to meet these challenges—i.e., to integrate the cognitive and sociocultural perspectives and accommodate a diverse group of users in learning through exploranation.


ACKNOWLEDGMENTS

The authors wish to thank the Swedish eScience Research Center (SeRC) and NASA for support of the OpenSpace project. The support of the Swedish Research Council for funding of underpinning research is gratefully acknowledged. The Knut and Alice Wallenberg Foundation is supporting the WISDOME project through a generous donation for development of dome productions and technology. The authors would also like to acknowledge the excellent support and contributions from the staff at the Norrköping Visualization Center C and the division for Media and Information Technology at Linköping University.

REFERENCES

1. C. Lundström, P. Ljung, and A. Ynnerman, "Local Histograms for Design of Transfer Functions in Direct Volume Rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 6, 2006, pp. 1570–1579.
2. E. Sundén, A. Ynnerman, and T. Ropinski, "Image Plane Sweep Volume Illumination," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, 2011, pp. 2125–2134.
3. C. Lundström et al., "Multi-Touch Table System for Medical Visualization: Application to Orthopedic Surgery Planning," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, 2011, pp. 1775–1784.
4. A. Ynnerman et al., "Interactive Visualization of 3D Scanned Mummies at Public Venues," Communications of the ACM, vol. 59, no. 12, 2016, pp. 72–81.
5. K. Scalise et al., "Student Learning in Science Simulations: Design Features That Promote Learning Gains," Journal of Research in Science Teaching, vol. 48, no. 9, 2011, pp. 1050–1078.
6. M.J. Jacobson, "Problem Solving, Cognition, and Complex Systems: Differences between Experts and Novices," Complexity, vol. 6, no. 3, 2001, pp. 41–49.
7. K. Lundin et al., "Enabling Design and Interactive Selection of Haptic Modes," Virtual Reality, vol. 11, no. 1, 2007, pp. 1–13.
8. P. Bivall, S. Ainsworth, and L.A.E. Tibell, "Do Haptic Representations Help Complex Molecular Learning?," Science Education, vol. 95, no. 4, 2011, pp. 700–719.
9. K.J. Schönborn, P. Bivall, and L.A.E. Tibell, "Exploring Relationships between Students' Interaction and Learning with a Haptic Virtual Biomolecular Model," Computers & Education, vol. 57, no. 3, 2011, pp. 2095–2105.
10. S. Klashed et al., "Uniview - Visualizing the Universe," Eurographics 2010 - Areas Papers, 2010, pp. 37–43.
11. A. Bock et al., "OpenSpace: An Open-Source Astrovisualization Framework," Journal of Open Source Software, vol. 2, no. 15, 2017, p. 281.
12. K. Bladin et al., "Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, 2018, pp. 802–811.
13. J. Löwgren, "Pliability as an Experiential Quality: Exploring the Aesthetics of Interaction Design," Artifact, vol. 1, no. 2, 2007, pp. 85–95.
14. P. Dalsgaard and L.K. Hansen, "Performing Perception: Staging Aesthetics of Interaction," ACM Transactions on Computer-Human Interaction, vol. 15, no. 3, 2008, p. 13.
15. D. Clark, E. Tanner-Smith, and S. Killingsworth, "Digital Games, Design, and Learning: A Systematic Review and Meta-Analysis," Review of Educational Research, vol. 86, no. 1, 2016, pp. 79–122.
16. C. D'Angelo et al., "Simulations for STEM Learning: Systematic Review and Meta-Analysis," SRI International, 2014.
17. C. Zahn, "Digital Design and Learning: Cognitive-Constructivist Perspectives," in The Psychology of Digital Learning, S. Schwan and U. Cress, eds., Springer, 2017.


ABOUT THE AUTHORS

Anders Ynnerman holds the chair in scientific visualization at Linköping University, is the director of the Norrköping Visualization Center C, and is a research professor at the University of Utah. Contact him at [email protected].

Jonas Löwgren is professor of interaction and information design at Linköping University. Contact him at [email protected].

Lena Tibell is a professor of biochemistry and visual learning and communication at Linköping University. Contact her at [email protected].

Contact department editor Beatriz Sousa Santos at [email protected] and department editor Ginger Alford at [email protected].

This article originally appeared in IEEE Computer Graphics and Applications, vol. 38, no. 3, 2018.

PURPOSE: The IEEE Computer Society is the world's largest association of computing professionals and is the leading provider of technical information in the field.

MEMBERSHIP: Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field.

OMBUDSMAN: Email [email protected].

COMPUTER SOCIETY WEBSITE: www.computer.org

2020 BOARD OF GOVERNORS MEETING 22 – 23 January: Costa Mesa, California

EXECUTIVE STAFF Executive Director: Melissa A. Russell; Director, Governance & Associate Executive Director: Anne Marie Kelly; Director, Finance & Accounting: Sunny Hwang; Director, Information Technology & Services: Sumit Kacker; Director, Marketing & Sales: Michelle Tubb; Director, Membership Development: Eric Berkowitz

COMPUTER SOCIETY OFFICES Washington, D.C.: 2001 L St., Ste. 700, Washington, D.C. 20036-4928; Phone: +1 202 371 0101; Fax: +1 202 728 9614; Email: [email protected] Los Alamitos: 10662 Los Vaqueros Cir., Los Alamitos, CA 90720; Phone: +1 714 821 8380; Email: [email protected]

EXECUTIVE COMMITTEE President: Cecilia Metra; President-Elect: Leila De Floriani; Past President: Hironori Kasahara; First VP: Forrest Shull; Second VP: Avi Mendelson; Secretary: David Lomet; Treasurer: Dimitrios Serpanos; VP, Member & Geographic Activities: Yervant Zorian; VP, Professional & Educational Activities: Kunio Uchiyama; VP, Publications: Fabrizio Lombardi; VP, Standards Activities: Riccardo Mariani; VP, Technical & Conference Activities: William D. Gropp; 2018–2019 IEEE Division V Director: John W. Walz; 2019 IEEE Division V Director Elect: Thomas M. Conte; 2019–2020 IEEE Division VIII Director: Elizabeth L. Burd

MEMBERSHIP & PUBLICATION ORDERS: Phone: +1 800 678 4333; Fax: +1 714 821 4641; Email: [email protected]

BOARD OF GOVERNORS
Term Expiring 2019: Saurabh Bagchi, Gregory T. Byrd, David S. Ebert, Jill I. Gostin, William Gropp, Sumi Helal
Term Expiring 2020: Andy T. Chen, John D. Johnson, Sy-Yen Kuo, David Lomet, Dimitrios Serpanos, Forrest Shull, Hayato Yamana
Term Expiring 2021: M. Brian Blake, Fred Douglis, Carlos E. Jimenez-Gomez, Ramalatha Marimuthu, Erik Jan Marinissen, Kunio Uchiyama

IEEE BOARD OF DIRECTORS
President & CEO: Jose M.D. Moura; President-Elect: Toshio Fukuda; Past President: James A. Jefferies; Secretary: Kathleen Kramer; Treasurer: Joseph V. Lillie; Director & President, IEEE-USA: Thomas M. Coughlin; Director & President, Standards Association: Robert S. Fish; Director & VP, Educational Activities: Witold M. Kinsner; Director & VP, Membership and Geographic Activities: Francis B. Grosz, Jr.; Director & VP, Publication Services & Products: Hulya Kirkici; Director & VP, Technical Activities: K.J. Ray Liu

revised 10 October 2019


SPOTLIGHT ON TRANSACTIONS

Advances in Visualization Recommender Systems

Klaus Mueller, Stony Brook University

This installment of Computer’s series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Visualization and Computer Graphics.

As we take part in the revolution of big data, visualization has become an important mechanism for enabling human-in-the-loop data analytics and data-driven decision making. Since visual representations of data and information greatly appeal to the fast cognitive facilities of the human brain, they form a powerful interface to connect the human to the data and any advanced machine-learning algorithms that could be operating on them. This human–machine discourse is commonly referred to as visual analytics.

Data visualization begins with the data that are transformed into a graphical representation by a given visualization method. At this point, the rendered visualization is just an image, which the human observer's perceptual and cognitive facilities subsequently convert into insight. But not all visualizations succeed in this crucial latter task, and many do not even entice a human to engage, that is, to look and spend the effort to study them. While the latter is not yet fully understood, it is related to aesthetics and graphic appeal.

Selecting the best visualization for a given data configuration and analytical goal is difficult. This might be surprising to some and is one of the reasons that work in this research area is so important. The visualization design begins with deciding what basic chart is best suited for the type (that is, categorical, numerical, temporal, string) and size (that is, number of columns and rows) of the data at hand. The design then continues with choosing the right set of colors, the appearance of marks, graphical detail, aspect ratio, chart orientation, and so on.
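To make this concrete, here is a deliberately naive sketch of the kind of rule a design recommender might start from when mapping column types to a basic chart. The rules, thresholds, and names are our own invention for illustration; they are not from the article or any system it describes:

```python
from typing import List

def recommend_chart(column_types: List[str], n_rows: int) -> str:
    """Pick a basic chart from the data's column types and size,
    in the spirit of early rule-based design recommenders."""
    if column_types == ["temporal", "numerical"]:
        return "line chart"            # a trend over time
    if column_types == ["categorical", "numerical"]:
        # too many rows would crowd a bar chart's category axis
        return "bar chart" if n_rows <= 50 else "dot plot"
    if column_types == ["numerical", "numerical"]:
        return "scatter plot"          # relationship of two measures
    if column_types == ["numerical"]:
        return "histogram"             # distribution of one measure
    return "table"                     # fallback when no rule fires

print(recommend_chart(["categorical", "numerical"], n_rows=12))  # bar chart
```

Even a toy like this hints at the limitation discussed below: each new data shape, encoding channel, or aesthetic choice multiplies the rules that must be hand-coded.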


EDITOR: Ron Vetter, University of North Carolina Wilmington; [email protected]

There are two principal factors that govern the graphical design of a data visualization and promote a human's understanding of the data and the information they represent, namely, expressiveness and effectiveness. A visualization is expressive when it is able to communicate all information in the data; it is effective when it does so better than an alternative visualization. Since a mainstream user cannot be expected to possess the kind of expertise needed to design the most expressive and effective chart for his or her data, automating this design task has been an active area of research. In fact, several research systems on this topic had already emerged in the late 1980s. These were essentially rule-based systems informed by contemporary fundamental research on visual information encoding. While effective, rule-based systems have the inherent limitations that the coding of rules is a tedious process and executing a rule-based system can be prone to combinatorial explosion.

The featured spotlight article is "Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco" (IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, pp. 438–448, 2019) by Dominik Moritz, Chenglong Wang, Greg L. Nelson, Halden Lin, Adam M. Smith, Bill Howe, and Jeffrey Heer. It addresses this problem in a unique and forward-looking manner by taking advantage of advanced machine learning and a set of empirical studies on visualization designs that recently became available. These studies produced many user-rated visualization designs and represent a considerable amount of visualization design knowledge and best practices that the authors use effectively. Another enabling technology is Vega-Lite, a high-level grammar for visualization design that was developed in the authors' research laboratory and described in a different set of articles. Vega-Lite maps data to properties of graphical marks, that is, points or bars, and an associated compiler then automatically produces visualization components including axes, legends, and scales.

The Draco system, which is the subject of this article, uses answer-set programming in conjunction with a weight-learning method to represent and apply the visualization design knowledge gained through the aforementioned empirical studies. The design knowledge is encoded as a set of constraints used to design future visual representations. Draco uses the answer-set solver to evaluate the set of constraints when generating or suggesting a visualization. Some constraints can be soft, which enables tradeoffs for using a particular visual design instead of another. In such cases, these soft constraints are weighted, allowing the system to compose a list of the best candidates. The weights can be set manually or learned with a RankSVM classifier. Users can then input their data, and Draco recommends a list of visualizations, ranked by their model-based preference scores.

The system is also notable in that researchers can use Draco to encode findings they generate from their own (and other) empirical studies. These findings can then be captured as constraints to construct, augment, or update a new or existing visualization recommender and validator system.
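The ranking idea can be pictured with a small sketch. This is our simplified rendition of weighted soft constraints, with invented constraint names, weights, and candidates; Draco itself expresses designs and constraints as answer-set programs evaluated by a solver:

```python
from typing import Dict, List

# Hypothetical soft-constraint weights; in Draco such weights are
# learned from pairs of user-rated designs (a RankSVM-style method).
WEIGHTS: Dict[str, float] = {
    "bar_axis_without_zero": 5.0,    # bar chart whose value axis omits zero
    "continuous_on_x_only": 0.3,     # mild stylistic preference violation
    "high_cardinality_color": 2.1,   # color legend with many categories
}

def cost(violations: Dict[str, int]) -> float:
    """Penalty of one candidate: sum of weight times violation count."""
    return sum(WEIGHTS[name] * count for name, count in violations.items())

def rank(candidates: List[dict]) -> List[dict]:
    """Order candidates by ascending cost, so the least-penalized
    (most preferred) designs surface first, as a recommender would."""
    return sorted(candidates, key=lambda c: cost(c["violations"]))

designs = [
    {"name": "bar chart, clipped axis", "violations": {"bar_axis_without_zero": 1}},
    {"name": "scatter plot", "violations": {"continuous_on_x_only": 1}},
]
for d in rank(designs):
    print(f'{cost(d["violations"]):4.1f}  {d["name"]}')
```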


This article originally appeared in Computer, vol. 52, no. 8, 2019. www.computer.org/computingedge


DRONES

Privacy and Civilian Drone Use: The Need for Further Regulation

Stephanie Winkler | Pennsylvania State University
Sherali Zeadally | University of Kentucky
Katrine Evans | Hayman Lawyers

Current US regulation is not equipped to provide explicit privacy protection for drone use in an era of sophisticated audio/video and social media. In 2016, the National Telecommunications and Information Administration recognized this deficit by releasing a set of best practices, which we examine in light of the current privacy concerns with drone use in the US.

We tend to think of drones—or, more properly, "unmanned aerial vehicles" or "UAVs"—as a new phenomenon. However, the concept of using pilotless aircraft for military purposes has been around since at least the American Civil War period. Civilians have also used UAVs for recreation since remote-controlled model aircraft were invented. Some of the problems with civilian UAV use also date back to those early days, including the risk of collision with people or property and interference with other aircraft or vehicles. However, modern computing power and connectivity have changed both the functionality and the capabilities of drones. This article deals with one of those new risks: the potential for onboard video and audio to intrude on personal privacy. Aerial photography has always had some capacity to affect privacy, and the prevalence of cell phone cameras as well as CCTV systems means that our images are being captured and potentially republished more than ever before. However, the use of drones, coupled with easy access to publication channels such as social media, creates particularly intense new challenges.

Current drone regulations in the US provide little in the way of explicit privacy protection. While the National Telecommunications and Information Administration (NTIA) has produced "best practices" to guide users about how to manage drones in a privacy-protective way, this framework does not have the force of regulation (https://www.ntia.doc.gov/files/ntia/publications/uas_privacy_best_practices_6-21-16.pdf). There are also practical and conceptual difficulties with how some aspects of those guidelines apply to the drone environment.

A Brief History of Drones

UAVs—broadly defined as aircraft without a pilot—have an extensive history beyond what individuals have seen in headlines or at their local store today. This technology has significant roots in military use before being adopted by civilians and emergency services.

Military Developments
UAVs have a long history of development dating back to the Civil War. When UAVs were first conceptualized, they essentially consisted of loading up balloons with explosives, and users hoped that they would land on their intended targets. Precision was impossible. Missile technology had improved significantly by World War II, but unmanned aircraft were still in their infancy. For example, the Japanese launched balloons carrying explosives with the theory that air currents would carry the balloons to a target. This was labeled a failure.1 The US experimented with UAVs in WWII as well, but instead of using balloons they used planes. The plane was loaded with enough explosives to essentially make it a bomb. Pilots would fly the plane to a certain point, where (everyone hoped) they would bail out safely, and then a nearby aircraft would control the unmanned plane via radio control to its target. This was known as Operation Aphrodite, and it had mixed success.1 This approach, though, is much closer to how UAVs operate today, with someone controlling the aircraft via remote without actually having to set foot in a cockpit. Since the 1940s, UAVs have continued to be developed by the military and have been used in multiple wars for reconnaissance missions; photography and video were integral to their operation, including surveillance of people and their movements. UAVs gained the capability of carrying weapons only around 2002, during the war in Afghanistan. It is not coincidental that around this time the budget for UAVs increased significantly, with one billion dollars allocated in the US budget for 2003.2

Civilian and Emergency Service Developments
From their origins in the 1930s, remote-controlled model aircraft rapidly became popular, including being a perennial item on Christmas present lists for both children and adults. Specialized competitions and clubs sprang up. Instructors were available to help novice pilots learn to fly their increasingly sophisticated vehicles and avoid hazards. Recent technological developments and decreasing costs of basic drones have expanded the civilian market enormously. This includes the development of commercial applications for drone use, including agriculture, oil/gas, media, real estate, and filmmaking.3 Drones also have increasing value for emergency services, where they enable a close view without putting people at risk, for instance in police search and rescue operations in inaccessible regions, bushfires, and assessment of unstable buildings or accident sites. As of August 2016, there were roughly 20,000 drones registered with the Federal Aviation Administration (FAA) for commercial purposes. This number is expected to grow to nearly 600,000 after a change in FAA regulations made it easier for someone to be certified to pilot a drone.3 In 2016, drone sales for private use grew to $200 million, with models equipped with cameras and global positioning systems (GPS) dominating the market.4

However, regulation of drone use has been slower to develop than the market and is still arguably playing catch-up. For instance, in the US, Congress passed legislation in 2012 directing the FAA to draft rules governing the use of commercial drones by 2015. Similarly, in New Zealand, the Civil Aviation Authority did not pass rules relating to drone use (or "remote piloted aircraft," as it formally calls them) until late in 2015.

Privacy Implications

The proliferation of drone use over the past few years has significantly increased the privacy concerns related to their deployment. Most of this is because drone capabilities have increased over the years to allow for sophisticated surveillance and data collection. The issues include the following:

■ The majority of drones are equipped with—or designed to carry—cameras that record pictures and video and are capable of storing and transmitting this information either immediately or for later viewing (typically through online upload). Some are also audio capable.
■ The immediacy and normality of capturing and publishing information creates a significant need for user awareness of risks and a clear ethical framework within which to act. Taking privacy-invasive material down after the event is possible (though it may still persist in some form), but the damage may well have been done by then. However, the availability of drones has galloped ahead of clear guidance or agreed etiquette—let alone formal rules—on how to use them responsibly.
■ Drones are typically fairly small, especially when used by private citizens, with some capable of fitting in the palm of a hand. This allows them to be relatively invisible—that is, to enable covert surveillance—and to get into small places that a person cannot reach.
■ Drones can also reach heights that are not accessible to other forms of commonly used modern devices such as the ubiquitous cell phone camera. They can therefore capture information that is well beyond what a normal cell phone user would be able to view: they are eyes in the sky.
■ Another major factor in drone capabilities that creates additional privacy risks is their cost. Traditionally, surveillance of this type would be highly expensive, but drones today are inexpensive, with the average cost of those sold in the US in 2016 amounting to around $500.4 At this price, it is relatively easy for a large number of people to become model aircraft users, and it increases the likelihood of entry from commercial organizations.
■ One effective method of protecting the privacy of an individual when collecting data is to ensure that there are no personal identifiers within the data, either by ensuring one does not collect any personal data to begin with or by anonymizing the data, for instance by pixelation or blurring5 (a minimal blurring sketch follows this list). However, it is unrealistic for the average personal user to employ such techniques. Even commercial entities may struggle: this form of advanced technology is relatively expensive and time consuming and would be less likely to be found on commercial or private drones. With the private use of drones, there is no good mechanism to ensure that someone will take these privacy protection measures. Furthermore, at this point in time there are very few requirements for commercial companies to protect visual data in this way.
■ While the law often does not extend to protect privacy in public places, because reasonable expectations of privacy are said to be limited, information captured by drones may well be different because they can capture what is not otherwise visible. Expectations of privacy can still be intact.
■ Drones can capture images of people in their own homes: the place where we most obviously have a reasonable expectation of privacy. For instance, over the past few years, news outlets have reported several instances where drones have flown to someone's window or over a private residence's yard.6 Even the incursion of the aircraft is an intrusion, and when these drones are equipped with cameras, this is typically considered a privacy violation.7
■ It is difficult to know just from a visual inspection of the drone in flight if a camera is actually on and recording. Flying a drone over someone's property cannot constitute trespassing according to a recent court case,6 though the law will differ from place to place. This makes it uncertain whether individuals can take action if they believe that someone is surveying them using a drone: proving what was recorded may be hard, and dealing with the incursion itself may also be legally problematic. As drones continue to gain technological capabilities, surveillance could be conducted far enough away from a property that the target could be completely unaware that he or she is being watched. While this is useful for law enforcement agencies, it is equally problematic when drones are available for private use. This opens the door to making certain types of criminal activity easier (for instance, stalking, peeping).
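As a concrete illustration of the blurring approach mentioned in the list above, the following minimal Python sketch uses OpenCV's stock face detector to blur detected faces in a captured frame. The file names are hypothetical, and, as the list notes, simple post hoc blurring is far from a complete privacy solution:

```python
# Minimal sketch of the blurring idea referenced above (not a complete
# privacy solution): detect faces with OpenCV's stock Haar cascade and
# replace each region with a heavy Gaussian blur before the footage is
# stored or shared. Input/output file names are hypothetical.
import cv2

img = cv2.imread("drone_frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("drone_frame_blurred.jpg", img)
```

Note that detection misses, partial faces, and non-facial identifiers (gait, clothing, license plates) all escape this kind of filter, which is one reason the article treats anonymization as a weak safeguard.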

Current Regulations for Drones in the US

For reasons of space, this article considers only the US federal regulations for drones. However, regulations in other jurisdictions typically pick up similar themes and approach the issues in broadly similar ways. We recognize that many states have already passed their own drone regulation; however, much of this regulation overlaps with the FAA regulations discussed here, with few laws specifically focusing on privacy.

Regulations for drone use in the US are split in different ways. First, there are three categories of drone use: commercial, model aircraft, and military. The FAA is charged with regulating who can fly drones, pilot requirements, drone registration, and what airspace a drone is authorized to use for the two civilian groups (that is, commercial and model aircraft). The military tends to govern itself regarding the use of drones. The FAA regulations for commercial use do not differ very much from those for model aircraft use. Some of the most noteworthy differences are related to the pilot requirements and the operating rules. There is no pilot requirement for model aircraft use. However, to fly a drone for commercial purposes, a Remote Pilot Airman Certificate and Transportation Security Administration (TSA) vetting are required, and there is a minimum age of 16. The ease with which someone might obtain a Remote Pilot Airman Certificate depends on whether they already possess another form of pilot license; the process is much easier if they do. However, for a first-time pilot, this certificate requires proficiency in the English language and the passing of an aeronautical skills test. The certificate expires after two years, at which point the pilot must take a refresher course to renew it. The operating rules for commercial and model aircraft uses are similar, but as with the pilot requirements, much more is explicitly stated for commercial use. The two FAA regulations that the commercial and model aircraft groups have in common are that the drone must be flown within the pilot's line of sight and must yield to any manned aircraft. Other than these two commonalities, it appears that commercial use of drones is regulated more heavily than model aircraft use. However, after examining the community safety guidelines that the FAA refers to for the model aircraft group, it becomes clear that this is not the case. The community safety guidelines include many of the regulations imposed on commercial drone use, including not flying over groups of people and/or events. In addition to these stipulations, the safety guidelines also include some rules that the commercial regulations do not explicitly state, such as recommending that pilots not fly under the influence and not fly near emergency situations. A summary of the regulations for commercial and model aircraft use can be found in Table 1.

Privacy Best Practices ("Guidelines")
In addition to the regulations discussed earlier, the NTIA has also released a set of best practice guidelines in relation to privacy and drone use. They differ from the regulations discussed above because they are merely recommendations for self-regulation and are not required by law. These guidelines specifically refer to practices related to data collected using drones, including transparency of data collection, methods of data collection, and sharing and storage of the data. Like the FAA regulations for the use of drones, the NTIA guidelines are separated into different sections: commercial, neighborly, and press. The press is included in addition to the first two categories in order to carve out an exemption: newspapers and news agencies, while commercial entities, are not asked to follow the guidelines because of the First Amendment protection of free speech and freedom of the press. These organizations typically have their own sets of ethical guidelines, which the NTIA has referenced instead. The first two categories, commercial and neighborly, correspond with the categories (commercial and model aircraft) that the FAA previously defined in the drone regulations. As with those regulations, the best practices for commercial entities are more substantive than those for neighborly use. However, all drone operators, regardless of whether they are commercial or model aircraft users, are advised to use the full set of guidelines whenever possible. The major areas in the privacy guidelines are summarized in Table 2. We discuss the main guidelines below.

Transparency and consent. Informed consent is at the heart of the guidelines. A commercial entity that expects to collect data from individuals must inform them about the potential data collection and obtain consent for this data collection from the individual. Consent is defined as "words or conduct indicating permission." This consent can be implied or explicit. As a general principle, this is of course good practice, though the specifics outlined in the guidelines decrease the power of this principle. The method recommended for notifying individuals about data collection practices is the posting of a privacy policy that outlines the purpose for which the data is collected, what the data will be used for, what kind of data will be collected, how this data will be shared and with whom, how to submit complaints, and how the entity responds to law enforcement requests (https://fpf.org/wp-content/uploads/2016/06/UAS_Privacy_Best_Practices_6-21-16.pdf). To some extent, this meets transparency requirements (subject to the limitations below), but calling it "consent" is problematic. First, privacy statements historically are not written with the best interest of the individual in mind, but with the purpose of protecting the company from litigation. They typically take the form of extremely lengthy documents and are written in high-level language.8,9 This results in very few individuals actually reading the documents.10 Assuming informed consent from an individual on the basis of a privacy policy alone is not going to be very effective in practice.


Table 1. FAA civilian drone regulations.*

Pilot requirements
  – Commercial: must have passed TSA vetting; must have a Remote Pilot Airman Certificate; must be over the age of 16
  – Model aircraft: none

UAV requirements
  – Commercial: must be under 55 lbs, or be registered if over 0.55 lbs; must undergo a preflight check
  – Model aircraft: must be under 55 lbs, or be registered if over 0.55 lbs

Location
  – Commercial: Class G airspace
  – Model aircraft: five miles from an airport (without prior authorization)

Operating rules
  – Commercial: the UAV must yield to any manned aircraft, stay within line of sight, fly under 400 feet, fly at or below 100 mph, not fly over people, and not fly from a moving vehicle
  – Model aircraft: the UAV must yield to any manned aircraft, stay within line of sight, and follow community-based safety guidelines

* Table adapted from the Federal Aviation Administration website: https://www.faa.gov/uas/getting_started.
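To make the operating limits in Table 1 concrete, the following toy Python sketch (our own illustration; the field names correspond to no official FAA API) checks a planned commercial flight against them:

```python
# Toy pre-flight check encoding the Table 1 operating limits for commercial
# use. Field names and structure are illustrative, not any official API.
MAX_ALTITUDE_FT = 400
MAX_SPEED_MPH = 100

def preflight_violations(plan):
    """Return the list of Table 1 operating rules a planned flight breaks."""
    problems = []
    if plan["altitude_ft"] > MAX_ALTITUDE_FT:
        problems.append("must fly under 400 feet")
    if plan["speed_mph"] > MAX_SPEED_MPH:
        problems.append("must fly at or below 100 mph")
    if plan["over_people"]:
        problems.append("must not fly over people")
    if not plan["line_of_sight"]:
        problems.append("must stay within the pilot's line of sight")
    return problems

print(preflight_violations({"altitude_ft": 450, "speed_mph": 60,
                            "over_people": False, "line_of_sight": True}))
```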

Second, and even more tellingly, there is the problem of where the privacy policy would be posted. Typically, when privacy policies are discussed, it is in reference to an online platform or terms and conditions about some technology. In-time communication is relatively easy in such circumstances: the privacy policy is easily accessible via a web link. For physical products, the seller can include a piece of paper with the technology. However, drones involve a third party using a particular technology to potentially collect information about others. While the guidelines do state that the entity should make reasonable efforts to notify individuals before the data collection takes place, they do not state that it should include its privacy policy in that notification. Even if one implies that it must, and even if that policy is publicly available, it does not mean it is easy to find or that it is reasonable and practicable for individuals to take the necessary steps to go and find it. Frankly, it is often impractical for drone users to inform people before or at the time of collection that the collection is occurring (the collection may be incidental to the purpose of using the drone), and tracing the relevant individuals after the event to ask for consent may not always be possible.

Table 2. Privacy best practice summary.

Privacy policy
  – Should be publicly posted
  – Should contain information regarding: the purpose for the data collection; types of covered data collected; data storage and anonymization practices; procedures for submitting complaints/concerns; responses to law enforcement requests

Data collection
  – Operators should not collect covered data where the individual has an expectation of privacy*†
  – Operators should not continuously collect covered data of individuals*†

Data storage
  – Operators should avoid retaining data longer than necessary*†

Data security
  – Operators should develop an information security policy from one of these known standards: Federal Trade Commission; NIST Cybersecurity Framework; ISO 27001 standard for information security management

Data use
  – Data should not be used for: employment eligibility, retention, or promotion;† credit eligibility;† healthcare treatment eligibility;† any purpose not listed in the privacy policy; marketing purposes†

Data sharing
  – Data should not be shared: with anyone not listed in the privacy policy; publicly;† for marketing purposes†

* Unless there is a compelling reason to do otherwise. † Unless consent is obtained. Table constructed from the National Telecommunications and Information Administration Privacy Best Practices for UAS.

The third problem lies with the notion of consent itself. Consent, as outlined in these guidelines, can be explicit or implied. If the entity makes a reasonable effort to notify individuals about data collection, it appears that the entity would be able to assume that no response from these individuals means that they consent. This is more commonly known as "opting out." When an individual opts in to a program, by contrast, there is an assumption that unless the entity hears something from the individual, the answer is "no." Realistically, the nature of the activity and issues of timing mean that individuals are not likely to be given the choice to opt in, however preferable this might be from a privacy perspective.11 Transparency is an important privacy protection, but there are real questions about the practicality of making that information available and the validity of asking for consent in the drone environment. Relying on a privacy policy alone to establish implied consent is conceptually problematic. However, creating mechanisms to obtain consent is useful and worthwhile if the drone use is a planned activity (such as filming a sporting event) where it is obvious that information about individuals will be captured, and particularly where the individuals may be the subjects of the footage. When images of identifiable individuals are captured incidentally or accidentally as part of the drone activity, though, consent may be impracticable, and reliance on a privacy policy to provide consent does not work at all. Notice and choice are simply insufficient to protect privacy. Much greater emphasis needs to be placed on the other safeguards covered in the guidelines.

Covered data. The document is very specific about what types of data are "covered." Covered data is any data that "identifies a particular person." If the information cannot be linked to an individual's name or any other personally identifiable information (PII), then it is not covered data. This is a significant departure from current privacy laws and policy, where protected data is limited to specifically defined types of personally identifiable information. It goes beyond what is traditionally protected. In our view, it does this in a way that appears justified and necessary in the context. Historically, in US law (though not in jurisdictions with broader privacy laws), protecting PII is closely related to protecting against identity theft. This means that the items included in the definition of PII are typically limited to name, Social Security number, driver's license number, bank account numbers, and address. It does not include still or moving images or audio files, which are covered by the guidelines. At this time, multiple technologies exist that can identify an individual based on various attributes including voice and likeness (for example, image).12 This expanded definition of PII is necessary when discussing drones because the traditional forms of data collected by drones would not be covered by current definitions of protected data. The guidelines in this case provide more privacy protection than many laws (for example, the Telecommunications Act) today.

The guidelines cover only information that is capable of leading to an individual's identification. They state that information is not covered if it "likely will not be linked to an individual's name or any other personally identifiable information, or if the data is altered so that a specific person is not recognizable." However, it is well known that anonymization is not effective in protecting an individual's privacy: see, for instance, the studies on re-identifying individuals from combined datasets.13 Facial recognition technologies are improving significantly, including image recognition that can identify people in photographs posted on social media. There are currently methods that can permanently obscure an image, but these require an extra step after data collection.5 It is foreseeable that many entities that apply the guidelines will underestimate the likelihood of individuals being identified, for instance, if they post the footage on social media (which is permitted only if necessary, as we pointed out earlier in discussing the problems with consent and necessity). It is common for agencies to look at such clauses through the lens of their own intentions, not at what is objectively possible in the context—that is, if the agency itself does not intend to identify those individuals, it may assume that the data is not covered by the guidelines. It may therefore be beneficial to expressly alert unmanned aerial system (UAS) operators to the need to take care when publishing images in situations that may allow others to identify individuals who are featured. This includes using images in advertising or posting them online.

Data collection, storage, and sharing practices. The remainder of the privacy guidelines focuses on the collection, storage, and sharing of covered data. In general, unless the operator of the drone has received the explicit consent of the individual or has a compelling reason, he or she should refrain from collecting data. The guidelines do not define the term "compelling reason," in order to give the practices more flexibility for the different entities that would use them. Using a strong term such as "compelling reason" is a useful signal that if consent is not possible, there must be a clearly justifiable reason for the collection. However, not defining this term hurts individuals who could be the subject of data collection, because what they would define as a "compelling reason" will not always match the operator's definition. The decision not to define this term leaves far more room for interpretation than is necessary for this document. Instead, the guidelines could make it plain that a "compelling reason" will only exist where:

■ the personal information is collected for a lawful purpose;
■ the data collection with drones is genuinely relevant to what the agency does;
■ the potential for an individual to suffer harm from a breach of their privacy is limited; and
■ there is no realistic and less intrusive alternative to collecting this information in order to meet the agency's aim—that is, the collection is effective and proportionate.

Drone operators should also avoid retaining collected data for an extended period of time. When the data is retained, there should also be efforts to keep it secure. Specifically, the guidelines state that the entity should "implement a program [with] reasonable administrative, technical, and physical safeguards appropriate to the operator's size and complexity, the nature and scope of its activities, and the sensitivity of the covered data." This language is quite vague and bureaucratic. Both the words reasonable and appropriate are open to interpretation: what seems reasonable or appropriate to one operator may seem excessive to another. Because the guidelines never define either of these terms, there is no unified expectation for privacy protections. The guidelines recognize the problem and refer to several reputable organizations (that is, the Federal Trade Commission, the NIST Cybersecurity Framework, and the ISO) that operators can use as resources to develop a good information security policy. Given that these are privacy best practices and not regulations, referencing these standards of information security is helpful. However, smaller drone operators may struggle to unpack what these external resources mean (particularly the NIST and ISO frameworks) and how they can apply the relevant points to their own situations. If the guidelines are to be observed, it is important to make life easier for the small businesses and individuals who will use them. We suggest that the guidelines could distill the main points from the external guidance and communicate them clearly to users (perhaps in the form of a "top ten" list of things to check or implement).

The final area that the best practices discuss is how data should be shared and used. While several other areas of the best practices are vague, they are more explicit with regard to how this data may be used. Specifically, data collected by drones cannot be used (without consent) to determine "employment eligibility, promotion, or retention; credit eligibility; health care treatment eligibility." The exception is if such use is permitted by another regulatory framework. This is a logical and necessary exception, but given the patchwork nature of US laws and regulations, finding out what other laws might apply is not always the easiest task in practice.


Apart from those specific areas, the operator should not use or share the information outside of circumstances specified in the privacy policy. Clearly, operators need to comply with what they have said they are going to do; to do otherwise would be misleading. The reference is therefore unobjectionable, and is possibly a useful reminder for the operator, but it does not provide any further privacy protection. As mentioned earlier, relying on a privacy policy for any type of consent is not in the best interest of the individual because of the elevated language typically used and the low likelihood that anyone reads it.8,10 As for the public release of the data collected, operators are advised not to do so unless necessary. If they are required to publicly release information, they should take steps to anonymize the data. As stated earlier, this is not a good measure of privacy protection.

Interaction between FAA Regulations and NTIA Best Practices
As briefly mentioned earlier, the privacy guidelines were released by the NTIA, not the FAA. This is because the FAA does not have the authority to regulate activities related to the collection and handling of data. Whether it can issue nonbinding guidance is perhaps moot: it does not appear that it considers the issuing of privacy guidance as part of its core role. Instead, this task falls to the NTIA, which released the guidelines after consulting with multiple stakeholders concerning the operation of drones. The fact that two different agencies are regulating and advising on the topic has some implications for privacy in the use of drones. It is not necessarily obvious to operators (particularly model aircraft users, but even small business operators) where to look for the rules or advice. It may also not be obvious what the status of the documents produced by each body may be. Confusion about what is a required rule and what is voluntary best practice is not helpful, and can raise risks of noncompliance or of risk aversion and excessive compliance costs. Even in the NTIA guidelines, there is some confusion. The only section of the privacy guidelines that directly refers to the FAA is section 2c, which states (emphasis ours):

    Where it will not impede the purpose for which the UAS is used or conflict with FAA guidelines UAS operators should make a reasonable effort to minimize UAS operations over or within private property without consent of the property owner or without appropriate legal authority.

Yet, as we know, the FAA has imposed regulations on commercial and model aircraft operators—not guidelines. Those regulations have the force of law and must be followed, or the operator will suffer consequences.

However, more positively, several of the FAA regulations reduce some of the concerns raised earlier about protections in the privacy best practices. First, the privacy guidelines state that the operator must have in place some way for individuals to file complaints or make requests that their data be deleted. There is a concern that it may not be easy for an individual to find this particular information. It should be in the privacy policy, but unless the individual knows who the drone operator is, this may be of little use. However, an FAA regulation offers some help by requiring every drone over 0.55 lbs to be registered. If this registration information is publicly accessible, and one can visually identify the registration on the drone from a distance, then it would be possible for the individual to look up the operator from the registration. However, we note that that is a lot of "ifs": the chances of everything lining up are not particularly high. Second, the drone must always be within the operator's unaided visual range (that is, without binoculars or a scope). Although this does not stop operators from hiding from those observed, it provides more of an opportunity for individuals to confront them in order to file a complaint than if the operator were a few miles away from the drone. As we noted earlier, though, technological improvements make it possible to obtain clear images of individuals while the drone is a long way from the individual. The line-of-sight rule is important for safety, as it makes it more likely that the UAV will be flown competently, but it is less useful for privacy. The increasing number of near-misses between drones and piloted aircraft, and the difficulties that the authorities experience with tracking the drone operators responsible, make it plain that the ordinary individual faces an uphill struggle with identifying operators, finding out whether a privacy violation has occurred, and holding the operator to account if it has.

Policy Recommendations

The privacy guidelines released by the NTIA follow in the tradition of US privacy policy in that there is an expectation of industry self-regulation. However, if the web platforms are any indication of what self-regulation looks like, it typically does very little to protect individuals. Research shows that industry self-regulation at this time has no impact on preventing data breaches. In addition, there is a proven track record of noncompliance among companies registered with self-regulatory programs, and no clear indication of how this noncompliance is addressed.14 At the very least, the guidelines need amendment to enhance the privacy protection that they offer while also recognizing the significant benefits that drone technology provides for both commercial enterprises and model aircraft users.

The effectiveness of voluntary guidelines will always be limited, however, and it is important to eliminate the confusion that currently exists between what is law and what is optional. But the biggest weakness of self-regulation is the lack of an enforcement mechanism. Individuals have no guarantee that drone operators will follow industry best practices, and no clear action is specified if it is determined that these best practices have been violated. This concern has been echoed by both potential bystanders to drone operation and drone operators.15 We therefore believe that it is better for the US to adopt and enforce privacy regulation concerning the use of drones. The regulation should cover many of the matters in the guidelines, including development of accessible privacy policies, the need for consent where practicable, limits on when personal information can be collected (so that situations where consent is not possible are governed), limitations on use and disclosure, and rules about retention. Any privacy protection mechanism should consider the privacy of bystanders as well as drone operators, because some potential methods of enforcement could impact the privacy of drone operators by subjecting them to data collection.15 Federal regulation is recommended to ensure that all individuals who reside in the US have a minimum standard of privacy protection and for the purposes of clarity for drone operators. As with other privacy regulation today, state legislators still have the option to enact stronger privacy protections as they deem necessary. These recommendations are more in line with how European law deals with privacy but are still workable in the US context. Even so, there are potential barriers that could significantly impact the implementation of our policy recommendations. First, the political climate of the US makes it increasingly difficult to pass legislation at the federal level. Maintaining consistent regulation outside of legislation is just as difficult because many regulatory agencies are dependent on Presidential appointments for their positions. As can be seen with the Federal Communications Commission's attempt to reverse the net neutrality policies introduced in 2014, regulation could shift depending on which party holds power within the regulatory agency. Second, harming innovation is frequently cited as a barrier to privacy protections. We recognize that our policy recommendations could potentially harm smaller drone operators. However, this is a potential harm with any regulation that is introduced. In this situation, we believe the benefits outweigh the costs.

It is our view that appropriate and proportionate regulation of drone use to protect personal privacy is becoming critically important as drones continue to proliferate and their commercial and recreational uses become more varied. The pressure on airspace and the potential for intrusions into personal space are only going to grow greater. UAVs represent a significant opportunity for developing new business processes and services, and it is important not to stifle innovation. However, the ability to innovate will be undermined if drones are operated in an untrustworthy manner. Considering regulation through a privacy lens as well as a safety lens enables us to gain the benefits of UAVs while avoiding the worst of their potential harms. The current combination of FAA regulations and NTIA guidelines does not go far enough to provide clarity and certainty for users or for the public, who may be the subjects of either deliberate or accidental drone surveillance. More needs to be done in both fields, but it does not seem likely that anything short of regulation—where there are consequences for failure to comply—will be effective enough to drive safer and more trusted behavior in this fast-moving environment.

References
1. J. Garamone, "From the U.S. Civil War to Afghanistan: A Short History of UAVs," Army Communicator, vol. 27, no. 2, 2002, pp. 63–64.
2. J. Garamone, "Unmanned Aerial Vehicles Proving Their Worth over Afghanistan," Army Communicator, vol. 27, no. 2, 2002, pp. 62–63.
3. A. Selyukh, "FAA Expects 600,000 Commercial Drones in the Air within a Year," NPR, 29 Aug. 2016; http://www.npr.org/sections/thetwo-way/2016/08/29/491818988/faa-expects-600-000-commercial-drones-in-the-air-within-a-year.
4. "Drone Dollar Sales for the Past 12 Months Were Three Times Higher than Sales from Prior Year," NPD Group, 25 May 2016; https://www.npd.com/wps/portal/npd/us/news/press-releases/2016/year-over-year-drone-revenue-soars-according-to-npd.
5. J. Cichowski and A. Czyzewski, "Reversible Video Stream Anonymization for Video Surveillance Systems Based on Pixels Relocation and Watermarking," IEEE International Conference on Computer Vision Workshops, 2011; doi:10.1109/ICCVW.2011.6130490.
6. A. Peterson and M. McFarland, "You May Be Powerless to Stop a Drone from Hovering over Your Own Yard," Washington Post, 13 Jan. 2016; https://www.washingtonpost.com/news/the-switch/wp/2016/01/13/you-may-be-powerless-to-stop-a-drone-from-hovering-over-your-own-yard/.
7. V. Chang, P. Chundury, and M. Chetty, "Spiders in the Sky: User Perceptions of Drones, Privacy, and Security," Proceedings of the CHI Conference on Human Factors in Computing Systems, 2017, pp. 6765–6776.


8. I. Pollach, "A Typology of Communicative Strategies in Online Privacy Policies: Ethics, Power and Informed Consent," Journal of Business Ethics, vol. 62, no. 3, 2005, pp. 221–235.
9. S. Winkler and S. Zeadally, "Privacy Policy Analysis of Popular Web Platforms," IEEE Technology & Society Magazine, vol. 35, no. 2, 2016, pp. 75–85.
10. R. Malaga, "Do Web Privacy Policies Still Matter?," Academy of Information & Management Sciences Journal, vol. 17, no. 1, 2014, pp. 95–99.
11. D.L. Rosen et al., "Comparing HIV Case Detection in Prison during Opt-In vs. Opt-Out Testing Policies," Journal of Acquired Immune Deficiency Syndromes, vol. 71, no. 3, 2016, pp. 85–88.
12. P.J.S. Vega et al., "Single Sample Face Recognition from Video via Stacked Supervised Auto-Encoder," Proceedings of the 29th SIBGRAPI Conference on Graphics, Patterns and Images, 2016, pp. 96–103.
13. J. Unnikrishnan and F. Naini, "De-anonymizing Private Data by Matching Statistics," 51st Annual Allerton Conference on Communication, Control, & Computing, 2013; doi:10.1109/Allerton.2013.6736722.
14. S. Listokin, "Industry Self-Regulation of Consumer Data Privacy and Security," John Marshall Journal of Information Technology & Privacy Law, vol. 32, no. 1, 2015, pp. 15–32.

15. Y. Yao et al., "Privacy Mechanisms for Drones: Perceptions of Drone Controllers and Bystanders," Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 6777–6788.

Stephanie Winkler is a PhD candidate at The Pennsylvania State University in Information Sciences and Technology. Her current research interests include privacy, security, human–computer interaction, and fake news and misinformation. She received a master's in communication from the University of Kentucky. Contact her at [email protected].

Sherali Zeadally is an associate professor in the College of Communication and Information at the University of Kentucky. His research interests include cybersecurity, the Internet of Things, privacy, and computer networks. Zeadally received a doctoral degree in computer science from the University of Buckingham, England. He's a Fellow of the British Computer Society and the Institution of Engineering and Technology, England. Contact him at [email protected].

Katrine Evans is a privacy lawyer in Wellington, New Zealand, and is currently a senior associate at Hayman Lawyers. Before going into private legal practice, Evans was Assistant Privacy Commissioner for more than a decade, including acting as the Commissioner's general counsel and managing the policy and technology team. Contact her at [email protected].


This article originally appeared in IEEE Security & Privacy, vol. 16, no. 5, 2018.


DEPARTMENT: EXTREME AUTOMATION

The Robotization of Extreme Automation: The Balance Between Fear and Courage

Jinan Fiaidhi and Sabah Mohammed, Lakehead University
Sami Mohammed, Victoria University
Editor: Jinan Fiaidhi, Lakehead University; jinan.fi[email protected]

The line between industrial robot tasks and collaborative robot tasks is beginning to blur, and collaborative robots may be poised to take market share from industrial robots. With the added capabilities of working safely with humans, as well as being smart enough to achieve common goals, cobots and microbots are changing our future.

In our last column,1 we illustrated how the FabLabs network can play an important role in shaping the future of manufacturing automation. The focus was on people taking a collaborative role to design and build the most extraordinary things using tools and techniques that are available at FabLabs. In this direction, FabLabs is a platform for a community of makers who collaborate and learn from each other to build highly specialized products rather than settle for legacy mass-production products. In this column, however, we shift our focus to the use of robots for extreme automation. Indeed, "none of humanity's creations inspires such a confusing mix of awe, admiration, and fear as those associated with robots. We want robots to make our lives easier and safer, yet we can't quite bring ourselves to trust them. We're crafting them in our own image, yet we are terrified they'll supplant us."2 Fears about the consequences of robot use have even reached apocalyptic extremes, resembling the biblical Apocalypse or the end of the world.3 These reactions describe the wind of change surrounding automation and manufacturing, as many prominent people are asking important "what if" questions: What happens if a new technology based on robotics causes millions to lose their jobs in a short period of time? What if most companies simply no longer need many human workers? Bill Gates, as well as many politicians, believes that governments should tax companies for using robots instead of people as a way to compensate for the loss in employment and to fund retraining.4 Others, such as legislators and lawmakers, are trying to mature a robot law.5


Figure 1. Robots as part of the physical AI world.

Indeed, the notion that workers' skills can suddenly become obsolete highlights a lively debate over how much and how fast technology will take over the workplace, and there is concern about whether governmental institutions and social safety nets are equipped to deal with this growing problem. These fears and concerns are growing in pace and scale, the change is likely to be highly disruptive, and the resulting anxiety cannot easily be wished away. Thus, many policy proposals have begun to circulate, some still controversial, such as universal basic income, a negative income tax, and a government job guarantee, among others. The fact that we still have a lot of jobs, including new types of jobs, is a good thing, and automation has in general proved beneficial.6 We cannot surrender to our fears, and we must have the courage to make balanced decisions that support robotization while dealing with the costs associated with this new ecosystem. As Winston Churchill said, "Fear is a reaction. Courage is a decision." We need to move forward boldly by rebooting our thinking on automation, taking what still works from the past, and remixing it for this new world. The most important thing to keep in mind is that as robots and machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values. We also need to learn how to work alongside the robots that have replaced many of our jobs. Robots are central to the new technological initiative called Industry 4.0, or the artificial intelligence (AI) revolution, as they provide the arm for implementing smart applications that interact with the physical world. By implication, the AI paradigm is based on two worlds (virtual and physical), as Figure 1 illustrates, where robotics is an important player in the AI physical world.

ROBOTS FOR EXTREME AUTOMATION
Robots are coming as the new workforce of manufacturing, working alongside humans. They are mainly classified as industrial robots, and typically they are articulated arms featuring six axes of motion (6 degrees of freedom). This design allows for maximum flexibility. There are six main types of industrial robots: Cartesian, SCARA, cylindrical, delta, polar, and vertically articulated, though several additional configurations exist. Each of these types offers a different joint configuration. The joints in the arm are referred to as axes; the arm's pose follows from the joint angles, as the sketch below illustrates. Typical applications of industrial robots include welding, painting, assembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing, all accomplished with high endurance, speed, and precision.
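As a rough illustration of how the six axes determine where the arm's end effector ends up, the following Python sketch computes forward kinematics by chaining one homogeneous transform per joint. The Denavit-Hartenberg parameter values are invented for illustration and describe no particular robot:

```python
# Minimal forward-kinematics sketch for an articulated arm: the end-effector
# pose is the product of one homogeneous transform per joint (axis), built
# here from standard Denavit-Hartenberg parameters. Values are illustrative.
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform for one revolute joint (DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Hypothetical DH table: (d, a, alpha) per joint, in meters and radians.
dh_table = [(0.30, 0.0,  np.pi / 2), (0.0, 0.50, 0.0), (0.0, 0.40, 0.0),
            (0.35, 0.0,  np.pi / 2), (0.0, 0.0, -np.pi / 2), (0.10, 0.0, 0.0)]

def forward_kinematics(joint_angles):
    """Chain the per-joint transforms to get the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh(theta, d, a, alpha)
    return T  # 4x4 pose of the end effector in the base frame

# Position of the end effector for one example set of joint angles.
print(forward_kinematics([0.1, -0.4, 0.6, 0.0, 0.5, 0.0])[:3, 3])
```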


Robot adoption in the manufacturing industry increased by an annual global average of 12% between 2011 and 2016, concentrated heavily in the automotive and electronic/electrical manufacturing sectors. According to the International Federation of Robotics, more than 3 million industrial robots will be in use in factories around the world by 2020.7 Global sales of industrial robots reached a new record of 387,000 units in 2017, an increase of 31% compared to the previous year (2016: 294,300 units). China saw the largest growth in demand for industrial robots, up 58%; sales in the USA increased by 6%, and in Germany by 8%, compared to the previous year. These are the initial findings of the World Robotics Report 2018.8 However, customer demand for greater product variety is driving an increase in low-volume, high-mix manufacturing. Industrial robots, although they can be programmed for different settings, cannot adapt and rapidly repurpose their production facilities at the speed this demands. The need for smart factories, in which machines are digitally linked, is ever growing with the new notion of Industry 4.0.9 Smart factories create a supply chain that shortens product development time, repurposes quickly, reduces product defects, and cuts machine downtime. The smart factory ecosystem describes seamless communication between all levels of extreme automation in a fully connected and flexible system, one that can use a constant stream of data from connected operations and production systems to learn and adapt to new demands. Central to such an ecosystem is the notion of "smart manufacturing units (SMUs)," which signifies the opportunity to drive greater value both within the four walls of the factory and across the supply network. Every SMU is a flexible system that can self-optimize performance across a broader network, self-adapt to and learn from new conditions in real or near-real time, and autonomously run entire production processes. SMUs can operate within the four walls of the factory, but they can also connect to a global network of similar production systems and even to the digital supply network more broadly. An SMU enables all information about the manufacturing process to be available when it is needed, where it is needed, and in the form that it is needed across entire manufacturing supply chains, complete product lifecycles, multiple industries, and small, medium, and large enterprises. In fact, new advances in collaborative robots and assistive technologies such as exoskeletons expand the scope of tasks robots can perform in support of SMU integration and adaptivity. Other significant advances are also adding positive effects in the era of smart factories, such as "cloud robotics," "robot as a service," and microbots. In the next section, we shed some light on some of these advances that have an important effect in increasing the uptake of robots for extreme automation. We may address others in our next columns as they relate to other aspects of the extreme automation initiative.

Advanced Low-Cost Robotics for Manufacturing
Current industrial robots are still pretty rudimentary, and expensive. They have typically been large, caged devices that perform repetitive, dangerous work in lieu of humans. These robots suffer from vision problems, as they cannot identify and navigate around objects (including people), and from dexterity problems, as their gripping, maneuvering, and mechanical capabilities are still limited. As digital technology has advanced and the development of autonomous vehicles has driven down the cost of off-the-shelf hardware, smaller, more dexterous robots have come onto the factory floor. Low-cost robot designs have poured in from around the world in response to initiatives such as Kickstarter campaigns (www.kickstarter.com) advocating a basic low-cost industrial arm for nontraditional markets like FabLabs. These lighter weight, lower cost robots can be outfitted with sensors that allow them to work collaboratively alongside humans in industrial settings, creating "cobots" or "FabLab robots": robots that can perform tasks like gripping small objects, seeing, and even learning to tackle "edge cases." Niryo One (https://niryo.com/) is one example from the Kickstarter campaign: a low-cost, six-axis robotic arm made for makers, education, and small factories. The robot is 3-D printed and powered by Arduino, Raspberry Pi, and the Robot Operating System (ROS); its STL files and code are open source. Niryo One is also the first industrial robot that can be connected to your home and accessed and controlled via the Internet (a hypothetical command sketch follows this paragraph). The introductory price of Niryo One is less than $1,000. There are other examples, like the Franka Emika (www.franka.de), a rather remarkable cobot arm: it is designed to be easy to set up and program, can operate right next to people, assisting them with tasks without posing a risk, and can even build copies of itself.
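As a flavor of what commanding such a ROS-powered arm can look like, here is a hypothetical Python sketch that publishes a joint trajectory. The topic and joint names are assumptions for illustration only; the robot's own ROS packages define the real interface:

```python
# Hypothetical sketch of commanding a ROS-based six-axis arm such as the
# Niryo One. The topic and joint names below are assumptions, not the
# robot's documented interface.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

rospy.init_node("move_arm_demo")
pub = rospy.Publisher("/arm_controller/command",  # assumed topic name
                      JointTrajectory, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect

traj = JointTrajectory()
traj.joint_names = ["joint_%d" % i for i in range(1, 7)]  # six axes

point = JointTrajectoryPoint()
point.positions = [0.0, -0.5, 0.8, 0.0, 0.3, 0.0]  # radians, illustrative
point.time_from_start = rospy.Duration(2.0)         # reach pose in 2 s
traj.points.append(point)

pub.publish(traj)
```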


Now, all the big industrial robot makers are trying to develop their own cobots, but the most innovative designs have come from startups like Rethink Robotics, which pioneered the Baxter dual-arm robot in 2012 and, later, the single-arm robot called Sawyer. Both cobots are simple to use, as their Intera software platform provides a train-by-demonstration experience unlike any other cobot: you can train these cobots by simply moving their arm and demonstrating the movements. With the introduction of Intera 5, Rethink Robotics has created the world's first smart robot that can orchestrate an entire work cell, removing areas of friction and opening up new and affordable automation possibilities for manufacturers around the world. Intera 5 drives immediate value while helping customers work toward a smart factory and providing a gateway to a successful industrial internet of things (IIoT) for the first time.10

Collaborative robots are now defined by ISO 10218, which defines five main collaborative robot features: safety monitored stop, speed and separation monitoring, power and force limiting, hand guiding, and risk assessment.11 (A toy sketch of speed and separation monitoring follows this list.)

1. Power and force limiting: robot force is limited through electrical or mechanical means.
2. Safety monitored stop: the robot stops when a user enters the area and continues when they exit.
3. Speed and separation monitoring: the robot slows down when a user nears and stops if the user gets too close.
4. Hand guiding: the user is in direct contact with the robot while guiding and training it.
5. Risk assessment: the application should be assessed to determine all possible risks, and proper devices and procedures to mitigate those risks should be implemented.
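As a toy illustration of feature 3, the following Python sketch scales a robot's permitted speed by its distance to the nearest detected person. The thresholds are invented; a certified implementation would follow the applicable safety standards rather than this simplification:

```python
# Toy sketch of "speed and separation monitoring": scale the robot's speed
# by the distance to the nearest detected person, stopping entirely inside
# a protective radius. Thresholds are illustrative only.
STOP_DISTANCE_M = 0.5   # inside this radius: safety-monitored stop
SLOW_DISTANCE_M = 2.0   # inside this radius: reduced speed

def allowed_speed_fraction(nearest_person_m):
    """Fraction of nominal speed permitted given human separation."""
    if nearest_person_m <= STOP_DISTANCE_M:
        return 0.0                                   # monitored stop
    if nearest_person_m < SLOW_DISTANCE_M:
        # Ramp linearly from 0 at the stop radius to 1 at the slow radius.
        return ((nearest_person_m - STOP_DISTANCE_M)
                / (SLOW_DISTANCE_M - STOP_DISTANCE_M))
    return 1.0                                       # full speed

for d in (0.3, 1.0, 2.5):
    print(d, "m ->", round(allowed_speed_fraction(d), 2))
```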

Many cobots have succeeded in following these guidelines and achieving high sales among manufacturing companies; the Universal Robots cobots (https://www.universal-robots.com/) are one example.

Robotization Trend: From Cobots to Smart Microbots
There are many other low-cost industrial cobots that you can find with a simple search, such as the KINOVA ultralightweight robotic arm (www.kinovarobotics.com), which weighs just 4.4 kg and gives workers performance, flexibility, and ease of use all in one, with a plug-and-play option, an open architecture, and ROS compatibility. However, the main challenge with cobots is not just the hardware and following the ISO 10218 standard but also the software that makes them easily accessible to nonexperts and smart enough to collaborate with human coworkers and other cobots. This means that cobots need to be equipped with sensors, smart technologies, and algorithms that are linked with the IoT and/or specific systems like the vision system. Vision systems allowing robots to identify and safely navigate around objects were largely an afterthought. In recent years, vision hardware (such as lidar) has become much cheaper, more effective, and subsequently more widespread. Lidar works much like radar, but instead of sending out radio waves it emits pulses of infrared light (lasers invisible to the human eye) and measures how long they take to come back after hitting nearby objects. It does this millions of times a second and then compiles the results into a so-called point cloud, which works like a 3-D map of the world in real time (see the sketch below); the map is so detailed it can be used not just to spot objects but to identify them. Once it can identify an object, the cobot can predict how the object will behave, and thus how the cobot should move. A good example of such a cobot is the ping-pong-playing robot that Omron demonstrated at CES 2018 (https://www.cbinsights.com/company/omron). However, the capability of such a cobot to distinguish what an object is and decide how to move it is still a challenge and a bottleneck. What is needed is a cobot that recognizes people's movements and gestures as well as the objects surrounding it, including other cobots; such a cobot allows real-time mapping of the workspace and can thereby adapt the movements of the robots to which it is connected. Smart cobots, according to DARPA,12 need to be small objects of moderate weight capable of instantaneously recognizing and interpreting everything that is happening in the workspace, enabling the cobot to become more flexible and to react to sudden changes in the work process.
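To make the point-cloud idea concrete, here is a small Python sketch (with made-up sample returns) that converts each pulse's round-trip time and beam angles into an (x, y, z) point:

```python
# Sketch of how lidar returns become a point cloud: each pulse's round-trip
# time gives range (r = c * t / 2), and the beam's azimuth/elevation angles
# turn that range into an (x, y, z) point. Sample values are made up.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def to_point(round_trip_s, azimuth_rad, elevation_rad):
    """Convert one time-of-flight return into a 3-D point (meters)."""
    r = C * round_trip_s / 2.0  # one-way distance to the reflecting object
    return np.array([r * np.cos(elevation_rad) * np.cos(azimuth_rad),
                     r * np.cos(elevation_rad) * np.sin(azimuth_rad),
                     r * np.sin(elevation_rad)])

# Millions of such returns per second accumulate into an N x 3 point cloud.
returns = [(66.7e-9, 0.01, 0.00),   # ~10 m away, nearly straight ahead
           (70.2e-9, 0.02, -0.05),
           (40.0e-9, -0.30, 0.10)]
cloud = np.array([to_point(*ret) for ret in returns])
print(cloud)
```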

DARPA announced a new program called short-range independent microrobotic platforms (SHRIMP). The goal is "to develop and demonstrate multifunctional micro-to-milli robotic platforms for use in natural and critical disaster scenarios." To enable robots that are both tiny and useful, SHRIMP will support fundamental research in the component parts that are the most difficult to engineer, including actuators, mobility systems, and power storage. To overcome the challenges of creating such smart microbots, the first required stage is to optimize their size, weight, and power (SWaP), which are just some of the constraints that very small robots operate under. One of the very first microbots was a "push button" robot that can press nearly any mechanical button, making dumb buttons smart by letting you control them from a smartphone or tablet at any distance over a Wi-Fi connection. The push-button microbot can turn on a device such as a coffee maker at a scheduled time or on wireless command. The next expected stage is to provide such microbots with smart algorithms that enable them to work and collaborate toward common goals. For example, at the University of California, San Diego, a research laboratory managed to develop tiny 3-D-printed robotic fish, smaller than the width of a human hair, that can be used to deliver drugs to specific places in our bodies and to sense and remove toxins. These microfish are self-propelled, magnetically steered, and powered by hydrogen peroxide nanoparticles.13

AI and machine-learning capabilities have been quickly making their way into industrial robotics technology and microbots. In the never-ending quest to improve productivity, manufacturers are looking to improve on the rigid, inflexible capabilities of legacy industrial robots and make these robots smart, small, and effective. A good example of adding such smart capabilities to existing cobots or microbots is the FANUC Intelligent Edge Link and Drive (FIELD) IIoT platform, used to create an intelligent system of collaborating industrial or team robots.14 Kuka and Huawei15 signed a deal to develop what could be another global network—built on the IIoT—to connect robots across many factories; the companies say they plan to integrate AI and deep learning into the system. Moreover, if the power of robot teamwork is combined with a groundbreaking technology like blockchain, industrial robotic operations can become more secure, flexible, autonomous, and even profitable.16 Figure 2 illustrates the trend in industrial robot use, starting from traditional industrial robots and ending with team robots. The enabling technologies for this trend include smart algorithms, the internet of everything, mechatronics, and blockchains.

Figure 2. Robotization trends based on the notion of extreme automation.


CONCLUSION

Digitalization and the implementation of extreme automation, as inspired by Industry 4.0, will bring fundamental changes to industrial production and necessitate new products, solutions, and concepts. Robotization is the new reality that is going to change the manufacturing sector as well as our lives. Robots are becoming smarter and more cost-effective over time. Today a business may own a few classical robots, but the day is not far off when significant parts of an industry will be managed by a new generation of robots. However, the question everyone asks reflects a tension: we want robots to make our lives easier and safer, yet we do not want them to control our lives or cause catastrophic job losses. The challenge is to strike a balance between our fears and the benefits of this innovative technology, which can enhance our quality of life. This column has only touched the surface of this exciting topic, and we encourage you to contribute by writing to the column editor at jfi[email protected].

REFERENCES

1. J. Fiaidhi, S. Mohammed, and S. Mohammed, "FabLabs as enabler for extreme automation," IEEE IT Prof., vol. 20, no. 6, pp. 83–90, Nov./Dec. 2018.
2. M. Simon, "The wired guide to robots," Wired, May 17, 2018. [Online]. Available: https://www.wired.com/story/wired-guide-to-robots/
3. B. Tarnoff, "Robots won't just take our jobs—They'll make the rich even richer," The Guardian, Mar. 2, 2017. [Online]. Available: https://www.theguardian.com/technology/2017/mar/02/robot-tax-job-elimination-livable-wage
4. K. J. Delaney, "The robot that takes your job should pay taxes, says Bill Gates," Quartz, Feb. 17, 2017. [Online]. Available: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/
5. R. Calo, A. M. Froomkin, and I. Kerr, Robot Law. Cheltenham, U.K.: Edward Elgar, 2016, ISBN: 9781783476725.
6. "United States employment rate 1950–2018," Trading Economics. [Online]. Available: https://tradingeconomics.com/united-states/employment-rate
7. "Robots and the workplace of the future," Int. Fed. Robotics, IFR Positioning Paper, Mar. 2018. [Online]. Available: https://ifr.org/downloads/papers/IFR_Robots_and_the_Workplace_of_the_Future_Positioning_Paper.pdf
8. S. Crowe, "Industrial robot sales increase 29% worldwide," The Robot Report, Jun. 20, 2018. [Online]. Available: https://www.therobotreport.com/industrial-robot-sales-29-worldwide/
10. K. McSweeney, "Rethink Robotics rethinks its software," ZDNet, Feb. 8, 2017. [Online]. Available: https://www.zdnet.com/article/rethink-robotics-rethinks-its-software/
11. "Collaborative manufacturing: All together now," The Economist Special Report, Apr. 21, 2012. [Online]. Available: https://www.economist.com/special-report/2012/04/21/all-together-now
12. IEEE Spectrum, "DARPA wants your insect-scale robots for a micro-olympics," Jul. 18, 2018. [Online]. Available: https://spectrum.ieee.org/automaton/robotics/robotics-hardware/darpa-wants-your-insect-scale-robots-for-a-micro-olympics
13. R. Moss, "3D-printed microscopic fish could be forerunners to smart 'microbots,'" New Atlas, Aug. 26, 2015. [Online]. Available: https://newatlas.com/3d-printed-microscopic-fish-smart-microbots/39106/
14. "Field systems," FANUC, 2018. [Online]. Available: https://www.fanuc.co.jp/en/product/catalog/pdf/field/FIELDsystem(E)-01.pdf
15. S. Francis, "Kuka to build global deep learning AI network for industrial robots," Robotics Automat. News, Mar. 19, 2016. [Online]. Available: https://roboticsandautomationnews.com/2016/03/19/kuka-to-build-global-iot-network-and-deep-learning-ai-system-for-industrial-robots/3570/
16. M. Aggarwal, "Blockchain in robotics—A sneak peek into the future," Medium.com, Mar. 18, 2018. [Online]. Available: https://medium.com/@manuj.aggarwal/blockchain-in-robotics-a-sneak-peek-into-the-future-4e115ccf4931


ABOUT THE AUTHORS

Jinan Fiaidhi is a Full Professor and a Professional Engineer with the Department of Computer Science at Lakehead University. She is the Founder of the Smart Health FabLab and an Adjunct Research Professor with the University of Western Ontario. She is also the Editor-in-Chief of the new IGI Global International Journal of Extreme Automation in Healthcare. Moreover, she is a department chair with IT Professional and the Chair of Big Data for eHealth with IEEE ComSoc. Contact her at jfi[email protected].

Sabah Mohammed is a Full Professor and a Professional Engineer with the Department of Computer Science and the Co-Founder of the Smart Health FabLab at Lakehead University. He is also an Adjunct Professor with the University of Western Ontario. He is the Chair of Smart and Connected Health with IEEE ComSoc. Contact him at [email protected].

Sami Mohammed is a Graduate Student with the Department of Computer Science, Victoria University. Contact him at [email protected].

This article originally appeared in IT Professional, vol. 20, no. 6, 2018.


OUT OF BAND

Vehicle Telematics: The Good, Bad, and Ugly

Hal Berghel, University of Nevada, Las Vegas

Digital Object Identifier 10.1109/MC.2019.2891334. Date of publication: 11 March 2019.

Vehicle telematics may be thought of as an Internet of Things (IoT) on wheels. And just as with the IoT, the technology is a mixed blessing, with serious privacy and security implications.

Just as the shifts changed at Jaguars, a noted Las Vegas "gentleman's club," one afternoon in May 2003, FBI agents burst into the manager's office, guns in hand. So, according to the L.A. Times,1 began Operation G-Sting, the federal sting that convicted four Clark County (Nevada) Commission members and two San Diego, California, city councilmen on bribery charges. Flash forward to March 2010. A disgruntled former employee of the Texas Auto Center (TAC) in Austin remotely triggers the installed aftermarket GPS devices to set off car alarms, activate headlights, and shut off the engine starting systems for roughly 100 of TAC's customers' personal vehicles, leaving them stranded with disabled or unusable automobiles.2 What does the conviction of four county commissioners from Las Vegas have to do with the TAC hack? The answer is vehicle telematics, one of the emerging digital threat vectors in use by hackers, criminals, terrorists, governments, and invasive businesses.

FAITH-BASED SECURITY

The TAC (www.texasautocenter.com/) is apparently one of the car dealers of last resort for those who are credit challenged. These car centers have been a staple in poor communities for many years, specializing in high-interest and no income, no job, no assets (NINJA) loans. The trick to making such loans profitable is the ability to recover the car if the payments are in arrears. In years past, this was the purview of the repo man. Now, we throw new wave repo technology at the problem in the form of a digital, remotely operated "real-world asset protection" system like Payteck (http://www.payteck.cc/). TAC used Payteck's GPS and starter-interrupt systems for asset management (i.e., theft reduction)—apparently a winning combo for finance companies that specialize in NINJA car loans. Unlike the GPS trackers used formerly, Payteck's system enables finance and used car companies to both locate and disable the car if payments become delinquent. This sounds fine in principle, but what if a disgruntled former employee of TAC used a coworker's login credentials to go rogue on the unsuspecting used car buyers—which is exactly what happened when that former employee disabled roughly 100 recently sold vehicles as an act of revenge against TAC. This reads like a bad NSA surveillance exposé: one has to ask, "Where were the checks and balances?" Apparently, the Payteck and TAC folks assumed that theft or unauthorized elevation of authentication privilege could never happen to them and that it would be impossible for an employee or hacker to behave improperly. This is an example of what I have called faith-based security (FBS),3 a cousin of security through obscurity (STO). If such strategies are effective, it's by accident rather than design.

Even if you are financially well heeled, your cars aren't immune to FBS and STO measures. Don't get lulled into complacency because you avoid NINJA financing. One may accomplish the same objective with any car through the internal computer system—even by hacking the audio system.4 I will return to this theme.

Operation G-Sting was a hack of a different color; this time, it was the Feds who took advantage of the vehicle telematics system. The convicted bagman, the head of the Clark County Commission and a former cop, decided that to best avoid government eavesdropping, he'd conduct all sensitive discussions regarding the bribing of elected officials in his car, which just happened to have OnStar enabled. Much to his chagrin, the General Motors OnStar folks were all too willing to activate the preinstalled microphone for the FBI, thereby allowing the latter to listen in on conversations in the vehicle (all without benefit of warrant). These recorded conversations provided the key evidence for the convictions. In one of life's little ironies, after the county commissioners had been released from prison, the Ninth Circuit Court (which includes both California and Nevada) ruled that such OnStar spying was illegal because it required tampering with or disabling the OnStar vehicle recovery mode, which violates the customer's terms of service.5 That is, the Ninth Circuit Court ruled that OnStar wiretapping and surveillance represented an egregious violation of a corporate term of service under current law (http://www.law.cornell.edu/uscode/text/18/2518)—but not that it in any way violated the customer/citizen's expectation of privacy!

In a bizarre twist, the 2011 revised OnStar terms of service extended OnStar's promised focus on continuous vehicle recovery mode and specifically allowed OnStar to collect driving and location data from car owners even if they had cancelled their OnStar subscriptions. This produced a public relations nightmare for OnStar, which, temporarily at least, stopped this practice6 at the behest of former Senator Al Franken (D, Minnesota) and Senator Chris Coons (D, Delaware). OnStar has since resumed the practice of collecting any information, for any purpose, at any time (https://www.onstar.com/content/tcps/us/20180227/privacy_statement.html).

These prosecutions were interesting from several perspectives. One of the two San Diego city councilmen was exonerated in 2010 (http://www.sddt.com/News/article.cfm?SourceCode=20101014tza). The convicted former politicians who made up the Las Vegas contingent of bribe recipients have apparently set aside their political aspirations for the moment and directed their attention to less visible vocations in public relations, marketing, and the law.7

BIG BROTHER TELEMATICS

Vehicular telematics is but one of the later instantiations of Orwellian digital dystopia, but with its own distinctive twists, including increased exposure to malicious hacking and the potential for abuse of individual privacy. As with other innovative technologies, modern vehicular telematics is a mixed blessing. There is no doubt that some telematics associated with convenience, safety, mechanical reliability, and entertainment are welcomed by many consumers and to varying degrees. With my latest vehicle, I most appreciate features like forward collision alert, 360° surround vision, distance indication, front pedestrian braking, cross-traffic alerts, active cruise control, lane-keeping assistance, parking sensors, blind-spot monitoring, navigation systems with traffic alerts, adaptive lighting, and a host of other warnings and driver-assistance features. I'm confident the roads would be safer if such features were available on all modern vehicles, and I'm pleased to see that some car manufacturers like Subaru and Toyota now include most of these in their base models. No matter how useful, these telematics features are the least interesting from the point of view of security and privacy. The more intriguing features are those that entail security and privacy vulnerabilities.

I'll begin with a convenience feature that largely goes unnoticed these days: the vehicle remote, also called the wireless fob, which is used in lieu of a key to control access to a vehicle or remotely initiate some action on the vehicle (e.g., remote start). Originally designed about 40 years ago for remote keyless entry, fobs are functionally similar to more feature-rich mobile devices. Used as a substitute for the keypad on the driver's door, the fob is a short-range radio-frequency (RF) transceiver. In my case, the fob passively exchanges proximity information with the car so that, when it is within a few meters of the car, a logo is projected on the ground where a sensor detects motion, opens the rear hatch, turns on various lights, and activates the opening buttons on the door handles. Additionally, push buttons on the fob enable it to communicate instructions to the car to remotely start the engine, lock or unlock the doors, open the rear deck hatch, and set off the car alarm. Many of these features are further configurable. Thus, the modern fob has taken on the role of the modern remote controller associated with multimedia devices.

What's the problem then? To begin with, RF appliances are never optimal for security-sensitive applications; RF neither respects individual privacy nor obeys property lines. So any communications between car and fob should be viewed as broadcasts throughout the immediate neighborhood. This makes them susceptible to a gamut of hacks, ranging from denial-of-service attacks (e.g., to deny vehicle access) to replay attacks, to name but two. And this is nothing new. Computer scientist Avi Rubin has been lecturing about such vehicle insecurities for many years.8 The Center for Automotive Embedded Systems Security at the University of Washington, Seattle, and the University of California, San Diego (www.autosec.org), has been conducting research on vehicle telematics vulnerabilities for even longer. One of the center's classic papers from 20109 references articles on vehicle vulnerabilities as far back as the early 2000s. In a subsequent paper,10 these same researchers evaluate a cornucopia of attack vectors that affect modern automobiles. One of this center's projects, CarShark, provides working demonstrations of these vulnerabilities. Researchers there have since extended their work to include using vehicle telematics for driver profiling and fingerprinting. The irony that this research has been covered so extensively over the past 10 years that it has been featured in Popular Science11 should not be overlooked. Confirmation of these problems isn't hard to obtain. Samy Kamkar (https://samy.pl) recently developed a suite of such attacks12 and reported the same in a 2015 DEFCON talk.13 His tool, OwnStar, runs a replay attack against OnStar fobs by inserting itself between a GM vehicle's transceiver and either OnStar apps on mobile devices or the fobs themselves. His video explains all of this in detail.

While OwnStar targets older RF-based keyless entry systems, modern vehicles use rolling code systems that prevent OwnStar replay attacks. Rolling codes use algorithms to generate code sequences based on pseudorandom numbers. As long as the vehicle transceiver and the fob/mobile app transceiver use the same seeds and rolling code algorithm, the sequences can be validated even if the codes are nonconsecutive. This means that, while continuously changing, rolling code generators suffer from the serious defect of predictability: once the algorithm is known, an endless sequence may be generated, each element of which can be determined to be legitimate. Based on this observation, Kamkar developed RollJam,14 which offers a replay attack for modern RF-based keyless entry systems that use rolling codes. This reaffirms our observation that RF is really not effective when security is important (i.e., if you want to prevent car, boat, or airplane theft; garages from being opened by home invaders; proximity card lock compromises; and so on). A question naturally arises: Why do car companies use digital technologies that are so easily compromised? In this case, challenge-response authentication based on a reasonable key derivation function would go a long way toward avoiding replay attacks (a sketch follows below). Such technology has been well understood and successfully deployed for decades, so why isn't it used for keyless access systems? The answer is that manufacturers' cost-benefit analyses suggest that their legal exposure to the resulting safety deficiencies and security vulnerabilities from nonuse will not cost them much. In the absence of regulations with teeth or large-scale public blowback, there is little incentive to protect the customer. Since the beginning of the industrial revolution, the absence of risk has always been a strong disincentive to serious process improvement where product safety is at issue. (Note, by the way, that these same vulnerabilities may apply to other remote access systems, including remote garage door openers, proximity card access systems, and so on.)
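To make the proposed fix concrete, here is a minimal sketch in Python of challenge-response authentication (our illustration, not any manufacturer's protocol; key names and sizes are assumptions). The car issues a fresh random challenge, and the fob proves knowledge of a shared secret by returning a keyed hash of that challenge. Because each challenge is used only once, a recorded response is useless later, which is exactly what defeats replay attacks:

import hmac, hashlib, os

SHARED_SECRET = os.urandom(32)  # provisioned into both car and fob at pairing

def car_issue_challenge() -> bytes:
    return os.urandom(16)  # unpredictable nonce, never reused

def fob_respond(challenge: bytes, secret: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def car_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

challenge = car_issue_challenge()
response = fob_respond(challenge, SHARED_SECRET)
assert car_verify(challenge, response, SHARED_SECRET)  # legitimate fob unlocks
assert not car_verify(car_issue_challenge(), response, SHARED_SECRET)  # replayed response fails

In a real system the shared secret would be derived with a proper key derivation function and stored in tamper-resistant hardware, but the one-time-challenge structure is the point.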

But keyless access is not the greatest security and privacy vulnerability; far greater is the new cell phone synchronization environment. A decade or so ago, Bluetooth synchronization between a cell phone and the vehicle's communication systems was focused on hands-free use of the phone. The vehicle's voice recognition system sent the appropriate codes to the phone (for dialing, searching contact lists, and so forth) but otherwise played a passive, facilitative role in the communication. On modern GM systems I'm familiar with, and presumably other systems as well, the Bluetooth synchronization actually uploads data from the cell phone and stores the data in the vehicle's computer database—without the user's permission and possibly without the user's knowledge. This compounds the privacy problem of a lost cell phone. Even if cell phone data are encrypted and the phone is locked, it is not that difficult to retrieve PINs, passwords, and encrypted data in plain text. Companies such as Cellebrite (www.cellebrite.com) have, for decades, offered mobile forensics devices that serve this purpose. But the lowest hanging fruit in this attack vector is the automobile. The only data access protection that my car offers is a four-number PIN valet lock. This bad idea is both polished and refined: it offers limited data protection against a determined adversary while at the same time making vehicular telematics inconvenient for the owner. Such bad ideas don't just happen naturally; they require serious effort from incompetent designers. This isn't innovation: it's enervation.

A similar situation applies to the access of data through the onboard diagnostics (OBD) ports under the dash. While it seems reasonable to make diagnostic data available to manage engine performance, optimize safety systems, and so on, when OBD ports became the preferred option for smog tests a decade or so ago, that opened an entirely new vulnerability to the car owner. While automobile manufacturers could have restricted OBD information sharing to just those data of use to smog inspectors, instead they opened the OBD ports to a much wider variety of data—including historical accelerometer data, speed data, GPS data, and trip timings and usage data. Originally, these "black box" OBD devices were used by insurance companies (e.g., Progressive's Snapshot program) to award premium rate discounts; drivers were even given discounts to have them installed in their vehicles. As any good personal injury attorney will attest, one of the first questions attorneys ask of accident investigators is whether the "other" vehicle had one installed so that it may be subpoenaed as evidence in court. Sans black box, an attempt will be made to recover vehicle data directly from the automobile. This information can be used by an insurance company to confirm good driving behavior, but it can also be used by personal injury attorneys and prosecutors to justify liability claims for allegedly bad driving behavior. Somehow this equivalence just never seemed to register with the public. Incidentally, OBD black box devices are now popular general aftermarket automobile appliances for GPS tracking, monitoring driving behavior, and so on (https://www.blackboxgps.com/products/blackbox-gps-3s-locator-obd-ii).

MARKETING VERSUS PRIVACY PROTECTION

I don't mean to impart any special blame to General Motors or OnStar for breaches in personal security and privacy. All car manufacturers offer similar services. Ford SYNC, based on Microsoft's Auto OS, offers the same range of services as GM/OnStar. The same may be said for LexusLink, BMW Assist, Mercedes Mbrace, and so forth. As near as I can tell, all manufacturers approached telematics exclusively from a marketing point of view with little or no consideration for consumer privacy protection. This is not to deny the potential advantage of collision detection and reporting capabilities. Nor is it to criticize the use of motor vehicle event data recorders (MVEDRs) per se. However, for detecting and reporting accidents, MVEDRs don't require more than a few minutes of precrash recorded data collection to serve the passenger's public safety interests. So, even if we assume that vehicle speed, engine revolutions per minute, service brake status, lateral acceleration, roll angles, antilock braking system status, seatbelt status, steering wheel position, and airbag-related data would be useful to first responders, a simple first-in, first-out data collection strategy that would retain only the most recent data would serve perfectly well. In other words, the claim that event data recorder information has to be retained for longer periods or shared with the manufacturer via telephone or satellite links doesn't pass my smell test. That was essentially the issue that Sens. Franken and Coons raised with OnStar.

At the heart of the privacy vulnerability is the manufacturer's insistence on a simple, integrated vehicle data retention policy that will serve all demands, e.g., crash reporting, smog inspection, the manufacturer's revenue potential from the sale of telematics options, manufacturer and third-party marketing and advertising revenue, and so forth. It is this oversimplistic integration that leads to the problem. Of course, the rationale is obvious: automobile manufacturers have discovered that using and selling access to these data can be enormously profitable.15 Car companies and dealers are finding that the sale of customer data is another lucrative source of profit along with the interest and fees associated with car loans. However, unlike with car loans, the customer has no right of refusal regarding the sale of his or her personal data. It is quite telling that automobile manufacturers have not packaged these data collection technologies in the form of optional modules that the customer may or may not purchase and that may be removed if the service is no longer desired. Manufacturers do not want to give customers that choice because 1) many would choose not to purchase these options and 2) the manufacturer would lose the opportunity to repurpose the data for profit.

In the case of my new car, the OnStar equipment is built into the car. Any attempt to disable or remove it not only disables other nonprivacy-invasive systems like navigation, the entertainment system, and Bluetooth connectivity, but it also voids the manufacturer's warranty. And there's no way to avoid Internet connectivity on modern premium cars.16 My car will attempt to authenticate with all insecure proximate Wi-Fi networks whenever the car is started. If there's a way to disable this feature, I haven't found it. One need only read up on Operation G-Sting to confirm the dangers of all of this insecurity. The FBI aren't the only ones listening!

With the proliferation of unwanted and unneeded passive interfaces like Bluetooth; Internet and Wi-Fi connectivity; the ability to invasively record passenger compartment audio and video; the integration of myriad sensors, cameras, and microphones; and the megaintegration of all of these components into an insecure multimedia and networked infrastructure, the potential for privacy abuse in modern automobiles is enormous. Add to that the profit motive for the manufacturers to use or sell these data, and we have a new frontier for privacy abuse, fraud, and theft. The question isn't whether these new automobile systems will be exploited to our cost, but when and to what degree.

This is not to deny that there are other manufacturers capturing these data. Mobile device manufacturers do the same thing. Literally hundreds of smartphone apps are known to share such data as real-time GPS location with third-party vendors.17 It is not easy (and may not be possible) to shut such features off because the manufacturer/provider ultimately has control over enabling/disabling services. However, at least in the case of mobile devices, you have the ability to shut the device off. That's not an option with modern automobiles. There are also more mundane privacy exposures involving such "large scale and covert collection of personal data" through Microsoft Office's ProPlus subscription,18 which shares motivations with vehicle telematics and mobile apps, but under the office productivity suite rubric. I'll expand on this in a future column.

REFERENCES

1. J. Johnson, "The talk of Las Vegas is Operation G-String," Los Angeles Times, Dec. 21, 2003. [Online]. Available: http://articles.latimes.com/2003/dec/21/nation/na-strippers21
2. K. Poulsen, "Hacker disables more than 100 cars remotely," Wired, Mar. 17, 2010. [Online]. Available: https://www.wired.com/2010/03/hacker-bricks-cars/
3. H. Berghel, "Faith-based security," Commun. ACM, vol. 51, no. 4, pp. 13–17, Apr. 2008.
4. R. McMillan, "With hacking, music can take control of your car," IT World, Mar. 14, 2011. [Online]. Available: http://www.itworld.com/security/139794/with-hacking-music-can-take-control-your-car?page=0%2C1
5. D. McCullagh, "Court to FBI: No spying on in-car computers," c|net, Nov. 19, 2003. [Online]. Available: http://news.cnet.com/Court-to-FBI-No-spying-on-in-car-computers/2100-1029_3-5109435.html
6. K. Hill, "OnStar kills its terrible plan to monitor non-customers' driving," Forbes, Sept. 27, 2011. [Online]. Available: http://www.forbes.com/sites/kashmirhill/2011/09/27/onstar-kills-its-terrible-plan-to-track-non-customers-driving-data/
7. J. A. Morrison, "Politicians come and go after serving corruption sentences," The Las Vegas Rev.-J., Mar. 21, 2010. [Online]. Available: https://www.reviewjournal.com/news/news-columns/jane-ann-morrison/politicians-come-and-go-after-serving-corruption-sentences/
8. TED, "All your devices can be hacked," 2011. [Online]. Available: https://www.ted.com/talks/avi_rubin_all_your_devices_can_be_hacked
9. K. Koscher et al., "Experimental security analysis of a modern automobile," in Proc. IEEE Symp. Security and Privacy, 2010, pp. 1–16.
10. S. Checkoway et al., "Comprehensive experimental analyses of automotive attack surfaces," in Proc. USENIX Security Symp., 2011, pp. 77–92.
11. R. Boyle, "Proof-of-concept CarShark software hacks car computers, shutting down brakes, engines, and more," Popular Science, May 14, 2010. [Online]. Available: https://www.popsci.com/cars/article/2010-05/researchers-hack-car-computers-shutting-down-brakes-engine-and-more
12. A. Greenberg, "The greatest hits of Samy Kamkar, YouTube's favorite hacker," Wired, Dec. 17, 2015. [Online]. Available: https://www.wired.com/2015/12/the-greatest-hits-of-samy-kamkar-youtubes-favorite-hacker/
13. YouTube, "Drive it like you hacked it: New attacks and tools to wirelessly steal cars," 2015. [Online]. Available: https://www.youtube.com/watch?v=UNgvShN4USU
14. A. Greenberg, "This hacker's tiny device unlocks cars and opens garages," Wired, Aug. 16, 2015. [Online]. Available: https://www.wired.com/2015/08/hackers-tiny-device-unlocks-cars-opens-garages/
15. P. W. Howard, "Data could be what Ford sells next as it looks for new revenue," Detroit Free Press, Nov. 13, 2018. [Online]. Available: https://www.freep.com/story/money/cars/2018/11/13/ford-motor-credit-data-new-revenue/1967077002/
16. B. Schneier, Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World. New York: Norton, 2018.
17. J. DeVries, N. Singer, M. H. Keller, and A. Krolik, "Your apps know where you were last night, and they're not keeping it secret," New York Times, Dec. 10, 2018. [Online]. Available: https://www.nytimes.com/interactive/2018/12/10/business/location-data-privacy-apps.html
18. C. Cimpanu, "Dutch government report says Microsoft Office telemetry collection breaks GDPR," ZDNet, Nov. 14, 2018. [Online]. Available: https://www.zdnet.com/article/dutch-government-report-says-microsoft-office-telemetry-collection-breaks-gdpr/

HAL BERGHEL is an IEEE and ACM Fellow and a professor of computer science at the University of Nevada, Las Vegas. Contact him at [email protected].

This article originally appeared in Computer, vol. 52, no. 1, 2019.


CYBER-PHYSICAL SYSTEMS

EDITOR DIMITRIOS SERPANOS

ISI/ATHENA and University of Patras; [email protected]

Can We Trust Smart Cameras?

Bernhard Rinner, Universität Klagenfurt

Digital Object Identifier 10.1109/MC.2019.2905171. Date of publication: 14 May 2019.

Technological advances have changed our conception of cameras as boxes that capture images into the notion of networked, smart devices that analyze their environment and trigger decisions. Since smart cameras have become key enabling components for cyberphysical systems, trust in these devices is gaining importance.

Over the last decades, we have witnessed tremendous progress in many fields of technology, including optics, microelectronics, computer vision, and sensor networks. These technological advances have leveraged the development of smart cameras, which perform sophisticated analytics of the captured imagery onboard the cameras.1,2 They transform the captured images into useful data, such as object classifications, person identifiers, and recognized actions, and may, therefore, not even deliver raw images anymore. New applications beyond the traditional consumer electronics and commercial markets of mobile phones and surveillance have been developed with smart cameras. They come in various forms and configurations and have pervaded many aspects of our everyday life. We rely on cameras for monitoring the interior and exterior of modern cars, we enjoy enhanced photos and videos captured by selfie drones, and we even swallow cameras for medical examinations. Thus, smart cameras have become key devices for the Internet of Things (IoT) and cyber-physical systems and often operate in networks to assess situations in their environment.

Progress in technology and applications over the last decades has changed our conception of cameras as boxes that capture images into a more general notion of cameras as smart and networked devices that generate data and make decisions. A new camera paradigm has emerged in which smart cameras have become key system components with an advanced level of autonomy and complexity, where the camera's output is not intended for the eye of the beholder anymore but serves as essential input for prompt control and complex decision-making algorithms. As in many applications in which humans are becoming more eliminated from decision-making processes, trust in the overall system and its individual components is gaining importance. A key question, therefore, is whether current smart cameras already fulfill our trust expectations. I will discuss some challenging issues for smart cameras in the following sections.

CHALLENGES

Complexity and Reliability

The functional complexity and reliability of smart cameras are lagging behind for many real-world deployments. Image processing was one of the driving applications for the recent progress in artificial intelligence (AI) and machine learning, in particular deep neural networks. One prominent example is face recognition, where machine-learning algorithms have outperformed humans on some standard benchmarks.3 Although there are still important open research questions for face recognition (such as scalability, pose, aging effects, and training data), it is now embedded as a popular service on several mobile platforms. Cameras with image processing based on deep learning typically offload complex and data-intensive computation onto high-performance computers or cloud infrastructures. Such a hybrid approach, with a clear separation between onboard inference and offboard training, is hardly feasible for systems that require a high degree of autonomy and where the advanced analysis of captured imagery is highly integrated with control and decision making.

Robotics and self-driving cars are prototypical domains for autonomous systems that operate in complex, dynamic, and interactive environments. Here, embedded camera systems are used together with lidar and radar systems to detect, classify, and track relevant road users and predict their future trajectories. Yet real traffic scenarios include complex interactions among various road users, and handling complex clutter and modeling the interactions with other road users are necessary.4 Smart cameras face a complexity challenge in autonomous systems where visual information serves as the prime input modality, and understanding the state of the environment is a fundamental precondition for low-level control and planning high-level actions. The performance of environment perception is currently sufficient in dedicated settings and controlled environments. However, smart cameras still need to reach human-level performance in real-world (outdoor) settings, and current segmentation, detection, and tracking accuracies do not yet suffice in difficult conditions.

Reliability issues are closely related to functional complexity. Reliable environment perception is essential not only for high-safety applications, such as robotics and autonomous driving, but also for smart environments. These applications have very diverse settings and scales (from rooms to cities) and typically rely on events detected by cameras. Smart environments are built on a wide variety of algorithms, including motion, occupancy, and person detection, as well as more complex activity recognition. Despite the huge amount of captured data and the sensor fusion performed, robustness and reliability are still lagging behind expectations, limiting the deployment of smart environments.

Security

Smart cameras are profitable targets for security attacks, due to their omnipresence, multiple assets, and restricted defense mechanisms. The Mirai attack represented a prominent example of exploiting smart camera platforms for a distributed denial-of-service (DDoS) attack.5 The malware enslaved vast numbers of insufficiently secured smart cameras into a botnet, which was then used to launch the DDoS attack. Although the Mirai malware followed the very simple strategy of scanning for IoT devices still using their default passwords, the consequences were dramatic, with hundreds of sites and services being rendered inaccessible to millions of people worldwide for several hours. In the Mirai attack, smart cameras were used to launch a DDoS attack because of the lack of basic security practices.

Cameras are also the target of attacks. Various security flaws have been recently reported with IoT-based surveillance cameras.6 These vulnerabilities could essentially let an attacker view footage from every smart camera connected online, completely disable the camera, and use it to get inside your computer's network.

Smartphones are particular camera devices that serve as source, destination, and manipulator of a huge amount of data and thus provide many assets that need protection. Examples of such assets include personal user data (keys, fingerprints, etc.), content providers' data (protection from illegal copying on the user's platform), and smartphone system protection (trusted boot and real-time kernel protection). Recent smartphone platforms rely on separate, isolated coprocessors, such as ARM's TrustZone and Apple's Secure Enclave, which are built into the platform's main system on chip, or Google's Titan M, which is realized as a dedicated security chip. Hardware-enhanced protection significantly raises the security bar on these platforms. It is much harder to break than pure software defenses but still requires careful implementation of security functionality and remains vulnerable, particularly to side-channel attacks. The challenge of security remains a complex, multidimensional problem that has to be addressed in a holistic way. Security needs to become a primary design consideration, leverage proactive defense techniques, and involve all stakeholders to protect data and devices throughout their entire lifetime.

Privacy

Smart cameras pose threats to our privacy. Privacy concerns have been raised by the rapidly increasing number of visual-data-capturing devices. Not only surveillance cameras but also other video-capable multimedia devices threaten privacy.7 Due to the confluence of the availability of huge amounts of visual data, AI-based data analytics, and big data networks, identities can be detected, and behaviors of individuals can be analyzed at a large scale. Camera networks at airports, train stations, and shopping malls monitor millions of persons every day and allow the analysis of their movement patterns and activities. Although real-time analytics is limited for most of the currently deployed camera networks, huge amounts of video data are (at least temporarily) stored and can be analyzed offline and merged with other data sources.

Encryption is a widely used privacy protection approach that provides access to the complete raw data to privileged users, that is, those possessing the decryption keys. For all other users, the visual data remain protected and have no utility. Anonymization is a more gradual protection approach in which sensitive parts of visual data, such as faces or text, are deteriorated such that identification becomes more difficult for both humans and algorithms. Typical anonymization techniques include pixelation or blurring, and more advanced methods can adapt the deterioration strength or replace sensitive parts with a deidentified representation.7 The applied anonymization technique influences the achieved privacy protection level and the remaining utility of the deteriorated visual data. Anonymization enables customizing the protection level and utility according to various factors, such as application requirements and user preferences (a sketch of the simplest such technique appears at the end of this section).

Another risk to privacy comes from data linkage. Even if visual data captured from personal devices (e.g., smartphones or body cameras) and private spaces (e.g., home security, assisted living, or monitoring applications) are encrypted or anonymized, their linkage can reveal sensitive personal information about individuals. More advanced techniques, such as privacy-aware reporting or secure multiparty computation, help to mitigate data-linkage risks. However, they are rarely deployed in current camera applications.

The General Data Protection Regulation (GDPR) affects camera manufacturers and system operators and helps raise the protection level and privacy awareness. GDPR, which has been in effect throughout the European Union (EU) since May 2018, regulates data protection and privacy for all individuals within the EU and the European Economic Area (EEA). It further addresses the export of personal data outside the EU and EEA areas.8 GDPR requires various organizational and technological procedures to implement its key principles, such as "protection by design and by default," "right of access," and "right to erasure."
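As promised above, here is a minimal sketch in Python of pixelation, the simplest of the anonymization techniques named in this section (our illustration; the region coordinates and block size are arbitrary assumptions):

import numpy as np

def pixelate(image: np.ndarray, x: int, y: int, w: int, h: int,
             block: int = 16) -> np.ndarray:
    """Return a copy of image with the (x, y, w, h) region coarsened into blocks."""
    out = image.copy()
    region = out[y:y + h, x:x + w]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True)  # one color per block
    return out

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
anonymized = pixelate(frame, x=200, y=120, w=128, h=128)  # e.g., a detected face region

Larger block sizes deteriorate the region more strongly, which is exactly the protection-versus-utility trade-off described above.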

Resource Limitations

Finally, smart cameras face a resource efficiency challenge. The steadily increasing functionality of smart cameras imposes strong resource requirements for the platform along multiple dimensions, including processing, storage, data transfer, and energy.9,10 Consequently, the overall design space for the camera platform is complex, and the tradeoff among resources, cost, size, and various other factors needs to be explored. Integrating as a system-on-chip is a widely used approach for developing resource-efficient platforms. Recently, nanotechnology has been explored as a fundamentally new platform for smart cameras.11

As smart cameras will continue to pervade our daily life, their functionality, reliability, security, and efficiency need to be continuously advanced. This requires research and development efforts covering different fields, including hardware, software, and system elements. These technological advances can serve only as a basis of trust. They must be complemented by various nontechnical measures to establish sustainable trust in smart cameras.

REFERENCES

1. B. Rinner and W. Wolf, "Introduction to distributed smart cameras," Proc. IEEE, vol. 96, no. 10, pp. 1565–1575, 2008.
2. M. Reisslein, B. Rinner, and A. Roy-Chowdhury, "Smart camera networks," Computer, vol. 47, no. 5, pp. 23–25, May 2014.
3. Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "Deepface: Closing the gap to human-level performance in face verification," in Proc. 2014 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Columbus, OH, 2014, pp. 1701–1708.
4. W. Schwarting, J. Alonso-Mora, and D. Rus, "Planning and decision-making for autonomous vehicles," Annu. Rev. Control, Robotics, Autonomous Syst., vol. 1, pp. 187–210, 2018. doi: 10.1146/annurev-control-060117-105157.
5. S. Khandelwal, "Chinese electronics firm to recall its smart cameras recently used to take down Internet," Hacker News, Oct. 24, 2016. Accessed on: Dec. 25, 2018. [Online]. Available: https://thehackernews.com/2016/10/iot-camera-mirai-ddos.html
6. V. Dashchenko and A. Muravitsky, "Somebody's watching! When cameras are more than just 'smart,'" Kaspersky Lab, Mar. 12, 2018. Accessed on: Dec. 25, 2018. [Online]. Available: https://securelist.com/somebodys-watching-when-cameras-are-more-than-just-smart/84309/
7. T. Winkler and B. Rinner, "Security and privacy protection in visual sensor networks: A survey," ACM Comput. Surv., vol. 47, no. 1, p. 38, 2014. doi: 10.1145/2545883.
8. Wikipedia, "General Data Protection Regulation." Accessed on: Dec. 31, 2018. [Online]. Available: https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
9. J. C. SanMiguel and A. Cavallaro, "Energy consumption models for smart camera networks," IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 12, pp. 2661–2674, 2017.
10. C. Bobda and S. Velipasalar, Eds., Distributed Embedded Smart Cameras. New York: Springer, 2014.
11. J. Simonjan, J. M. Jornet, I. Akyildiz, and B. Rinner, "Nano-cameras: A key enabling technology for the Internet of multimedia nano-things," in Proc. 5th ACM/IEEE Int. Conf. Nanoscale Computing and Communication, Reykjavik, Iceland, Sept. 2018. doi: 10.1145/3233188.3233209.

BERNHARD RINNER is a professor of pervasive computing at Universität Klagenfurt and deputy head of the Institute of Networked and Embedded Systems. Contact him at [email protected].

This article originally appeared in Computer, vol. 52, no. 5, 2019.


Department: View from the Cloud
Editor: George Pallis, [email protected]

A Disaggregated Cloud Architecture for Edge Computing

Rafael Moreno-Vozmediano, Universidad Complutense de Madrid
Rubén S. Montero, Universidad Complutense de Madrid
Eduardo Huedo, Universidad Complutense de Madrid
Ignacio M. Llorente, Universidad Complutense de Madrid and Harvard University

Digital Object Identifier 10.1109/MIC.2019.2918079. Date of current version: 15 July 2019.

Abstract—The use of a disaggregated cloud architecture would allow companies and institutions to deploy their own edge computing infrastructure and services by combining their on-premises cloud infrastructure with multiple highly dispersed edge nodes leased from third-party providers.

There is growing interest, both from researchers and industry, in edge and fog computing paradigms. These have emerged as a solution for overcoming the limitations of traditional cloud computing platforms in supporting low-latency environments. According to recent estimates, over 20 billion devices will be connected to the network by 2020,1 which will result in exponential growth in the amount of data with low-latency processing needs.


There have been multiple reports of use cases in which services, or at least parts of them, were run much more efficiently next to where the data were created and/or consumed. A good example is that of a web platform offering low-latency services to users distributed across various geographic locations (e.g., e-healthcare services, remote driving assistance, online immersive games, etc.). Such a platform can be implemented as a central web server plus a variable number of application servers distributed over several edge locations. Another example is the emerging Internet of Things (IoT), which requires data processing near the devices in question.


These use cases usually require decoupling the application into two parts: one residing in a centralized cloud and another running on the network edge. For example, cloud providers including AWS and Microsoft Azure are starting to offer software solutions to extend their cloud services to the edge. They are not offering edge infrastructure but rather service software that extends cloud data processing functionality to IoT devices. AWS IoT Greengrass extends AWS to edge devices so they can act locally upon the data they generate. This service requires the installation of the AWS IoT Greengrass Core software on a Linux system to allow the local execution of AWS Lambda code, messaging, data caching, and security for a group of nearby IoT devices. Greengrass works as a cache and a hub for local AWS-enabled devices. Azure follows a similar approach with Azure IoT Edge.

Many companies need a way to elastically serve the dynamic demand of the edge portion of these emerging applications, for example, in application servers or IoT hubs. Moreover, in most cases, the end users and IoT devices are distributed over various geographic locations. Therefore, these companies require a framework capable of efficiently deploying and managing a distributed edge computing infrastructure that can dynamically and opportunistically support the execution of these edge hub services. Thus, here we define a disaggregated cloud model that can manage and combine both the local physical resources of on-premises data centers and the resources of multiple highly dispersed edge nodes to build dynamic, agile, and decentralized edge computing environments.

One current question is: if the main cloud providers are not offering infrastructure at the edge, then who is? To meet the needs of edge computing, the main telecoms companies are building new 5G infrastructures to support faster mobile data speeds, thereby adding server power to their cell towers and central offices. These companies plan to offer these thousands of computing distribution points as a service to the providers of applications who require data to be transmitted and received with negligible latency. Meanwhile, smaller hosting providers, including Packet and others in the Kinetic Edge Alliance, are introducing edge infrastructures with scattered microdata centers that offer latencies of less than 10 ms in major international locations.

The disaggregated cloud architecture we propose here can manage small to medium edge computing infrastructures comprising several tens to hundreds of geographically distributed edge resources. These could be leased from the aforementioned third-party providers, who offer on-demand resources located at the edge of the network in the form of bare-metal hosts or virtual servers. Furthermore, our model could easily be extended to larger platforms by using a peer-to-peer decentralized federation of clouds.

DISAGGREGATED CLOUD ARCHITECTURE

Most edge computing environments (e.g., OpenStack Trio2o or ENORM2) are based on a distributed management model, where each edge node is independent and is managed by its own management platform—also known as a Cloud operating system (OS). These distributed models include an upper-level edge gateway responsible for interacting with clients, which routes client requests to the appropriate bottom edge nodes. This solution is highly scalable; however, it delegates most complex management tasks to the edge nodes, which can consume a lot of their resources. Furthermore, installing, configuring, and maintaining many Cloud OS instances can be an extremely complex and time-consuming task. Here we propose a disaggregated management model based on a centralized Cloud OS that manages the local resources of an on-premises data center as well as those of different edge nodes, as shown in Figure 1. This model delegates most of the previously mentioned complexity to the Cloud OS, thus releasing the edge nodes from multifarious management tasks.

Figure 1. Disaggregated cloud architecture.

The main stakeholders in the disaggregated cloud architecture are as follows:

• Edge Application Providers: These are the companies offering edge applications to their customers. They run their own Cloud OSs to build a disaggregated private cloud that dynamically allocates physical resources at the edge offered by edge infrastructure providers and may also use resources from an on-premises cloud. They can deploy different types of services, such as web services using edge application servers to provide location-aware services and reduce latency,3 e-health applications using edge e-health gateways for rapid emergency responses,4 or IoT applications using edge IoT hubs for edge analytics.5
• Edge Infrastructure Providers: These are third-party providers that lease out physical resources at the edge in the form of bare-metal servers.
• Public Cloud Providers: The Cloud OS can also interact with one or more external cloud providers, including AWS or Azure, to offer cloud-bursting capabilities.

The main functions of the Cloud OS are as follows: 



• Management: As a cloud management platform [6], the Cloud OS manages physical resources located both in the on-premises cloud and in the various dispersed edge locations, and it provides users with uniform access to this disaggregated resource pool. It is also responsible for the deployment and life-cycle management of the virtual resources overlaid onto this physical infrastructure. It must be able to provide different virtualization technologies, such as virtual machines (VMs) or system containers, according to the service needs.

• Auto-scaling: The Cloud OS must also provide auto-scaling capabilities to accommodate the anticipated number of virtual resources demanded by the service. In this paper, we only consider reactive threshold-based mechanisms [7] that allow service providers to define scaling rules based on different performance metrics. For each service component and location (edge node or cloud site), these rules are defined in terms of specific upper and lower thresholds for the selected metric. If a metric rises above or falls below these cut-offs at a given moment and location, a scaling action is triggered that adds or removes virtual resources at that location (see the sketch after this list).

• Monitoring: In terms of scalability, the monitoring system is one of the most critical components of the Cloud OS. One of its key parameters is the monitoring interval, i.e., the time that elapses between two consecutive readings of the monitoring metrics. This interval depends on several factors [8], including the number of resources to be monitored, their load, and the network latency between them and the monitoring system.
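The sketch below (ours, not the authors' implementation) shows how these pieces fit together: a monitoring loop samples the metric once per monitoring interval and triggers a scaling action only when a threshold breach persists for a sustained period, a guard whose concrete values appear in the proof-of-concept section. The read_response_time_ms, add_vm, and remove_vm callbacks are hypothetical placeholders.

import time

SCALE_UP_MS, SCALE_DOWN_MS = 60.0, 25.0  # example thresholds from the experiments below
MONITOR_INTERVAL_S = 60                  # monitoring interval T_MI (s)
SUSTAINED_S = 3 * 60                     # sustained period T_SP (s)

def autoscale(read_response_time_ms, add_vm, remove_vm):
    breach_kind = None    # "up", "down", or None
    breach_since = None   # when the current breach started
    while True:
        t_r = read_response_time_ms()
        kind = "up" if t_r > SCALE_UP_MS else "down" if t_r < SCALE_DOWN_MS else None
        if kind != breach_kind:
            # State changed: start (or clear) the sustained-breach timer.
            breach_kind, breach_since = kind, time.time()
        elif kind and time.time() - breach_since >= SUSTAINED_S:
            # Breach held long enough: add or remove a virtual resource.
            (add_vm if kind == "up" else remove_vm)()
            breach_kind, breach_since = None, None
        time.sleep(MONITOR_INTERVAL_S)

One such loop would run per service component and location, since the scaling rules are defined per component and per edge node or cloud site.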

The key challenges of this disaggregated model are scalability, orchestration, availability, and security. A disaggregated architecture would not scale properly for very large edge computing environments, but it is eminently suitable for small to medium private edge infrastructures consisting of tens to hundreds of resources. The model can be extended to larger, worldwide platforms by using a peer-to-peer decentralized federation of Cloud OS instances.

Regarding the orchestration and availability of geo-distributed virtual resources over multiple edge locations, the Cloud OS should allow the specification of placement constraints through the definition of VM groups and affinity rules over multiple availability zones, clouds, and edge locations [9]. The application must be aware of the location of its users (also managing their mobility), and it is responsible for defining where the service components must be deployed.

The use of a single Cloud OS instance greatly simplifies security management, as it provides centralized tools for user authentication, access control, definition of security groups, and secure communication with the edge nodes. Moreover, service-level agreements with the edge infrastructure providers guarantee the integrity, availability, and authenticity of the edge nodes. In addition, secure VPN tunnels can be implemented to provide integrity and authentication at the application level [10].
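For concreteness, a Cloud OS such as OpenNebula (the one used in the proof of concept below) lets placement constraints of this kind be expressed as VM group templates with affinity rules. The following sketch merely composes such a template as a string; the role name, host IDs, and zone-to-host mapping are illustrative assumptions, and the exact attribute syntax should be checked against the OpenNebula VM Groups documentation.

# Hypothetical mapping from edge zones to the OpenNebula host IDs that form them.
EDGE_ZONE_HOSTS = {"edge-madrid": "3,4", "edge-paris": "5,6"}

def vm_group_template(service, zone):
    """Build an OpenNebula-style VM group template for one edge zone."""
    return "\n".join([
        f'NAME = "{service}-{zone}"',
        # Spread replicas of the role across different hosts...
        'ROLE = [ NAME = "app-server", POLICY = "ANTI_AFFINED",',
        # ...while pinning them to the hosts of the chosen edge zone.
        f'         HOST_AFFINED = "{EDGE_ZONE_HOSTS[zone]}" ]',
    ])

print(vm_group_template("iot-hub", "edge-madrid"))  # e.g., passed to onevmgroup create

The application-level logic that tracks user locations would decide which zone to pass in, in line with the location-awareness requirement above.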

PROOF OF CONCEPT AND RESULTS

In this section, we show the viability of the disaggregated cloud architecture. Our experimental setup comprises a Cloud OS based on OpenNebula, an on-premises cloud, and a single edge node located one hop from the clients/devices. OpenNebula is an ideal candidate for managing a disaggregated cloud because it is functional, simple, flexible, and lightweight.

For the workload, we consider a generic IoT scenario such as the one shown in Figure 1. We assume that the application exhibits a typical performance behavior, with each device, on average, generating a request every ten seconds, and each request consuming 10 ms of CPU and 8 ms of I/O. Figure 2(a) shows the response time (T_R) of a single application server as a function of the server load, represented by the number of devices accessing the server. We also assume that the number of devices increases linearly from 100 to 2,500, at a rate of 20 new devices per minute. We used VMs as the virtualization technology; however, the same mechanisms can also be used for system containers.

Figure 2. (a) Performance profile of a single application server. (b) Total response time.
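The profile in Figure 2(a) was measured empirically. As a back-of-the-envelope cross-check (our own illustration, not the paper's method), an open M/M/1-style approximation using the stated per-request costs produces the same kind of curve, with T_R rising sharply as either the CPU or the I/O subsystem approaches saturation.

CPU_S, IO_S, PERIOD_S = 0.010, 0.008, 10.0  # per-request costs (s) and request period (s)

def response_time_ms(devices):
    """Approximate server response time under an M/M/1-style model."""
    lam = devices / PERIOD_S                 # aggregate arrival rate (req/s)
    rho_cpu, rho_io = lam * CPU_S, lam * IO_S
    if rho_cpu >= 1 or rho_io >= 1:
        return float("inf")                  # server saturated
    # T = s / (1 - rho) for each resource, summed and converted to ms.
    return 1000 * (CPU_S / (1 - rho_cpu) + IO_S / (1 - rho_io))

for n in (100, 500, 750, 900):
    print(n, round(response_time_ms(n), 1))  # crosses ~60 ms near 750 devices

Under this rough model, a single server crosses the 60 ms scale-up threshold used below at roughly 750 devices, which is consistent with the knee shape of such load profiles.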





We analyzed and compared the following scenarios:

• Classic cloud: No edge resources were considered, and all the service components were deployed within the on-premises cloud infrastructure.

• Distributed edge: Each edge node was managed by its own Cloud OS, and service components were deployed within the edge node closest to the devices.

• Disaggregated cloud: The edge nodes were managed by a central Cloud OS, and the service components were deployed in the edge node or cloud site closest to the devices.

The following parameters were used in these scenarios:

• Network latency (T_NL): We assumed a network latency of 50 ms between the devices and the computing resources for the on-premises cloud (classic cloud) and of 10 ms for the edge node (distributed edge and disaggregated cloud). These values were experimentally obtained from the different cloud and edge providers (AWS, Azure, and Packet).

• Auto-scaling metric and thresholds: To implement the threshold-based auto-scaling mechanism, we used T_R as the service-performance metric. The scale-up and scale-down thresholds were set to 60 and 25 ms, respectively, as shown in Figure 2(a).

• Monitoring interval (T_MI): The monitoring interval is higher in the disaggregated cloud, where the Cloud OS must monitor both the local resources and a variable number of remote edge resources. We assumed a one-minute monitoring interval for the classic cloud and distributed edge setups, and a two- to five-minute interval for the disaggregated cloud setup, depending on its size.

• Auto-scaling sustained period (T_SP): To avoid unnecessary fluctuations in the number of resources assigned to the service, the condition that triggers an auto-scaling action (i.e., the monitoring metric exceeding the established threshold) must hold for a sustained period; in this case, we set this period to three minutes.

• VM start-up time (T_VS): This parameter depends on multiple factors [11]. In most popular cloud platforms, it takes between one and six minutes; however, in real tests on Packet edge servers, we measured start-up times of 4 to 24 s. Here, we considered a start-up time of one minute.

• Total auto-scaling delay (T_AD): This represents the total time elapsed between the occurrence of a condition that triggers an auto-scaling action and the provision (or retraction) of a VM. It can be computed as T_AD = T_MI + T_SP + T_VS, as the short check after this list illustrates.
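As a quick sanity check of this delay budget, the sketch below evaluates T_AD = T_MI + T_SP + T_VS with the values just listed.

def t_ad(t_mi_min, t_sp_min=3, t_vs_min=1):
    """Total auto-scaling delay (minutes): monitoring + sustained period + VM start-up."""
    return t_mi_min + t_sp_min + t_vs_min

print(t_ad(1))           # classic cloud and distributed edge: 5 minutes
print(t_ad(2), t_ad(5))  # disaggregated cloud: 6 to 9 minutes, depending on size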

Thus, using the aforementioned values, the total auto-scaling delay was five minutes for the classic cloud and distributed edge scenarios and between six and nine minutes for the disaggregated cloud scenario.

Considering all these parameters, we ran our experiment for two hours; Figure 2(b) shows the total response time (T_total), which takes into account both the application response time (T_R) and the network latency (T_NL). The peaks in these graphs represent the instants before the provision of a new VM; T_total dropped once the new VMs became available. Because it combines the lowest network latency with the lowest auto-scaling delay, the distributed edge had the lowest average T_total (63 ms), whereas the disaggregated cloud had a slightly higher average (64.1 to 68.5 ms) resulting from its higher auto-scaling delay. Finally, the classic cloud had the highest average T_total (143 ms) because of its higher network latency.

CONCLUSION

The disaggregated cloud model proposed in this paper represents a good trade-off between performance and design complexity. Compared to the more complex distributed edge architecture based on multiple Cloud OS instances (one per edge node), the disaggregated cloud can manage both on-premises and edge resources with a single Cloud OS instance and with minimal application performance degradation.

ACKNOWLEDGMENTS

This work was supported in part by the Ministerio de Ciencia, Innovación y Universidades under Grant RTI2018-096465-B-I00, and in part by the Comunidad de Madrid through the Research Program EDGEDATA-CM (P2018/TCS-4499).

REFERENCES

1. M. Hung, "Leading the IoT: Gartner insights on how to lead in a connected world," Gartner Inc., Stamford, CT, USA, 2017.
2. N. Wang, B. Varghese, M. Matthaiou, and D. S. Nikolopoulos, "ENORM: A framework for edge NOde resource management," IEEE Trans. Serv. Comput., to be published, doi: 10.1109/TSC.2017.2753775.
3. J. Zhu et al., "Improving web sites performance using edge servers in fog computing architecture," in Proc. IEEE 7th Int. Symp. Service-Oriented Syst. Eng., 2013, pp. 320–323.
4. A. M. Rahmani et al., "Exploiting smart e-health gateways at the edge of healthcare Internet-of-Things," Future Gener. Comput. Syst., vol. 78, pp. 641–658, 2018.
5. P. Patel, M. Intizar Ali, and A. Sheth, "On using the intelligent edge for IoT analytics," IEEE Intell. Syst., vol. 32, no. 5, pp. 64–69, Sep./Oct. 2017.
6. R. Moreno-Vozmediano, R. S. Montero, and I. M. Llorente, "IaaS cloud architecture: From virtualized datacenters to federated cloud infrastructures," Computer, vol. 45, no. 12, pp. 65–72, 2012.
7. T. Lorido-Botran, J. Miguel-Alonso, and J. A. Lozano, "A review of auto-scaling techniques for elastic applications in cloud environments," J. Grid Comput., vol. 12, no. 4, pp. 559–592, 2014.


8. J. S. Ward and A. Barker, "Cloud cover: Monitoring large-scale clouds with Varanus," J. Cloud Comput., vol. 4, no. 1, pp. 1–28, 2015.
9. R. Moreno-Vozmediano, R. S. Montero, E. Huedo, and I. M. Llorente, "Orchestrating the deployment of high availability services on multi-zone and multi-cloud scenarios," J. Grid Comput., vol. 16, no. 1, pp. 39–53, 2018.
10. R. Moreno-Vozmediano, R. S. Montero, E. Huedo, and I. M. Llorente, "Cross-site virtual network in cloud and fog computing," IEEE Cloud Comput., vol. 4, no. 2, pp. 46–53, Mar./Apr. 2017.
11. M. Mao and M. Humphrey, "A performance study on the VM startup time in the cloud," in Proc. IEEE 5th Int. Conf. Cloud Comput., 2012, pp. 423–430.

Rafael Moreno-Vozmediano is an Associate Professor with Universidad Complutense de Madrid, Madrid, Spain. His research interests include distributed computing, edge/cloud computing, and networking. Contact him at [email protected].

Rubén S. Montero is Cofounder and Chief Architect of OpenNebula and an Associate Professor with Universidad Complutense de Madrid, Madrid, Spain. His research interests include resource provisioning models for distributed systems and cloud computing. Contact him at [email protected].

Eduardo Huedo is an Associate Professor with Universidad Complutense de Madrid, Madrid, Spain. His research interests include distributed computing, cloud computing, and performance management. Contact him at [email protected].

Ignacio M. Llorente is Cofounder and Director of OpenNebula, a Full Professor with Universidad Complutense de Madrid, Madrid, Spain, and a Visiting Professor with Harvard University, Cambridge, MA, USA. His research interests include distributed, parallel, and data-intensive computing technologies, and innovative applications of those technologies to business and scientific problems. He is a Senior Member of IEEE. Contact him at [email protected].

This article originally appeared in IEEE Internet Computing, vol. 23, no. 3, 2019.


COLUMN: Backspace

Oscillation

Vinton G. Cerf, Google

Even today, computing technology continues to oscillate between localized and centralized data storage and processing.

When computers were first developed in the 1930s and 1940s (not counting Babbage's Difference Engine and other similar machines of the 19th century), they tended to be large, single-program-at-a-time systems. We referred to "batch programming," in which only one computing job at a time consumed the entire machine. The machines got faster, and one saw illustrations of large buildings theoretically filled with computer equipment that would run big computer programs in sequence.

But then two strange things happened. The idea of making a machine share its computing power among multiple users ("time sharing") was invented in the early 1960s, and machines began to get smaller and less expensive while delivering the same or even more computing capacity. Eventually, the centralized computing model was replaced by departmental computers and then by personal workstations, desktop and laptop computers, and now by smartphones and tablets. Concurrent with this evolution, computer networking evolved from the 1960s eventually to become the global Internet, linking many of these computing devices together. The personal computers were reconfigured and repurposed to fill warehouses full of racks of computers that could be shared by millions of users, returning us to the age of shared, centralized computing systems but with a major distributed component of memory and computing capacity.

Now a new trend is emerging: so-called edge computing, with powerful computing capability positioned at the edges of the network, where personal devices also reside. The centralized facilities (warehouses) are still there, but now we have distributed computing power once again at multiple levels in organizations and even residential settings. The Internet of Things will reinforce this trend. Billions of small, programmable devices will populate our offices, homes, automobiles, and our persons. They will interact with and sometimes be protected by local computing capacity that shields the smaller devices from various kinds of attack or that serves as a kind of backup in case the Internet becomes unreachable. We see a related trend as content distribution services such as Akamai download and store content (e.g., music, videos, and commonly referenced databases) to put it closer to the user's access to the Internet.

The result of all this is a persistent oscillation between localized computing and data storage and centralized storage and processing. While these oscillations take place, we also end up with concurrent centralized and distributed models operating across the Internet. Architecturally speaking, these phenomena introduce a strong demand for synchronization of content in a highly distributed environment. This is a nontrivial problem that calls for the deep thinking that led to Leslie Lamport's PAXOS family of protocols.

The implications of this line of reasoning suggest that the oscillation between centralized and distributed computing will continue and that computing and storage will become normal adjuncts of almost all environments. In the extreme, we may find computing technologies such as the memristor becoming a normal part of many computing elements, so that the classical von Neumann architecture that separates computing from memory blurs into a blended architecture that produces phenomena such as "instant on" that eliminate the need for "booting up" the operating system. Computation can continue where it was when power was turned off. As newer computing technology emerges, perhaps in the form of bio-computing and storage systems, the ubiquity of computing and communication will become even more pronounced, leading the way towards the vision of Mark Weiser, who coined the term "ubiquitous computing" 30 years ago to describe his concept that computing and communications would blend into the background in such a way that we would not make much of a distinction between computers and everything else. Everything would have computational capacity, and communication would be embedded into an omnipresent infrastructure. As we experience new modalities of interaction with computing via voice and gesture, this vision may rapidly become reality.

BIO

Vinton G. Cerf is vice president and chief Internet evangelist at Google, and past president of ACM. He's widely known as one of the "fathers of the Internet." He's a Fellow of IEEE and ACM. Contact him at [email protected].

This article originally appeared in IEEE Internet Computing, vol. 22, no. 4, 2018.


CAREER OPPORTUNITIES

www.aspenconsultinggroup.com

SOFTWARE VULNERABILITY RESEARCH POSITION

Aspen Consulting Group is seeking a highly motivated individual with academic or industry experience to lead research activities in the discovery of software vulnerabilities. The candidate will perform research and analysis of security vulnerabilities associated with a wide variety of architectures, to include x86, MIPS, and ARM, amongst others, with emphasis on embedded software applications. Demonstrated experience with reverse engineering tools, such as IDA Pro, the use of fuzzing techniques and tools, and understanding of dynamic, symbolic, and concolic software analysis are essential. The candidate must possess an expert command of C/C++ programming, with knowledge of assembly language for multiple architectures a definite plus.

Degree and Required Skills:
• BS/MS degree in Computer Engineering, Computer Science, Electrical Engineering, or a related field. Candidates currently enrolled in an accredited MS degree program relevant to this position will be considered. Possessing a PhD degree or being a candidate for a PhD is preferred but not required.
• Experience with debuggers, code compilers and linkers, source code extractors, and disassemblers for Windows, Linux, Android, Apple OS X, and iOS platforms.
• Experience with software reverse engineering tools and methods.
• Experience with emulation, such as QEMU, and code intermediate representation (IR).
• Ability to prepare technical reports and papers and to present findings to senior management and at technical conferences.

Primary Tasks: The successful candidate will assume the lead role in discovering vulnerabilities within identified applications in support of a research team. He/she will collaborate in identifying software patches and in the analysis of potential exploits which could have resulted from the discovered vulnerabilities. In addition, the candidate will be integral to our current research efforts in automated vulnerability discovery with the application of Machine Learning (ML) techniques.

Travel Requirements: 10%-25% travel. U.S. Citizenship required.

About the Company: Aspen Consulting Group Inc. (ACG) provides a wide range of services encompassing multiple technical disciplines. Our staff is recognized for expert knowledge in Hardware Design and Fabrication, Software Development, Computer Network Operations, Wireless Systems Engineering, and System Development, Integration, and Testing. Aspen's corporate headquarters is located in Manasquan, NJ, one mile from the Atlantic Ocean. This position is located in Northeast Maryland, close to Baltimore and Washington, D.C.

All benefits are paid by the company with no employee contributions!
• Employer-paid medical, dental, and vision plans
• Paid leave (four weeks of vacation/sick time)
• Employer-paid life insurance ($100K per employee)
• Paid prescription drug plan
• Company-funded Health Reimbursement Account
• 10 paid holidays per year
• 401k Retirement Plan (immediate vesting; employer matches up to 4% of employee's contribution)
• Performance Bonus plan
• Employee Referral Bonus
• Employee tuition reimbursement

Please email resumes to Amy Russell at [email protected].


CALIFORNIA INSTITUTE OF TECHNOLOGY

The Computing and Mathematical Sciences (CMS) Department at the California Institute of Technology (Caltech) invites applications for tenure-track faculty positions in all areas of applied mathematics, computer science, and related disciplines. Areas of interest include (but are not limited to) algorithms, data assimilation and inverse problems, dynamical systems and control, geometry, machine learning, mathematics of data science, networks and graphs, numerical linear algebra, optimization, partial differential equations, probability, scientific computing, statistics, stochastic modeling, and uncertainty quantification. Application foci include computational physical and life sciences, distributed systems, economics, graphics, quantum computing, and robotics and autonomous systems.

The CMS Department is part of the Division of Engineering and Applied Science (EAS), comprising researchers working in and between the fields of aerospace, civil, electrical, environmental, mechanical, and medical engineering, as well as materials science and applied physics. The Institute as a whole represents the full range of research in biology, chemistry, engineering, physics, and the social sciences.

A commitment to world-class research, as well as high-quality teaching and mentoring, is expected. The initial appointment at the assistant professor level is four years and is contingent upon completion of a PhD degree in applied mathematics, computer science, engineering, or the sciences. Reappointment beyond the initial term is contingent upon a successful review conducted prior to the commencement of the fourth year.

Applications will be reviewed beginning 15 November 2019, but all applications received before 15 January 2020 will receive full consideration. For a list of required documents and full instructions on how to apply online, please visit https://applications.caltech.edu/jobs/cms. Questions about the application process may be directed to [email protected].

Caltech is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Faculty Chair Position, Computer Science Department

The College of Engineering and Applied Sciences of the University at Albany, SUNY is seeking applicants for a full-professor faculty position, beginning September 2020, for Chair of the Department of Computer Science. The successful candidate will have an established record of scholarship with demonstrated potential to influence the growth and development of computer science programs at the University at Albany. Applicants must have a PhD in Computer Science, Computer Engineering, Electrical Engineering, or a closely related discipline.

For a complete job description and application procedures, visit: https://albany.interviewexchange.com/jobofferdetails.jsp?JOBID=117256

Questions regarding the position may be addressed to [email protected]. For additional information on the College and its departments, please visit: http://www.albany.edu/ceas/

The University at Albany is an EO/AA/IRCA/ADA Employer.

IEEE Computer Society Has You Covered!
• WORLD-CLASS CONFERENCES: 200+ globally recognized conferences.
• DIGITAL LIBRARY: Over 700k articles covering world-class peer-reviewed content.
• CALLS FOR PAPERS: Write and present your ground-breaking accomplishments.
• EDUCATION: Strengthen your resume with the IEEE Computer Society Course Catalog.
• ADVANCE YOUR CAREER: Search new positions in the IEEE Computer Society Jobs Board.
• NETWORK: Make connections in local Region, Section, and Chapter activities.

Explore all of the member benefits at www.computer.org today!


CALL FOR PAPERS: IEEE Intelligent Systems

Special Issue on Learning Complex Couplings and Interactions

Deadlines:
• Submissions due: 30 March 2020
• First revisions due: 30 July 2020
• Final revisions due: 30 October 2020
• Publication: January/February 2021

The world is becoming increasingly complex, with a major drive to incorporate increasingly intricate coupling relationships and interactions between entities, behaviors, subsystems, and systems, along with their dynamics and evolution, of all kinds and in all domains. Effective modeling of complex couplings and interactions in complex data, behaviors, and systems directly addresses the core nature and challenges of complex problems and is critical for building next-generation intelligent theories and systems that improve machine intelligence and help us understand real-life complex systems.

This special issue on learning complex couplings and interactions will collect and report the latest advancements in artificial intelligence and data science theories, models, and applications for modeling complex couplings and interactions in big data, complex behaviors, and complex systems. It will solicit recent theoretical and practical advances in areas relevant to, but not limited to, explainable learning, representation learning, hybrid intelligent systems and problems, large-scale simulation, visual analytics and visualization, big data and complex behaviors, human-understandable explanations, deep learning, and modeling applications and tools.

Topics of interest include:
- Coupling learning of complex relations, interactions, and networks and their dynamics
- Interaction learning of activities, behaviors, events, processes, and their sequences, networks, and dynamics
- Learning natural, online, social, economic, cultural, and political couplings and interactions
- Learning real-time, dynamic, high-dimensional, heterogeneous, large-scale, and sparse couplings, dependencies, and interactions
- Probabilistic and stochastic modeling of coupling and interaction uncertainty and dependency

Submission guidelines: computer.org/ex/author-information
Contact the guest editors at [email protected]

Guest editors: Dr. Can Wang (Griffith University, Australia), Prof. Fosca Giannotti (University of Pisa, Italy), and Prof. Longbing Cao (University of Technology Sydney, Australia)

Stay connected. Keep up with the latest IEEE Computer Society publications and activities wherever you are. Follow us: @ComputerSociety | facebook.com/IEEEComputerSociety | youtube.com/ieeecomputersociety | instagram.com/ieee_computer_society

CALL FOR PAPERS: IEEE Internet Computing

Special Issue on the Internet of Bodies (IoB)

Deadlines:
• Submissions due: 13 January 2020
• Revisions due: 13 April 2020
• Final version due: 1 June 2020
• Publication: July/August 2020

As healthcare solutions and augmented monitoring of human mobility overlap with the Internet of Things (IoT), an emerging area of interest leverages sensor networks that monitor personal health data and human activity. The Internet of Bodies (IoB) effectively connects the human body to the Internet. The Internet of Sports (IoS) further specializes IoB for devices and software that monitor human activity in the fitness and competitive sports domain. The advanced capabilities and performance of these high-tech biological devices open a full array of opportunities and challenges as healthcare professionals develop new medical solutions and assistive technologies. There is an increasing number of IoB applications for personal mobile devices; there are also new and exciting devices, such as smart contact lenses for adapted vision correction, cochlear implants for enhanced hearing, and electronic pills that monitor internal organs. Internet computing, software engineering, human-computer interaction, and data and networking technologies are essential to creating next-generation IoB technologies.

This special issue aims to collect the most recent practical and theoretical advances in IoB, including cutting-edge techniques for system development and implementation, networking approaches, privacy and security considerations, and human-sensor interactions.

Topics of interest include:
- IoB or sensor networks supporting human-oriented sensing/functions
- Sensor networks for biological monitoring and enhancement
- Security and privacy for IoB, IoT, and IoS
- IoB/IoS-enabled innovation and entrepreneurship
- IoB/IoS applications and services
- Experimental results and deployment scenarios
- Electronics and signal processing for IoB/IoS/IoT
- Connectivity and networking

Submission guidelines: computer.org/ic/author-information
Contact the guest editors at [email protected]

Guest editors: M. Brian Blake (Drexel University), Schahram Dustdar (Vienna University of Technology), Nagarajan Kandasamy (Drexel University), and Xuanzhe Liu (Peking University)

Conference Calendar

Questions? Contact [email protected]

IEEE Computer Society conferences are valuable forums for learning on broad and dynamically shifting topics from within the computing profession. With over 200 conferences featuring leading experts and thought leaders, we have an event that is right for you. Find a region:

■ Africa | ▲ Asia | ◆ Australia | ● Europe | ◗ North America | ★ South America

January
13 January • ICCPS (Int'l Conf. on Cyber-Physical Systems) ●

February
3 February • ICSC (IEEE 14th Int'l Conf. on Semantic Computing) ◗
18 February • SANER (IEEE 27th Int'l Conf. on Software Analysis, Evolution and Reengineering) ◗
19 February • BigComp (IEEE Int'l Conf. on Big Data and Smart Computing) ▲
22 February • CGO (IEEE/ACM Int'l Symposium on Code Generation and Optimization) ◗

March
2 March • WACV (IEEE Winter Conf. on Applications of Computer Vision) ◗
9 March • DATE (Design, Automation & Test in Europe Conf. & Exhibition) ●
9 March • IRC (4th IEEE Int'l Conf. on Robotic Computing) ▲

16 March • ICSA (IEEE Int'l Conf. on Software Architecture) ★
22 March • VR (IEEE Conf. on Virtual Reality and 3D User Interfaces) ◗
23 March • ICST (13th IEEE Conf. on Software Testing, Validation and Verification) ●
23 March • PerCom (IEEE Int'l Conf. on Pervasive Computing and Communications) ◗

April
5 April • ISPASS (Int'l Symposium on Performance Analysis of Systems and Software) ◗
14 April • PacificVis (IEEE Pacific Visualization Symposium) ▲
20 April • ICDE (IEEE 36th Int'l Conf. on Data Eng.) ◗

May
3 May • FCCM (IEEE 28th Annual Int'l Symposium on Field-Programmable Custom Computing Machines) ◗
4 May • HOST (IEEE Int'l Symposium on Hardware Oriented Security and Trust) ◗
18 May • SP (IEEE Symposium on Security and Privacy) ◗
18 May • FG (IEEE Int'l Conf. on Automatic Face and Gesture Recognition) ★
18 May • IPDPS (IEEE Int'l Parallel and Distributed Processing Symposium) ◗
23 May • ICSE (IEEE/ACM 42nd Int'l Conf. on Software Eng.) ▲
30 May • ISCA (ACM/IEEE 47th Annual Int'l Symposium on Computer Architecture) ●

June
14 June • CVPR (IEEE Conf. on Computer Vision and Pattern Recognition) ◗
16 June • EuroS&P (IEEE European Symposium on Security & Privacy) ●
19 June • JCDL (ACM/IEEE Joint Conf. on Digital Libraries) ▲
29 June • DSN (50th Annual IEEE/IFIP Int'l Conf. on Dependable Systems and Networks) ●
30 June • MDM (21st IEEE Int'l Conf. on Mobile Data Management) ●

July
6 July • ICME (IEEE Int'l Conf. on Multimedia and Expo) ●
8 July • ICDCS (IEEE 40th Int'l Conf. on Distributed Computing Systems) ▲
13 July • COMPSAC (IEEE Annual Computer Software and Applications Conference) ●

August
31 August • RE (IEEE 28th Int'l Requirements Eng. Conf.) ●

September
21 September • ASE (35th IEEE/ACM Int'l Conf. on Automated Software Eng.) ◆
28 September • ICSME (IEEE Int'l Conf. on Software Maintenance and Evolution) ◆
28 September • SecDev (IEEE Secure Development) ◗

October
18 October • MODELS (ACM/IEEE 23rd Int'l Conf. on Model Driven Eng. Languages and Systems) ◗
21 October • FIE (IEEE Frontiers in Education Conf.) ●
25 October • VIS (IEEE Visualization Conf.) ◗

November
9 November • FOCS (IEEE 61st Annual Symposium on Foundations of Computer Science) ◗
15 November • SC ◗
16 November • LCN (IEEE 45th Conf. on Local Computer Networks) ◆

Learn more about IEEE Computer Society Conferences www.computer.org/conferences

CALL FOR PAPERS: IEEE Transactions on Computers

Publish your work in the IEEE Computer Society's flagship journal, IEEE Transactions on Computers (TC). TC is a monthly publication with a wide distribution to researchers, industry professionals, and educators in the computing field. TC seeks original research contributions on areas of current computing interest, including the following topics:

• Computer architecture
• Software systems
• Mobile and embedded systems
• Security and reliability
• Machine learning
• Quantum computing

All accepted manuscripts are automatically considered for the monthly featured paper and annual Best Paper Award. Learn about calls for papers and submission details at www.computer.org/tc.