Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies 0198822820, 9780198822820


Table of contents :
Cover
Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies
Copyright
Dedication
Preface
Acknowledgments
Contents
Chapter 1: Key Questions
Question 1: What’s so special about this technology?
Question 2: What problem are you trying to solve?
Technical requirements
Ethical and legal requirements
Commercial requirements
Potential obstacles
No problem for the solution? Be creative . . .
Question 3: What is the effect of time?
Limits
Money, momentum, and market
Getting started
Question 4: What is the competition?
The status quo
Technology in development
Something completely different
Question 5: What are the features of each competitor?
Summary
Chapter 2: Finding Answers
Getting organized
Example work flow
Which application is the most promising?
What are the application’s requirements?
What is the competition?
What are the features of the competing technologies?
A simple plan
Real life steps in
Types of sources
Keywords
Search engines
Technical
The technical literature
Forward and backward citations
Books and book chapters
Commercial/technical
Patents
The technical press
Industry bloggers
Industry reports and roadmaps
The outside world
People
Conferences
Lab, company, and site visits
Business development and PR/comms people
Commercial
Trademarks and designs
Annual reports
Websites, press releases, and whitepapers
Summary
Chapter 3: Perspectives and Agendas
The press and the trade press
Look for the naysayers
Disagreement over the problem to be solved
Misleading without deliberately lying
More contrary positions
Individual agendas
Industry/corporation-supported cheerleaders
Credibility, analysis, and balance
Summary
Chapter 4: Analyze
Starting point
Application-focused analysis
Technology-focused analysis
Choosing an application
Level of detail
On time, on spec
Routes through the process
Phase 1: Understand the technology
Step 1: The basics
Step 2: Features
Step 3: Potential applications
Step 4: Eliminating distractions
Phase 2: Taking the application’s point of view
Step 1: The basics
Step 2: The competition
Phase 3: Timing
Step 1: Ramping up
Step 2: Roadmap and uncertainty
Step 3: Evolution
Phase 4: Coming to a conclusion
Phase 5: Reality checking
Step 1: The application
Step 2: Your own work
Step 3: Fixing your analysis
Summary
Case Study Part I: Research and Analysis
A quick introduction to neuromorphic engineering
A quick introduction to photonics in computing
Getting down to work
Iterative trawling
Redirection
Moving forward
Moving the goalposts
Toward the finish line
Wrapping up
Chapter 5: Audience and Explanation
Audience
Showing respect
Preparation
Multiple audiences
Purpose
Explanation
Jargon
Visualization
Photos
Diagrams
Graphs
Videos
Different audiences, different images
Figure captions and legends
Copyright and plagiarism
Summary
Chapter 6: Technical Argument and Structure
The technical argument
What is the vision?
What is the status quo?
What is the technical problem?
What are the competing solutions?
What is the new solution?
What are the obstacles?
What is the prognosis?
Structure
Weight
Writing the title and introduction last
Title
Introduction
Conclusion
Variations on an outline
Outline for a basic research project
Outline for a review
Summary
Chapter 7: Credibility
Show, don’t (just) tell
Be honest, authoritative, and accurate
Prepare for objections
Signposts, resting points, and flow
Paragraphs
Topic and linking sentences
Connector phrases
Non sequiturs
Sentence length
Section and subheadings
Make it easy to read
Voice
Repetition
Grammar
Legibility
Mathematics, code, data, and other technical detail
References
Don’t give rough drafts to readers
Summary
Case Study Part II: Report
Will silicon photonics speed up deep learning?
Optimizing neural hardware
New viability for photonics
Discussion
Conclusion
Epilogue
References
Index


Explaining the Future

EXPLAINING THE FUTURE How to Research, Analyze, and Report on Emerging Technologies

Sunny Bains

Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© Sunny Bains 2019

The moral rights of the author have been asserted

First Edition published in 2019
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2018948523

ISBN 978–0–19–882282–0
DOI: 10.1093/oso/9780198822820.001.0001

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.

For Stuart and CJ

PREFACE

Will this new technology solve the problem its inventors claim it will? Is it likely to succeed for any application at all? What is the right technical solution for a particular problem? Can we narrow down the options before we spend a lot of money on development? How do we persuade our colleagues, investors, clients, or readers of our technical reasoning?

Whether you’re a researcher, a consultant, a venture capitalist, or a CTO, you will need to be able to answer these questions systematically and with clarity. Most people learn these skills through years of experience. However, they are so basic to a high-level technical career that they should be made explicit and learned up front, making the whole learning process more efficient.

This book will provide you with the tools you need to think through how to match new (and old) technologies, materials, and processes with applications. Specifically, the first chapter covers the questions you need to answer, while the second looks at how to structure your research to answer them and points you to different resources that you might not have thought to use. Chapter 3 discusses how to decide whose opinions you should trust, whether in writing or in person, and whose you should treat with caution. In Chapter 4, we switch gear and focus on technical analysis, bringing together all the information you have gathered into something meaningful. To help you visualize what needs to be done here, this section includes several canvases that can be blown up and used to structure your material. These will help you identify opportunities and difficulties, eliminate dead ends, and recognize where pieces of the puzzle are missing and you have to do more research.

The final three chapters will help you think about how to communicate your conclusions. Chapter 5 starts with the most important part of the communication process—the audience—and how it dictates everything from how you set the context for your report, to the kind of jargon you use, to the depth of explanation you go to. In Chapter 6, the critical basic steps of a technical argument are covered, along with clear, pragmatic explanations of how they must be ordered to bring your audience along with you. Chapter 7 essentially covers how to be believed. It teaches you how to second-guess your audience’s prejudices, how to avoid coming across as a salesperson, and how important it is to be honest about issues that go against your argument so that your readers will learn to trust you. It also provides advice on how to guide your audience through difficult material by using good writing and clear signposts. Finally, the book concludes with a case study showing worked examples of how all these techniques can be used in practice.

What you do with these skills is up to you. You might use them to ensure that you position your work to be ripe for funding opportunities, or to figure out who your potential customers are. Alternatively, you might use them to determine whether the claims made for a particular technology are valid: is it valuable, or is it vaporware? This book will teach you how to find the right information, ask the right questions, and interpret what you find without being swayed by hyperbole and PR. Whatever your end goal, this book will help you to make your case in clear, logical reports that are spoken and written at the right level for your audience.

Audience

The book is written for people with some kind of technical (engineering or science) background. It’s ideal for students, from motivated undergraduates to masters and doctoral candidates, in that it will help give you a framework for thinking about your subject, as well as tools for research, analysis, and technical communication. For graduates taking their first steps into consultancy, start-ups, or tech-sector investing, the book highlights the real-world issues that determine success in technology but are often neglected at university, as well as introducing you to some important audiences you will have to persuade in order to achieve success. Finally, the book will suit mid-career scientists and engineers moving from the lab to technical management and other careers that demand they be more strategic in their thinking.


For more information

Once you’ve finished reading, you can get more resources via our website. These include summary videos highlighting key concepts; downloadable versions of the canvases used in the book; instructions, templates, and check sheets for writing different kinds of documents and presentations; and more. Go to http://explaining-the-future.org and get full access by logging in as reader and using the password ETFbook1.


ACKNOWLEDGMENTS

There are many people whom I’d like to thank and who, one way or another, made this book possible. First and foremost, I’d like to thank all those I’ve taught – whether in industry, undergraduate students, or postgraduates, in the US or the UK – who have helped me hone my teaching of this subject over the last 20 years. This book is my answer to the many requests I’ve had for better, more comprehensive notes: I hope it suffices! Among the several thousand I’ve worked with, I’d particularly like to thank the hundreds of teaching assistants I’ve trained, and who have then helped me to help others. The hours we spent together developing your skills have been some of the most rewarding and productive of my working life (though not the easiest!), and the feedback and encouragement you’ve given me as you’ve seen the benefits in your own careers have kept me going.

Another group of people to whom I owe a debt are those who have written for me over the years and whose snippets of raw text I’ve used as examples of good and bad practice. I’ve not named these contributors because the text is old and unedited, and I didn’t want to embarrass them by drawing attention to their habits (even if they were good!), but I think of them every time I present these examples to students.

There are some specific people I’d like to thank. First, for more than a decade, Rashik Parmar of IBM has reminded me – even when it felt like no one else cared – that it is not just a luxury but a priority for engineers to communicate. Gary Lye, Chika Nweke (a former teaching assistant as well as a valued colleague!), and all the staff in the UCL Department of Biochemical Engineering also have my deepest gratitude for creating a supportive work environment that allowed me to focus on getting this book finished.

I’m also grateful to those who specifically helped me with Explaining the Future. Colin Hayhurst (Innovations Partnership Fellow at the University of Sussex) and Maurice Granger (chemical engineer and former teaching fellow at UCL) both helped by reading and giving really helpful feedback on drafts of the early chapters. Rose Gotto was extremely encouraging and helped proof my book proposal for Oxford University Press. Which brings me on to Sonke Adlung, who commissioned Explaining the Future, Harriet Konishi, Elizabeth Farrell, and the rest of the team at OUP. You made the editing of this book simple, straightforward, and relatively stress free (I wish the writing process had been as smooth). I would recommend working with you to anyone. Thanks also to the production team led by Lydia Shinoj.

I’d finally like to thank all my family for their help and support. One of my brothers, Jon Bains, suggested that I develop canvases for the analytical steps and helped me think them through. His time and thought were particularly invaluable. I also want to mention my nearest and dearest who had the job of keeping me going during the day-to-day process of writing, editing, and proofing the book while trying to get on with work and life. You made this book possible.


CONTENTS

1. Key Questions
2. Finding Answers
3. Perspectives and Agendas
4. Analyze
Case Study Part I: Research and Analysis
5. Audience and Explanation
6. Technical Argument and Structure
7. Credibility
Case Study Part II: Report
Epilogue
References
Index

CHAPTER 1

Key Questions

Headlines about research generally have the same form:

• “Breakthrough in Nanotechnology Will Enable Low-Power Circuits”
• “New Medicine May Cure Cancer”
• “Algorithm Designed by Evolution Will Make Our Computers More Secure”

These headlines intrinsically encode society’s value system when it comes to new technology: either today or sometime in the future, whatever it is, it must prove useful. Of course, there is blue-sky research and/or pure science, but funding for these is relatively scarce. In the UK, for instance, two-thirds of R&D is funded by businesses rather than by the government,1 and, even in universities and government labs, much of the work done is applied science and engineering.

This means there are two fundamental questions in technology: “What can this do?” and “What can do this?”. The first implies that you have a technology in mind and want to know how to apply it. The second implies you’re trying to build something (you have an application), have a problem to solve, and are trying to find the right technology to achieve that. These two questions define two different perspectives, both of which are important and both of which we’ll consider in the upcoming chapters. We’ll start by considering the case of one specific technology and what it can do.

1 See Office of National Statistics, Statistical Bulletin: Gross Domestic Expenditure on Research and Development, UK: 2014 (2016), http://www.ons.gov.uk/economy/governmentpublicsectorandtaxes/researchanddevelopmentexpenditure/bulletins/ukgrossdomesticexpenditureonresearchanddevelopment/2014, accessed July 6, 2018.

Question 1: What’s so special about this technology?

Some person, laboratory, or company has developed a new process, widget, algorithm, or material and is making great claims about its potential to change the world. Your job is to evaluate these claims, which means being skeptical. What is different about this approach compared to all the others out there (in commercial terms, what is its unique selling point)? What can it do? What does it make possible?

Let’s make this more concrete with an example. A company has developed a program that turns your webcam into an eye tracker that can figure out what you’re looking at on the screen and the emotion you’re displaying while looking in a particular direction. What’s so special about that? There could be all sorts of answers to this question. Maybe it can be used to catalog your likes and dislikes for advertising purposes. Maybe it can help diagnose mental illness. Maybe it can add a new dimension to gaming. Whatever it is, the answer will be some kind of claim that you can then take time to research and verify.

Most claims fall into one of three categories. The most obvious, simplest assertion inventors might make is that their invention’s performance is better than the competition in some way: the algorithm is better than other similar systems that already exist on the market because it’s faster, less processor intensive, or more accurate, or it works with less-controlled input. This claim may be strong or weak depending on whether it is unlimited or comes with a lot of caveats. Saying an eye tracker is the fastest ever made is very different from saying it’s the fastest of its kind, because the latter requires that you read the small print to see what “kind” that is.

Alternatively, the company might say that the new program represents an integrated solution created using existing but state-of-the-art technologies to address a specific new application: that of finding out how people react emotionally to advertisements. None of the individual elements may be exceptional in themselves, but together they form a system that is better suited to this task than anything else ever built. Alternatively, it may simply be the first attempt to address this exact application at all. Either way, this claim is only strong if the application is really interesting. It also raises the question: is it better suited to the task than other systems that could be built? Just because something is the first of its kind doesn’t mean it’s the best.

The third claim, which is generally only made if a technology is really new and different, is that it is potentially disruptive of some technology space. This wouldn’t necessarily apply to our eye tracker but might to a new material with some unique properties, for instance. Very-high-temperature superconductors (which allow electricity to flow with almost no resistance) would fall into this category. They could allow the creation of all kinds of devices that would have been impossible to build up to now and would be qualitatively different from those we use today.

A less dramatic, but more common, claim is that a new process or system is an enabling technology: that is, it makes possible further development in a related field. For instance, if you were able to produce a more efficient, scalable purification process, you might enable the manufacture of new drugs that would otherwise still contain toxic byproducts. Enabling technologies are extremely important but, like any other, they have to be evaluated.

Finally, there are new approaches to technical fields that have the potential to completely reframe how we think about their development. In artificial intelligence, for instance, the idea of bottom-up learning based on experience (making sense of information coming directly from sensors) was for decades seen as being much less important than knowledge-based expert systems programmed by humans. Over the last ten to fifteen years, this has changed, and what we now call “deep learning” has provided us with a completely new way of looking at the subject. (We’ll be discussing deep learning a lot more in the case study.) Systematization of a field can often have this effect (think of the periodic table), as can computer design tools that allow us to create circuits, lenses, mechanical parts, and chemicals without the skill and number crunching that was previously required.

If you are an inventor, finding out what claim(s) you can legitimately make for a new technology can be critical for getting funding, getting a job, or getting publicity. However, if your job is to evaluate the potential of research, identifying whether a claim is well founded is just the first step. The next step is to assess whether it is relevant to the task in hand: the application.

Question 2: What problem are you trying to solve?

Often, with the claims we’ve discussed, there is an application implied: an area where there is an existing problem that could be solved with the new technology. For instance, if you take a performance claim, some company might say they’ve developed the most fuel-efficient engine yet and imply that it will make cars of the future more eco-friendly. Validating this claim doesn’t involve redoing the efficiency measurements. Although there are cases of scientific fraud that would make this necessary, it’s not usually an issue. The problem is that this one measure of performance does not tell you everything you need to know about whether a development is likely to have a positive impact.

For instance, what if the high-efficiency performance only occurs for steady highway driving, and the engine is actually less efficient than existing engines in the city? What if the engine requires new fuel additives, isn’t compatible with current car design practices, or emits particularly toxic pollutants? What if the engine design requires a lot of a very expensive and/or scarce mineral? These are not rhetorical questions: just because the engine has some limitations or disadvantages doesn’t mean that it is useless. There may be applications where it’s by far the best option. But, without understanding the application requirements, there’s no way to answer these questions meaningfully.

Another example to consider is as follows: a group is claiming that they’ve created a computer using microelectromechanical logic gates. This kind of logic gate is much less efficient than that used in standard electronics: it is slower and heavier and takes up a lot more space on a chip. Oh, and it’s much more expensive. Useless, right? Perhaps not, if you’re trying to build some kind of failsafe device for a nuclear power station. Conventional electronics can be very easily (falsely) switched when exposed to radiation, and this makes them unreliable in any sort of disaster. Mechanical switches, thanks to the fact that they’re less efficient (require a lot more energy to switch), are more robust and therefore better for this kind of backup system. Bugs really can be features, and features really can be bugs. You only know for sure if you get into the detail.

Sometimes the problem is that the applications of a piece of research are many and varied (a good problem to have if you are looking for work with impact). In this case, the solution is to choose an example application (or a few) to work through that will help you make a case for the likelihood of success. When you get to the reporting stage, these examples will also make it much easier for you to persuade others of the value of the new technology.

Technical requirements

We usually start with the technical requirements because, if these can’t be met, the rest don’t matter very much. To get to grips with these, you need to get to know the subject by exploring and interrogating various sources and understanding their perspectives. We’ll get to these later. For now we’ll focus on the questions to ask.

A technical requirement can be almost anything that—if the new technology doesn’t meet it—could prevent it from working, or working well enough to be at all useful. One set of requirements would come from thinking of the technology as a process with inputs and outputs: so, the webcam algorithm we discussed earlier takes in image data and produces an output that describes both where on the screen the user was looking and what their emotional state was. The purification system takes in a drug in a given form with a given proportion of impurities and then pushes it out with those impurities reduced. Understanding what some system needs to ingest and what it needs to spit out is critical to working out whether it can do the job.

Another set of technical requirements relates to performance: how quickly, efficiently, quietly, or accurately the process needs to be carried out in order for a solution to be acceptable. The number of performance measures that can be applied is as varied as the number of applications served and the number of fields from which the technologies derive. Many require deep subject knowledge to understand exactly why they are important or even what they mean. Fortunately, that’s what experts are for (we’ll discuss them at length in Chapter 2).

Physical constraints such as size, weight, and power (aka SWaP) can be important in many applications. Whether or not a technology is viable may depend on its size, its shape, its weight, its strength, or the number of degrees of freedom in which it can move, and—although this may seem obvious—these everyday concepts mask a multitude of much more technical ideas. Strength doesn’t mean anything per se: it’s tensile strength, elasticity, hardness, and so on that matters. Which, precisely, are important for a given application, and which are not, are what you have to determine in your research.

Yet another issue to consider is what operating conditions a piece of technology will have to endure, and its related working lifetime. If a circuit board needs to be kept below room temperature to operate, then you’re unlikely to be able to install it inside a PC. If a building material dissolves in acid rain, then—whatever other great properties it has—you’re not going to want to use it in your roof tiles.

That these requirements exist may seem obvious and, when you’re in the thick of trying to design a system, you’ll be acutely aware of what each component or subsystem needs to do and/or withstand in order to fulfill its role. Engineers won’t choose products or use processes that don’t meet their exacting specifications when they are putting their own projects together. However, this doesn’t help us when we’re trying to evaluate technologies that don’t exist on the market yet. Essentially, we have to second-guess what the requirements will be. Where one product is straightforwardly just a replacement for another, it’s easy: the requirements already exist. But some requirements are not defined explicitly. Doesn’t dissolve in the rain is a key property of many building materials: so much so that we might not have thought to write it down. However, the creators of an advanced composite with many sophisticated properties might have overlooked this issue if they were focused on indoor applications. As the one evaluating whether the material could be useful for construction, it would be up to you to make sure that waterproof was on your list.

Identifying the technical requirements for applications that don’t even exist yet is even more difficult. It is possible to do this, however, as long as you are willing to deal with some level of uncertainty. By making educated guesses (or asking others to) about how the new application is likely to work, drawing parallels with similar existing applications, and taking subsets of their requirements that seem relevant and then thinking through their potential points of failure, you can come up with a workable specification. This won’t be definitive, but what it will do is give you some idea of what is critical for success.
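One way to keep such a specification honest is to write it down in a form that forces you to record which requirements are firm and which are educated guesses. The sketch below (not from the book; all names and figures are hypothetical, and the webcam eye tracker is just used as an illustration) shows the idea as a simple checklist comparing a candidate technology’s known figures against the application’s thresholds:

```python
# A minimal sketch of a "workable specification" as a checklist.
# Each requirement records the threshold the application demands,
# the direction of "good," and whether the figure is a firm spec
# or an educated guess. All numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    must_have: float        # threshold the application demands
    higher_is_better: bool  # direction of "good"
    guessed: bool           # True if this is an educated guess

def meets(req: Requirement, measured: float) -> bool:
    """Does a candidate technology's measured figure satisfy the requirement?"""
    if req.higher_is_better:
        return measured >= req.must_have
    return measured <= req.must_have

# Hypothetical spec for a webcam-based eye tracker:
spec = [
    Requirement("frame rate (fps)", 30, higher_is_better=True, guessed=False),
    Requirement("gaze error (degrees)", 1.5, higher_is_better=False, guessed=True),
]
candidate = {"frame rate (fps)": 60, "gaze error (degrees)": 2.0}

for req in spec:
    status = "meets" if meets(req, candidate[req.name]) else "fails"
    note = " (requirement is an educated guess)" if req.guessed else ""
    print(f"{req.name}: {status}{note}")
```

The point of flagging guessed requirements explicitly is that a “fail” against a guess tells you where more research is needed, not necessarily that the technology is unsuitable.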


Ethical and legal requirements

Technology doesn’t exist in a vacuum. Any discussion of what will make an application work technically should be followed (if not preceded!) by a discussion of how it will work ethically and/or legally. Take the eye tracker that detects our emotions, again. This was a real project intended to help online platforms give useful feedback to their advertisers. The question for me would be, who would want this used on them? It’s bad enough knowing that free services have access to all the data we give them deliberately, but think of all the data we could inadvertently be giving away if they had access to our webcams, from our taste in sexual partners to our adherence to certain political philosophies. Of course, you can think about ways to mitigate this: it’s off by default, and you have to turn it on, for instance, and maybe people are given some kind of inducement to use it. But what if this makes the whole enterprise nonviable? Likewise, what if you produce a purification system that produces sludge so toxic that there is no way to dispose of it legally? Or, maybe you can in some places, but at great risk to those living nearby.

Ethical considerations cover a huge range of different issues. Privacy is an important one, because it is covered by legislation. Just because you are in a position to gather data does not mean you are allowed to store it, process it, and use it for your own ends. People (in some parts of the world) have the right to know what information is being held about them so that, if necessary, they can challenge it, correct it, or delete it. A technology aimed at some future application that doesn’t take this into account is—at the very least—likely to encounter some surprises in its development.

There is also a lot of law around health and safety, and this can apply to every stage of a project. For example, in most of the industrialized world, there are laws requiring your place of work to take appropriate measures to protect you from getting work-related repetitive strain injury (which is often—but not exclusively—related to typing). In addition, there are other laws related to breathing in fumes or working too long at the factory where components are made, lifting boxes of parts onto trucks, developing eye strain where the components are being assembled, and even using the product once it’s been sold. If the application is to succeed, its component technologies will have to be fabricated in accordance with these rules.

A related concept here is exploitation. If an application requires a lot of labor but is not expected to bring in much money, then the temptation may be to pay the workers as little as they will accept to get the job done. In some cases, this may be so little that either they cannot afford a reasonable standard of living or they cannot afford to live without working an unreasonable number of hours. Both of these are unethical, and, in some countries, they are also illegal. Environmental concerns are also important to consider. There are obvious issues, like the potential toxic sludge of our purification process, but also many others that are more subtle. Life-cycle analysis is a tool that considers the impact of a product (or potential product), from the mining and shipping of the raw materials, to the energy used and pollution created during production, to the disposal of the final artifact once it has broken or become obsolete. For some industries, especially those related to nuclear or fossil-fuel-based energy, environmental concerns—and the laws put in place to address them—have had a huge impact on long-term viability. There are fewer legal sanctions related to the impact on society of new technologies. This is partly because they can be difficult to prove, partly because society can be harmed without individuals feeling they’re being harmed, and partly because some of the responsibility for harm has to be taken by individuals themselves. Arguments are made about supermarkets causing us to waste food, video games causing us to waste time or making us violent, and bad architecture preventing us from knowing our neighbors. In the short term, such arguments may not matter very much to investors. In the long term, doing the right thing often pays off. Unethical behavior is not always punished, but both the law and ­political scrutiny are having an increasing effect on how businesses run. 
In the UK, for instance, corruption (aka bribery) has been illegal for some time. Businesses not only have to behave ethically and conform to the law themselves but are expected to make sure all the companies in their supply chain are doing the same.

The most famous examples of such issues in recent years have been related to the use of conflict minerals in the electronics industry (mined by people enslaved by Congolese militias) and the working conditions of Foxconn employees making iPhones for Apple. In the former case, specific legislation was passed in the US requiring the tracking of materials from their origin to make sure that conflict minerals did not end up in consumer products. In the latter case, Apple was forced to at least pay lip service to exhibiting better corporate responsibility.

Another critical area to consider is the regulatory framework for the application area and geographical market you are planning to go into. If you are working on medical devices, for instance, it can take years of clinical trials in the lab, in animals, and in people to get permission to sell your product. Even then, you will likely be limited in what you can sell your product for. For example, even if your new insulin injector turns out to be useful for other drugs, if your trials only tested it with insulin, then it is quite likely that you will have to do a lot more testing before you are allowed to address your potential wider market.

This points to a problem that many people have in analyzing and positioning technologies: they think too narrowly about what the particular tech in question does, can do, and will have to compete with. We’ll come back to that again later.

Related to regulatory frameworks are issues to do with security and export control. Often, these issues have to do with what are known as dual-use technologies (those that can be used both for conventional commercial purposes and for weapons, surveillance, or secure communication). For instance, there was an infamous case where the Apple G4 personal computer (released in 1999) was briefly classed as a supercomputer and therefore could not be exported to some countries that the US government deemed unfriendly (what Apple lost in sales, they seemed to more than make up for in good publicity!). Another example from the 1990s was the case of three lines of Perl code (related to cryptography) classified by ITAR (International Traffic in Arms Regulations) as munitions . . . and then printed on T-shirts. Dual-use technologies can be almost anything: centrifuges, chemicals, minerals, pipes, computers, code, machines that can be used to build other machines, and so on.
So, before you decide that your tech has a perfect application in another country, you need to be clear that you will legally be able to address that market. Do all these issues matter for the application(s) that you care about? Unlikely. Do any of them matter? It would be surprising if there were not one issue among all these that could represent an important obstacle to progress if it were not specifically addressed.

Commercial requirements

If you thought that we missed some important issues in the technical section, there’s a reason for that. Some important requirements look technical but are, in fact, commercial.

Compatibility is a good example of this. An application might be technically possible without being compatible with anything (a lot of prototypes are like this). However, selling a new technology often involves it being able to work—at least in the short term—with equipment that people already have. In fact, this factor alone could help a technology beat its much-better-performing rival. What compatibility means depends on the context: it could mean making sure your widget has the right connectors, your process uses the right chemicals, or your data is in the right format. It doesn’t really matter as long as you know in advance what’s required and you understand that you may have to address your market segment by segment if the compatibility issues are different for each one.

There is a whole constellation of requirements that are connected with the issue of cost. If the technology is going to be used as part of a product or service that will have to meet a certain price point when it is sold (whether that’s to consumers or businesses), this can affect everything from manufacturability (how easy and cheap it is to make), to the materials used, to overheads like server costs for cloud-based services connected to products, to any labor involved, and so on. All of these issues are completely dependent on the application, and all of them have a direct impact on the type of technology that can succeed.

And then there are the users and what they want and need. So many would-be products start as a great idea in somebody’s head—and end up in bankruptcy—because the great idea doesn’t appeal to or solve the problem of those who are expected to pay for it. Whole books are written on the subject of business models and product design, both of which are far beyond the scope of this one. But, at the very least, a reality check is warranted. That involves going out and talking to people. We’ll talk about how to do that in Chapter 2.

Potential obstacles

With dozens of potential requirements, it may sound extremely time-consuming to figure out whether a new technology will be derailed by one of them or not. This might be fine for an investor or CTO, but not for a researcher or student. However, just a little research can take you a long way.

Let’s say you’ve been tasked to find out (quickly!) whether a new robotic technology—developed for use in car manufacturing plants—could also be used in robotic toys, as the company claims. Part of the work is done for you

because you know the technology is already in use in industry, so you can figure out what it can successfully do by looking at that example. Now that you understand what it does, do you really believe that it will work in toys? A five-minute discussion with a toy manufacturer will probably lead you to understand that there are two things that matter in that business: safety (a legal/ethical requirement) and cost (a commercial requirement). If you know enough about your application, you’ll see immediately the kinds of questions you need to ask. Will the behavior of the robot be as predictable in the chaotic environment of a home filled with children and pets as it is in a factory? If the robot malfunctions, is it strong enough to hurt anyone? How much does the processor that controls the robot cost? What about the actuators that make it move?

Where you do find problems, it may or may not be possible to work around them. It could be that an application is not feasible right now because it cannot support the cost of the computing power it needs. By waiting a couple of years for technology to naturally evolve and reduce in price, that barrier may disappear. On the other hand, there could be an intrinsic technical problem that requires a major research push to fix: these issues will be important to keep track of as you do your research and analysis.

If you don’t know enough about your application, and you don’t ask the right questions, you may be so blown away by the company’s demo that you forget to see whether the technology is really fit for the new purpose.

No problem for the solution? Be creative . . .

Sometimes a technology is just cool. It isn’t particularly designed to do anything, but it seems to have some features that make it special. It’s easy to get excited about such technologies and then quickly lose interest due to a lack of any obvious application. There are two things to do when this happens.

The first is to try to look for the nonobvious application. You do this by going up the layers of abstraction. What’s special about this material? It’s expensive but really strong. Where does strength matter but not cost? Could the material be used to trade off strength for quantity (less material for the same strength)? Think of as many different scenarios as possible in which the particular features of the tech can be exploited. Sometimes this will get you to applications, and sometimes to a set of higher-level features, that is, features that people outside your immediate technical area can understand without knowing too much about the

detail. Getting a really good feature set will allow you to talk to a diverse collection of people about potential applications: people from different disciplines, those at different stages in their careers, and so on. If you have a solution in search of a problem, you need to find the problem, and brainstorming with others is the quickest way to do this. (There’s a canvas to help you do this in Chapter 4.) However, it’s important to recognize that not all technologies have applications, and that not all applications are close to being marketable.

Question 3: What is the effect of time?

Timing is everything. It’s not enough to have the right technology for the right application; everything has to match up on cue. This is especially true when a good idea will need quite a bit of development before it becomes an actual product. There are two main reasons for this, progress and competition, but the latter is going to get its own section, so we’ll leave that for now.

Most people have heard of Moore’s law. It’s expressed in a lot of different ways, but essentially it means that the processing power on a chip doubles every two years. The law was very important in the semiconductor and chip-design industry because it set a benchmark for what you would expect the computing world to look like over a period of time.

Why is this useful? Imagine you are developing some kind of optical memory device and you think you will be able to store ten times more data in it than you currently can in a conventional memory stick. It will take you three years to get it on the market. Is it likely to succeed? If ten times more data represents the limits of the technology, then the answer is almost certainly no. By the time the devices come out, the advantage would be only 2–3 times the density. In six or seven years, conventional technology will have caught up, and in eight it will have surpassed the new optical memory. Not a good investment.

But if the tenfold advantage in data density is not the best the technology can do, but rather just the starting point, then that’s a different story. If you’re taking it seriously, you might make predictions about how your new technology will progress and how the conventional technology will progress, and then plot those on a graph. If the technology can go twenty years without being surpassed, you’re probably in good shape. Ideally, the lines will never cross, because the new technology is always

improving faster than the old. If you can’t predict how either technology will evolve—not even to the level of an educated guess—that’s an important piece of information too. Whatever you know (or don’t know) can be incorporated into your thinking.

This analysis works with like-with-like replacements for existing products (where a new technology seems to perform better than an old one). It’s not quite so simple if the market for a product does not exist. But the principle stands: it’s not enough to talk about something that may be better than what we have today. As time passes, technology evolves, and an analysis that doesn’t take this on board will have no credibility.

Incidentally, it’s important to recognize that time is critical in lots of other ways too: if the market’s not ready for a new product, it doesn’t matter how great or well-adapted it is. For engineers, however, the critical question is whether you are always going to be looking at the tail lights of the technology in front. If you’re already in the market, being worse than your competitors is survivable because people are inherently resistant to change, and first-mover advantage is real. For a newcomer to succeed, however, requires an immediate and impressive improvement on the status quo.
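The back-of-the-envelope arithmetic behind the optical-memory example can be made explicit. The sketch below is illustrative only (the function names and numbers are mine, not from the text): it assumes the rival improves on a Moore’s-law-style doubling cycle and asks how long an initial density advantage would last if the new technology stood still.

```python
import math

def years_until_caught_up(advantage, rival_doubling_years=2.0):
    """How long an initial density advantage lasts if the rival
    doubles every `rival_doubling_years` and the new technology
    does not improve at all (the pessimistic case)."""
    return rival_doubling_years * math.log2(advantage)

def remaining_advantage(advantage, years, rival_doubling_years=2.0):
    """What is left of the advantage after `years` of rival progress."""
    return advantage / 2 ** (years / rival_doubling_years)

# A 10x head start against a rival on a two-year doubling cycle:
print(round(years_until_caught_up(10), 1))   # ~6.6 years to parity
print(round(remaining_advantage(10, 3), 1))  # ~3.5x left after a 3-year launch delay
```

This agrees with the estimate in the text that conventional technology catches up in six or seven years; a real comparison should, of course, let both curves move, which is exactly what plotting the two projections on a graph does.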

Limits

It may sound obvious, but growth cannot continue forever. Even Moore’s law (the seemingly unending exponential increase in computer power) is grinding to a halt, thanks to a combination of technical and commercial limits.

On the technical side, the challenges of manufacturing are now on the nano- rather than the micro-scale. Think about the relationship between circuit density and yield. If you can reliably manufacture with, say, a few circuits per billion on a chip being faulty, then the number of faults on a chip of a given size will go up with density. If you need a perfect chip, that means you are having to throw away twice as many defective chips for twice the density of circuits. You can get around this either by improving your process to minimize the percentage of faults per circuit or by introducing redundant circuitry (which duplicates the function of other circuit elements) so that if something is broken, you can work around it. That redundancy cuts the number of unique elements you can put in your chip and, likely, the benefit of having a higher density in the first place.

The commercial pressures are just as real. Most people (most applications) don’t need more processing power, so if you build the equipment to make the latest chips, you may not be selling to a very big market. Since the

latest fabrication technology requires a lot of investment by manufacturers, the chips they produce will be more expensive. This can quash demand.

In the case of chip manufacturing, the big issue that engineers had seen coming for a long time was the end of the classical semiconductor. Semiconductors work because one material is doped with another. For every so many thousand or million atoms of one type, there will be one of another, the dopant, that changes the semiconductor’s properties. While this is fine when there are millions or billions of atoms in a circuit, the system breaks down as the feature sizes get smaller. Because the doping can only be controlled statistically (i.e. you know that roughly 1 in 1,000 atoms is going to be a dopant, but you don’t know exactly where that dopant will end up), there could be circuit elements or areas where the dopant doesn’t make it in at all, resulting in electrical properties that are not as expected.

As it happens, electronic engineers have found ways to work through and get around this limit by being creative, and commercial and other factors have ended up being far more important than physics. However, I would argue that paying attention to technical limits, where they exist, can help us to make better predictions about how technologies will evolve, and such an approach has the advantage of being easier to predict than the vagaries of supply and demand in an evolving marketplace.

Of course, every industry—indeed, every technology—is different. There may be no equivalent of Moore’s law for the case you are looking at, and no obvious point at which progress is likely to break down. This is where you have to get creative, looking at the track records of technologies that have features similar to those of the new one. There’s no guarantee that the analogy will hold up over time, but this kind of educated guess will generally prove much more productive than none at all.
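The statistical-doping argument can be illustrated with a quick Poisson estimate. This is my own illustrative sketch, not a calculation from the text: it asks how likely it is that a circuit region contains no dopant atoms at all, given a bulk doping level of roughly 1 in 1,000 atoms.

```python
import math

def p_no_dopant(atoms_in_region, dopant_fraction=1 / 1000):
    """Poisson estimate of the probability that a region of the
    given size contains zero dopant atoms."""
    expected_dopants = atoms_in_region * dopant_fraction
    return math.exp(-expected_dopants)

# As feature sizes shrink, the chance of a dopant-free (and hence
# misbehaving) circuit element grows dramatically:
for atoms in (1_000_000, 10_000, 1_000, 100):
    print(f"{atoms:>9} atoms: P(no dopant) = {p_no_dopant(atoms):.2e}")
```

At a million atoms per element the probability is negligible; at a hundred atoms, most elements contain no dopant at all, which is exactly the breakdown described above.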

Money, momentum, and market

Often, time pressure comes not from technological limitations or competition but from the evolution of the industrial, social, economic, and even political climate. This is most obvious in research and in investment funding, both of which are often driven by fashion.

In investment, this can cause a bubble, when there is more money to invest than there are genuinely good ideas to invest in. Waiting to push out an idea in an area that’s become trendy, perhaps because it’s not 100 percent thought through, might cause you to “miss the boat.” Once people start to lose money through all the bad ideas they funded, it can become difficult to find investment for even the best ideas.

Research funding can be much the same: an area may become “hot” for a while because it is seen to have some strategic value or is championed by someone in government but then disappear without a trace five years later. Researchers have learned to be opportunistic about this: they will bend their work to fit an in-vogue application so that they can get money to support their teams. They know that it’s critical to jump on the bandwagon when it starts rolling, ready or not.

Another issue related to timing is more structural. Industries go through cycles, and launching a new—but not disruptive—technology into the market is much less likely to be successful in one of the later stages. At the beginning (the innovation stage), everything is new and fresh and has potential, and so-called first-mover advantage can be knocked out by better technology, implementation, or marketing. As the technology expands, it’s all about performance and features: who can make the best product, offer new features, or make things run faster. As the market reaches its peak, success becomes less about high performance and dramatic changes and more about low cost and incremental improvement. The next stage is consolidation, using fewer resources to satisfy those same users, often through the merging of companies.

Finally, there are other factors that appear to have almost nothing to do with technology but can make a huge difference to the climate in which new products are released. Developments in unmanned aerial vehicles (aka UAVs, aka drones) would not be nearly so advanced were it not for wars and political unrest. Revolutionary embryonic stem cell research and development only became possible because recent legislation in some countries allowed it. Genetically modified food is not as profitable in Europe as it is in the US, partly because of legislation and partly because of a cultural distaste for it.
Nuclear energy is being phased out in Germany because of public opinion and political action to ban it as a result of the Fukushima disaster in Japan.

Getting started

Finally, it is important to remember that you cannot compare technologies that are in the market now with those that require development. Or, rather, you can, but you must take account of this important difference. Even if there seem to be few or no obstacles to a new technology entering the market, getting there takes time. Businesses have to raise money, hire staff, build infrastructure, and find suppliers. Just because a technology

seems competitive today does not mean it will actually be competitive if it launches four years from now. This has to be factored in. It is not impossible to succeed with bad timing. However, with technology being such a hit-and-miss business in the first place, good or bad timing can be a critical issue.

Question 4: What is the competition?

The last major issue to consider is competition, which is a much harder issue to think through than might be immediately apparent. Theoretically, competition can be separated into three different types (although, in reality, there can be many nested levels to work through).

The status quo

The most obvious benchmark for comparison is always the status quo (how things work now). How is the problem you are trying to solve with the new technology addressed today? This is easy to research, verifiable, and generally uncontroversial. However, as we discussed in the last section, things are not going to stay the same, so judging a technology that’s currently in the market against something you plan to develop and introduce in three years’ time is not good enough. You have to assume that this competitor will be as busy developing their technology as you are with yours: you need to project their progress into the future too.

Technology in development

Future competitors are more problematic to identify. These are not actually in the market yet, but someone, somewhere, has high hopes for them. These are harder to track down because, for instance, most journalists will only judge a new technology on how it competes with what we use today. What you want to know is how it will rate against a host of other technologies that may already be in development for tomorrow. We’ll talk in Chapter 2 about the kinds of techniques you can use to track down this kind of competition (and their limits).

Something completely different

It’s unfortunate, but sometimes we get too close to a problem, and too wedded to a particular solution. Here’s a story (sadly, an urban myth!) that is a vivid example of the kind of thing I mean.

During the Cold War, when the USA and the USSR were competing for “firsts” in space (first in orbit, first to the moon, etc.), NASA spent years and millions developing a pen that could write without using gravity to get the ink flowing. In the end, they were successful. In the meantime, the Soviet cosmonauts were happily using pencils.

The story is doubly false (although, on some level, we all seem to want it to be true). First, pencils are not a good solution: you don’t want bits of graphite breaking off your pencil and floating around before ending up in your filtration system or your eye, or jamming up some machinery. Second, the space pen was actually developed by the Fisher Pen Company, with no NASA investment at all.

The reason the story is so seductive is because it provides a kind of misdirection, or magic trick. We’re all so busy thinking about the problem of the way pens work, we forget—even if only for an instant—that the point was not to make a pen but to allow astro-/cosmonauts to write. The simplicity of the pencil solution then seems elegant and obvious by comparison.

Disruptive technologies, those that solve old problems in completely different ways and (generally) using completely different business models, are difficult to predict, particularly for engineers. As technical people, we like to focus on the smallest solvable problem and get to grips with that. Disruptive technologies come out of thinking at a higher level of abstraction. But there may not be one such level but many.

If we take it at face value, the space pen is trying to solve the problem of how to write with ink at zero gravity. The pencil solution (even though it falls down with closer scrutiny!) is clever because it jumps up a level. This solution embodies the idea that astronauts probably don’t care whether or not they are writing with ink but just that they are able to write. In other words, it’s solving a more general problem. You could go up another layer of abstraction.
Does it matter that the writing be kept? If not, maybe one of those magic slates kids use (where you write with an inert stylus on a sheet of plastic over wax, and then you erase by pulling up the plastic) would work. Does it really matter that the thing is written? Maybe voice recognition could solve the problem, or a keyboard.

The original solution, the space pen, was a solution based on a lot of assumptions. Those assumptions may be correct. Or they may have been correct at the time but became less relevant as time has gone on. If you don’t question these assumptions, then—in looking for competition—your focus will be too narrow, and you will miss technologies that solve a broader problem.

Question 5: What are the features of each competitor?

For every problem, there may be many possible solutions, each of which has its own advantages and disadvantages. If the application is very narrow, some of these solutions could be ruled out very quickly because they don’t meet the cost, performance, likely time to market, or other requirements. But that’s not always the case. Where it’s more finely balanced, the only way to make sense of how the options compare is to do a kind of audit, working through as many of the features of each solution as possible that could work positively or negatively in any given situation.

This is an iterative process. A property that is advertised as a positive feature for one solution may not be mentioned at all by those in favor of another. Why? That’s what you need to find out. You also need to determine which of the features are most important to the different applications or market segments. Determining both the various features of the competitors and how they affect the applications are research-intensive questions (as are most of the issues discussed in this chapter). Chapter 2 is about finding the answers.
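One lightweight way to run this kind of audit is a weighted scoring table. The sketch below is entirely hypothetical (the competitors, features, weights, and scores are invented for illustration, and a real audit involves far more nuance), but it shows the mechanics: weight the features that matter most to a given application, then score each competing solution against them.

```python
# Hypothetical weights for one application: cost matters most here.
WEIGHTS = {"cost": 0.4, "performance": 0.3, "time_to_market": 0.3}

# Invented 0-10 scores for three kinds of competitor.
SOLUTIONS = {
    "status quo":         {"cost": 8, "performance": 5, "time_to_market": 10},
    "new technology":     {"cost": 4, "performance": 9, "time_to_market": 3},
    "different approach": {"cost": 6, "performance": 7, "time_to_market": 6},
}

def score(features, weights=WEIGHTS):
    """Weighted sum of a solution's feature scores."""
    return sum(weights[name] * features[name] for name in weights)

# Rank the competitors for this particular application.
for name, features in sorted(SOLUTIONS.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(features):.1f}")
```

Changing the weights for a different market segment can reorder the ranking completely, which is the point: features only matter relative to an application’s requirements.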


Summary

To determine the success of an emerging technology or application, you must ask the following questions:

What’s so special about this technology?
• Its performance?
• It provides an integrated solution?
• It’s disruptive?
• It’s enabling?
• It reframes the field?

What problem are you trying to solve?
• Technical requirements?
• Ethical and legal requirements?
• Commercial requirements?
• Potential obstacles?
• Problems finding problems?

What is the effect of time?
• Limits?
• Money, momentum, and market?
• Getting started?

What is the competition?
• The status quo?
• Technology in development?
• Something completely different?

What are the features of each competitor?


CHAPTER 2

Finding Answers

Research is an iterative process. Whether you’re doing a PhD or researching an investment, there is a process that involves loops, diversions, and periods of refocusing. Although there’s no one right way to do this, the following method may help you to get started. The end of this chapter defines the various different sources of information, explains why they are important, and provides some tips on how to find the most relevant stuff. Even those of you who already have research careers should hopefully learn a few tricks here. First, however, let’s start with process.

Getting organized

From a practical point of view, you’re going to be collecting a lot of information, and you want to be able to organize, reorganize, and search it easily. An e-notebook can be ideal for this. For a technical analysis of the type we’re looking at in this book, it makes sense to start by organizing the notebook as follows:

• Primary technology: If there is one, this is the technology you’re thinking of investing in, researching, sponsoring, and so on.
• Application(s): This is the list of potential application areas the primary technology might address, or the one application that you’re interested in analyzing.

Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies. Sunny Bains © Sunny Bains 2019. Published in 2019 by Oxford University Press. DOI: 10.1093/oso/9780198822820.001.0001


• Competition: This is where you go through each of the competitors for each application.

Within these sections, of course, you’ll make further divisions to cover different kinds of requirements, features, and so on.

As well as this, you’ll almost certainly need a reference manager to keep track of the technical papers that you look at as you go through. There are several good ones out there. Which of these you use may depend on what your colleagues use, the kind of computer you have, whether you want to receive paper recommendations, and so on. Make sure to install this near the beginning of your research (if you don’t have one already), and regularly think about how to organize (and reorganize) your papers. I strongly recommend that you make sure you have local copies of any papers that might be relevant. This will allow you to search their full text rather than just the abstract and keywords.

In the absence of software that neatly combines the key features of reference manager and notebook in one, it’s important that you think through how you are going to use the two applications you’ve chosen together. You might simply make sure that the pages in your notebook are the same as the folders holding your papers, or you might do something more complicated. An hour or two at the beginning thinking this through could save you many hours organizing and reorganizing later or looking for things that are not where you expect to find them.

Research can suck up your time. To do it well, you have to stay focused on what you’re trying to achieve. Whenever you sit down at the computer, therefore, you should be really clear about what specific question you’re trying to answer. If you find interesting leads that will relate to later stages of the work, it may be worth following them briefly, but your best bet is to clip the link/document/whatever into your electronic notebook, file it under the right section, and then get back to the job at hand.

As well as organizing materials (links, papers, etc.), you will also want to start noting down keywords.
It may also be useful to start building a concept/mind map from the very beginning, adding keywords as you go.
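As a concrete (and purely illustrative) example of the kind of organization suggested above, the sketch below builds a folder tree mirroring the three notebook sections; the section and subsection names are my own placeholders, not prescribed anywhere. The same tree can be reused as the folder structure in a reference manager so that notes and papers stay aligned.

```python
from pathlib import Path
import tempfile

# Hypothetical section layout following the notebook structure above.
SECTIONS = {
    "primary-technology": [],
    "applications": ["application-a", "application-b"],
    "competition": ["status-quo", "in-development", "different-approach"],
}

def build_notebook(root):
    """Create one folder per section (and per subsection) under `root`,
    and return the relative paths of the folders created."""
    root = Path(root)
    for section, subsections in SECTIONS.items():
        section_dir = root / section
        section_dir.mkdir(parents=True, exist_ok=True)
        for sub in subsections:
            (section_dir / sub).mkdir(exist_ok=True)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.is_dir())

# Build a throwaway copy in a temporary directory and list what was made.
print(build_notebook(tempfile.mkdtemp()))
```

Keeping the paper folders and notebook pages to the same names is the simple option mentioned above; anything more elaborate should be decided once, at the start, rather than reinvented mid-project.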

Example work flow

In what follows, I’m going to assume that you have a technology that you are looking to apply to real-world problems. You have a good idea of what

the features of the new technology are and so are happy to move straight to thinking about applications.

This is only intended as a starting point to illustrate how the research process can work: how you can set out to find answers to your questions. If the particular starting point (focusing on “What can this do?” rather than “What can do this?”) doesn’t work for you, Chapter 4 (on analysis) explains how you can use different focuses (using canvases) to work through different types of problems.

Which application is the most promising?

You have a specific technology you want to evaluate, so your first job is to think through potential applications. Realistically, you will start by using Google and other search engines to find information. There is a huge amount of good material out there. However, being adept at searching is important, as is being able to judge the credibility of the material you find. We’ll discuss this more in Chapter 3.

You should also be asking the people who are developing the technology what they think it should be used for. This could include the inventors, the funders, or the people in the lab working on the system every day. Everyone wants to believe that what they are doing is useful and will have put some thought into what their technology can do and why. While talking to them, ask them what special features the technology has and (if they have applications in mind) what specific requirements these relate to. You don’t have to believe what they say (we’ll talk about the agendas of different groups of people later). You just have to take note.

If you don’t have direct access to people who know the field, even a little, the next best thing is to look at what they have written on the subject. If you’re sufficiently technical in the area you’re researching, you can go straight to the technical literature published in journals or the proceedings of conferences. Even if you’re not expert in the area, reading the introductions and conclusions of these papers should give you some idea of what applications people think their technology will/should be applied to.

At this stage of the research, you shouldn’t restrict yourself to any specific project (the approach of one particular group or company). Chances are, there are several or many teams working on similar approaches. Find them. Find their technical papers, find the applications that they’re looking at, and decide whether they might also be of interest to you.
It could

be that the technology is great, but the work that first attracted your attention is not the best example of it. As you go through this stage, you should keep all of the technical material you gather, organizing and reorganizing as you go. Chances are, you’ll get a lot more out of it later this way.

What are the application’s requirements?

The next stage is to find out as much as you can about what it will take for any technology to succeed in the application in question, that is, the technical, ethical, legal, and commercial requirements, as well as any potential obstacles to success. For this stage, people are, again, ideal for pointing you in the right direction, but you should also be able to find a lot of the answers you need by looking at written resources. There may be relevant technical papers, for instance, and you may also find journalism about the applications in the technical press. Realistically, neither the technical literature nor the technical press will give you all the information you need if you’re trying to do a deep dive into a new application. What they can provide you with is a basic grasp of concepts and jargon. Once you have these, you should be in a better position to look for books that can help you push your expertise on to the next level. Even if your deadline won’t give you time to properly read them, skimming through a book can often be useful for giving you confidence that you know what the key issues are. The very best way to get information about applications, however, is to talk to the people who make them happen, once you’ve prepared your mind well enough to be able to ask the right questions and cope with potentially jargon-filled answers. An extremely efficient way to do this is to go to a relevant technical conference to immerse yourself in the field and surround yourself with people who are expert in it.

What is the competition?

The sources we’ve just discussed for exploring potential applications are exactly the ones you can use to find different current or potential solutions: people, the technical literature, the trade press, and conferences. Where companies and/or research groups are being open about the problems they are tackling, you can use all of these. The main thing to remember is that there are three types of competition: the way we do things now, solutions that are in development, and accomplishing things in a completely different way.

Researching today’s technology should be easy. It’s finding tomorrow’s competition that’s hard, especially because, in any emerging industry, you tend to find companies that are acting in stealth mode. They have every intention of entering a particular market but don’t want anyone to know about it. There can be many reasons for this: they may not want to give their competitors any extra motivation to improve their offerings, they may want to sign the contracts on funding from publicity-shy investors, or they may be in the process of nailing down their intellectual property. Despite this, there are a few techniques you can use that could allow you to get a glimpse of the competition that is out there. First, if you want to have inside information on an industry, tech bloggers can be really useful. On the other end of the scale, in terms of formality, patent databases can be a mine of information about who has been working in a particular area recently. A variation on this theme, especially for consumer products, is to search for trademarks, because the logos and product names that a company registers are a good clue to what they plan to bring out. Famously, journalists are told that if they want to get to the bottom of a story, they need to follow the money. In some cases, this can be true of engineers too. If you’re trying to find long-term trends in a potential competitor, annual reports—in conjunction with other stories in the technical press—can be helpful. If you’ve been following what a company says about its direction in a few annual reports, then this—in combination with stories in the technical press about the research it’s done, the people it’s hired, and the intellectual property it’s licensed or registered—can give you a very good indication about whether it is likely to become a competitor in a particular area. Going back to the technical side of things, another way to find competitors is to identify the key publications of individuals involved. 
You then use forward citations to find the work their papers have inspired, and check to see who has licensed their patents.

What are the features of the competing technologies?

The first task when trying to identify potential features and drawbacks of both the primary technology and its competition is to try to go and look at things in person: real applications, real experiments, and real devices being used or demonstrated in real time. If you can visit the lab or company or site where the new technology is being used or tested, you will learn a great deal that you couldn’t have otherwise. In part, this is because different senses are engaged when you see things for real: vision, hearing, smell, proprioception, and touch all get a chance to contribute, not just the parts of your brain that deal with words and pictures. This brings us to one last group who can provide information and may determine whether or not you are allowed to do a lab visit: those who work in business development and PR (public/press relations). As with all these sources, we’ll provide more detail later in the chapter.

A simple plan

If you were going to make your life as simple as possible while also being as thorough as possible in your research, your project plan might look something like the following. First, you find a conference that covers the application or technology you’re interested in. It doesn’t need to be the definitive conference in the field, but it should do a reasonable job of covering most of the major approaches and giving you an introduction. Most importantly, it should give you a good list of keywords for the next phase of research. Next, you go away and do some research online, looking at the trade press, technical literature, and patents. What are the major applications and/or the most important features of any given solution? Which seem to be the most important companies and research projects working on this? As you identify key groups, you should try to arrange site visits (if that’s possible within your time and money budget). Make sure that each one informs the next and that, as new issues arise, you go back and do the online research that will ensure you understand them fully. Be prepared not only with questions about the lab you are visiting but also with questions about concepts you don’t fully understand, issues relating to the application, the competition, and so on. This will ensure that you make the most of every opportunity and that, each time, you ratchet up your own levels of knowledge, insight, and expertise. Finally, you go to another conference on the subject. This time, you should find—even if you can’t follow all the technical issues—that you have enough expertise to understand most of what you hear. You should be able to see more clearly the similarities and differences between different approaches and understand why they are (or are not) important. Most importantly, this conference will give you the opportunity to talk to other experts on an almost-equal level. In fact, if you’ve been able to follow the rest of the plan, by this stage you should know almost as much about what’s going on in the field (at least at a high level) as they do and be able to ask really good questions. This will allow you to consolidate the knowledge you have, as well as fill in any gaps.

Real life steps in

Most people reading this book will not have the time, the money, or the access to follow this ideal route from being a novice to an expert. The main principles to bear in mind are as follows:

• Do your homework: Try to do as much online research as possible.
• Get regular reality checks: If you can’t do lab visits or conferences, at least make sure to talk to people, whether in person or virtually.
• Remember, everybody lies: Just because someone is an expert and they say something, that doesn’t mean it’s true . . . not even if it’s in print.

We’ll deal with this last point in Chapter 3. It is key to understand that everyone writes from their own perspective and agenda, and to figure out what these are for your sources. If you can do this, it will make analysis of the information you have gathered much more straightforward.

Types of sources

Keywords

These are critical to all research, because, without them, you will never find the material you are looking for. Very different terms can be used to mean very similar things: consider the relationships between artificial intelligence and machine learning; hydrocarbons and oil and gas; and CMOS and semiconductor processing. In each case, you might well find information of interest if one keyword matches, even if the other (your chosen topic) is nowhere to be found. Keeping track of these keywords is worthwhile because they change over time: words come in and out of fashion. Also, similar approaches to the same problem may be called different things depending on your community, your industry, and your institution. You don’t want to miss crucial information because you didn’t know the right keywords to look out for.
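This bookkeeping can be sketched as a small lookup table. The sketch below is illustrative Python; the map contents are just the pairs mentioned above, not a real vocabulary, and you would grow the map as you research:

```python
# Hypothetical synonym map: alternative names the same topic goes by.
# Terms drift over time, so revisit and extend this as you research.
KEYWORD_MAP = {
    "machine learning": ["artificial intelligence", "deep learning"],
    "oil and gas": ["hydrocarbons", "petroleum"],
    "semiconductor processing": ["CMOS"],
}

def expand(term):
    """Return all search terms worth trying for a topic, synonyms included."""
    return [term] + KEYWORD_MAP.get(term.lower(), [])
```

The point is not the code but the habit: every time you meet a new synonym, add it to the map and rerun your earlier searches with it.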

Search engines

The web is a brilliant source of information if you use search engines creatively. The problem, of course, is not finding information at all, but finding the right kind and quality. For instance, put in the name of a researcher, the kind of technology they’re working on, and the words +corporate or +company or even +Corp. or +Ltd., and you might find information about a start-up company they are working on. Search for keywords related to the application with some similar company-related words, and you may find start-ups in the field. If you’re lucky, you’ll then find websites with all sorts of information. The key to smart searching is to think not only of the subject or person you want to address but also of the kind of information you are looking for. For instance, if you add the words references or bibliography to the rest of your search, you’re more likely to pull up technical papers. If you add the name of the company or institution that the researcher is from, you’re more likely to find technical papers by that person (rather than those that cite that person). If you think carefully, you can reduce millions or thousands of search engine hits to a more manageable tens or hundreds, with the most relevant more likely to appear on the first page. There are many advanced search functions, most notably excluding words that should not appear (by using a minus sign) and requiring words that must appear (by using a plus sign). Time spent experimenting with the more sophisticated features of the various search engines you expect to use (whether they search patents, the technical literature, or the whole internet) is time well spent. The more intelligently you use them, the faster you will find the information you need.
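Composing such queries mechanically helps you stay systematic when repeating a search across several engines. The Python sketch below assembles a query string from the operators described above (quotes, plus, minus, and a site restriction); operator support varies by engine, so treat the syntax as an assumption to verify rather than a guarantee:

```python
def build_query(topic, require=(), exclude=(), site=None):
    """Compose a search string using common operators: quoted phrases,
    '+' for required words, '-' for excluded words, and 'site:' to
    restrict results to one domain. A sketch, not engine-specific syntax."""
    parts = [f'"{topic}"' if " " in topic else topic]
    parts += [f"+{w}" for w in require]
    parts += [f"-{w}" for w in exclude]
    if site:
        parts.append(f"site:{site}")
    return " ".join(parts)
```

For example, `build_query("electroactive polymers", require=["Ltd"], exclude=["toy"])` yields `"electroactive polymers" +Ltd -toy`, which you could paste into any engine and then adjust by hand.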

Technical

The technical literature

Journal articles, conference proceedings, and so on can be daunting because each paper is written for a specific technical audience (which may or may not include you). This means that most articles will likely be filled with jargon. However, they are generally the most reliable account you can find of a specific piece of work because they are (mostly) peer-reviewed. This means their content was examined by other experts/competitors before publication. They are great starting points for your research, in part because the introductory section often highlights the status quo and competing work, and in part because they include references to related work that may be worth looking at.

These days, you can often find the papers you need using the web, but accessing them may be a different story. If you are affiliated with a university or a company that subscribes to a lot of journals, then it may simply be a question of acquiring the right access credentials or going in through a secure network. However, if you’re independent, it may be expensive to get access. On the plus side, open access journals are becoming increasingly common, and abstracts can almost always be found for free and may be enough at the early stages of the research process. One caveat, however: not everything is available using a typical web-based search engine (even Google Scholar). There are often special collections of articles that are held by individual publishers, learned societies, universities, and so on, and these would be difficult to find by doing a simple web search. If you have access to a technical library, it is definitely worth tracking down any such collections that may have the material you need. Another thing to remember is that there is often a long time lag (months to a year) between when work is completed and when you can read about it in the literature. This is another reason why talking to people can be so productive. You can find out what’s going on now and ask for the latest information. They might not give it to you, but even a hint can lead you to another thread that you can try to piece together through other means. One last thing: if you find that the technical literature trail goes cold (you’ve been following a series of papers by a group, individual, or company, but they suddenly stop), it’s worth checking the patent databases instead (see “Patents”). When a decision is made to commercialize a new technology—or even to consider doing so—the emphasis switches from publishing it to protecting it.

Forward and backward citations

If you find a paper that you think is interesting, you can do a search for forward citations to find other people who thought it was interesting too (i.e. people who cited that paper in their own work). Such citations can be very useful because they allow you to move forward in the life of the original idea to see how it is being used now (and by whom). A backward citation search, on the other hand, can be done by simply finding interesting papers in the reference list or bibliography that comes with a paper. These can be very important because there are often key papers that everyone feels they have to reference (to do with a technology, an application, or both) that then have the power to unlock the literature further. So, you can start with one paper, go one or two steps back to find a really key publication, and then find forward citations from that; that way, you have a good chance of finding most of the people working in a particular area. The one thing you must remember, however, is that even small shifts in perspective, approach, or background can mean that what people see as the key papers in their field may differ, which means they may not all be caught by the forward citation approach. Sometimes, this artificial fracturing of the literature happens for benign reasons (ideas grow up independently in adjacent fields and never quite come together). At other times, it’s a result of people attempting to make their work appear distinct from others’, allowing them to become a big fish in a small pond and add to their prestige.

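The backward-then-forward strategy can be sketched with a toy citation graph. The papers and reference lists below are entirely hypothetical; in practice, you would pull this data from a citation database:

```python
# Hypothetical reference lists: paper -> papers it cites (backward citations).
references = {
    "new_paper": ["survey_2015", "method_2016"],
    "method_2016": ["key_1998"],
    "survey_2015": ["key_1998", "misc_2010"],
    "rival_2017": ["key_1998"],
    "rival_2017b": ["misc_2010"],
    "misc_2010": [],
    "key_1998": [],
}

def forward_citations(paper):
    """Invert the reference lists: who cites this paper?"""
    return {p for p, refs in references.items() if paper in refs}

def key_paper(start, depth=2):
    """Walk backward citations up to `depth` steps from a starting paper
    and return the most forward-cited paper reached - a crude stand-in
    for spotting the 'key publication' everyone feels they must cite."""
    frontier, seen = {start}, set()
    for _ in range(depth):
        frontier = {r for p in frontier for r in references.get(p, [])}
        seen |= frontier
    return max(seen, key=lambda p: len(forward_citations(p)))
```

Notice that the forward citations of the key paper include a group (`rival_2017`) that never appears anywhere in the starting paper’s own citation trail: exactly the kind of competitor this technique is meant to surface.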
Books and book chapters

Although a good book about your application (if there is one) can save you a huge amount of time, a bad one can waste almost as much. It’s important that—before you jump into spending a lot of time reading one person’s point of view—you get validation that it is one that will help you. This might come in the form of a recommendation by one of your expert contacts, or because you’ve done your homework and know that a particular book is covering the application in a way that’s helpful to you. The main thing with books is to be selective and to remember that the author is telling the story of their field from their own point of view (see Chapter 3). Note that, as was mentioned before, it often doesn’t make sense to read the whole book, even if it is relevant. In any project where time is constrained, it makes sense to skim for sections of relevance and only read carefully where you know this will help. This can be difficult for people who are methodical and like to be thorough (a large fraction of scientists and engineers), but it’s important to maximize the information you can gather in the time you have. Book chapters can also be extremely useful, especially in relatively new application areas where the field hasn’t had a sufficiently long period of stability for someone to have time to write a book. For instance, you might not find a whole book on sensor networks for urban applications, but instead a good chapter on them in a book about smart cities.

Commercial/technical

Patents

Intellectual property laws allow companies to protect their inventions in return for releasing their ideas to the public. This trade-off is intended to benefit progress in technology because, without the guarantee that no one else can steal and profit from their innovations for a decent period of time, everyone would keep them secret. Until relatively recently, patent searches were expensive and difficult: this is no longer true. You can now search patent databases like Google Patents or Espacenet for free. Patent databases are like any other: the better you are at searching, the more relevant the information you will find. Remember to look not only for solutions to a narrow technical problem but also for different ways of addressing the wider application (at least initially). This will help expose you to approaches you may have missed otherwise. Then, when you’re trying to narrow the search, remember to focus on patents filed recently (most likely for companies in stealth mode) and to think carefully about the kinds of terms that you would expect to see in inventions related to the problem you are researching. Also, rather than target a subject, you may choose to target individuals or companies who are doing work in a particular area. If they have filed a patent, this may well be an indication that they intend to commercialize it. Once you have an interesting patent, you can use all the other tools we’ve discussed to further research the company, licensees, the individual patent holders, and even any new keywords. These links may lead you to information about efforts in this area that could end up being important. For instance, you might find more papers by the inventors, which give more information about what the invention will most likely be used for, or a story about how the company that owns the patent has done some kind of licensing deal with a company in the application field. All of this is good information. Unfortunately, patents are deliberately written to be difficult to read. It’s probably easier to consider them as confirmations of activity, or links through to further information, rather than as technical resources in their own right. For this reason, you may find that your research involves several cycles of talking to people, going to the trade press, going to the literature (including patents), and then back around again before you have enough context to determine what is and isn’t important.

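A recency-and-keyword screen over patent records, of the kind just described for spotting stealth-mode activity, might look like the following Python sketch. The records, field names, and cutoff date are hypothetical; real databases such as Google Patents or Espacenet have their own export formats:

```python
from datetime import date

# Hypothetical patent records, as they might come from a database export.
patents = [
    {"title": "Electro-active polymer actuator", "assignee": "Acme Robotics",
     "filed": date(2018, 3, 1)},
    {"title": "Valve assembly", "assignee": "OldCo",
     "filed": date(2009, 5, 2)},
]

def recent_filings(records, keywords, since):
    """Keep filings on or after `since` whose titles mention any keyword -
    a rough first screen for possible stealth-mode competitors."""
    kws = [k.lower() for k in keywords]
    return [r for r in records
            if r["filed"] >= since and any(k in r["title"].lower() for k in kws)]
```

Any assignee that survives the screen then becomes a candidate for the other techniques in this chapter: checking their papers, their hires, and their licensing deals.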
The technical press

Many different kinds of publication can fall into this category. At the more populist end, you could include the science, technology, and environment sections of quality newspapers and magazines (such as The New York Times, The Guardian, The Economist, and The Atlantic). Moving toward the more specialist end of the market, you will find magazines with names like Laser Focus World, Bioprocess International, and World Nuclear News. In fact, these are relatively broad titles in the technical world. Websites and newsletters are often set up for even narrower fields, such as the World Wide Electro-Active Polymer Newsletter (about artificial muscles and published by the Jet Propulsion Laboratory). And then, of course, there is a spectrum of titles with different levels of geekiness in between. Not all publications will include a lot of technical material. Some will have a commercial focus and cover industry issues (e.g. takeovers, new business, personnel shifts, market analysis), while others, especially those produced by technical membership societies, will likely devote more space to research. The only way to know what’s there is to ask around and then go out and explore the publications in your fields of interest. The advantage of the more specialist publications is that they are written for an audience of experts. Although this can make them more difficult to read, it also means that the information is more likely to be accurate and the technological problems less likely to be oversimplified. So, if you are interested in a technology that will serve the water purification industry, the place to look will be a publication with a name like Water Technology. To find these, start with any regulatory bodies, membership societies, or trade organizations related to your industry: the British Computer Society, the Institution of Civil Engineers, the Society of Plastics Engineers, and so on.
See if these groups have publications related to your application. You can even do something as simple as searching for “trade publication [your keyword here].” Just remember that if your keyword is too narrow, you might not find what you’re looking for, even if it is actually well covered by a broader publication. Before you decide to trust a website (especially if you’ve found it yourself rather than having it recommended to you), make sure to spend a couple of minutes trying to gauge its credibility. Misinformation is not confined to gossip and politics: it also appears in tech. Every company and research university has a PR person whose job is to send out positive stories about the work done by their scientists (whether or not it is genuinely important). Some news sites will pick up this material for free and publish it as a medium through which to sell advertising. Their interest is not in the quality of the science but in how much traffic they think it will draw, so—even though it’s a real story and may even have been edited and given a byline—that’s no guarantee that it’s worthwhile (or even accurate). Once you find the right publications, the next trick is to pull out the information you need. The chances of finding a recent article covering your exact field of interest are slim. More likely, you’ll have to make do with something a year or two old. Either way, this piece will only be a starting point in helping you track down all the criteria important to your application. However, unless you’re unlucky, you should find at least some material that is relevant and, perhaps more importantly, looking at the site will get you reading about the industry more generally. As you read more about the way it functions and operates, the technologies that have succeeded and failed, and current goals and concerns, you will start to get a sense of what is really important. As you do this, you should constantly be noting down criteria that might be relevant . . . even if you decide (in the end) that they are not.

Industry bloggers

Usually insiders who regularly write or podcast about what’s going on in their industry, tech bloggers can sometimes offer insights that no other source can. Although the technical press covers some of the same stories, bloggers are generally less shy about reporting rumors and tidbits of information that may be denied by PR people (the gatekeepers of corporate information) but may nevertheless be true. On the other hand, this information is (on average) less likely to be true, and the biggest problem with bloggers is that they range hugely in quality, from the eerily prophetic to the unerringly wrong. The best way to use tech blogs is to read/watch/listen to a lot of them regularly over a period of time, with an eye on more traditional sources of information. Bloggers who know what they are talking about will eventually be vindicated, and others will be proven wrong; after a while, you can get a good sense of who is worth bothering with.

Also—as with everything else—remember to ask around before you start reading lots of blogs. People who know the field can point out both the good ones and the bad ones, instantly saving you a lot of time and effort.
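Keeping score of which bloggers are eventually vindicated is simple enough to sketch. The pair format below is an illustrative assumption, not a real API; `None` marks predictions that can’t be checked yet:

```python
def track_record(calls):
    """Fraction of a blogger's settled predictions that proved correct.
    `calls` is a list of (claim, outcome) pairs; outcome is True, False,
    or None for claims that cannot be checked yet."""
    settled = [ok for _, ok in calls if ok is not None]
    return sum(settled) / len(settled) if settled else None
```

Even an informal tally like this, kept over months, turns a vague impression (“this blogger seems well connected”) into something you can defend.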

Industry reports and roadmaps

Especially when you are working on issues of timing (how applications and technologies will evolve), you may find it useful to refer to reports written by industry bodies and consultants. These can tell you what the industry is expecting over the short-to-medium term. A well-known example is the International Technology Roadmap for Semiconductors, which many companies (from those designing electronic chips to those planning to use them in systems) use strategically so that they know what’s coming. There can be dangers with these, however. Industry bodies move slowly and can be political, which means that the emerging technologies of today may not be dealt with in a report that started its way through committees two years ago. Consultants’ reports should be more up to date but can be expensive and may not include the information you’re looking for (which you won’t necessarily know until you pay for them). Also, you need to ensure that the assumptions upon which these reports are based are valid. If you’re working on a disruptive technology (or if others have one in the pipeline), then they may not be.

The outside world

People

People are great, interactive sources of information. You can ask them questions, get them to explain things in different ways, run ideas past them to see what they think, and so on. However, depending on whom you are talking to, they will have their own limitations and agendas (as will their writing). We’ll discuss these further in Chapter 3. There are generally two main difficulties in finding people to talk to. The first is identifying people who actually know what they’re talking about. Just because you’ve found an interesting paper with someone’s name on it doesn’t mean that they know much about the research. They might have helped to run a particular experiment or built one of the gizmos used in it but know little or nothing about the context of the work.

The first author should know about the research in detail but may not have exactly the expertise you need. You may have to ask to be referred on or check out web pages to work out who knows what on any particular project. The second difficulty is finding people who are actually willing to talk to you, especially if you have nothing to offer them in return. If you’re a technology transfer specialist, or a potential investor or sponsor, you have a reasonable chance of getting busy people to make time for you (because you’re potentially offering money in return!). If you’re not, then it can be more difficult. It’s still worth trying, however, because people who know the field can often clarify your misconceptions very quickly. To get the help you need, you must be prepared to ask interesting, well-informed questions, which means doing your homework in advance. No one has any interest in doing your research for you. Don’t contact people until you’re really stuck, and make sure that—when you do—you make it clear what you’ve done to try to find the answers yourself. If you contact someone too early, you may put them off forever! If you’re at the very beginning of your research and having trouble getting started, therefore, don’t go straight for the most important people in the field. Instead, ask those who are more easily accessible, even if their knowledge is a bit shallower. This might include people in your company/university, people at low-level meetings and conferences (see “Conferences”), and people who are otherwise local. Be clear that you understand that their time is precious, and ask if you can pick their brains. Be receptive to what they say (even if it’s rubbish, you don’t need to act on it!), and take a lot of notes. Generally, tracking people down once you’ve identified those to whom you want to speak is the easy bit.
Most people have LinkedIn pages, web pages, or predictable e-mail addresses within their companies: unless someone is really paranoid, you can usually find a way to contact them after a few minutes of searching online. Once you’ve identified the person(s) you want to speak to or correspond with, it’s important that you contact them in a respectful way. Introduce yourself properly, explaining who you are and why you need the information. If you know people in common or have been referred by someone else, mention that person’s name. If you think that the work you are doing may provide some benefit to them, say so. Explain what you have been doing to investigate the subject yourself, and why you feel you’ve reached a point where you can’t proceed without expert help. Then, explain what you want to discuss.

When you contact them, make sure that you give them plenty of time to respond, and make it as easy as possible to do so. They will have their own deadlines and travel schedules, and you will have to work around them. If there are time differences, it can be very difficult to arrange a call. If you have a small number of reasonably self-contained questions, you can ask them and suggest that they respond by e-mail. You can also take the burden off them by saying you’d be happy with a referral to someone else who has a better grasp of the topic (if they’re not the right person) or to a technical paper or other resource. All this said, by far the easiest way to get access to people is by going to an appropriate conference.

Conferences

Technical meetings are a great way of getting into a new subject and talking to experts who can put things in perspective for you. Ideally, you want to find one where one or two of the speakers are directly relevant, that is, people you would have wanted to speak to whether they were at the conference or not. Their presence makes it more likely that others attending will be interested in the same issues you are. To get the most out of a conference, you should attend talks strategically: go to the presentations of people you are interested in and topics that are relevant (or might be). Ask questions, if you can think of good ones. After the talk, or at the end of the session, introduce yourself to potentially useful speakers, but be respectful of their time. If you need something quick, concrete, and connected to their presentation, you might ask them questions there and then. If you need a more general discussion, you can set a time to meet later in the day/week, or ask if you can call them after the event is over. Don’t forget to send them a LinkedIn invitation and/or give them a business card so that they remember you. If you do this systematically, you’ll have a great list of contacts by the time the conference is over. Sometimes the most useful thing that you can ask a contact is whether or not your own understanding is correct. Explain to them what you’ve gleaned from your research so far. Get them to correct you if you’ve missed something or are emphasizing a side issue over more important criteria for success. Experts will not all agree (as we’ll discuss in Chapter 3), but every time you get feedback on your understanding, your insight will deepen.

Finally, if you can wangle an invitation, find a bunch of people in the field you’re learning about and try to hang out with them: go to the bar, go to dinner, and go to the coffee cart. You can learn a lot just by asking the odd question and then simply listening while the experts talk around you.

Lab, company, and site visits

Seeing work, equipment, companies, and people “in the flesh” can be an extremely powerful way of improving your understanding of a project or technology, and it can prompt you to think of many more questions than you might have otherwise. It gives you a great opportunity to see prototypes operating in context and also gives you the chance to talk to people eager to explain what you are looking at. If you get a chance to visit a lab or company, there are several things you should try to achieve before you leave. If you can achieve all of them, the visit will be a success, even if you decide that the technology is not! The first goal is to get a visceral experience of the technology and to decide whether it “feels” like it could be successful. This might sound like a typically nonrational goal in an age where emotion and “truthiness” often predominate over analysis and research, but that’s not the case here. Steve Benton, one of the founders of the MIT Media Lab and a leading figure in the imaging community, summed up this approach with his mantra “Show me the hologram!” What this meant was, don’t give me papers to read and mathematics to follow to tell me whether your new technology is worthwhile or not: impress me with the product. The Media Lab as a whole took on this philosophy (known as “Demo or die”), and it is widely understood that this was one of the things that made it so successful at promoting itself, at raising money, and at confronting real problems rather than leaving them to be dealt with at the next stage of development. In Silicon Valley, a term emerged to describe hardware and software developed with the opposite philosophy: “vaporware” is technology that is beautifully written up, pitched, described, and drawn, but that no one has really tried to build or, possibly, ever could.
Since anyone can write anything, say anything, make a diagram of anything, or render a simulation of anything, it’s really important to do a reality check and just see whether the technology, whatever it is, actually works in some version of the real world. If it does, that says three good things: the theory is sound, the system can be implemented in practice (at least to some extent), and the people

who developed the technology have been forced to grapple with and overcome real-world problems such as working with components that aren’t perfectly to specification, and coping with friction, overloaded servers, and non-ideal operating temperatures. Seeing the technology in operation also gives you a chance to notice things that you might not when reading a paper, and you may find you have questions you didn’t expect to have. Does the system really have to be that big? That noisy? Will it always need to be cooled to liquid-nitrogen temperatures? What’s that smell? Does what’s coming off it mean it needs to be under a fume hood? Of course, if you’ve done your homework, you’ll have your own questions about the technology too. Is the performance really what’s claimed for it? How far can it be pushed? How does this or that really work? What are the dependencies? How does it scale? In fact, a site visit gives you the opportunity to ask any of the many questions we’ve discussed in this book. But what’s particularly nice is that you’re doing it in a less formal atmosphere than sticking your hand up at a conference. This means you can get less guarded answers from the people you talk to. You can ask them to be realistic about where the technology is really likely to be useful and ask them about their competitors and their competitors’ flaws. If you’re writing the technology up for some kind of public forum—like a paid consultancy report—then you may want some of this to be on the record. However, a lab visit is not the time to be worrying too much about getting verbatim quotes or nailing down numbers: you can do that by e-mail later. The goal is to understand the issues so that you can go ask better questions of the competitors and others later. 
Depending on the size of the team, you will probably want to make sure not only to talk to the person at the top (like the professor or the CTO), but also to the engineers or students who are building the technology and trying to get it to work on a daily basis. Even if they don’t want to be quoted, they can give you great insights into the difficulties involved—the potential obstacles to success—which will again allow you to ask better questions of others. They may also have a different, even negative, perspective on the technology that you would not get from the boss who’s staked their career on the success of that particular approach. Such perspectives are extremely useful.


Business development and PR/comms people

Companies want to protect themselves and their interests, on the one hand, but need to be publicly visible and seen to be making progress, on the other. They don’t want to, or have time to, talk to everyone who might want information, so they generally employ business development and investor/public/press relations people (sometimes simply known as “comms” or communications) as gatekeepers. Their job is to make their company grow, look good, and get what it needs from the outside world. If you can show standing—that is, if you can convince these people that you potentially have something to offer—then they can be very helpful. So, if you identify yourself as a potential customer, investor, or partner (or a consultant who could connect them with potential customers, investors, and partners), then it is in their interest to help you. If they decide to do so, they can be particularly useful in getting facts, figures, images, press releases, quotes from the CEO, and all sorts of other public and semipublic information. If you’re lucky, they can also give you access to key players inside the company. However, it is rare to have the opportunity to really pick the brains of people on this basis. The business development and comms teams are there to make sure that the staff stick to the company’s official public position. So, whereas someone might make a grand claim for their technology when you are chatting to them in a bar after a conference, they are unlikely to repeat it when a PR or biz dev person is around. Whether the company line is overly optimistic or pessimistic will depend on the integrity of the organization involved. Either way, having a clear statement that you can then compare with the reality you uncover may tell you quite a lot about the project and how much you can trust the people involved.
As an example of how this can cause problems, when I was working on the project used as the case study for this book (see “Case Study Part I: Research and Analysis” and “Case Study Part II: Report”), one of my sources was told by his comms department that he could not speak to me without there being someone else on the call. However, I was concerned that this would be too limiting. Fortunately for me, he and I went far enough back that he trusted me and was willing to help, so he did it on a “backgrounder” basis instead. We had our talk, but I couldn’t use his name. We’ll cover the issue of trust more in Chapter 3, but the main thing

to understand about biz dev and PR people is that they can be helpful if you can provide the quid pro quo they are looking for.

Commercial

Trademarks and designs

Found by searching either national or international databases (you can find these databases easily with a search engine), trademarks and designs can offer forewarning of what projects may be in the pipeline but not yet public. You can search by the name of the company (if you have an idea of who might be active) or you can enter search terms related to, for example, the kind of product. You can also use Boolean searches to try to narrow down the options. However, as with other kinds of searches, you have to have a very clear idea what you are looking for and what kinds of keywords are likely to come up, or you will end up with far too many hits!
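The exact query syntax varies from database to database, so treat the following as an illustrative sketch rather than the syntax of any particular system. A narrowing Boolean search for, say, augmented-reality eyewear might combine terms like this:

```text
("smart glasses" OR "AR headset") AND (display OR waveguide) NOT toy
```

Quoted phrases, AND/OR/NOT operators, and field restrictions (owner name, filing date, and, in many trademark databases, the Nice classification of goods and services) are the usual tools for cutting a list of thousands of hits down to a reviewable few dozen.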

Annual reports

Public companies (those in which the public can buy shares) are required to publish annual reports to show their shareholders how they are doing. These reports vary a great deal, but generally consist of a mixture of PR material (what the company wants to tell you about itself) and required financial disclosures (what it needs to tell you to comply with the law). Among the useful things that can be picked up in an annual report are the vision/strategy of a company (at least, as far as they’re willing to disclose it), which sectors are more or less profitable for it, and recent acquisitions of other companies. The latter can, in particular, tell you a bit about what’s important to the company and maybe what they are working on, especially if you’ve already been reading the news and listening to the rumors and so have enough background to read between the lines. For instance, many argue that Apple’s investment in server farms was a major clue that they were working toward the launch of Siri (or something like it); at least, it was for those who knew how to read the signs.

Websites, press releases, and whitepapers

Companies try to promote themselves and their work via their own websites, by pushing out press releases that they hope will be covered in the mainstream or trade press, and by writing up whitepapers (a bit like

technical papers but often simpler and not subject to the reality checks that the peer-reviewed literature undergoes). Essentially, all this information is the same in that it is written purely with the company’s agenda in mind. The good news is that the factual information should be accurate, as there are legal reasons why misrepresenting the facts would be dangerous for the companies involved. It could be misleading, however. Even if posted material genuinely tells you about the company’s mission, products, and personnel (more or less), it will exclude a lot of important information, and any opinions offered are likely to be one-sided. In other words, you’ll need to weigh up the information before you use it. We’ll discuss that in Chapter 3.

Summary

The following sources will help you find the information you need:

• keywords
• search engines
• the technical literature
• forward and backward citations
• books and book chapters
• patents
• the technical/trade press
• industry bloggers
• industry roadmaps and reports
• people
• conferences
• lab, company, and site visits
• business development and PR/comms people
• trademarks and designs
• annual reports
• websites, press releases, and whitepapers


CHAPTER 3

Perspectives and Agendas

Gathering technical papers and other types of information is not enough. Eventually, you have to actually read them. I say this as the owner of books that I’ve never read (or barely read a chapter of) and which have sat on my shelves for decades. But just reading the material, and even understanding it, is not enough, either. Unbiased information is a bit like a free lunch: there’s no such thing. No matter how hard people try to be balanced, they cannot help but present information in a way that makes sense from their perspective and, probably, serves some agenda, even if that’s not done deliberately. You need to be able to separate reliable information from opinion and one-sided propaganda. If you cannot, then you will be misled, and your analysis and conclusions will be faulty.

The press and the trade press

To illustrate this, let’s take the example of the press: daily and weekly newspapers and magazines. In some countries, the fallacy persists that these publications are objective and look at the world dispassionately and in an unbiased way, just as scientists are supposed to. However, in most of the world, the agendas of the various publications are well known. In the UK, for instance, the Daily Telegraph is on the right of the political spectrum (conservative), and The Guardian is more on the left (liberal). In other countries, you often have the situation

where one newspaper supports the government line, and another favors dissenters. It goes deeper than that, of course. In a country at war (whether a trade war, a cold war, an ideological war, or an actual military war), you don’t expect to find the newspapers supporting the other side. We expect that kind of bias, and we probably only notice it if we’re from outside the country in question. One way to think of this is that the agenda of a newspaper is culturally baked into its articles, by which I mean that it’s not made explicit but is nevertheless there in the values and the assumptions journalists make when they look at the world. Readers generally take these for granted because they share these values and assumptions, which is the very reason they read these publications and not others. As you move away from the general press and toward more specialist publications, you’ll see this bias becoming more specific. For instance, you’re unlikely to find the Wall Street Journal or Financial Times supporting an end to capitalism, as capitalism is what these publications are about. Likewise, Nuclear Engineering International is unlikely to support a ban on nuclear power, Oil and Gas Journal is unlikely to be too negative about fracking, Car Magazine probably isn’t in favor of measures to put higher taxes on fuel, and PC Magazine is unlikely to run an article saying you’d be better off buying a Mac. When you read a specialist website or publication, you therefore enter a bubble in which certain valid points of view are not likely to appear. If you are an outsider to the field in question, this will probably be very obvious: you’ll have questions that are never answered. These are, likely, questions that people care about in the real world but that never seem to be discussed within the publication. And yet, the people involved—the writers and editors—are not deliberately trying to deceive you.
They’re just so used to writing about certain subjects in a certain way for a certain audience with certain beliefs that it doesn’t occur to them there’s any other way to look at the subject. This kind of bias is everywhere and extends deep into technology. For example, when mechanical machines were in fashion after the Industrial Revolution, people thought the brain must be like a mechanical machine. Then, hydraulic systems came into vogue, at which point people thought that the brain must be similar to them . . . then to computers, and then to quantum computers. When presented with a problem, engineers are likely

to gravitate toward solutions they can understand and build. This means that a computer scientist is less likely to suggest something mechanical as a solution, and a mechanical engineer is less likely to suggest a computer. There are different ways to view this phenomenon, most of them benign. It could happen because people want to be helpful and so suggest solutions where they can make a contribution. By not mentioning solutions from other domains, they are working within their own expertise (which is the ethical thing to do). But there’s nothing objective about this. Their solution to a problem will be totally dependent on who they are. This isn’t bias in the sense we normally mean it. However, to the receiver, it could have the same misleading effect. The point is that saying someone is biased is not necessarily a value judgment: it’s just a statement of fact. It’s up to you to try to understand the nature and degree of the bias so that you can filter it out.

Look for the naysayers

The most obvious kind of bias (and the one that you should be most sensitive to) involves people overselling their own technology. It’s therefore crucial to seek out those who take the opposite view, because they show up weaknesses in arguments and highlight criteria that inventors and other interested parties may not have discussed. Identifying that there is disagreement between the two camps is not enough. You have to move on to try to find the root of that disagreement.

Disagreement over the problem to be solved

One reason a source might be negative about a technology is that they fundamentally don’t believe it to be necessary or important, either in a big, overarching way or in a small way. For instance, someone who does not believe in climate change is unlikely to have any serious interest in renewable energy sources such as wind and solar. Someone who believes that nuclear power is inherently dangerous is unlikely to get excited about a new, more efficient, reactor design. I always think of this as being the “witch” problem: it’s very hard for us to agree on what should be done about witches if I believe in them and you don’t. If you can identify the underlying belief or assumption that leads to this kind of negativity, you can investigate it and then build it into your analysis if the criticism appears to be legitimate. If not, you can ignore it.

Another possibility is that people agree on the overarching goal that a project should have but disagree on the approach to the problem. For instance, some take the view that solar energy is more worth harvesting than wind because it is more abundant. More narrowly, even those who agree that they want to concentrate on wind energy might argue about which aspects of it to focus on. Some might take the view that the efficiency of a wind turbine (turning the kinetic energy of the moving windmill into electricity) is more important, while others might prioritize the design of the rotor. Although both approaches would end up producing a better windmill, there is clear disagreement over which would lead to the most improvement with the least time, investment, or change to the manufacturing process. Whether different groups are likely to be negative about each other’s work will probably depend on competitive factors. If the various groups are competing for funding, investment, attention, and so on, they may well minimize the importance of rivals’ work to make themselves stand out. On the other hand, if technologies are seen as interdependent, those working on one may talk up the complementary work. In other words, if the promise of better rotor design could lead to a more lucrative turbine market, then turbine designers would have good reason to approve of the rotor designers’ efforts. The answer you need, the truth, may be dependent on dozens of different business, technical, political, regulatory, and other factors. You will have to unpick this context and try to understand why two points of view are so different. It’s even possible that both are right, but each is speaking to slightly different questions that can’t be directly compared. The challenge is being able to identify these subtleties.

Misleading without deliberately lying

In my experience, for a scientist or engineer to tell you a flat-out lie is pretty rare (although not unheard of!). Less rare is the self-serving and misleading analysis achieved by being selective about the truth. Technical people may genuinely be ignorant about important criteria that their technology fails to address, or they may simply be keeping quiet about them. Either way, this means they are not telling the whole story about its real potential. This is why it is so important to talk to lots of different people and/or read lots of disparate sources: only this way can you be confident that you really know which features are critical for a technology to succeed in a given application. Another tactic is selective ignorance. This is where a scientist or engineer seems to be deliberately impervious to facts presented that

contradict their point of view. (This is closely related to confirmation bias, where people seek out “facts” that support their own point of view while ignoring those that don’t.) An example is as follows: at a panel discussion, a well-known scientist once said that artificial intelligence was impossible because “a machine can only do what you program it to do and nothing more.” As this professor had no background in artificial intelligence, the error could simply have been a result of ignorance. Fortunately, the next speaker explained the basics of machine learning and neural networks, refuting the previous statements. Shortly afterwards, at another event, the scientist made the exact same statement again, claiming that machines were constrained to know only what their programmers could explicitly encode in software. This happened again and again, and each time other experts stood up and explained carefully how machine learning meant that machines could (literally) learn how to do something they hadn’t been explicitly programmed to. This person’s underlying agenda remains unclear, but the effect was to mislead people. This happens all too often. Some people just don’t like to change their minds, despite the contradictory evidence that comes their way. As well as selective ignorance, there’s just plain old ordinary ignorance. Some people are so focused on what they’re doing that they either cannot or will not keep up with developments elsewhere: their version of reality therefore becomes either outdated or too narrow to be of any use to you. Losing touch with what’s current probably happens to most of us at some point, but—if we are honest—we will provide caveats to make the limitations of our knowledge clear. Others, through carelessness, a lack of humility, or something else, will not.
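The experts’ rebuttal is easy to demonstrate. The sketch below is my own illustration (plain Python, no libraries), not anything presented at the panel: it trains a one-neuron perceptron, the simplest kind of neural network. Nowhere does the code state the rule for logical AND; it contains only a generic procedure for nudging weights toward correct answers, yet after training the program computes AND.

```python
# A minimal perceptron. The program contains no rule for AND;
# it is only told to adjust its weights whenever it answers wrongly,
# and it ends up computing AND from examples alone.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, initially zero
    b = 0.0         # bias, initially zero
    for _ in range(epochs):
        for (x1, x2), target in examples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted          # -1, 0, or +1
            w[0] += lr * error * x1             # nudge toward the
            w[1] += lr * error * x2             # correct answer
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training data for logical AND -- the only place AND appears.
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)
assert [predict(w, b, x1, x2) for (x1, x2), _ in and_examples] == [0, 0, 0, 1]
```

Fed examples of OR instead, the same learning loop would learn OR: the behavior comes from the data, not from anything the programmer explicitly encoded.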

More contrary positions

In the same way that people can take different positions on problems and solutions, they may not see eye to eye on various other things. One point of disagreement can be the formulation or theoretical description of a problem. If people have different models of how something works, they will not be able to agree on how to make it work better. For instance, for about a decade in my own field, there was a split between top-down and bottom-up models of behavior in robotics. Essentially, one group was arguing that direct response to the environment was the most important thing, while another argued that planning and more strategic intelligence were important. One could argue that today’s approaches blend the two

and that both were right, but, at the time, the stark difference in the approaches of the two made it hard to see any common ground. Another point of difference might be the criteria for success. You could have multiple groups all working on the same kind of product for, nominally, the same application, but prioritizing different features. Take laptop computers as an example. You could focus on ease of use, elegance of design, cost, interoperability, reliability, weight, and ruggedness, as all of these are desirable features. However, because it is impossible to optimize them all together, different companies set their own priorities (and, hopefully, each one achieves its own market based on consumers sharing that priority). In a case like this, it’s not about right or wrong. If one company can make money by making the cheapest possible computer, and another by making the most robust and reliable computer, then they both are right to have prioritized as they did. Because their approaches are fundamentally different, the best solution for one will not be best for the other. But notice that, in this case, what looks like disagreement actually masks consensus deeper down. If you asked them how to make the cheapest or most robust computer, both companies would probably give you the same answer. But determining what makes the best machine depends on the context. Likewise, identical functionality may be implemented completely differently on a mobile device, laptop, desktop, or network, not because there is disagreement about how to do it but because the physical constraints of size and power efficiency are so different in each case. The underlying principles that they would apply in devising the criteria are the same—minimizing cost, while optimizing functionality and efficiency—but the details of implementation are not. It’s these agreed-upon design rules (which are sometimes so ingrained that they’re unspoken) that you want to abstract.
Finally, even if everyone agrees on the problem, the criteria, the design rules, and the priorities when applying them, they may still disagree on the best solution. You will need to determine whether this is simply because they genuinely disagree (in which case you have to analyze the arguments on both sides) or because there are agendas at work.

Individual agendas

When an inventor speaks uncritically of their invention, most people (especially after having read this far) should know to be suspicious. It’s easy to understand that the inventor may want funding, fame, and fortune

and may use a little hyperbole to get it. Likewise, you won’t be surprised if their organization or colleagues push the same party line. The problem comes in understanding the motivation of other people who are talking uncritically (or overly critically) about that same work. Why would they lie or mislead? What have they got to gain? In trying to weigh up individuals’ arguments—in the absence of concrete evidence one way or the other—you must consider the kind of relationships they have with those who did the original work. The easiest to spot are the obviously dependent relationships. You don’t expect a doctoral student to say anything critical of their supervisor, or an employee to say something bad about their boss. Both students and employees depend on their supervisors for their reputation, probably their income, their prospects of success in their current position, and also future references for other jobs. What you may not realize is that this kind of dependent relationship may exist with people in other organizations. For instance, one academic may be the senior person on a joint research grant with several other institutions. Other people in the collaboration are unlikely to say anything negative about the senior person, as this could affect their chances of getting funding in the future, or even sour the current collaboration. The same could be true of companies that are working together. Even if they’re not collaborating on the project that you’re investigating, they might not want to say anything negative that would cause problems for their colleagues. If one company is a customer of another, neither will want to compromise that relationship (assuming it’s going well) for the sake of giving you the information you need. In fact, you can extend this to sympathetic relationships in general. People who work in the same field can get to be friends.
They can know someone’s work is weak and yet still like them as a human being and so not want to cause them trouble. Or they might not even know the person directly but have some kind of relationship with someone they work with. All this can be problematic, because the critical voices are the most useful when trying to determine whether a technology is likely to succeed or not. On the other side, there are also lots of competitive relationships, that is, researchers and companies that would like a rival technology to fail. Competitors are good, as they’re more likely to give you the critical point of view you need to expand your understanding. But you need to know they’re a competitor: you don’t want to view someone as independent

when they have something to gain by presenting an overly negative view. It doesn’t mean that you will disregard what they say but simply that the extra information about their agenda will help you weigh it up.

Industry/corporation-supported cheerleaders

There’s one last agenda that I want to draw your attention to. It’s one that has been pioneered (in science anyway) by the pharmaceutical industry but is becoming increasingly widespread. Here’s how it works. You work for a drug company, and you have a drug you want to promote. Various people have been working with you on clinical trials, and some are evangelistic about it, while others are lukewarm. What do you do? You narrow down your focus to the people who like your drug and who therefore (on balance) are more likely to be positive about it in the future. The more vocal they have been, the better they are (because it’s always embarrassing to change your view once everyone knows it). Now, you give those people extra attention: better access to the drug, research funding (not related to the drug itself, of course, as that would look suspicious, but perhaps connected with the condition it treats), platforms to speak at conferences related to the drug, or the guest editorship of a relevant special edition of a journal. Some of this may be in your gift through the company, and some through the right recommendations in the right ears when you’re giving out sponsorship money. After a while, you can point to this person as the expert when it comes to the condition your drug treats. They’ve got all the right credentials and all the right contacts, they have spoken and written in all the right places, and they love your drug. Now, if anyone asks, you can just point them to the “leading expert” in the field, and they’ll give you a rave review. Now, I want to point out here that the “leading expert” has done absolutely nothing wrong. Their view that the drug was great was genuine. They haven’t accepted any funding that was directly related to the drug, and they haven’t compromised themselves in any way.
Nevertheless, the drug company has managed to skew the research landscape so that their view is seen as more credible than it should have been. At the same time, imagine a clinician who was not one of the drug’s cheerleaders. Maybe they are denied the use of the drug in a follow-up clinical study (for some thoroughly plausible reason, of course). Perhaps comments are again made when a speaker or guest editor is being sought, but this time slightly negative, dismissive ones. Or maybe just “Well that

person’s great, of course, but have you thought of . . . .” A few years later, these researchers may have either moved to focus on other conditions or struggled to stay where they are, wondering why the quality and integrity of their research is not being recognized. This is not a conspiracy theory, but a well-understood strategy that is increasingly being adopted by various industries. In fact, the evidence suggests that not only does this relatively nebulous favoritism take place, but there’s also a lot of outright corruption. For our purposes, however, the main thing to identify is that, sometimes, people get to the position they are in for a reason that has nothing to do with their ability.

Credibility, analysis, and balance

What I hope you get from this chapter is that it is not enough to accept information at face value. You need to decide to what extent anything you are told is credible. Usually, that means going through and separating the facts from opinions, and then weighing the various opinions against each other, based on the agendas of their owners. We are assuming that whoever is paying you is expecting not just a record of the information you found but also that you perform a genuine analysis on it. This means you have to combine research with intellect, cut through what people say, and try to get to something that genuinely resembles the truth. It’s fine to give both sides of the argument, but, ultimately, you need to decide what you think it all means. Even where there is no clear answer, perhaps due to a lack of information, a proper analysis will provide the reader with the questions that need to be asked or the evidence that needs to be gathered. This is hard. It means you have to make judgments about the credibility, accuracy, and even the intelligence of your sources. You have to do this in such a way that you can justify those judgments and accept responsibility if you get it wrong. What you must not do is to substitute “balance” for real analysis and then think you’re doing your job. “Balance” is what reporters call it when they find out what people have to say on a subject, write all the different points of view down faithfully (as if they were equally valid), and then—if you’re lucky—add a little context. “Balance” is what people do when they are too lazy to really get to grips with a subject or too cowardly to say what they really think. It is the opposite of analysis.

Summary

You must consider how much weight to give different opinions. Ask yourself whether your source has:
• a baked-in predisposition to suggest a particular course of action
• a stake in the industry as a whole
• a stake in the particular technology
• a stake in the company commercializing it
• dependent relationships with people who have a stake
• a competitive relationship with the industry, technology, company, or people
• dependent relationships with people in competition
• a contrary view based on the specifics of a project
• a contrary view based on the underlying principles of the field
• a deep-enough understanding to really answer your question

50  |  EXPL AINING THE FUTURE

CHAPTER 4

Analyze

Before we get into any detail about how to put together an analysis, we have to know what we are trying to achieve. You will start from different points depending on who is going to use the information you are gathering, and for what.

Starting point As we’ve discussed previously, where you start depends on your priority: “What can do this?” or “What can this do?” The next three sections provide an overview of these different approaches (application focused and technology focused) and how they overlap.

Application-focused analysis

People who are application focused only care about solving their specific problem and will accept any solution that meets their needs and constraints (timeline, budget, performance, etc.). Essentially, it's like a complicated procurement issue. You need to purify the water coming out of your factory, so you put together a specification of what pollutants your filtration system will have to remove, the flow rate, and the level of purity you need at the output. The technical people involved can then set about finding the best solutions to the problem. They might consider off-the-shelf systems, custom-made ones, or the possibility of developing a new filtration process in house.

Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies. Sunny Bains © Sunny Bains 2019. Published in 2019 by Oxford University Press. DOI: 10.1093/oso/9780198822820.001.0001

If part of the specification proves problematic—maybe because one chemical is impossible to remove cheaply—they might consider ways

of avoiding or simplifying the problem. This might mean eliminating the processes that created the worst of the pollution in the first place.

Analyzing this could be quite straightforward. It might be as simple as creating a table with a column for each of the different filtration solutions, and rows for each of the pollutants that they have to remove. A quick search for "[any household appliance] comparison table" will find you plenty of examples like this. If there is no one right choice, this kind of approach would help in identifying whether two different systems might be able to work together to achieve the right level of performance, or whether there is a specific, narrow piece of the problem that cannot be addressed with the choices available.

The difficulty is coming up with the right set of features, which is why we focused on this issue so much in Chapters 2 and 3. A more trivial example—choosing the best possible to-do-list app—can help us see why. Doing this just for yourself is easy: you simply work out all the features that make a to-do list good or bad from your perspective and then rank them based on how important they are to you. For instance, I know from experience that I need something that's available when I'm offline (as I often add to my list while I'm on the London Underground) and allows me to include dates, projects, and notes for task items, as well as sequences of tasks where things need to be done step by step. The app also needs to work across all my devices. So, I choose the best possible option based on these criteria.

But what if you have to decide on an app that will serve your company or team? Other users may have completely different ways of working and so will want different features. If you try to satisfy all these needs, you could end up having to choose from options that are versatile but very complicated to use. If "simple operation" was on everyone's list, then you've failed.
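Returning to the filtration example, the comparison-table idea is easy to prototype. The sketch below (all pollutant and system names are invented for illustration) checks whether any single candidate system, or any pair working together, covers the whole specification:

```python
from itertools import combinations

# Hypothetical comparison table: which pollutants each candidate
# filtration system can remove. The rows/columns of the table become
# sets, so "coverage" is just a set union.
required = {"lead", "nitrates", "solvent residue", "microplastics"}

removes = {
    "System A (off-the-shelf)": {"lead", "nitrates"},
    "System B (custom)": {"solvent residue", "lead"},
    "System C (in-house)": {"microplastics", "nitrates"},
}

def coverage(systems):
    """Union of the pollutants removed by a group of systems."""
    return set().union(*(removes[s] for s in systems))

# Any single system that does the whole job, plus any pair that
# covers the specification between them.
viable_singles = [s for s in removes if required <= removes[s]]
viable_pairs = [p for p in combinations(removes, 2) if required <= coverage(p)]
```

The same structure scales to any procurement-style comparison: swap in your own criteria and let the code enumerate the workable combinations.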
So, the first step involves really researching and understanding what the problem means to potential users, crystallizing out a set of criteria—or features or key performance indicators (KPIs)—and ranking them based on users', clients', or systems' demands. You may also have to identify where various sets of needs are incompatible.

Imagine designing a color printer: a machine that could be used by a large fraction of the population but in many different contexts. You might have speed, cost, quality, and the ability to use different types of paper or binding mechanisms on your list of things to incorporate into the device but without a clear picture of whether the printer is going to be used for producing print-on-demand books, photographs for display, worksheets to be

handed out to school kids, or the occasional report. You cannot address this market without breaking it down into segments.

Once you've got your specification sorted, the next job is easy by comparison: find out all the products that are available (or coming on the market), and work out how well they perform according to your user criteria. At this stage, you may well find solutions with some features you hadn't heard about before. This might involve another round of discussions with potential users to determine how relevant these features are: the users may not have asked for what they didn't know was possible.

If you are compiling the information for general use (say, you're a consultant selling your analysis to various clients), then you might simply list all the criteria as a big table, with numbers to show how they rate according to the KPIs. That way, each company can interrogate the list, based on their own needs. However, if you are doing work for one company, you will want to show how each option measures up to the specific needs and priorities of their users.
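A minimal sketch of this "big table" approach (the apps, ratings, and weights below are invented for illustration): each candidate is scored against the KPIs, and a particular client's weighting of the criteria determines the ranking for that client.

```python
# Ratings per criterion, 1 (poor) to 5 (excellent). In a real analysis
# these come from your research, not from guesswork.
ratings = {
    "App X": {"offline": 5, "multi-device": 3, "simple operation": 4},
    "App Y": {"offline": 2, "multi-device": 5, "simple operation": 5},
}

# One client's view of how much each criterion matters to *their* users.
# A different client would supply different weights over the same table.
client_weights = {"offline": 3, "multi-device": 2, "simple operation": 1}

def weighted_score(name):
    """Sum of rating * weight across all KPIs for one candidate."""
    return sum(ratings[name][k] * w for k, w in client_weights.items())

ranked = sorted(ratings, key=weighted_score, reverse=True)
```

Keeping the ratings and the weights separate is the point: the ratings table is the reusable asset you can sell to many clients, while the weights encode one client's priorities.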

Technology-focused analysis

Many people cannot take this kind of agnostic view of which solution is best when performing a technical analysis, because they are tied to a particular technology. For instance, if you have already invested in, have dedicated your life to, have invented, or have developed a particular technology, then you are interested in its success, not the success of its applications. Likewise, if you're thinking of investing in a start-up company or a research and development program, applying for a PhD program or a postdoc position, or doing anything that will give you a stake in the success of the particular technology, you'll want to know the likelihood of its success.

If the application is already determined (there's an existing start-up or research project or even a longstanding product that uses the technology), then the job is pretty straightforward. You do an application-focused analysis, including the technology you're interested in among all the possibilities, and then see if it stands out as having the best fit of features for the job. If it does, great. If not, further analysis may turn up a market segment or sub-application where its drawbacks are not critical and its benefits are particularly valuable.

The difficulty here is in being objective. You have to forget about your enthusiasm for the technology (which got you to consider its chance of success in the first place) and try to look at the world from the application's

point of view, essentially following the same agnostic procedure that you would if you were taking an application-focused approach. What are the criteria for success? What technologies are competing to solve this problem? How well do they meet the defined criteria? Even for well-established businesses, this kind of analysis, which takes into consideration emerging technologies as well as existing competitors, can be really helpful for determining long-term viability.

Choosing an application

But what if you're not yet at this stage? There's some new technology that seems to have the potential to solve lots of problems. You are a researcher deciding how to pitch it to get your next grant to advance the technology, or a CEO deciding what the first target market should be. How do you determine what to focus on first?

This kind of tech-centered analysis is the most difficult because it is deeply iterative. First, you have to figure out what problem(s) need your solution. In some cases, this will be easy because the technology was designed with a particular purpose in mind, but this is not always the case. Even if it is, there may be other applications that might benefit from your solution more. So, the first step is to consider all the potential applications of the technology and work out which is the most promising.

You can't even start this task until you understand what your technology can really do: what makes it more useful than its competitors. Consequently, the very first stage involves listing all the different features and drawbacks of your technology so that you have a clear picture of what they are. At this stage, it really helps to talk to the entire technical team to find out all the potential pros and cons.

The next step is to brainstorm the range of possible applications, to attempt to identify every problem that might be solved by your technology. Then, you can pick out the ones that seem most promising. For each of these, you should do a separate analysis looking at criteria and competitors. If you try to skimp and stop investigating after you identify one application that may be successful for you, then you may end up missing the one that could be most successful. Also, if you're working on some kind of start-up, you may find that there are multiple applications that have compatible criteria or similar markets.
Choosing one of these would allow you to pivot if you're having trouble developing your first choice, or give you a second product to sell if you're successful with the first.

Of course, there’s always the possibility that you may do all this and come to the conclusion that your technology has little or no chance of being the best (or even one of the best) solutions to any of the problems you thought might be applicable. When you reach this conclusion, you may feel like the whole endeavor was a complete waste of energy, but this is the wrong way to look at it: the analysis may well have saved you and your colleagues a huge amount of time and money. Even if you decide to go ahead with a project knowing that success is a long shot, your understanding of the needs of the application and the weaknesses of your technology will help you to understand the niches where you might best compete.

Level of detail

The next thing to consider is the depth of your analysis. For technical people, that is always a difficult question, because it is in our nature to want to be thorough. The problem is that, as with all research, perfection is often the enemy of good enough. What you really want is to reach a level of confidence about whether a new technology will succeed, and when.

The when can be as important as the if, especially if you're going to be constructing a business plan or research proposal. For instance, we are constantly talking about when machines will become smarter than people for given tasks. There can be little doubt that machines will eventually overtake us for almost any task you might mention, especially if there are no obvious technical roadblocks and it's cheaper to have machines perform these tasks rather than humans. Economic pressure combined with a tractable problem makes success almost inevitable.

But whether you're an investor or a competitor, there is a big difference between likely success within five years, and likely success within fifteen years. In fifteen years, a theoretically viable start-up can go from being well funded, to running out of money, to having gone bankrupt five years ago. Likewise, competing businesses with a human-based service could come, make a lot of money, and go before the emerging artificial intelligence technology takes over.

What's important is that you understand (and attain) the level of confidence in your assessment that is necessary for your purpose. Failing that, you should be able to somehow express the confidence level you have

been able to reach so far and identify the information you would need to improve it further.

For people in research and academia looking for funding, the bar is often set quite low. Indeed, part of the nature of research is that it's speculative: finding out the limits and potential of a new technology is the whole point. Nevertheless, in a technical paper, a grant proposal, a project proposal, or even a PhD thesis, there is generally a requirement for some justification of what the emerging technology may be useful for. For this case to be persuasive, that is, for reviewers, management, or examiners to read it without immediately raising objections, a basic technical analysis must be performed and it should ideally be reality-checked by people from outside the research group.

Much higher levels of confidence will be required when money is involved. Those who are going to invest many thousands or millions into a technology have to do their "due diligence." Only part of this is technical, of course. Venture capitalists, angel investors, and corporations thinking of moving in a new direction will want to look at the business case before they spend their money. However, without a convincing and thought-through technical case that an emerging technology really has something valuable to offer for a given application, the business case doesn't matter.

The most rigorous analyses should be performed by those consultants who are charging thousands for reports that supposedly pull together information from across an industry sector or application. Potentially, others are going to risk their money based on the belief that this research and analysis has been comprehensive; thus, assembling such a report is a big responsibility.
In this type of project, you must make sure that you've done a thorough job, checking, rechecking, and then updating the information from original sources, determining and weighting the various criteria for success, and making sure you are judging each technology by the same set of standards.

On time, on spec

Unfortunately, there is no hard and fast rule about what exactly is required from such an analysis. Before you start, therefore, you have to give yourself as clear a specification as you can of what it is that you're trying to achieve, why, and by when. Here are some questions and example answers to consider:

Whom is it for?
• Venture capitalists
• Research funding agencies (which use academics to assess proposals)
• Technical management

What do they want to know?
• Which technology is the most viable?
• What new technologies could disrupt this sector?
• Where is this technology best applied?

Why do they want to know?
• So they can earn large returns on their investment
• So they can have confidence that the research money is being well spent
• So they can make strategic decisions that will ensure their own company's success
• So they can ensure their career will not be tied to a nonviable technology

What kind of report will they need?
• A preliminary investigation: a first-blush, high-level, qualitative analysis to help decide whether a deeper feasibility study is warranted
• A specific proposal: a detailed, quantitative analysis of the application, how it can be addressed, and the next steps to be taken
• A comparative review: a detailed analysis of the application, the various options to address it, and recommendations on which might be most suitable

What will the report look like?
• A one- or two-page stand-alone executive summary
• A short (2,000–5,000 word) stand-alone report
• A long report with an executive summary
• A presentation

When do they need to know it?
• Before the conference call tomorrow
• By the next board meeting
• In time for the annual review

This issue of time is an important factor. A first-pass back-of-the-envelope analysis that assists management in making a decision—but is there when it's needed—is preferable to a report that comes in fully formed but weeks or months (or even just hours) too late to make a difference.

In this chapter, we're going to go into all the elements that may be needed. However, what is most critical is whatever your boss or client

expects of you (or, if you’re doing it for yourself, what level of confidence you need to be satisfied).

Routes through the process

In the rest of this chapter, we're going to go step by step through the topics you need to research and think through as you're performing your analysis. If you're most interested in taking a technology-centered approach (you have one technology you're interested in and want to understand whether it will succeed or not), then start with Phase 1. If you're taking an application- or problem-centered approach, you should start with Phase 2. However, it wouldn't hurt to skim through the material in Phase 1 en route, because understanding it will help you understand the philosophy behind the analysis as a whole.

One last note before we get going: this chapter is written in a very prescriptive way to help students and new professionals who have never done this type of analysis before. By being very specific about the use of such tools as canvases and spreadsheets, it's easier to understand the process and to do it for the first time. Don't be put off if you can see a better way of doing things. Everyone—especially if they've been working for a while—has their own process. The main thing is to extract the underlying principles here and then use your own expertise to adapt this to fit your needs or devise something completely new. Understanding your own methods and workflow is extremely valuable, as is an honest assessment of your strengths and weaknesses for this kind of task.

Phase 1: Understand the technology

Step 1: The basics

Your first job is to record everything you can in the available time about the technology in question:
• Where does it come from?
• Why was it developed?
• How does it work?
• What are its features?
• Who is developing it now?


If you’re working in a group that has a lot of expertise, a good way to start capturing this can be by using the Technology Canvas (see Canvas 1). Essentially, a “canvas” is just a big piece of paper with instructions in the corner, but—if you use it in combination with other canvases—it can provide a useful map of your research and reasoning about the subject and will lay out your technical argument for you. As with other canvases you may have used, the trick is to keep it flexible: use sticky notes to write your facts, issues, ideas, and so on, so that they can be moved around, themed, and organized as you go through. Whether you use this canvas or not, Technology should be one of the sections in your electronic notebook. Organize your research so that it answers the questions above. Make your own notes about issues related to your various themes. See if there are any questions where you feel less confident than you should about the answers and go back and look for more material in these areas. You’ll see that we’re not just recording the technology as it stands but looking at its evolution. This is not just for academic interest. If a technology grows up for one purpose, it adapts to that purpose. If a particular feature is unimportant to the application, it won’t be developed. If it’s important, it may be overdeveloped. This is fine for the application in question but might be completely inappropriate for another, and understanding the background may give hints on where there may be untapped potential. Think about machine vision as an example; specifically, consider sensors deployed for quality control on a production line. When this technology was first introduced, these sensors would have been simple video cameras. What were video cameras originally designed to do? Not to capture data for machines to process, but to record images in real time for playback to humans with as much accuracy as possible. 
Immediately, this goal set certain design parameters: there is an ideal frame rate beyond which humans don't see flicker (or any major improvement in the image quality). Likewise, there is an ideal spatial resolution (number of pixels in the image) and a number of colors beyond which humans see no difference.

But industrial inspection has nothing to do with making pretty pictures for humans: it's about supplying a computer with the information it needs to determine whether the part has been built to spec or not. To do this well, you might want to use a faster frame rate or a higher resolution, or detect light in the infrared or another waveband that people don't usually want to see because it's not part of their everyday experience.


Canvas 1  Using canvases allows you to collect and reorganize information from a group of people. This one is intended to act as a starting point for a technology-focused analysis. It encourages participants to define exactly what technology they are considering and where it comes from. (The term "5Ws and H" refers to the questions Who?, What?, Where?, When?, Why?, and How?.)

By understanding the most basic history of the technology, you can ask yourself what is intrinsic about it and what is just an evolutionary artifact. It allows you to see a much broader picture of what it is and how it can be used.

Step 2: Features

The Feature Canvas (see Canvas 2) is critical because, without a clear picture of what your technology can do, it's not possible to start mapping it onto applications. The features are also going to be used, later in the process, as points of comparison with the application criteria. Some may be expressed as numbers (speeds, costs, weights, resolutions, efficiencies, etc.) and others will be expressed in other ways. Is it analog or digital? Is it programmable? Does it need cooling? (This could be a Y/N, a number, or something more qualitative.)

It's really important that you start making a list of these features and logging the information you collect about them as you go. This means that, as well as organizing your notebook, you should start building a spreadsheet with a list of all the features relevant to your technology. There are lots of ways you can do this, but the simplest is just to put the features in the rows, and both competing technologies and potential applications in the columns. This way, every time you find a new feature that is relevant to your sector, you can add it. Every time you discover an application where this feature is needed (or where a specific performance is required), you can fill in the relevant cell. Every time you find out a technology's potential in relation to that feature, you can add it. If you stick with it and can unearth all the relevant information, you should be able to see fairly quickly whether a technology is likely to work for a given application by just moving the two relevant columns next to each other.

Back to the questions you are trying to answer. Thinking about all the people/groups currently working on a technology generally is useful, even if they're all competing, because each will likely have different focuses and goals for what they are doing and have optimized their technology accordingly.
This gives a broader perspective of what is possible and allows you to put together a picture of what the technology might look like when it's matured. Of course, it may not be possible to achieve the same with one system as another—even in principle—but seeing the differences in performance can force you to ask the question, can we achieve this too?
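The feature spreadsheet described above can be prototyped as a simple data structure. In this sketch (all feature names and numbers are invented for illustration), "moving two columns next to each other" becomes a function that lists the features where a technology misses an application's requirements:

```python
# Features are the "rows"; each technology and each application is a
# "column" of the spreadsheet.
tech = {  # what a candidate technology can deliver
    "Sensor A": {"frame rate (fps)": 120, "resolution (MP)": 2, "cost ($)": 400},
}
apps = {  # what each application requires
    "inspection line": {"frame rate (fps)": 100, "resolution (MP)": 1, "cost ($)": 1000},
    "consumer camera": {"frame rate (fps)": 30, "resolution (MP)": 12, "cost ($)": 150},
}

def mismatches(tech_name, app_name):
    """Features where the technology misses the application's requirement.

    Cost must come in *under* budget; the other features must meet or
    exceed the requirement. A real sheet would record, per row, whether
    a feature is a floor, a ceiling, or a target.
    """
    have_col, need_col = tech[tech_name], apps[app_name]
    bad = []
    for feature, need in need_col.items():
        have = have_col[feature]
        ok = have <= need if feature.startswith("cost") else have >= need
        if not ok:
            bad.append(feature)
    return bad
```

Note that the direction of each comparison matters: the same 400-dollar sensor that fits easily into an industrial budget is a showstopper for a consumer product.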


Canvas 2 This canvas is intended to help you gather together all the information you can about the performance and other properties that will affect how a device or process is used. Some features may be obviously positive (benefits), some may be negative (drawbacks), and the value of still others may depend entirely on the context.

Also, if you’re trying to pitch a design and you don’t have a prototype, you can use results from related work to explain what you might expect from the new system. It’s not evidence that your design will work, but it’s evidence that you have good reason to believe it will work.

Step 3: Potential applications

When we consider the potential of technologies, it is very tempting to focus on abstract potential: what each one can do in theory. This kind of analysis is important in that it gives us some ideas of a technology's limits, but that's all. Even a major advantage in one area of performance can be wiped out due to problems of practicality elsewhere. Any analysis must therefore constantly come back to concrete examples of how a new technology might be used in practice. By taking one instance and really thinking it through to its logical conclusion, we can learn about where its strengths and weaknesses are, whether it has a fatal flaw that will mean it is never suitable for a particular task, and where development effort is best spent.

Example applications—which demonstrate how a particular technology can be used in practice to achieve a specific goal—can be used as thought experiments or reality checks to help us understand how much potential a technology has, and where. If we choose the examples well, failure in one will lead us to consider others where a given weakness is less likely to cause a problem. We might have a technology that has very high performance and is lightweight but is too expensive for consumer products. For whom is performance more important than cost? Maybe the military? That leads to more applications to explore. Still too expensive? Maybe this is the kind of technology you might want to send into space, where the cost of the technology itself pales in comparison to the cost of getting it to its destination. If you are really trying to be thorough, of course, you will use several examples in the analysis, with each application having different needs.

Examples are important aids to communication, too. If I tell you that a gadget "has wide applications across electronics and biomedical devices," then you're still no clearer about what it does and why it might be useful.
If I told you instead that it detects regular patterns from sensor data—like a heart monitor—and records an incident if that pattern is broken for some reason—like a heart attack or arrhythmia—then you'd have a much clearer view of why it might be useful. I could then go

on to explain how it detected the pattern/break, using the heart-rate example, and you'd be able to visualize what I was talking about much more clearly than if I tried to talk about generic signals. In fact, if I choose the right one, it will immediately start you thinking about other, similar examples, which may—in turn—help you to consolidate the entire concept better. But, before we worry about what we want to communicate to others, which will be the topic of Chapters 5–7, our first job is to make sure we know what we're communicating about.

So, back to our process. You've got a technology, and you know how it works and what its potential is in every feature you can think of that might be useful (or problematic). Now it's time to think about how you can use this. Again, if you're in a group, at this point it may be useful to use the Potential Applications Canvas (see Canvas 3), and—whether you use the canvas or not—you should start a new section of your notebook: Applications.

You start by thinking of every possible application that your technology can be applied to. One way to do this is to look at individual features where you know your technology is unusually good and then think about where this might be helpful. Your goal at first is simply to think of as many potential applications as you can, ruling nothing out (see Canvas 9 in "Phase 3: Timing" and Canvas 10 in "Case Study Part I: Research and Analysis" to see how this may be done). This can be difficult: as technical people, we are used to trying to find the holes in ideas, to look for the weak points. The advantage of not holding back, at least not to start with, is that one bad idea can often trigger a good one. If you have any favorite brainstorming techniques, this is a good time to use them. If you don't, this is a good time to find some that you like.

Step 4: Eliminating distractions

After making the list as long as possible, the next stage is to cull. The easiest way to start this process is to ask yourself or your colleagues what is the one thing that will make your technology fail for each application. At this point, you are just trying to weed out the showstoppers: those applications for which there is such a mismatch between what the technology can offer and what the application requires that you can confidently ignore them (at least for now).
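This culling step is mechanical enough to sketch in code. Here is one illustrative version (the applications, criteria, and verdicts are all invented), which parks, rather than deletes, any application that accumulates too many showstoppers:

```python
# One verdict per (application, criterion) pair. In practice, these
# verdicts come out of the feature-vs-requirement comparison.
verdicts = {
    "aerospace": {"weight": "plausible", "cost": "plausible", "strength": "plausible"},
    "automotive": {"weight": "plausible", "cost": "showstopper", "strength": "plausible"},
    "toys": {"weight": "plausible", "cost": "showstopper", "strength": "showstopper"},
}

CULL_AT = 2  # how many showstoppers before we stop considering an application

def showstoppers(app):
    """Count the critical mismatches recorded against one application."""
    return sum(v == "showstopper" for v in verdicts[app].values())

# Parked applications are set aside, never thrown away: you may want
# to review them later.
parked = [a for a in verdicts if showstoppers(a) >= CULL_AT]
alive = [a for a in verdicts if a not in parked]
```

Setting CULL_AT to 1 reproduces the strictest version of the rule, where a single confident showstopper is enough to park an application.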



Canvas 3 The goal here is to be as creative as possible in thinking of ways to apply your technology. You should start with the view that there are “no bad ideas,” because every poor idea has the potential to inspire a good one. You can move into “critical” mode after you’ve run out of creative steam, pushing the unhelpful suggestions off to the edge.

To do this, it generally makes sense to focus on the areas where your technology is weakest. You should also, at this point, be focusing only on the application criteria that are most critical, not the trivia. So, if you are looking for applications for a new building material, you might consider its strength, its flexibility, or its ability to bear a specific kind of load, and quickly eliminate all the applications that need a massively higher performance than that. Or, you might look at the amount of power needed to run a new device and then eliminate the applications that can spare massively less power than that.

Massively is a deliberately slippery term. We'll get back to this question in more detail later, but just how significant a mismatch is depends not only on raw numbers but also on how you expect the technology to evolve over time. At this stage, you don't want to rule out too much, so think of a massive mismatch as one you think is all but impossible to overcome. If you can just about imagine that the performance of the technology could improve to the required level over time (even if it's in the region of 5–10 years), that's a different category.

Before you decide that this massive mismatch is definitely a showstopper, however, ask yourself whether you can somehow compensate for it. For instance, smartphones have relatively small memories and low computing power compared to larger machines, but they are well connected. So, despite the fact that they can't do that much locally in terms of voice recognition, object recognition, or knowledge search, they can operate as intelligent machines by sending their requests for information into the cloud. Again, if in doubt, keep the application in your set of possible options. You may think of an innovative solution to get around the problem later (especially if none of the other applications end up looking especially promising!).

If you're a fan of color coding, now's your chance.
You can mark in red any criteria that make a given application way too demanding for your technology. At the same time, you can mark in yellow those criteria that are in the gray area (not massively mismatched, but no guarantee that performance will ever reach the required levels) and, in green, those where the performance of your technology is up to the task or will represent an improvement on what has gone before. I'd advise one more level: silver (or purple or black, although not gold, as that may get confused with yellow!) for criteria where your technology offers a performance that

will enable an application that was not previously feasible. If you’re doing this on a canvas, just write down in short form the criterion you’re considering (like shear strength, power consumption, speed, or toxicity) and add an appropriately colored dot. If you are using a spreadsheet, just change the color in the appropriate cell. At this stage, you’re already doing a back-of-the-envelope-style ­analysis. Of course, there’s no way you can consider every criterion for every single application, nor should you. Instead, make up some rules to speed up the process. For instance, you can decide to stop considering an application once it gets a certain number of red marks against it. How many (it could be as low as one) will be based partly on your confidence that the issues you identify as showstoppers really are, in fact showstoppers, and partly on how many criteria you’re considering. Once you hit this number of showstoppers, move the sticky note for this application off to the side or move that column into a new tab on your spreadsheet (never throw stuff away completely, as you might want to review it later). Once you’re rid of the applications you’ve identified as dead ends, you can focus on where the new technology really has something to offer: focus on its strengths. Where does your technology enable something new or improve existing functionality? Your focus now is on marking things as silver and green, but add in whatever colors are appropriate as you go along. So, now, you should be able to see at a glance which applications have the highest abundance of green and silver and the least amount of yellow and red. These are the candidate applications that you can take to the next stage. Your next job is to rank them intelligently. Which are potentially the most lucrative, the easiest, the ones that need the least investment, or the ones that will produce the most impact on society? 
Look at the ones that have the most technical potential (as based on the color scheme), and then rank similar ones based on these kinds of issues—whatever is important to you, your research group, your company, or your client.

Sometimes, a quick-and-dirty analysis like this will be enough for your purpose: scientists and research engineers, for instance, may want to make a case that their new processes or devices will potentially enable new applications, without going into much depth as to how well (exactly) the applications will work. This is especially true at the very early stages of research, when the practicalities of a potentially important new technology are not known well enough for an analysis to be particularly meaningful. So if, for instance, you're doing a PhD in basic or applied sciences, you may well be able to stop reading this chapter here (although you may want to skim the rest, to make sure you don't miss out on something useful). However, for anyone who is raising money—whether through investments or research grants—once you have made claims about the impact a technology will make, your job is far from over.
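The elimination rule just described (shelve an application once it collects a set number of red marks, then rank the survivors by their green and silver marks) can be sketched in a few lines of Python. The applications, criteria, color marks, and scoring weights below are all hypothetical placeholders, and the cutoff of one red mark is just an illustrative choice.

```python
# Quick-and-dirty screening: each application gets a color mark per criterion.
# red = massive mismatch, yellow = gray area, green = adequate, silver = enabling.
RED_CUTOFF = 1  # how many showstoppers before an application is shelved

marks = {  # hypothetical example data
    "smartwatch":       {"power": "green", "accuracy": "silver", "cost": "yellow"},
    "hospital monitor": {"power": "green", "accuracy": "green",  "cost": "green"},
    "implant":          {"power": "red",   "accuracy": "green",  "cost": "red"},
}

def screen(marks, cutoff=RED_CUTOFF):
    """Split applications into kept and shelved; never delete the shelved ones."""
    kept, shelved = {}, {}
    for app, criteria in marks.items():
        reds = sum(1 for color in criteria.values() if color == "red")
        (shelved if reds >= cutoff else kept)[app] = criteria
    return kept, shelved

def promise(criteria):
    """Rank surviving applications: reward silver and green, penalize yellow."""
    score = {"silver": 2, "green": 1, "yellow": -1, "red": -5}
    return sum(score[c] for c in criteria.values())

kept, shelved = screen(marks)
ranked = sorted(kept, key=lambda a: promise(kept[a]), reverse=True)
```

As the text advises, the shelved applications are kept around rather than thrown away, in case an innovative workaround turns up later.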

Phase 2: Taking the application's point of view

Step 1: The basics

Phase 1 was all about looking at the application from the point of view of the technology; now, we're going to do the opposite. Whether your main interest is in the problem rather than the type of solution, or whether you have a technology whose potential applications you've narrowed down to a favorite few, this phase is about focusing on what the application needs. For this, you're going to use an Application Canvas (see Canvas 4) and a new spreadsheet tab.

Canvas 4  This allows you to gather information on what the application is and what is needed from a new technology to address it. (The term "5Ws and H" refers to the questions Who?, What?, Where?, When?, Why?, and How?)

On the canvas, the first goal is to do a brain dump about the application: who needs to do this thing, what exactly does it involve, where and when is it done, why are they doing it, and how does it work. If you are struggling with this because the application is so general, choose a use case to focus on: imagine a single subset of users (or circumstances for use), so that you can think this through from beginning to end.

Let me take an example here. Let's say you have developed a sensor that can be used for detecting various types of changing but essentially periodic signals, such as heartbeats, engine turnovers, or bellows compressions. Someone suggests that the sensor might be useful for a smartwatch, and—at first blush—the technology seems to be suitable. The main criteria for this application are met: the power consumption is low enough to be significantly better than that currently available, and the accuracy of the sensor is good enough for even hospital-based medical applications. But is that enough to guarantee success?

Let's have a look at the smartwatch application and the different kinds of users, with different needs, who might provide some insight into how the watches might evolve. There will be those who see them primarily as fitness-tracking devices, those who see them primarily as wearable smartphones, and those who see them primarily as health/safety devices. This last category is a potentially interesting one, so let's consider it in more detail.

Consider people who have dementia. Some of them are fit and well but may tend to wander off at times when their cognitive function is low. They can get lost. Further, if they have dementia, there is a good chance that they have additional health problems (so a heart-rate monitor could be useful . . . and perhaps also an accelerometer that can detect if the person has had a fall). At a glance, it still looks like a good application for our new sensor, but we haven't yet thought through the specific needs of this application; so, let's do this now.

Let's imagine an elderly man who has mild dementia, has a moderate risk of heart attack, and lives alone. He's an ideal candidate for a safety smartwatch because a carer helps him get up every morning and makes sure he puts on the watch. Now, consider what the watch needs to do to be useful. First, if it's going to be used to monitor the user's heart rate constantly, it's not enough to be accurate: it also has to be reliable under a wide range of conditions. Accuracy and reliability are completely different things. If you're monitoring your father's heart rate from a distance out of concern for his health, you care less whether you receive a number that is within two beats per minute of the correct figure than that you are constantly getting confirmation that his heart rate is in the normal (safe) range. What you don't want is for the signal to disappear for a couple of hours because your father got his hands dirty. Also, although many smartwatches have GPS, some do not connect to the cloud directly but via another device (like a smartphone).
For users without dementia, the fact that this functionality relies on two devices—one wearable and one not—can be an inconvenience. For a person with dementia, it is a showstopper: it is unreasonable to think that they would remember to bring their smartphone when they go wandering. So, now we see that, for this application, the smartwatch would have to include not only the new heart-rate sensor and a GPS system but also a wireless phone transmitter. This is not, now, your average smartwatch. GPS and Bluetooth are already taking up space and using up power, and including the transmitter will just add to that (especially if the transmitter is always on because you need help to be alerted quickly if there's a problem). The watch is also getting expensive.

So, the constraints on our sensor are growing. It's going to have to be a bit smaller than we might have expected and probably a bit lower-powered, too. Also, it will likely have to be a bit cheaper than we might have thought (to compensate for all the additional components).

The idea of the Application Canvas is to allow you to explore all this with your colleagues and, even better, with anyone you can find who's involved with designing smartwatches. Think through a use case (or several, using different canvases for each one). Add all the features you think might be important, and put numbers on as many as you can. When you're done, transfer these to your spreadsheet tab, and research any numbers that might be missing but important. You should also, at this stage, start to think about how important each of these criteria is so that you don't write off a really good application because it fails in a relatively insignificant way. Conversely, you don't want to invest in a poor application because the technology is a good match in lots of superficial ways but lacking in those that are most critical.

One way to do ranking is via a simple pairwise comparison. Systematically, take pairs of features in your list and compare one with the other, asking which one is more critical to the success of the device. If you've done your research, have understood it, and have been consistent when doing the comparisons (i.e. if you haven't said a > b, b > c, and then c > a, where ">" here means more important), then you should now have a useful ranking. It will not tell you how much more important each feature is than the one before; you'll still have to use your own judgment for that (and you can use numbers to quantify this for a decision matrix if you want), but it will give you a set of priorities that will make the rest of your analysis more meaningful.
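The pairwise comparison just described can be sketched as a small round-robin tally. The feature names and judgments below are hypothetical; the consistency check relies on the fact that, when every pair has been compared and the judgments are transitive, every feature ends up with a distinct win count, so tied counts signal a cycle (a > b, b > c, c > a) to revisit.

```python
from itertools import combinations

# Hypothetical priority judgments for the smartwatch example: for each pair
# of features, record which one is more critical to the device's success.
features = ["reliability", "accuracy", "power", "cost"]
prefers = {  # prefers[(a, b)] is the winner of comparing a with b
    ("reliability", "accuracy"): "reliability",
    ("reliability", "power"): "reliability",
    ("reliability", "cost"): "reliability",
    ("accuracy", "power"): "power",
    ("accuracy", "cost"): "accuracy",
    ("power", "cost"): "power",
}

# Tally wins across every pair of features.
wins = {f: 0 for f in features}
for pair in combinations(features, 2):
    wins[prefers[pair]] += 1

# Consistent (transitive) judgments give distinct win counts; duplicates
# mean there is a preference cycle somewhere in the judgments.
consistent = len(set(wins.values())) == len(features)
ranking = sorted(features, key=wins.get, reverse=True)
```

As the text notes, this gives an ordering of priorities, not a measure of how much more important each feature is than the next.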

Step 2: The competition

Of course, it is not enough that a technology can, feasibly, be used for a given application. For success, it needs to be the best option. You can't know this without considering the competition. This may seem obvious but, often, researchers and entrepreneurs consider only a small part of the competition: those products currently being used or which are otherwise on the market now. They do not consider the competition that is being developed in university or corporate labs or by start-ups.

This is a mistake: when a potential sponsor, investor, or customer asks, "But what about . . . " and then mentions the XYZ company or product, you should instantly be able to explain how your project is different. First, by understanding what your competition are up to, you get a broader perspective on what the application is, what the market is, how it's broken up into different niches with different requirements, and so on. This way, if your technology ceases to be competitive for one particular application, you can pivot to another. Second, half the time when someone mentions another product or company, it is because they have misunderstood what your technology does, the application, the competition, or all of these. If you understand the full context, that enables you to clarify it for your potential investor (or whomever). Third, whatever flavor of engineer or scientist you are, not knowing your competition makes you look ill-informed and unprofessional.

So, the next stage is to think through all the potential competitors (including your own technology). Use the Application Competition Canvas (see Canvas 5) to record as many of these as you can as a group, thinking through how the features meet the application requirements. If you're short of time/resources when researching this, just focus on the requirements that you gave highest priority. Then, record these on your spreadsheet and evaluate which of the competing technologies seems the best match.

This kind of analysis, done thoroughly and independently, is extremely valuable; some technical consultants specialize in creating reports of exactly this type. The value of the analysis disappears, however, if the person doing the work is being self-serving.
This can happen for many reasons, including wishful thinking, poor research, or deliberate skewing of the figures, which is when the analyst (who is doing the work to promote a specific approach) chooses the best-case figures when considering their own technology, and the worst-case or poorly researched old figures when considering competitors. It's important to avoid this kind of skew; it demonstrates naivety, incompetence, or dishonesty (or a bit of all three). In the short term, it may bring about a desirable outcome (a specific investment or grant), but, in the long term, it weakens the credibility of the analyst and—potentially—the company for which they have carried out the work.
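One simple guard against this kind of skew (not from the book, just an illustrative convention) is to record both worst-case and best-case figures for every competitor, your own technology included, and to claim a clear win only when your worst case beats the rivals' best case. A minimal sketch, with hypothetical power-consumption figures:

```python
# For each competitor, record (worst_case, best_case) for a criterion;
# here, hypothetical power consumption in milliwatts (lower is better).
power_mw = {
    "our sensor":    (2.0, 1.2),   # (worst, best)
    "incumbent":     (5.0, 4.0),
    "startup rival": (6.0, 4.5),
}

def honest_comparison(figures):
    """A technology only clearly wins if its *worst* case beats every
    rival's *best* case; anything else is 'too close to call'."""
    verdicts = {}
    for name, (worst, _) in figures.items():
        rivals_best = min(best for other, (_, best) in figures.items()
                          if other != name)
        verdicts[name] = ("clear win" if worst < rivals_best
                          else "too close to call")
    return verdicts

verdicts = honest_comparison(power_mw)
```

Applying cherry-picked best-case figures only to your own row is exactly the skew the text warns against; comparing ranges symmetrically keeps the analysis defensible.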



Canvas 5 The next stage is to assess all the competition for a given application and identify which can and which cannot meet its requirements.

Phase 3: Timing

Step 1: Ramping up

One of the most difficult problems in comparing technologies is that they will all (likely) emerge and evolve differently over time, which means a theoretical advantage now could quickly disappear by the time an actual product comes out. Although it's impossible to know exactly how the future will unfold, systematically using the information you do have can help you to identify where technologies are potentially accelerating or decelerating or have stopped dead.

The Timing Canvas (see Canvas 6) is intended to be used for both applications themselves and (in turn) for each of the seemingly feasible competing technologies. Your goal is to build a roadmap to help you think through what needs to happen before (for instance) a product can succeed in the marketplace. The roadmap is best drawn as a set of processes, some of which can happen in parallel and others of which might happen sequentially. So, for instance, there might be a year's worth of solving a particular materials problem or manufacturing issue or of optimizing a piece of software to a different application or platform. In parallel, you might be raising money, hiring a sales force, building up your manufacturing capability, perfecting your user interface, registering your intellectual property, or going through some kind of device certification process.

Whatever needs to be done can first go onto the canvas as a sticky note that includes the amount of time that process is expected to take. These can then be ordered with parallel tasks written as such, as long as you (or whoever else) will really have the resources to do them in parallel. Finally, if needed, the whole thing can be created as a more formal diagram. If they are done well, such roadmaps should help to determine the likely lead time for the technology in question.
For instance, say you have an application (like the dementia smartwatches), and a potential client is looking for heart-monitor chips today to go into production as soon as possible. There are two available chips. One is relatively low performance (less reliable, higher power than the ideal) but ready as a product to buy now. Another is potentially better, but—even with good funding—manufacturing at scale will take a couple of years. Which will the smartwatch manufacturer choose?



Canvas 6 This last canvas helps you think through how a technology or application might develop.

Unless the payoff in waiting for two years is huge (the lower power and higher reliability make the application much more likely to succeed), the current technology will likely win in the short term. The company that makes this will (if they're smart) then use the profits from this order to improve their offering so that, by the time your new product becomes available, they'll be more competitive both in performance and, probably, on price. Thus, even if the new technology is better theoretically, the lead time to production could make it difficult to beat your rival.

If the newer technology is genuinely disruptive, that is, it offers something completely new or does something in a very different way than was done before, this need not be true. However, for technologies that represent only an incremental improvement on what has gone before, the lead-time issue is important.

On the other hand, if the smartwatches aren't going to be finally designed and manufactured for a while (i.e. if there is a lead time on the application side as well), then the advantage of the technology that is quicker to market may not be as important. This is why it's important to consider the timing of not only the competing technology but the application itself.

Step 2: Roadmap and uncertainty

The first stage in determining what can/will/might happen is to work through all the obstacles to a product or service being made available and all the infrastructure that has to be put in place. To make this easier, we can split these into two sets of issues: those where the protagonists (e.g. those at the company doing the work) are in control, and another where they are not. The former steps fall into the setup column because they are relatively straightforward: there should be no reason—in principle—why they should fail. The latter, on the other hand, fall into the dependencies column, because there are no guarantees.

So, hiring staff, building a factory, and finishing off a design are all setup issues, because accomplishing these should just be a question of money, time, and effort. In contrast, waiting for 5G to come online, raising venture capital, and achieving medical device certification can all be classified as dependencies, because there is no guarantee that the people with the power to make these happen will do so. However, having a clear picture of what steps have to be performed in order to get a technology moving doesn't mean you are guaranteed to get
them finished by a particular time. The next stage of figuring out the roadmap involves putting ranges of numbers on all the different steps, to guard against unfounded optimism on how quickly things can be done. These ranges may, in part, be linked to resources; so, for instance, a company might be able to complete one of the processes within a year if ten people are hired, but it will need three years to complete the process if there are only four people working on it. Having this kind of clarity is useful when making decisions. For instance, if you see a competitive market that may slip away if the lead time is too long, then the amount of money raised (to minimize setup time) may become a critical issue.

For dependencies, the relevant issue is not necessarily the amount of time things take but also the possibility of failure. For instance, say you had an augmented reality application that relied on the widespread availability of 5G. There is an industry roadmap for it, and lots of research in progress. Some consultants say they know when it will launch, but that's not guaranteed. Further, how widespread the service becomes in any one country will depend on lots of factors, including the economy, the willingness of the government to invest in infrastructure, the price point for headsets, and so forth.

You can express this kind of uncertainty in two ways. For instance, you could argue that 5G is an inevitability: it will happen; the only question is when. Thus, you can give it a range of 10–25 years (say). This gives you a 100 percent certainty rate but is really not helpful for decision-making or planning. A more useful way of thinking about this problem is to dial back the range of years to something that allows you to plan (say 10–15) but then puts an uncertainty on it. So, you might decide that there is a 70 percent likelihood that your figure is accurate and then plan accordingly.
You can use this kind of approach (range plus uncertainty) to represent setup processes as well; it's just a question of how detailed you want your analysis to be. For completeness, you'll also want to transfer this information into a spreadsheet: exactly how you do it is unimportant (as with all of this), as long as you understand your own workflow and are methodical about recording everything. If you put your application/competitors along the top, and the setup and dependencies down the first column, this will help to remind you that a setup issue you've identified for one project may well also apply to another (if it doesn't, you can say why). If you have two columns per application/competitor, you can use one to record expected time and the other for your confidence in the first.

Note that, if you are trying to do a fully detailed risk analysis of (for instance) the project failing, you would not only look at all the setup and basic dependencies but all the different things that could potentially go wrong. Although that is the kind of thing you would want to do when starting to move forward with a project, it's beyond the scope of the kind of preliminary study we are discussing here.

Once you've thought through the setup and the dependencies, the next stage is to put them together to see what the total lead time will be in each case. So, now is when you decide what can be done in parallel and what needs to be done sequentially (you can encode these in color on the spreadsheet). Once you've done that, you should be able to add up the sequential numbers to get your final lead-time range. You can also draw this as a roadmap showing the relevant milestones and dependencies, both to remind you and to help you communicate it to others (which we'll get to in the next chapters).

Remember, it's not enough to do this only for your favored project. It must be done for the application, if it's new or changing, as well as the main competition. This way, you can start to get a picture of whether the lead times on any of these projects will affect their chance of success.
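The adding-up step described above can be sketched in a few lines: tasks within a group run in parallel (so the group takes as long as its slowest task), and the groups run sequentially (so their ranges add). The tasks and figures below are hypothetical, with times in years.

```python
# Each task carries a lead-time range in years (low, high). Tasks in the
# same inner list run in parallel; the groups run one after another.
# All task names and figures here are illustrative assumptions.
roadmap = [
    [("raise funding", (0.5, 1.0)), ("finish design", (0.5, 1.5))],
    [("build factory", (1.0, 2.0)), ("hire sales force", (0.5, 1.0))],
    [("device certification", (1.0, 3.0))],  # a dependency, not in our control
]

def lead_time(roadmap):
    """Total lead-time range: parallel groups take their longest task;
    sequential groups add."""
    low = high = 0.0
    for group in roadmap:
        low += max(lo for _, (lo, _) in group)
        high += max(hi for _, (_, hi) in group)
    return low, high
```

For the figures above this gives a total range of 2.5 to 6.5 years; the spread itself is useful, since a wide range flags where tighter estimates (or more resources) matter most.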

Step 3: Evolution

However, lead time is not the only important issue when it comes to timing. Whether a technology is worth investing money and effort into is also dependent on how it is going to evolve over time.

The most famous concept associated with technological evolution is Moore's law, which predicts that the number of transistors (and so, nominally, the computing power) of chips doubles roughly every two years. There are various arguments about whether the law will eventually break down at a certain point, but, regardless, for years, it has provided a good prediction of expected progress in the field. This has given chip manufacturers a clear goal, and computer designers and programmers a clear picture of what they would have to work with. Further, anyone working on a competing technology would know not only what they had to achieve to beat conventional computer chips now but also have a good bet on what they would have to achieve to be competitive in ten years.
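That "good bet" is simple compound doubling: at one doubling every two years, the target in ten years is 2^5, or 32 times today's figure. A minimal sketch (the doubling period and figures are illustrative):

```python
# If a figure of merit doubles every `doubling_period` years, a competing
# technology must target not today's value but the projected one.
def projected(value_today, years, doubling_period=2.0):
    """Compound-doubling projection of a figure of merit."""
    return value_today * 2 ** (years / doubling_period)

# Ten years at a two-year doubling period: 2**5 = 32x today's value.
target_in_10y = projected(1.0, 10)
```

The same one-liner works for any exponentially improving metric, which is why it is worth checking a competitor's trajectory before claiming an advantage over today's numbers.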

All technologies evolve. They get smaller, faster, more efficient, cheaper, and cleaner as engineers understand and refine the processes that go into them. To see this, search for graphs of how the efficiency of refrigerators or solar cells has increased over the years, or how the emissions of cars have declined; there should be data for any technology that isn't brand new. It's a reasonable assumption that the application you are considering will have some kind of trajectory. The question is, will the component technologies lose their competitive edge because they cannot keep up with the rate of change?

For example, designers of electric and hybrid cars need batteries to store energy. The more the battery can store, the further the cars will be able to travel on a single charge—and the more volume the battery will take up. If you want your car to go as far on a charge as a conventional car would on a tank of gas (say 500 km) without having to devote more space to it, you can work out how much the energy density (energy per volume) for that type of battery would have to improve. You can then look at the past trends and future predictions to figure out whether it will deliver, and—if so—how soon. If you can imagine wanting to pack 1,000 km into a charge, you can project that forward too and work out whether you'll need to switch (for instance) to hydrogen fuel cells because lithium-ion batteries are running out of steam (proverbially speaking).

With an emerging technology, the prognosis for evolution may be massively better, especially with the right investment, because it's at the beginning of its evolutionary cycle rather than at the end. Guesstimating the curve (based on looking at those of other technologies and understanding some of the practical and theoretical limits that go into improving performance) can therefore give you a valuable clue as to whether a technology has a long-term future within a given application or not.
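The energy-density calculation described for the 500 km case is straightforward to sketch. All the figures below (drivetrain consumption, volume budget, current pack density) are illustrative assumptions, not real vehicle data:

```python
# Back-of-the-envelope check: what energy density does the battery need?
# Every number here is an illustrative assumption.
target_range_km = 500
consumption_wh_per_km = 180       # assumed drivetrain consumption
battery_volume_litres = 300       # assumed space budget for the pack

energy_needed_wh = target_range_km * consumption_wh_per_km   # total energy on board
required_density = energy_needed_wh / battery_volume_litres  # Wh per litre

current_density = 250             # assumed figure for today's packs, Wh/L
improvement_factor = required_density / current_density      # how far density must improve
```

With these assumptions the pack needs 300 Wh/L, a 1.2x improvement; the interesting step is then comparing that factor against the historical trend line for the battery chemistry in question.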
Remember, performance is just one criterion. If you have a product that either does the job or not (beyond which performance is not an issue), it may be cost that you want to see improve; so, the evolution might concern materials, automation, or other factors that affect the price. Likewise, performance may be based on sustainability criteria, such as the amount of pollution emitted. On the canvas, you can write down all the features that you expect to change over time and which are important to the application. You can also indicate these on the spreadsheet. Later, you can research and graph them.

Generally, you are likely to come up with an S-curve (sometimes called a "technology S-curve"), with the top of the S starting to flatten out at the point where technological limits become important and difficult to overcome. The shape and limits are not necessarily where your technological judgment will be most challenged, however; the biggest difficulty comes when trying to figure out how that curve will stretch out over time. These predictions are an inexact science. Do not expect to be able to draw beautiful graphs to see in which year one technology will surpass the other (although, in some cases, it is possible to do that). It's about deciding which options are likely to have something to offer the application in the long term, and which are not.

Incidentally, just because a technology is not likely to succeed in the long term does not mean that it cannot succeed at all. Sometimes, if an application is ready to go and there is a technology that can provide some minimal functionality at a reasonable price right now, short-term success is possible. In a business sense, this kind of success is not to be sniffed at. If you are a start-up and can find an application for your new technology that will make money immediately, then it could help fund your setup costs to get to that longer-term business, even if it's in an unrelated field.
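A logistic function is one common way to sketch such an S-curve. The ceiling, midpoint, and steepness below are illustrative parameters, not fitted to any real technology:

```python
import math

def s_curve(t, ceiling=100.0, midpoint=5.0, steepness=0.9):
    """Logistic technology S-curve: performance versus time t (years),
    flattening as it approaches a practical/theoretical ceiling.
    All parameters are illustrative, not fitted to real data."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Early years show large gains; later years show diminishing returns
# as the curve approaches its ceiling.
early_gain = s_curve(3) - s_curve(2)
late_gain = s_curve(10) - s_curve(9)
```

Plotting a curve like this against historical data points for a technology makes the key judgment visible: whether the application's required performance sits below the ceiling, and roughly when the curve might reach it.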

Phase 4: Coming to a conclusion

This chapter has focused mostly on the issue of gathering and organizing your information. It may seem mechanical, almost algorithmic in its if this, then that explanations. If that is what you think, you will radically change your mind as soon as you actually try to put it all into practice. The process is full of judgments that must be made: whom to trust, where to look, what to ask, how likely it is, how long it will take, and so on. You may be able to get those more expert than you to help, but this does not absolve you from the responsibility of deciding who and what to believe and what and why to prioritize.

The good news is that if you have been diligent in thinking through the issues up to this point in the process, your conclusion should be clear. It may not be clear in a useful way, of course; you may end up deciding that two technologies have equal likelihoods of success for a chosen application, or that the technology that you've been working on will never be as valuable as you had hoped. Also, whatever conclusion you reach should be full of caveats; for example, the technology has a high chance (based on confidence numbers in your analysis) of becoming competitive within 3–5 years, assuming that X, Y, and Z happen.

However, there are two sets of analyses taking place: the one you are diligently recording in your notebook and on your spreadsheets, and the one that is happening inside your own head. If these disagree, that's a problem. Your brain has been taking in all the information that you've typed up, and a lot more besides. It's been present at every conversation you've had and has read every word that you have on the subject at hand. If its conclusion is not consistent with the one you arrived at through the analytical process, then it's a problem that you need to get to the bottom of.

There are two likely causes of this dissonance. One is that we really want something to be true that simply isn't; our inability to accept our own analysis may be a kind of bias that we have to get over. The other is that we subconsciously know our analysis is missing something (or lots of things), but we haven't yet worked out consciously what this/these might be. Both of these outcomes are par for the course. When we get too close to our work, we lose perspective: we stop being able to see mistakes and get locked into our own logic, even when it's faulty.

The first thing you can do to avoid this is to take a step back. Have a break, do something else, or take a walk. Finish the basic analysis on Friday, have a good weekend, and then go back to it on Monday with fresher eyes. Even better, take a holiday for a month or work on another project for a few weeks, and then go and do a reality check (rarely possible, but if it is, take the opportunity!).

Phase 5: Reality checking

Step 1: The application

Whenever you provide a detailed process for analysis, such as this one, there is always a risk that some high-level common-sense element is missed out, leading to people "not seeing the wood for the trees." You must take a step back to avoid this. Here's one possible problem: we do an analysis and decide our heart sensor still has the right features for our watch application, but we have not asked ourselves whether the idea for the application is really feasible—at all or right now. Can it be built in a sufficiently small, affordable, reliable package and still run for a whole day? Is there a large enough community of people who would be candidates to wear the watch? Would they put it on in the morning?

In general, technologies are part of long chains. One technology enables another, which enables another. The lower down the food chain you are, the more dependent you are on the vision of whomever you are selling to, especially if you are a small company with a small number of customers. Although one could argue that this is an issue for business people to figure out, not engineers, technical people are often much better equipped to see the potential disruption caused by emerging technologies, and so the best areas to get into and/or avoid. So, before you get ready to make a recommendation based on an analysis, take a step back and ask yourself if it makes sense from this broader perspective.

Step 2: Your own work

To really have confidence in your work, it makes sense to get out of the bubble of your own team (or, worse, your own head) and go talk to people who also have some expertise in the subject. These should not be people who matter in the final decision-making process. Instead, they should be those who have a good enough understanding of the topic at hand that their opinion will be useful to you. Ideally, you shouldn't talk to those with a vested interest in a particular technology winning (or, if they do, they should have a different vested interest than you).

You're not going to show them your spreadsheets and canvases at this stage (although having the former available to check might be a help if someone asks you for the numbers related to a particular application requirement). Instead, you're just going to take them step by step through your process verbally. So, start by telling them about the new technology, the applications you considered for it, and why you picked the one you did. Then, move on to telling them about that application, its requirements, and the competing technologies that you had to take account of. Finally, take them through the roadmap/evolution of your application, and explain why you think the chosen winning technology will be competitive, and over what time frame.

During this process, it's important that you not be defensive about your work. If the analysis of the people you talk to is different than yours, try to explore the reasons that they came to a different conclusion and (gently) try to explain where you differ. Remember, your success is not rooted in persuading anyone at this stage; it will come from using your discussions to find holes in your own arguments so you can go and research these. Your final conclusions may
change, or not. The main thing is that—by the time you are ready to write up—you are confident that you know all the arguments that people are likely to make, from all sides, and that you have, at your fingertips, your own arguments that demonstrate they are right or wrong. If you can, a good way to get this part of your work done efficiently is to go to a conference where a lot of people who know about the application or field will be gathering together. Even better, give a talk while you’re there. Make sure that you don’t overrun; leave plenty of time for questions. If you can, make clear to the session chair that you feel that a longish discussion period after your presentation will be valuable. At the end of your talk, encourage people to approach you, and include your e-mail address in your slides. The more people whose input you get, the less likely you are to miss out something important. A note of caution: if you are trying to get feedback, the last thing you want is for your session to end up on the record. Do not allow the conference organizer to video you or publish your presentation at this point, and don’t supply a paper. You should also make sure that you include some word(s) in your title that make clear that the work you are discussing is incomplete (e.g. “tentative,” “interim,” or “provisional”). If you can’t find an appropriate conference, or if the work is confidential, you can present to knowledgeable colleagues. Just make sure that they understand that their goal is not to give you an easy ride but rather to play devil’s advocate, try to think of reasons why elements of what you have said may be wrong, and hold you to account.

Step 3: Fixing your analysis

Of course, the last steps are a waste of time if you don't use all of the valid feedback you've received, from whatever source, to go back and re-evaluate your analysis, updating figures, changing priorities, and researching new topics and arguments that came up in your discussions. Keeping an open mind is very difficult at this stage. You don't want your analysis to change because you've already invested a lot of time in it and may even have made it public. Incorporating new information could be a lot of work (there may be whole new applications/criteria/competitors to explore). Further, if the new information changes your conclusions, it could be mildly embarrassing. On the other hand, losing a lot of money for people because they trusted an analysis that you knew to be incomplete can be far more costly. For

grown-ups, changing your mind is not an indication of failure but instead a willingness to be persuaded by evidence rather than by dogma. As long as the final product is the best you can make it, you have nothing to apologize for. The remaining chapters of the book will be dedicated to how you write up your research and analysis, but it's important to note that the reality check that you did on your work will already have helped. Whether you have talked to experts one on one, met with groups of colleagues, or spoken at conferences, you'll have had the chance to see the effect of your arguments and explanations on real people, to understand what they failed to understand, hear what questions they asked, and so forth. This experience will prove extremely valuable in thinking through how to communicate your analysis, so don't miss out on it. We'll come back to this issue in Chapter 5.

Summary

There are different stages to performing an analysis:

1. Create a brief.
2. Decide whether you need to perform a tech-focused or an application-focused analysis.
3. If tech focused, start by understanding the technology:
   • What is the technology?
   • What are its features?
   • What are its potential applications?
   • Which applications are worth focusing on?
4. Next (or if you're application focused), take the application's point of view:
   • What is the application?
   • What are its requirements?
   • What other technologies are competing to enable it?
5. Now, consider the timing of the application and competing technologies:
   • How long will each take to ramp up?
   • What dependencies are there?
   • How are they likely to evolve?
6. You should now have a conclusion in mind, so the final step is to perform a reality check by rethinking your own work and talking to others.


Case Study Part I

Research and Analysis

The best way to illustrate the methods described here is through a worked example. I started by developing my own brief. First, I needed to research a topic for a general technical audience (the readers of this book, as they come from many different disciplines). The topic needed to be something that I personally had the expertise to research, one that I maintained an ongoing interest in (for motivation), and one in which something interesting was going on. It also had to be something that people who didn't have a background similar to mine could appreciate (at least on the basis that it was something cool). In the end, the field I chose was neuromorphic photonics: the development of brain-like computing systems that operate using a combination of electricity and light. Specifically, I wanted to determine whether this field was likely to be important in the development of intelligent systems and—if so—in what timescale and for what applications. To make the project more real, I added a time constraint: six weeks. Depending on your perspective, that might seem like a lot of time or very little time. For me, given that I had other work to do, it was the minimum time in which I thought I could delve into the subject deeply enough to meet my goal of writing a high-level preliminary report. So that you can follow my research trail, I'll start by explaining the basic ideas behind the technology. Without that background—unless you're in the field already—you won't be able to follow my reasoning and understand why I chose to go down one path or another at various points. By the end of the case study, you'll know a lot more about the whole field. I also include citations to some of the papers that were important in shaping my thinking as I went along. To be clear, I'm not including a full set of references here, just examples so that you can understand the

process I went through. For every paper I mention, I probably skimmed through another five that didn't turn out to be as important as I might have hoped.

A quick introduction to neuromorphic engineering

Machine learning has become increasingly sophisticated over the last twenty years. Until relatively recently, machines' behavior was programmed by humans explicitly: if this, then do that. Now, we have machines that can learn to recognize our faces and voices and can even learn how to drive better on the basis of their own observations and their ability to turn their knowledge of the world (acquired through databases and sensors) into meaning and action (we're going to hit them; swerve to the left). This new functionality is possible because engineers realized some years ago that, if we want to be able to mimic (and, indeed, go beyond) human and animal intelligence, we need to be able to mimic biological brains. This is different from more conventional machine learning, where it's only the function of the brain that's being copied, not the detail of exactly how the brain gets the answer. Potentially, this approach could make machine learning devices faster, lower power, smaller, and cheaper. Neuromorphic engineering is—today—the most extreme form of mimicry. One definition (not the only one) is that neuromorphic systems have structure and operation that are based on biological neurons and so produce similar behavior. So, how does the biology work? Neurons process information by taking information from lots of others that may have something worthwhile to share (see the bottom right diagram in the Technology Canvas (Canvas 7) in the section on Iterative trawling), including the information we receive via our eyes, ears, and other senses. This arrives as a sequence of electrochemical spikes (clusters of ions) that cause charge to build up. When a threshold (tipping point) is reached, the neuron sends out its own electrochemical spike, which will in turn be received by the other neurons it is connected to.
It will then stop receiving incoming signals for a little while (this interval is called the refractory period), and the whole process starts again. The neurons collect information from—and send it on to—lots of others, some nearby and others farther away. Some of these signals

even end up looping back on themselves and being processed by the same neuron more than once. As all this is happening, the neuron is changing, based on how relevant connections to specific neighboring neurons turn out to be (how likely one neuron's signal is to cause another to fire). The more relevant the information a connection carries, the more it is strengthened; the less relevant, the more it will atrophy. Over time, this means that the neuron learns to ignore irrelevant signals and can give more weight to the most important ones. The strengths of the connections are therefore known as weights: they store the knowledge of the system. All artificial neural networks use weights of some kind, but, for the system to be truly neuromorphic, there must be a more direct link to nature. There are various ways that an engineer might choose to copy the brain's biological structure and behavior into an electronic device: I'll focus on just two that are relevant to our discussion here. First, the connections between neurons are direct: there is a biological "wire" that goes all the way from the emitter (output) of one neuron to a detector (input) of another. Since each neuron, on average, takes information from thousands of others, this structure cannot be emulated with electrical wires (we'll get to why that is later). For this reason, the connections between artificial neurons are often made through networks (like the internet), where signals going from one neuron to another include the address of their final destination and are then redirected from one node to another until they get there. If you can stick to direct connections instead, the arrangement can make processing large sets of related data significantly more efficient. In images, for instance, the "meaning" of one pixel is directly related to the value of its neighbors. The geometry of the system potentially allows you to process the pixels in relation to each other in parallel, and in very few steps.
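The spiking behavior described above—charge building toward a threshold, a fired spike, then a refractory period, with weights scaling each input—can be sketched in a few lines of code. This is a toy model for illustration only, not any published neuron design; all parameter values are made up:

```python
# Toy integrate-and-fire neuron: weighted input spikes build up charge,
# crossing the threshold fires an output spike, and a refractory period
# follows during which inputs are ignored. All numbers are illustrative.

class ToyNeuron:
    def __init__(self, threshold=1.0, leak=0.9, refractory_steps=2):
        self.threshold = threshold          # tipping point for firing
        self.leak = leak                    # stored charge decays each step
        self.refractory_steps = refractory_steps
        self.charge = 0.0
        self.cooldown = 0                   # steps left in refractory period

    def step(self, weighted_inputs):
        """Advance one time step; return True if the neuron fires."""
        if self.cooldown > 0:               # ignore inputs while recovering
            self.cooldown -= 1
            return False
        self.charge = self.charge * self.leak + sum(weighted_inputs)
        if self.charge >= self.threshold:
            self.charge = 0.0
            self.cooldown = self.refractory_steps
            return True
        return False

neuron = ToyNeuron()
# Three coincident weighted spikes push the neuron over its threshold...
print(neuron.step([0.4, 0.4, 0.4]))   # True: the neuron fires
# ...and a spike arriving just afterwards is ignored.
print(neuron.step([0.9]))             # False: refractory period
```

A learning rule would then strengthen the weights of inputs that helped cause a firing and weaken the rest, exactly the strengthen/atrophy behavior described above.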
Neuromorphic engineers are also keen to mimic biology's analog behavior. Although the electrochemical spikes that neurons send out are all the same (which makes the system look digital because there's either a spike or not; in digital terms, a 1 or a 0), the timing of the spikes is analog, that is, there is no clock that regulates when they are sent or when they arrive. If a bunch of spikes arrive at a neuron at the same time, they are more likely to push it over its firing threshold. This means they will be deemed relevant, and the connection will be reinforced. If a spike arrives just after the neuron has fired, it will be ignored as the neuron recovers

from its labors, and the connection to the neuron that sent that spike will wither. Since the precise timing of incoming spikes encodes their meaning, and since this timing cannot be maintained using indirect networks of connections (with some important exceptions), such networks destroy information. If the timing information can be maintained, on the other hand, then an appropriately designed analog circuit can process all of the incoming signals in parallel. So, here comes the engineering trade-off. It is entirely possible to use digital technology to simulate the analog behavior of neurons (and there are projects that do this). The problem is that this approach is inefficient in terms of power and speed. You can improve it by matching the geometry of the connections (whether in a network or not) to the geometry of the problems, but each neuron can still only process the information it receives bit by bit in series. That takes energy and slows the system down. It all boils down to the following question: why build a big power-hungry system to simulate a small set of analog neurons if you can just build a small set of analog neurons to do the job? It is possible to simulate having direct connections from every neuron to every other one without sacrificing the spike timing. You use the same wire to broadcast signals to lots of different neurons but include an address in the signal so it's clear which spike was meant for which neuron. This works, but it potentially sacrifices speed: the more neurons are using the same wire to send their signals, the slower the communication. If you send too much information down the wire, you risk signals colliding with each other, and information being lost. Why do we care about this now? Because we are expecting an increasing amount of intelligence to be delivered ever faster from ever-smaller devices and, for self-preservation, we need to do this without increasing the amount of energy we consume.
The goal is no longer just about creating machine intelligence: we know we can do that. Now, we need to be able to do it efficiently.
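The shared-wire, address-tagged scheme described above (neuromorphic engineers call this style of communication address-event representation) can be sketched as follows. The routing function, addresses, and timestamps here are all illustrative, not drawn from any real chip:

```python
# Sketch of address-event style communication: many neurons share one
# channel, so each spike is tagged with its source address, and each
# receiver filters the stream for the sources it is connected to.

from collections import defaultdict

def route_events(events, connections):
    """events: list of (time, source_address) tuples, in time order.
    connections: maps a source address -> list of destination neurons.
    Returns the spike train delivered to each destination."""
    delivered = defaultdict(list)
    for time, source in events:
        for destination in connections.get(source, []):
            delivered[destination].append((time, source))
    return dict(delivered)

# Neurons A and B share the channel; C listens to both, D only to B.
events = [(0.1, "A"), (0.2, "B"), (0.3, "A")]
connections = {"A": ["C"], "B": ["C", "D"]}
print(route_events(events, connections))
# {'C': [(0.1, 'A'), (0.2, 'B'), (0.3, 'A')], 'D': [(0.2, 'B')]}
```

Note what the sketch leaves out: on a real shared wire, two sources spiking at once collide, which is exactly the speed/information trade-off discussed above.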

A quick introduction to photonics in computing

The difference between electronics and optics is that one technology involves manipulating electricity (electrons), and the other, light (optical

photons). However, because electrons and photons are very different, they have very different advantages and disadvantages for engineers. Electrons are fermions, which means they do not like to share the same space at the same time. This is why, in physics and chemistry, they take up distinctly different "orbits" within atoms, and why they are so sensitive to small changes in the electromagnetic environment. The fact that they are easy to manipulate and interact strongly with each other makes them ideal for constructing complex circuits but poor for communication. As they move along a wire, electrons are constantly being pushed and pulled from the path they are supposed to be following, repelled by some particles/fields and attracted by others. Every diversion or collision causes them to lose energy (as heat), which means that the further an electrical signal has to go, the more electrical energy has to be put in to guarantee getting a signal out. In addition, the heat created has to be dissipated somehow, or it will heat up the wire it's traveling through and potentially change the wire's behavior. Photons, on the other hand, are bosons, which means that they can happily occupy the same space (or pass through each other) without having any effect on each other except in special conditions and with exotic materials. This is why light can travel from faraway stars and galaxies, and down miles of optical fiber, with relatively little loss. The only complication is that you have to make sure you can discriminate between the different signals that you send down an optical channel. One way to do this is to have each signal encoded in a different color of light, which can be filtered out of the mass of signals and detected when needed.
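The color-separation trick in that last sentence—wavelength-division multiplexing—can be sketched in a few lines. The wavelengths and bit patterns here are illustrative only (1550 nm and 1310 nm are common telecom bands, but nothing in the text depends on these values):

```python
# Toy wavelength-division multiplexing: several signals share one fiber,
# each carried on a different wavelength (color), and an ideal filter at
# the receiver picks out the one channel it wants.

def multiplex(channels):
    """channels: maps wavelength (nm) -> list of symbols.
    Combine everything onto one 'fiber' as (wavelength, symbol) pairs."""
    fiber = []
    for wavelength, symbols in channels.items():
        fiber.extend((wavelength, s) for s in symbols)
    return fiber

def filter_channel(fiber, wavelength):
    """An ideal optical filter: keep only light at the chosen wavelength."""
    return [symbol for wl, symbol in fiber if wl == wavelength]

fiber = multiplex({1550: [1, 0, 1], 1310: [0, 1, 1]})
print(filter_channel(fiber, 1550))  # [1, 0, 1]
print(filter_channel(fiber, 1310))  # [0, 1, 1]
```

The point of the sketch is simply that the two bit streams coexist on the same fiber without interfering, which is exactly what copper wires cannot offer.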
Optoelectronics and photonics (the terms are often used interchangeably) involve the use of electronically controlled active components to allow us to generate, amplify, and attenuate light while passive optical components shape and guide it. This technology has grown up to enable our telecommunications infrastructure, connecting the Wi-Fi routers and cell towers to each other through a web of optical fibers.

Getting down to work

So, let's put photonics and neuromorphic engineering together. Analog neurons have advantages in speed and efficiency over digital simulations of them, but also an inherent communication problem. Communication using light is faster and more efficient than using electrons traveling

through wires. If you could use the optics to enhance the analog behavior of the circuit, then you should be able to get systems that genuinely compete with digital electronics (at least in theory). I wanted to explore whether it was really likely to happen in practice. This gives you the gist of what I knew about this topic going in. I was aware that there was work going on in the field because I'd read some papers1,2 for a story I'd written a few months before, but I hadn't done a detailed analysis at that time. In particular, the first of these papers (from Princeton) reviewed the state of the art of the field as well as focusing on the authors' own work. The paper was well written and—unusually—attempted to quantify the benefits of the new technology by comparing it with others, which made it an ideal place to begin. In addition, the group had just brought out a book on the subject, which meant that I knew I would have access to a lot of material, if I needed it, to ensure that I understood their approach. As someone who works at a major university, I didn't have to worry about my first step: getting access to the technical literature. Almost everything that I needed for this project was freely available either through my library or (with some creative searching) on the web. However, even if you are not in academia or part of a large company with its own research library, it is generally not difficult to get similar access, at least for short-term projects. Many university libraries will offer you access to their resources for free or for a relatively small fee. The main thing is to make sure in advance (by checking their online catalogs) that the subjects covered in their collections align with those you want to research. My next step was to set up a new literature database so I could describe my workflow from scratch.
I decided to use Mendeley because I'd heard some good things about it, because I knew it had a paper recommendation engine, and because it would synchronize the PDFs I accumulated in my database with the tablet I wanted to use for reading papers. Once I had that installed and tested, I set out to find other groups that were working in the same general area as the original paper I had found. I searched for the keywords neuromorphic, optoelectronic, and photonic. At this stage, I wasn't really reading papers but instead skimming them, to see how relevant they seemed, and trawling the introductions to see which projects they mentioned as competing approaches.3–5 When I found a paper that looked interesting in the references, I would then go find it and either skim it straight away or add it to my database so I could search it later for words, phrases, and sometimes even units.

I was also checking to see whether there were any keywords (often provided with the abstract) that I had overlooked, and searching for those. For instance, I could see that I was missing some interesting stuff by not searching for neural, so I added that to my list of keywords on the computational side.6 For completeness, I also tried looking for optical, but that mostly yielded papers related to optical neural probes (to measure the activity of living cells in the brain), with nothing new in the space I was actually interested in. I also looked for conferences in the area, but because it is so specialized and relatively new, I could not find very much that way. I did find one or two sessions at bigger meetings, however, which gave me more researchers, papers, and keywords to search for. Finally, by doing some searches via my library search engine and Google Scholar, I was reminded of the limitations of Mendeley. Because individual users have to add references to their own database for them to be indexed, not all relevant papers would come up in a search. This meant I'd been missing some of the more recently published work: not a lot, but enough to be worth having.
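The keyword-trawling step above is easy to automate once you have local copies of the papers. The sketch below assumes plain-text exports in a folder; the folder name, file names, and keyword list are illustrative, not part of Mendeley or any real tool:

```python
# Sketch of a keyword sweep over downloaded papers (as plain-text files):
# report which files mention which terms, so new keywords can be spotted
# and added to the list on the next pass.

from pathlib import Path

KEYWORDS = ["neuromorphic", "optoelectronic", "photonic", "neural"]

def keyword_hits(folder, keywords=KEYWORDS):
    """Return {filename: [keywords found]} for every matching .txt file."""
    hits = {}
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(errors="ignore").lower()
        found = [kw for kw in keywords if kw in text]
        if found:
            hits[path.name] = found
    return hits

# e.g. keyword_hits("papers") might return
# {"paper1.txt": ["neuromorphic", "photonic"], "paper2.txt": ["neural"]}
```

Re-running a sweep like this each time the keyword list grows is the mechanical equivalent of the iterative cycle described in the next section.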

Iterative trawling

The next week or so consisted of a cycle of reading, refocusing, searching, and skimming. For instance, once I'd satisfied myself that I'd identified all of the major research groups in the area I was covering (there were about eight to ten groups actively publishing in the period I was interested in, going back five to ten years, and I had downloaded over thirty papers), I started skimming them, looking for specific things. For example, I would search for the word application within the papers so that I could skim through and find out what the authors thought they were. Once I found one—for example, controlling aircraft moving much faster than the speed of sound—I would (if possible) download the cited paper relating to that application,7 and search to find more basic publications that would help me understand the topic.8–10 If—as in the case of hypersonic avionic control—I found I wasn't really able to understand any of the papers very well, then I would look for less formal kinds of information. This could mean anything from articles on websites serving the avionics industry, to Wikipedia, to YouTube videos: anything at all to give me enough conceptual understanding of the application so that I could see how (or if) the neuromorphic photonics might be a good fit.

I did the same thing when it came to the various features of the technology. For instance, MAC (multiply and accumulate) operations were considered fundamental to some kinds of machine learning, so I could look for papers that mentioned these. For this, I could search not only the papers I had downloaded on neuromorphic photonic projects, but electronic-only research, papers about the applications, or my whole paper database. Unfortunately, this didn't yield much: as we'll discuss later, there seemed to be no consistency in the way researchers measured the performance of their systems. After I felt I'd made a little progress, I started to make notes on the canvases discussed in Chapter 4. First, I went through the Technology Canvas (see Canvas 7) and wrote up what I understood of the Princeton project and the motivation behind it. If I couldn't answer a question, I would go back to the papers and try to find the information there. If I was struggling to understand what was going on from a particular article, I would try to find another from the same group (often, thinking becomes clearer over time, so later papers are better). I might look for a different author or try to get a concept explained by another group doing something along the same lines. After reading similar things said in different ways, I would eventually assimilate enough to understand what was going on. For this particular project, the Feature Canvas (see Canvas 8) was incredibly easy to put together because the researchers at Princeton had done the work for me. They had systematically looked at the performance features they thought were important and then compared them with what they saw as their main competitors. I added a few other issues, but that was not a big job. When I got to the Potential Applications Canvas, I thought I had a clear picture of what neuromorphic photonics could be used for (see Canvas 9).
I considered not only the applications that the Princeton researchers had explicitly discussed but also others that I thought might be interesting based on my own experience and the other papers I'd read. To help structure my thinking, I sectioned these off into two categories: mobile applications (like robots, planes, and phones) and fixed applications (like data servers and telescope installations). All of them seemed promising. It was at this point that I hit a wall of sorts. I needed to identify one "killer app" among the many: the one that would make the technology


Canvas 7  Although simplified to be understood by a general technical audience (as are all canvases presented here), this shows key information gathered about the original inspiration for this project. It answers all the basic questions, reminds us of what’s important, shows where the technology comes from, and so on. (The term 5Ws and H refers to the questions Who?, What?, Where?, When?, Why?, and How?.)

Canvas 8  This shows the features identified by the authors of the original paper, with a few extra added based on my own reading and knowledge.


Canvas 9  The application space I came up with based on my reading. I tried to choose one standout application that might push the technology forward, but none of them jumped out as a good candidate.

sufficiently worth pursuing in the short term that it could eventually be applied more broadly in the long term. However, none of the applications seemed right. They were all esoteric, hard to explain, and niche. By this stage, I’d downloaded about a hundred papers, skimmed maybe half of them, and properly read maybe ten. I’d spent quite a bit of time trying to find papers on the application side that made clear to me how their problems could be solved by neuromorphic photonics’ features. I couldn’t find them. Without doing the design engineering myself—which I didn’t have the time or the expertise to do—I could not move forward.
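As an aside, the MAC (multiply and accumulate) operations mentioned earlier are simple to sketch: each artificial neuron computes a weighted sum of its inputs, one MAC per connection, so a layer of N neurons with M inputs costs N × M MACs. That is why MACs per second (and per joule) is a natural, if inconsistently reported, figure of merit for this hardware. The code below is a minimal illustration, not any group's benchmark:

```python
# A multiply-and-accumulate (MAC) is the workhorse of neural processing:
# one multiplication plus one addition per input connection.

def neuron_output(inputs, weights):
    """Weighted sum of inputs: one MAC per connection."""
    accumulator = 0.0
    for x, w in zip(inputs, weights):
        accumulator += x * w          # one MAC
    return accumulator

def layer(inputs, weight_rows):
    """A layer of neurons: len(weight_rows) * len(inputs) MACs in total."""
    return [neuron_output(inputs, row) for row in weight_rows]

# Two inputs feeding two neurons: 2 * 2 = 4 MACs.
print(layer([1.0, 2.0], [[0.5, 0.25], [1.0, -1.0]]))  # [1.0, -1.0]
```

A photonic or analog implementation aims to perform all of those MACs in parallel in the physics of the device, rather than one at a time in a loop like this.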

Redirection

At this point, I realized I needed help and contacted the Princeton group. I'd been in touch with them before, and they'd been helpful, so I asked if we could have a video call. This was a fantastically productive hour, partly because I was well prepared for it. I'd already done many days of research, reading, and thinking about the technology, so when I talked to Mitch Nahmias, the senior doctoral student in the group, I was generally able to understand what he was saying and ask sensible questions. In addition, it turns out, he'd had some of the same concerns about the applications they'd been focusing on. However, he said, things had moved on quite a bit in his thinking between the main papers I had read (published just a few months before) and now. There were two major issues that had radically changed their focus. The first was that some research had come out of the Massachusetts Institute of Technology (MIT) in 2014–15 that used ordinary CMOS (complementary metal-oxide semiconductor) processes—the same as we use to fabricate conventional electronics—to make silicon optical circuits.11 This was important, because photonic devices had, up until that point, always been produced separately using different materials and fabrication equipment and only then combined with electronics if necessary. Although the optoelectronics need not be that expensive (they have been used in large quantities for decades in the telecommunications industry), and electronics are (famously) cheap, integrating them had been prohibitively complicated for most applications. By "hacking CMOS," the MIT team was able to combine the two types of device in a single chip, thus dramatically lowering the cost. The second was that, just a few months previously, Google had published a paper showing how they had deployed their new tensor processing unit

(TPU) chip for one of the most talked-about applications in computing over the last few years: deep learning.12 Specifically, they showed that they could significantly improve the way their data centers could respond—in terms of speed of response and power expended—to queries that required inference: the "reading out" of deep-learned knowledge. This is useful for a whole class of increasingly important applications like speech and image recognition. The TPU itself is a chip that is specifically designed to perform a convolution: a function that is not strictly neuromorphic (at least, by my definition). However, to Mitch, this was exactly the kind of problem that neuromorphic photonics might be ideally placed to address. On top of this, he told me about several start-up companies—both those using conventional free-space optics and those using photonics—that were now targeting this market. This was an important redirection and highlighted a significant flaw in the technical literature: its slow speed. Although, in theory, we can now publish ideas the second we have them, in practice, the review process in technical publishing means that there is a time lag of weeks, months, and often years between an idea forming in someone's head, work being done in the lab, and outsiders being able to read about it. This is not a bad thing: the scrutiny (imperfect as it is) that papers go through is there to prevent the publication of bad and fraudulent science. But it does slow things down. Talking to someone working directly in the field allowed me to get up-to-date information in a way that would not have been possible otherwise. Smart doctoral students (especially those toward the end of their PhDs) can be ideal sources of information, both because they have to keep up with the literature and because they are also likely to attend conferences and talk to others about their work, because they are on the hunt for their next job.
This gives them a really good overview of what's going on. Of course, you might get the same from the head of the research group (plus a lot more history and context), but the "big professor" is much less likely to be able to make time for you unless you have a lot to offer in return. From a process perspective, I took my notes of this and other conversations on scrap paper, then scanned them and added them to the Conversations with Researchers page of my online notebook (I used Evernote for this). I also pasted in e-mail conversations so that—when writing this narrative about the research process—I could easily review everything in sequence without having to flip back and forth between my notebook and mail.

Moving forward

Based on this conversation, I set out to rethink my Potential Applications Canvas (see Canvas 10). My first step was to follow up on all the information I'd been provided with. I went through the start-up companies' websites to see what they were up to and read some pieces written about them in the press.13–15 I downloaded papers by the key scientists involved and searched for patents they had filed. Keywords like tensor processing unit and zero-change photonics were added to my list, along with deep learning and convolution, in combination with other hardware-related search terms.16,17 I also made sure to do some reading up on deep learning in the trade press, to give me some of the basics before attempting to read technical papers on the subject. Some of this material I found via ENGins.org, and the rest in some trustworthy electronics and computing publications. I discovered some interesting facts during this process. There were, indeed, three separate companies spun out from the MIT team that developed zero-change photonics. Two focused on the neural processing I was interested in, and a third focused on fast optical connections between electronic chips (optical I/O). This got me wondering about the differences between the start-ups, so I searched to find who owned the related patents.18,19 Reading through all the various sources, I could see one piece of the puzzle was missing: light. The technology allowed photonic circuits to be laid out for routing and modulating laser beams, but there seemed to be no way to generate the light in the first place. I e-mailed three people who I thought could help me confirm this, and Luca Alloatti, the inventor of the toolkit that enabled zero-change CMOS, did just that. Essentially, he said, you would need to have an optical power supply, in the same way as you have an electrical power supply. Instead of using wires to connect these up, you would use fibers. This raised a whole new set of questions.
The original paper from the Princeton group had the neural outputs created on demand by a laser. This can be a power-efficient way of handling light: only create what you need. If you use an optical power supply, then you need to produce light whether you need it or not and just block/unblock the beam as needed. In other words, it would work, but it wasn't what the Princeton researchers envisaged when they published their papers, and I doubted it was the basis on which they did their power calculations.
98 | EXPLAINING THE FUTURE


Canvas 10  After discussions with Mitch Nahmias, I re-jigged the Potential Applications Canvas. Specifically, the inference processing for deep learning was added, and manufacturability and cost criteria were updated.

There is also the issue of cascadability: the ability to put one circuit after another after another indefinitely without any loss of performance. In the original papers, each neuron triggered its own new laser output, which would feed into the others in the network. This meant there was no drop in the signal strength from the input to the output of the chip. I could imagine ways that you could do something similar using an optical power supply (perhaps having a separate input of light for each of the handful of neural layers), but, again, it was not clear to me how this was likely to affect the power efficiency. I put out some more requests for information, including e-mailing Mitch at Princeton about the optical power and cascadability issues. In the end (he was busy, so it took a couple of weeks), it turned out that—in fact—using external sources was more rather than less power efficient, because the laser systems used could be optimized for efficiency in a way they couldn't be in the system described in their original paper. In the meantime, I'd spoken to Jeff Shainline, a researcher at NIST in Boulder, CO, who'd worked on the zero-change photonics project but was now pursuing superconducting neuromorphic photonics. His view was that the performance of the zero-change photonics platform was just good enough to help the electronics industry by creating integrated optical input ports for chips, but not nearly good enough for the deep-learning inference application I was interested in. Of course, I had to take into account that his technology was potentially a competitor to the one we were discussing. He believed that he had a technology that would massively outperform all the other competitors, if only engineers would accept the idea of having subsystems that had to be run at the very low temperatures needed by superconductors. In fact, he made a compelling case for the medium-to-long term. 
However, although it was definitely very interesting, it did not seem immediately ready for commercialization.

Moving the goalposts

It was then I realized that I had moved so far from my original starting point that the lovely comparison of performance metrics I'd found in the original Princeton paper was no longer in any way relevant. Worse, it was becoming clear that I was never going to be able to compare the various kinds of emerging systems directly, at least, not in the time I had available. The power-efficiency numbers from the Google TPU were not

comparable with any numbers I already had. Further, when a paper came out on the performance of another device with a similar purpose—the NVIDIA Tensor Core16—this again presented the data in a different way. Not only were the numbers noncomparable, but various papers argued that the way others had judged performance was invalid. In other words, it would not be possible to compare like with like without getting into the theory (to determine which of the methods of judging performance were most appropriate) or running experiments (in real hardware or in simulation) to see which hardware was the fastest and consumed the least power for a given task. Neither of these met my brief or, for that matter, matched my skill set. The lack of clear performance metrics was not particularly surprising, given the complexity of the topic, but explicitly recognizing that I would not be able to compare the technologies directly was an extremely important stage in the research. Like any scientist or engineer, I wanted to find an unequivocal answer to the question I'd posed. Sometimes, the answers are simply not available in the time you have. Accepting that was key to finishing the project. Despite all this uncertainty, it was clear that deep learning was the right application for me to study. It was a potentially major market in an important, visible area where photonics could theoretically offer genuine benefits. Not only that, but the high-profile nature of deep learning (one of the most talked-about topics in technology in recent years) meant that—if it were viable technically—there should be no difficulty getting the project funded. Indeed, this had already been proven by the start-ups. Finally, if silicon photonics could be used successfully for this application, it could be the launch pad for an entirely new paradigm of combined photonic/electronic computer chips. 
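A toy calculation shows why such headline numbers resist direct comparison (the figures below are hypothetical, not taken from the TPU, the Tensor Core, or any photonic chip): the "winner" flips depending on whether you rank by raw throughput or by energy efficiency.

```python
# Two hypothetical accelerators: which is "best" depends on the metric chosen.
chips = {
    "chip_A": {"ops_per_s": 90e12, "watts": 250},  # fast but power hungry
    "chip_B": {"ops_per_s": 40e12, "watts": 75},   # slower but frugal
}

def throughput(c):
    return c["ops_per_s"]                # operations per second

def efficiency(c):
    return c["ops_per_s"] / c["watts"]   # operations per joule

best_by_throughput = max(chips, key=lambda k: throughput(chips[k]))
best_by_efficiency = max(chips, key=lambda k: efficiency(chips[k]))
print(best_by_throughput, best_by_efficiency)  # chip_A chip_B
```

With each source quoting whichever metric flattered its device, and papers disputing the metrics themselves, only theory or side-by-side experiments could have settled the question.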
At this point, I managed to arrange a background (non-attributable) discussion with a friend who I'd known almost since the beginning of my career. He'd been doing a PhD around the same time as I'd started my tech journalism career, and I'd noticed a recent paper of his that led me to believe he kept tabs on technology in this area. He worked for a major research organization related to computing (which was why I couldn't talk to him on the record) and was incredibly smart, trustworthy, and—crucially—old enough to have seen many promising alternative computing technologies fall before the incredible might of digital electronics.

Before our discussion, I'd forwarded my friend (let's call him Steve) various papers, including one that had come out of MIT and had been recommended to me by Mitch Nahmias.17 The first two authors had gone on to found the competing start-ups in that space, so this paper was probably the best guess that an outsider might have on what these companies might end up doing. I'd been busy with other work, so I'd skimmed it. However, when I talked to Steve, I realized I should have read it more carefully. Specifically, he'd noticed that, because of various problems with using optical modulators to store the neural weights (these modulators were physically big and power hungry when built using zero-change CMOS), this paper envisaged having the weights be fixed. This meant the person designing the system would have to determine the weights needed for a specific classification task and write them on the chip, and the chip would operate (forever) based on these. This had the advantage of maximizing energy efficiency and minimizing size, but only by making the chip impossible to reconfigure. Steve talked about this problem from the perspective of a self-driving car manufacturer. Although you wouldn't want a system that was learning what obstacles look like on the fly as it drove around, you would want it to upgrade itself periodically (in the way software does) during an overnight charging cycle. If the weights were permanently fixed into the hardware, this could not happen. He also pointed out that—for inference in a car—you would have thermal issues. In the US, a car would have to operate at the very low temperatures of the Dakotas or the very high temperatures of Death Valley. Because photonic circuits are sensitive to temperature, this would require diverting energy to thermal management (keeping the temperature of the chip constant), thus reducing the overall energy efficiency. 
In the case of very big systems like data centers, he doubted there was enough of a market to be worth addressing. Google, after all, would be making its own chips.
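To make the fixed-weight limitation concrete, here is a minimal sketch (in Python, with made-up numbers; the real chips encode weights in optical hardware, not software). Inference runs fine, but there is nothing corresponding to a weight update:

```python
# Hypothetical fixed-weight layer: the tuple stands in for weights
# permanently written into the hardware at design time.
FIXED_WEIGHTS = (
    (0.5, -0.2, 0.1),
    (-0.3, 0.8, 0.4),
)

def infer(x):
    """Forward pass only (weighted sum plus ReLU). There is no train()
    and no way to load new weights, so no overnight upgrade is possible."""
    return [max(sum(w * xi for w, xi in zip(row, x)), 0.0)
            for row in FIXED_WEIGHTS]

print(infer([1.0, 2.0, 3.0]))  # [0.4, 2.5], up to float rounding
```

A reconfigurable design would replace the constant with writable storage, at a cost in size and power, which is exactly the trade-off the MIT paper made in the other direction.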

Toward the finish line

While I was assimilating this information and reading things again, I realized that I would have to go back a couple of steps. I'd tied myself in knots because I'd started out thinking about pure neuromorphic engineering using complex optoelectronics implementing fully fledged neural functionality. As my research had progressed, I'd ended up focusing on something completely different: a network with a much simpler function that

was much easier to fabricate but might not have the long-term potential of the more sophisticated approach. So, I went back to the drawing board and redid the Technology and Feature Canvases from scratch, this time focusing on the MIT paper (see Canvas 11 and Canvas 12). I did the best I could to pull out the performance metrics that I thought would be relevant. These researchers gave their most useful comparisons with digital technology in terms of the orders of magnitude (powers of ten) improvement they expected. Less helpful was the fact that their metrics were dependent on how exactly the learning was configured: there were no nice, easy-to-compare numbers. The next step was to switch my point of view and work through the Application Canvas (see Canvas 13). Here, I had to be technology agnostic, just focusing on what I could determine about the needs of deep-learning inference. I eventually settled on a good review paper on the subject.20 By now, I had a clear picture in my head of what was needed (at least for the level of detail I wanted in this case study), so this took relatively little time. The Application Competition Canvas came next. Because I'd taken such a meandering path through the literature to get to this point, I found I'd already worked through much of this. I'd found start-up companies working toward using free-space optics for the application, had looked at the superconducting work, and had downloaded papers on various other approaches. However, as I was going through this, I realized I hadn't looked to see whether there was anyone trying to apply analog electronics to the problem. I then remembered that, at UC Berkeley in the late 1990s, there had been some work using cellular neural networks for image processing tasks, and I wondered if it might be applicable. However, no one else had thought so; I couldn't find any papers in this area, so I finished up (see Canvas 14). 
At this point, we had a two-horse race between digital electronics and zero-change photonics, so I used two Timing Canvases (one for each technology) to try to determine which was most likely to win (see Canvases 15 and 16). While I was completing the silicon photonics canvas, I started thinking about a long set of tutorial slides I'd read that covered all the ways you could optimize a deep-learning system.21 This came out of the same group that had written the survey paper on deep learning I'd looked at earlier. Although much of it was too technical for me to follow in detail, one thing became very clear: there were many, many ways (dozens) to improve neural net performance through better design. With


Canvas 11  Started from scratch again, working on the zero-change photonic inference chip discussed by MIT researchers. (The term 5Ws and H refers to the questions Who?, What?, Where?, When?, Why?, and How?.)


Canvas 12  From the paper, I determined (as far as possible in the time) the positive and negative features of the technology.


Canvas 13  To make sure I understood the needs of the application (independently of any technology I might want to apply), I switched focus to the needs of inference in deep learning. (The term 5Ws and H refers to the questions Who?, What?, Where?, When?, Why?, and How?.)


Canvas 14  Shown are all the competitors I could find in this space. If I could identify why one of them couldn’t work in the short term, I put a red dot on the relevant criterion and then moved on to the next one. If not, I filled in the details for all the criteria.

Canvas 15  Zero-change silicon photonics could benefit deep learning if it were successfully commercialized now. The question is whether it could retain its advantage.


Canvas 16  For digital electronics to succeed, you could argue that all the technology's proponents have to do is keep plugging away. If zero-change silicon photonics can't keep up, then digital will slowly erode any advantage it had through incremental improvement.

incremental development, there might only be a factor of 2 improvement here and a factor of 4 there, but these could mount up quickly. Hardware with the potential to be 1000—or even 10,000—times faster or more power efficient could quickly lose its edge. Of course, if the silicon photonics devices were improving at the same rate, it wouldn't matter. You could argue that the same techniques identified to optimize deep-learning systems in general could be applied to those using silicon photonics. However, there are reasons why silicon photonics might have difficulty keeping up. First—by definition—the fabrication processes on which zero-change photonics devices rely were developed purely to optimize for electronics. Photonic circuits are so intrinsically different that any improvements that contribute to keeping electronics on track with Moore's law are unlikely to help. Although some success is possible, it seemed to me that relying on this technology was like buying a model airplane and using the pieces to build a car. You can do it, but it will be frustrating and difficult, and you'll be very limited in what you can do. Worse, when an upgraded version comes out, it's unlikely to help you in any way. This is not to say that the integration possible in zero-change silicon photonics is not incredibly useful. It's just not clear that deep learning offered the best opportunity to exploit it. Looking at Canvas 16 makes it clear why digital electronics has the advantage. There's no time lag. Digital chips with convolution architectures already work. The technology is well understood, and now it's just a question of plugging away at optimization: one tweak gets you better accuracy here, another, better energy efficiency there, and yet another speeds up the system. Some of these incremental changes will happen in software and so will be almost free. Others may be taken care of in the next iteration of the hardware. 
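The arithmetic behind that erosion is worth making explicit (the improvement factors below are hypothetical): small gains compound multiplicatively, so roughly ten factor-of-2 refinements are enough to cancel even a 1,000x head start.

```python
import math

head_start = 1000.0   # hypothetical advantage of the novel hardware
gain_per_tweak = 2.0  # each incremental digital improvement

# How many modest tweaks until the incumbent catches up?
tweaks_needed = math.ceil(math.log(head_start, gain_per_tweak))
print(tweaks_needed)  # 10

# Mixed factor-of-2 and factor-of-4 gains close the gap in even fewer steps:
cumulative = 1.0
for gain in [4, 2, 4, 2, 4, 2, 2]:
    cumulative *= gain
print(cumulative)  # 1024.0 after only seven tweaks
```

The point is not the exact numbers but the shape of the curve: incremental improvement is geometric, so "a factor of 2 here and a factor of 4 there" overwhelms a fixed one-off advantage surprisingly fast.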
At this point I was content that I had as clear a picture of what was likely to happen as I was going to get. The research and analysis phase of the project was over.

Wrapping up

A few quick notes before we conclude this section. First, you'll notice that there were several questions that I mentioned in Chapter 1 that did not come up here. As I said, every case study is different. I wasn't trying to

write a report that would determine how people invested their money, so going into too much detail on the business side was not appropriate. Also, the final two competitors used both the same materials and the same manufacturing process, so issues related to sustainability, supply chain, regulation, waste products, and so forth would be the same for both digital and zero-change silicon photonics. There was an ethical issue that I might have addressed: the use of deep learning and other machine intelligence techniques to allow big companies and government to spy on people. This is a great example of a bias or ethical choice that was "baked in" to the analysis I did, and one you probably didn't notice. However, I've justified this to myself because I'm not the one building the systems, just comparing the technologies that one might use to build them. Someone deciding on whether to start (or join) a company in this space, and so get involved in creating or deploying the technology, really should consider these ethical issues. As it happens, I did have discussions about this during the research period: one of the engineers I spoke to was concerned about the impact deep learning might have on privacy. These conversations reminded me how easy it was to forget about ethics when faced with an intellectually interesting technology. Second, research goes out of date—sometimes, very quickly. The work I described here was done in a roughly six-week period from the end of March to the beginning of May 2018. By the time this book is published, this research will be a year old. By the time you read it, it may be several years old. So, whether time has proven me right or wrong in terms of the conclusion of my final report (which is coming up at the end of the book), bear in mind I was doing my best with the information I had at the time. Finally, the narrative you've just read is really not the kind of thing you would ever want to share with anyone, at least, not on paper. 
You might tell bits of the story to colleagues or sources to explain where you've got to with your thinking and as a prelude to further discussion, but only if they've already got a background in the subject. When reading your analysis, no one really wants to hear about the twists and turns, the false starts, the confusion, or the backtracking. They simply want a linear argument that explains the whole story, starting with why it's supposed to be interesting to them in the first place. That's what we're covering in the next three chapters. You can see the results of using these techniques—a short annotated report—in "Case Study Part II: Report."

CHAPTER 5

Audience and Explanation

Before we get into the guts of the structure of a report, we have to think about the nature of communication: what it is fundamentally. In electrical and electronic engineering (EEE), we use something called communication theory for this, and the ideas upon which this theory is based are really useful in thinking about human communication. The key system elements are the transmitter, the receiver, and a noisy channel. In human communication, we can think of the transmitter as being the person who is giving the talk or writing the report (you), the receiver as being the audience, and the noisy channel as being various problems that get in the way of successful transmission (like poor punctuation and grammar or being inaudible). To take this metaphor a bit further, we can think about bandwidth as the amount of information that you can—in theory—transmit via a given channel. Your goal is to optimize your use of this bandwidth while preventing noise from distorting the message. In human terms, noise can be anything: distracting thoughts, a headache, the constant buzzing of your cell phone, and even annoyance with bad writing. All these things can stop a message getting through. You also have to think about the receivers (the audience) and their ability to take in your communication. In EEE, we know exactly how the receiver will decode a signal, so we design it to optimize that reception. With human receivers, it tends to help to design for what we may see as their weaknesses: lack of knowledge, lack of interest, limited attention

Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies. Sunny Bains © Sunny Bains 2019. Published in 2019 by Oxford University Press. DOI: 10.1093/oso/9780198822820.001.0001


span, poor eyesight, or color blindness. All of these things could prevent your audience from receiving and understanding your message. In this chapter, we are going to focus on crafting your message so it is compatible with your particular human audience. It has to be that way around because, in most circumstances, people are not given the opportunity (as they are in communication system engineering) to spec out both sides: you’re stuck with the audience you’ve been given. In Chapter 6, we’ll focus on structural issues; in Chapter  7, we’ll look at ways to use good writing to keep the audience on your side.
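For readers who want the engineering behind the metaphor: in EEE, the Shannon–Hartley theorem quantifies exactly how bandwidth and noise limit a channel. A quick illustrative calculation (textbook-style figures, not from this book):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-style channel at 30 dB SNR (a power ratio of 1000):
capacity = channel_capacity(3000, 1000)
print(round(capacity))  # 29902 bits per second

# Halve the signal-to-noise ratio and the ceiling drops accordingly:
print(round(channel_capacity(3000, 500)))  # a lower ceiling with more noise
```

The human analogy holds: you cannot beat the channel by talking faster; you can only widen the bandwidth (better medium, better structure) or cut the noise.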

Audience

I don't think I've ever met anyone—in academia, industry, or elsewhere—who has not experienced bad communication, whether in the form of incomprehensible reports, deathly dull lectures, mystifying presentations, or assignments given without sufficient background to make it possible to get the job done. In case no specific examples from your own life have yet jumped into your mind, let me give you a few from mine. Some academics were trying to encourage the development of projects across different departments. The first step in the process was for them to understand each other's research, so they could identify common interests and complementary skills. To this end, they ran a seminar series so that research group heads could share their ideas and approaches. After the first five minutes of one particular talk, many people were completely lost. Someone asked the speaker (OK, it was me who asked) what audience the talk was intended for. His answer was pretty shocking: "Audience? I didn't give the audience a second thought. This is just the talk I gave at the XYZ conference last week." Another time, at an English-language conference in Germany, a professor stood up and started speaking in German. I asked the conference organizer what was going on, but he was baffled. Apparently, the professor had said something in German about simultaneous translation (even though there was clearly no translation equipment around and he had already been told that the conference was in English). There were some Japanese students there, who—because of some vocal similarity between German and English to their ears—apparently didn't realize for some time that the language had changed. They just thought the speaker's English was really bad (or their own ability to understand English was much worse than they had thought)! Even more annoying than that, he answered questions in more or less perfect English.

A final story is as follows. A young researcher gave an internal seminar one day. He’d obviously thrown everything together at the last minute: his slides were ugly and full of mistakes, he had almost no introduction to allow the audience to get into the topic, and he kept having to answer questions and go backward because he missed topics that were crucial to the understanding of his work. It was a complete mess, and the audience ended up feeling that, by the end of the talk, they actually understood less about his work than they had at the beginning. What all these stories have in common is that the speakers involved demonstrated an obvious disrespect for the people who came to hear them. They did not consider whether or not they had wasted the audience’s time, whether the goals of the audience had been met, or whether the audience had gained any understanding whatsoever of the subject. The speakers only thought about their own time, unwilling to do any more than the most basic preparation. One might wonder, why did these speakers bother to turn up and speak at all? What was the point of their standing up in front of an audience they didn’t care about and who, in turn, would get almost nothing from the experience? For people at the top of the food chain in academia (yes, it happens in industry too, but less often), the problem may simply be that they do not see the communication of ideas as an end in itself but rather as a means to an end: in other words, they don’t care about the listener/reader; they only care about the prestige or other benefit (like a fee, an expenses-paid trip, or a favor owed) that comes with that particular presentation/publication.

Showing respect

In the real world, this attitude to communication does not get you very far, and even less so if the people with whom you are trying to communicate are above you on the food chain. Reports in the corporate world tend to flow up the chain of command. You report what's going on in your domain to your bosses, and they use this as part of their report on the wider picture for their bosses, and so on. So, reports should not only demonstrate your competence at your own job but make your manager's job easier. Bad reports and presentations make you look arrogant, incompetent, or both. In other words, to communicate successfully, you either need to care about the people you are communicating with (and want to do them a service of some kind) or have some stake in persuading them of your

position. Without this intrinsic motivation, the whole exercise becomes selfish, pointless, and—often—counterproductive. People are less likely to give investment, jobs, promotions, and so on to people who bore them rigid or otherwise waste their time. So, what exactly is good communication? It involves achieving a number of different things at once. First, you need to be trying to communicate something that is of value to the people you are speaking to (and, to keep you motivated, there should be some value to you in reaching that audience . . . even if it's just keeping your job). A report may be perfectly clear but, if it is on a subject that your audience doesn't care about, it is pointless. Part of your job, therefore, is explaining to people that subjects they don't view as relevant or interesting to them actually are. The next part of good communication involves explaining technical (or other) issues so that your audience actually understands. No matter how interesting your subject is, if your audience cannot follow your argument, the whole business will have been a waste of time for them. You must explain things properly and remember that what's obvious to you isn't always clear to everyone else: the research and analysis you have done will have made you an expert in your subject. The whole point of your report is to enlighten people who don't share that expertise. A critical element of good communication is preparation. Being able to communicate well is a process, and if you leave out some of the steps involved, you are likely to make major errors. One of those steps is giving yourself enough time to go back to your material after a break, so that you can then "see" those mistakes (as discussed in Chapter 4, "Step 3: Fixing your analysis"). This time, however, you're not trying to determine whether you came to the right conclusion but instead whether you adequately explained your logic and reasoning. 
How long the break should be will depend on you and your level of experience. If you really know what you're doing, the break might be as short as twenty-four hours. More often, I would suggest, you will need days or weeks to get enough intellectual and emotional distance from your work. We'll come back to the politics of different kinds of readers later, but, for now, there's just one thing I want to get across. Whatever you are working on—a report, a paper, a talk, a proposal, or a video—you must approach it with the audience foremost in your mind. If you can't force yourself to genuinely care about really communicating with them, you need to somehow fake it and do the work anyway.

Preparation

Second-guessing who the audience is and what they want is one of the most difficult jobs in communication. You should begin each project by writing down as much as you know about the people you are trying to reach. Start with the basics and then move on to more advanced issues that relate to your specific field. What kinds of jobs do the people in the audience have? Are they technical people or managers? Are they in industry or in finance? What educational level have they attained? Are they entrepreneurs with no degrees but lots of companies, PhDs with no real-world experience, or somewhere in-between? Are they students or senior academics? Are they engineers with lots of practical technical experience and knowledge but few paper qualifications? Whatever education they have, is it in your field, a related field, or in something completely different? Is their job related to your field? Why are they likely to want to read your paper/attend your talk? What do they think they might get out of it? What do you think they might get out of it? These are mainly first-order questions: questions directly about the people you are going to try to get through to. Depending on the circumstances, there are lots of ways you can get the answers to these questions. Say you've been investigating a new technical avenue that might be strategically important for your company. The people in the management team that is going to consider your plan come from multiple departments, with multiple skill sets: each person has some expertise relevant to what you're proposing, but no one on the team has the entire picture. You're sitting down to write your report. How do you start? First, consider what you know about the specific individuals involved. Perhaps you already know quite a bit about their background. Perhaps you can find out a bit from your colleagues, LinkedIn, or other sources. Maybe you will try to get a few minutes to talk to some of the team members in advance. 
You need to be able to think through the benefits and challenges of the proposal from their point of view, and in language they can understand, in order to persuade them. Thinking through the background of the individuals who will be at the meeting will also help you figure out what to say about these issues and at what level of detail or explanation.

Multiple audiences

It is possible to handle two or more audiences at a time, but needing to do so will have an impact on the structure of your report. Let's assume you're writing something long (say 5,000+ words). Management will likely be mostly interested in the headlines, so they may not want to read all the details (or be qualified to). For them, you can start with an executive summary and include only the high-level argument there. This is where they will decide whether it's worth reading further. The main report with the technical details will be aimed at the engineering team, as only they will have the expertise to pore over your logic and evidence. To make the whole thing coherent, you need to cross-reference the two, showing where the conclusions in the executive summary are supported by the main body of the report. You can do this by simply adding cross-references like "Section 4 of the report shows that …" and then highlighting the conclusion that Section 4 came to. This gives managers the option of skimming through the summary and then asking one of their team to review Section 4 to ensure that it's credible. Another byproduct of having two different audiences is that you may need to create two versions of (say) a figure that explains some key concept: one aimed at people who are only interested at a strategic level, and another for those who want to know the technical details. If you do this well, it should not look like you are showing the same figure twice. Rather, the introductory figure (the one aimed at management) will help ease the technical people into the version they see later on.

Purpose

As well as figuring out who the members of your audience are and what they are expecting, it's important to work out what you want to achieve. Are you trying to persuade your audience to take a particular position based on your research? This would be the case if you were making some kind of proposal to a potential customer (or a funding agency, or people higher up in your company, etc.). If you're a student, you're probably trying to persuade your audience (whether that's a supervisor, reviewer, or examiner) that you've made some worthwhile contribution to the field.

Not everything comes down to persuasion, of course. Sometimes communication is more about education: perhaps you are simply trying to show how emerging technologies may impact a particular industry. Your goal may even (in part) be to entertain or move your audience. If you’re in a session full of three-minute pitches, one of your goals will be for people to remember you: the best way to do that may be to make them laugh, cry, or get angry. In other words, make an emotional pitch. Going into this in depth is beyond the scope of this book, but if that subject interests you, I can recommend Made to Stick: Why Some Ideas Take Hold and Others Come Unstuck, by Chip and Dan Heath.

In any case, whatever you are trying to achieve, you need to have this goal at the front of your mind before you write a word. You also need to know what story you are trying to tell. If you are trying to persuade someone, you really need to know what you’re trying to persuade them of. Maybe it’s that a particular technology is important and that the company should be investing in researching it. Or perhaps that’s already been decided upon, and you’re trying to persuade the higher-ups that one particular approach to it is better than another. If you’re trying to educate, it’s the same thing. What do you want them to understand? How the technology works? Who the players are? What the market is? What problems are likely to arise? What problems have already come up and need to be solved? You need to have some kind of agenda for communicating with whomever it is. Without one, what’s your motivation for communicating in the first place?

OK, let’s get more specific. Let’s say your company has developed some kind of medical imaging system that you want to promote. First, ask yourself who your target audience is. Angel investors and venture capitalists? Medical doctors who might want to use the technology in their practice? Hospital administrators who might have to approve any purchase of equipment?
Other medical imaging researchers? What you write will differ depending on which audience you have to address, even if the general topic is the same in each case.

Let’s take the last audience first. If you’re communicating to researchers in your own field, they will want to know how what you did built on research that preceded it (especially since some of the researchers who did the earlier work will be in the audience and will want to ensure they receive proper credit). Competitors will want to know how your approach and results compare with theirs. On top of that, of course, you want to persuade them that your work is “best” (or at least “worthwhile”), in some narrow sense of the word. Maybe it’s that the device is fast, light, and cheap enough to enable handheld applications in the field, even though the resolution isn’t as good as with more expensive equipment in the lab. Your purpose is to persuade your audience that your research has made possible the introduction of an important new tool for diagnosing disease and thus that you have made a contribution to the field as a whole.

On the other hand, you might be trying to reach medical doctors devoted to the diagnosis and treatment of one specific condition. The physicians attending don’t really care about the history of your field or what your competitors are doing. They probably don’t even need that much detail about the subtleties of how your imaging system works. What they want to know is how it will help them to diagnose the disease in question, in what circumstances, and what results they can expect. Also, they will probably want some reassurance that it is simple to use, reliable, and safe. Here, your goal is to convince them to use your new imaging system.

As we discussed previously, your purpose always has to be tailored to your audience, unless you’re publishing for purely academic reasons. In that case, you can think about it the other way around and choose your audience (by picking the right journal or conference for the paper you want to write). The main thing is that you must always begin the writing process by identifying both your purpose and your audience. Only then can you take the next step and start to think about your outline and what exactly you want to say. We’ll talk more about that in Chapter 6, but we’re not finished thinking about the audience yet.

Explanation

Technical explanation is hard enough when those you are talking to are as expert as you are, let alone when they have little or no background in your field. When you start writing, therefore, you have to try to imagine your audience and who they are and then design a technical explanation just for them. To help you, you should start by asking yourself, “What do they know?” and “What do they need to know?”

Even when we try to do this, however, we tend to find it very difficult to get into the heads of people who are not at our own level. We think that, because we know something, anyone else with any reasonable level of intelligence/education must know it too. This is known as the false-consensus effect, and it’s a serious problem in communication. It causes technical people (who work in a bubble of colleagues who all understand the same concepts and jargon) to constantly underestimate how much explanation their colleagues, clients, managers, or friends need in order to understand their work.

Let’s take an example of some work that was done 20 years ago, but which is still relevant to technology today. If I told you I saw someone present a paper on modeling the polarization-hopping properties of VCSELs at 405 nm, you might well think that was boring and esoteric with no practical value. You’d be wrong. Instead, I could have said that I saw a presentation that was one of the enabling technologies behind Blu-ray.

One reason you couldn’t store a whole high-definition movie on an old-fashioned DVD is that the amount of data you can store on an optical disk is related to the size of pit or hole you can burn into it. The more holes you can squeeze into a given area without overlapping, the more 1s and 0s you have. How small a hole you can burn is related to the wavelength (color) of the light: the longer the wavelength is, the bigger the spot. By moving from deep red (780 nm, CD) to mid-red (650 nm, DVD) and finally to blue (405 nm), they could reduce the size of the hole burned (and increase the data storage possible) by a factor of 4.

Unfortunately, at that time, blue lasers were still relatively new and expensive. Researchers had been interested in using a potentially inexpensive type of blue laser fabricated on a chip (a vertical-cavity surface-emitting laser or VCSEL, where the laser beam literally emerged from the device surface). It looked promising, but the polarization of the light was unreliable: it would flip back and forth seemingly randomly. When it was in the wrong polarization mode it wouldn’t read the disk properly, making the system unreliable.
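That “factor of 4” can be sanity-checked with two lines of arithmetic. The area of the burned spot scales roughly with the square of the wavelength, so the density gain in going from a 780 nm CD-era laser to a 405 nm blue laser is about (780/405)². The sketch below is a deliberate simplification (real disks also changed their optics and encoding), and the function name is mine, not an established formula:

```python
# Rough sketch: storage density scales roughly as 1/wavelength^2,
# because the burned spot's area shrinks with the square of the wavelength.
def density_gain(old_wavelength_nm: float, new_wavelength_nm: float) -> float:
    """Approximate factor by which data density improves."""
    return (old_wavelength_nm / new_wavelength_nm) ** 2

print(round(density_gain(780, 405), 1))  # prints 3.7
```

The result, about 3.7, is where the “factor of 4” comes from in round numbers.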
The contribution of the scientists presenting the paper was to model the physics of the blue VCSEL to try to understand why this was happening. If they could do that, potentially, they could fix it.

Let’s break this down. Because I’ve deliberately chosen a historical example, there’s no need to go into a lot of detail about high-density optical data storage. Almost any consumer today will know about Blu-ray Discs. You also don’t need to know anything about the physics or optics involved. I explain each important concept to you step by step: optical data storage works by burning pits; smaller pits mean more data; shorter wavelengths mean smaller pits; blue lasers were tricky to make; and so on. Once you understand all this, you can appreciate the work and why it was (potentially) valuable. What you don’t need to know are the exact equations for anything, how VCSELs are fabricated, or other ways (besides making smaller pits) of improving the storage density on a disk.

Of course, if I were presenting a paper at an optical data storage conference, a lot of this information would be redundant and I could leave it out, as my audience would know as well as I about the math behind higher storage densities. What they might not know is the state of the art in blue VCSEL technology, so I could probably start my explanation there. For this audience, I’d go into more detail.

You could argue that all of this explanation is a bit pointless: all you really need to know is that the work these people did helped to make Blu-ray possible. It’s true, I could just say that, and you might even believe me. But rational people need evidence. They don’t just need a conclusion. They need an argument that they can follow that leads them to accept that conclusion. If they can’t follow the argument because you have left out one or more important steps—for instance, that wavelength/color is related to pit size—they are much less likely to be persuaded.

There are other issues to consider besides technical background when it comes to explaining things well. For instance, you may be aware that your audience is biased against a particular subject, perhaps because of a spectacular failure by a group who did something (nominally) similar in the past. In this case, you need to address that concern directly and explain why the new project will not have the drawbacks of the old.
Alternatively, you may find that your audience is used to understanding a word in a particular way. For instance, many people think of chromatography as a way of separating out different elements from a solution—and therefore as a diagnostic tool—because they did a project on chromatography at school or watched some episodes of CSI. To biochemical engineers, however, chromatography is a purification method. If your audience thinks you are analyzing something when, in fact, you are purifying it, they will undoubtedly be confused by your explanation of what is going on.

One further thing you might consider is age. Even if your audience consists of people from just one engineering discipline, how old the people are may have a significant impact on what they know. Computer science degrees were very different in the 1970s than in the 1990s or the 2010s. Whole fields have come and gone over the decades. Even if people in your audience are experts in their own field within computing, there may be many other areas where they know very little beyond what they did at college. Depending on when that was, this means their understanding could be anything from advanced, to basic, to just plain wrong.

One common misconception is that people mind your explaining things they already know. In fact, they are often very grateful: going over concepts they know validates the knowledge they already have, making them feel more confident that they are following your argument. It also prevents misunderstandings for those who only partially remember the subject. Be careful to get the tone right, however: some audiences can be prickly if they feel patronized. If you think that may be a problem, aim your explanation at the people who you think have the least amount of technical knowledge, but say it in such a way that you imply that you are just reminding them. In a presentation, for instance, you might use phrases like “you may remember” or “as you recall.”

To help you to get your explanation right, having a potential audience in mind is probably not enough. Unless you are quite experienced at communicating with this particular group, that false-consensus effect is still likely to cause you problems and make you start at a level that is too advanced. By far, the best way to avoid this problem is to start by communicating with a real person. Identify someone who has traits in common with your audience (especially in terms of technical background and knowledge of the subject at hand) but has the additional qualities of being helpful and friendly: someone who will be willing to either listen to you talk or read your report and tell you what is understandable and what is confusing.
This reader should be a friend or colleague who’s not involved with the content of the report, and someone you trust to be honest with you. Talk them through your story, possibly using bullets to help guide you, and explain the technical ideas they need to follow your reasoning. During your conversation, ask them to say your argument back to you so that you are sure that they’ve really understood. If one way of explaining something doesn’t work, try another. Take notes on what does work.

To get the most out of readers, you need to treat them with respect. First, accept their criticism graciously! Remember: they are doing you a favor. If they don’t understand something or are confused, don’t assume they’re stupid or not paying attention. Assume that you didn’t explain it well enough, and try to find a way to do it better. Finally, don’t have a big argument with them. If you decide that they’re completely wrong, you can just ignore them (without telling them that). If you don’t keep on good terms with your reader, they won’t read for you again. (And, even if you would never want them to, you don’t want to lose a friend or friendly colleague.)

Jargon

If you take my advice and talk to a nonexpert, the first thing they are likely to point out is your use of jargon: the technical terms that most people outside a given field don’t understand. So, for instance, although most people know what a mortgage is, because it is a term used commonly by the public, only a small minority really understand what a derivative product is: we know it’s a financial term but that’s about all. Every field has its jargon, and it’s a huge barrier to understanding. Part of the problem is that it becomes invisible to us, because we use it every day. We think they’re normal words, and yet, when we use them outside our immediate research area, we can leave people baffled.

Acronyms (phrases that are shortened, usually to the first initial of each word) are particularly troublesome for several reasons. For one thing, they can mean completely different things, depending on which field you happen to be in (and even within the same field). For instance, just within EEE, the acronym PCI can mean peripheral component interconnect, protocol-control information, panel call indicator, and a few more besides. In engineering, it may mean pulverized coal injection. In medicine, it can mean percutaneous coronary intervention. In biology, it can mean potato carboxypeptidase inhibitor. In finance, it can mean payment card industry. Check Wikipedia, and you’ll find a dozen more definitions. The point is that if you don’t know exactly what someone means by PCI and you try to guess, you’ve got a good chance of getting it wrong. As the communicator, it’s your job to stop the audience from having to guess.

Another problem is that sometimes even the writer may not fully understand the meaning of an acronym they’re using but chooses to use it anyway. If others then pick up on the undefined term from the context, what it actually means may eventually get lost in the mists of time. Meaningless and ambiguous terms don’t make for good science or engineering, and even less do they make for good writing. At best, you get silly tautologies (repetitions) like MEMS systems (when MEMS stands for “microelectromechanical system”). At worst, you flummox your audience and can’t even be sure what you’ve written yourself.

So, the rule is to spell the acronym out the first time you use it, unless it is so well known that absolutely everyone knows what it means (like LED). Even then, for consistency’s sake, there’s absolutely no harm in defining it anyway: the number of words that these definitions take up is a tiny price to pay for avoiding confusion on the part of the reader.

A last note on this subject: remember that readers only have so much brain space to devote to acronyms. They can be useful to avoid having to write out some big ugly phrase like vertical-cavity surface-emitting laser every time (VCSEL, pronounced “viksel,” is much nicer). However, if you’re only going to use the term once in your whole document, you have to use the full version anyway, so why bother the reader with a shortened form that they’re never going to see again? So, don’t use acronyms unless you actually need to. The only exception to this is where the acronym is far more commonly known than the spelled-out version but still not necessarily that well known to your entire audience, like HDMI or RAM. Then, you would use both the acronym and its definition, so you can exploit your reader’s existing knowledge.

Of course, not all jargon comes in the form of an acronym and, even if it did, just spelling out what the letters mean doesn’t necessarily help. For instance, what’s an optical correlator? Downstream processing? Lyophilization? Just knowing the word does not necessarily help you unless you are a specialist in these areas: if the concepts are important, they need to be defined for you.
Don’t let the word “define” terrify you: short, functional definitions are what is required, not a whole page of encyclopedic detail. The goal is to give the reader what they need to understand a concept but no more. For instance, if I told you that optical correlators were pattern-matching systems that could be used for recognizing faces and objects, that would probably be enough for you (now you know what they do). I might also add that they can match large images as quickly as small ones, making them more scalable than electronic correlators. You still don’t know how either of these systems works or why. But, in the context of the argument I am trying to make, that may well be enough information.

The philosophy behind the functional definition is that you should only include what the audience needs to know on any given subject. Any more is simply a waste of bandwidth.

Visualization

You have more than words at your disposal to explain your ideas. Many technical people make the mistake of thinking that a good alternative is mathematics, but that’s generally not true. Unless you know very specifically that your readers are in exactly the same field as you and are fluent in the same kind of math, they will probably find words easier to understand. A picture, on the other hand, can be worth a thousand words. Technical people are generally adept at reading graphs and diagrams, and when a good figure is accompanied by some well-chosen words of explanation, the combination can be powerful.

Photos

There is something satisfying about seeing a device or system in real life. The experience can answer some questions and stimulate others you would never have thought to ask otherwise. Seeing a video of a prototype working in the lab, or even just a photograph of it, really can provide a clearer understanding of what is going on.

You need to think, however, about what exactly you are showing, and provide enough context for it to be meaningful. If there are no recognizable objects (e.g. people, houses, hands, computers, coins) in the photo or video, then it may not be possible for the viewer to get an idea of scale. This may prevent them from really understanding what they’re looking at. So, include a familiar, standard-sized item when composing the photo, and if that’s not possible, add a scale bar.

Also, if you’re showing a system or device, think about what it’s doing and the best way to show that. A video could show the process of turning some input into an output, or—if video is not possible—you could photograph various stages and show them in sequence. If it’s enough just to see what the system looks like, think about the best angles from which to photograph it to really highlight its features.

Remember that different photographs (and other visualizations) may be better or worse for different audiences. Strategic-level readers may be more interested in a photo that shows how small the prototype is, while those who are more technical may be especially interested in some innovative component or design element. Think carefully about what will best suit these different sets of people.

Diagrams

Diagrams that help people understand how a process or system works are even more important than images of real objects or places. However—like any other element of explanation—they need to be tailored to the audience to do a really good job. For instance, a diagram intended for an engineering specification is unlikely to be ideal for communicating concepts to nontechnical management, and using it could be counterproductive. When people see jargon they don’t understand, it makes them doubt that they have been successfully following your argument. This doubt leads to dissatisfaction and lack of confidence.

A good diagram is one that—like the steps in a good explanation—only shows the audience exactly what they need to know, with irrelevant detail bundled up and hidden within functional elements. A bad diagram confuses the viewer with extraneous technical terms. Again, the best way to succeed with this is to think about what your audience knows already and what you want them to know, and fill in any information that they need to get them to the finish line. If your audience only needs to know what a system element does, then don’t clutter up their brain with how it works.

Graphs

Graphs are probably the most misused visual element in technical reports, because scientists and engineers are taught to use them for analysis rather than communication. When students are at high school and early university, the graphs they present in lab reports are simply there to show that they did the experiment and that they got sensible results. As they go through their careers, building and testing things in the lab and the workshop, graphs become a tool of exploration: graphs represent a way of analyzing, understanding, and archiving data for themselves, rather than for explanation. These graphs might, at some point, be shown to a colleague or a boss, but that’s not what they were designed for.

So, the first rule of using graphs in reports is to do so only if the relationship you are showing is genuinely critical to your explanation or argument and your audience is likely to find it conceptually difficult, interesting, surprising, or controversial. If the relationship is obvious and easy to understand, then the graph adds no value and is simply a distraction and waste of space.

Second, think carefully about how you display the relationship. A really good book on communicating with data and, to a lesser extent, diagrams, is called Good Charts: The HBR Guide to Making Smarter, More Persuasive Data Visualizations by Scott Berinato, published by Harvard Business Review. I highly recommend—if visualization is important to your work—that you buy a copy, but I can give you my take on the highlights here. As with almost everything that’s true of communication, the advice the author gives probably sounds trite and obvious when you boil it down, but that doesn’t stop people breaking his rules every day.

First—and this should go without saying—a good visualization should not be misleading or inaccurate: that means (among many other things) that you should not be manipulating the y-axis to make some comparison look more impressive. In addition, the graph should answer a sensible question in a way that is more easily or powerfully communicated visually than verbally. Also, images should follow visual conventions (e.g. red for stop and hot, green for go, and blue for cold) and should highlight what is most important. Finally, they should not be cluttered or continuously repeat information, nor should you have to read the whole paper in order to understand what the figure is showing you (they should be self-contained).
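The y-axis point is easy to demonstrate with numbers rather than a picture. In this sketch (the values are invented purely for illustration), two results differ by about 5 percent, but starting the axis at 90 instead of zero makes the drawn bars differ by a factor of two:

```python
# Two hypothetical measurements differing by about 5%.
result_a, result_b = 95.0, 100.0

# Honest chart: bars start at zero, so drawn heights match the data.
honest_ratio = result_b / result_a

# Misleading chart: the y-axis starts at 90, so each bar is drawn with
# height (value - 90), and the visual difference is wildly exaggerated.
axis_floor = 90.0
visual_ratio = (result_b - axis_floor) / (result_a - axis_floor)

print(round(honest_ratio, 2), round(visual_ratio, 2))  # prints 1.05 2.0
```

A reader eyeballing the truncated chart would come away thinking one result is double the other, which is exactly the kind of inaccuracy this rule is meant to prevent.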

Videos

There is a big difference in the way still images and videos are viewed. Videos can have a longer-lasting and more profound impact on the understanding of the viewer than a diagram or photo, especially if you ensure that they tell a story (showing sequentially how something works or happens). But the added dimension of video—time—is also a drawback from the viewer’s perspective. With a diagram, they know at a glance what they are getting and can decide how long they want to spend studying it. With a video, they are locked into whatever timeline you have chosen for them, and they have no idea what they are going to get. Also, unlike text, a video is not searchable. This, again, undermines confidence that it will be worth their time.

There are various ways around this. If you are writing a report and link to a video that you hope your reader will watch, also include a representative still from the video—something that really encapsulates what it is about—as a figure within the report. Write a caption (see “Figure captions and legends”) that describes the video well enough so that the reader feels they have the gist of what they’re missing and can decide whether it’s worthwhile stopping to view it. (Don’t forget to include both the link and the length of the video.) You should use the same image as the video thumbnail (the still image that appears before you start the video playing), rather than choosing some logo or other bit of branding. The echo of what they’ve seen in the report will encourage the reader, reminding them of their motivation to view it in the first place. The branding can be included, of course, just not there.

Also, remember that not everyone is set up to watch videos in an office environment and may not have headphones handy. For this reason, if sound isn’t really important, explanatory subtitles are often better than (or can be used in addition to) voice-over narration.

Finally, you need to consider what happens if members of your audience don’t bother to watch the video, either because they can’t (they are reading on paper, don’t have an internet connection, etc.) or because they won’t (don’t want to). We’ll come back to this issue in “Figure captions and legends.”

Different audiences, different images

If we go back to the idea of a report where the executive summary will be read by one set of people (say managers and/or financial people), and the main body read by another (say technical people), then—by extension—it’s likely that you will need two sets of images, too. If this is handled carefully, it can look very natural, with a simple schematic at the beginning, and a more detailed diagram later on. Likewise, you might have a very basic graph showing a simple relationship up front, and something more nuanced deeper in. Think of the simpler version as laying the conceptual groundwork for the more sophisticated version. This way—for the audience that sees both—the second will not feel like a duplication but an expansion and reinforcement.

Figure captions and legends

Figure captions are really important for all sorts of reasons: they can make clear the value, context, and meaning of an image, even when the reader has just skimmed through to look at the pictures (people do that). So, make them explanatory, active sentences, not cryptic titles. The phrase “the experimental system setup” tells me absolutely nothing. “The polymer is melted in the chamber on the right and then sprayed onto the rotating disk on the left, creating a 1 µm-thick light-sensitive layer” is much clearer. Of course, if the diagram is complicated, the caption will be complicated, too. That’s generally fine as long as you don’t get to the point where your caption is bigger than your image. At that point, you’ve gone too far!

One thing to bear in mind about captions is that your readers may ignore them. Because of this, it’s very important that you not bury information in a caption that is critical to your explanation or argument. For this reason, it is acceptable to duplicate information that appears in the main text in a caption and vice versa, despite the fact that repetition is generally something that would be seen as redundant and therefore a waste of bandwidth (more on repetition in Chapter 7).

Also, you may find that if you have, say, a system diagram with lots of elements, you may be forced to use what are called figure legends: these are simply acronyms or other short forms of words and phrases that you use, because you cannot fit the full word into the space available. It is absolutely fine to use these, but be careful, as you must make sure that their definitions are provided somewhere nearby. Ideally, you can have a key as part of the image itself, probably in a corner where there is extra space because of the layout of the rest of the figure. If not, you can put a key at the end of the caption (e.g. “Abbreviations: BS, beam splitter; L1, Lens 1; L2, Lens 2; P1, Polarizer 1; P2, Polarizer 2.”). If you doubt this is necessary, think how confused an audience might be that does not know what the abbreviation BS means (or thinks it means something else).

Copyright and plagiarism

If the image you are using is not yours (i.e. you didn’t take the photo or create the graph or diagram), things can get complicated, especially if you plan to publish your report. It can even be difficult to establish whether you are publishing something, given that so much of what we do ends up on the web one way or the other.

In practice, the easiest way to know how seriously you need to take the issue of copyright is to consider whether this image (the one that belongs to someone else) is being used for commercial purposes or not. Commercial purposes can be (roughly) broken down into two types: publications that bring in money (i.e. where you are selling the report with the picture in it, which might be true of a technical consultancy) or advertising/marketing materials. If you are not making any money from using an image, and its originator is unhappy with your use of it, the worst thing they should be able to do is to make you take it down from the web and stop you using it in future. However, if you get it wrong, and the purpose is deemed to be commercial, the author or original publisher could claim damages from you. The only way to know for sure whether your use of a photo or figure is acceptable is either to get written permission from the person who owns the copyright or to ask a media lawyer.

Note: there are an increasing number of images around with Creative Commons licenses. The conditions for using these tend to be very clear, making it easy to understand what you can do and in what circumstances. If you can find suitable images with these licenses, they will make your life easier.

Let’s take the worst-case scenario: you definitely want to republish the image (photo, diagram, graph, etc.) for profit, maybe as part of a report for a consultancy job or in a journal paper (remember, you may not be charging but the publisher may well be). First, you have to determine whether the image has been published before, and if so, by whom. This is because the author of the image may have signed their copyright over to this publisher. In that case, you might have to get permission to republish (possibly paying fees in the process) from both the original author and the publisher.

If, for whatever reason, you cannot do that, one way around the problem—for graphs and diagrams anyway—is to have them redrawn. You can even get software that will pull the data out of a published graph so you can redraw it accurately. There are a number of benefits to this. First, because the image is now editable, you can improve upon its clarity (tailoring it to your audience, changing the ratio of the axes, etc.). In addition, you can use your own branding (fonts and colors), remove errors, and optimize the resolution.
Finally, as long as it is sufficiently different from the original, the new drawing should be free of the copyright restrictions of the old one. (Caveat: this is not legal advice and may not apply in your situation. Make sure to get proper guidance from someone who knows both your circumstances and the law in your jurisdiction, or you could end up being sued.)

However, whether you reuse a figure with permission, without permission (because it’s noncommercial and covered by fair-use laws), or redraw it, you must still give credit to the original author. Even if you improve upon the original, you must acknowledge where the idea came from or you are plagiarizing someone else’s work, ideas, and data. Giving this credit could not be easier. Add some words like, “Adapted from an original figure by …” or “Reprinted with permission from …” to the figure caption, and include a citation to the original publication.

Likewise, unless you have a solid agreement with a photographer that you can use their photos without credit, it is generally good practice to give them one, as, apart from anything else, doing so shows good faith if it turns out you somehow misunderstood the copyright or licensing situation with that image. Again, the credit can just be listed at the end of the caption or going up the side of the photograph. Depending on the circumstances, it can be as simple as having the phrase “Photo credit” followed by a colon and then the name of the photographer or corporate owner of the picture, or the copyright symbol (©), followed by the name/corporation and the copyright year.

Summary

To be able to communicate, you must first consider your audience. This means you must:

• show them respect
• prepare sufficiently
• consider how to deal with more than one audience at the same time
• make sure you have a clear, appropriate purpose
• tailor your explanation to your audience(s)
• make sure to define or avoid words that won’t be understood

One way to explain is by using visuals that should also be tailored to your audience. These include:

• photos
• diagrams
• graphs
• videos

You must also make sure to think about:

• figure captions and legends
• copyright and plagiarism

CHAPTER 6

Technical Argument and Structure

Whatever kind of report you plan to write, you must first consider the argument that you want to make—and there should always be an argument. It’s not enough to simply provide some kind of dossier with lots of random information about the subject in hand. Instead, you must start with some kind of interesting question/problem, provide evidence that speaks to it, and come to some kind of conclusion. “Integrated circuits are . . . ” is not an argument. “Will integrated circuits continue to get smaller at a given rate?” could be the beginning of one. There’s a question there, one that you can potentially research from both sides and then answer.

So, the first test is to see if you can construct your argument as a sensible, interesting question with an answer and one or more reasons why you believe that answer to be correct. Here are a few examples:

• Can new models help us to understand how to mass-produce an enzyme more efficiently? Yes, because here’s one we created, with experimental results that show it works.
• Do we have to accept that certain kinds of web transactions are, by their nature, insecure? No, because here are some new methods that can be used to strengthen the weakest links in the security chain.
• Will using this new chemical system improve this industrial process? Yes, because it improves efficiency.

Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies. Sunny Bains © Sunny Bains 2019. Published in 2019 by Oxford University Press. DOI: 10.1093/oso/9780198822820.001.0001


In each of these, you can see a question, an answer, and some evidence for that answer. Remember that evidence needs to be provided for all sides of the argument, not just the side you favor. Imagine that your listener/reader is arguing with you in their head as they read; you will be much better at persuading them if you can see both sides. You might say things like “It is true that [some fact] is the case, as shown by [some people at some time], but recent results from [these others] suggest that this only affects [one narrow aspect of what you are talking about, not the main thing].” If your reader knew about that fact and had worried that it would affect your argument, this should reassure them.

Also, don’t forget you have to think about the audience and the purpose of your communication in order to determine which arguments are most pertinent. Some will be more interested in the details of the narrow technical solution, and others in the goal of the overall project. You need to keep that in mind, especially if you have multiple audiences to think about.

Having identified your audience and the argument you want to make, the next stage is to write a good outline. One way to do this is to start with your argument and the conclusion you came to, and then think of all the questions you might have to answer in order to persuade your audience that you were right. Here’s an example based on an article that appeared in Science in 2017:22

Can we monitor a baby’s brain for abnormal activity, even though babies can’t go into magnetic resonance scanners? Yes, because this new acousto-optic imaging system makes it possible.

• What kind of abnormal activity are you talking about?
• How dangerous is it?
• How will the monitoring help the baby’s development?
• Why can’t existing methods do this?
• How does the new system avoid this problem?
• Does it work?
• Are there any disadvantages we should know about?
• Will it be expensive to buy? To use?

Once you’ve got a decent set of questions, the next stage is to make sure they’re in a logical order. Always ask yourself what the reader needs to know first, both so that they can understand the next step in the argument and to make them want to keep reading. “Why should I care?” is usually one of the first questions that must be answered (with the medical example, those first three questions about the abnormal brain activity answer this question). Without that, there’s no reason to read on.

After you have all the questions in a logical order, you must consider what evidence you need to provide to persuade your reader of your answers. This may involve doing the exercise again:

Does the new technique work?

• How was it tested?
• How accurate were the results?
• How was the accuracy measured?

As you continually come up with more questions and answers, you can also start thinking about the specific evidence (experimental details, citations, figures, tables, mathematics) that you will use to persuade your readers that what you are saying is true.

Don’t forget that the questions and how you answer them will depend on your audience. An audience made up of business people will ask more questions about costs, markets, and intellectual property than one made up of scientists or engineers, and the kind of answers you give will be different too. For the former, you’ll probably be giving more detailed answers about the financial implications, and simpler explanations of the technology itself.

The technical argument

Although the question/answer method can elicit the appropriate argument and structure for just about anything, you don’t have to start from scratch every time. Depending on what exactly you’re writing and how unusual your argument is, you may well find that there is a good generic structure that will help you focus your mind and skip some steps. I contend that most projects boil down to just a few basic questions.

What is the vision?

However technical your audience is, they need to know what success looks like, what your goal is, and what might change in science, medicine, engineering, life, and so on if the project goes well. So, going back to our previous example, you might say:

Every year, babies are born with or acquire infections that can cause permanent brain damage. If physicians could check the growing infants for abnormal brain activity at an early stage, this could allow them to identify and treat the underlying cause and so prevent further harm.

That’s a vision: everyone can understand this goal and why it is a good thing. Not every sphere of engineering can make such grand claims as being able to save brain function, of course, but the whole point of engineering is that it’s supposed to be trying to achieve something: whether it’s making a video game that is more absorbing, producing chemical products while polluting less, or creating buildings that use less energy for heating and cooling.

In Chapter 5, we saw an example of a research group that was making a contribution toward storing more data on an optical disk. Although their work was very technical, the end goal—creating an optical disk storage system that could hold a whole high-definition movie—was very real. Without understanding that goal, the research becomes esoteric and meaningless to anyone not directly involved.

For people working on more basic science or technology, where potential applications are numerous, laying out one vision in detail is still important to capture the imagination. If you know who you are writing for/speaking to, you can choose the vision you think they will find most compelling. Of course, you can discuss other applications too, but you need to start by having your audience care about the subject: otherwise, they will not invest their time in learning more about it (they’ll stop reading).

One last thing is that when you state your vision, you have to quiet the part of your brain that is caught up with all the ifs, maybes, and other caveats that could stop the vision from being realized. Your job at this point is simply to communicate your view of how the world could be better when the work you have been researching is successful. You will get to the obstacles to this happening later.

What is the status quo?

“Status quo” is a Latin phrase that means “the way things are now.” It is a critical concept in engineering communication because it creates a benchmark from which any progress can be measured. To put it another way, you cannot expect your audience to understand why any given solution is better unless you tell them what you are comparing it to. Almost by definition, the first comparison you will make is to the way things are done at the moment.

Also, once you’ve introduced the status quo, you should focus on why this is not good enough. Is what we have now too expensive, inefficient, complicated, big, slow, heavy, inelegant, inaccurate, and so on? Why should people devote time, money, and effort to improving the situation? This section is critical because it sets up the way the rest of your argument will unfold.

The technique currently used to detect brain activity in adults is fMRI (functional magnetic resonance imaging). Although it can produce very detailed images of the brain and illustrate how blood is flowing through it, fMRI requires that patients lie almost motionless. For this reason, it can’t be used on infants.

In this section, it helps to make clear what the criteria for success are (and show how they are not met by the status quo). Depending on the situation, these might be qualitative or quantitative. If you can provide your audience with numbers, this will make the comparisons with other technologies much easier.

What is the technical problem?

Once it’s clear why the status quo is not good enough, we have to narrow down to the specific problem we intend to solve. You need two pieces to do this: a potential solution that could enable the vision, and an obstacle to that solution working. Essentially, you are explaining why previous efforts to use that solution have failed or stalled.

Ultrasound could be an ideal technology for performing these scans because it captures images at 10,000 frames per second and can detect tiny changes in blood flow. However, since ultrasound waves cannot easily pass through bone, it’s not been considered suitable for imaging the brain, encased—as it is—in the skull.

Whether you spend two sentences or two pages on this, it will frame everything that follows. The social value of the project may be wrapped up in the vision, but the technical value will be judged by how well this problem is solved by the end of the report.


What are the competing solutions?

To trust your analysis of which solution to the problem is best, your audience must understand what the competing solutions are and the benefits and drawbacks of each. So, go through them all to the level of detail that’s right for your audience: what they are, how they work, and their particular advantages and disadvantages in terms of the important criteria that you defined earlier. If it’s appropriate, lump similar approaches together. You can just explain what they have in common in terms of how they work and what their advantages and disadvantages are, highlighting exceptions and differences where necessary.

You can also point out where solutions can be ruled out for the application(s) you’re trying to address because of one or more specific weaknesses related to the way you’re planning to apply your solution. For instance, someone else’s face-recognition system might be great for security systems situated at the entrances to major buildings but not work for mobile (the application under discussion), because it requires controlled lighting conditions. Going back to our example:

Brain scans using ultrasound are possible. Cracking the skull could be useful in a surgical setting but is clearly not appropriate for monitoring. Thinning the bone is also possible, but difficult to justify medically. Even the least-invasive options—putting probes into the nose or mouth—would not be suitable for infants.

What is the new solution?

Whether your job was to choose the best solution from those available or to show the benefits of your own work, the next section should be relatively easy. This is where you straightforwardly explain the new solution, the ideas behind it, how they were implemented, and the results. Most scientists and engineers are much more comfortable with this part of a report than with the early sections where they have to set up the significance and the context: the step-by-step structure here is a natural construction for a logical mind, and this part of the story often has the additional advantage of being somewhat chronological.

Remember, however, that you have to make sure to address all the criticisms you’ve raised about others’ work in the context of your own. If you’ve said that competing solutions are not good enough because they are inelegant, impractical, dangerous, inefficient, inaccurate, expensive, or whatever, then you have to explain how your favored solution is none of these things (or, at least, does better than the others in the areas that you have said are most crucial).

Researchers realized that—although there are many ways in which they are more difficult to image—babies have one advantage: the fontanelle. This is a gap in the skull that closes as the child develops. This creates—almost literally—a window on the newborn brain that researchers can image through without causing any harm or discomfort.

It is usually best to do this—to explain how the favored solution avoids the disadvantages of the others—right at the beginning of the explanation of how the new solution works. This helps to motivate why the new experiment, system, model, device, or whatever, has been designed in the way it has and why the approach was chosen in the first place.

The next step, explaining the results, should be the easy part: you’ve already established a set of criteria for what constitutes success or failure, so you can use it as a yardstick to measure the performance of both the favored and the competing systems. Simply go through the criteria, systematically highlighting where the favored technology does well and/or badly in comparison to the others.

The main thing here is to be selective, not in order to deceive but in order not to overwhelm. You will have far more information about the performance of your favored system than will be of interest to your audience. For instance, take a medical imaging advance. If it is being presented to a room full of doctors, do they really care about the fact that the image processor has the highest clock speed recorded on this type of system? Or exactly how long after they press the return key they’re going to see a useful picture (as long as it’s less than a second or so)? No. What they really care about is how much more effectively the new technology works in diagnosing the illness in question, whether there are any ill effects, false positives, or false negatives (i.e., where the person is found to be clear of a condition when they’re not), and so forth.

What are the obstacles?

This question, too, often goes unanswered. You may think there’s nothing left to say after you’ve gone through how and how well the solution works.


However, even with a successful project, there are generally a few areas that need improvement or present obstacles to further development. If you don’t admit to these, then you leave yourself open to attack by expert readers who know you are not telling the whole story. So, admit that there is a problem with expensive, unreliable, or toxic materials; that there are certain situations where your model gives inaccurate results; that you pay a heavy price for performance in terms of poor energy efficiency; that the product is currently prohibitively expensive; and so on. If that’s the truth, tell it.

By raising these issues, you can choose to address them in a positive way. If materials are letting you down, talk about materials research that might help. If there are scenarios where the model is not valid, explain how to avoid them. If there is a performance/energy trade-off, explain which applications are most likely to benefit from the approach and which will suffer. If the component is expensive, point out when economies of scale will bring the price-point down.

While we’re on this subject, if you’re talking/writing about your own work, it really pays to “collect” all reasonable objections to what you’re doing and address them (even if only briefly) where they’re relevant. Too often, we hide from this kind of criticism when we should embrace it to help us improve our argument (whether the weakness is in the argument itself or how it’s been communicated). If people don’t understand the concepts you explain in early presentations or report drafts, you need to work on explaining them differently or more carefully. If people aren’t convinced by your results, ask yourself (or, better still, ask them), why? Do they simply need more explanation, or are the experiments not demonstrating what you need them to?
Most of all, if people are questioning your assumptions, the premise of your work, or anything else fundamental, make sure you know not only that they’re wrong but exactly why they’re wrong. You can think of this as debate prep. Without knowing what people are going to find difficult, objectionable, or unconvincing about your work, you will not be able to persuade them. You can also think of it as an interactive research process. Every time you pitch your ideas and someone raises an objection, it gives you the opportunity to find out why you’re right (and explain more clearly) or why they’re right (so you can fix your story). Not only will this process make you more persuasive, it will also help you to produce better, more thoughtful work.


What is the prognosis?

The argument structure outlined here helps you lay out the main points in a logical way that your reader should be able to follow. It gives them all the background and information that they need to understand what is going on and why. This section, however, is your chance to build on the foundation of understanding that you’ve been carefully laying down and put all the pieces of the puzzle together. This is where you present your analysis. A lot of this will involve going back through the issues raised in the early part of your argument and re-examining them in the light of the information you provided later (like the new solution’s performance, and any obstacles to success).

In this section, you should go back to the competing technologies and compare the proposed solution with the best of them head-to-head. If the comparison is not favorable, you will have to rethink: either find a better solution or find a better application, depending on which side of the fence you sit. You might also consider whether the two solutions could complement each other. If there were some way to combine the two approaches, you might create something that could do better than either on its own.

Don’t forget the concept of an application niche at this stage. An application in general may have one set of requirements, but if you narrow down to increasingly specific instances of the application, you may find areas where a seemingly inadequate technology is actually a very good solution. The classic example of this is the superstrong adhesive that wasn’t. Spencer Silver, who invented it, was frustrated for years because he’d unintentionally invented an adhesive that stuck lightly rather than creating a tight bond and—although he was sure it had to be good for something—no one could quite figure out what to do with it. It took almost a decade before paper backed with this adhesive was marketed as Press ’n Peel notes and, eventually, Post-it® Notes.
The conclusion of this discussion is your prognosis or prediction for the future, which is based on the argument you have just laid out. Without this, the report is meaningless. Your audience doesn’t care about the work done in the past (that’s just history) except where it may one day impact the future. You have to spell this out for your readers. Which approach will succeed? What needs to be done next, and why? What are the implications of this? This is your argument’s conclusion, what you were trying to get to all along. Embedded in this section is the reason you had to write the report in the first place: you needed to persuade someone of what should, or could, happen next.

Structure

Once you’ve developed a technical argument that will work for your audience and explained it clearly, the hard part is almost over. Now, you need to structure your document or presentation so that it will deliver this argument effectively. Three elements will help you do this: the title, the introduction, and the conclusion.

Before we get on to these, I have to pull out a hackneyed old cliché (from a lay preacher apparently): “First you tell them what you’re going to tell them, then you tell them, then you tell them what you told them.” Cliché or not, I would argue that it’s good advice and especially important for technical reports and presentations. Let me explain why.

From a philosophical point of view, there are three reasons to tell your audience up front what you are going to discuss. First, you need to give them a reason to care and to read on. This means that the primary goal of your introduction is to get your vision across (as described in “The technical argument”). Second, you need to make sure your audience has enough of an idea of what your paper/presentation is about—enough of the gist of it—to read/listen intelligently. If you let them know what to expect, they will start thinking about the subject, dredging up dormant facts from the backs of their brains, and relating them to your argument. People read/listen actively, not passively, and the more you can help them to do this, the better. Third, you need to make clear what’s coming in the report: both in terms of content and in terms of layout. This allows readers to start thinking about which sections are likely to be most relevant to them, which they might want to skip, and so on.

Next, of course, you tell them: you go through your technical argument in a step-by-step fashion. There’s a slight complication with this because some of the first steps need to be covered in the introduction, but we’ll go through that in the next section.
Finally, you have to tell them what you told them: in the same way that an introduction helps the reader to tune into a subject and receive it efficiently, the conclusion is designed to ensure that the reader is left in no doubt of the intended message and that they will remember key parts of the argument later. The human memory is finite: just because the people in your audience have read a paper or listened to a talk does not mean they will remember it. The concluding section is your chance to say to the audience, “If you remember nothing else of what I’ve told you, please remember this . . . ”

As you can see, the introduction and the conclusion have completely different jobs to do. What may not be obvious is that they also have completely different audiences. The person reading an introduction is naive: they have no idea (yet) what you are going to say or why. However, readers of the conclusion already understand the whole story. They are now sophisticated readers who (hopefully) understand what’s going on, can follow more nuanced arguments, and understand your jargon. A fatal mistake I see some inexperienced writers make is repeating the introduction almost word for word in the conclusion (with the first supposedly as an outline, and the second supposedly as a summary). Because the audience has such completely different levels of knowledge at the beginning and end, this cannot possibly work. Either the introduction will be confusing and technical, or the conclusion will be vague and imprecise.

Weight

Before we get into detail about the title and introduction, it’s important to think about the weight that each of the elements of the technical argument should have in your report. There is no one-size-fits-all answer to this: it depends on the nature of what you are writing (or saying) and for whom you are tailoring your message.

For instance, if we take an everyday technical paper—one written by an academic, a PhD student, or perhaps a corporate researcher for a conference or journal—the first steps of the technical argument are not generally going to be the main point. Rather, the favored solution—the theory, design, experiment, results, and data analysis of the recent work that was done by the researchers writing the paper—is what’s important. All the elements of the story that precede the authors’ work should therefore be outlined briefly (as an introduction), and the explanation should become more in-depth as the authors’ contribution kicks in.

This becomes a bit nebulous when the main contribution is not the specific solution but the vision itself. Steve Jobs’ keynote speeches are a perfect example. When he stood on the stage and talked about Apple’s next new product, the focus would always be on the vision: on the wouldn’t it be great if we could . . . . There’s a reason for that. For decades, this was Apple’s unique selling point. They may not always have been the company with the most advanced research or the best engineers or intellectual property. What the company offered was a vision of what and why a new product should exist, a vision often enabled by a deep understanding of people’s desires and aspirations, and then it implemented that vision as well-designed hardware and software. The vision was the point of the presentation. The rest of it—the description of the various features and how they worked and had been engineered—was really just evidence that the vision had been delivered. It was not the main point.

An engineer who had designed one of those features might have a different job to do. Yes, the audience (any audience) still has to understand the vision for the overall product in order to put that feature in context. However, if the paper or talk is supposed to be about the design, the focus will naturally shift to how a particular function was normally implemented and why this isn’t good enough (the status quo), the technical problem that had to be solved to make things better, competing solutions, the new approach, and so on. What if the engineer has made no contribution to understanding the problem, because it was already well known? In that case, the real focus is on the shortcomings of the competing solutions, and why the new solution is better.

Determining which parts of the technical argument are core, and which comprise the preamble, matters for two reasons. First, the introduction should take up no more than 20 percent of any report. Essentially, the introduction is anything that precedes the main focus of your argument (your main contribution). You can’t write an introduction until you’ve identified which bits of the argument it’s going to have to include.
Second, if you don’t know what the focus of the paper is, you cannot give it an effective and representative title.

Writing the title and introduction last

Before we get into the detail about what the reader needs in a title and introduction, let me comment on the psychology of the writer. It’s often recommended that it’s best to write the introduction and conclusion to a report last (and probably in reverse order: conclusion first, then introduction). That’s good advice, because many writers don’t really know what they want to say when they start writing. Hopefully, they have an outline and some idea of what they do and don’t want to include, but they don’t necessarily have a clear story. This is not surprising, because the process of writing is precisely how they develop their argument. People actually start to understand their own work better by trying to explain it to other people.

Because the introduction’s job is to set readers’ expectations, to make them sensitive to relevant facts as they are presented, it’s not possible to write it effectively until you know exactly what you are preparing your readers for. This generally means knowing your whole argument in detail: strengths and weaknesses, the evidence you plan to present, and the conclusions you are going to draw. It’s not that you are going to include all this in your introduction, but it needs to be in your mind when you write it so you can drop the appropriate clues as you go along.

Title

The title is important. On the one hand, it should be descriptive and specific: if you are too vague, then people may waste their time because it’s in the right domain but not the right topic within that. For instance, a report called “Building the railways in the western United States” might sound interesting to some engineers. However, if they’re interested in the techniques used to drill through mountains, create the track layouts, and forge railway sleepers, they may be disappointed if they find the paper’s actually on the social hierarchy of the Chinese immigrants who did much of the work.

On the other hand, you don’t want to be so specific that you scare potentially interested readers away with jargon. A PhD student I once worked with entitled a paper “HVDC Tap.” Admittedly, the title is descriptive, in that it’s a very accurate, specific description of what he was doing. Spelling out HVDC helps a little: “High-Voltage Direct-Current Tap.” But still, even most people within electrical engineering wouldn’t really have a clue what this was about. So, let’s understand the work and try to do better.

Currently, there is an ongoing effort to generate solar energy in the Sahara and then relay it to Europe. The very long distances involved mean you have to use special (HVDC) transmission lines or else most of the energy will be dissipated by the time it arrives. For political and social reasons, the people involved want to be able to supply electricity to communities en route (tap off the power). They need to build the taps to do this, which is a hard problem technically.

My suggestion for the title of a paper on this subject would be something like “Designing taps to power communities along the high-voltage direct-current electricity transmission line from Africa to Europe.” This is more descriptive and interesting, and provides context. It’s on the long side but not prohibitively so for a formal report. Also, if you had an audience steeped in power-grid technology, you wouldn’t have to spell out HVDC. That would make a big difference. This is an important point: short titles are more attractive and pithy, but it’s not worth sacrificing clarity for brevity. As we discussed earlier, part of the job of the title and the introduction is to prepare the reader’s mind for what is coming, and to do that, you need to supply sufficient context. A paper title like “Advanced measures for defeating pirates” might sound specific. But what exactly does it mean to the reader? What images and resources will their minds conjure? What will they be expecting? If I’d added the words “Somali,” “software,” or “Pittsburgh” in there, the meaning would have changed entirely. By being specific and clear about the topic at the beginning (but not so specific that I’m using jargon), I’m helping the reader to decide whether the paper is relevant and to recall useful information that will help them follow the argument. Finally, remember that the title is the first thing that your (potential) readers will see. They have many pulls on their time and—even when they are supposed to, even when it’s part of their job—many people will not read anything unless it looks particularly relevant to them (and, even then, they may just skim). You therefore need to use the title to entice them to read it.

Introduction

The introduction has a lot of work to do. It needs to make a naive reader interested in the subject and set the scene for what is coming up in the report. To do this, you have to funnel the reader into the argument using what I sometimes call an "inverted pyramid" structure (note: this is not the inverted pyramid used in journalism when writing news articles). It's an inverted pyramid because it starts with statements that are connected to the wide world inhabited by the reader and then funnels down to the narrow world of the technical work being discussed. In Chapter 5, we gave the example of the chip lasers for Blu-ray. We started with disks and movies and then went deeper into burning pits, needing a different color of light, and the chip lasers that could produce it.

Finally, we got to the problem: the polarization hopping. This is a great example of the funneling-in introduction. It starts with the vision and moves through the status quo. That gets us to the technical problem at hand, which is where the technical mind can take over and fill in the rest of the story. As usual, exactly how relatable the world needs to be at the big-vision level depends on your audience. If you are an engineer talking to others within your discipline, then you start with a statement that anyone with a little experience in the subject can understand. If you are talking to management, then you may have to start with something that even the technically illiterate can understand. The vision is always going to make it into the introduction, partly because it's what's needed to get people to read and partly because it comes first. Next, you want to go through the rest of the introduction (the rest of the steps of the technical argument that precede the main points in the report) at a level of detail that is appropriate given the amount of space you have (your 20 percent). Finally, you make sure to outline the steps in the argument that you do intend to focus on. By outline, I mean you don't go into detail; you just give a simple indication of what the argument will look like and what the readers can expect from the rest of the report (see Example 1). You should recognize this inverted pyramid as hitting the steps in our technical argument that we set up previously. The first one is the vision, the second is the status quo, and the third, fourth, fifth, and sixth are (roughly) the technical problem, the competing solutions, the proposed (or new, or favored) solution, and the results, respectively. The last one gets to what the work means for the future. (As we discussed previously, you don't want to get into the obstacles in the introduction, because you're still trying to attract the reader to your vision.)

Example 1  An introduction should funnel in from a broad vision that the audience can understand to the narrow world of the solution being proposed.

Note that the first two are not the main argument for this article: we are providing them for interest and context, explaining them once, and then funneling in. The article will really start with the technical problem (in Example 1, the technical problem is "this particular kind of cyberattack"), so we'll want to take our time explaining it. For this reason, we don't want to go into too much detail in the introduction. We're not providing an argument but rather a map to it. The following 70 percent of the paper will go step by step through these elements. What's important is that our map or outline is not too vague. Remember we talked about Somali versus software versus Pittsburgh pirates. It's in this map section that the detail is critical. So, for instance, you're not just going to look at "a particular type of cyberattack" but rather "how security vulnerabilities in wireless transmitters and receivers are being exploited in cardiac implants." Likewise, you will explain (briefly) why the current approaches are likely to fail, and what the new solution is based on. This is where you're doing that critical preparation of your audience for what is coming, so that they can bring as much knowledge as possible to the rest of your story. You need to give the reader more, however. They need to know what to expect not only from the argument but also from the document itself. Here is the point at which you can explain what's in the rest of the report: maybe it's a discussion of the basic concepts, maybe a full design, maybe an implementation with experimental results, or maybe a comparison of different approaches to the problem. Let the reader know what they're about to encounter and, assuming they're interested in the first place, they'll be expecting it (and receptive to it) when it comes. You can do this explicitly.
It’s perfectly acceptable to say something like: In the remaining sections of this paper we will first explain [the xyz problem] in more detail and describe [the features] any solution will have to have in order to deal with it adequately. Then we will explain [our approach] and describe both the [design] and the [implementation] of the [resulting system]. After showing some [promising results], we will finally discuss why we think some [specific improvements] could make the approach competitive.

Note that this is intended as a generic example, but you should not be vague or generic: as you read this paragraph, think about how you could fill in the

blanks. It’s important to be specific. A classic mistake that people make is to talk about the form of a paper (as I’ve done here) without mentioning the content (talking about the actual problem, approach, solution in question). However, when you do that, essentially all you’re saying is that you’ve written a report. Instead, you need to weave the form and content together. You should also make sure to at least hint at what your final conclusion will be. If they know the end point you are heading toward, people will be much better able to follow the intervening argument.

Conclusion

The conclusion has three jobs to do. First, it has to summarize the highlights of the argument presented, succinctly, reminding readers of the evidence that led you to your particular view. Second, it has to state your prognosis so they can understand what the argument actually means. Finally, it must put all this into some context and relate it back to the real world. Remember that this is your last chance to make sure that the reader takes away what you want them to from your report. What was it you really wanted to say? What do you really want them to remember/understand from what you have said already? What do you want them to do next? Also, remember that your reader has just heard your entire argument. You don't need to explain it again from scratch because they now know all your jargon and (hopefully) understand your reasoning. Your job here is to, succinctly, remind them of that and make clear what it all means. To do this, I would recommend using a conclusion structured like the one in Example 2. Here you'll notice that the first four bullets are essentially a summary of the argument: they present the technical problem, the competing technologies, the new technology (including the results), and what else we need to know.

Example 2  A good conclusion highlights the argument presented previously without repeating it in detail, and then leaves the audience with a take-home message.

The fifth sentence is essentially the prognosis: this is the main message you are trying to get across. The final statement puts the work in some kind of context and looks to the future. When you're writing the conclusion, bear in mind that—just as you helped the audience funnel into your work in the introduction—you now have to help them to come out again. So, the conclusion must lead from your narrow work to its wider implications (from the specific to the general). Sequences of text that show you're on the right track might include "Our results showed that . . . ," "This suggests that . . . ," "The result is important because . . . ," "This can be used for applications such as . . . ," "We think this will mean . . . ," and "The next step is . . . ." However, remember that you don't need to come quite as far out into the everyday world in the conclusion as you started from in the introduction: your readers now have a better grasp of the subject.

Variations on an outline

If we take the technical argument (vision, status quo, etc.) and the organizational elements (introduction and conclusion) together, then we end up with a hybrid structure. Here, the introduction includes the preparatory elements of the technical argument, plus an outline of the core sections, ending in the prognosis, and the conclusion includes the summary of the main points. Once you balance this so the introduction and conclusion each have the appropriate amount of weight, you end up with an outline of the entire report.

Outline for a basic research project

If you are only interested in highlighting the benefits of one particular approach to the problem, and one particular piece of work, then the outline will look something like the diagram shown in Outline 1. Technical papers for journals and conferences tend to look like this. All the introductory material about the vision, down to the competing technologies, is treated as the preamble: it simply sets the context for the main description of the work.

Outline for a review

For a review, the structure is similar but will have a repeated series of elements in the main body. For instance, let's say the review is discussing

Outline 1  Structure for a research project (one core solution).

one particular application but lots of different technical problems within it. In that case, the status quo, the technical problem, the competing solutions, the obstacles, and the prognosis will be repeated for each of the different problems being considered. Note that the new solution is missing here because you're not proposing anything specific in a review. The obstacles section, if you choose to include it, relates either to the competing solutions you favored or to all of those you've described. In your introduction, you'll start with the vision and then the vision's own status quo, which briefly describes the technical problems. Next comes an outline of how you'll address these in the report, pointing to some of the issues that will get you to your conclusion.

Before you get to your conclusion, you will need one last obstacles and prognosis section to cover the application as a whole: this might discuss how the different pieces of the puzzle affect each other, whether they are complementary, what key stumbling blocks there might be to overall progress, and what this will mean for the future. Once that’s done, you can conclude with the highlights so that the audience remembers your story (see Outline 2). There is no perfect formula for structuring your document, so this is just a place to start. The main principles are as follows: prepare your reader for a clear, logical technical argument, and help them to remember the main points at the end.

Outline 2  Structure for a review (one core application).

Summary

A good technical argument usually asks the following questions in this order:

• What is the vision?
• What is the status quo?
• What is the technical problem?
• What are the competing solutions?
• What is the new solution?
• What are the obstacles?
• What is the prognosis?

In addition, the structure should include:

• a descriptive title
• an introduction (about 20 percent of the document) that:
  o engages the reader
  o prepares them for what's coming
• a conclusion (about 10 percent of the document) that:
  o reminds the reader of the highlights of the argument
  o makes clear the prognosis for the future


CHAPTER 7

Credibility

Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies. Sunny Bains. © Sunny Bains 2019. Published in 2019 by Oxford University Press. DOI: 10.1093/oso/9780198822820.001.0001

It is very easy to lose credibility with your audience. You do this by being unclear about your argument, not putting forward sufficient evidence, being sloppy with your writing/presentation, being careless with your facts, or by seeming to be ignorant of some issue that your readers think is important. Slip-ups are inevitable: we all make them. Some, like occasional grammatical mistakes, are minor. Others, like mathematical errors, missing elements of the argument, or bluffing (pretending you know what you're talking about when you don't), are much more serious. But all of these mistakes matter. Think of it like this. Your argument is a path through a forest of information and ideas. You want it to be as clear as possible for your reader so that they can follow it and have a nice easy walk through the foliage. After all, these are busy people with other things to do. The little typos are like tall blades of grass: easy to push past if there are just one or two but tiring to walk through if the path is full of them. The major errors are more like mud or prickly bushes. They are annoying for someone who was expecting an easy walk. They turn the reader against the author/presenter . . . that is, if they are motivated enough to keep going. Some will just give up entirely. What does it mean if the reader turns against you? Let's take an academic setting first. Say you're a PhD student waiting for comments from your supervisor about a thesis or paper. Chances are, if your work is annoying to read, your supervisor will either take a long time to give you the

feedback you’ve been waiting for or give you useless feedback. If you’ve already submitted to a conference or journal and the readers are peerreviewers, they may well reject your paper outright, even if the ­technical ideas are basically sound, because they were distracted by the other issues and so couldn’t follow your argument. Likewise, a grant proposal could well be turned down because it looks like it’s been incompetently prepared. In a corporate setting, you are likely to get even shorter shrift: unless the proposal or report really grabs the reader from the beginning, it may be deemed not worth reading at all. There goes your pet project or your venture capital. When you are trying to persuade an audience of the value of something new, you need your readers to believe what you tell them. Good evidence and explanations will encourage them to do this, but inaccuracies will not. Any other signs that you are not credible (faulty logic, grammatical errors, and so forth) will detract from the credibility of your argument too. In this chapter, we’re going to focus on all the things that will help to clear the path for your reader, from the very substantial to the much less so. They boil down to a few simple rules. Unfortunately, as with most rules, they are much easier to say than to do. The next few sections will give you some advice on how to achieve them in practice.

Show, don’t (just) tell Arguments require evidence. It is not enough to assert that something is true (unless it’s something that you’re confident your reader already knows). You must show why it’s true. This needn’t be difficult or time consuming. It can be as simple as using a quote from a reputable source, citing a technical paper, or briefly relating the results of an experiment. Just a few words like “From semiconductor physics, we know that . . .” can make a big difference to how authoritative you sound and how much your audience will trust and believe you. To what extent you have to back up what you say will depend on what you want from your audience. If you just want some applause at the end, you can say almost anything and—as long as you’re entertaining and don’t contradict the existing world view—you will probably do fine. If you want to educate and inform, then the standard of evidence you will have to provide will be higher. It will be fine to skip some technical details when giving an overall picture, but you’ll need to say that you’re doing so and 154  |  EXPL AINING THE FUTURE

where that detail can be found if needed. If there's money at stake, then the bar will be set higher still. The main thing to remember is that if some statistic, equation, mechanism, or fact is important to your argument—if your case would fall apart if the opposite were true—then you need to substantiate your claims. Examples represent a kind of evidence, in the sense that they offer your audience a chance to think about your argument through the prism of something that they already understand. So, you might show how artificial intelligence made inroads into one industry in order to give an example of how it could easily move into another. However, not all examples are created equal. If you give a frivolous example, or one that is too abstract or too far away from your audience's experience, then they will either not take you seriously or simply not follow. Whether you are talking to managers, technical people, or schoolchildren, you must choose examples that they can understand and that will be meaningful to them, based on their own lives. If you're talking to people interested in applying new technology to the law, for instance, you might say your system "will transform the way lawyers find precedents, in the same way that Google and the internet changed the way we find information." Assuming your team consists of people of a certain age (say, 40+), they will understand how powerful a change that is. If they are younger, they won't remember what it was like to research before the web and search engines, and so the example will be meaningless. You might choose a different example altogether: maybe something more technical that actually represents a closer analogy with the work you're discussing. Saying that your system "will transform the way lawyers find precedents, in the same way that ToxMatch helps pharmaceutical researchers find potential dangers in drugs" may actually be more accurate (in terms of the nature of the problem and the kinds of processes it uses).
However, this will be meaningless to the people you’re talking to. It’s therefore critical to choose examples that your audience can relate to and understand. You then have to make sure to walk your audience through how the example maps to the argument that you are trying to make, so that the correspondence between the two is very clear. Another kind of evidence (which is especially useful if you have been working on a proposal rather than a completed project) involves showing  how elements of your new solution have worked in other contexts. Cre dibilit y  |   155

A signal-processing algorithm designed for target recognition, for instance, could be repurposed to detect medical abnormalities. Showing how the algorithm performed in the military application and analyzing the similarities and differences between that and the new application could provide important evidence that the new system will work to a given specification. Of course, the most commonly used type of evidence for academics is the cited technical paper. Essentially, you simply drop a reference and then move on without further discussion. However, this does not work in every situation and particularly not in situations where the concept or fact being evidenced is counterintuitive. If something doesn't seem right to your readers, they will need to be convinced that it nevertheless is correct. This involves first acknowledging that the idea is counterintuitive and then explaining why it is, in fact, true. Providing a reference as well will ensure that your readers can delve into it further if they're still not entirely convinced. Of course, the best evidence comes in the form of actual results, and the most compelling way to show these is often in pictures or numbers: performance graphs, videos of a machine performing a task, percentage improvements, survey results, and so on. Evidence that shows in a concrete way that there has been movement toward the vision introduced at the beginning is vital. Of course, you cannot share it all, but you need to share enough so that the audience can see the reality of your work.

Be honest, authoritative, and accurate

One of the quickest ways to lose credibility is to bluff: that is, to talk about things you don't understand as if you do. Even if your readers aren't experts in your field, they may still know enough to recognize nonsense when they hear it. We all have gaps in our knowledge, and the best policy for dealing with such gaps is to honestly acknowledge where your expertise lies and where it does not. For instance, if you're a mechanical engineer in a team using a new material to create an impressive new machine, it's OK not to know all the intricacies of how the material was developed, as long as you don't claim that you do. Just focus on the argument about the mechanics, and refer your audience to other people or sources for details of the material development.

Honesty like this does not betray weakness but rather is a strength. It shows that you know your limitations and are willing to be straightforward about them. However, being too circumspect in what you say, not being willing to come down on one side or the other or to trust your own knowledge, can be almost as bad as overconfidence. Just because you don't know everything about the way the materials were engineered doesn't mean you don't know enough to present or answer questions on the material's properties. Just because there are error bars on your graphs and nothing in life is 100 percent certain does not mean you have to spend a long time talking about all the possible weaknesses in your data. Your lack of confidence in yourself and your argument will be transmitted to your audience. If you are cautious, they will be cautious. Of course, where your confidence in a particular piece of information is particularly low, where you feel that trusting it is genuinely risky, you must discuss that (that comes under honesty). But, where your confidence is high, you need say very little unless your analysis is somehow controversial (which we'll get to in "Prepare for objections"). Regardless of the confidence you place in different pieces of evidence, you should always represent them accurately. It may not be necessary (or meaningful) to give a number to three decimal places, but you should not be rounding up or down for the sake of it. Saying 38 percent is almost two-fifths is probably reasonable, but rounding it up to 40 percent is just inaccurate. I once saw a student write in a report that 5 ≈ 10. OK, it was an experiment with a logarithmic curve, so what I think he meant wasn't unreasonable (that the result was of the right order of magnitude, which was good enough in the circumstances). What he wrote, however, was the kind of thing that makes a reader really question your ability to think logically.
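To make the order-of-magnitude point concrete (this worked example is my own illustration, not the student's actual calculation), compare the two numbers on a logarithmic scale:

```latex
\log_{10} 5 \approx 0.70, \qquad \log_{10} 10 = 1.00
```

On a log axis, 5 and 10 sit about 0.3 decades apart, so they are indeed of the same order of magnitude. But the statement 5 ≈ 10, taken at face value, claims numerical near-equality and is off by a factor of two. "The result agreed with the prediction to within an order of magnitude" says what was meant without the logical absurdity.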
Using logical fallacies, where, instead of providing logical reasoning and evidence, you appeal to the reader's social instincts, will also lose you trust. There are dozens of these (and it's worth looking them up), but here are the top five that you are likely to find (or use) in technical writing, with examples so you can see clearly how ridiculous the logic is:

• Slippery slope (the smallest compromise will lead to disaster): "We can't accept a design that doesn't meet every single one of our criteria. If we did, we'd have to throw them all out and would end up with something completely unsuitable."
• Appeal to the masses (because a lot of people agree with something, it must be right): "Most engineers aren't very good at communicating and don't bother to learn how to do it well, so it really can't be worth your time."
• Appeal to authority (because someone famous, important, or smart said something, it must be true): "You might think that artificial intelligence is an interesting subject, but Professor Roger Penrose has said it's not even possible. You're wasting your time!"
• Ad hominem (an argument is invalid because the individual making it is flawed): "Richard Feynman was known to be a complete womanizer, so forgive me if I don't listen to his theories on quantum electrodynamics."
• Straw man (if an argument is strong, make up something weaker and argue against that instead): "You say we should try to do better at teaching communication to students . . . you mean you want to dumb down the degree and stop teaching them the technical foundations of engineering?"

One last issue related to honesty and accuracy is hyperbole. If you make grand claims, people will expect you to live up to them. If you use words like "amazing" and "revolutionary," you will raise expectations to a level you cannot meet. Instead of using superlatives, talk about what the technology can actually do. If it's amazing, your audience will see that.

Prepare for objections

If you've been practicing your arguments by having discussions with potential audience members or giving presentations, you may well have found that some objections to your analysis or conclusions come up again and again. If you build these objections into your argument, then you can deal with them before they are raised by others. If you're going to do that, however, there are a few pitfalls you should avoid. First, don't have a section called "Common objections" or something similar. What you want to do is weave your comments about each one in so that they fit with the rest of your argument. So, if you want to make clear that one kind of system has proven to perform better than another (even though most people erroneously believe the contrary),
then explain that right after you introduce the system itself. Second, don't make the reader feel stupid for having a common objection, as that will simply alienate them. Try to avoid phrases like "It's a common misconception that. . . ." This implies that the reader is ill-informed and wrong. Instead, if possible, try to make them feel that the

situation is just complicated. So you might say, "Although it's certainly true that A-type systems have advantages for X, Y, and Z applications, the B type is better where size and power are an issue. This is because. . . ." In this way, you are actually building on the reader's existing knowledge rather than dismissing it. Finally, if you have something particularly controversial to say—and you know that because you've been given a hard time when you've tried to make the argument before—don't make the mistake of becoming defensive in advance. If you treat your audience (whether reader or viewer/listener) as if they are going to be hostile to what you are going to say, you can end up reinforcing that hostility rather than defusing it. Further, you may actually end up losing those people who already support you because you seem to be constantly assuming that they are not on your side. Hostile audiences can be challenging, but the best defense is simply to have a well-researched, unanswerable argument to any criticism that they might have. If you want to persuade your audience, use clear logic, delivered in a good-natured and optimistic way. If the members of your audience find nothing to disagree with, most of them won't. However, the following advice is important for those delivering a presentation or reporting to a meeting: when someone asks a hostile question, be careful how you respond. There will always be people who, however persuasive you are and however logical your argument, will not accept what you say. It may be that this is a pet subject of theirs and you're going against their line, that they have a vested interest in your argument not being correct, or that they're just the kind of person who likes to be contrary or to show off. You can often identify these people when they first start speaking or, if not, when they want to come back with a supplementary question after you answer them the first time.
Robin Steegman and her colleagues at ArtEsc (a company that provides presentation training for science and engineering students and academics) include the following advice in their workshops: "Never get into a mudfight with a pig. You will both get dirty, and the pig will enjoy it." In other words, if you have reached the conclusion that your questioner does not really want to learn or be persuaded, then don't argue with them further. Shut them down. There are lots of ways to do this. One is to offer to have a longer discussion "offline." If the person is serious about knowing more about your argument, they will take you up on that. If they are just trying to show off

or put you down in public, they won’t. You can also say that you’ll research the topic and get back to them (as long as it’s not weakening your argument to imply that you have something to research), or you can offer to send them a paper or refer them to one of your colleagues. What you must do, however, is stop them from sucking away the precious time you have with your audience. Once you’ve made the offer and told them how to follow up (see you after, send an e-mail, or whatever), then you should make clear that you are moving on by turning to face a different part of the audience and calling for another question. This strategy reinforces your authority. Of course, if you need something from the person asking awkward questions (e.g. a sign-off, funding, approval), then you don’t have the luxury of moving on. You must simply be patient and do your best to persuade. However, the option of cutting off discussion is still there if you need it. Finishing early is far better than losing your temper.

Signposts, resting points, and flow

That path through your argument, with its twists and turns, will be much more comfortable if the reader knows where they are at every point. The introduction and conclusion help to do that by giving the audience a little preview of what's coming so they know what to expect. However, readers need to be reassured regularly that they know where they are now and where they are going.

Paragraphs

The clothesline structure (see Figure 1) is a generic outline for almost any kind of argument and helps the writer with this problem. Essentially, the introduction and the conclusion are the poles, supporting the line of your argument (the clothesline). The clothes are the paragraphs, each of which covers one of the ideas being discussed. The first clothespin is the topic sentence, which lets the reader know roughly what the topic or idea for that paragraph is, and the second is the linking sentence, which hints at the topic coming up next.

The paragraph is a critical concept in writing: it is the building block from which you construct text. Each paragraph should cover just one topic or idea. It should be no more than 150 words long and rarely fewer than 50 (although the odd one-sentence paragraph can be very powerful).

Figure 1  In the clothesline metaphor, the introduction and conclusion are the poles, the clothes are the steps in your argument (the paragraphs), and the clothespins represent the topic and linking sentences.

The word limit comes from how much people can take in before they have to stop, consider whether they’ve understood the topic in hand, and—if not—go back to the beginning again. This means that if you have a topic or idea that needs more space, then you need to break it down into smaller subtopics. The good news here is that most scientists and engineers are actually very logical in the way they write: one sentence or idea tends to come after another in a sensible sequence. This means that if you find that writing long paragraphs is a problem for you, it’s not a hard one to fix. You simply break the long paragraphs into smaller ones, based on where the subject changes, and make sure that they have good topic and linking sentences when you’re done.
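The 50-to-150-word guideline is mechanical enough to check with a script. The sketch below is a hypothetical illustration of my own (not from any published style-checking tool): a small Python function that flags paragraphs falling outside that range, using a naive whitespace word count.

```python
def flag_paragraphs(text, low=50, high=150):
    """Return (index, word_count) for paragraphs outside the low-high range.

    Paragraphs are assumed to be separated by blank lines; word counts
    are naive whitespace splits, so treat the output as a hint, not a rule.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    flagged = []
    for i, para in enumerate(paragraphs):
        count = len(para.split())
        if count < low or count > high:
            flagged.append((i, count))
    return flagged

sample = "Short one.\n\n" + ("word " * 200).strip()
print(flag_paragraphs(sample))  # → [(0, 2), (1, 200)]
```

Counts from a splitter like this are approximate, which is fine: the 150-word figure is a digestibility guideline, not a hard rule, and the point of the check is only to tell you where to look.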

Topic and linking sentences

Once you know what your paragraph is about, you can write your topic sentence. Topic sentences do not explicitly say, “In this paragraph, we will discuss . . . ,” but should nevertheless give a clear indication of what will be covered. For instance, if the first sentence of a paragraph begins with “The experimental setup consists of an emitter and a detector separated by a noisy channel,” then you will expect the sentences that follow to be about the detail of these three elements, how they were arranged, or how they were used to make the experiment work. For a topic sentence to work well, every other sentence in the paragraph has to be clearly related to that first one (the topic of the paragraph).

The job of the linking sentence is to make sure that the reader isn’t surprised when they start reading the next paragraph, but it’s not always necessary. Take the transition between the last paragraph and this one. The term “linking sentence” had already been introduced in the section “Paragraphs,” in the paragraph about the clothesline structure. After that, the subheading “Topic and linking sentences” indicated that topic and linking sentences were going to be discussed in this section. This should mean that you weren’t surprised when I switched in this paragraph to discussing linking sentences, even though I had not provided any explicit signposting in the last paragraph.

However, there are other cases where the linking sentences are critical. If you look at the example below, you’ll see that the three paragraphs have very different topics. The linking sentences very clearly belong in the paragraphs that they’re in (i.e. they are clearly connected to the topic sentence in that paragraph). However, in each case they introduce an idea, which in turn links to the next paragraph. This creates a chain of logic that means that the audience is never surprised. Start by considering the following paragraph:

Over the last twenty years, the discipline of imaging system design has seen steady evolutionary progress, in contrast to revolutionary advances in optical materials, detectors, and digital computation capabilities.
Typical imaging system design treats the optics and detection as separate design problems. Postdetection processing is considered a last resort to correct for imaging system deficiencies. The impressive restored images produced during the Hubble Telescope crisis emphasized that post-detection digital computation can be a part of the image formation process. However, the digital computation merely compensated for deficiencies. Similarly, modern consumer digital cameras often employ digital de-warping to compensate for optical distortion. Image sharpening often compensates for aberrations. These examples are a first step toward an emerging integrated approach to imaging system design that considers the optical components, the detection layer, and the computational layer as multiple degrees of freedom in a single design problem, as illustrated in Figure 1.

Even if this isn’t your field, you can see how the topic and linking sentences work here. The first sentence discusses the evolutionary progress in imaging system design, and all the sentences that follow relate to that evolutionary progress. The final sentence points to the conclusion of that evolution: an integrated approach to design. This integrated approach is the key connection between that paragraph and the next:

A major difficulty in moving toward an integrated design methodology is the choice of performance metrics. One approach is to consider the imaging system as an information channel. The original message is the scene intensity distribution, and the received signal is stored as a discrete array of numbers that may later be processed and displayed. The design problem is to maximize the mutual information between scene intensity and the image. Huck et al. have applied Shannon information to analyze discrete imaging systems in this manner. It is important to understand that imaging systems designed using these metrics often do not form pleasing raw images at the detector. However, the design ensures that post-detection restoration can produce faithful images.

Without our understanding, from the first paragraph, that the goal is an integrated approach to design, the concept of the performance metrics would have been meaningless. With it, when we get to the next paragraph, we have the advantage of not being surprised by the direction that the argument takes, even if it is too technical for us to fully understand.

We can also see that the topic and linking sentences in the second paragraph work well. It starts by leading you to expect it will cover performance metrics, and every sentence is, indeed, on that subject. The linking sentence is in a different place than in the first paragraph. We can see which sentence it was only when we get to the third:

We have applied the Shannon information approach to the problem of aliasing in discrete imaging systems. This problem arises because detector sensitivity demands wide aperture settings that result in very high spatial frequencies passed by the optical system. Typical consumer cameras are under-sampled by as much as twenty times the Nyquist sampling rate. In fact, virtually all consumer cameras contain a sandwich of birefringent plates that purposely blur the image prior to detection to reduce aliasing. The design parameters for such filters have been chosen subjectively and do not take advantage of post-detection restoration. Our approach used information metrics to design the birefringent blur filter and assumes that the image will be appropriately restored.

Now we can see that the connection between the second and third paragraph was Shannon information. Thanks to that third sentence preparing our minds, we are not surprised when we get into the next paragraph (which is actually about aliasing). This is very technical text intended for a technical audience, but the paragraphs are used well. This helps the audience—even those of us who are nonexperts—to get some of the gist of the important ideas it contains.

The topic sentence is a really useful signpost for the writer as well as the reader. Because each paragraph is essentially labeled, you can see at a glance what it is about. As a writer, this means that you can move paragraphs around quite easily, as long as you make sure that arguments raised don’t rely on information or explanations that have moved later in the report. Once you master the use of topic sentences, your reports should pass the topic sentence test: the topic sentences should create a neat outline of your report. This allows both you and your reader to see at a glance what the argument is and where information can be found, and to skim the document efficiently.
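The topic sentence test can even be run semi-automatically. As a rough, hypothetical sketch of my own (deliberately naive: abbreviations such as “e.g.” will fool the splitter), this Python function pulls the first sentence from each paragraph so you can read them back as an outline:

```python
def topic_sentence_outline(text):
    """Return the first sentence of each paragraph, as a crude outline.

    Assumes paragraphs are separated by blank lines and that sentences
    end with '.', '!', or '?'. This is a quick self-check, not a parser.
    """
    outline = []
    for para in text.split("\n\n"):
        para = " ".join(para.split())  # collapse internal whitespace
        if not para:
            continue
        end = len(para)
        for i, ch in enumerate(para):
            if ch in ".!?":
                end = i + 1
                break
        outline.append(para[:end])
    return outline

doc = ("Gyroscopes drift over time. This drift must be corrected.\n\n"
       "Kalman filtering is one correction strategy. It fuses sensors.")
for sentence in topic_sentence_outline(doc):
    print(sentence)  # prints the first sentence of each paragraph
```

Run over a draft report, the printed list should read as a coherent outline of the argument; wherever it doesn’t, the topic sentences need work.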

Connector phrases

Topic and linking sentences are critical for flow, but there is another tool that you can use to help your audience to understand your logic: connector phrases, which literally make that logic explicit. For instance, you could have a paragraph about a theory you wanted to test and follow it with a description of the experiment you used to do it. There is a logic that connects the two that could probably go unstated. However, the writing becomes much clearer if you add just a few words to highlight that logic: “To test this theory, we designed an experiment. . . .” Now the audience has no need to guess, because the logic is clear.

You can create connector phrases (also known as transition words, if you want to get inspiration online) to encode all kinds of logic. These range from phrases indicating sequence (“First we did this . . . Then we did that . . .”), to those indicating cause and effect (“From the results we could see . . .”), to those indicating the conditions in which things are true (“If this criterion can be met, then . . .”). They needn’t take up very many words, but they are hugely helpful as breadcrumbs to guide your reader through the woods.

Non sequiturs

If topic/linking sentences and connector phrases are like signposts, a non sequitur (Latin for “doesn’t follow”) is more like an unexpected fence blocking your way. All of a sudden, the reader is stuck and no longer trusts your directions. Now they are going to have to try to backtrack and figure out the right path for themselves. Essentially, this is a symptom of some broken logic.

The importance of logic in technical writing is no different from that anywhere else: everything you say has to be built up by a set of assumptions, inferences, evidence, and so on. These can be conveyed to the reader explicitly (i.e. you may want to include them directly in your report), or they may be obvious . . . either because they are things that everyone knows or because it’s obvious what A + B equals in the context.

For instance, in the following paragraph, the logic is generally very good. The author funnels in from a broad subject (3D imaging/processing) to the very narrow subfield within which the author’s team has made a contribution:

Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, patterns, etc.) through digital image processing. With holography, multiple 2D perspectives are optically combined in parallel. When either of the two stages in holography, recording or reconstruction, are performed digitally, the process is referred to as computer, or digital, holography. Synthesis of holograms by computers, and digital reconstruction of optically recorded objects, have been demonstrated. We record digital holograms using a technique called phase-shift interferometry and introduce a third step, that of digital compression and decompression. Although the capabilities of holography have been known for many decades, digital holography has seen renewed interest with the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range.

The problem comes right at the end, in the very last sentence. Having got incredibly specific with the digital compression and decompression, the author now gets quite broad again, talking about digital holography from a historical point of view. This is jarring: it literally doesn’t follow from what came before it. It’s not what we’re expecting, and we certainly couldn’t guess what would follow on from it. In fact, if you look at the last sentence, there’s nothing wrong with the information it contains. It’s relevant (although misplaced). Indeed, with a little rewriting, it would have been perfectly fine, but only if it were moved from its current location to after the third sentence (where digital holography is introduced) and rewritten slightly.

Technical people are generally logical, so non sequiturs are not hugely common. However, they’re not rare, either. Given the seriousness of the credibility problem they create, they must not be taken lightly.

Sentence length

In the same way that paragraphs have to be kept to 150 words to give the reader a chance to stop and digest, sentences need to be kept to 40 words or fewer. Sentences are the minimum unit of text: you should be able to read them all the way through without having to stop en route to work out what’s going on. If you are forcing your reader to stop and think midway, then your sentence is too long. The paragraph below, for example, is filled with sentences that are too long:

One possible 3D integrated electronic/photonic PMCM structure is shown in Fig. 1(a) (pre-detection optics not shown). Multiple layers of pixelated silicon VLSI chips (chips that are divided into arrays of nearly identical devices or functional regions) are densely interconnected by a combination of electronic, optical, and photonic devices to produce either a space-invariant or a space-variant degree of fan-out and fan-in to each individual pixel (neuron unit, or processing node). These weighted fan-out/fan-in interconnections are suggestive of the axonal projections, synapses, and dendritic tree structures that characterize neurobiological systems and provide for fan-out from one terminal on a given chip to many terminals on the adjacent chip with individual weights on each connection. The use of optical and photonic devices in particular allows for the implementation of such dense weighted fan-out/fan-in interconnection patterns between adjacent physical layers within the stack of chips without significant cross talk, thus eliminating the need for electrical connections that must penetrate through each chip.

All but the first one break the forty-word rule, and you can see for yourself that—besides the challenging content—the complexity of the text makes it difficult to read. When you analyze it, it’s easy to see what’s wrong here. Each sentence has two of almost everything: verbs, objects, and clauses. In other words, each sentence is actually two sentences crammed into one. By splitting the sentences up, they become much easier to read. In this kind of situation, you can avoid repetition by cloning the subject of the sentence and using a pronoun such as it or this the second time. Just make sure it’s absolutely clear what that pronoun refers to in the context.

On the other hand, there is no such thing as an ideal sentence length. If you read the next example you may find it a little bit tedious:

To evaluate the performance of the profiler, we performed repeatability tests with several evaporated metal test surfaces. Figure 6(a) shows the results of one such test, for a surface consisting of two rounded aluminum steps. The three traces are the surface profiles recovered from three interferograms, with the reference surface having been translated slightly between each one. After removal of the relative translation, the r.m.s. difference between traces was found to be just over 5 nm. A plot of the residual error between the top and the bottom traces is given in Fig. 6(b).

The original text from which it was taken was even more hypnotic because every sentence (in the whole paper) was pretty much one breath long (i.e. if you were reading it aloud you would naturally take a breath at the end of each one). This repetition of sentence length lulls you to sleep: exactly the opposite of the focus and concentration you want to stimulate in your reader. This repetition was probably a result of the writer (who was very competent in every other respect) being taught that you should read your work aloud to make sure it is clear. For public speaking, you don’t necessarily want to have to take a breath in the middle of a sentence, so this length might seem sensible. Constant repetition of this sensible length, however, does not make for good writing.
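Both problems in this section, sentences over forty words and sentences that are all much the same length, can be screened for with a few lines of code. This is a hypothetical sketch of my own, with a deliberately naive sentence splitter (abbreviations and decimal points will fool it):

```python
def sentence_lengths(text):
    """Split text into sentences on '.', '!', '?' and count words in each.

    A naive splitter: abbreviations and decimals will be miscounted,
    so use the numbers as a rough signal only.
    """
    sentences, current = [], []
    for word in text.split():
        current.append(word)
        if word.endswith((".", "!", "?")):
            sentences.append(len(current))
            current = []
    if current:
        sentences.append(len(current))
    return sentences

def review(text, limit=40):
    """Return (lengths over the limit, whether every sentence is the same length)."""
    lengths = sentence_lengths(text)
    too_long = [n for n in lengths if n > limit]
    monotonous = len(set(lengths)) == 1 and len(lengths) > 2
    return too_long, monotonous

text = "We built it. We tested it. We shipped it."
print(review(text))  # → ([], True): nothing over 40 words, but every sentence is 3 words
```

A check like this catches both failure modes at once: the forty-word ceiling and the hypnotic sameness described above.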

Section and subheadings

Headings can be an extremely valuable tool for guiding readers, especially around long reports. They are not difficult to use, but there are some basic rules that will make them more useful. First, headings should look sensible when they appear in an outline. This means they should be consistent with each other both in their style and how they are used. If you have a section on communication theory and allow the detector and the emitter to have their own sections, then the noisy channel should have one too (even if you don’t have quite so much to say on that topic). If the first section is called “Theory” (boring as that is), the next two should probably be called “Experiment” and “Results.”

On the wording of headings, lots of different styles can work, from single descriptive words to snappy double entendres, to literary allusions, to long technical phrases. You can also use your headings as mini-headlines, enticing your reader to keep going by highlighting a provocative question that the section will answer. Just remember that headings should not be full sentences and that making them too long can cause readability problems. If a heading runs over two lines, for instance, the second line may be more difficult to read and easily ignored, which means the reader loses some of the sense of what the heading said.

It’s important that you not make your sections too short. It’s fine to have the odd short one, or even a section with just a paragraph of text, but that should not be the norm. If you have too many subheadings, one for every paragraph or two, then there is no flow between the paragraphs. This makes the text stilted, disjointed, and incoherent for the reader.

Long sections can be problematic as well, because they inconvenience readers who need to take a break to get off the train, go to a meeting, or answer a call of nature. Readers always want to make it to the end of a section before they stop. That way, they can put in their bookmark (or turn off their reader) and know that they can simply pick up at the subheading. If a section is very long, then the reader has to skim to find their new starting place as well as having to reread the paragraph or two that came before they stopped to pick up the context again. Not a huge problem, but one of those little annoyances that could add up with others to make the reader give up or turn against you.

Make it easy to read

There are a few more things you can do for your audience. One is to choose a writing style that is direct and interesting to read, avoids repetition, and is free of the ambiguity often caused by grammatical errors. Another is to make sure that your reader can actually read the text, ensuring that the size, style, color, and font are not making reading more difficult than it needs to be. If you’re including technical details, then it will also be important to ensure that formulas, code, and so forth are easy to follow. Finally, you want to make it easy for your audience to find out more when you cover topics that they find interesting. We’ll cover all these issues here.

Voice

The most readable and direct prose comes from using the active voice, with some passive thrown in for variety. There are all sorts of arguments to be had here about the philosophy of science being better served by the passive voice because it keeps the investigator out of the scientific process.

You (or your supervisor) may vehemently agree with these, and you’ll find many people on your side. However, if you really want to communicate, you will value pragmatism, and there are lots of pragmatic reasons to choose to use the active voice. Let’s start with two examples:

Active voice: We did this. We did that. Lastly, we did that other thing.
Passive voice: This was done. That was done. Lastly, that other thing was done.

So far, it looks like the two versions include the exact same information in the exact same number of words. However, if I give you another example, you’ll see that, with the active voice, you are providing more information:

Active voice: We did this. They did that. Lastly, we all did that other thing.

Yes, I’ve added a word, but even without the all there’s a lot of extra information there. The point is that, with the active voice, we know not only what was done, but by whom. As a reader, that helps me. Now it is clear who did what, and I can figure out who it is that I need to talk to. The active voice removes any ambiguity.

The active voice is also easier both to write and to read because it is more like the direct, natural language we use in everyday speech. Even for many native English speakers, the stilted syntax that has traditionally been used in technical papers is difficult to parse. Given that only about 5 percent of the world’s population speak English as a first language, and only about 18 percent can communicate in English to any extent, it is better to err on the side of simplicity.

Another advantage is that you can use the second person to talk directly to your audience, an approach that is particularly useful for explaining how to do things. Compare the following snippets:

Passive voice: To turn on the radio, the button should be depressed.
Active voice: To turn on the radio, [you] depress the button.

Without the you there, you might not have even noticed that the active version was, in fact, active. But this form is important. Using the second person (you, whether you include that word or not) allows you to provide clear, direct instructions without tying yourself in knots.

Finally, using the active voice doesn’t mean always using the active voice: it simply means using it as your first option where appropriate. If, for whatever reason, it’s not appropriate (see “Repetition” for one good reason why it may not be), you can switch to the passive voice and then back again. This gives you more flexibility and an extra tool to help you avoid writing repetitive sentence structures.

Repetition

There are many types of repetition that can divert your reader from your ideas, and you should try to avoid all of them. We discussed repetitive sentence length earlier, but that’s a relatively rare affliction. More common are constant repetition of words, sentence structures (including sentence beginnings), and ideas. We’ll take these in turn.

Repetition of words is something you were probably told to avoid in high school. If you do it often enough, you will drive your reader crazy. Take this example:

The prototype of our wearable user interface will lead to a solution to these demands. Figure 2 shows a larger view of our wearable user interface. The user “wears” combined gyro and acceleration sensors on the back of the hand to measure the three-dimensional orientation of the hand with respect to the ground and also “wears” a gyro sensor on each fingertip to measure the angular motion of the finger with respect to the back of the hand. Note that the relative bending angle of the finger is computed by comparing the outputs of the gyro sensor on the fingertip and the gyro sensor on the back of the hand. Since there is no other external device necessary to wear, particularly no device on the palm of the hand, the user can wear this compact interface anywhere and anytime, and can even hold other objects or take notes with a pen.

There are lots of repetitions here that you may well have noticed: back, finger, gyro, sensor, hand, user, and—worst of all—various versions of wear. Not only does this make the text confusing and annoying (the confusion comes because you feel like you’re reading the same thing over and over), but it also wastes bandwidth: more words are being used than necessary, wasting the reader’s time. To show you how true this is, have a look at an edited version of the same block of text:

Our prototype interface addresses these requirements: Fig. 2 shows a larger view of it. The user wears combined gyro and acceleration sensors on the back of the hand—which measure three-dimensional orientation with respect to the ground—and gyro sensors on each fingertip that measure the angular motion of the finger with respect to the back of the hand. The relative bending angle is computed by comparing the outputs of these gyro sensors. Since no other external device is necessary, the user can wear this compact interface anywhere and anytime, and even can hold other objects or take notes with a pen.

This conveys exactly the same information, but is almost one-third shorter than the original. You can think about reducing the amount of text in two ways. If you’re expected to write a report of a particular length, then cutting out unnecessary words allows you to say more in the same space. If you have something in particular you need to say, at whatever length it comes out, then compressing the content in this way saves your reader time. Either way, it’s a win.

Sometimes the problem is not so much that there are too many repeated words but that there seems to be no way to get rid of the repetition because a particular noun or verb has to be used several times to get your meaning across. If this is the case, there are two major options. The first is to rewrite, replacing the repeated word with a pronoun such as it, this, or that, or somehow avoiding using it altogether. You can see examples of all these tactics in the editing done to the repetitive paragraph above. The other option is to find a synonym: a word that means the same thing as the one you’re trying to avoid. So, for instance, you might talk about taking a measurement at one point, and a reading at another, and detecting the light level somewhere else. You can use a thesaurus to find synonyms, but be careful about words that you’re not particularly familiar with: ask around about whether each is appropriate in the context. Using a word wrongly dents your credibility far more than repetition does.

The problem with repeating sentence structure is that every sentence ends up feeling very similar. The audience finds it hard to grasp the meaning after a while because it seems like the same thing is being said over and over. The nouns and verbs seem interchangeable, and it all blurs together after a while. The effect of this can be seen in the paragraph you’re reading now, where every sentence starts with the.
The solution to this problem is to look at your sentences and ensure that they don’t all start in the same way. The offending sentences don’t all need to be changed. You just need to change enough of them to provide variety for the reader. The problem could be solved in this paragraph by switching just half the sentences around.

It may not be sentence beginnings that are the problem: it could be some other kind of sentence structure. My favored structure, for instance, is the colon: I’d use it way too often were I allowed. I could do this, and it would be entirely grammatically correct, but the problem would be as follows: it becomes very annoying to read and, again, makes the audience feel they are getting the same sentence over and over (just with different nouns and verbs). You might be getting a sense of that in this paragraph: I’ve tried hard to include a colon in every sentence. Once you (or someone you’ve asked to read your work) notice repetition of this type, it’s easy to fix: ration yourself to (say) just one sentence with that structure per paragraph.

Finally, repetition of ideas can occur when you’re trying hard to ration out the explanation so that you only tell your audience what they need to know when they need to know it. Although, in general, this is a very good tactic, it becomes counterproductive if—essentially—you’re having to go back and re-explain concepts that you’ve already dealt with in order to prepare the audience for the new explanation. This gets you back to wasting your audience’s time. The best practice is to give the audience as much as they can handle, put it in context as you go, and then move on to the next topic.
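Word-level repetition, at least, is easy to screen for mechanically. The sketch below is a hypothetical illustration of my own (including the stopword list, which you would tune for your field): it counts overused content words and sentence openers that repeat, two of the patterns discussed above.

```python
from collections import Counter

# Common function words that repeat naturally and shouldn't be flagged.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on",
             "is", "are", "was", "were", "that", "this", "with", "it"}

def overused_words(text, threshold=2):
    """Count content words and return those used more than `threshold` times."""
    words = [w.strip(".,:;!?()").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return {w: n for w, n in counts.items() if n > threshold}

def repeated_openers(sentences):
    """Return first words that begin more than one sentence."""
    openers = Counter(s.split()[0].lower() for s in sentences if s.split())
    return {w: n for w, n in openers.items() if n > 1}

text = ("The user wears a sensor. The sensor reads the user. "
        "The user wears a glove. The glove wears out.")
print(overused_words(text))                # → {'user': 3, 'wears': 3}
print(repeated_openers(text.split(". ")))  # → {'the': 4}
```

As with the other checks, the output is only a prompt to look again: a flagged word may be the one legitimate technical term for which no synonym exists.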

Grammar

Poor grammar causes two major problems. First, it makes you lose credibility with your reader. You seem less careful and less skilled than your reader needs to believe you are for them to trust your ideas and recommendations. Second, it can be a source of ambiguity: in a grammatically broken sentence, it may simply not be clear what you mean. The lesson here is that you need a strategy for making sure that reports don’t go out with poor grammar. You could use a proofreader or editor within your research group or company, or you can take the decision to improve your own skills: either is acceptable. Having no strategy is not.

Legibility

We all know that functional design is important within our own disciplines. What we often forget is that it is important in all fields, including those in which we have no training. Type design and layout is one of these. In particular, making poor choices in font, size, style, color, and line length can make life difficult for your reader. You can, of course, read books on this subject, but here I include some basic things to consider if you are designing your own report format.

First, choose a simple non-distracting font. It makes things easier from a practical standpoint if you choose a standard font (one that exists on most computers) rather than one that is rarer (even if it’s more aesthetically pleasing). This is because you will have fewer formatting problems and missing/garbled characters if you have to share a document, change platforms, or switch to a different piece of software. If the document is likely to be viewed on a screen (e.g. a presentation or web page), then you are better to choose a sans-serif font: use serif fonts for printed text.

• Serif fonts include Times New Roman and the font in which this text is printed (Adobe Minion Pro).
• Sans-serif fonts include Arial, Helvetica, and Gill Sans (used here).

In a serif font, there are small lines that protrude (for instance) at the base of the T and off its top stroke. These lines have a function in high-resolution printed text, in that they create base and top lines for the eye to follow along the page, making the text more readable. However, when these fonts are shown on a screen (especially if the resolution is low, the text is small, or both), then the serifs are reduced to one or two pixels and can distort the letter shape or appear as a kind of noise around the text. This makes the letters harder to read.

Text size is also an issue. In documents, you probably shouldn’t go below 9-point type if you want people to physically be able to read what you’ve written. In a presentation, the problem is more complicated. First, something small that’s perfectly readable on your lovely, high-resolution monitor may look awful and illegible when it’s projected through an old SVGA projector. Second, a slide that works well on a large monitor in a small room will not necessarily work so well in a lecture theater. A rule of thumb is to use text that is at least 24 point and definitely no smaller than 18 point. If you use PowerPoint, this means overriding the settings of any in-built theme you use: the default allows you to go much smaller than is good for your audience.

Take care if you have a diagram or screenshot that you want to show on a slide but where you know or suspect the text will not be readable. Your audience will find it frustrating if they cannot read all the words. If the words are irrelevant, it may make sense to block them out. If the text is necessary to understand the figure, add it back in at a legible size. If it’s a screenshot, ask yourself if you’re using it in a sensible way. If you just want to say, “Look, this thing is live,” then fine. If you want to show where functionality exists, you may have to zoom in to the relevant buttons or text so that they are clear and legible.

Another issue to consider for all documents, but particularly presentations, is color. Almost 8 percent of men are color blind (specifically red/green blind), limiting what hue combinations they can distinguish. If you do an image search for terrible color combinations presentation, you will find lots of pictures of awful slides that are unreadable because there is not enough contrast between the foreground and background, or because the combination is just incompatible with the human visual system. Yellow text on white is just obviously a bad idea unless you want to hide something. So is light blue on dark blue. Red on green is an awful idea (even for people with standard eyesight) unless you’re trying to give your audience a headache.

Line length (literally the length of a line of text, as measured in characters) is another important parameter to consider. The ideal number varies from expert to expert, but you should probably aim to have no more than ninety characters per line and no fewer than forty-five. If the line is too long, then, when your readers get to the end of one line, they will have difficulty finding the beginning of the next one. If the line is too short, then reading becomes inefficient, because the eye spends as much time traveling down the page as reading across it. The character-length rule assumes single-spaced type (i.e. a single line space between lines), so you can get away with slightly longer lines if you use a wider spacing.

The task of finding the next line after leaving the last one can also be made easier by using non-justified text (also known as ragged right).
Here, each line of text has its own natural length, which produces a kind of landscape on the right side of the page. Your eye uses this landscape to find the beginning of the next line; without it, the problem is more difficult. In addition, with columns, using ragged right prevents awkwardly large spaces from appearing when there are just a few words in a line. Some people really like the look of justified text, and it's not unreasonable to want to use it. Just bear in mind that if you choose this kind of layout, you'll want to be sure you're not using a line length that is too close to the limit.

The last danger to be aware of is a reader hazard, where the reader gets to the end of a block of text and doesn't know where to go next.

Figure 2  Shown is a page layout known as a "reader hazard." The arrows show the various different paths that the reader could take to get through the text. By pushing the figure to the top or the bottom, this ambiguity can be avoided.

If you look at the example shown in Figure 2, you'll see that certain configurations of columns with badly placed figures or tables can be confusing. Your reader might end up skipping a section without even knowing it, and so lose the train of the argument. Such problems are easy to fix: pushing figures to the top or bottom of the page will generally make the text flow obvious.
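For plain-text drafts, the line-length guideline above (no fewer than forty-five characters per line, no more than ninety) is easy to check mechanically. The following is a minimal sketch, assuming a single-spaced plain-text document; the function name and the exact thresholds are illustrative choices, not prescriptions from this chapter:

```python
# Flag lines whose length falls outside the suggested 45-90 character window.
MIN_CHARS, MAX_CHARS = 45, 90

def check_line_lengths(text):
    """Return (line_number, length) pairs for lines outside the window.

    Blank lines are ignored. Note that the short final line of a
    paragraph will be flagged too, so treat the output as a prompt
    to look, not a verdict.
    """
    problems = []
    for i, line in enumerate(text.splitlines(), start=1):
        n = len(line.rstrip())
        if n and not (MIN_CHARS <= n <= MAX_CHARS):
            problems.append((i, n))
    return problems

sample = "x" * 100 + "\n" + "y" * 60 + "\n\n" + "z" * 10
print(check_line_lengths(sample))  # lines 1 and 4 fall outside the window
```

In a justified or typeset document, the number of characters per printed line depends on the font and column width, so a check like this only makes sense for plain-text or fixed-width drafts.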


Mathematics, code, data, and other technical detail

In technical reports, you will often have to include very technical material, such as proofs, formulas, code, and data. If your audience is used to dealing with these (i.e. if they're at least as familiar with them as you are), then their inclusion should be straightforward. If your audience has a different technical background, however, this can be problematic. Either way, there are a few issues to keep in mind so that you don't end up confusing your reader.

First, ask yourself (as with any kind of explanation) what level of detail you need to include. For instance, do you really have to include a proof (e.g. if you are writing for a journal or a very technical client), or can you simply write a paragraph giving the rough shape of the proof and then link to a rigorous technical paper you've written on the subject (in case they want to check up on you)? In other words, if your reader doesn't necessarily need the technical detail—if it's not the thing you're expecting them to judge you on—don't push it on them.

What if you're not sure, or have an audience composed of some people who are technical and others who aren't? Here, the easiest thing is to hive the details off into appendices. This gives you the best of both worlds: thoroughness for those who need it, and clarity and concision for those who don't. Where it's not possible to use appendices (such as in journal papers), it may still be possible to separate out the details so that they don't trouble those who aren't equipped to deal with them. If there aren't other conference/journal papers to reference, one tactic can be to publish the technical material online as an archive. This is a good strategy if the material is related, but not critical, to the topic being discussed.
If this isn’t appropriate—you know those details will be indispensable to some readers—then another possibility is to include a technical section that you invite other readers to skip entirely. In theory, this can work well, as many readers will be very grateful that they’ve been encouraged to “skip Section 5 if you're not interested in the derivation of the . . . .” The problem comes if the technical content that follows relies on ideas that were only explained in that section. To make sure that it can be removed cleanly, the trick is to show the report to a friendly reader with the section missing entirely (so they can’t peek at it). This way, they should be able to tell you if you’ve hidden important concepts in there that—when skipped — make the rest of the paper unintelligible. 176  |  EXPL AINING THE FUTURE

Finally, if you are getting into technical proofs, you must make sure to spell everything out appropriately for your reader. Make sure they understand any assumptions upon which your theory is based, conventions used, variables, and so on. How much explanation you provide will depend on how much your audience needs to grasp in order to follow your technical argument.

References

References are helpful in lots of ways. They help you avoid going into unnecessary technical detail, point to alternate approaches and strategies, and allow readers to delve into the guts of issues that are relevant but tangential to your main argument. However, if your reader looks for something that you cited, and they cannot find it, you will lose credibility with them. Using broken links, incomplete conference information, missing page numbers, misspellings, or anything else that forces the reader to do more than the most basic search is bad practice. If you've been working with a reference manager, you're covered. The good ones will ingest the full details of a paper automatically, allow you to drop citations into your report as you compose, and then format the bibliography for you.

In your reports, you should use the referencing style that is most appropriate to your field. In engineering, computer science, and the physical sciences, that would generally mean either the Harvard style, where the text citation consists of the author's name and the date of publication, and the references in the bibliography are listed in alphabetical order, or the Vancouver style, where the text citation is a number assigned to the reference, and the references in the bibliography are listed in the order in which they appear in the text. However, there are several others. Some deal better with literary texts, and some deal better with legal texts. Your reference manager will allow output in whatever style you need.
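To make the difference between the two styles concrete, here is a small illustrative sketch. The bibliographic entry is reference 6 from this book's own reference list; the formatting functions are simplified approximations of the two styles, not the output of any real reference manager:

```python
# One reference, rendered in the two styles described above.
# These formatters are deliberately simplified.

ref = {
    "authors": "D. Woods and T. J. Naughton",
    "title": "Photonic neural networks",
    "journal": "Nat. Phys. 8 (4)",
    "pages": "257-259",
    "year": 2012,
}

def harvard(r):
    # Harvard: cited in text by author and date, e.g. (Woods and
    # Naughton, 2012); bibliography sorted alphabetically by author.
    return (f'{r["authors"]} ({r["year"]}) {r["title"]}, '
            f'{r["journal"]}, pp. {r["pages"]}.')

def vancouver(r, n):
    # Vancouver: cited in text by a number, e.g. [6]; bibliography
    # listed in order of first appearance in the text.
    return (f'{n}. {r["authors"]}, {r["title"]}, '
            f'{r["journal"]}, pp. {r["pages"]}, {r["year"]}.')

print(harvard(ref))
print(vancouver(ref, 6))
```

Whichever style you pick, the point is consistency: a reference manager applies one of these templates to every entry so that nothing is missing or misformatted.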

Don’t give rough drafts to readers Unless someone is an active coauthor, they’re a reader. As such, they deserve all the consideration of any other reader: using our metaphor from before, this means that they should see your argument as path that is as clear as you can possibly make it. Only this way can they give you good feedback that will help you identify the blockages that you hadn’t previously noticed. Cre dibilit y  |   177

People will always focus on the most obvious problems. In fact, in the creative industries, it is common practice to "throw the client a bone" when getting to the point of getting sign-off on a publication or video. In this case, the "bone" is a reasonably obvious, annoying error that will distract the person doing the sign-off from meddling with the final product too much (which would cost the creative company time and money, make it more difficult to meet the deadline, and so forth). This diversionary tactic makes the client feel that they've done their job by demanding an error be fixed while not causing inconvenience to the people doing the work.

What is a boon for editors and designers trying to have an easy life can be a huge problem for people who actually want insightful feedback. If you give readers a half-baked draft—one where there are lots of typos, the numbering is all wrong, the figures are missing, and/or the captions are incomplete—then your reader will focus on those problems to the exclusion of anything else. They won't follow your argument well enough to critique it. In other words, the feedback will be worse than nothing. Why worse? Because you will have annoyed your reader, lost their goodwill, and delayed yourself by waiting for useless feedback, and your reader may not agree to help you again anytime soon.

How do you avoid this? First, don't use written drafts to test out arguments. Use verbal presentations instead. Create an outline of your report either as a slide presentation or as a simple document, perhaps with a few rough diagrams to aid explanation as you go. Sit with your potential reader and go through the explanation step by step. Note the questions that your reader asks, so you can figure out where those questions are best addressed when you do the actual write-up. Once you feel you have a clear, compelling argument that your readers will understand, write it up properly.
It’s OK to have a first draft where there are holes, comments saying “Put reference here,” and so on. Before you give it to anyone, however, you need to make sure it is clean. It is critical to give yourself enough time to write, let days or weeks pass, and only then edit. When you write a document, you get very close to it. The sentences and paragraphs become embedded in your short-term memory, and you find it hard to see the errors in them. For a very long document—where, by the time you finish the last chapter, it’s been months since you looked at the first—this is generally not a problem, as the delay between writing and editing is built in. For shorter documents, however, you may start writing the document one day and start editing it the next 178  |  EXPL AINING THE FUTURE

day, or the day after that. Unless you are already a good writer, this is not enough time (and, even then, it’s a compromise). Being disciplined helps with this. Have a schedule in mind for your writing and editing with a comfortable gap between the two (one that is big enough so that it can be eroded by procrastination and other delays but still be long enough to be useful). Depending on the type of document, its importance, and the level of your writing skills, that gap could be as little as a weekend or as much as a couple of months. Once you’ve got all your content nailed down, make sure to check that it reads well. Use your spell checker and grammar checker. Even if you don’t care about some of the issues raised, these tools will likely highlight places where you’ve missed out words, garbled tenses, hadn’t quite finished editing, and so on.

Summary

To keep your audience on your side and lead them through your argument, you will need to:

• show your evidence (don't just tell people what to think)
• be honest, authoritative, and accurate
• prepare for objections your audience is likely to have
• provide signposts and resting points and create an easy flow through the argument by:
  ◦ having single-topic paragraphs
  ◦ including topic and linking sentences
  ◦ using connector phrases
  ◦ avoiding non sequiturs
  ◦ ensuring sentence and paragraph lengths are appropriate
  ◦ including the right number of section headings and subheadings
• make it easy to read by:
  ◦ using the active voice where possible
  ◦ avoiding repetition
  ◦ checking your grammar
  ◦ making sure the document is legible
  ◦ only using the mathematics, code, data, and other technical detail you absolutely need
  ◦ specifying references correctly
• give the best possible drafts to readers, to get the most useful feedback

Case Study Part II

Report

The following report is intended for a general technical audience interested in the subject from a strategic perspective: they are interested in knowing whether the new development is likely to change the technical landscape and so might have a knock-on effect on other fields, rather than having any specific interest in backing the technology themselves.

I could assume that, since they are reading this, they have already read Case Study Part I and so know some of the concepts, ideas, and jargon that I went through in detail there. However, that was three chapters ago: any amount of time could have passed between now and then or, indeed, a reader may have bypassed that section altogether, so I can't take any of that information for granted. Other than that, my main job is to make a clear technical argument from scratch, justifying my reasoning and conclusions as I go.

I've chosen to keep the report short: partly because greater technical detail wouldn't be of interest to most readers and partly to make it easier to demonstrate the structure and annotate as I go along. There are three different types of annotation here:

• At the beginning of each paragraph, you'll find some text in bold. This shows the subject of the paragraph as indicated by the topic sentence: the rest of the paragraph should all directly link to that subject.
• You will also see words underlined to indicate that they are jargon words that have been defined or avoided.
• Comments about these words and phrases are included in the indented paragraph that follows, as is an indication of where paragraphs fit in the technical argument.

Finally, the structure I used for this piece is a variant on the two you saw in Chapter 6 (see Outline 3).

Outline 3  Structure variation used for the case study report.

Will silicon photonics speed up deep learning?

We increasingly expect machines to understand the world the way we do: to recognize faces and objects, understand speech, and diagnose diseases. Deep-learning systems can do this, and have much greater potential than some other kinds of machine intelligence, because their knowledge is based on experience rather than programming. The use of such systems is only set to increase with the advent of self-driving cars, assistive technologies, and smart infrastructure in both the built environment and the home. Since many of these systems will be safety-critical, they will need to be able to react to new situations quickly and reliably.

Above is the vision. It paints a picture of what we would like technology to be able to achieve.

Unfortunately, the processing that deep-learning machines do is not well suited to the way conventional computers operate. Essentially, digital electronics is most efficient at processing in series (like a conveyor belt, where each data point is processed one at a time). For neural processing, you want related information (data that are close to each other in time or space) to be compared with each other and processed in parallel (simultaneously). This is because the meaning of one point (e.g. a pixel in an image or an equivalent "moment" in sound) can only be understood in the context of the other pixels around it. These bunch together to form features like eyes/noses/mouths or phonemes, which can, in turn, be joined with others to form recognizable faces or words. Although conventional computers can do this, they are not optimized for it, which means they take longer and expend more energy than strictly necessary.

"Processed in parallel" encompasses a large amount of detail related to this problem, detail that is interesting to electronic engineers but not to the audience of this report.

Here, we ask whether using optics rather than electronics might allow us to speed up the inference process, both theoretically and practically. Specifically, we look at using a new technology that allows optical circuits to be combined with electronics in a way that is both cheap to design and easy to fabricate. Finally, we consider whether such a technology could compete against digital electronics in the medium-to-long term.

Optimizing neural hardware

In 2013, Google scientists realized that they needed to address the inefficient processing of deep-learning queries.12 They projected that "people searching by voice for three minutes a day using speech recognition (requiring deep neural networks) would double our datacenters' computation demands." To head off this situation, they developed a new digital chip known as the TPU (tensor processing unit), intended to make the recognition task (known as inference) faster and reduce the amount of power it required. The TPUs worked because they were optimized to perform convolutions: mathematical operations that were structured in the same way the neural computations were. The approach worked: the speed was improved by a factor of 15–30, and the power efficiency by a factor of 30–80. Other companies have been working on similar approaches.

This is the second half of the status quo and the first of two technical problems in this story: the deep-learning hardware and the integration of photonic devices.

The parallelism needed to perform a convolution operation requires massively increased communication on a chip, creating an opportunity for emerging technologies. Although the TPU implements this connectivity by using conventional electronic links, photons can carry more information much faster and more efficiently than electrons. For this reason, communication among, and increasingly within, computer systems is performed via optical links (such as fibers) rather than wires. Because of this, for more than three decades, researchers have sought to create photonic (light-based) systems that could form the communications backbone of electronic computers.

I can't expect a general technical audience to know what optical links are, so I provide a quick definition. Since most will understand what optical fibers are (and the benefits they bring), this will help them to understand the rest of the piece. The word "photonic" is not understood by most people and will be used often, so that is briefly defined as well.

Although light-steering components have continued to shrink and work in tandem with electronics at system level, it has been difficult to integrate the two technologies within chips. Stand-alone photonic devices are generally fabricated using gallium arsenide and other "exotic" materials to optimize optical performance. If electronic functionality is required, that's implemented on a separate device, and the two are connected together. Although photonic devices can be bonded onto silicon chips, this is difficult, expensive, and unreliable.

This paragraph, together with the paragraph above, explains the second technical problem. Note that I could have expanded light-steering components to light modulators and waveguides, but I couldn't assume that these concepts would be understood by the audience and—in any case—they are not critical to the argument here. Also, I could have said more about III–V materials (of which gallium arsenide is one), but, again, it's not the point here. All you really need to know at this stage is that optoelectronic materials are not used in conventional electronics.

New viability for photonics

A group from MIT found one way around this problem: give up the advantages of optimized photonic materials and "hack CMOS" to create light-based circuits alongside electronic ones instead.11 Specifically, they created a design tool that allowed engineers to create basic photonic pathways and modulators (to controllably reduce the amount of light allowed to pass) with "zero change" to the processes and fabrication equipment used to make computer chips. Although the devices were not ideal, they did work, making this a major breakthrough in the field and promising a new generation of optoelectronic devices.

This paragraph is part of the new solution, one that addresses the second technical problem raised. Since the main competitor to this technology is the status quo (digital electronics), which has already been discussed at length, it's not necessary to go through the competing solutions again at this stage. Note that I could have used the word "attenuate" to help explain what "modulator" meant, but that would have been defining jargon with jargon. I could not have been sure that my whole audience would have understood it; so, since I'm not going to use the word again, it made sense to use the definition without the word itself.

Another team of researchers, again from MIT, sought to exploit this advance for deep learning, and experimentally demonstrated a system for vowel recognition.20 Essentially, their chip was passive: the learning was done on another system entirely, and the weights were written onto the chip. This means that, once the incoming data (containing the sound patterns to be analyzed) is encoded, it can literally be processed at the speed of light. This, they claimed, "enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics." The two lead authors subsequently founded competing Boston-area companies—Lightelligence and Lightmatter—with funding of about US$10 million each.

This is the first paragraph that explains how zero-change photonics should form part of the new solution to the main technical problem being discussed and explains how—and how well—that new solution works.

Although it is not entirely clear what their products will look like, there has been some anticipation that the success of this technology might not only advance deep learning but also represent the first real steps toward a new paradigm of optoelectronic computing. During its peak in the 1980s and 1990s, the field received a large amount of funding in the hope it would enable superfast targeting of long-range missiles for President Reagan's Strategic Defense Initiative. It was not seen as a success, however. This was partly due to the lack of micro-scale components and partly because the inexorable advance of digital electronics eventually made it seem redundant. The advent of zero-change photonics, combined with the massive interest in neural architectures that could potentially be implemented in optics rather than electronics, could stimulate a resurgence in this area.

This is the second paragraph detailing the new solution being discussed, explaining its potential and why people are excited about it. I've added some context about the Strategic Defense Initiative because younger scientists are unlikely to have ever heard of the program.

Although the technology seems to have potential, there are many unknowns that could detract from any impact it might have. First of all, it is not clear that completely passive chips would be acceptable for many applications. Systems designers expect to be able to upgrade systems once they've been deployed (e.g. through software updates). Without this, they might be forced to make suboptimal workarounds or physical upgrades. Thermal issues (photonic technologies can be particularly susceptible to changing temperature) could also be a challenge for many applications, as might the switching out of optical power supplies (laser systems), which tend to have much shorter lifespans than conventional electronics do.

Now we get to the obstacles that the technology will need to overcome to be successful. Optical power supplies had not been defined yet and needed to be made "real" to the audience.


Discussion

If these issues can be overcome, or simply avoided by choosing applications where they don't matter, then the quality of the engineering will still be critical. Although a hundred or a thousand times improvement over digital electronics might sound impressive, computer systems in general (and neural systems in particular) are extremely complex. Even small decisions can lead to significant changes in overall performance, and there are so many decisions to make that what seems like an unassailable advantage could be eroded through bad design.

I could have explained in what ways neural systems are complex (more obstacles), but that would have resulted in a list of issues that no one would have understood without hundreds of words of explanation. Even trying to give a single example would have been difficult, as that would have required understanding how different types of neural networks function in more detail.

Likewise, the massive intellectual weight currently being thrown into making digital convolution networks more successful could, in turn, prevent their photonic counterparts from ever getting off the ground. The paper on the TPU had more than seventy different authors. DeepMind (a completely separate part of Google) has more than 700 employees (at the time of writing), who are entirely devoted to applying and making progress in deep learning. Other large companies are working in this space, including NVIDIA, which has a workforce in excess of 11,000. IBM, which created the TrueNorth deep-learning chip, employs 380,000 people worldwide. These companies have the resources to optimize their technology quickly and to the nth degree, should they choose to. If this effort reduces the benefits of photonics to an order of magnitude or so, it's unlikely that customers will choose the riskier technology over the sure thing that digital represents.

Now we are getting more into the discussion, and the comparison between the new technology and the main competitor. You could also consider this part of the obstacles section.

It is too early to tell whether silicon photonics will succeed, but the significant funding that Lightmatter and Lightelligence have received is cause for optimism. It's not unreasonable to assume that, to raise this money, the founders had to identify at least a few lucrative application niches for which the technology is ideally suited. Even if these niches are small, success could lead to the cultivation of silicon photonics and other optical technologies for computing applications in future. On the other hand, the last fifty years of technological history suggests it's unwise to bet against the incumbent—and especially digital electronics—unless you have something that's better to the point of being disruptive. There are more sophisticated optoelectronic technologies3,23,24 that could, conceivably, do this, but they are not ready yet. If the limited number of devices created by using zero-change photonics causes the technology to stall, this could have a negative impact on the funding of these and others in the pipeline.

The last two paragraphs form the prognosis and look to the future. Given the uncertainty in the analysis, the report points readers to things they can look out for that may clarify the situation in the future.

Conclusion

As we have seen, photonics does have a lot to offer deep learning and other neural systems. At least in theory, it could improve speed and efficiency by a factor of a hundred or a thousand. In addition, the zero-change design process could make the technology affordable for the first time. However, with current technology, the improved performance would have to come at the expense of reconfigurability. Although there may be applications where this is acceptable, this seems an important weakness. This is especially true given the relentless advance of—fully reconfigurable—digital electronics.

We will have to wait and see the products (if any) that come out of the new companies and the markets that they address. If they do, indeed, offer major performance advantages over conventional technology, that will be a good sign. The prognosis will be better still if their function can be adapted for other niches, providing a route for growth and development. Until then, all bets are off.


Epilogue

For any how-to book, a primary goal is to systematize the subject matter to make it as self-evident and straightforward as possible. However, some of the techniques described here are significantly more challenging to use in practice than they are to explain. So, if you're attempting this kind of project for the first time, it's really important that you don't get discouraged because what seems easy and tractable when described here feels much more difficult in real life.

For one thing, the case study included had to be over-simplified so that it could be understood by a general technical audience, and so that the canvases could fit in a paperback book. (In real life I'd recommend you print the canvases on 11×17" or A3 paper, and expect them to be much more complex, detailed, and messy.) Further—in the applied sciences, tech, and engineering—the subject matter itself is likely to be conceptually difficult, so even if you are doing everything right on the information-gathering side, you will still have to grapple with understanding the material. This is not easy.

Even if you have the advantage of being expert in the particular area you're investigating (making the subject matter transparent to you), you may encounter problems in your analysis because of your own internal biases, formed through your years in the field. Further, your expertise will likely make it more difficult for you to recognize the jargon in your field as, well, jargon. This will make writing up more challenging.

Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies. Sunny Bains © Sunny Bains 2019. Published in 2019 by Oxford University Press. DOI: 10.1093/oso/9780198822820.001.0001


Beyond any particular blind spots we may have as individuals, there is also the challenge of multitasking. Almost none of the rules or practices I suggest in this book are difficult if you focus on them: the difficulty comes when trying to do everything at once. I like the analogy of spinning plates (look up a video if you can't picture this!). It's easy enough to keep one plate on the go, but the more you add, the more likely you are to let something important drop.

One way to keep the plates spinning is to regularly and systematically shift your level of focus so that you don't become fixated on the wrong things. On the technical side, you will have big questions related to the application as well as trying to keep track of the narrow technical details related to the solutions. You have to keep revisiting and reality-checking not only each end of this spectrum, but also the levels in between. Keep asking yourself why an approach might fail, how things could be done differently and, most importantly, if there is another way to think about the problem.

From a writing perspective, this level-shifting is also critical. On the one hand, you must make sure you have a good technical argument and report structure; on the other, you need to check that you have written good paragraphs, avoided repetition, and kept track of the needs of your audience.

Doing all of this takes hard work and practice, and you can't expect to be able to accomplish it all at once. The way to make progress is to work on a few things at a time . . . much as I've done in this book. You can find the right questions, sources, perspectives, analysis, argument, structure, and credible writing style as long as you focus on these elements one at a time, rather than trying to tackle them all simultaneously. One way to achieve this is to use the summaries at the end of each chapter to remind you of the key points and try to optimize as much as you can at each stage of your work.
If you do this for two or three different projects you should find that you start to internalize the rules (at least the ones that make sense to you). Eventually—after enough practice spinning plates—you shouldn’t need this book at all.


INDEX

A
abbreviations 129
accuracy 2, 5, 31, 32, 40, 49, 59, 68, 70, 77, 110, 127, 130, 134, 136, 138, 139, 144, 154–7, 179
acronyms 123
active voice 168–70
Alloatti, Luca 98
analog electronics 103
analog neurons 87, 88, 89–90
analysis 49, 51–8
annual reports 24, 33, 39
Apple 8, 9, 142
Application Canvas 68, 69, 71, 103, 106
application-centered approach 51–3, 58, 68–73
Application Competition Canvas 72, 73, 103, 107
argument 132–43, 146, 148, 149
artificial intelligence 45
audience 112–17, 128, 133, 134, 141, 142, 154, 159

B
backward citations 28–9
bandwidth 112
Benton, Steve 36
Berinato, Scott 127
bias 41–3, 111
bloggers 24, 32–3
blue-sky research 1
Blu-ray 120–1, 145–6
books 29
bosons 89
bottom-up models 3, 45
brief 84, 85, 101
bugs 4
business development 25, 38–9

C
captions 128–9
cascadability 100
cheerleaders 48–9
chip-design industry 12, 13–14, 78, 96–7, 98, 100, 182–4, 186
chromatography 121
claims 2–3
clinical trials 9
clothesline structure 160, 161
CMOS (complementary metal-oxide semiconductor processes) 96, 184
code 176
collaboration 47
color 174
color coding 66–7
commercial requirements 9–10
common sense 81
communication skills 114–27
communication theory 112
compatibility 10
competing solutions 137, 140, 143, 150
competing technologies 24, 61, 72, 74, 76, 78, 82, 84, 140, 148, 149
competitive relationships 12, 16–18, 23–4, 47, 71–3
conclusions 148–9
conferences 23, 25, 35–6, 83
conflict minerals 8
connector phrases 164
consolidation 15
controversial statements 159
convolution architecture 98, 110, 186
copyright 129–31
corporate responsibility 8
corruption 8
cost 10
Creative Commons 130
credibility 49, 153–79

D
data 176
databases 24, 28, 30–1, 90, 91
data density 12, 13
data visualization 125–31
debate 139
deep learning 3, 97, 98, 100, 101, 110, 111, 181–2, 186
DeepMind 186
defensiveness 159
dementia 70
dependencies 76, 77, 78
dependent relationships 47
design criteria 46, 59
detailed analysis 55–7
diagrams 126
digital electronics 101, 103, 110, 111, 187
disagreement 43–6, 81, 159
disruptive technologies 3, 17, 76
document titles 143–5
drug companies 48
dual-use technologies 9
due diligence 56

E
efficiency 4
electrical and electronics engineering (EEE) 112
electric cars 79
electronics industry 4, 8, 14, 100
electrons 88–9
enabling technology 3
engineering 112
environmental concerns 8
errors 153, 154, 157
ethical requirements 7–9, 111
evidence 154–6
examples 155
expertise 23, 25, 34, 43, 58, 59, 82, 85, 96, 115, 116, 117, 156
explanation 119–23
exploitation 8
eye trackers 2–3, 7

F
false-consensus effect 120
feasibility 57
features 4, 11–12, 14–15, 18, 21–2, 24–5, 27, 44, 46, 52–4, 58–9, 61, 64, 71–2, 79, 81, 84, 92, 96, 125, 143, 182
Feature Canvas 61, 62, 92, 94, 103, 105
feedback 154, 177, 178
fermions 89
figure captions 128–9
first drafts 177–9
fonts 173
forward citations 24, 28
Foxconn 8
fuel-efficiency 4
functional definitions 124–5
funding 1

G
gatekeepers 38
Good Charts (Scott Berinato) 127
"good enough" 55
Google Scholar 28, 91
Google TPU 96–7, 100, 182–4, 186
grammar 172, 179
graphs 126–7

H
Harvard Business Review 127
Harvard referencing style 177
headings 167–8
headlines 1
health and safety 7
heart sensors 68, 70, 71, 81
Heath, Chip and Dan 118
higher-level features 11–12
honesty 156–7
hostile audiences 159
HVDC 144–5
hybrid cars 79
hyperbole 158

I
IBM 186
inaccuracies 153, 154, 157
individual agendas 46–9
industry bloggers 24, 32–3
industry reports 24, 33, 39
industry roadmap 40, 77
inference 97, 100, 102, 103, 165, 182, 183
information sources 26–9, 41
innovation 15
inputs 5
integrated solutions 2–3
intellectual property 24, 30, 74, 134, 143
introductions 143–4, 145–8
investment funding 14
iPhones 8
iterative processes 18, 20
iterative trawling 91–6

J
jargon 123–5
Jobs, Steve 142
justified text 174

K
key performance indicators (KPIs) 52, 53
keywords 21, 26, 90–1, 98

L
labor 10
lead time 76, 78
legal requirements 7–9
legibility 172–5
libraries 90
life-cycle analysis 8
lifetime 6
light 88, 89, 98
Lightelligence 185, 186
Lightmatter 185, 186
like-with-like replacement 13
line length 174
linking sentences 162–3
literature databases 90, 91
logical fallacies 157–8
logic gates 4

M
machine learning 45, 86, 111, 181–2
machine vision 59
Made to Stick (Chip and Dan Heath) 118
manufacturability 10
materials 10
mathematical formulae 176
mechanical switches 4
medical devices 9
membership societies 31
Mendeley 90, 91
misleading statements 44
MIT 96, 98, 102, 103, 184
MIT Media Lab 36
Moore's law 12, 13, 14, 78, 110
multiple audiences 117

N
Nahmias, Mitch 96, 97, 100, 102
neural networks 45, 87, 103, 182, 186
neuromorphic engineering 85–8, 92, 97, 100, 182–4
newspapers 31, 41–2
news sites 32
NIST 100
non-justified text 174
nonobvious applications 11, 140
non sequiturs 165–6
notebooks 20–1
NVIDIA Tensor Core 101, 186

O
objections 139, 158–60
obstacles 10–11, 138–9, 150, 151
on time, on spec 56–8
operating conditions 6
optical data storage 120–1
optical memory 12
optical modulators 102
optical power supply 100
optics 88–9, 90, 98
optoelectronics 96, 187
outputs 5
overheads 10

P
paragraphs 160–1
passive voice 168–70
patent databases 24, 28, 30–1
performance claims 2, 4
performance requirements 5
Perl code 9
persuasiveness 56, 117–19, 139
pharmaceutical industry 48
photographs 125–6
photonics 85, 88–9, 92, 96, 97, 98, 100, 101, 102, 110, 111, 181–2, 183–7
physical constraints 5–6
plagiarism 129–31
political climate 14
Post-it Notes 140
potential applications 63–4
Potential Applications Canvas 64, 65, 92, 95, 98, 99
PR (public/press relations) 25, 32, 38–9
press articles 31, 41–2
press releases 39
Princeton 90, 92, 96, 98, 100
privacy 7, 111
problem-centered approach 51–3, 58, 68–73
problem-solving 4, 136–8, 150
processes 5
processing power 12, 13
prognosis 140–1, 149, 150, 151
progress 12
project plans 25–6
public opinion 15
pure science 1

Q
question/answer method 132–4

R
radiation 4
ragged right 174
readability 168–77
reader hazard 174–5, 176
reality checking 81–4
reference managers 21
references 156, 177
reframing 3
regulatory bodies 31
regulatory framework 9
renewable energy sources 43–4
repetition 170–2
requirements 4–10
research 1, 3, 5, 6, 11, 14, 15, 21, 26, 30, 31, 33–4, 36, 49, 55, 111, 118
research funding 14, 15, 48, 56
research process 21–6
respect 114–15
results 156
review process 97
risk analysis 78
rivalry 10, 44, 47, 76
roadmaps 33, 74, 76–8
robotic technology 10–11
rough drafts 177–9

S
screenshots 173–4
S-curve 80
search engines 22, 26–7, 28, 91
section headings 167–8
security and export control 9
selective ignorance 44–5
semiconductors 12, 14, 96
sensors 59, 68, 70, 71, 81
sentence length 166–7
sentence types 161–3
Shainline, Jeff 100
signposts 160, 162, 164, 165, 179
silicon photonics 96, 101, 103, 108, 109, 110, 111, 181–2, 186
Silver, Spencer 140
site visits 24, 25, 36–7
smartphones 66
smartwatches 68, 70–1, 81
social impact 8
solar energy 44, 144–5
solutions 136–8, 150
space pens 17
speech recognition 182
spelling 179
status quo 16, 135–6, 150
stealth mode 24
structured presentations 141–51
subheadings 167–8
success criteria 46
superconductors 100
supply chain 8
systematization 3

T
tech bloggers 24, 32–3
technical argument 132–43, 146, 148, 149
technical conferences 23, 25, 35–6, 83
technical detail 176–7
technical explanation 119–23
technical literature 22–3, 27–8, 31–2, 90, 97
technical problems 136–7, 150
technical report structure 141–51, 180–7
technical requirements 5–6
Technology Canvas 59, 60, 92, 93, 103, 104
technology-centered approach 53–5, 58–68
text design 174–5
text size 173
timing 12–16, 56–7, 74–80
Timing Canvas 74, 108–9
titles 143–5
top-down models 45
topic sentences 161–3
toy manufacturers 10–11
TPU (tensor processing unit) 96–7, 98, 100, 182–4, 186
trademarks 24, 39
trade organizations 31
trade press 41–2
transition words 164
TrueNorth 186
trust 31, 38, 80, 83, 98, 101, 122, 137, 154, 157, 165, 172

U
UC Berkeley 103
uncertainty 76–7
university libraries 90
usefulness 1
users 10

V
Vancouver referencing style 177
vaporware 36
VCSEL technology 120–1
vested interests 82
videos 127–8
vision 134–5, 142–3, 146, 150
visualization 125–31
voice 168–70
vowel recognition 184

W
water filtration 51–2
webcams 2, 5
websites 31–2, 39–40
white papers 39–40
wind energy 43–4
work flow 21–6
working conditions 8
working lifetime 6
writing style and structure 160–77

Z
zero-change photonics 98, 100, 102, 103, 110, 111, 187