Universal Principles of Design: 200 Ways to Increase Appeal, Enhance Usability, Influence Perception, and Make Better Design Decisions (Third Edition)
William Lidwell, Kritina Holden, Jill Butler
ISBN 076037516X, 9780760375167, 9780760375174

Universal Principles of Design, Updated and Expanded Third Edition is a comprehensive, cross-disciplinary encyclopedia of design principles: laws, guidelines, human biases, and general design considerations drawn from a wide range of design disciplines.

Table of Contents
Cover
Contents, Alphabetical
Introduction
001 Abbe Principle
002 Accessibility
003 Ackoff’s Law
004 Aesthetic-Usability Effect
005 Affordance
006 Alignment
007 Anchoring
008 Anthropomorphism
009 Aposematism
010 Apparent Motion
011 Appeal to Nature
012 Archetypes, Psychological
013 Archetypes, System
014 Attractiveness Bias
015 Baby-Face Bias
016 Back of the Dresser
017 Biophilia Effect
018 Box’s Law
019 Brooks’ Law
020 Brown M&M’s
021 Bus Factor
022 Cathedral Effect
023 Causal Reductionism
024 Chesterton’s Fence
025 Clarke’s Laws
026 Classical Conditioning
027 Closure
028 Cognitive Dissonance
029 Color Effects
030 Color Theory
031 Common Fate
032 Comparison
033 Confirmation
034 Confirmation Bias
035 Consistency
036 Constraint
037 Contour Bias
038 Control
039 Convergence
040 Conway’s Law
041 Cost-Benefit
042 Creator Blindness
043 Crowd Intelligence
044 Death Spiral
045 Defensible Space
046 Depth of Processing
047 Design by Committee
048 Desire Line
049 Development Cycle
050 Diffusion of Innovations
051 Don’t Eat the Daisies
052 Dunbar’s Number
053 Dunning-Kruger Effect
054 Entry Point
055 Error, Design
056 Error, Human
057 Expectation Effects
058 Exposure Effect
059 Face Detection
060 Face-ism Ratio
061 Factor of Safety
062 Faith Follows Function
063 Feature Creep
064 Feedback
065 Feedback Loop
066 Fibonacci Sequence
067 Figure-Ground
068 First Principles
069 Fitts’ Law
070 Five Hat Racks
071 Five Tenets of Queuing
072 Flexibility Tradeoffs
073 Flow
074 Forgiveness
075 Form Follows Function
076 Framing
077 Freeze-Flight-Fight-Forfeit
078 Gall’s Law
079 Gamification
080 Garbage In – Garbage Out
081 Gates’ Rule of Automation
082 Gloss Bias
083 Golden Ratio
084 Good Continuation
085 Groupthink
086 Gutenberg Diagram
087 Habituation
088 Hanlon’s Razor
089 Hick’s Law
090 Hierarchy of Needs
091 Highlighting
092 Horror Vacui
093 Icarus Matrix
094 Iconic Representation
095 Identifiable Victim Effect
096 IKEA Effect
097 Inattentional Blindness
098 Interference Effects
099 Inverted Pyramid
100 Iron Triangle
101 Iteration
102 Kano Model
103 KISS
104 Knowing-Doing Gap
105 Learnability
106 Left-Digit Effect
107 Legibility
108 Levels of Invention
109 Leverage Point
110 MAFA Effect
111 Magic Triangle
112 Maintainability
113 Mapping
114 Maslow’s Hammer
115 MAYA
116 Mental Model
117 Miller’s Law
118 Mimicry
119 Minimum-Viable Product
120 Mnemonic Device
121 Modularity
122 Nirvana Fallacy
123 No Single Point of Failure
124 Normal Distribution
125 Not Invented Here
126 Nudge
127 Number-Space Associations
128 Ockham’s Razor
129 Operant Conditioning
130 Orientation Sensitivity
131 Paradox of Great Ideas
132 Paradox of Unanimity
133 Pareto Principle
134 Peak-End Rule
135 Performance Load
136 Performance vs. Preference
137 Perspective Cues
138 Perverse Incentive
139 Phonetic Symbolism
140 Picture Superiority Effect
141 Play Preferences
142 Poka-Yoke
143 Premature Optimization
144 Priming
145 Process Eats Goal
146 Product Life Cycle
147 Progressive Disclosure
148 Progressive Subtraction
149 Propositional Density
150 Prospect-Refuge
151 Prototyping
152 Proximity
153 Readability
154 Reciprocity
155 Recognition over Recall
156 Redundancy
157 Reverse Salient
158 Root Cause
159 Rosetta Stone
160 Rule of Thirds
161 Saint-Venant’s Principle
162 Satisficing
163 Savanna Preference
164 Scaling Fallacy
165 Scarcity
166 Selection Bias
167 Self-Similarity
168 Serial Position Effects
169 Shaping
170 Signal-to-Noise Ratio
171 Similarity
172 Social Proof
173 Social Trap
174 Status Quo Bias
175 Stickiness
176 Storytelling
177 Streetlight Effect
178 Structural Forms
179 Sunk Cost Effect
180 Supernormal Stimulus
181 Survivorship Bias
182 Swiss Cheese Model
183 Symmetry
184 Testing Pyramid
185 Threat Detection
186 Top-Down Lighting Bias
187 Uncanny Valley
188 Uncertainty Principle
189 Uniform Connectedness
190 User-Centered vs. User-Driven Design
191 Veblen Effect
192 Visibility
193 Visuospatial Resonance
194 von Restorff Effect
195 Wabi-Sabi
196 Waist-to-Hip Ratio
197 Wayfinding
198 Weakest Link
199 WYSIWYG
200 Zeigarnik Effect
Credits
Acknowledgments
About the Authors
Index
Contents, Categorical: Top 10 Most Useful Principles by Profession
ARCHITECTS
017 Biophilia Effect
022 Cathedral Effect
048 Desire Line
062 Faith Follows Function
150 Prospect-Refuge
163 Savanna Preference
167 Self-Similarity
183 Symmetry
195 Wabi-Sabi
197 Wayfinding
ENGINEERS (ATOMS)
001 Abbe Principle
061 Factor of Safety
123 No Single Point of Failure
142 Poka-Yoke
156 Redundancy
157 Reverse Salient
161 Saint-Venant’s Principle
162 Satisficing
178 Structural Forms
198 Weakest Link
ENGINEERS (BITS)
019 Brooks’ Law
024 Chesterton’s Fence
078 Gall’s Law
080 Garbage In – Garbage Out
081 Gates’ Rule of Automation
101 Iteration
112 Maintainability
121 Modularity
143 Premature Optimization
156 Redundancy
ENTREPRENEURS
003 Ackoff’s Law
068 First Principles
093 Icarus Matrix
108 Levels of Invention
115 MAYA
119 Minimum-Viable Product
131 Paradox of Great Ideas
133 Pareto Principle
151 Prototyping
162 Satisficing
GAME DESIGNERS
015 Baby-Face Bias
043 Crowd Intelligence
079 Gamification
089 Hick’s Law
111 Magic Triangle
129 Operant Conditioning
141 Play Preferences
162 Satisficing
176 Storytelling
200 Zeigarnik Effect
GRAPHIC DESIGNERS
006 Alignment
029 Color Effects
030 Color Theory
067 Figure-Ground
083 Golden Ratio
092 Horror Vacui
107 Legibility
130 Orientation Sensitivity
148 Progressive Subtraction
160 Rule of Thirds
INSTRUCTIONAL DESIGNERS
046 Depth of Processing
070 Five Hat Racks
073 Flow
091 Highlighting
099 Inverted Pyramid
105 Learnability
117 Miller’s Law
120 Mnemonic Device
140 Picture Superiority Effect
153 Readability
LEADERS
040 Conway’s Law
041 Cost-Benefit
044 Death Spiral
052 Dunbar’s Number
085 Groupthink
104 Knowing-Doing Gap
133 Pareto Principle
145 Process Eats Goal
174 Status Quo Bias
179 Sunk Cost Effect
MARKETERS
007 Anchoring
026 Classical Conditioning
050 Diffusion of Innovations
058 Exposure Effect
076 Framing
106 Left-Digit Effect
154 Reciprocity
165 Scarcity
172 Social Proof
191 Veblen Effect
PRODUCT DESIGNERS
005 Affordance
016 Back of the Dresser
037 Contour Bias
075 Form Follows Function
082 Gloss Bias
096 IKEA Effect
112 Maintainability
113 Mapping
115 MAYA
122 Nirvana Fallacy
PRODUCT MANAGERS
019 Brooks’ Law
021 Bus Factor
047 Design by Committee
049 Development Cycle
051 Don’t Eat the Daisies
063 Feature Creep
100 Iron Triangle
102 Kano Model
119 Minimum-Viable Product
146 Product Life Cycle
QUALITY ASSURANCE ENGINEERS
016 Back of the Dresser
020 Brown M&M’s
055 Error, Design
056 Error, Human
080 Garbage In – Garbage Out
124 Normal Distribution
132 Paradox of Unanimity
177 Streetlight Effect
182 Swiss Cheese Model
184 Testing Pyramid
RESEARCHERS
023 Causal Reductionism
056 Error, Human
057 Expectation Effects
124 Normal Distribution
128 Ockham’s Razor
158 Root Cause
166 Selection Bias
177 Streetlight Effect
181 Survivorship Bias
188 Uncertainty Principle
SERVICE DESIGNERS
002 Accessibility
048 Desire Line
054 Entry Point
071 Five Tenets of Queuing
126 Nudge
134 Peak-End Rule
135 Performance Load
147 Progressive Disclosure
172 Social Proof
197 Wayfinding
STRATEGIC MANAGERS
003 Ackoff’s Law
018 Box’s Law
019 Brooks’ Law
025 Clarke’s Laws
050 Diffusion of Innovations
072 Flexibility Tradeoffs
081 Gates’ Rule of Automation
133 Pareto Principle
145 Process Eats Goal
158 Root Cause
SYSTEMS DESIGNERS
013 Archetypes, System
039 Convergence
040 Conway’s Law
044 Death Spiral
065 Feedback Loop
078 Gall’s Law
109 Leverage Point
138 Perverse Incentive
158 Root Cause
173 Social Trap
UI DESIGNERS
033 Confirmation
035 Consistency
069 Fitts’ Law
074 Forgiveness
094 Iconic Representation
113 Mapping
127 Number-Space Associations
155 Recognition over Recall
186 Top-Down Lighting Bias
199 WYSIWYG
UX DESIGNERS
004 Aesthetic-Usability Effect
048 Desire Line
087 Habituation
090 Hierarchy of Needs
116 Mental Model
134 Peak-End Rule
135 Performance Load
176 Storytelling
190 User-Centered vs. User-Driven Design
194 von Restorff Effect

Universal Principles of Design
Updated and Expanded Third Edition

200 Ways to Increase Appeal, Enhance Usability, Influence Perception, and Make Better Design Decisions

William Lidwell
Kritina Holden
Jill Butler

© 2023 Quarto Publishing Group USA Inc.
Text © 2003, 2010, 2023 Quarto Publishing Group USA Inc.

Second edition published in 2010. First published in 2003 by Rockport Publishers, an imprint of The Quarto Group, 100 Cummings Center, Suite 265-D, Beverly, MA 01915, USA. T (978) 282-9590 F (978) 283-2742 Quarto.com

All rights reserved. No part of this book may be reproduced in any form without written permission of the copyright owners. All images in this book have been reproduced with the knowledge and prior consent of the artists concerned, and no responsibility is accepted by producer, publisher, or printer for any infringement of copyright or otherwise, arising from the contents of this publication. Every effort has been made to ensure that credits accurately comply with information supplied. We apologize for any inaccuracies that may have occurred and will resolve inaccurate or missing information in a subsequent reprinting of the book.

Rockport Publishers titles are also available at discount for retail, wholesale, promotional, and bulk purchase. For details, contact the Special Sales Manager by email at specialsales@quarto.com or by mail at The Quarto Group, Attn: Special Sales Manager, 100 Cummings Center, Suite 265-D, Beverly, MA 01915, USA.

10 9 8 7 6 5 4 3 2 1

ISBN: 978-0-7603-7516-7
Digital edition published in 2023
eISBN: 978-0-7603-7517-4

Originally found under the following Library of Congress Cataloging-in-Publication Data:
Lidwell, William.
Universal principles of design : a cross-disciplinary reference / William Lidwell, Kritina Holden, and Jill Butler.
p. cm.
ISBN 1-59253-007-9 (paper over board)
1. Design-Dictionaries. I. Holden, Kritina. II. Butler, Jill. III. Title.
NK1165.L53 2003
745.4’03—dc21 2003009384 CIP

Design: Stuff Creators Design Studio
Printed in China

To the music makers:
Dreamers of dreams
And designers of things

Introduction

Not too long ago, designers were eclectic generalists. They studied art, science, and religion in order to understand the basic workings of nature and then applied what they learned to solve the problems of the day. Over time, the quantity and complexity of accumulated knowledge led to increased specialization among designers, and breadth of knowledge was increasingly traded for depth of knowledge. This trend continues today. As designers become more specialized, awareness of advances and discoveries in other areas of specialization diminishes. This is inevitable and unfortunate, since much can be learned from progress in other design disciplines.

Convenient access to cross-disciplinary design knowledge has not previously been available. A designer interested in learning about other areas of specialization would have to study texts from many different design disciplines. Determining which texts in each discipline are worthy of study would be the first challenge, deciphering the specialized terminology of the texts the second, and enduring the depth of detail the third. The effort is significant and rarely expended beyond brief excursions into unfamiliar areas to research specific problems. The goal of this book is to assist designers with these challenges and reduce the effort required to learn about principles of design across disciplines.

The principles in this book consist of laws, guidelines, human biases, and general design considerations. The principles were selected from a variety of design disciplines based on several factors, including utility, degree of misuse or misunderstanding, strength of supporting evidence, and universality. The selection of 200 concepts should not be interpreted to mean that there are only 200 relevant principles of design — there are obviously many more.

The book is organized alphabetically so that principles can be easily and quickly referenced by name. For those interested in exploring principles by area of design specialization, a categorical table of contents has been provided with the top 10 principles most relevant to a range of design disciplines. Each principle is presented in a two-page format. The left-hand page contains a succinct definition, a full description of the principle, examples of its use, guidelines for use, and a list pointing to related principles. Side notes appear to the right of the text and provide elaborations and references. The right-hand page contains visual examples and related graphics to support a deeper understanding of the principle.

The goal is not to relay everything there is to know about each principle. Entire books have been written about some of them. The goal is to give designers the essential 20% they need to know about each principle to realize 80% of its value, and to do so in as efficient and memorable a way as possible.

Sound design is not the exclusive purview of a talented few. It can be achieved by virtually all designers. The use of well-established design principles increases the probability that a design will be successful. Use Universal Principles of Design as a resource to increase your cross-disciplinary knowledge and understanding of design, promote brainstorming and idea generation for design problems, and refresh your memory of design principles that are infrequently applied. Finally, use it as a means of checking the quality of your design process and product.

A paraphrase of William Strunk’s famous admonition makes the point nicely: The best designers sometimes disregard the principles of design. When they do so, however, there is usually some compensating merit attained at the cost of the violation. Unless you are certain of doing as well, it is best to abide by the principles.

William Lidwell
Kritina Holden
Jill Butler

001 Abbe Principle

Measure things so that the measuring scale is aligned with the distance being measured.

The Abbe principle, proposed by the physicist Ernst Abbe, states that measurements should be taken so that the measuring scale is in line with the distance to be measured. This principle of measurement is meant to address angular errors in design. In cases where applying the Abbe principle is not possible, measurements should be taken in a way that minimizes the angular motion or permits the offset to be calculated and corrected. For example, calipers take measurements with jaws that are to the side of the measurement scale. Therefore, any angular error in the motion of the moving jaw amplifies the error of the scale, which results in an amplified error in the measurement. By comparison, micrometers take measurements in line with the measurement scale, which eliminates angular error. The principle is commonly applied in optics and the design of precision mechanisms.1

A key aspect of understanding the Abbe principle is understanding the problem it solves: angular error, sometimes referred to as Abbe error. Abbe error is a measurement error that is amplified by distance: A small error at or near its source grows larger with time and distance. For example, the wobble from a bent axle increases with the length of the axle. The point is that a small error is amplified the farther it is from its source, so measurements and interventions in line with and close to the source are superior to those that are distant and spatially removed.

The Abbe principle applies narrowly to taking physical measurements in ways that minimize angular error, but a broader interpretation of the principle can also be useful. For example, by analogy, measuring how people behave directly in real-world contexts is preferable to measuring how they behave in laboratory contexts. Research conducted in artificial contexts is akin to measuring a physical thing to the side of its scale rather than in line with it, introducing measurement error that can compound and lead subsequent design and research astray.

Consider the Abbe principle in physical measurement and in the design of precision elements. Minimize angular errors by focusing measurements and interventions as close to the source of action as possible. Explore broader implications of the principle in analogous contexts, such as the measurement of behavior and user preferences.

See also Alignment; Error, Design; Root Cause; Saint-Venant’s Principle; Scaling Fallacy

1. The seminal work is “Messapparate für Physiker” [Measuring instruments for physicists] by Ernst Abbe, 1890, Zeitschrift für Instrumentenkunde [J Sci Instrum], 10, 446–448. See also “The Abbe Principle Revisited: An Updated Interpretation” by J.B. Bryan, 1979, Precision Engineering, 129–132; and “A Study on the Abbe Principle and Abbe Error” by G.X. Zhang, 1989, CIRP Annals, 38(1), 525–528.

A micrometer (top) measures in its line of action and eliminates angular error. It is a pure application of the Abbe principle. The caliper (bottom) measures away from its line of action, resulting in angular error.
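The size of an Abbe error follows from simple trigonometry: an angular error θ acting over an Abbe offset L (the distance between the measurement axis and the scale axis) produces an error of roughly L·tan(θ). A minimal sketch in Python, with an illustrative function name and example values that are not from the text:

import math

def abbe_error(offset_mm, angular_error_deg):
    # Approximate measurement error produced by an angular error
    # acting over an Abbe offset (the distance between the
    # measurement axis and the scale axis).
    return offset_mm * math.tan(math.radians(angular_error_deg))

# Caliper-like instrument: jaws offset 30 mm from the scale.
print(abbe_error(30.0, 0.1))  # ~0.052 mm of error

# Micrometer-like instrument: measurement in line with the scale.
print(abbe_error(0.0, 0.1))   # 0.0 mm; the angular error has no arm to act over

Because the error grows linearly with the offset, halving the offset halves the error, and an offset of zero, the micrometer case, eliminates it entirely.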

002 Accessibility

Things should be designed to be usable, without modification, by as many people as possible.

Historically, accessibility in design meant providing accommodations for people with disabilities; however, it has become clear that accessible design benefits everyone. The principle of accessibility asserts that designs should be usable by people of diverse abilities, without modification.1 There are four characteristics of accessible designs:

1. Perceptibility — Achieved when everyone can perceive the design, regardless of sensory abilities. Guidelines for improving perceptibility are: select typeface size and color so that text is legible to all readers, including older and color-blind readers; position controls and information so that seated and standing users can perceive them; present information using redundant coding methods (e.g., textual, iconic, and tactile); and provide compatibility with assistive technologies.

2. Operability — Achieved when everyone can use the design, regardless of physical abilities. Guidelines for improving operability are: minimize repetitive actions and the need for sustained physical effort; position controls so that seated and standing users can access and operate them; facilitate use of controls through good affordances and constraints; and provide compatibility with assistive technologies.

3. Simplicity — Achieved when everyone can easily learn and understand the design, regardless of experience, literacy, or concentration level. Guidelines for improving simplicity are: use familiar terms, symbols, and icons; remove unnecessary detail and complexity; ensure that reading levels accommodate a wide range of literacy; clearly and consistently code and label controls and modes of operation; and provide clear prompting and feedback for all actions.

4. Forgiveness — Achieved when designs minimize the occurrence and consequences of errors. Guidelines for improving forgiveness are: use good affordances and constraints to prevent errors from occurring; use confirmations and warnings to reduce the occurrence of errors; and include reversible actions and safety nets to minimize the consequences of errors (e.g., the ability to undo an action).

Consider accessibility in design from the outset, not as an afterthought. Make sensory elements widely perceptible, mechanical elements operable with little effort, interaction elements easy to use, and the design as a whole forgiving of errors. Be skeptical of claims that accessible design is inherently too expensive or impractical: Given sufficient imagination and motivation, accessible design can typically be achieved and be cost-competitive.

See also Affordance; Forgiveness; Legibility; Normal Distribution; Readability

1. Derived from W3C Web Content Accessibility Guidelines 1.0, 1999; ADA Accessibility Guidelines for Buildings and Facilities, 1998; and Accessible Environments: Toward Universal Design by Ronald L. Mace et al., 1996, The Center for Universal Design, North Carolina State University. This principle is also known as barrier-free design and is related to universal design and inclusive design.

How can it be…we can put people on the moon, we can fly across the continent, and yet a disabled person is given a wheelchair — a pathetic, inadequate substitute for what you and I take for granted? And so I said, “I’ve got to restore not just mobility. I’ve got to restore…independence, dignity, access”.
— Dean Kamen, inventor of the iBot, news.yahoo.com
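One way to apply the perceptibility guideline about legible text quantitatively is the contrast ratio defined in the W3C Web Content Accessibility Guidelines cited in the note above. A minimal sketch in Python; the function names are illustrative, and the 4.5:1 threshold for normal body text comes from the later WCAG 2.x revisions rather than the 1.0 guidelines:

def _linearize(channel_8bit):
    # Convert an 8-bit sRGB channel to linear light (WCAG formula).
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    # Ratio of lighter to darker luminance, offset to avoid division by zero.
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, the maximum
print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.48, just under 4.5:1

Automated checks like this complement, but do not replace, evaluation with users of diverse abilities.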

003 Ackoff’s Law

It is better to do the right things wrong than the wrong things right.

Ackoff’s law, proposed by the management theorist Russell Ackoff, states that although one can be successful doing the right things, even if poorly executed, one can never be successful doing the wrong things, even if perfectly executed. Therefore, getting strategy right — i.e., doing the right things — should always precede a focus on execution — i.e., doing things right.1

According to Ackoff, doing the right things is “wisdom”, and doing things right is “efficiency”. If an organization is doing the right things, improving efficiency will make it righter over time. If an organization is doing the wrong things, improving efficiency will make it wronger over time. Organizations often have difficulty prioritizing strategy over execution because the latter is easier to measure. But in the words of the sociologist William Bruce Cameron, “Not everything that counts can be counted, and not everything that can be counted counts”.2

A common refrain is, “Why can’t we simply do the right things right?” This is, of course, always the goal; and it can be achieved when a strategy and operational practices have time to mature and align. But often a market landscape or product category evolves faster than any organization can scale its operational practices. In such cases, the strategy should favor strategic agility over operational excellence — i.e., being perpetually changing and perpetually inefficient.

Another barrier to doing the right things right is that it often requires fundamental change, whereas doing the wrong things is more often a continuation of the status quo. For example, introducing an innovative new product may be the right thing to ensure a firm’s long-term viability; but it will generally be easier — i.e., perceived by many stakeholders as simpler, less expensive, and safer — to refine legacy products with existing customers.

Consider Ackoff’s law when setting goals and devising plans to achieve them. Prioritize strategy first and execution second. A good strategy provides a bridge to continuously improving efficiency. A bad strategy provides a bridge to nowhere. A good strategy poorly executed will beat a bad strategy perfectly executed every time.

See also Gates’ Rule of Automation; Knowing-Doing Gap; Pareto Principle; Process Eats Goal; Satisficing

1. See Redesigning Society by Russell Ackoff and Sheldon Rovin, 2003, Stanford Business Books. Ackoff’s law builds on a distinction proposed by Peter Drucker between doing things right (efficiency) and doing the right thing (effectiveness). See Management: Tasks, Responsibilities, Practices by Peter Drucker, 1974, Harper Business.

2. Informal Sociology: A Casual Introduction to Sociological Thinking by William Bruce Cameron, 1963, Random House.

3M has long been revered as one of the world’s most innovative companies. In 2000, a new CEO named James McNerney introduced Six Sigma, a popular management practice that prioritizes product quality and cost reduction. Processes were streamlined, the workforce was reduced, and operating margins grew. But when Six Sigma was applied to research and development (R&D), the number of innovative products plummeted. In 2006, Fortune magazine reported that 91% of the large enterprises like 3M that had implemented Six Sigma had fallen behind the growth rate of the S&P 500, blaming this poor performance on a significant decline in innovation at these firms. In July 2005, George Buckley succeeded McNerney as CEO and re-stimulated innovation by exempting many of the R&D processes from Six Sigma practices.

If an organization’s strategy hinges on execution, rigid quality systems like ISO 9000, Six Sigma, and Lean can be helpful. But if an organization’s strategy hinges on innovation, such systems are examples of doing the wrong things right.

We all came to the conclusion that there was no way in the world that anything like a Post-it note would ever emerge from this new system.
— 3M R&D Team member after a briefing on Six Sigma

004 Aesthetic-Usability Effect

Aesthetic things are perceived to be easier to use than ugly things.

Aesthetic things are often subjectively rated as easier to use, even when no usability advantage can be objectively measured.1 The aesthetic-usability effect is consistent with research on attractiveness and positive first impressions. The effect is also consistent with the anecdotal evidence that aesthetic things are more effective at fostering positive attitudes than ugly things, making people more forgiving when minor performance or usability problems are encountered.2

Research exploring the boundaries of the aesthetic-usability effect has yielded mixed results, with some studies finding no evidence of a relationship. Possible explanations for the mixed results range from there being no effect (i.e., aesthetics do not influence perceptions of usability) to there being a weak effect (e.g., aesthetics do influence perceptions of usability, but it is a small effect and hard to detect) to inadequate experimental design (e.g., the differences between aesthetic and ugly conditions in the experiments are too subtle to generate an effect).3

Independent of whether aesthetics influence perceived usability, there is good reason to believe that aesthetics play a role in the creation of emotional bonds, which make people more likely to accept, care for, keep, display, and repeatedly use aesthetic things. For example, it is common for people to name and develop feelings toward things that have fostered positive attitudes (e.g., naming a car) and rare for people to do the same with things that have fostered negative attitudes.4 Aesthetic things support the development of self-esteem and status — i.e., owning and conspicuously displaying aesthetic things enhances a personal sense of belonging or status. Additionally, aesthetic things are perceived to be of greater value and support higher price points than uglier but functionally comparable things.5

Consider the aesthetic-usability effect in design, ensuring that aesthetic aspects are weighted appropriately. Share the principle to help others understand that aesthetics is more than ornamentation: It is an investment in user acceptance, forgiveness, perceived value, customer satisfaction, and brand loyalty.

See also Attractiveness Bias; Contour Bias; Form Follows Function; Golden Ratio; Ockham’s Razor; Rule of Thirds

1. The seminal work on the aesthetic-usability effect is “Apparent Usability vs. Inherent Usability: Experimental Analysis on the Determinants of the Apparent Usability” by Masaaki Kurosu and Kaori Kashimura, CHI ’95 Conference Companion, 1995, 292–293.

2. See, for example, “Emotion & Design: Attractive Things Work Better” by Donald Norman, 2002, www.jnd.org.

3. See, for example, “Exploring the Boundary Conditions of the Effect of Aesthetics on Perceived Usability” by John Grishin and Douglas Gillan, Feb 2019, Journal of Usability Studies, 14(2), 76–104; and “Is Beautiful Really Usable? Toward Understanding the Relation Between Usability, Aesthetics, and Affect in HCI” by Alexandre Tuch et al., 2012, Computers in Human Behavior, 28(5), 1596–1607.

4. Emotional Design: Why We Love (or Hate) Everyday Things by Donald Norman, 2005, Basic Books; and “The Aesthetic Fidelity Effect” by Annika Wiecek et al., 2019, International Journal of Research in Marketing, 36(4), 542–557.

5. “Self-Affirmation Through the Choice of Highly Aesthetic Products” by Claudia Townsend and Sanjay Sood, Aug 2012, Journal of Consumer Research, 39(2), 415–428; and “The Effect of Visual Product Aesthetics on Consumers’ Price Sensitivity” by Yigit Mumcua and Halil Semih Kimzan, 2015, Procedia Economics and Finance, 26, 528–534.

The leggy and alienesque Juicy Salif by Philippe Starck is more sculpture than juicer — a classic case of function following form — but this does not diminish its success as a product. Why? People don’t buy the Juicy Salif for functional reasons but, rather, for emotional reasons: intrigue, provocation, status, and story. These qualities serve emotional needs that incline people to forgive the juicer for its functional and usability foibles.

My juicer is not meant to squeeze lemons; it is meant to start conversations.
— Philippe Starck

005 Affordance

The physical characteristics of a thing that influence its function and use.

The form and features of a thing make it well suited for certain functions and uses and poorly suited for others. For example, wheels afford rolling, stairs afford climbing, and levers afford pulling; whereas cinder blocks negatively afford rolling, featureless walls negatively afford climbing, and buttons negatively afford pulling.1

When affordances are good, things perform well and are intuitive to use. When affordances are bad, things perform poorly and are hard to use. For example, a door with a handle affords pulling. Sometimes, doors with handles are designed to open only by pushing. The affordance of the handle conflicts with the door’s function. Replace the handle with a flat plate, and it now affords pushing. The affordance of the flat plate corresponds to the way the door functions. The design is improved.

A thing must be perceptible for its affordance to benefit usability. A light switch affords flipping only when it can be seen or felt. Affordances impact usability at a noncognitive, reflexive level. This means that instructions or signs intended to correct bad affordances are not effective at preventing errors. This also means that people generally blame themselves — rather than bad design — when such errors occur.

Images of familiar physical objects can have perceived affordances. An image of a button affords pressing. An image of a slider in a track affords swiping. Perceived affordances can make things easier to use but should be applied cautiously, as interpretations of images rely on socially constructed meanings versus physical properties that shape use. For example, using a floppy disk icon to indicate a save file operation will not make sense to most people born after the 1990s.

Consider affordances in design. Things should afford proper use and negatively afford improper use. When affordances are correctly applied, it will seem inconceivable that a thing can function or be used otherwise.

See also Constraint; Desire Line; Error, Design; Error, Human; Mapping; Nudge

1. “The Theory of Affordances” by James Gibson, in Perceiving, Acting, and Knowing by Robert Shaw and John Bransford (Eds.), 1977, Routledge; and The Ecological Approach to Visual Perception by James Gibson, 1979, Psychology Press. A popular treatment of affordances can be found in The Design of Everyday Things by Donald Norman, 2013, Basic Books.

[Figure: doors bearing signs such as “DO NOT TURN TO OPEN”, “PUSH”, and “PULL”, added to compensate for handles whose affordances suggest the wrong action.]

…when a device as simple as a door has to come with an instruction manual — even a one-word manual — then it is a failure, poorly designed.
— Donald Norman, The Design of Everyday Things

006 Alignment

The arrangement of elements along a common axis based on their edges, centers, or areas.

Every element in a design should be aligned with one or more other elements. This creates compositional unity and cohesion, which contributes to the design’s overall aesthetic and perceived stability. Alignment also improves the efficiency with which information is processed and can be a powerful means of leading a person through a design. For example, the rows and columns of a grid or table make explicit the relatedness of elements sharing those rows and columns, leading the eyes along horizontal and vertical axes to reduce error.1

When elements are roughly symmetrical, they should be aligned by positioning their edges or centers along a common axis. When elements are asymmetrical or oddly shaped, they should be aligned by positioning their bodies along a common axis such that an equal amount of area or visual weight hangs on either side.

Design and engineering software can align elements with great precision. However, the alignment supported by software is currently based on the edges of elements or on their centers, which are calculated from a rectangular bounding box. This method will not visually align asymmetrical or oddly shaped elements. In such cases, alignment should be manually adjusted based on area or visual weight.

The shape of the medium (e.g., page or screen) and the natural positions on the medium (e.g., centerlines) should be treated as alignment cues. With regard to text, the hard edges of left- and right-aligned text blocks provide better alignment cues than center-aligned text blocks. Justified text provides more alignment cues than unjustified text and therefore should be considered in complex compositions with many elements.

Although alignment is generally defined in terms of vertical or horizontal axes, more complex forms of alignment exist. In aligning elements along diagonals, for example, the relative angles between the invisible alignment paths should be 30 degrees or greater; separation of less than 30 degrees is too subtle and difficult to detect. In spiral or circular alignments, it may be necessary to augment or highlight the alignment paths so that the alignment is perceptible; otherwise, the elements can appear disparate and the design disordered.2

Align every element in a design to one or more other elements. Treat the centerlines of a medium as inherent alignment cues. Favor simple, linear alignments over complex, curvilinear alignments when efficiency of processing is key. Do not rely on software to align irregular elements: If things do not look aligned, they are not aligned, no matter what the software says.

See also Legibility; Mapping; Orientation Sensitivity; Interference Effects; Proximity; Signal-to-Noise Ratio; Similarity

1. See, for example, “Spatial Alignment Facilitates Visual Comparison” by Bryan Matlen et al., May 2020, Journal of Experimental Psychology: Human Perception and Performance, 46(5), 443–457.

2. See, for example, Elements of Graph Design by Stephen Kosslyn, 1994, W.H. Freeman & Company.

Although there are a number of problems with the design of the butterfly ballot (top), most of the confusion resulted from the misalignment of the rows and punch-hole lines. This conclusion is supported by the improbable number of votes for Pat Buchanan in Palm Beach County and the number of double votes that occurred for candidates adjacent on the ballot. A simple adjustment to the ballot design (bottom) would have dramatically reduced the error rate.
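The limitation described above, software aligning by bounding-box centers rather than visual weight, can be made concrete. For a flat shape of uniform density, the centroid (its area-weighted center, computed here with the standard shoelace formula) is a reasonable first approximation of visual weight, and for asymmetrical shapes it can sit far from the bounding-box center. A minimal Python sketch; the shape and function names are illustrative:

def bbox_center(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

def centroid(points):
    # Area-weighted centroid of a simple polygon (shoelace formula).
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

# A right triangle: most of its area sits toward one corner.
tri = [(0, 0), (3, 0), (0, 3)]
print(bbox_center(tri))  # (1.5, 1.5)
print(centroid(tri))     # (1.0, 1.0); the visual center sits lower and to the left

Aligning the triangle by its centroid rather than its bounding box shifts it toward the side that carries more area, the manual adjustment the text recommends for irregular elements.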

007 Anchoring

The subconscious influence of reference points on decision-making and judgment.

The anchoring effect occurs when a stimulus becomes a baseline or reference point, influencing how related stimuli are perceived and acted upon. For example, when multiple prices are presented together, the high price sets an anchor — i.e., becomes the top-end reference point — and makes the lower prices seem less expensive. The specificity of the stimulus influences the scale of the effect. Given an anchor of $20, people tend to think in large increments ($19, $21, etc.), but given an anchor of $19.75, people tend to think in smaller increments ($19.50, $19.95). The effect is strongest when anchors are related and relevant to one another. If people perceive anchors to be uninformative or randomly derived, they have little to no effect.1

While most of the research on anchoring regards numbers, the effect applies to any stimulus perceptible by the senses. For example, in a task that involved matching the feel of sandpaper of a certain grit, people presented with coarse-grit sandpapers first selected matches that were coarser than the original, and people presented with fine-grit sandpapers first selected matches that were finer than the original. The initial haptic experience became the anchor by which subsequent haptic experiences were evaluated.2

Prior knowledge may influence the effect. For example, in one experiment, exposure to the last two digits of a social security number influenced bidding behaviors in an auction — i.e., when the last two digits were low or high, the bids were correspondingly lower or higher. Social security numbers are irrelevant to the value of auction items, so there should not have been an effect. This is, in fact, what was found in attempts to replicate the experiment: no effect. Anchoring works best with people who are unfamiliar with the things in question, so it is possible that the original group was less familiar with the auction items. Research is ongoing.3

Consider anchoring in the presentation of options and experiences, especially in decision-making and judgment contexts. When presenting stimuli sequentially, the initial stimulus sets the anchor. When presenting stimuli together, the low and high stimuli set the low and high anchors. The effect is strongest when anchors are relevant and when people are unfamiliar with what the anchors are referencing.

See also Expectation Effects; Exposure Effect; Framing; Peak-End Rule; Priming

1. The seminal work is “Judgment Under Uncertainty: Heuristics and Biases” by A. Tversky and D. Kahneman, 1974, Science, 185, 1124–1131.

2. “Perceptual Anchoring and Adjustment” by Gaurav Jain et al., Oct 2021, Journal of Behavioral Decision Making, 34(4), 581–592.

3. “Coherent Arbitrariness: Stable Demand Curves Without Stable Preferences” by Dan Ariely et al., Feb 2003, The Quarterly Journal of Economics, 118(1), 73–106; and “On the Robustness of Anchoring Effects in WTP and WTA Experiments” by Drew Fudenberg et al., May 2012, American Economic Journal: Microeconomics, 4(2), 131–145.

[Figure: two point-of-sale donation screens, each reading “THANK YOU FOR YOUR PURCHASE / Would you like to donate and help pets in need?” with preset donation amounts and a “No Thanks” option.]

In point-of-sale systems, it has become common to conclude transactions with an option to make donations to charitable organizations. Considering anchoring in the choice architecture can increase donations. For example, the top configuration sets the high anchor at $4, whereas the bottom configuration sets the high anchor at $10. Because the $10 anchor makes the other amounts seem smaller by comparison, this design will receive more $1 and $2 donations. The effect can be enhanced further by highlighting the high anchor. Note that design tweaks along these lines need to be subtle. If the design becomes too complex or users perceive they are being manipulated, they will ignore the interaction altogether and donations will plummet.

008 Anthropomorphism

The attribution of humanlike characteristics to nonhuman things.

Humans are predisposed to perceive certain forms and patterns as humanlike — specifically, forms and patterns that resemble faces and body proportions. This tendency, when artfully applied to design, is an effective means of getting attention, establishing a positive affective tone for interactions, and forming a relationship based, in part, on emotional appeal. When inartfully applied, however, anthropomorphic forms can manifest as awkward or even ghoulish, attracting the wrong kind of attention and creating the wrong kind of emotional reactions.1

Anthropomorphism can be expressed in both visual and nonvisual forms. Social robots and personal robot assistants are examples of visual anthropomorphism. While the realism of their anthropomorphic forms varies widely, they all tend toward the abstract: They generally stand upright in a vertical orientation, have a head that can be clearly distinguished from a body, and have facial features and body motions that can signal different emotions. The anthropomorphic appeal of many of these robots is powerful, leveraging principles like the baby-face bias to make them endearing. Because these robots generally appear toylike rather than realistic, people have modest expectations with regard to their abilities and social intelligence.2

By contrast, voice-command appliances and AI-based virtual friends are examples of very realistic, nonvisual anthropomorphism. Their humanlike qualities lie not in their physical forms but in their communication. When their texting and speech quality is sufficiently realistic, it creates a suspension of disbelief that leads people to interact with them as fellow human beings, referring to them by given names, exchanging friendly banter, and even buying them digital gifts. It is anthropomorphic appeal at its most powerful. When the text and voice quality are not sufficiently realistic, however, the suspension of disbelief is violated and people can feel betrayed. When a thing appears very humanlike but falls short on simple tasks, it can be disappointing, jarring, and, in some cases, offensive. It is an uncanny valley of social interaction.3

Consider anthropomorphic forms to attract attention and establish emotional connections. Favor more abstract versus realistic anthropomorphic forms, as realistic depictions often decrease, not increase, aesthetic appeal. Use feminine body proportions to elicit associations of sexuality and vitality. Use round anthropomorphic forms to elicit babylike associations and more angular forms to elicit masculine, aggressive associations.

See also Archetypes, Psychological; Baby-Face Bias; Face Detection; Supernormal Stimulus; Uncanny Valley; Waist-to-Hip Ratio

1. See, for example, “From Seduction to Fulfillment: The Use of Anthropomorphic Form in Design” by Carl DiSalvo and Francine Gemperle, 2003, DPPI ’03; and “Is That Car Smiling at Me? Schema Congruity as a Basis for Evaluating Anthropomorphized Products” by Pankaj Aggarwal and Ann McGill, 2007, Journal of Consumer Research, 34(4), 468–479.

2. See, for example, “Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction” by Julia Fink, 2012, International Conference on Social Robotics, 199–208; and “Eliza in the Uncanny Valley: Anthropomorphizing Consumer Robots Increases Their Perceived Warmth but Decreases Liking” by Seo Young Kim et al., 2019, Marketing Letters, 30, 1–12.

3. See, for example, “Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions” by Cammy Crolic et al., 2022, Journal of Marketing, 86(1), 132–148.

The Adiri Natural Nurser baby bottle is designed to look and feel like a female breast, and not surprisingly, it elicits the positive associations people have with breastfeeding. The affective tone set by the bottle is one of naturalness and caring. What parent would choose a traditional, inorganic-looking bottle when such a supple, natural-looking substitute for the real thing was available?

The Method Dish Soap bottle, nicknamed the “dish butler”, brings an abstract anthropomorphic form to bear. The large bulbous head triggers baby-face bias cognitive wiring, reinforcing its aesthetic appeal as well as associations such as safety, honesty, and purity. Labeling is applied in what would be the chest region, with the round logo giving the appearance of a superhero costume.

The classic 1915 Coca-Cola “contour” bottle, often referred to as the “Mae West” bottle due to its distinctly feminine proportions, was a break with the straight and relatively featureless bottles of its day. The bottle also benefited from a number of anthropomorphic projections such as health, vitality, sexiness, and femininity — attributes that appealed to the predominantly female buyers.

009 Aposematism

The use of conspicuous markings to grab attention and signal danger.

Aposematism refers to evolved markings and displays that attract attention and signal danger to potential predators. Aposematic signals include bright color combinations, behavioral displays such as rattling or hissing, and patterning such as zigzags and stripes. For example, the brightly colored patterns found on poison dart frogs, wood tiger moths, and coral snakes all warn predators about their toxic, venomous, and chemical defenses. When predators attack these animals, they either die or learn to associate the negative after-experience with the prey’s markings and displays. As a result, predators learn to avoid these animals and, in extreme cases, evolve a hardwired aversion to them.1

Aposematic color combinations evolved to be seen and remembered, and as such, they are both attention-grabbing and memorable. To humans, aposematic animals and plants with bright, high-contrast color combinations appear both more dangerous and more beautiful than nonaposematic members of the same or similar species. Aposematic patterns that feature angular rather than round elements — such as triangles, diamonds, stripes, and zigzags — appear to be the most effective danger signals to humans.2

The most common aposematic colors in nature are red, orange, yellow, black, and white, typically presented in high-contrast combinations. The long-wavelength colors of red, orange, and yellow contrast more strongly against natural green backgrounds than green, blue, and violet, and remain conspicuous under a range of lighting and weather conditions. These color combinations are also effectively detected and interpreted by people with color vision deficiency (CVD, also known as color blindness) and are commonly used to attract attention and signal danger in safety signage.3

Consider aposematic colors and patterns to capture attention, increase interestingness and memorability, and indicate danger. In general, the brighter the colors and the starker the contrast, the stronger the effect. Reference aposematic colors for color-blind populations and to create beautiful color combinations that also signal formidability — for example, in sports uniforms and book covers.

See also Archetypes, Psychological; Color Effects; Color Theory; Contour Bias; Mimicry; Threat Detection

1. The seminal work on aposematism is The Colours of Animals by Edward Bagnall Poulton, 1890, D. Appleton and Company. This book was the first substantial work to connect Darwinian selection to animal coloration and introduced the term.

2. “Does Colour Matter? The Influence of Animal Warning Coloration on Human Emotions and Willingness to Protect Them” by Pavol Prokop and Jana Fancovicová, Aug 2013, Animal Conservation, 16(4), 1–9; and “Revisiting the Fear of Snakes in Children: The Role of Aposematic Signalling” by Jérémie Souchet and Fabien Aubret, 2016, Scientific Reports, 6(37619), 1–7.

3. “Color Contrast and Stability as Key Elements for Effective Warning Signals” by Lina María Arenas et al., Jun 2014, Frontiers in Ecology and Evolution, 2(25), 1–12.

010 Apparent Motion

The illusion of motion created when images are displayed in rapid succession.

Apparent motion is the appearance of real motion that occurs when similar still images are presented in rapid succession. The visual system fills in the gaps of time and space between the images to create the illusion of real motion. The effect can be created by changing the position or orientation of objects across images or by moving background elements around a fixed object. This is the basis for the illusion of motion in animations, flip-books, motion pictures, television programs, and zoetropes. When actors run across a movie screen, they are in apparent motion; when people in the theater walk in front of the movie screen, they are in real motion.1

Apparent motion occurs because the visual system resolves gaps and inconsistencies in the sequence of still images by applying information-processing shortcuts and rules about how the physical world operates. These visual processing heuristics likely evolved from the need to detect predators and prey in complex visual contexts, like intermittently sighting a predator moving through a jungle.2 Examples of these heuristics include:

• Focus on salient features
• Focus on holistic patterns over individual details
• Assumption that objects in motion continue along a straight path
• Assumption that objects that are similar are the same
• Assumption that objects will cover and uncover portions of the background along a path of motion

There is a minimum frame rate required to achieve apparent motion; below it, images appear disconnected and independently presented. The minimum rate of image presentation to achieve apparent motion is about 10 images or frames per second. The standard frame rate for television and movies is 24 to 30 frames per second. Frame rates of 60+ frames per second are indistinguishable from reality.3

Consider apparent motion when designing experiences involving the illusion of motion. Consider visual processing heuristics in the design, focusing viewer attention, avoiding unnecessary details, and recognizing that people will assume motion follows the laws observed in the real world. Design for a minimum of 10 frames per second to make images appear as one moving image. Design for 60+ frames per second to achieve the most natural motion possible.

See also Common Fate; Figure-Ground; Good Continuation; Inattentional Blindness; Perspective Cues

1. The seminal work is "Experimentelle Studien über das Sehen von Bewegung" [Experimental Studies on the Seeing of Motion] by Max Wertheimer, 1912, Zeitschrift für Psychologie, 61, 161–265. A modern review of this research (in English) is "Motion perception: a modern view of Wertheimer's 1912 monograph" by Robert Sekuler, 1996, Perception, 25(10), 1243–1258.

2. See, for example, "The Perception of Apparent Motion" by Vilayanur Ramachandran and Stuart Anstis, 1986, Scientific American, 254(6), 102–109.

3. Note that to eliminate all blur and jerkiness, frame rates over 200 frames per second are required. See, for example, "A psychophysical study of improvements in motion-image quality by using high frame rates" by Yoshihiko Kuroki et al., 2007, Journal of the SID, 15(1), 61–68.

A motion study by Eadweard Muybridge. When the images are displayed in rapid succession, the illusion of motion is created. Experience the effect by flipping the top-right page corners.

Only photography has been able to divide human life into a series of moments, each of them has the value of a complete existence. — Eadweard Muybridge (attributed)

011

Appeal to Nature
The tendency to believe natural things are inherently better than human-created things.

The appeal to nature is a fallacy that occurs when something is deemed superior simply because it evolved in nature or inferior because it's human created. It is also a cognitive bias in which people interpret "natural" as a cue for "safe", leading to a preference for natural things and a bias against synthetic things. Examples of appeals to nature include preferences for herbal remedies, organic foods, and all-natural ingredients, and biases against baby formulas, genetically modified foods, and vaccines.1

The fallacy occurs with conscious deliberation, whereas the bias operates below conscious awareness. For example, in an early example of biomimicry in design, many people reasoned that a flap-to-fly strategy was the best way to achieve flight because that is how birds evolved to fly. This is fallacious reasoning. By comparison, given two equivalent products, one labeled "natural" and one "artificial", people will pay a premium for the "natural" product even when they are aware that the products are identical. This is cognitive bias. Perceived naturalness decreases when things are added or mixed but is relatively unaffected when things are subtracted. For example, orange juice with added pulp is perceived to be less natural than orange juice with the pulp removed.2

The fallacy and the bias can interact to synergistic and tragic effect. For example, in May 2021, the Sri Lankan president Gotabaya Rajapaksa attempted to transition the country to all-organic agriculture, banning the use of synthetic fertilizers and pesticides. The consequences were swift and catastrophic, cutting the production of staple rice crops by 20% in just six months and devastating tea, the country's primary export and source of foreign exchange. In November 2021, the government began rolling back the bans, but the economy had already begun to collapse. In July 2022, the president fled Sri Lanka amid mass protests over the economic crisis.3

Natural does not equal good. Plenty of natural things are harmful: Cyanide, mercury, and snake venom are all natural. Therefore, consider nature in design but do not treat it as an authoritative source. Nature should inform and inspire but never dictate. In marketing contexts, recognize the power of naturalness in positioning as well as the asymmetrical impact of adding versus subtracting elements.

See also Biophilia Effect; Convergence; Framing; Mimicry; Scaling Fallacy

1. "The Meaning of 'Natural': Process More Important Than Content" by Paul Rozin, Sep 2005, Psychological Science, 16(8), 652–658.

2. "Natural Is Better: How the Appeal to Nature Fallacy Derails Public Health" by Sofia Deleniv et al., Mar 8, 2021, Behavioral Scientist.

3. See, for example, "In Sri Lanka, Organic Farming Went Catastrophically Wrong" by Ted Nordhaus and Saloni Shah, Mar 5, 2022, Foreign Policy. Note that the collapse of the Sri Lankan economy was caused by many factors, not just the move to organic agriculture, but the rush to detoxify the island was a major contributor.

In the early days of human flight, flapping to fly was believed to be a viable strategy. Why? The strategy was pervasive in nature and therefore assumed by many to represent an ideal. Many failed and injurious experiments later, the basic truth was laid bare: Nature offers possibilities, not perfection.

012

Archetypes, Psychological
Patterns that elicit a reflexive attentional or emotional response in humans.

Certain stimulus patterns elicit consistent, durable, reflexive responses in people, typically grabbing their attention or generating a particular emotional response. Such patterns are often referred to as archetypes.1

1. The seminal work on archetypes is "The Archetypes and the Collective Unconscious" by Carl Jung, in the Collected Works of C.G. Jung, Vol. 9, Part 1, R.F.C. Hull (Tr.), 1981, Pantheon Books. While Jungian psychology is largely considered defunct, the use of the term archetype as a metaphor for innate and conditioned biases and preferences is useful and rooted in evolutionary psychology.

2. See, for example, "A Darwinian Theory of Beauty" by Denis Dutton, Feb 2010, TED2010.

3. "Isn't It Cute: An Evolutionary Perspective of Baby-Schema Effects in Visual Product Designs" by Linda Miesler et al., Dec 2011, International Journal of Design, 5(3), 17–30.

4. See, for example, The Hero and the Outlaw: Building Extraordinary Brands through the Power of Archetypes by Margaret Mark and Carol Pearson, 2001, McGraw-Hill.

5. See, for example, "Pretty Woman or Erin Brockovich? Unconscious and Conscious Reactions to Commercials and Movies Shaped by Fairy Tale Archetypes — Results from Two Experimental Studies" by Andrea Gröppel-Klein et al., 2006, Advances in Consumer Research, 33(1), 163–174. The seminal work on archetypes in storytelling is The Hero with a Thousand Faces by Joseph Campbell, 1960, Princeton University Press.

It is believed that responses to archetypal forms, social roles, and stories provided early humans with adaptive benefits that have been passed down genetically or culturally to modern humans. These responses are expressed as reflexive behaviors, biases, and preferences.2

Archetypes that are applied to reinforce key design goals can increase the probability of success. For example, if seeking to make a car cuter and more playful, making the front of the car more baby-faced in appearance — enlarging the headlights, shrinking the middle grille, and decreasing the width of the air intake while increasing its height — can make it reliably and durably cuter to people across cultures.3

There are three types of psychological archetypes:

1. Archetypal forms — Examples include faces, horns and canine teeth, snakes, spiders, and sexual forms. Archetypal forms can be employed to improve design in a variety of contexts, including advertising, entertainment, product design, toy design, and architecture.

2. Archetypal social roles — Examples include the hero, rebel, mentor, magician, and villain. Archetypal roles have been successfully employed in advertising and brand design by companies such as Harley-Davidson, which features rebel or outlaw figures wearing black leather and dark sunglasses; Nike, which features heroic sports figures such as Michael Jordan, Tiger Woods, and Serena Williams; and Disney, which features magical characters in its merchandise, movies, and theme parks.4

3. Archetypal stories — Examples include the quest, tragedy, voyage and return, and conquering the monster. Archetypal stories have been successfully employed (wittingly or unwittingly) by filmmakers like George Lucas, George Miller, Steven Spielberg, John Boorman, Peter Jackson, and Francis Ford Coppola.5

Consider appropriate archetypes in your designs to capture attention and trigger desired attentional and emotional responses. Use archetypes that align with key aspects of a design to achieve synergistic effects.

See also Affordance; Archetypes, System; Biophilia Effect; Contour Bias; Face Detection; Mimicry; Threat Detection; von Restorff Effect

These are proposed designs for a marker system to warn future generations of the presence of a nuclear waste disposal site. The design specification required the markers to stand for the life of the radioactive hazard (10,000 years), clearly warn people to stay away from the area, and assume that future civilizations will not be knowledgeable of radioactive hazards or speak any language known today. The designs address this seemingly impossible specification through the application of archetypal theme and form — parched earth, snakelike earthworks, and claws and thorns — to warn future humans (or other intelligent species) of the radioactive hazards on a visceral level.

013

Archetypes, System
Universal structures and resulting patterns of behavior found across system types.

System archetypes are cause-effect structures that exist across a wide range of systems and yield similar patterns of behavior. For example, the system structure that enables a thermostat to maintain a specified room temperature is basically the same as the system structure that maintains the population of a species. The details of the systems are unique, but the cause-effect structures and resulting system behaviors are comparable.1

It is unknown how many system archetypes exist, but the following seven are common to many systems:

1. Eroding goals — A gap between a goal and performance is realized. Rather than taking difficult corrective action, the goal is lowered to close the gap, and performance does not improve. Example: Lowering global climate change goals.

2. Escalation — An action by [A] causes a like action by [B], which causes another like action by [A], and so on, creating a continuous escalation of action. Example: An arms race.

3. Fixes that fail — A quick fix alleviates an acute symptom but results in new unintended consequences and doesn't solve the underlying problem. Example: Paying debt with a credit card.

4. Limits to growth — A system grows by a reinforcing feedback process until it reaches a limit, at which point it stabilizes, declines, or collapses. Example: Overpopulation.

5. Addiction — A temporary fix alleviates an acute symptom, diverting resources from solving the underlying problem and ensuring the original symptom will return. Example: Dependence on fossil fuels.

6. Success to the successful — [A] and [B] need the same limited resource to be successful. More of the limited resource is given to [A], which gives [A] an increasingly large advantage over [B], justifying that [A] get even more of the limited resource. Example: Self-fulfilling prophecies.

7. Tragedy of the commons — [A] and [B] increasingly exploit a common resource faster than it can replenish, eventually exhausting that resource. Example: Overfishing.

System archetypes are universal structures abiding universal rules — they are the ultimate pattern language. Use them as templates to inform strategy and guide long-term planning. Explore archetypes to deeply understand system behaviors, diagnose unwanted results or side effects, and design high-leverage solutions. Consider interventions that worked in specific instances of an archetype and apply them to other instances.

See also Archetypes, Psychological; Feedback Loop; Leverage Point

1. System archetypes were discovered by Jay Forrester, Dennis Meadows, Donella Meadows, and other pioneers of systems thinking in the 1960s and 1970s. See, for example, The Fifth Discipline: The Art and Practice of the Learning Organization by Peter Senge, 1990, Doubleday. System archetypes are also known as generic structures.

[Graph: quantity over time for paired variables — rabbits and wolves, COVID infections and mask wearing, weight gain and dieting, inflation and interest rate adjustments — oscillating with delays T1, T2, and T3.]

This graph depicts the predator-prey relationship between rabbits and wolves in an ecosystem. As the rabbit population increases (T1), the wolf population increases after a delay (T2). As the wolf population increases (T2), the rabbit population decreases after a delay (T3). And so on. We observe the same pattern in other systems, including COVID infections and mask wearing, weight gain and dieting, and inflation and interest rate adjustments. Each system is unique, but the cause-effect structures are the same. Only by understanding such patterns can one hope to change a system's behavior.
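The coupled oscillation in this graph can be reproduced with a toy simulation. The sketch below is a discrete-time, Lotka-Volterra-style predator-prey loop; all coefficients and starting populations are invented for illustration and are not calibrated to any real ecosystem:

```python
# Toy predator-prey feedback loop (Lotka-Volterra style). The coefficients
# are invented; the point is the delayed, coupled oscillation pattern.
rabbits, wolves = 40.0, 9.0
for month in range(36):
    births    = 0.10 * rabbits             # rabbit reproduction
    predation = 0.002 * rabbits * wolves   # rabbits lost to wolves
    wolf_gain = 0.0005 * rabbits * wolves  # food supply grows the wolf pack
    wolf_loss = 0.05 * wolves              # wolf attrition
    rabbits  += births - predation
    wolves   += wolf_gain - wolf_loss
    if month % 6 == 0:
        print(f"month {month:2d}: rabbits = {rabbits:6.1f}, wolves = {wolves:5.1f}")
```

Rising rabbit numbers feed a delayed rise in wolves, which in turn drives rabbit numbers back down: the same reinforcing-then-balancing structure appears regardless of what the variables are named.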

014

Attractiveness Bias
A tendency to view attractive people as intelligent, competent, moral, and sociable.

People preferentially ascribe positive intellectual and social attributes to attractive people based on appearance alone.1 Attractive people receive more attention from the opposite sex, receive more affection from their mothers, receive more leniency from judges and juries, and receive more votes from the electorate than do unattractive people. All other variables being equal, attractive people are preferred in hiring decisions, are more likely to be appointed or elected into positions of leadership, and will make more money doing the same work than unattractive people. The attractiveness bias is a function of both biological and environmental factors.2

General attributes of attractiveness include symmetrical facial features, clear skin, ideal waist-to-hip ratios, indications of status and wealth, and signs of health and fertility. From an evolutionary perspective, the absence of these attributes is an indicator of malnutrition, disease, bad genes, or an inability to support child-rearing. A significant component of the attractiveness bias is biologically driven, which suggests the bias applies across cultures. For example, in studies presenting images of attractive and unattractive people to babies (two months old and six months old), the babies gazed longer at the attractive people regardless of their gender, age, or race.3

Men find fertility and fitness cues most attractive, whether homosexual or heterosexual. Heterosexual women find status and wealth cues most attractive, whereas homosexual women find fertility and fitness cues most attractive. As a result, heterosexual women and homosexual men enhance their attractiveness to partners by exaggerating fitness and fertility cues (e.g., wearing clothing that highlights fertility or fitness features), and heterosexual men enhance their attractiveness to women by exaggerating status and wealth cues (e.g., driving expensive cars).

Just as people judge books by their covers, people also judge people by their appearance. Consider the attractiveness bias in advertising and marketing contexts. Just as people preferentially ascribe positive intellectual and social attributes to attractive people, they also ascribe these positive attributes to the products attractive people use.

See also Baby-Face Bias; Face-ism Ratio; Gloss Bias; Waist-to-Hip Ratio

1. The seminal work on the attractiveness bias is "What Is Beautiful Is Good" by Karen Dion et al., 1972, Journal of Personality and Social Psychology, 24(3), 285–290. A contemporary review of the attractiveness bias research is "Maxims or Myths of Beauty? A Meta-analytic and Theoretical Review" by Judith Langlois et al., 2000, Psychological Bulletin, 126(3), 390–423.

2. See, for example, Survival of the Prettiest: The Science of Beauty by Nancy Etcoff, 2000, Anchor.

3. "Cross-Cultural Agreement in Facial Attractiveness Preferences: The Role of Ethnicity and Gender" by Vinet Coetzee, 2014, PLoS One, 9(7), e99629.

The first presidential debate between Richard Nixon and John Kennedy (1960) is a classic demonstration of the attractiveness bias. Nixon was ill and running a fever. He wore light colors and no makeup, further whitening his already pale complexion and contrasting with his five-o'clock shadow. Kennedy wore dark colors and makeup and practiced his delivery in a studio prior to the debate. People who listened to the debate by radio believed Nixon to be the winner. People who watched the debate on TV came to a different conclusion.

In the aftermath of the first debate, Nixon’s running mate, Henry Cabot Lodge, had a few choice words for the GOP presidential candidate. “That son-of-a-b**** just lost us the election”, Lodge reportedly said. Johnson, who was Kennedy’s running mate, thought his running mate had lost the debate. Lodge saw the debate on TV, while Johnson listened to the debate on the radio. — constitutioncenter.org

015

Baby-Face Bias
A tendency to see things with baby-faced features as having the characteristics of babies.

People and things with round features, large eyes, small noses, high foreheads, short chins, and light hair and skin are perceived to be baby-like and as such are perceived to have baby-like personality attributes: naivete, helplessness, honesty, and innocence. Large, round heads and eyes appear to be the strongest facial cues contributing to the bias.1

1. The seminal work on the baby-face bias is "Ganzheit und Teil in der tierischen und menschlichen Gemeinschaft" [Part and Parcel in Animal and Human Societies] by Konrad Lorenz, 1950, Studium Generale, 3(9), 455–499.

The baby-face bias applies to all anthropomorphic things, including people, animals, cartoon characters, and products such as bottles, appliances, and vehicles. And because the baby-face bias is an innate versus a learned bias, baby-face features are more resistant to habituation than other features. For example, a car front designed to elicit baby-face associations is more likely to be noticed and elicit emotional responses after repeated exposures.2

2. "Isn't It Cute: An Evolutionary Perspective of Baby-Schema Effects in Visual Product Designs" by Linda Miesler et al., Dec 2011, International Journal of Design, 5(3), 17–30.

3. See Reading Faces: Window to the Soul by Leslie A. Zebrowitz, 1998, Westview Press.

In advertising contexts, baby-faced adults are most effective when attributes of innocence and honesty are paramount, such as in personal testimonials, but less effective when attributes of authority and expertise are paramount, such as in a doctor's recommendation. In leadership contexts, leaders with more aggressive, dominant facial features tend to be favored in for-profit organizations. However, in nonprofit organizations, leaders with baby faces tend to be favored.

In legal contexts, baby-faced adults are less likely to be found guilty when the alleged crime involves an intentional act but are more likely to be found guilty when the alleged crime involves a negligent act. It is apparently more believable that a baby-faced person would do wrong accidentally than purposefully. Interestingly, when a baby-faced defendant pleads guilty, they receive harsher sentences than mature-faced defendants. The disagreement between the perception of innocence and the reality of guilt seems to evoke a harsher reaction than when perception and reality align.3

Consider the baby-face bias in the design of characters or products when facelike features are prominent. Characters of this type can be made more appealing by exaggerating the various neonatal features (e.g., larger, rounder eyes). In advertising, use mature-faced people when conveying expertise and authority; use baby-faced people when conveying innocence and honesty.

See also Anthropomorphism; Contour Bias; Face-ism Ratio; Mimicry; Savanna Preference; Supernormal Stimulus

Baby-face characteristics include round features, large eyes, small noses, high foreheads, and short chins. Accordingly, round things are perceived to have baby-like and feminine associations — e.g., cute, gentle, safe, submissive — whereas angular things are perceived to have mature and masculine associations — e.g., striking, aggressive, dangerous, dominant.

The Beetle has a strong personality, soft shapes, sympathetic shapes… It's like a pet; like a family member sitting in the garage. — Klaus Bischoff, Volkswagen Chief Designer, in Architectural Digest

016

Back of the Dresser
All parts of a design, visible and nonvisible, should be held to the same standard of quality.

Craftsmanship applied to areas not ordinarily visible to customers is a good indicator of product quality — proof that designers and developers have applied consistent quality to all aspects of the product. The principle borrows from an allegory shared by Steve Jobs:

"When you're a carpenter making a beautiful chest of drawers, you're not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You'll know it's there, so you're going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through."1

Thus, the "back of the dresser" serves as a metaphor for the parts of a design that are not visible to people in ordinary use but that reveal much about the care and craftsmanship that went into the design and development of the product. Indications of craftsmanship include quality materials, precision fit, uniformity of finish, internal consistency, and maker marks or signatures. These elements reflect the passion and care (or lack thereof) of the creators. When these standards of quality are applied unevenly in a design, using lower-quality materials and craftsmanship for sections hidden from view, it undermines confidence in the quality of the product and trust in the creators. What other shortcuts were taken? What other compromises were made? What other perfunctory work has yet to be discovered?2

The principle applies to all types of design, both physical and digital. A software application can have an attractive, usable user interface, but the code underneath can be inefficient, unstructured, poorly documented, and difficult-to-maintain spaghetti code. This not only indicates poor quality of craftsmanship and design; it also portends future maintainability, reliability, and scalability problems. Beauty is only skin deep; quality runs deeper.

Apply the same standards of design and development to all aspects of products, visible and nonvisible. Be consistent and take opportunities to provide conspicuous signals of care and craftsmanship even if few will ever see them. Treat invisible areas as if they were visible and include them in testing and evaluation. Light is the best disinfectant for shoddy craftsmanship and quality.

See also Aesthetic-Usability Effect; Brown M&M's; Consistency; Diffusion of Innovations; Kano Model

1. "Playboy Interview: Steven Jobs" by David Sheff, Feb 1985, Playboy.

2. It was once safe to assume that few would see the back of the dresser, but no more. With professional media, amateur blogs, online customer video reviews, etc., it is now safe to assume that a product will be opened up and taken apart, exposing the back of the dresser to all.

The interior design of computers in the aughts tended toward tangled hodgepodges of components and cabling (top), with one exception: the Apple Power Mac G5 and Mac Pro towers (bottom). These Macs looked as good with their doors off as on, signaling to consumers that Apple's commitment to craftsmanship and design extended past a pretty façade.

017

Biophilia Effect
A state of reduced stress and improved concentration resulting from nature views.

Poets and philosophers have long held that exposure to natural environments produces restorative benefits. In the past few decades, this claim has been tested empirically, and it does appear that exposure to nature confers benefits emotionally, cognitively, and physically.1

In a longitudinal study following seven- to twelve-year-olds through housing relocation, children who experienced the greatest increase in nature views from their windows made the greatest gains in standard tests of attention (potential confounding variables such as differences in home quality were controlled). A comparable effect was observed with college students based on the nature views from their dorm windows. Studies that examined the effects of gardening, backpacking, and exposure to nature pictures versus urban pictures corroborate the effect. The effect does not seem to require real plants in the environment; imagery — window views, posters on the wall, etc. — seems to suffice.2 Although some nonnatural environments may confer similar benefits, nature scenes appear to be the most reliable and consistent source for the general population.

Why should nature imagery be more restorative and conducive to concentration than, for example, urban imagery? The effect is believed to result from the differential manner in which the prefrontal cortex processes nature imagery versus urban imagery. However, given that photographs of nature versus urban environments are sufficient to trigger the effect, it is likely that the biophilia effect is more deeply rooted in the brain than the prefrontal cortex — perhaps an innate bias for greenery evolved in early humans because it conferred a selective advantage, a bias likely related to the savanna preference.

Consider the biophilia effect in the design of all environments, but in particular environments in which learning, healing, and concentration are paramount. Exposure to real nature objects (as opposed to images) should be favored when possible, as such exposure is more likely to produce a strong, generalizable effect. The strength of the effect also corresponds to the level of exposure, but the amount of exposure required to maximize the effect is not fully understood. Architectural classics such as Frank Lloyd Wright's Fallingwater and Mies van der Rohe's Farnsworth House suggest that more nature in the environment is generally better.

See also Cathedral Effect; Color Effects; Prospect-Refuge; Savanna Preference; Self-Similarity

1. The seminal work on the biophilia effect is Psychology: The Briefer Course by William James, 1892, Holt. The seminal empirical work on the effect is Cognition and Environment: Functioning in an Uncertain World by Stephen Kaplan and Rachel Kaplan, 1982, Praeger Press. The term biophilia effect is based on the biophilia hypothesis first proposed by Erich Fromm and popularized by Edward Wilson. See, for example, The Biophilia Hypothesis by Stephen Kellert and Edward Wilson (Eds.), 1995, Island Press.

2. "At Home with Nature: Effects of 'Greenness' on Children's Cognitive Functioning" by Nancy Wells, 2000, Environment and Behavior, 32(6), 775–795; and "The Restorative Benefits of Nature: Toward an Integrative Framework" by Stephen Kaplan, 1995, Journal of Environmental Psychology, 15, 169–182.

The High Line park of New York City provides a biophilic retreat amid the hustle and bustle of big-city life.

The natural world is the refuge of the spirit, remote, static, richer even than human imagination. — Edward O. Wilson, Biophilia

018

Box's Law
All models are wrong, but some are useful.

Box's law, proposed by the statistician George Box, states that models are simplified representations of real-world systems and as such are always "wrong" in the sense that they do not capture the full complexity of those systems. Despite this, models can be right enough to be useful. For example, maps are a type of model; they are wrong in that they are scaled-down simplifications but right enough to be useful.1

Models are useful when they have explanatory or predictive power. For example, models in evolutionary biology tend to have high explanatory power but low predictive power — i.e., they can explain mechanisms of change in great detail but are limited in the kinds of practical predictions they can make. By contrast, models in quantum physics tend to have high predictive power but low explanatory power — i.e., they can predict quantum phenomena with extraordinary accuracy but can't explain why the phenomena behave the way they do. The most useful models have both high explanatory power and high predictive power; for example, models used in celestial mechanics.2

Models can often be made more useful by combining their outputs with outputs from other independently created models. This is called ensemble modeling (a toy illustration follows below). For example, hurricane models cannot predict a storm's track with much precision beyond a few days. To increase the utility of forecasts, meteorologists combine independent models into spaghetti plots, which overlay each model's predicted storm track. When multiple predicted storm tracks agree, the confidence and precision of the prediction increase. And even when they disagree, the ensemble is still useful in predicting a storm's track within a range that enables evacuations and emergency preparations.

Consider Box's law when working with models or evaluating their results. Models are often treated with prophetical reverence, which is why it is important to recognize that they are all, at some level, wrong. The practical question, according to Box, is "How wrong do they have to be to not be useful?" Favor models that accurately and consistently explain or predict phenomena, that agree with reality over time, and that have explanatory value. Beware models based on unproven theories or political ideologies, as they are likely to be too wrong to be useful.3

See also Chesterton's Fence; Convergence; Mental Model; Normal Distribution; Satisficing; Swiss Cheese Model
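The ensemble idea can be sketched in a few lines of code. Below, three deliberately "wrong" toy models are combined; the model forms and all numbers are invented and merely stand in for real forecast models:

```python
import statistics

# Three simplistic models of the same quantity, each wrong in its own way.
def model_a(x: float) -> float: return 2.0 * x + 1.0   # overestimates the trend
def model_b(x: float) -> float: return 1.5 * x + 4.0   # overestimates the baseline
def model_c(x: float) -> float: return 1.8 * x + 2.0   # somewhere in between

def ensemble(x: float):
    """Combine independent model outputs; agreement narrows the spread."""
    predictions = [m(x) for m in (model_a, model_b, model_c)]
    return statistics.mean(predictions), (min(predictions), max(predictions))

mean, (low, high) = ensemble(10.0)
print(f"ensemble mean = {mean:.1f}, spread = [{low:.1f}, {high:.1f}]")
```

When the individual predictions cluster, confidence in the ensemble mean rises; when they diverge, the spread itself is the useful output, much like the range of tracks in a spaghetti plot.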

1. "Science and Statistics" by George Box, Dec 1976, Journal of the American Statistical Association, 71(356), 791–799.

2. Searching for Certainty by John Casti, 1991, William Morrow & Company.

3. Empirical Model-Building and Response Surfaces by George Box and Norman Draper, 1987, John Wiley & Sons.

HURRICANE DORIAN: SPAGHETTI PLOTS

All models are wrong to some extent, but they can still be useful. Consider these projected storm tracks for Hurricane Dorian in 2019. None of the models predicted the track perfectly, but the possible tracks enabled governments and residents to prepare for the worst. Additionally, when the predictions were considered together, the ensemble prediction was very close to the actual path.

HURRICANE DORIAN: ACTUAL PATH

019

Brooks' Law
For certain types of projects, adding people to speed things up inadvertently slows them down.

Brooks' law, proposed by the software engineer Fred Brooks, is a counterintuitive phenomenon that occurs when adding manpower to a late project with the intention of speeding it up ends up slowing it down.1 Brooks proposed the law for software projects, but it applies to any project or task in which any of these statements are true:

• The project or task requires significant ramp-up time — the time it takes for new team members to become productive. In complex projects, not only are new team members unable to contribute initially, they also require support from key people working on the project, taking those people off task and slowing progress. The time to add key people and bench players is early in the project cycle when you don't need them, or else they won't be available when you do.

• The project or task involves tight coordination and communication with other team members — the need for group norms, practices, rituals, and routines that enable team members to work efficiently together. This is as much about developing interpersonal relationships as it is about developing formal processes and systems, especially on large projects where the functions of many different groups are tightly coupled. Adding team members late in a project cycle who are unfamiliar with both the people and the processes risks disrupting both (the growth of this coordination cost is sketched below).

• The project or task is not easily partitionable into independent units of work — the extent to which a task can be divided into discrete units and worked in parallel. Partitionable tasks can often be sped up by adding more people, but non-partitionable tasks cannot. For example, the task of digging numerous post holes for a fence can be sped up by adding more people with shovels. However, if the task is to dig one deep, narrow hole, adding more people with shovels will hinder more than help.

Consider Brooks' law in project planning contexts. Weigh the costs of adding additional key positions early against the costs of project delays — and staff accordingly. Maintain a network of people who have institutional and project familiarity as well as personal relationships with key team members. As a last resort, consider shiftwork for non-partitionable tasks to make up time, minimize fatigue, and increase the total time on task.

See also Bus Factor; Development Cycle; Iron Triangle; Knowing-Doing Gap; Process Eats Goal
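One driver of Brooks' law can be quantified directly. In The Mythical Man-Month, Brooks observes that if each part of a task must be separately coordinated with each other part, effort grows as n(n-1)/2. A quick sketch (the team sizes chosen are arbitrary):

```python
# Pairwise communication channels grow quadratically with team size:
# n * (n - 1) / 2. This is the coordination cost behind Brooks' law.
def channels(team_size: int) -> int:
    return team_size * (team_size - 1) // 2

for n in (2, 5, 10, 20, 50):
    print(f"team of {n:>2}: {channels(n):>4} communication channels")
```

Doubling a team from 10 to 20 people roughly quadruples the number of channels (45 to 190), which is why adding people to a coordination-heavy project can consume more time than it contributes.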

1. The seminal work is The Mythical Man-Month by Fred Brooks, 1975, Addison-Wesley.

As any video editor or computer programmer knows, certain types of tasks cannot be sped up by adding more people: tasks requiring significant ramp-up time, tasks requiring tight coordination and communication with team members, and tasks that cannot be partitioned into independent units of work. Adding additional people to these types of tasks can actually slow progress down rather than speed it up, a consequence of the increased communication and management overhead required to coordinate them.

The bearing of a child takes nine months, no matter how many women are assigned. — Frederick P. Brooks Jr., The Mythical Man-Month

020

Brown M&M's
The use of covert, embedded tests to verify that quality standards have been met.

Brown M&M's borrows from a practice employed by the American rock band Van Halen. The band's concert agreements had a rider requiring M&M's as a backstage snack but noted in all caps: "WARNING: ABSOLUTELY NO BROWN ONES". The band was initially castigated for being prima donnas, but it was later revealed that the rider was a quality control strategy. Van Halen had a complex and large stage show, with truckloads of heavy equipment and pyrotechnics. If the details of the contract were not executed precisely, the risks ranged from stage collapse to explosions. If there were no brown M&M's backstage, the band knew that the promoters had read and executed the contract with care. If there were brown M&M's, however, it was likely that other details had been missed as well.1

The use of brown M&M's is similar to bebugging or defect seeding in software development, which involves intentionally adding defects to test the effectiveness of quality control processes (see the sketch below). Other types of covert quality testing include mystery shopping, in which paid confederates pose as customers to test retail experiences; undercover inspectors, who try to smuggle weapons or similar illegal materials through checkpoints; and red teams, who play the role of an adversary to attack, probe, and test security systems. The strength of these testing methods is their directness and validity: They are very effective at measuring what they intend to measure.

Another lesson of Brown M&M's relates to the principle of TAGRI (They Ain't Gonna Read It), which refers to the fact that people often fail to read the documentation — be it contracts, design specifications, or user manuals — especially when there is a lot of it. While the ideal solution is to create concise documentation worth reading, the reality of much design documentation is that it requires extensive detail. Brown M&M's can be useful to verify that such documentation is appropriately reviewed and studied.2

Consider the use of brown M&M's for quality control; they are low cost, low risk, and simple to apply in a range of contexts. Contrary to the Van Halen rider, favor brown M&M's near mission-critical and safety-critical sections to avoid trivializing them. Keep the specific form and location of brown M&M's secret, but don't be afraid to let people know they are embedded. This will increase vigilance and help to promote trust.

See also Depth of Processing; Don't Eat the Daisies; Signal-to-Noise Ratio; Testing Pyramid
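Defect seeding also supports a rough estimate of how many real defects remain, using the same capture-recapture logic biologists use to estimate animal populations (the Lincoln index). The sketch below applies that standard formula; the numbers are invented:

```python
# Capture-recapture (Lincoln index) estimate from defect seeding: if reviewers
# found 75% of the seeded bugs, assume they found roughly 75% of the real ones.
def estimate_total_real(seeded_planted: int, seeded_found: int, real_found: int) -> float:
    if seeded_found == 0:
        raise ValueError("no seeded defects found; detection rate unknown")
    detection_rate = seeded_found / seeded_planted
    return real_found / detection_rate

# Example: 20 defects seeded; reviewers caught 15 of them plus 30 real defects.
total = estimate_total_real(seeded_planted=20, seeded_found=15, real_found=30)
print(f"estimated real defects: {total:.0f} (about {total - 30:.0f} still latent)")
```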

1. The Brown M&M's practice is recounted in Crazy from the Heat by David Lee Roth, 1997, Ebury Press. It is similar to what is colloquially known as the carpenter ant principle, which asserts that if you see one carpenter ant, you can safely assume that there are others.

2. See, for example, "Communication through Boundary Objects in Distributed Agile Teams" by Johan Kaj Blomkvist et al., 2015, in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), Association for Computing Machinery, 1875–1884.

So, when I would walk backstage, if I saw a brown M&M in that bowl…well, line-check the entire production. Guaranteed you're going to arrive at a technical error. They didn't read the contract. — David Lee Roth, Crazy from the Heat

021

Bus Factor
The number of team members who, if lost, would put a project in jeopardy.

The name of the principle refers to the expression hit by a bus, which is intended to be a humorous metaphor for a person who becomes suddenly unavailable due to death, extended sick leave, changing jobs, etc.1

1. Also commonly known as the truck factor. The origin of the term is unclear, but it likely originates in software development. The metaphor may derive from Joseph Conrad's novel The Secret Agent, in which one of the characters says, "…try to understand that it was a pure accident; as much an accident as if he had been run over by a bus while crossing the street". Note that the inverse bus factor is the number of team members who, if not hit by a bus, would put the project in jeopardy.

2. "A Novel Approach for Estimating Truck Factors" by Guilherme Avelino et al., 2016, IEEE 24th International Conference on Program Comprehension (ICPC), 1–10.

The bus factor refers only to team members who are essential to project success and cannot be easily replaced. A high bus factor (e.g., bus factor = 5) means that multiple team members could be lost without compromising project success. Large bus factors reduce risk but bring less agility and higher costs:

• Back-up team members must have the access, availability, project knowledge, skills, and willingness to cover the lost team member.
• More management is needed to organize and coordinate efforts.

A low bus factor (e.g., bus factor = 1) means that losing any one team member would compromise project success. Small bus factors increase risk but bring greater agility and lower costs:

• Lean staffing
• Reduced equipment needs
• Simplified communication
• Less management needed to operate efficiently

Startup organizations frequently have bus factors of 1, whereas more mature organizations are able to increase their bus factors to achieve an appropriate balance between risk and cost. A 2016 analysis of popular GitHub projects found that 65% of the projects had bus factors of 2 or less, and only 10% had bus factors of 10 or more (the estimation idea is sketched below).2

The bus factor is primarily addressed through redundancy of personnel — personnel who have sufficient access, availability, project knowledge, and skills to substitute and maintain operational continuity. However, risks associated with low bus factors can also be reduced using techniques like automation, checklists, cross-training, pair design, role rotation, periodic design/code reviews, and good documentation.

Consider the bus factor in organizational design and project team design. Avoid bus factors of 1 whenever possible. Use methods like automation, pair design, and regular work-product reviews when redundant personnel aren't economically possible.

See also Brooks' Law; Factor of Safety; No Single Point of Failure; Redundancy; Reverse Salient
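The study cited above estimates bus (truck) factors algorithmically from version-control history. A greatly simplified version of that idea appears below: map each file to its knowledgeable authors, then count how many people must leave before most files lack an owner. The 50% threshold, the data, and the greedy removal order are invented simplifications, not the published algorithm:

```python
# Simplified bus-factor estimate: remove the most critical person one at a
# time until more than half of the files are left with no remaining author.
def bus_factor(file_authors: dict, orphan_limit: float = 0.5) -> int:
    remaining = set().union(*file_authors.values())
    removed = 0
    while True:
        orphaned = sum(1 for authors in file_authors.values()
                       if not (authors & remaining))
        if orphaned / len(file_authors) > orphan_limit:
            return removed
        # Greedily remove whoever authors the most still-covered files.
        busiest = max(remaining,
                      key=lambda p: sum(p in a for a in file_authors.values()))
        remaining.discard(busiest)
        removed += 1

files = {
    "ui.py":     {"dana"},
    "api.py":    {"dana", "lee"},
    "db.py":     {"lee"},
    "deploy.sh": {"sam"},
}
print(bus_factor(files))  # 2: losing dana and lee orphans 3 of the 4 files
```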

[Diagram: a web development team comprising a project manager, project architects, a designer, web developers, and QA and testing specialists.]

Though rarely considered until after a teammate is "hit by a bus", project success is impacted by the bus factor of the project team. Projects that have critical dependencies on individuals are vulnerable to individual accidents, illnesses, family emergencies, and turnover. The designer on this team has a bus factor of 1. As such, the project is one bad day away from stalling.

022

Cathedral Effect
High ceilings promote abstract thinking. Low ceilings promote detail-oriented thinking.

It is widely accepted that people prefer high ceilings to low ceilings. Lesser known, however, is that ceiling height can influence how people approach problem solving. Depending on the nature of the problem, ceiling height can either undermine or enhance problem-solving performance. Noticeably low or noticeably high ceilings promote different types of cognition — high ceilings promote abstract thinking and creativity; low ceilings promote concrete, detail-oriented thinking. No effect is observed if ceiling height goes unnoticed.1

In self-report measures, people predictably rated their general affect as "freer" in high-ceilinged rooms versus "confined" in low-ceilinged rooms. In word tasks, subjects were able to solve anagram problems more efficiently when the anagram aligned with ceiling height. For example, subjects in a high-ceilinged room could solve freedom-related anagrams (e.g., "liberation") faster than those in a low-ceilinged room but were slower to solve confinement-related anagrams (e.g., "restrained") than those in the low-ceilinged room. In another experiment, two groups were asked to conduct product evaluations, one group in a high-ceilinged room and one in a low-ceilinged room. The group in the high-ceilinged room tended to focus on general product characteristics, whereas the group in the low-ceilinged room tended to focus on specific features.

One hypothesis is that this effect is due to priming — the stimulation of certain concepts in memory to promote and enhance cognition regarding related concepts. High ceilings prime "freedom" and related concepts; low ceilings prime "confinement" and related concepts. Another hypothesis is that the effect is due to a vestigial preference for high tree canopies and open skies, as would have been common on the African savannas.

Consider the cathedral effect in the design of work and retail environments:

• Work environments — Favor large rooms with high ceilings when tasks require creativity and out-of-the-box thinking (e.g., a research and development lab) and smaller rooms with lower ceilings for tasks that require detail-oriented work (e.g., a surgical operating room).

• Retail environments — Favor spaces with high ceilings when consumer choice requires imagination (e.g., a home remodeling store) and spaces with lower ceilings for more task-oriented shopping (e.g., a convenience store). Favor high ceilings to extend the time visitors remain on-site (e.g., a casino) and low ceilings to reduce the time visitors remain on-site (e.g., a fast-food restaurant).

See also Perspective Cues; Priming; Prospect-Refuge; Savanna Preference

1. The seminal work on the cathedral effect is "The Influence of Ceiling Height: The Effect of Priming on the Type of Processing That People Use" by Joan Meyers-Levy and Rui (Juliet) Zhu, Aug 2007, Journal of Consumer Research, 34(2), 174–186.

[Diagram: a low ceiling, paired with a bird's-eye view, labeled "Focus"; a high ceiling, paired with a worm's-eye view, labeled "Creativity".]

The ability to focus and perform detail-oriented work is enhanced by environments with low ceilings. The ability to perform more creative work is enhanced by environments with high ceilings. A related effect pertains to visual perspective: worm's-eye views (looking upward) evoke cognition and associations similar to high ceilings, whereas bird's-eye views (looking downward) evoke cognition and associations similar to low ceilings.

023

Causal Reductionism
A tendency to fixate on one cause when solving problems, ignoring the reality of multiple causes.

No real-world effect or phenomenon results from a single cause. Products don't succeed or fail due to a single feature or marketing campaign. Components don't fail due to a single defect or environmental condition. Accidents don't occur due to a single event or human error. Even the common framing of "root cause" incorrectly suggests the existence of single cause-effect phenomena, when effects are always the result of "root causes" — plural. That people commonly think otherwise demonstrates the pervasiveness of causal reductionism.1

Causal reduction occurs when multiple salient causes are improperly simplified to one cause to explain an effect. For everyday understanding of how the world functions, this mental heuristic works fine. But when troubleshooting complex problems or designing for real-world applications, causal reductionism brings univariate thinking to problems that are inherently multivariate, leading to oversimple understanding and ineffectual solutions.2

There are two basic kinds of causes: antecedent and proximal. Antecedent causes comprise the historic chains of causation that lead to an effect. Proximal causes are the last links in those chains just before an effect occurs. When addressing real-world phenomena, both are forever plural. For example, antecedent causes of car accidents are things like a lack of driver sleep and poorly maintained brakes and tires; proximal causes are things like rainy conditions and talking on the phone. The sum of all such causes, antecedent and proximal, is the "cause" of the accident.

Ironically, causal reductionism is, itself, a causal reduction in that it results from a confluence of many factors, both conscious and subconscious: fallacious reasoning, bias toward simplicity, proximity bias, visibility, etc. The consequence is the pervasive and oversimple [A] causes [B] mental model. A more accurate and useful mental model is [a pie chart of things] causes [B], the sum of which brings about the effect.

A multivariate world requires multivariate thinking to be successful. Resist the tendency to attribute phenomena to single causes. Consider using pie charts or similar models to map out the causal landscapes underlying complex problems. Focus on causes that have both significant impact and that can be influenced through design intervention. Treat the term root cause as plural — like "data".

See also Archetypes, System; Error, Design; Error, Human; Root Cause; Swiss Cheese Model

1. The topic of causal reductionism-holism has long been fodder for philosophers. For example, in his A System of Logic, John Stuart Mill lamented the impossibility of picking out a single "cause" from the background "conditions" of an event. Bertrand Russell made a similar note in his essay "On the Notion of Cause".

2. In 1997, the journalist Sebastian Junger published The Perfect Storm, which later became a movie by the same name. The perfect storm metaphor was used to describe three distinct weather phenomena converging to create one massive storm. This metaphor popularized the notion of multivariate causes and became a useful device to mitigate causal reductionist thinking. Despite this, the tendency to reduce causation to "user error" persists.

CAUSAL REDUCTIONISM — Reason the Edsel failed:
• "Edsel" is a terrible name for a car.

CAUSAL REALITY — Reasons the Edsel failed:*
• The Edsel was introduced in September 1957, and the U.S. stock market crashed in October, followed by the 1958 recession.
• New Edsels suffered from quality issues such as stuck trunk lids, peeling paint, missing parts, oil leaks, and failing brakes.
• "Edsel" is a terrible name for a car.
• Market research for the Edsel was done so far in advance that by the time the car was released, tastes had changed and people preferred smaller, economy cars.
• The oral symbolism of the front grille made the car unappealing to male buyers.
• Ford's new corporate strategy did not include making the Edsel a success.
• The Edsel was overhyped prelaunch, which led to disappointment when the car became available.

* This pie chart is for illustration purposes only. This is not a complete list of reasons, and the percentages are invented.

024

Chesterton's Fence
Seek to understand why things exist the way they do before changing or removing them.

Chesterton's fence, proposed by the writer and philosopher G.K. Chesterton, is introduced in The Thing as a principle regarding reform. As a thought experiment, the reader is asked to imagine a fence across a road. Not knowing why it is there, a "modern type of reformer" will rashly choose to remove it, whereas a "more intelligent type of reformer" will seek to understand why the fence was put there before deciding whether to remove it. Thus, Chesterton's fence, the principle: Do not remove a fence (or anything else for that matter) until you know why somebody put it there.1

In its modern form, Chesterton's fence is a heuristic that advises against changing or removing things before we understand how and why those things came to be. For example, a software engineer discovers what appears to be vestigial code during a refactoring exercise, decides it is unnecessary, and contemplates deleting it. Chesterton's fence discourages changing or deleting such things prior to understanding why they were put there in the first place.

Chesterton argued that the creation of things like fences is not a random act: Somebody planned, designed, financed, and constructed them; and to go to all that trouble, they likely had a good reason for doing so. To remove or materially modify such things before understanding why they were designed and implemented the way they were is to invite unforeseen consequences.

Chesterton's fence originally referred to human-created things like fences, but the principle can be applied to any functioning system. For example, prior to significantly changing an environmental ecosystem — e.g., damming a river or eradicating a pest — one should have an understanding of the role these things play in their local contexts and the effects of modification prior to making changes. Chesterton's fence is as much a principle of humility as a principle of design: a reminder to not confuse ignorance of reasons with absence of reasons.

Consider Chesterton's fence when modifying existing designs. When time and resources permit, do the research to understand the rationale behind design elements. When time and resources don't permit, apply small and incremental changes to test the effects of modification prior to the final change or removal.

See also Ackoff's Law; Brown M&M's; Dunning-Kruger Effect; Not Invented Here

1. The Thing by G.K. Chesterton, 1929, Sheed & Ward.

Chesterton's fence is not just about fences. It is about the perils of changing things — e.g., fences, components, products, strategies, or secret formulas — without a good understanding of why they exist and function the way they do. In April 1985, The Coca-Cola Company announced a change to its nearly century-old secret formula. Market researchers were confident that New Coke would be a hit because it performed better in taste tests. But drinking Coke had less to do with taste than with brand stature and Americana symbolism. Taste testers thought they were giving feedback on a new product, not on a product to replace Coke. The backlash was swift and severe: Grassroots campaigns, protest groups, and petitions to save the original Coke soon followed. Within months, Coca-Cola pivoted, offering both drinks and rebranding the original as Coca-Cola Classic.

[Coke] officials acknowledged that the major problem with the research…was that it failed to make clear to consumers that the old Coke would be scrapped. — The New York Times, July 12, 1985

025

Clarke's Laws
Three maxims that offer insights into the nature of innovation.

Arthur C. Clarke proposed three laws in his writings about the future, all of which regard the difficulty of distinguishing the possible from the impossible. The three laws can serve as useful heuristics for those engaged in innovative design and development.1

1. When distinguished scientists state that something is possible, they are almost certainly right. When they state that something is impossible, they are very probably wrong.2

Clarke's first law recognizes that experienced scientists have a good understanding of known phenomena and existing technologies within their domains of expertise. Therefore, when they claim a thing to be possible, they are usually correct because their claim derives from existing knowledge. However, experienced scientists also tend to be entrenched in their thinking and often underestimate the impact of yet-to-be-discovered phenomena and yet-to-be-invented technologies, especially those outside their domains of expertise. Therefore, when they claim a thing to be impossible, they are often incorrect because they do not accurately account for the discoveries and inventions to come.

2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

Clarke's second law recognizes that the limits of the possible can only be discovered by trying to exceed them. If one stops before finding the limits, the point at which one stops sets the limits. Much like a mountain that must be summited to see what is on the other side, a limit must be challenged to see what, if anything, lies beyond.

3. Any sufficiently advanced technology is indistinguishable from magic.

Clarke's third law recognizes the critical role that experience and perception play in design, defining how "advanced" a technology is by how magical it looks and feels. A number of variants to the third law have been proposed, but perhaps most relevant to designers are these:

• Any technology distinguishable from magic is insufficiently advanced.
• Any sufficiently advanced design looks and feels like magic.

Innovation is, by its nature, nonobvious and counter to mainstream thinking. Consider Clarke's three laws to help navigate such challenges in innovative endeavors: Know when to listen to experienced experts and when not to; test limits by striving to exceed them; and seek to design things that create magical experiences.

See also Diffusion of Innovations; Dunning-Kruger Effect; Levels of Invention; Paradox of Great Ideas

1. The first two laws were published in "Hazards of Prophecy: The Failure of Imagination" by Arthur C. Clarke, 1962, in Profiles of the Future, Harper & Row. The third law was added as a footnote in the 1973 revision.

2. The law in its original form reads, "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong".

…we have invented a new technology called multi-touch, which is phenomenal. It works like magic. — Steve Jobs, introducing the iPhone in 2007

026

Classical Conditioning A method of influencing how a person viscerally responds to a thing. Classical conditioning was the first type of learning to be studied by behavioral psychologists. Lab workers noticed that dogs in the lab began salivating when the workers entered the room. Because the lab workers feed the dogs, their presence (neutral stimulus) had become associated with food (trigger stimulus) and, therefore, elicited the same response as the food itself (salivation). Similar behaviors are seen in cats when they come running at the sound of a can opening. The influence occurs at an unconscious level and results in a physical or emotional reaction.1 Humans respond to classical conditioning in the same way as animals, and thus this principle can be used as a method of influencing or modifying human behavior. For example, repeatedly pairing a product with positive images and sounds results in stimulation of reward centers in the brain and causes consumers to be attracted to those products. One of the best examples may be the cigarette advertisements of the past, where attractive or ruggedly handsome models in glamorous settings were used to condition consumers to believe smoking was “cool”. Thus, an entire generation of smokers was born. In today’s smoking-related advertising, the same principle is at work but in reverse — magazines and television commercials feature severely ill or disfigured people identified as former smokers. These types of images stimulate pain centers in the brain and condition negative associations with smoking. The stronger the reaction to a stimulus, the easier it will be to generalize that reaction to related things. While most studies have found extremely negative “fear messaging” to be effective, such messages must be crafted with care to prevent avoidance of the message. For fear messages to be effective, people must comprehend the severity and feel the message applies to them, understand how to apply corrective actions, and believe they can succeed.2 Employ classical conditioning to influence the appeal of a design or elicit specific kinds of behaviors by repeatedly pairing a design with a trigger stimulus. Use images of attractive or positive things to create positive associations and images of ugly or negative things to create negative associations. Don’t use messages that are overly negative or emotional, as people will simply avoid them. See also Aesthetic-Usability Effect; Exposure Effect; Operant Conditioning;

Shaping
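The dynamics of repeated pairing can be sketched with the Rescorla–Wagner model, a standard formalization of classical conditioning (the model and the parameter values below are illustrative additions, not part of this entry). Associative strength grows toward a maximum with each pairing, steeply at first and then with diminishing returns:

# Rescorla-Wagner model of classical conditioning (illustrative).
# V = associative strength between a conditioned stimulus (e.g., a
# brand) and an unconditioned stimulus (e.g., attractive imagery).
# Update per pairing: V += alpha * beta * (lam - V)

alpha = 0.3  # salience of the conditioned stimulus (illustrative value)
beta = 0.5   # learning rate of the unconditioned stimulus (illustrative value)
lam = 1.0    # maximum associative strength the pairing supports

V = 0.0
for pairing in range(1, 11):
    V += alpha * beta * (lam - V)
    print(f"pairing {pairing:2d}: associative strength = {V:.3f}")

# Strength rises quickly during early pairings and then levels off,
# which is one reason early exposures matter most and additional
# repetition yields diminishing returns.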

1

The seminal work in classical conditioning is Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex by Ivan Pavlov, 1927, Oxford University Press, translated and edited by G.V. Anrep, 1984, Dover Publications.

2

See “Appealing to Fear: A Meta-Analysis of Fear Appeal Effectiveness and Theories” by Melanie B. Tannenbaum et al., Nov 2015, Psychological Bulletin, 141(6), 1178 –1204; and “Sixty years of fear appeal research: Current state of the evidence” by Robert A.C. Ruiter et al., 2014, International Journal of Psychology, 49(2), 63 –70.

I invited six hundred women into a room, and presented each of them with a blue Tiffany’s box…When the women received the box, we measured their heart rate and blood pressure. And guess what? Their heart rates went up 20 percent, like that. — Martin Lindstrom, Buyology

027

Closure The brain automatically completes recognizable forms when they are interrupted or incomplete.

Closure, one of the Gestalt principles of perception, asserts that people are inclined to perceive a set of individual elements as a single, recognizable pattern, or closed figure, rather than multiple, individual elements. For example, when individual line segments are positioned along a circular path, they are first perceived holistically as a circle and then as multiple, independent elements. The brain subconsciously closes gaps and fills in missing information in order to perceive a completed pattern, even when there is not one. The principle is strongest when elements are located near one another and approximate simple, recognizable patterns, such as geometric forms.1

The closure response is automatic and subconscious. It likely evolved as a heuristic to support rapid visual processing of complex and incomplete patterns and conserve cognitive resources. For example, in threat detection contexts, closure would have enabled early human ancestors to quickly detect partially obscured threats, like snakes in the grass or predators moving through the jungle. As such, it is a powerful response that typically overrides the effects of other Gestalt principles.

The principle of closure can be used to make designs more interesting and attention-grabbing; the very act of completing an incomplete image makes the interaction more engaging. However, if the energy required to find or form a pattern is greater than the energy required to perceive the elements individually, closure will not occur. Designers can create closure through transitional elements (e.g., subtle visual cues that help direct the eye to find the pattern).

Closure also enables designers to reduce complexity by reducing the number of elements needed to organize and communicate information. For example, a logo design that is composed of recognizable elements does not need to complete many of its lines and contours to be clear and effective. This makes the logo more interesting to look at because viewers subconsciously participate in the completion of its design.

Consider the closure principle when the goal is to reduce complexity and increase the interestingness of designs. When designs involve simple and recognizable patterns, consider removing or minimizing the elements in the design so that viewers can participate. When designs involve more complex patterns, consider the use of transitional elements to assist viewers in connecting the dots.
See also Apparent Motion; Figure-Ground; Good Continuation; Proximity;

Similarity; Threat Detection; von Restorff Effect; Zeigarnik Effect

1

The seminal work on closure is “Untersuchungen zur Lehre von der Gestalt, II” [Laws of Organization in Perceptual Forms] by Max Wertheimer, 1923, Psychologische Forschung, 4, 301–350, reprinted in A Source Book of Gestalt Psychology by Willis Ellis (Ed.), 1938, Kegan Paul, Trench, Trubner & Company, 71–88.

Engaging people to complete or “close” logos is an effective means of grabbing and holding attention.

028

Cognitive Dissonance A state of mental discomfort due to incompatible attitudes, thoughts, and beliefs. Cognitive dissonance is a state of mental stress due to conflicting thoughts or values, often created when new information challenges existing beliefs. When a person is in a state of cognitive dissonance, they seek out ways to relieve this mental stress.1

1

The seminal work on cognitive dissonance is A Theory of Cognitive Dissonance by Leon Festinger, 1957, Stanford University Press. A comprehensive review of the theory can be found in Cognitive Dissonance: Progress on a Pivotal Theory in Social Psychology by Eddie Harmon-Jones and Judson Mills (Eds.), 1999, American Psychological Association.

2

See, for example, “Cognitive Consequences of Forced Compliance” by Leon Festinger and James Carlsmith, 1959, Journal of Abnormal and Social Psychology, 58, 203 – 210.

People relieve cognitive dissonance in one of three ways:
1. Reduce the importance of the conflicting thought
2. Add a new thought to counteract it
3. Accept the conflicting thought

Consider an advertising campaign that suggests that the love you feel for your spouse relates to how many diamonds you buy them. This campaign strategy seeks to create cognitive dissonance in you, between the feelings of love you have for your spouse and the idea that you haven’t bought them diamonds. And since your spouse also saw the campaign, the pressure is on. In order to relieve this stress, you can:
1. Reduce the importance of the conflicting thought (e.g., “a diamond is, after all, just a bunch of pressed carbon”)
2. Add a new thought to counteract it (e.g., “the advertising campaign is trying to manipulate me using cognitive dissonance”)
3. Accept the conflicting thought (e.g., “I’d better go buy some diamonds”)

Incentives affect cognitive dissonance in interesting ways. When incentives are small, people are inclined to change the way they feel about what they are doing to alleviate dissonance (e.g., “it is okay to perform this task because I like it”). When incentives increase, however, people retain their original beliefs and alleviate dissonance by justifying their participation with their compensation (e.g., “it is okay to perform this task because I am paid well”).

In game and persuasion contexts, consider interaction designs in which people invest small amounts of time, attention, and participation — i.e., get skin in the game — to create dissonant cognitions and then provide simple and immediate mechanisms to alleviate that dissonance. For example, you might allow users to download trial software for free, then provide an easy way for them to upgrade and buy the software.

When trying to change attitudes or beliefs, favor adding new thoughts or changing the conflicting thoughts to appear compatible rather than reducing the importance of conflicting thoughts. Avoid using incentives to achieve change.2
See also Consistency; Cost-Benefit; Framing; Hierarchy of Needs

Benjamin Franklin was a formidable social engineer. He once asked a rival legislator to lend him a rare book, which he did. The rival greatly disliked Franklin but had done him this favor. How to alleviate the conflict? The two became lifelong friends.

He that has once done you a kindness will be more ready to do you another, than he whom you yourself have obliged. — Ben Franklin, quoting an “old maxim”, The Autobiography of Ben Franklin

029

Color Effects The cognitive and behavioral effects triggered by exposure to colors. Words for colors developed in the same basic sequence across all languages: first black and white, then red, yellow, green, and blue. These six colors can trigger instinctive associations and responses. In some cases, these instinctive associations and responses can be overridden by learned associations and responses, which explains why color meanings often vary across cultures.1

1

The seminal work on universal color categories is Basic Color Terms: Their Universality and Evolution by Brent Berlin and Paul Kay, 1969, Center for the Study of Language and Information.

2

See, for example, “Why Good Guys Wear White: Automatic Inferences About Stimulus Valence Based on Brightness” by Brian Meier et al., 2004, Psychological Science, 15(2), 82– 87; and “Wearing Black Clothes: The Impact of Offenders’ and Suspects’ Clothing on Impression Formation” by Aldert Vrij, 1997, Applied Cognitive Psychology, 11(1), 47– 53.

3

See, for example, “Red Signals Dominance in Male Rhesus Macaques” by Sara Khan et al., 2011, Psychological Science, 22(8), 1001–1003; “The Effect of Red Background Color on Willingness-to-Pay: The Moderating Role of Selling Mechanism” by Rajesh Bagchi and Amar Cheema, Feb 2013, Journal of Consumer Research, 39(5), 947– 960.

4

“Lime-Yellow Color as Related to Reduction of Serious Fire Apparatus Accidents: The Case for Visibility in Emergency Vehicle Accident Avoidance” by Stephen Solomon, 1990, Journal of the American Optometric Association, 61, 827– 831; and “Distinguishing Between Perceiver and Wearer Effects in Clothing Color-Associated Attributions” by S. Craig Roberts et al., 2010, Evolutionary Psychology, 8(3), 350 – 364.

5

See, for example, “The Restorative Benefits of Nature: Toward an Integrative Framework” by Stephen Kaplan, 1995, Journal of Environmental Psychology, 16, 169 –182.

6

See, for example, “Blue Light Improves Cognitive Performance” by S. Lehrl et al., 2007, Journal of Neural Transmission, 114, 457– 460.

Even when a language has words for no other colors, it has words for black and white. The darker or lighter a color becomes, the stronger these black and white associations become.2
1. Black — As colors get darker, approaching black, they are increasingly associated with aggression and dominance, likely due to an evolved association with nighttime and vulnerability to predators.
2. White — As colors get lighter, approaching white, they increasingly signal peacefulness and safety, likely due to an evolved association with daytime and a reduced vulnerability to predators.
3. Red — Strongly associated with fertility and dominance in most primates. And because red arouses competitive tendencies, it can increase performance on simple physical tasks but undermine performance on collaborative and complex tasks.3
4. Yellow — The most visible color, likely the result of an evolved sensitivity for detecting ripe fruit. However, yellow apparel decreases attractiveness in both males and females more than any other color, probably because it references a jaundiced complexion.4
5. Green — Universally associated with nature and security, perhaps a vestige of our arboreal ancestry and the reason that green traffic lights around the world mean “go”. Green environments reduce stress and mental fatigue and support problem solving and creativity.5
6. Blue — The world’s most popular color. The color is commonly associated with water and purity, except in food and health contexts, where it is associated with spoilage and hypoxemia. Blue promotes alertness and well-being during the day but can disrupt sleep at night.6

Consider color effects in the design and selection of color schemes. When designs are situated in evolutionary-type contexts (e.g., food selection, mate selection, threat detection, etc.), choose colors that elicit appropriate instinctive associations and responses; otherwise, choose colors based on aesthetics and cultural conventions.
See also Aposematism; Color Theory; Expectation Effects; Highlighting;

Interference Effects; Similarity; Uniform Connectedness

030

Color Theory A body of practical knowledge regarding the application and mixing of colors. Color can make designs more visually interesting and aesthetic and can reinforce the organization and meaning of elements in a design. If applied improperly, however, colors can seriously harm both form and function.1

1

A classic treatment of color theory is Interaction of Color by Josef Albers, 1963, Yale University Press. For a more applied treatment, see The Art of Color: The Subjective Experience and Objective Rationale of Color by Johannes Itten, 1961, Reinhold Publishing Corporation; and Human-Computer Interaction by Jenny Preece et al., 1994, Addison-Wesley.

2

There continues to be mythology around the behavioral and mood effects of painting environments certain colors, in particular that painting rooms (or jail cells) Baker-Miller Pink, also known as P-618 and Drunk Tank Pink, has a significant calming effect. This claim has been repeatedly debunked, and the original research has failed to replicate. Outside of evolutionary-type contexts (e.g., food selection, mate selection, threat detection, etc.), it is reasonable to assume that dark room colors will make people sleepy, light room colors will make people lively, and irritating room colors will make people irritated. See “Does Baker-Miller Pink Reduce Aggression in Prison Detention Cells? A Critical Empirical Examination” by Oliver Genschow et al., Dec 2014, Psychology, Crime & Law, 21(5), 482– 489.

Guidelines that address common issues regarding the use of color are:
• Number of colors — In interaction contexts, color should be used conservatively. Limit the palette to what the eye can process at one glance (about five distinct colors that are easily distinguished from one another). Do not rely on color as the only means to impart information, since a significant portion of the population has limited color vision.
• Color combinations — Aesthetic color combinations can be achieved by using adjacent colors on the color wheel (analogous), opposing colors on the color wheel (complementary), colors at the corners of a symmetrical polygon circumscribed in the color wheel (triadic and quadratic), or color combinations found in nature (see the sketch below). Favor warmer colors for foreground elements and cooler colors for background elements. Light gray is a safe color to use for grouping elements without competing with other colors.
• Saturation — Use saturated colors (pure hues) to attract attention, but limit their use to accenting or highlighting in contexts involving reading or interactivity. Use desaturated colors when performance and efficiency are the priority. Generally, desaturated, bright colors are perceived as friendly and professional; desaturated, dark colors are perceived as serious and professional; and saturated colors are perceived as more exciting and dynamic. Use caution when combining saturated colors, as they can visually interfere with one another and increase eye fatigue.
• Universal effects — The evidence supporting universal effects of color on behavior, emotion, or performance is limited to evolutionary-type contexts, and these effects can be amplified or overridden by cultural influences. Therefore, always test the impact of colors in a design with target audiences.2

Consider color theory when deciding color combinations and designing palettes. Limit the number of colors to five when efficiency of processing is key. Favor high-contrast color combinations to maximize perceptibility. Lighten colors to make them more playful, and darken colors to make them more serious. Saturate colors to attract attention, and desaturate colors to make them more usable. Favor familiar color combinations found in nature. Be skeptical of color claims that assert dramatic cognitive or behavioral effects, as color effects tend to be small and context-sensitive.
See also Color Effects; Expectation Effects; Highlighting; Interference Effects;

Similarity; Uniform Connectedness
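As a rough computational sketch of the color-combination guidance above (an illustrative addition, not from the text), the schemes can be generated by rotating a base hue around the color wheel. This uses the HSV wheel from Python’s standard library; the painters’ wheel described in this entry differs somewhat, so treat the output as approximate.

import colorsys

def scheme(base_hue_deg, offsets_deg):
    # Return hex colors at the given hue offsets from a base hue.
    colors = []
    for offset in offsets_deg:
        h = ((base_hue_deg + offset) % 360) / 360.0
        r, g, b = colorsys.hsv_to_rgb(h, 0.6, 0.9)  # desaturated and bright
        colors.append("#{:02x}{:02x}{:02x}".format(
            int(r * 255), int(g * 255), int(b * 255)))
    return colors

base = 210  # a blue (degrees on the HSV wheel)
print("analogous:    ", scheme(base, [-30, 0, 30]))
print("complementary:", scheme(base, [0, 180]))
print("triadic:      ", scheme(base, [0, 120, 240]))
print("quadratic:    ", scheme(base, [0, 90, 180, 270]))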

Color Combinations: analogous, complementary, triadic, quadratic, and examples from nature.

Hues from yellow to red-violet on the color wheel are warm. Hues from violet to green-yellow are cool.

Saturation refers to the amount of gray added to a hue — as saturation increases, the amount of gray decreases. Brightness refers to the amount of white added to a hue — as brightness increases, the amount of white increases.

031

Common Fate The brain automatically assumes elements moving in similar ways are related.

Common fate, one of the Gestalt principles of perception, asserts that elements that move together in a common direction are perceived as a single group or chunk and are interpreted as being more related than elements that appear to move at different times or in different directions. For example, a row of randomly placed Xs and Os that is stationary is naturally grouped by similarity — Xs with Xs and Os with Os. However, if certain elements in the row move in one direction and other elements move in the opposite direction, elements are grouped by their common motion and direction.1

Perceived relatedness is strongest when the motion of elements occurs at the same time and velocity, and in the same direction. As any of these factors vary, the elements are perceived as less related. One exception is when the motion exhibits an obvious pattern or rhythm (e.g., wave patterns), in which case, elements are seen as related.

Although common fate relationships usually refer to moving elements, they are also observed with static objects that flicker (i.e., elements that alternate between brighter and darker states). For flickering elements, perceived relatedness is strongest when the elements flicker at the same time, frequency, and intensity or when a recognizable pattern or rhythm is formed.2

Common fate relationships influence whether elements are perceived as figure or ground elements. Moving objects will be perceived as figure elements, and stationary ones will be perceived as ground elements. When elements within a region move together with the bounding edge of the region, the elements and the region will be perceived as the figure. When elements within a region move together but the bounding edge of the region remains stationary or moves opposite to the elements, the elements within the region will be perceived as the ground.3

Consider the common fate principle when the goal is to show relatedness of information with moving or flickering elements. Related elements should move at the same time, velocity, and direction or flicker at the same time, frequency, and intensity. It is possible to group elements when these variables are dissimilar but only if the motion or flicker forms a recognizable pattern. Note that moving elements are perceived as figures, and stationary elements are perceived as ground.
See also Closure; Figure-Ground; Good Continuation; Miller’s Law; Proximity;

Similarity; Uniform Connectedness

1

The seminal work on common fate is “Untersuchungen zur Lehre von der Gestalt, II” [Laws of Organization in Perceptual Forms] by Max Wertheimer, 1923, Psychologische Forschung, 4, 301–350, reprinted in A Source Book of Gestalt Psychology by Willis Ellis (Ed.), 1938, Kegan Paul, Trench, Trubner & Company, 71–88.

2

See, for example, “Generalized Common Fate: Grouping by Common Luminance Changes” by Allison B. Sekuler and Patrick J. Bennett, 2001, Psychological Science, 12(6), 437– 444.

3

“Common Fate as a Determinant of Figure-Ground Organization” by Joseph Lloyd Brooks, May 2000, Stanford-Berkeley Talk, Stanford University.

In video games, making the names and controller icons move with the players groups them, simplifying playability by making it clear who is being controlled and how to control them.

032

Comparison A method of highlighting relationships by depicting information in controlled ways.

People understand the way the world works by identifying relationships and patterns in or between systems. One of the most powerful methods of identifying and understanding these relationships is to represent information in controlled ways so that comparisons can be made.1

Key techniques for making valid comparisons are:
• Apples to apples — Comparison data should be presented using common measures and common units. For example, when comparing crime rates of different countries, it is necessary to account for differences in variables such as population, types of laws, and level of law enforcement. Otherwise, conclusions will be unreliable. Common methods of ensuring apples-to-apples comparisons include disclosing details of how variables were measured, eliminating confounding variables, and representing the variables using the same graphical and numerical standards.
• Single context — Comparison data should be presented in a single context so that subtle differences and patterns in the data are detectable. For example, the ability to detect patterns across multiple graphs is lower if the graphs are located on separate pages versus the same page. Common methods of representing information in single contexts include the use of multivariate graphs that combine many variables in one display and multiple small views of system states (known as small multiples) in one eyespan.
• Benchmarks — Claims about evidence or phenomena should be accompanied by benchmark variables so that clear and substantive comparisons can be made. For example, claims about the seriousness of the size of U.S. debt are meaningful only when accompanied by U.S. gross national product (GNP); a debt can appear serious when depicted as a quantity but irrelevant when presented as a percentage of GNP. Common types of benchmark data include past performance data, competitor data, or data from well-accepted industry standards.

Use comparisons to convincingly illustrate patterns and relationships. Ensure that compared variables are apples to apples by measuring and representing variables in common ways, correcting for confounds in the data as necessary. Use multivariate displays and small multiples to present comparisons in single contexts when possible. Use benchmarks to anchor comparisons and provide a point of reference from which to evaluate data.
See also Framing; Garbage In–Garbage Out; Signal-to-Noise Ratio; Visibility
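A minimal sketch of these techniques, assuming matplotlib and NumPy are available; the data are synthetic and the benchmark value is hypothetical. Every panel uses the same measure and scale (apples to apples), all panels share one eyespan (single context), and a dashed line anchors the comparison (benchmark).

import numpy as np
import matplotlib.pyplot as plt

# Synthetic monthly values for four regions (illustrative data only)
rng = np.random.default_rng(0)
months = np.arange(12)
regions = {name: 50 + rng.normal(0, 5, 12).cumsum()
           for name in ["North", "South", "East", "West"]}
benchmark = 60  # hypothetical industry-standard target

# Small multiples: shared axes force an apples-to-apples comparison
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 5))
for ax, (name, values) in zip(axes.flat, regions.items()):
    ax.plot(months, values)
    ax.axhline(benchmark, linestyle="--", linewidth=1)  # benchmark line
    ax.set_title(name)
fig.suptitle("Same measure, same scale, one eyespan")
plt.show()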

1

See, for example, Visual Explanations by Edward Tufte, 1998, Graphics Press; and Envisioning Information by Edward Tufte, 1990, Graphics Press.

Diagram of the Causes of Mortality in the Army in the East — two rose diagrams, April 1854 to March 1855 (Bulgaria and Crimea) and April 1855 to March 1856, each with twelve monthly wedges layered by death from wounds in battle, death from disease, and death from other causes.

This is a modified version of Florence Nightingale’s famous Coxcomb graphs. The graphs are composed of twelve wedges, each representing a month. Additionally, each wedge has three layers representing three different causes of death. A quick review of the graphs reveals that the real threat to British troops was not the Russians but cholera, dysentery, and typhus. The graphs also convincingly illustrate the impact of improved hygienic practices at military camps and hospitals, which were aggressively implemented beginning in March 1855.

The graphs make apples-to-apples comparisons, representing the same variable (death rates) the same way (area of the wedge). The graphs are multivariate, integrating a number of key variables so that patterns and relationships in the data can be studied within one context. Deaths resulting from war wounds serve as a compelling benchmark to illustrate the significance of disease, as does the earlier graph for the later graph. The graphs have been corrected based on original data published in Nightingale’s Notes on Matters Affecting the Health, Efficiency and Hospital Administration of the British Army, 1858.

033

Confirmation A technique for preventing errors by requiring verification before actions are performed.

Confirmation is a technique used to verify that critical actions, inputs, or commands are intentional and correct before they are executed. Confirmations are primarily used to prevent a class of errors called slips, which are unintended actions. Confirmations slow task performance and should be reserved for use with critical or irreversible operations only. When the consequences of an action are not serious, or when actions are completely and easily reversible, confirmations are not needed.1

There are two basic confirmation techniques:
1. Confirmation using a dialog — Involves establishing a verbal interaction with the person using the system. It is most commonly represented as a dialog box on a software display (e.g., “Are you sure you want to delete all files?”). In this method, dialog boxes directly ask the user if the action was intended and if they would like to proceed. Confirmations should be used sparingly or else people will click through them without reading and become frustrated at the frequent interruption. Dialog messages should be concise and end with one question structured to be answered Yes or No, or with an action verb that conveys the action to be performed. For less critical confirmations that act more as reminders, an option to disable the confirmation should be provided.
2. Confirmation using a two-step operation — Involves a preliminary step that must occur prior to the actual command or input. This is often referred to as an arm/fire operation: First arm the component, and then fire (execute) it. For example, a switch cover might have to be lifted in order to activate a switch, two people might have to turn two unique keys in order to launch a nuclear weapon, or a control handle in a spacecraft might have to be rotated and then pushed down in order to be activated. If the operation works only when the two-step sequence has been completed, it is unlikely that the operation will occur accidentally. Two-step operations are commonly used for critical operations in aircraft, spacecraft, nuclear power plants, and other safety-critical environments.

Use confirmations to minimize errors in the performance of critical or irreversible operations. Avoid overusing confirmations to ensure that they are unexpected and uncommon; otherwise, they may be ignored. Permit less critical confirmations to be disabled after an initial confirmation.
See also Affordance; Constraint; Error, Design; Error, Human; Forgiveness;

Garbage In–Garbage Out; Redundancy
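A minimal software sketch of the arm/fire pattern, loosely modeled on the type-the-name confirmation some tools use before destructive operations (the function and project names here are hypothetical).

def delete_project(name):
    print(f"Project '{name}' deleted.")  # stand-in for the real operation

def confirm_destructive(name):
    # Arm: the user must retype the exact name. Fire: only an exact
    # match runs the operation, so a slip (stray click, accidental
    # Enter) cannot trigger it.
    print(f"This will permanently delete '{name}'. This cannot be undone.")
    typed = input("Type the project name to confirm: ")
    return typed == name

if __name__ == "__main__":
    project = "annual-report"
    if confirm_destructive(project):
        delete_project(project)
    else:
        print("Name did not match; nothing was deleted.")

Note how the prompt also follows the dialog guidance: the message is concise, states the consequence, and the confirming action requires deliberate effort proportional to the risk.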

1

See, for example, The Design of Everyday Things by Donald Norman, 1990, Doubleday; and To Err Is Human: Building a Safer Health System by Linda Kohn et al. (Eds.), 2000, National Academy Press. This principle is also known as verification principle and forcing function.

This industrial paper-cutting machine requires a two-step confirmation: Both hands must depress safety releases (ensuring that they are not in harm’s way) before the foot press is unlocked to cut the paper.

034

Confirmation Bias A tendency to favor information that confirms pre-existing views.

Confirmation bias is a tendency to focus on information that supports pre-existing views, ignoring any information that contradicts those views. Among the failings of human reasoning, confirmation bias is a leading contender, exacerbated by Internet search engines that act like giant confirmation-bias machines: A person can search for any view they seek to confirm, no matter how ill-informed or outlandish, and a search engine will return pages of links that normalize and validate that view.1

Effects of confirmation bias include overconfidence, selective memory, poor decision-making, and resistance to change in light of contrary evidence. Accordingly, presenting facts alone is insufficient to overcome the bias. It is, for example, expected for people to interpret confirmatory evidence as supporting their views, but more surprising is the fact that they also commonly interpret disconfirmatory evidence as supporting their views. Even when given instructions to be “as objective and unbiased as possible” and to consider themselves “as a judge or juror asked to weigh all of the evidence in a fair and impartial manner”, people find ways to interpret information in ways that maintain their beliefs.2

One of the few strategies that have proven effective in overcoming confirmation bias is the “consider the opposite” strategy. This strategy involves inducing people to imagine scenarios in which the same evidence could support an opposing conclusion. For example, take a person strongly loyal to a particular product or theory or political candidate. Ask them why they are loyal, and they say because the product is better quality, or the theory best explains the data, or the candidate is best for the country. Engaging the person to consider the opposite means asking them to describe how the product falls short, how the data support other theories, or how a candidate is bad for the country. Considering the opposite in this way helps de-bias thinking and increases the likelihood of open-minded thinking.3

Beware confirmation bias in decision-making and evaluation. Seek other viewpoints, especially from people you disagree with. Employ the consider-the-opposite strategy in education, marketing, and persuasion contexts to combat the confirmation bias and open minds. Do not rely on the presentation of evidence to change intransigent minds — not only does it not work; it will likely make things worse.
See also Anchoring; Cognitive Dissonance; Creator Blindness;

Not Invented Here; Serial Position Effects; Sunk Cost Effect

1

Recognition of the confirmation bias dates back at least to Francis Bacon. The seminal empirical work is “On the failure to eliminate hypotheses in a conceptual task” by Peter Wason, 1960, The Quarterly Journal of Experimental Psychology, 12(3), 129 –140.

2

See, for example, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises” Raymond Nickerson, 1998, Review of General Psychology, 2(2), 175 –220.

3

See, for example, “Considering the Opposite: A Corrective Strategy for Social Judgment” by Charles Lord et al., 1985, Journal of Personality and Social Psychology, 47(6), 1231– 1243. The strategy harkens back to a famous quote by John Stuart Mill: “He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion”.

Two charts of global temperature anomaly (°C): a short-term window and the long-term trend.

The long-term trend clearly indicates rising global temperatures, which is why denialists tend to focus on the short-term trend.

It is the peculiar and perpetual error of the human intellect to be more moved and excited by affirmatives than by negatives; whereas it ought properly to hold itself indifferently disposed towards both alike. — Francis Bacon Novum Organum

035

Consistency Usability and learnability improve when similar things have similar meanings and functions.

Consistency refers to the level of similarity in visual style and functionality within and among different designs. Consistency enables people to efficiently transfer knowledge to new contexts, learn new things quickly, and focus attention on the relevant aspects of a task. Consistent designs feel familiar and are associated with lower cognitive workload and lower risk of errors.1

There are four kinds of consistency:
1. Aesthetic consistency — Consistency of style and appearance (e.g., a corporate identity that uses a consistent font, color, and graphic). Aesthetic consistency enhances recognition, communicates membership, and sets emotional expectations. Mercedes-Benz vehicles are instantly recognizable because the company logo is prominently featured on the hood or grille. The logo has become associated with quality and prestige and informs people how they should feel about the vehicle — respected and admired. Use aesthetic consistency to establish unique identities that can be easily recognized.
2. Functional consistency — Consistency of meaning and action (e.g., a traffic light that shows a yellow light before red). Functional consistency improves usability and learnability by enabling people to leverage existing knowledge about how a design functions. Videocassette recorder control symbols (icons for rewind, play, forward, etc.) are now used on devices like television remote controls. The consistent use of these symbols makes the new devices easier to use and learn. Use functional consistency to simplify usability and ease of learning.
3. Internal consistency — Consistency with other elements in the system (e.g., signs within a park are consistent with one another). Internal consistency cultivates trust; it is an indicator that a system has been designed and not cobbled together. Within logical groupings, elements should be aesthetically and functionally consistent with one another.
4. External consistency — Consistency with other elements in the environment (e.g., alarms are consistent across different systems in a control room). External consistency extends the benefits of internal consistency across multiple, independent systems. It is difficult to achieve because different systems rarely observe common design standards. When common design standards do exist, observe them.

Be consistent by default. Unless there is a compelling reason to be different, be consistent aesthetically, functionally, internally, and externally. However, when there is a compelling reason to be different, be different. Don’t be foolishly consistent.
See also Back of the Dresser; Error, Human; Mimicry; Performance Load;

Recognition over Recall; Similarity

1

Use consistent approaches when possible, but do not compromise clarity or usability for consistency. In the words of Emerson, “A foolish consistency is the hobgoblin of little minds”.

Consistency enables international travelers to understand traffic signs even when they don’t speak the local language.

Consistency in design is virtuous. It means that lessons learned with one system transfer readily to others. On the whole, consistency is to be followed. If a new way of doing things is only slightly better than the old, it is better to be consistent. But if there is to be a change, everybody has to change. Mixed systems are confusing to everyone. — Donald Norman The Design of Everyday Things

036

Constraint Limiting the actions that can be performed to simplify use and prevent error.

Constraints limit the possible actions that can be performed on a system. For example, dimming or hiding options that are not available at a particular time effectively constrains the options that can be selected. Proper application of constraints in this fashion makes designs easier to use and dramatically reduces the probability of error during interaction.1

There are two basic kinds of constraints:
1. Physical constraints — Limiting the range of possible actions by redirecting physical motion in specific ways. The three kinds of physical constraints are paths, axes, and barriers. Paths convert applied forces into linear or curvilinear motion using channels or grooves (e.g., scroll bar in software user interfaces). Axes convert applied forces into rotary motion, effectively providing a control surface of infinite length in a space with limited real estate (e.g., a trackball). Barriers absorb or deflect applied forces, thereby halting, slowing, or redirecting the forces around the barrier (e.g., boundaries of a computer screen). Physical constraints are useful for reducing the sensitivity of controls to unwanted inputs and denying certain kinds of inputs.
2. Psychological constraints — Limiting the range of possible actions by leveraging the way people perceive and think about the world. The three kinds of psychological constraints are symbols, conventions, and mappings. Symbols influence behavior by communicating meaning through alphanumerics, icons, labels, and sound — e.g., a warning message and tone. Conventions promote common understanding and methods of interacting based on learned traditions and practices, such as red means stop, green means go. Mappings imply what actions are possible based on the perceived relationships between elements. For example, light switches that are close to a set of lights are perceived to be more related to the lights than switches that are farther away.2

Use constraints to simplify usability and minimize errors. Use physical constraints to reduce the sensitivity of controls, minimize unintentional inputs, and prevent or slow hazardous actions. Use psychological constraints to improve the clarity and intuitiveness of a design.
See also Affordance; Confirmation; Control; Error, Design; Error, Human;

Forgiveness; Mapping; Nudge
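A minimal sketch of a path constraint translated into software (the slider class is hypothetical): bounding the range makes out-of-range input impossible by construction rather than an error to be caught afterward.

class BoundedSlider:
    # A software analogue of a 'path' constraint: the control can only
    # move along a bounded range, like a scroll bar in its channel.
    def __init__(self, lo=0.0, hi=100.0):
        self.lo, self.hi = lo, hi
        self._value = lo

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        # Clamp rather than raise: the constraint redirects the applied
        # "force" back onto the path.
        self._value = max(self.lo, min(self.hi, v))

volume = BoundedSlider(0, 100)
volume.value = 250     # attempted overshoot...
print(volume.value)    # ...is constrained to 100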

1

The seminal work on psychological constraints is The Design of Everyday Things by Donald Norman, 1990, Doubleday.

2

Note that Norman uses the terms semantic constraints, cultural constraints, and logical constraints.

Physical Constraints
• Paths are useful in situations where the control variable range is relatively small and bounded.
• Axes are useful in situations where control real estate is limited or the control variables are very large or unbounded.
• Barriers are useful for denying undesired actions.

Psychological Constraints
• Symbols are useful for labeling, explaining, and warning using visual, aural, and tactile representation — all three if the message is critical.
• Conventions are useful for making systems consistent and easy to use; they indicate common methods of understanding and interacting.
• Mappings are useful for implying what actions are possible based on the visibility, location, and appearance of controls.

037

Contour Bias A tendency to favor things with contoured features over angular or rectilinear features.

When presented with objects or environments that possess sharp angles or pointed features, a region of the human brain involved in fear processing, the amygdala, is activated. The degree of fear activation in the brain is proportionate to the angularity and sharpness of the features. Likely a subconscious mechanism that evolved to detect potential threats, this fear response suggests that angular features influence the way objects are aesthetically and emotionally perceived.1

The contour bias is robust with things that have either a neutral or positive valence but not with things that have a negative valence. An example of things with a neutral valence is simple geometric shapes: A circle will generally be aesthetically preferable to a triangle. An example of a thing with a positive valence is a teddy bear: A roundish teddy bear will generally be aesthetically preferable to an angular teddy bear. An example of a thing with a negative valence is a bomb: A roundish bomb will generally not be aesthetically preferable to a pointy bomb.2

Objects and environments with angular or pointy features elicit stronger activations in regions of the brain related to associative processing, meaning that although angular objects are less liked, they elicit a deeper level of processing than do the contoured objects — they are, in effect, more interesting and thought-provoking to look at. This is consistent with the kind of innate response one would expect from potential threats and suggests a tradeoff between angular and contoured features: Angular features are more effective at attracting attention and engaging thought; contoured features are more effective at making a positive emotional and aesthetic impression.

Consider the contour bias to make things cuter and more inviting. In emotionally neutral or positive contexts, favor round, curvy forms over sharp, angular forms as there is a general relative preference for rounded objects. Employ contoured features to make a positive first impression and promote calm and trust. Favor angular and pointy features to attract and hold attention, as observed with octagonal stop signs and triangular warning signs.
See also Archetypes, Psychological; Baby-Face Bias;

Freeze-Flight-Fight-Forfeit; Play Preferences; Threat Detection

1

The seminal work on the contour bias is “Humans Prefer Curved Visual Objects” by Moshe Bar and Maital Neta, 2006, Psychological Science, 17(8), 645 – 648. See also “Visual Elements of Subjective Preference Modulate Amygdala Activation” by Moshe Bar and Maital Neta, 2007, Neuropsychologia, 45(10), 2191– 2200.

2

“Emotional Valence Modulates the Preference for Curved Objects” by Helmut Leder et al., 2011, Perception, 40, 649 – 655.

From top left to bottom right, these Alessi kettles are arranged from most angular to most contoured. At the extremes of this continuum, the il Conico will be most effective at grabbing attention; and the Mami will be most liked generally. The 9093 and 9091 incorporate both angular and contoured features, balancing attention-getting with likeability. Historically, the il Conico and 9093 are Alessi’s best-selling kettles.

038

Control The level of user control should be related to the proficiency and experience of the user.

People should be able to exercise control over what a system does, but the level of control should be related to the user’s proficiency and experience using the system. Novices perform best with less control, while experts perform best with more control. A simple example is a child learning to ride a bicycle. Initially, training wheels are helpful in reducing the difficulty of riding by reducing the level of control (e.g., eliminating the need to balance while riding). This allows the child to safely develop basic riding skills with minimal risk of accident or injury. Once the basic skills are mastered, the training wheels get in the way and hinder performance. As expertise increases, so too does the need for greater control.1

A system can accommodate varying needs for control by offering multiple ways to perform a task. For example, novice users of word processors typically save their documents by accessing the File menu and selecting Save, whereas more proficient users typically save their documents using a keyboard shortcut. Both methods achieve the same outcome, but one favors simplicity and structure, while the other favors efficiency and flexibility.

Novices benefit from structured interactions with minimal choices, typically supported by prompts, constraints, and ready access to help. Experts benefit from less structured interactions that provide more direct access to functions. Accommodating multiple methods increases the complexity of a system, so the number of methods for any given task should be limited to two — one for novices, and one for experts.

The need to provide expert shortcuts is limited to systems that are used frequently enough for people to develop expertise. For example, the design of museum kiosks and ATMs should assume that all users are first-time users and not try to accommodate varying levels of expertise. When systems are used frequently enough for people to develop expertise (and when system interfaces aren’t shared), it is often useful to provide simple ways to customize the system design based on personal preferences and level of expertise. This represents the highest level of control a design can provide and enables the efficiency of use to be fine-tuned over time.

Consider the allocation of control in the design of complex systems. When possible, use a method that is equally simple and efficient for novices and experts. Otherwise, provide methods specialized for beginners and experts. When systems are complex and frequently used, consider designs that can be customized to conform to individual preference and levels of expertise.
See also Constraint; Flexibility Tradeoffs; Visibility
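A minimal sketch of the two-method approach (all names hypothetical): a structured menu path for novices and a keyboard shortcut for experts, both bound to the same command.

def save_document():
    print("Document saved.")

commands = {"save": save_document}
menu_paths = {("File", "Save"): "save"}  # novice: guided and discoverable
shortcuts = {"Ctrl+S": "save"}           # expert: fast and direct

def invoke_from_menu(*path):
    commands[menu_paths[path]]()

def invoke_from_shortcut(keys):
    commands[shortcuts[keys]]()

invoke_from_menu("File", "Save")  # both routes reach
invoke_from_shortcut("Ctrl+S")    # the same function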

1

See, for example, The Psychology of HumanComputer Interaction by Stuart Card et al., 1983, Lawrence Erlbaum Associates; and The Humane Interface: New Directions for Designing Interactive Systems by Jef Raskin, 2000, Addison-Wesley.

The Nova Tactica is a strategy map created by the authors for a client exploring the application of artificial intelligence to modulate system control, plotting user capability (low to high) against system assistance (low to high). The goal of the map was to demonstrate the general relationship between user capability and system assistance, illustrate the perils of low user capability–low system assistance and high user capability–high system assistance, and to highlight the market opportunity for serving users with very low capability — i.e., users in developing countries with very low language and technology literacy.

039

Convergence A tendency for similar characteristics to evolve independently in similar environments.

Natural or human-made systems that best approximate optimal strategies afforded by the environment tend to be successful, while systems exhibiting lesser approximations tend to become extinct. This process results in the convergence of form and function over time. The degree of convergence in an environment is one indicator of its stability and of the possibilities for different kinds of innovation. In nature, we see evidence of convergence that has resulted over millions of years; for example, the set of adaptations for flight in birds, bats, and butterflies has converged to just gliding and flapping. In human-created designs, this process can happen more quickly; and it is common for discoveries and inventions to be made independently and simultaneously by multiple independent inventors and scientists. For example, the design of virtually all automobiles today includes elements such as a four-wheel chassis, steering wheel, and an internal combustion engine — a convergence of form and function in decades versus millions of years.1

A high degree of convergence indicates a stable environment — one that has not changed much over time because designs closely approximate the optimal strategies afforded by that environment. The rate of evolution is slow and incremental, tending toward refinements on existing convergent themes. Contrast this with the life-forms during the Cambrian period (570 million years ago) and dot-com companies of the 1990s, both periods of great diversity and experimentation of system form and function. This low degree of convergence indicates a volatile environment — one that is still changing — with few or no stable optimal strategies around which system designs can converge. The result is a rapid and disruptive rate of evolution, often resulting in new and innovative approaches that depart from previous designs.2

Consider the level of stability and convergence in an environment prior to design. Stable environments with convergent system designs are receptive to minor innovations and refinements but resist radical departures from established designs. Unstable environments with no convergent system designs are receptive to major innovations and experimentation. Focus on variations of convergent designs in stable environments and explore analogies with other environments and systems for guidance when designing for new or unstable environments. Remember that others are likely working on ideas similar to yours right now.3
See also Iteration; MAYA; Mimicry

1

See, for example, Cats’ Paws and Catapults by Steven Vogel, 2000, W.W. Norton & Company.

2

“A Novel Approach for Estimating Truck Factors” by Guilherme Avelino et al., 2016, IEEE 24th International Conference on Program Comprehension (ICPC), 1–10.

3

Alternatively, environments can be modified. For example, stable environments can be destabilized to promote innovation — e.g., a shift from managed markets to free markets.

Four strategies for movement through a fluid: buoyancy, jet propulsion, flapping, soaring.

Environmental and system analogies often reveal new design possibilities. The set of strategies for flight has converged to just gliding and flapping but expands to include buoyancy and jet propulsion when flight is reconsidered as movement through a fluid. In this case, the degree of convergence still indicates environments that have been stable for some time. New flying systems that do not use one or more of these strategies are unlikely to compete successfully in similar environments.

040

Conway’s Law The structure of an organization is expressed in the design of the things it produces.

Conway’s law, proposed by the computer scientist Melvin Conway, states that products and services mimic the structure of the teams that create them. For example, a product designed by a single team will look like a product designed by a single team, whereas a product designed by multiple teams will look like a product with modules designed by different teams.1

A key aspect of Conway’s law is the notion of a communication structure. A communication structure refers to a group within an organization that communicates and operates as a logical unit, usually through a team lead or manager. For example, ten people working together on a project reporting to one team lead is a logical unit, with the team lead acting as the communication interface to the greater organization. Communication structures are additionally impacted by things like communication technologies, differences in time zones, and differences in languages spoken. For example, a service organization that has poorly integrated infrastructural technologies will deliver services that are disjointed and difficult to use — e.g., inconsistent user interfaces, multiple logins, different support models, etc. The lack of integration in the service experience mimics the lack of integration in the organization’s infrastructure.

Conway’s law is often used as the rationale for structuring organizations as teams with end-to-end project responsibilities. Communication becomes increasingly complex and inefficient with group size, eventually causing groups to splinter into cliques and subgroups. The small-team structure avoids this splintering, thereby improving the coherence and quality of the designs produced. Note that a risk of small-team organizational structures is that teams can become culturally disconnected from the parent organization, leading to counterproductive phenomena such as internal competition and not-invented-here bias.2

Consider Conway’s law in the design of organizations and the products and services they produce. For projects that prioritize designing tightly integrated and easy-to-use products and services, Conway’s law suggests that end-to-end project teams using highly integrated technologies work best. For projects that prioritize distributing costs, working in parallel, or leveraging crowd intelligence, Conway’s law suggests that larger, distributed teams work best. In such organizations, it is critical to clearly define and enforce product design guidelines and interface standards to ensure interoperability and integrated user experiences.
See also Brooks’ Law; Design by Committee; Dunbar’s Number;

Flexibility Tradeoffs; Not Invented Here

1

The seminal article on Conway’s Law is “How Do Committees Invent?” by Melvin Conway, 1968, Datamation, 14(4), 28 – 31. The original wording of the law is “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure”. This principle is also known as the mirroring hypothesis.

2

See, for example, “How Platforms Are Neutralizing Conway’s Law” by Dan Woods, Aug 2017, Forbes.

The International Space Station (ISS) is composed of modules designed by different space agencies, operating in different time zones and speaking different native languages. The costs of this type of multiagency organizational structure are increased complexity and inefficiency. For example, ISS produces fewer human hours of research per month than either Skylab or Salyut (both products of single agencies), despite having nearly 10× as much internal volume. But the benefits of this type of multiagency structure are improved cost sharing and international relations.

Primarily, we have found a criterion for the structuring of design organizations: a design effort should be organized according to the need for communication. — Melvin E. Conway Datamation, April 1968

041

Cost-Benefit Value is a function of the costs of acquisition and use versus the benefits provided.

From a design perspective, the cost-benefit principle is typically used to assess the value of new features and elements. If the costs associated with interacting with a design outweigh the benefits, the design is poor. If the benefits outweigh the costs, the design is good. Costs and benefits can be financial (e.g., monetary), physical (e.g., effort), emotional (e.g., frustration), and social (e.g., status). For example, the number of steps and time required to download a large video is a cost. The ability to watch the video is a benefit. If the entertainment value of the video justifies the cost to download it, then the interaction is positive.1

The quality of every design aspect can be measured using the cost-benefit principle. How much reading is too much to get the point of a message? How many steps are too many to set the time and date of a video recorder? How long is too long for a person to wait for a Web page to download? The answer to all of these questions is that it depends on the benefits of the interaction. Reducing the download time of a Web page no one is interested in is not a good cost-benefit trade; it misses the point of design altogether — i.e., to provide benefit.

A common misconception is that adding features always increases product value. This is only correct when the benefits provided by new features outweigh the costs of increased complexity. If they do not, adding features decreases product value. For example, new design features or elements that excite designers are often never used or even noticed by people who interact with the design. In addition to providing no perceived benefit, such features and elements can increase the design’s interaction costs by adding complexity to the system.

To understand which features are perceived as costs and which are perceived as benefits, observe users interacting with the design or similar designs in the actual target environment. Focus groups and usability tests are valuable in assessing the cost-benefit of a design during development, when natural observation is not possible.

Consider the cost-benefit principle in all aspects of design. Do not make design decisions based on cost parameters alone, without due consideration of the benefits. Verify cost-benefit perceptions of target populations through careful observations, focus groups, and usability tests.
See also Feature Creep; IKEA Effect; Pareto Principle; Performance Load;

Veblen Effect

1

See, for example, “Précis of Vigor: Neuroeconomics of Movement Control” by Reza Shadmehra and Alaa Ahmed, 2021, Behavioral and Brain Sciences, 44, e123: 1– 42; and “Effort-Based Cost–Benefit Valuation and the Human Brain” by Paula Croxson, 2009, The Journal of Neuroscience, 29(14), 4531– 4541.

The use of touch screens in cars has received much criticism in UX circles, with detractors pointing out that lack of tactility means drivers are more likely to take their eyes off the road to manipulate controls. This is true for certain operations, but to declare touch screens to be “bad automotive UX” as many have done is one-dimensional. The cost-benefit of tactility must be weighed against other factors, like the ability to easily reconfigure the user interface, add new functions and features, and perform live updates, as well as the cost-benefit of assembly, manufacturing, maintenance, etc. All design involves tradeoffs. The quality of a design cannot be properly evaluated based on the cost-benefit of any one factor but only on the cost-benefit of all factors taken together.

042

Creator Blindness The inability of a creator to see fundamental flaws in their creation.

There is an old saying that love is blind. Such is true for romantic love, and such is true for objects of our creation. While designers and other stakeholders often dwell on the cosmetic flaws in their designs, they are often blind to the more profound and less visible design failings in areas like strategy and usability. This creator blindness is a kind of meta-bias resulting from a mix of emotional, perceptual, and rational phenomena, including cognitive dissonance, groupthink, IKEA effect, and sunk costs, to name a few. The consequence is an inflated sense of a design’s likely success and an irrational resistance to critique and modification.1

There are four strategies effective at combating creator blindness:
1. Independent reviews — Engage outside experts to evaluate the product design and strategy. It is a quirk of group dynamics that no person, however expert, can be a prophet in their own land; and therefore, soliciting outside review can be a means of helping the blind to see.
2. Strategic critiques — It is common for designers to receive design-level critiques, but it is uncommon for them to receive strategic-level critiques — e.g., explorations regarding differentiation, competing products, price points, moats, etc. Though unfamiliar to many designers, such critiques can help reveal flaws not visible to the eye.
3. Devil’s advocate — Ritualizing the practice of playing devil’s advocate can be a useful check on bad decision-making. In cultures where this is uncomfortable, it can be useful to appoint a person to play this role. It gives them political cover and ensures the tough questions will be asked.
4. Small-scale user testing — Nothing breaks the fever of creator blindness like observing users struggle with a product or watching sales plummet in a small-scale pilot. Creator blindness begets overconfidence, which is why such small-scale testing before large-scale launches is essential. Humility is an effective curative for creator blindness.

Beware creator blindness in yourself and your colleagues. It is good to love the objects of your creation but important to perceive them accurately for what they are. Employ the four strategies to combat creator blindness. When all else fails, conduct small-scale user testing and run small-scale pilots to help break the fever.
See also Cognitive Dissonance; Groupthink; IKEA Effect;

Minimum-Viable Product; Sunk Cost Effect; Testing Pyramid

1. See, for example, a discussion of the overconfidence effect in The Psychology of Judgment and Decision Making by Scott Plous, 1993, McGraw-Hill Education.

In 1992, PepsiCo prepared to launch Crystal Pepsi, a colorless cola designed to appear more refreshing and good for you. Pepsi had pilot-tested the soda in a limited market, and it performed well. But there were problems. Employees expressed concerns about the product’s short shelf life, bottlers were concerned that it did not taste like Pepsi, and there were questions about whether a clear drink would be associated with cola or water. The creator of Crystal Pepsi, David Novak, was undeterred. Wanting to ride the wave of “pure” and “clear” buzzwords that dominated the marketing landscape at that time, Novak ignored the concerns and pushed forward with the launch. Initially, customers flocked to the product, but its popularity was short-lived. Sales quickly declined, and Pepsi discontinued the product in 1993. Blinded by the beauty of the marketing story, Novak had ignored critical feedback and rushed to launch before the product was fully developed. He refers to Crystal Pepsi as his “biggest career fail”.

It was probably the best idea I’ve ever had — and the most poorly executed. I let my passion for the product override real issues…It would have been nice if I’d made sure the product tasted good. — David Novak, former Pepsi marketing executive who created Crystal Pepsi

043

Crowd Intelligence
An emergent intelligence arising from the unwitting collaboration of many people.

Crowd intelligence is expressed as an average response to a problem. For a certain class of problems, this average is typically better than any individual response from the group. A crowd is defined as any group of people whose members contribute independently to solving a problem.1

Crowd intelligence works best on simple problems that have clear right and wrong answers. For example, crowd intelligence is effective at solving math problems and answering questions about geography but is less effective at solving problems requiring creativity or innovation. For these kinds of complex problems, the group average gives you mediocrity, not optimality. Experiments have been conducted with crowds estimating things like the number of jellybeans in a jar, temperature, height, weight, and maze running. In all of these cases, the average estimate of the crowd is more accurate than all individual estimates, with the occasional exception of one or two estimates.

The highest crowd intelligence emerges from groups made up of diverse opinions and beliefs acting independently. Strong authoritative or social influences within a group undermine its crowd intelligence. This is one reason why trying to use crowd intelligence in workplace meetings — like taking votes to make decisions — does not yield good results.

Examples of successful uses of crowd intelligence include the following:

• Websites that enable visitors to upvote or downvote an article use crowd intelligence to regulate the visibility of content. A similar, but automated, popularity algorithm is used by most search engine ranking systems.

• Route planning and wayfinding apps use crowd intelligence to find the most efficient path through a maze or network.

• Simple prediction markets use crowd intelligence to produce more accurate and useful predictions than traditional survey methods. Typical surveys ask questions like, “Would you buy this product?” Prediction markets ask questions like, “How many people do you think would buy this product in the next six months?”

Consider crowd intelligence when solving simple problems with definitive answers, when information is unavailable or unevenly distributed, or when solutions are highly subject to bias. Make sure the “crowd” is made up of people with diverse opinions and beliefs who are acting independently. Treat crowd intelligence as a valuable source of input but not as definitive information or design guidance.

See also Design by Committee; Groupthink; Normal Distribution; Pareto Principle; Selection Bias

1. See “Vox Populi” by Francis Galton, 1907, Nature, 75, 450–451. For a popular overview, see The Wisdom of Crowds by James Surowiecki, 2004, Doubleday.

Almost 800 people guessed the weight of an ox butchered and dressed. Statistician Francis Galton found that the average of these guesses missed the weight by only one pound, which was closer than any individual guess.

Groups are only smart when there is a balance between the information that everyone in the group shares and the information that each of the members of the group holds privately. It’s the combination of all those pieces of independent information, some of them right, some of them wrong, that keeps the group wise. — James Surowiecki, The Wisdom of Crowds
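Galton’s result is easy to reproduce in a toy simulation. In the sketch below, the 1,198-pound dressed weight follows Galton’s account, but the error spread and the exact guess count are invented; the point is only that the average of many independent, unbiased guesses lands far closer to the truth than the typical individual guess.

```python
import random

# Toy model of Galton's ox contest: many independent, noisy guesses.
random.seed(7)
TRUE_WEIGHT = 1198  # pounds, dressed weight, per Galton's account
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
mean_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd error:           {crowd_error:6.1f} lb")
print(f"Mean individual error: {mean_individual_error:6.1f} lb")
# The averaging only works because the guesses are independent;
# a shared influence (say, an authoritative first guess) breaks it.
```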

044

Death Spiral
A phenomenon in which a social organization persists in behaviors that lead to self-destruction.

Army ants navigate largely by following pheromone trails left by other army ants. Under certain conditions, these trails become crossed, forming a closed loop. Ants caught in such loops will often circle the never-ending trail until they die from exhaustion. Thus, the term death spiral. But the death-spiral phenomenon is not unique to ants. The pattern can manifest in any social organization — including human social organizations — in which members persist in behaviors that inevitably lead to self-destruction.1

Death spirals occur when members of a social organization rigidly adhere to strategies that are counterproductive to their survival. Those caught in death spirals are often unaware of the fact, but even if they become aware, they often lack the capacity to escape. Examples of human death spirals include the following:

• In 1346 at the Battle of Crécy, French knights repeatedly charge the muddy slopes of a high ground lined with English longbowmen. The results are disastrous for the French, but they keep regrouping and repeating the charges until their forces are decimated. The chivalrous culture of the French knights prevents them from altering their tactics.

• In 1975, Kodak invents the first digital camera. Some at Kodak recognize the inevitable transition away from film to digital photography, but they continue to focus on their highly profitable film business. Kodak files for bankruptcy in 2012.

• In 2022, many people recognize that burning fossil fuels leads to global warming, but the costs of switching to alternatives are high and the benefits are deferred to the distant future. While people dither, temperatures continue to rise.

Consider death spirals when diagnosing vicious cycles and making the case for change. The easiest way out of a death spiral is to follow someone out. Therefore, engage leaders who understand the dynamic and who can rally collective action. Promote a culture of experimentation and innovation, as such cultures are most able to adapt and break out of death spirals. Beware cultures steeped in compliance, hierarchy, tradition, and rigid operating procedures, as they are most vulnerable. Organizations that are unwilling to try new things, cannibalize their own products, and put their ideas to the test are — or soon will be — in a death spiral.

See also Creator Blindness; Groupthink; Not Invented Here; Process Eats Goal; Social Trap

1. See, for example, “Ants Swarm Like Brains Think” by Carrie Arnold, 2014, Nautilus; and The Wisdom of Crowds by James Surowiecki, 2004, Doubleday.

The death spiral is not exclusive to ants and is, in fact, an emergent phenomenon of social systems. Whenever groups blindly follow people or processes, they are susceptible to falling victim to similar patterns.

045

Defensible Space
An environment designed to signal ownership and deter crime.

Defensible spaces are used to deter crime. A defensible space is an area such as a neighborhood, house, park, or office that has features that convey ownership and afford easy and frequent surveillance. These features allow residents to establish control over their private and community property and ultimately deter criminal activity.1

1. The seminal works on defensible space are Defensible Space: People and Design in the Violent City by Oscar Newman, 1972, Macmillan; and Creating Defensible Space by Oscar Newman, 1996, U.S. Department of Housing and Urban Development.

2. “Territorial Cues and Defensible Space Theory: The Burglar’s Point of View” by Julie MacDonald and Robert Gifford, 1989, Journal of Environmental Psychology, 9, 193–205.

There are three key features of defensible spaces:

1. Territoriality — Establishing clearly defined spaces of ownership. Community signs and gates cultivate a community identity and mark the collective territory of residents; visible boundaries such as walls, hedges, and fences create private yards; and elements such as private trash cans instead of community dumpsters indicate residents have personal responsibility and ownership of services. These territorial markers explicitly assign custodial responsibility of a space to residents and communicate to outsiders that the space is owned and protected.

2. Surveillance — Monitoring of the environment during normal daily activities. Common surveillance features include external lighting, windows and doors that open directly to the outside of first-floor dwellings, mailboxes located in open and well-trafficked areas, and well-maintained courtyards, playgrounds, and walkways that increase pedestrian activity and casual surveillance. These features make it more difficult for people to engage in unnoticed activities.

3. Symbolic barriers — Placing objects in the environment to create the perception that a person’s space is cared for and worthy of defense. Common examples include picnic tables, swings, flowers, and lawn furniture — anything that conveys that the owner of the property is actively involved in using and maintaining the property. Note that when excessively showy or unique objects are displayed, it can sometimes symbolize affluence and act as a lure rather than a barrier to criminal activity. Therefore, the appropriateness of various symbolic barriers must be considered within the context of a particular community.2

Incorporate defensible space features in the design of residences, offices, industrial facilities, and communities to deter crime. Clearly mark territories to indicate ownership and responsibility, increase opportunities for surveillance and reduce environmental elements that allow concealment, and display typical symbolic barriers to indicate activity and use.

See also Affordance; Archetypes, Psychological; Control; Prospect-Refuge; Visibility; Wayfinding

Elements that indicate ownership and improve surveillance enhance the defensibility of a space. In this case, the addition of community markers and gating indicates a territory that is owned by the community; improved lighting and public benches increase opportunities for casual surveillance; and local fences, doormats, shrubbery, and other symbolic barriers clearly convey that the space is owned and maintained.

Design can make it possible for both inhabitant and stranger to perceive that an area is under the undisputed influence of a particular group, that they dictate the activity taking place within it, and who its users are to be.


— Oscar Newman, Defensible Space


046

Depth of Processing
Thinking hard about a thing improves the likelihood that it can be recalled.

Deep processing, i.e., thinking hard about information, improves the likelihood that the information will be recalled at a later time. For example, consider the following study: In one group, people are given the task to locate a keyword in a list and circle it. In a second group, people are asked to locate the keyword in a list, circle it, and then define it. After a brief time, both groups are asked to recall the keywords from the tasks. The group that had to define the keywords will have better recall because they had to analyze the keywords at a deeper level than the first group; they had to think harder about the information.1

1. The seminal work on depth of processing is “Levels of Processing: A Framework for Memory Research” by Fergus Craik and Robert Lockhart, 1972, Journal of Verbal Learning and Verbal Behavior, 11, 671–684.

2. See, for example, “Depth of Processing and the Retention of Words in Episodic Memory” by Fergus Craik and Endel Tulving, 1975, Journal of Experimental Psychology: General, 104, 268–294.

3. See, for example, “The Self as a Mnemonic Device: The Role of Internal Cues” by Francis Bellezza, 1984, Journal of Personality and Social Psychology, 47, 506–516.

The phenomenon of memory results from the two ways in which information is processed:

• Maintenance rehearsal — Repetition of information; for example, people repeating a phone number back to themselves to help them remember it

• Elaborative rehearsal — Deeper processing, i.e., more meaningful analysis of the information; for example, reading a text passage and having to answer questions about the meaning of it

Generally, elaborative rehearsal results in recall performance that is two to three times better than maintenance rehearsal.2

The key determining factors for how deeply information is processed are:

• Distinctiveness of the information — The uniqueness of the information relative to surrounding information and previous experience

• Relevance of the information — The degree to which the information is perceived to be important

• Degree to which the information is elaborated — How much thought is required to interpret and understand the information

Generally, deep processing of information that involves these factors will result in the best possible recall and retention of information.3

Consider depth of processing when recall and retention are important. Use unique presentation and interesting activities to engage people to deeply process information. Use case studies, examples, and other devices to make information relevant to an audience. Get people to think hard about information by engaging them in different types of elaboration activities. Deep processing requires concentration and effort; therefore, frequent periods of rest should be incorporated into the presentation and tasks.

See also Recognition over Recall; Serial Position Effects; von Restorff Effect

“Don’t make me think” is a fine mantra for usability, but when the goal is to develop deep understanding that is to be remembered, the mantra is, “Make me think hard about the right things”. For example, multivariate graphs and diagrams cultivate multivariate thinking. This makes them harder to interpret than simpler renderings, but they push thinking in a way that facilitates connections, insights, and durable learning. John Snow’s classic data map of Soho from 1854 enabled him to connect cholera outbreaks to a contaminated water well. A reader of his map must think hard to reproduce his epiphany, but once achieved, it is never forgotten.

047

Design by Committee
A design process based on consensus building, group decision-making, and extensive iteration.

Design by committee has become pejorative in most design circles because the process is inherently inefficient and frustrating, costly in terms of both time and budget, and produces designs generally lacking in aesthetic appeal and innovation. Design by committee essentially averages out designs. But design by committee often produces superior outcomes compared to the alternative — design by dictator — because a committee’s designs better meet all requirements. The quality of a design, especially something complicated, can’t be evaluated on a single, superficial dimension, like aesthetics. Meeting a multitude of complex requirements entails a lot of tradeoffs, and that will necessarily involve design by committee.1

With the exception of inventors, celebrity designers, and entrepreneurial startups, most modern design is at some level design by committee (e.g., clients, brand managers, etc.). The belief that great design typically comes from auteur dictators is more myth than reality. It’s important to understand when to use design by committee versus design by dictator. Both models have their place depending on context.

• Favor design by dictator when projects are time- or budget-driven, requirements are relatively straightforward, consequences of error are tolerable, and stakeholder buy-in is unimportant. For example, startups typically employ a design-by-dictator model because they need to move quickly and try out many things before stabilizing around a solution. Such ventures are the source of most significant innovation, but they have a high failure rate. Design by dictator is fast but risky.

• Favor design by committee when requirements are highly complex, tolerance for risk is low, consequences of error are serious, and stakeholder buy-in is important. For example, NASA employs a highly bureaucratized design process for each mission, involving numerous working groups, supervisory committees, and layers of review from teams of various specializations. The process is time-consuming and expensive, but the complexity of the requirements is high, the consequences of error are severe, and the need for stakeholder buy-in is critical. Virtually every aspect of mission technology is a product of design by committee. Design by committee is slow but careful.

Design by committee is optimal when committee members are diverse, positionality and influence among committee members are minimized, group sizes are 7 to 12 members per committee, and a simple governance model is adopted to facilitate decision-making and prevent deadlocks.

See also Dunning-Kruger Effect; Feature Creep; Groupthink; Iteration; Not Invented Here; Prototyping

1. See, for example, “Design by committee” by Armin Vit, 2008, HOW, 23(1), 36–39; and “DEBATE: Do design ‘experts’ have too much influence?” 2013, Building Design, 9.

Freedom Tower design iterations, Winter 2002 through Summer 2005: The striking original design for Freedom Tower came from Daniel Libeskind using a design process that can be aptly characterized as design by dictator. However, the requirements of the building that would take the place of the World Trade Center towers were extraordinarily complex, the consequences of getting the design wrong unacceptable, and the number of passionate stakeholders great. Given these conditions, Freedom Tower was destined to be designed by committee. As the design iterated through the various commercial, engineering, security, and political factions, idiosyncrasies were averaged out — a standard byproduct of design by committee. The final design is less visually interesting, but it is, by definition, a superior design.

It is often said that a camel is a horse designed by committee — but in some climates, a camel is the better design. — William Lidwell


048

Desire Line
Traces of use or wear that indicate preferred methods of interaction.

Desire lines generally refer to paths where people naturally walk — the beaten path that trails off the sidewalk, usually as a shortcut to a destination — but can be applied more broadly to any signs or traces of user activity in an object or environment. The wear on certain keys on a keyboard, the bite and chew marks on a pen, or the damage and residual car paint left on road guardrails are all examples of desire lines.1

Desire lines represent an unbiased indication of how things are actually used. In contexts of repeated use, as with students crossing a campus quad, desire lines will generally correspond to the paths of least resistance. In contexts of first-time use, as with tourists visiting an attraction, desire lines will often correspond to the quality of attractions or experience. As such, desire lines represent valuable research information that can be applied to the design or, in some cases, redesign of an object or environment. Landscape architects are increasingly using desire lines from the outset, allowing paths to emerge in parks and campuses over a period of many months and then paving the lines to make permanent walkways. In design contexts that do not preserve natural traces of use like this, desire lines can be artificially created and studied using technologies such as video cameras, GPS, and website heat maps.2

When desire lines do not correspond to intended use, a common reaction is to attempt to modify user behavior versus modify the design. For example, desire lines that branch off sidewalks are often met with erected barriers or “Keep Off” signs, around which new desire lines invariably emerge. Accordingly, desire lines should be interpreted and treated as indications of user preferences — as feedback — not indications of user delinquency. This is why desire lines have been characterized as “voting through behavior”.

Consider desire lines in projects that emphasize usability. When possible, use creative methods to detect desire lines prior to finalizing design specifications. When desire lines emerge after a design has been implemented, they do so due to an overriding user preference or improvement in efficiency. If the cost of a desire line is nominal, consider living with it. If the cost is significant, it is generally more cost-beneficial to modify the design to incorporate and leverage the desire line than to attempt to prevent its use.

See also Affordance; Cost-Benefit; Gamification; Performance Load; Root Cause

1. The seminal work on desire lines is The Poetics of Space: The Classic Look at How We Experience Intimate Places by Gaston Bachelard and Maria Jolas (Tr.), 1964, The Orion Press, Inc. Desire lines are also known as desire paths, cow paths, pirate paths, social trails, kemonomichi (beast trails), chemins de l’âne (donkey paths), and olifantenpad (elephant trails).

2. For example, the reconstruction of paths in New York City’s Central Park was based on paving desire lines that were created over many years by park visitors versus repaving existing paths. See Rebuilding Central Park: A Management and Restoration Plan by Elizabeth Barlow Rogers, 1987, The MIT Press.

Desire lines are often seen branching off designated paths, indicating a strong pedestrian preference for alternate routes.

049

Development Cycle
The stages of product creation: requirements, design, development, and testing.

All products progress sequentially through basic stages of creation. Understanding and using effective practices for each stage allows designers to maximize a product’s probability of success.1 There are four basic stages of creation for all products:

1. Requirements — The result of formal or informal needs analysis. In formal processes, requirements are gathered through market research, customer feedback, focus groups, and usability testing. Informally, design requirements are often derived from direct knowledge or experience. Design requirements are best obtained through controlled interactions between designers and members of the target audience and not simply by asking people what they want or like — often they cannot clearly articulate their needs.

2. Design — Requirements are transformed into visual form, which becomes the specification. The goal is to meet the design requirements, although an implicit goal is to do so in a unique fashion. Excellent design is usually accomplished through careful research of existing or analogous solutions, active brainstorming of many diverse participants, ample prototyping, and many iterations of trying, testing, and tuning concepts.

3. Development — Design specifications are transformed into an actual product. The goal of development is to precisely meet the design specifications. Two basic quality control strategies are used to accomplish this: Reduce variability in the materials, creation of parts, and assembly of parts; and verify that specifications are being maintained throughout the development process.

4. Testing — The product is evaluated to ensure that it meets design requirements and specifications and will be accepted by the target audience. Testing generally focuses on the quality of modules and their integration, real-world performance (real contexts, real users), and ease and reliability of installation. Testing early and often eliminates surprises and yields the best end product.

Consider the following for best product results: Gather requirements through controlled interactions with target audiences rather than simple feedback or speculation by team members; use research, brainstorming, prototyping, and iterative design to achieve optimal designs; minimize variability in products and processes to improve quality; and test all aspects of the design to the degree possible.

See also Design by Committee; Hierarchy of Needs; KISS; Iteration; Ockham’s Razor; Product Life Cycle; Prototyping

1. A nice treatment of contemporary product development issues and strategies is found in Developing Products in Half the Time: New Rules, New Tools, 2nd ed., by Preston Smith and Donald Reinertsen, 1997, John Wiley & Sons; and Managing the Design Factory: The Product Developer’s Toolkit by Donald Reinertsen, 1997, Free Press.

Requirements → Design → Development → Testing: The ideal development cycle fosters iteration between adjacent stages in the cycle — and, as needed, nonadjacent stages.

050

Diffusion of Innovations
A theory describing how new things gain acceptance in a population over time.

Diffusion of innovations is a theory that explains how new ideas, behaviors, or products gain acceptance and are ultimately adopted by people over time. Adoption of the new does not happen evenly across a population but, rather, spreads through the population by subgroups. Understanding the unique characteristics and preferences of these subgroups can inform design and marketing strategy, increasing the probability of adoption.1 There are five established subgroups:

1. Innovators — People who want to be the first to try out new things. They are willing to take risks, forgive bugs for being on the cutting edge, and make willing beta testers. Aside from getting access to the newest things, they need little else to convince them to adopt.

2. Early Adopters — People who try new things very early in their life cycle but are more practical than innovators: it is not enough for something to be new; it also needs to work. Early adopters appreciate, understand, and often evangelize the benefits of new products. They like technical data and first-hand experiences.

3. Early Majority — People who rarely lead change but adopt before the average person. They like evidence that an innovation works in the form of independent reviews, testimonials, and test drives.

4. Late Majority — People who are skeptical of new things and tend to only adopt them when the majority of a population has already done so. They like to see other people using innovations first-hand and to have opportunities to test things without a long-term commitment.

5. Laggards — People who are the most conservative and resistant to change. They often fear the new and have an emotional attachment to the old. This group sometimes responds to fear appeals, pressure from people in other subgroups, and innovations framed in traditional ways.

Consider the diffusion of innovations theory in change management, innovation, and marketing contexts. Develop experiential and marketing campaigns that differentially address the unique characteristics and preferences of the target subgroups. In general, enlist innovators; woo early adopters; persuade the early majority; incentivize the late majority; and ignore the laggards.

See also MAYA; Normal Distribution; Paradox of Great Ideas; Product Life Cycle

1. The seminal work is Diffusion of Innovations by E.M. Rogers, 1962/2003, Simon and Schuster. A proposed revision to this theory visually separates the subgroups by how different they are, noting a particularly large disconnect or “chasm” between early adopters, who seek revolution, and the early majority, which seeks evolution. The significance of this chasm is that it highlights where a disproportionate number of new ventures fail. See Crossing the Chasm by Geoffrey A. Moore, 1991/2014, HarperCollins.

Early market → the chasm → mainstream market: Innovators (2.5%), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%), Laggards (16%). The adoption of the new doesn’t happen quickly or evenly; it generally proceeds left to right in order through a standard set of market segments. People in each market segment have unique desires and needs, which means the product design and marketing strategy for one segment may not work for the next segment. The largest gap in desires and needs lies between the early market and mainstream market — “the chasm” — which is where new companies struggle the most. The key to success is understanding the differences between these groups and designing accordingly.

It turns out our attitude toward technology adoption becomes significant — at least in a marketing sense — any time we are introduced to products that require us to change our current mode of behavior or to modify other products and services we rely on. — Geoffrey A. Moore, Crossing the Chasm
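The segment percentages are not arbitrary: Rogers defined the adopter categories by slicing a normal distribution of adoption times at one and two standard deviations from the mean. The short standard-library sketch below reproduces the familiar split.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Adopter categories are cut at 1 and 2 standard deviations from the
# mean adoption time, with the earliest adopters on the left tail.
segments = {
    "Innovators":     phi(-2),            # more than 2 sd earlier than mean
    "Early Adopters": phi(-1) - phi(-2),  # 1 to 2 sd earlier than mean
    "Early Majority": phi(0) - phi(-1),   # up to 1 sd earlier than mean
    "Late Majority":  phi(1) - phi(0),    # up to 1 sd later than mean
    "Laggards":       1 - phi(1),         # more than 1 sd later than mean
}
for name, share in segments.items():
    print(f"{name:<15} {share:6.1%}")
# Prints ~2.3%, 13.6%, 34.1%, 34.1%, 15.9%, rounded in the literature
# to the familiar 2.5 / 13.5 / 34 / 34 / 16 split.
```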

051

Don’t Eat the Daisies
The fallacy that exhaustive requirements and specification documents lead to better design.

The don’t-eat-the-daisies fallacy is the belief that it is possible to create checklists, requirements, and specification documents that account for every possibility, including obvious and idiotic possibilities. The principle borrows from the book Please Don’t Eat the Daisies, in which a mother learns that no matter how comprehensive her list of directives to her children — e.g., don’t leave your bicycles on the front steps; don’t use the guest towels, etc. — they would always find mischief to engage in that was not on the list, such as eating the daisies on the dining-room table. The lesson is that there are always unstated assumptions and implicit requirements, and to try to list them all is futile.1

If the answer to communicating design intent doesn’t lie with exhaustive directives, then what? More important than any specific requirement or specification is goal clarity and alignment. When there are questions about what to do or not to do, clarity on what is trying to be accomplished is the best catch-all strategy. Working without a clear goal to align and guide efforts is like navigating without a compass: If the teams don’t know what direction they are supposed to be going, all directions appear valid.2

Once the goal is clearly communicated, the high-priority items should be presented. These are checklist items, requirements, and specifications that are safety- or mission-critical in determining the success or failure of the project. These should be presented separately from less critical and more detailed items — to mix the two is to dilute the important with the trivial. To address the infinite “eat the daisies” variety of cases, items should be abstracted to higher-level descriptions. For example, rather than very detailed items like, “Don’t eat the daisies”, opt for framings like, “Only do preapproved activities”. Other useful catch-all framings include “performs in alignment to the goal” and “implemented consistent with conventions and best practices”.

It is not possible to checklist your way to great design. Such approaches waste time, create unmanageable complexity, and invite hacking and rule quibbling. Instead, communicate goals clearly to the teams involved. Emphasize essential requirements and specifications, detailing lesser requirements separately. Use abstracted language and catch-all framings to address eat-the-daisies-type cases. Add checkpoint reviews to confirm that implementation matches intent.

See also Brown M&M’s; Development Cycle; Garbage In–Garbage Out; Pareto Principle

1. Please Don’t Eat the Daisies by Jean Kerr, 1957, Doubleday. In this context, “Don’t eat the daisies” means “Don’t create excessively detailed documentation”.

2. Design comps that visually depict end goals — e.g., a rendering of a website, interior room, or landscape — are superior to verbal representations.

All design specifications rest on a number of commonsense and informed assumptions, which sometimes lead to ridiculous outcomes — e.g., assumptions about appropriately spacing urinals in a bathroom. This often results in futile attempts to document every little thing — e.g., “Don’t eat the daisies” — which requires significant time and energy and makes the documentation less and less usable. It is impossible to document and communicate every possible requirement and specification, but the problem can be addressed by aligning teams on the big-picture goals, using catch-all language like “ergonomically located for comfortable use”, and adding checkpoint reviews.

My real problem with children is that I haven’t any imagination. I’m always warning them against the commonplace defections while they are planning the bizarre and unusual. — Jean Kerr, Please Don’t Eat the Daisies

052

Dunbar’s Number
The maximum number of relationships a person can comfortably maintain.

Dunbar’s number, proposed by the anthropologist Robin Dunbar, states that there is a limit to the number of meaningful relationships people can maintain. The limit is a function of the information-processing power of the brain, derived from correlations between brain size and group size in various primates: The larger the primate brain, the larger their group sizes. For human primates, the group size maximum is about 150 members.1

The proposed number of 150 refers to the maximum number of meaningful relationships a person can maintain. A meaningful relationship is loosely defined as one in which two people would go out of their way to greet one another in a chance encounter at a public place like a store or airport. By extension, the natural size of personal social networks and communities is also 150. Once this number is exceeded, a community will splinter into two groups. But 150 is just one number in a multilevel hierarchy based on different depths of relationship. Dunbar asserts that a typical person can only remember the names and faces of about 1,500 people. Of these, about 500 can be maintained as acquaintances; 150 as meaningful relationships; 50 as friends; 15 as good friends; and 5 as intimate friends or family.

Since it was proposed in 1992, various challenges to Dunbar’s number have been advanced: some agreeing with the number but disputing the cause; some agreeing with the cause but disputing the number; some questioning whether limits are relevant in an age of contact databases and social networks; and some disputing that there are limits at all.2 While the theoretical foundation and empirical status of Dunbar’s number continue to be investigated, it is hard to ignore the fact that the number 150 appears with conspicuous frequency across a wide range of social-group contexts, from the size of hunter-gatherer societies to the size of modern military units to the size of Facebook groups.3

Consider Dunbar’s number when designing systems rooted in social relationships, including social media platforms, office buildings, management structures, and cybercrime and bot-detection software. Given the practical need to make decisions about group size in designs, it is reasonable to consider Dunbar’s number(s) a provisional default guideline until something better comes along.

See also Conway’s Law; Mental Model; Pareto Principle; Recognition over Recall

1. The seminal work on Dunbar’s number is “Neocortex size as a constraint on group size in primates” by Robin Dunbar, 1992, Journal of Human Evolution, 22(6), 469–493.

2. See, for example, “‘Dunbar’s number’ deconstructed” by Patrik Lindenfors et al., May 5, 2021, Biology Letters, 17.

3. See, for example, “Dunbar’s number: why my theory that humans can only maintain 150 friendships has withstood 30 years of scrutiny” by Robin Dunbar, May 12, 2021, The Conversation, www.theconversation.com.

When designing group structures for peak collaboration and stability, size matters. A typical person can only remember the names and faces of about 1,500 people; of these, about 500 can be maintained as acquaintances; 150 as meaningful relationships; 50 as friends; 15 as good friends; and 5 as intimate friends or family.
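For designs that must pick concrete group-size limits, the layer thresholds above can be encoded as provisional defaults. The sketch below is a hypothetical illustration: only the thresholds come from the text; the function, labels, and use case are invented.

```python
# Dunbar's layers as provisional group-size defaults. The thresholds
# come from the text; the function and its use are illustrative only.
DUNBAR_LAYERS = [
    (5,    "intimates"),
    (15,   "good friends"),
    (50,   "friends"),
    (150,  "meaningful relationships"),
    (500,  "acquaintances"),
    (1500, "known faces and names"),
]

def classify_group_size(n):
    """Return the innermost Dunbar layer that can hold a group of n."""
    for limit, label in DUNBAR_LAYERS:
        if n <= limit:
            return label
    return "beyond known faces; expect the community to splinter"

print(classify_group_size(12))    # good friends
print(classify_group_size(140))   # meaningful relationships
print(classify_group_size(4000))  # beyond known faces; expect ...
```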

053

Dunning-Kruger Effect
A tendency for unskilled people to overestimate their competence and performance.

The Dunning-Kruger effect (DKE), proposed by the psychologists Justin Kruger and David Dunning, states that novices in any given domain tend to overestimate their performance, while experts are more accurate or even underestimate their performance. As a consequence, novices tend to be overconfident in their judgments and behavior, whereas experts tend toward greater humility. The likely cause is that novices lack both the introspective access and the knowledge and experience to recognize their incompetence, as well as the competence of others.1

The DKE creates a seeming paradox for novices: An incompetent person can’t perceive their own incompetence because they are incompetent, and overcoming incompetence requires the ability to distinguish skill levels, which is an ability they lack. This paradox is especially problematic for novices performing in contexts that can have life-critical or long-term consequences, such as law or medicine. And since expertise does not easily transfer across domains, being an expert in one area does not immunize you from the DKE in other areas. The primary way out of this paradox is independent, corrective feedback and experience in a specific domain — especially experience that involves failures.

The DKE also creates challenges for processes that rely on self-report measures, especially in learning contexts. The implication is that learners who know the least about a subject will overestimate their abilities the most, and learners who know the most will underestimate their abilities the most. There is often a disconnect between who people really are and the idealized perceptions they have of themselves, leading to self-reports based on whom they would ideally like to be versus who they are. For these reasons, assessments based on self-report strategies alone should be avoided.2

Combat the DKE by first raising awareness of the effect. Recognition is the first step to recovery. Provide guidelines and heuristics that help discern competence from incompetence. Don’t assume expertise in one domain immunizes someone against the effect in other domains. Ritualize regular feedback and critiques to promote the development of metacognition and self-assessment skills. The most common misunderstanding about the DKE is who falls victim to it. The effect is commonly invoked to explain or dismiss the incompetence of others, but the key lesson is that people should be humble and cautious about themselves. It is not about “them”; it is about “us”.

See also Creator Blindness; Faith Follows Function; Feedback; Icarus Matrix; Knowing-Doing Gap

1. The seminal research is “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments” by Justin Kruger and David Dunning, 1999, Journal of Personality and Social Psychology, 77(6), 1121–1134. The DKE is a robust effect that has been widely replicated, and parallel effects can be observed in other contexts — e.g., unattractive people tend to overestimate their attractiveness more so than attractive people. See, for example, “Unattractive people are unaware of their (un)attractiveness” by Tobias Greitemeyer, 2020, Scandinavian Journal of Psychology, 61(4), 471–483.

2. See, for example, “A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use” by Douglas Parry et al., 2021, Nature Human Behaviour, 5, 1535–1547.

Confidence (low to high) plotted against experience: The least competent tend to be the most confident; then the roller-coaster ride of reality begins.

The first rule of the Dunning-Kruger club is you don’t know you’re a member of the Dunning-Kruger club. People miss that. — David Dunning in an interview with vox.com


054

Entry Point
A point of physical or attentional entry that sets the emotional tone for subsequent interactions.

People judge books by their covers, Internet sites by their first pages, and buildings by their lobbies. The initial impression of a system or environment greatly influences subsequent perceptions and attitudes, which then affects the quality of subsequent interactions. This impression is largely formed at the entry point to a system or environment. Errors in entry-point design annoy visitors who make it through or deter visitors altogether. A well-designed entry point promotes additional interaction.1 The key elements of good entry point design are:

• Minimal barriers — Barriers should not encumber entry points. Examples of barriers to entry are highly trafficked parking lots, noisy displays with unnecessary elements, salespeople standing at the doors of retail stores, or anything that impedes people from getting to and moving through an entry point. Barriers can be aesthetic as well as functional in nature. For example, a poorly maintained building front or landscape is an aesthetic barrier to entry.

• Points of prospect — Entry points should allow people to become oriented and clearly survey available options. Points of prospect include store entrances that provide a clear view of store layout and aisle signs, and Internet pages that provide good orientation cues and navigation options. Points of prospect should provide sufficient time and space for a person to review options with minimal distraction or disruption — i.e., people should not feel hurried or crowded by their surroundings.

• Progressive lures — Lures should be used to attract and pull people through the entry point. Progressive lures can be compelling headlines on the front page of a newspaper, greeters at restaurants, or the display of popular products or destinations (e.g., restrooms) just beyond the entry point of a store. Progressive lures get people to incrementally approach, enter, and move through the entry point.

Maximize the effectiveness of the entry point in a design by reducing barriers, establishing clear points of prospect, and using progressive lures. Provide sufficient time and space for people to review opportunities for interaction at the entry point. Consider progressive lures such as highlighting on a web page, entry-point greeters at restaurants, and popular offerings visibly located beyond the entry point to get people to enter and progress through.

See also Anchoring; Desire Line; Priming; Prospect-Refuge; Wayfinding

1. See, for example, Why We Buy: The Science of Shopping by Paco Underhill, 2000, Touchstone Books; Hotel Design, Planning, and Development by Walter Rutes et al., 2001, W.W. Norton & Company; and “The Stanford-Poynter Eyetracking Study” by Marion Lewenstein et al., 2000, Poynter, poynter.org.

Whether a book cover, website home page, or retail storefront, entry points are opportunities to transport people from one world and mode of thinking to another. Good entry points focus attention on the right things, set the right emotional tone, convey essential information, and create memorable experiences. When done well, entry points frame an experience. For example, Apple retail stores have redefined the retail experience, in part due to their exceptional entry-point experience design: A glass front eliminates visual barriers; a small set of glass stairs at the entry point acts as a lure, creating the impression of entering a special place; a large point of prospect after entry supports orientation and decision-making; products line the periphery of the space, offering clear options from the point of prospect; and a large glass staircase acts as a secondary lure, creating the impression of entering another special space.

055

Error, Design
A design-caused action or inaction that yields an unintended or undesirable result.

Most accidents are attributed to human error. For example, greater than 70% of aviation, marine, and train accidents are attributed to crew error. More than 95% of automobile accidents are attributed to driver error. About 80% of chemical plant accidents are attributed to human error. And so on.1 A deeper investigation into accidents attributed to human error reveals they are most often caused by design error, of which there are three types:

1. Design-induced errors — Occur when a person is intuitively led to interact with a design in an inappropriate way. For example, placing a pull handle on a push door intuitively leads people to pull even when there is a sign that says “push”. These errors are typically the result of unintended affordances, bad mappings, misleading icons or labels, and counterintuitive operations and processes.

2. Design-caused errors — Occur when a person is prevented from interacting with a design in an appropriate way. For example, a design-caused error results when an emergency alarm is so loud that it prevents operators from thinking clearly, which then leads them to make errors in trying to resolve the emergency. These errors are typically the result of poorly located or obstructed controls, kinematic or cognitive overload, confusing icons or labels, unclear indications of status and feedback, and distracting stimuli.

3. Design-enabled errors — Occur when a person is allowed to interact with a design in ways that are known to be inappropriate. For example, a control system that allows a pilot to accidentally retract the landing gear while the plane is on the ground. These errors are typically the result of immature or untested designs, design compromises due to schedule or cost pressures, constraints such as weight, technological limitations, and contradictory design requirements such as enabling vehicles to travel well beyond maximum speed limits.

Consider design error in the diagnosis and prevention of accidents and errors. Be skeptical of claims of “human error”, as the primary cause of most accidents is bad design: design that induces error, design that causes error, and design that enables error. Don’t confuse causation with correlation. It is easy to attribute the cause of accidents to human error, as people are typically involved in the last link of an accident’s causal chain. It is easy but wrong. The first step in reducing accidents is acknowledging this fact and intentionally designing systems that mitigate or prevent errors.

See also Affordance; Confirmation; Consistency; Constraint; Error, Human; Forgiveness; Root Cause; Visibility

1. See, for example, “1998 Global Fatal Accident Review 1980–96 (CAP 681)”, 1998, Civil Aviation Authority; “Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey”, Feb 2015, USDOT; and “Classification of Human Failure in Chemical Plants: Case Study of Various Types of Chemical Accidents in South Korea from 2010 to 2017” by Seungho Jung et al., Nov 2021, International Journal of Environmental Research and Public Health, 18(21), 11216. This principle is also known as latent human error.

In 2004, a leak was detected on the International Space Station. A U-shaped hose used to equalize pressure and prevent moisture between windowpanes on the Destiny module was determined to be the cause. An investigation found that the leak was due to repeated, inadvertent use of the hose as a handhold to steady crew members floating at the window. Handholds for this purpose were awaiting delivery to ISS, but in the interim, this hose proved an irresistible affordance.

056

Error, Human
An action, or omission of action, that leads to an unintended result.

It is said that “to err is human; to forgive, design”. But forgiveness cannot be designed into systems without an understanding of how and why people make errors and the corresponding design strategies that can be applied to reduce their frequency and severity.1

1. The seminal work on errors is “Categorization of Action Slips” by Donald Norman, 1981, Psychological Review, 88, 1–15; and Absent Minded? The Psychology of Mental Lapses and Everyday Errors by James Reason and Klara Mycielska, 1982, Prentice-Hall.

2. Note that there are many different error taxonomies. A nice review and discussion regarding the various taxonomies is found in Human Error by James Reason, 1990, Cambridge University Press. A very readable and interesting treatment of human error is Set Phasers on Stun and Other True Tales of Design, Technology, and Human Error by Steven Casey, 1998, Aegean Publishing Company.

There are three basic types of human errors:2

1. Slips — Occur when a simple action is not what was intended, for example, when a person intends to press one button but accidentally presses another. Slips are sometimes referred to as errors of action or errors of execution. For example, a slip occurs when a person dials a frequently dialed phone number when intending to dial a different number. Slips are the result of automatic, unconscious processes and frequently result from a change of routine or an interruption of an action. Minimize slips by providing affordances, constraints to prevent accidental activation, clear feedback on actions, and confirmations.

2. Lapses — Occur when a required action is forgotten or omitted. For example, a pilot forgets to lower landing gear for landing, a driver forgets to turn on their turn signal, or a surgeon accidentally leaves an instrument in a patient during surgery. Lapses are sometimes referred to as errors of inaction and are caused by failures in short-term memory. Minimize lapses through use of checklists, procedures, alarms, and removal of distractions and interruptions.

3. Mistakes — Occur when an intention is inappropriate, sometimes referred to as errors of intention or errors of planning. For example, a mistake occurs when a nurse interprets an alarm incorrectly and then administers the incorrect medicine. Mistakes are caused by conscious mental processes and frequently result from stress or decision-making biases. Minimize mistakes by increasing situational awareness and reducing environmental noise. Provide clear feedback, procedures, and job aids; and ensure training is adequate.

Human errors are inevitable, so plan for them. Train on error recovery and troubleshooting, and always incorporate the principle of forgiveness into designs to reduce the frequency and severity of errors when they occur, enhancing the design’s safety and usability.

See also Affordance; Confirmation; Consistency; Constraint; Error, Design; Forgiveness; Root Cause; Visibility
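The confirmation strategy for slips translates directly into interface code. The sketch below is a hypothetical type-to-confirm guard, in the spirit of tools that require retyping a resource name before a destructive action; only the principle of constraining accidental activation comes from the text, and all names here are invented.

```python
# Hypothetical type-to-confirm guard against slips: the destructive
# action proceeds only if the user deliberately retypes the target name.
def confirm_destructive(action, target, read_input=input):
    prompt = f"Type '{target}' to confirm you want to {action} it: "
    if read_input(prompt).strip() == target:
        return True
    print("Name did not match; nothing was done.")  # forgiving default
    return False

if confirm_destructive("delete", "production-database"):
    print("Deleting production-database ...")
```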

Texting Slip — a conversation with Amanda (Monday 7:12 PM):
“I just met the perfect boy for you. Very smartly dressed. Baby face. Sort of bland. Not too tall.”
“Bland? So he’s perfect for me? Ouch!”
(Monday 10:44 PM) “OMG! Blond! I meant blond!”
“Ha ha ha! OK. I was worried that maybe you think I’m boring.”

Texting Lapse — a conversation with 1-888-445-6782 (Today 3:15 PM):
“Hi. My name is Scott. I’d like to look at the car that’s for sale. Do you have any time this week?”
“Sure. How about 6pm tomorrow?”
“Good night beautiful. I can’t wait to give you a big kiss. xoxoxoxox”
“Look man, I’m just interested in the car.”

Texting Mistake — a conversation with Mom (Tuesday 9:12 AM):
“Your great aunt just passed away. LOL”
“Why is that funny?”
“It’s not funny!! What do you mean?!”
“Mom, LOL means Laughing Out Loud.”
“Oh no! I sent that to everyone. I thought it meant Lots Of Love.”

Different kinds of errors call for different kinds of design strategies for prevention or mitigation. Slips are errors of execution and therefore involve strategies that center around constraining or confirming actions. Lapses are errors of attention and therefore involve strategies that center around grabbing, focusing, and holding attention. Mistakes are errors of intention and therefore involve strategies that center around developing accurate and complete mental models.

057

Expectation Effects
Changes in perception or behavior resulting from personal expectations or expectations of others.

Expectation effects refer to ways in which expectations affect perception and behavior. Generally, when people are aware of a probable or desired outcome, their perceptions and behavior are affected in some way. For example, tell a large group of people that a new product will change their lives, and a significant number will find their lives changed — the belief is simply a device that helps create the change.1

1. Seminal works on the expectation effect include The Human Problems of an Industrial Civilization by Elton Mayo, 1933, Macmillan; “The Effect of Experimenter Bias on the Performance of the Albino Rat” by Robert Rosenthal and Kermit Fode, 1963, Behavioral Science, 8, 115–118; and “Teachers’ Expectancies: Determinants of Pupils’ IQ Gains” by Robert Rosenthal and Lenore Jacobson, 1966, Psychological Reports, 19(1), 115–118. For a nice review of the placebo effect, see The Placebo Effect: An Interdisciplinary Exploration edited by Anne Harrington, 1999, Harvard University Press.

2. Note that while these effects are supported by empirical studies, some have failed replication. Effects are sensitive to the strength and clarity of the expectation, so they may not exhibit if the expectation is not strong and clear.

Examples of expectation effects include:2

• Halo effect — Employers rate the performance of certain employees higher than others based on the employers’ overall positive impression of those employees.

• Hawthorne effect — Employees are more productive based on their belief that changes made to the environment will increase productivity.

• Pygmalion effect — Students perform better or worse based on the expectations of their teacher.

• Placebo effect — Patients experience treatment effects based on their belief that a treatment will work.

• Rosenthal effect — Teachers treat students differently based on their expectations of how students will perform.

• Demand characteristics — Participants in an experiment or interview provide responses and act in ways that they believe are expected by the experimenter or interviewer.

Expectation effects create challenges for direct measurement techniques (focus groups, interviews, surveys) and can have a negative impact on the ability to accurately measure a design’s success. Since designers are naturally biased toward their designs, they often unintentionally influence test subjects through words or actions or may omit certain results in order to corroborate their expectations. Test subjects often respond by seeking to meet the expectations communicated to them.

Consider the expectation effect when introducing and promoting a design. When trying to persuade, set expectations in a credible fashion for the target audience rather than letting them form their own unbiased conclusions. When evaluating a design, use proper test procedures to avoid biases resulting from expectation effects.

See also Exposure Effect; Framing; Uncertainty Principle

A taste test between two wines: an inexpensive wine in cheap packaging and an expensive wine in fancy packaging. People rate the expensive wine as tasting better — even when the wines are the same.

058

Exposure Effect The more people are exposed to a thing, the more they like and trust it. The exposure effect occurs when stimuli are repeatedly presented and, as a result, are increasingly well liked and accepted. For example, the more a song or slogan is repeated, the more popular it is likely to become — a phenomenon exploited by both radio and television networks. The exposure effect applies only to stimuli that are perceived as neutral or positive; repeated exposures to an offending stimulus may actually amplify the negative perception rather than remedy it. The exposure effect is observed with advertisements, music, paintings, people, and political campaigns.1

Familiarity plays a primary role in aesthetic appeal and acceptance; people like things more when frequently exposed to them. For example, the initial resistance by many people to the Vietnam Veterans Memorial was primarily caused by a lack of familiarity with its minimalist, abstract design. Similar resistance was experienced by Pablo Picasso with his Cubist works, Gustave Eiffel with the Eiffel Tower, Frank Lloyd Wright with the Guggenheim Museum, and many others whose works are today widely accepted as brilliant and beautiful. As the level of exposure to these works increased, familiarity also increased and resulted in greater acceptance.

The strongest exposure effects are seen with photographs, meaningful words, names, and simple shapes; the smallest effects are seen with icons, people, and auditory stimuli. The exposure effect is strongest during the first 10 to 20 exposures and gradually weakens as the number of presentations increases — probably due to boredom. Complex and interesting stimuli tend to amplify the effect, whereas simple and boring stimuli tend to weaken it. Interestingly, the longer a stimulus is exposed, the weaker the exposure effect; the strongest effect is achieved when exposures are so brief or subtle that they are subliminal (not consciously processed) or when they are separated by a delay.2

Use the exposure effect to strengthen advertising and marketing campaigns, enhance the perceived credibility and aesthetic of designs, and generally improve the way people think and feel about a message or product. Keep the exposures brief, and separate them with periods of delay. The exposure effect will be strongest for the first 10 exposures; therefore, focus resources on early presentations for maximum benefit. Expect and prepare for resistance to a design if it is significantly different from the norm; people may need time to become familiar with it.

See also Classical Conditioning; Cognitive Dissonance; Expectation Effects; Framing; MAYA; Priming; Stickiness

1

The seminal application of the exposure effect was in early twentieth-century propaganda — see, for example, Adolf Hitler: A Chilling Tale of Propaganda by Max Arthur and Joseph Goebbels, 1999, Trident Press International. The seminal empirical work on the exposure effect is “Attitudinal Effects of Mere Exposure” by Robert Zajonc, 1968, Journal of Personality and Social Psychology Monographs, 9(2), 1–27. This principle is also known as mere-exposure effect, repetition-validity effect, frequency-validity effect, truth effect, and repetition effect.

2

See, for example, “Exposure and Affect: Overview and Meta-Analysis of Research, 1968 –1987” by Robert Bornstein, 1989, Psychological Bulletin, 106(2), 265 – 289.

The exposure effect has always been a primary tool of propagandists. Ubiquitous positive depictions, such as these of Joseph Stalin, are commonly used to increase the likeability and support of political leaders. Similar techniques are used in marketing, advertising, and electoral campaigns.

059

Face Detection The tendency to find or see human faces in objects and patterns. Humans and other primates are hardwired to recognize faces. Newborn babies stare at face-like images longer than at any other kind of image, and stare longer when the face-like images are upright versus tilted or inverted. This tendency likely evolved as a mechanism for babies to find, bond with, and communicate nonverbally with their caregivers, giving them an adaptive advantage. A byproduct of this hardwiring is facial pareidolia, the tendency to see illusory faces in things that have face-like configurations or patterns. Since seeing faces and interpreting their expressions is a reflexive response, it is important for designers to understand how this tendency can both enhance and potentially compromise designs.1

Face detection keys on the triangular arrangement of the eyes and the mouth. Common examples include seeing faces in rain stains on buildings, in the gnarls of a tree trunk, and in everyday objects like the fronts of cars. Children seem to experience facial pareidolia more than adults, and females more than males. In all cases, illusory faces tend to be perceived as young and male unless there are conspicuous features indicating otherwise — e.g., large round eyes, small mouth, etc.2

It has been proposed that there are seven universal facial expressions — happiness, sadness, fear, anger, surprise, disgust, and contempt — and that these facial expressions indicate emotional states. As such, whether real or illusory, perceived facial expressions imbue things with personality. When these perceived personalities are congruous with a brand or design, the overall experience is strengthened; when they are incongruous, the overall experience is weakened. For example, cars with round headlights and small ovoid grilles tend to be considered happy and feminine and are more likely to appeal to female consumers, whereas cars with more angular headlights and grilles tend to be considered angry and masculine and are more likely to appeal to male consumers.3

Consider face detection in designs to grab attention and imbue them with personality. When compositions naturally approximate faces, optimize the facial configuration to take full advantage and convey the proper affect. Ensure that facial patterns in a design are congruous with the brand and resonate with the intended audience.

See also Anthropomorphism; Archetypes, Psychological; Baby-Face Bias; Magic Triangle; Uncanny Valley

1

The seminal work is “Pattern Vision in Newborn Infants” by R.L. Fantz, 1963, Science, 140, 296 – 297. For a current review, see “Face recognition in infants: A review of behavioral and near-infrared spectroscopic studies” by Yumiko Otsuka, 2013, Japanese Psychological Research, 56(1), 76 – 90.

2

See, for example, “Illusory faces are more likely to be perceived as male than female” by Susan Wardle et al., Jan 24, 2022, PNAS, 119(5), 1–12.

3

See, for example, Darwin and Facial Expression: A Century of Research in Review by Paul Ekman, 1973/2015, Academic Press. For explorations of pareidolia in product design, see “Pareidolia: Characterising Facial Anthropomorphism and Its Implications for Product Design” by Andrew Wodehouse et al., Jan 2018, Journal of Design Research, 16(2), 83–98.

Humans are wired to interpret certain configurations as faces and reflexively react to the emotional expressions they convey. For example, many Jeep owners enjoy signaling status and power. The iconic grille and headlight configuration of Jeep Wranglers appears face-like, but its affect is ambiguous — possibly even cute.

Some Jeep owners choose to resolve this incongruity with an “Angry Bird Grille” or “Grumper Grille”, popular after-market front ends that give the Jeep an angry, aggressive face, more clearly signaling status and power.

060

Face-ism Ratio The ratio of face to body in an image, which influences how the person is perceived. Images depicting a person with a high face-ism ratio — the face takes up most of the image — focus attention on the person’s intellectual and personality attributes. Images depicting a person with a low face-ism ratio — the body takes up most of the image — focus attention on the physical and sensual attributes of the person. Irrespective of gender, people rate individuals in high face-ism images as being more intelligent, dominant, and ambitious than individuals in low face-ism images.1

The term face-ism originated from research on gender bias in the media. It was found that images of men in magazines, movies, and other media have significantly higher face-ism ratios than images of women. This appears true across most cultures and is thought to reflect gender-stereotypical beliefs regarding the characteristics of men and women. In one experiment, for example, male and female college students were randomly assigned a task to draw either a man or a woman. The students were told they would be evaluated on their drawing skills and were given no additional instructions. Both genders drew men with prominent and detailed faces and women with full bodies and minimally detailed faces.2

The face-ism ratio is calculated by dividing the distance from the top of the head to the bottom of the chin (head height) by the distance from the top of the head to the lowest visible part of the body (total visible height).

• An image without a face has a face-ism ratio of 0.00.
• An image of a full head and body has a face-ism ratio of about 0.13.
• An image from the waist up has a face-ism ratio of about 0.37.
• A classic portrait shot has a face-ism ratio of about 0.50.
• An image with only a face has a face-ism ratio approaching 1.00.

Consider face-ism in the representation of people in photographs and drawings. When the design objective requires more thoughtful interpretations or associations, use images with high face-ism ratios. When the design objective requires more ornamental or emotional interpretations or associations, use images with low face-ism ratios. Note that the interpretations of the images will be the same irrespective of the subject’s or viewer’s gender.

See also Attractiveness Bias; Baby-Face Bias; Classical Conditioning
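Because the ratio is a simple quotient of two measurements, it is easy to compute. A minimal sketch; the function name and the sample measurements are illustrative, chosen to reproduce the reference points above.

```python
def face_ism_ratio(head_height: float, total_visible_height: float) -> float:
    """Face-ism ratio: head height (top of head to bottom of chin) divided
    by total visible height (top of head to lowest visible part of the
    body). The units cancel, so any consistent unit of measure works."""
    if total_visible_height <= 0:
        raise ValueError("total visible height must be positive")
    return head_height / total_visible_height

# Hypothetical measurements (cm) matching the reference points above:
print(round(face_ism_ratio(24, 185), 2))  # full head and body -> 0.13
print(round(face_ism_ratio(24, 65), 2))   # waist-up           -> 0.37
print(round(face_ism_ratio(24, 48), 2))   # classic portrait   -> 0.50
```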

1

The term face-ism is used by some researchers to refer to the tendency of the media to represent men in high face-ism images and women in low face-ism images — also referred to as body-ism.

2

The seminal work on face-ism is “Face-ism” by Dane Archer et al., Sept 1978, Psychology Today, 65 – 66; and “Face-ism: 5 Studies of Sex-Differences in Facial Prominence” by Dane Archer et al., 1983, Journal of Personality and Social Psychology, 45, 725 –735.

The effect of face-ism is evident in these photographs (face-ism ratios of .96, .55, and .37). The high face-ism photograph emphasizes more cerebral or personality-related attributes like intelligence and ambition. The lower face-ism photographs emphasize more physical attributes like sensuality and physical attractiveness.

061

Factor of Safety The design of a system beyond expected loads to offset unknowns and prevent failure. Design requires dealing with unknowns. No matter how knowledgeable a designer and how thoroughly researched a design specification, basic assumptions about unknowns of one kind or another are inevitable in every design. Factors of safety are used to offset the potential negative effects of these unknowns.1

Increasing the factor of safety is achieved by adding capacity, materials, and redundant components to a system to enable it to exceed anticipated operating loads. For example, designing an Internet service that can support 1,000 users appears straightforward. However, to account for unanticipated uses of the service (e.g., downloading giant files), the design specification should be multiplied by a factor of safety. At a factor of safety of 2, the service would be publicly rated to support 1,000 users but actually designed to support twice that many.

The size of the factor of safety corresponds to the level of ignorance about the design or its operating environment. The greater the ignorance, the greater the factor of safety. For example, tried-and-true structures like modern buildings made of materials of consistent quality, such as steel and concrete, typically use a factor of safety between 1.5 and 2. By contrast, when ignorance combines with materials of varying quality, the factor of safety can be quite large. For example, the designers of the Great Pyramid of Giza unknowingly applied a factor of safety > 20, which is the reason it is still standing thousands of years later.2

Increasing the factor of safety means adding elements. Adding elements means more cost, complexity, and weight. New designs typically have large factors of safety because the number of unknowns is large. The initial priority is reliability. If a design performs reliably over time, confidence in the design increases and the priority shifts from reliability to efficiency, which means reducing elements. Unfortunately, this trend usually continues until a failure occurs, at which point the priority shifts back to increasing reliability — i.e., increasing the factor of safety.3

Employ factors of safety to reduce the risk of failure. Increase them in proportion to ignorance of the design parameters or its operating environment. Reduce factors of safety with caution, especially when the consequences of failure are severe. Abide by the rated performance of systems versus the designed performance — i.e., performance including factors of safety — except in cases of emergency.

See also Design by Committee; Error, Design; Modularity; Redundancy; Structural Forms; Weakest Link
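The arithmetic behind the Internet-service example is worth making explicit. A minimal sketch, with illustrative names and values:

```python
def designed_capacity(rated_load: float, factor_of_safety: float) -> float:
    """Capacity the system is actually built to handle: the published
    (rated) load multiplied by the factor of safety."""
    if factor_of_safety < 1.0:
        raise ValueError("a factor of safety below 1.0 designs in failure")
    return rated_load * factor_of_safety

# The example from the text: rated for 1,000 users, factor of safety of 2,
# so the service is actually designed to support 2,000.
print(designed_capacity(1_000, 2.0))  # -> 2000.0

# Greater ignorance warrants a greater factor of safety (value illustrative):
print(designed_capacity(1_000, 4.0))  # novel, poorly understood workload
```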

1

This principle is also known as Safety Factor and Factor of Ignorance.

2

Weight is a major consideration when increasing factors of safety. For example, aircraft designs typically observe a factor of safety of only 1.5, despite the severe consequences of system failure. But note that different subsystems are designed with different factors of safety based on their criticality and reliability. For example, a pressurized fuselage may have a factor of safety of 2.0, whereas landing gear a factor of safety of 1.25.

3

See, for example, To Engineer Is Human: The Role of Failure in Successful Design by Henry Petroski, 1992, Vintage; and Design Paradigms: Case Histories of Error and Judgment in Engineering by Henry Petroski, 1994, Cambridge University Press.

[Chart: factor of safety (1 to 3) and launch-pad temperature (20 to 100°F) for space shuttle launches, 1981 to 1986, with each launch marked for O-ring damage: none, minor, or major.]

Rubber O-rings, about 38 feet (11.6 m) in circumference and .25 inch (6 mm) thick

The O-ring design of the space shuttle Challenger’s solid rocket booster was designed to have a safety factor of three. However, low temperatures contributed to the erosion of O-rings in past launches and, consequently, to the erosion of this safety factor; at low temperatures, the safety factor was well below three. On the morning of January 28, 1986, the temperature at the launch pad was 36 degrees F (2.2 degrees C) — the lowest launch temperature to date. Despite the objections of several engineers, the decision to proceed with the launch was based largely on the belief that the safety factor was sufficient to offset any low-temperature risks. Catastrophic failure occurred shortly after launch.

062

Faith Follows Function Ideological and spiritual considerations should be secondary to functional considerations. The “function” of a design refers broadly to what a design seeks to accomplish once brought to fruition — it is the design’s purpose or raison d’être. As such, function is paramount in design and should not be traded for other considerations. But often the personal biases, beliefs, and values of designers influence their thinking in ways that run counter to function — i.e., function follows their faith. Examples of such faith-based influences include an overriding emphasis on sacred geometries (e.g., golden ratio), geomancy (e.g., feng shui), fads and stylings (e.g., streamlining), artistic and emotional expressions (e.g., gewgaws), and ethical or values-based imperatives (e.g., intersectionalism). In such cases, the advancement of the designer’s personal ideology can undermine the success of the design.1

As with form and function, there are situations in which there is no conflict between faith and function, and no tradeoffs are necessary. But often tradeoffs are required, and in such cases, function should be prioritized. This does not mean that a designer’s beliefs or values are irrelevant but, rather, that the functional requirements should take precedence. For example, a designer may ethically prefer to use sustainable materials, but if such materials do not satisfy the functional requirements, the options are to persuade stakeholders to alter the requirements, to use non-sustainable materials, or to resign from the project. Faith should follow function.

Note that the function of a design can take many forms and should not be reduced to mere mechanics. For example, one could mistake the principal function of Philippe Starck’s Juicy Salif as juicing — which it does rather poorly — or one could take Starck at his word that the juicer was “not meant to squeeze lemons” but “to start conversations”. If the intended function was to juice, function followed faith. However, if the intended function was to start conversations, faith followed function.

Be conscious of how ideological, political, and spiritual predilections can negatively impact design success. When making design tradeoffs, favor function over faith. The prioritization of function is what most distinguishes design from art: Design succeeds or fails based on meeting functional requirements external to the designer, whereas art succeeds or fails based on meeting personal requirements internal to the artist. Design solves; art expresses.

See also Appeal to Nature; Form Follows Function; Maslow’s Hammer

1

The term function is used to refer to two different things in design, which can create confusion: (1) the purpose of a thing and (2) how a thing works. In the context of faith follows function, function refers to the former — i.e., the design characteristics required to achieve a goal or realize a purpose.

The Dymaxion car was conceived as the first generation of a vehicle that, according to inventor Buckminster Fuller, would eventually also be able to fly. The three-wheeled wonder was nothing if not a spectacle, with a teardrop shape sculpted by Isamu Noguchi, a nautically inspired interior design, and a ridiculously tight turning radius that was the highlight of product demonstrations. It was a pure expression of Fuller’s Dymaxion philosophy —“Dymaxion” is a portmanteau of the words dynamic, maximum, and tension — but by any objective measure, a terribly designed car and an example of function following faith.

…we have to report, with some sadness, that it’s the scariest, most poorly designed vehicle we’ve ever been behind the wheel of. — Graham Kozak, Autoweek after test driving a modern Dymaxion replica

063

Feature Creep A continuous expansion or addition of new product features beyond the original scope. Feature creep is one of the most common causes of cost and schedule overruns. The key driver is the perception that more is better, and therefore features are continuously added and rarely taken away. But adding features adds complexity, and complexity is expensive. Every feature added is a feature to be maintained, updated, and tested. For users, every added feature is something new to be learned or something to filter out if not needed. Unnecessary features increase the chance of errors and can cause frustration. A study commissioned by Philips Electronics found that at least half of returned products have nothing wrong with them; consumers just couldn’t figure out how to use them.1

Feature creep occurs because:

• Features are easy or convenient to add — this is particularly common in software development.
• Features accumulate over multiple generations of a product.
• Features are added to appease internal project stakeholders. This is often due to what’s called the internal-audience problem: Designers or marketers think they know what’s best for the customer, but they discover that customers disagree.

Best case, feature creep changes the scope of a project, increasing time and cost with nominal impact on performance and the customer experience. Worst case, feature creep has unintended performance or usability consequences and negatively impacts the customer experience.

To avoid creeping featurism, be on the lookout for feature creep in design and development and educate your peers about the trap — feature additions and changes typically come in little bits and pieces, so vigilance is important. Ensure that features are linked to customer needs and not added out of convenience or appeasement. When creating a new version of a design, ask what can be subtracted; every good product release should subtract as well as add. Create project milestones to formally freeze product specifications — freeze means no more changes. Use awareness of feature creep to keep your projects on time and on budget and to avoid the unintended and sometimes catastrophic consequences of scope changes. Incorporate “feature subtraction” as a formal part of your product update cycle.

See also Death Spiral; Design by Committee; KISS; Ockham’s Razor; Pareto Principle; Progressive Subtraction; Social Trap

1

The term was originally coined creeping featurism in the 1980s, but the general concept was introduced in The Mythical Man-Month by Fred Brooks, 1975, Addison-Wesley.

On its maiden voyage in 1628, the Swedish warship Vasa sank after going less than one mile. The cause? Extra guns, decks, and carvings added during construction compromised its stability. Feature creep literally sank the ship.

064

Feedback Information about status or performance used for confirmation, decision-making, and improvement. Giving feedback to people can be challenging because there are human factors independent of the feedback that determine its effectiveness (e.g., attentional and working memory limits, background knowledge, motivation, perceived relevance).1

1

The research on feedback is complex, with many mixed findings. Orientation and targeting, however, are reliably effective principles across contexts. For more in-depth treatments, see, for example, “The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research” by Benedikt Wisniewski et al., Jan 2020, Frontiers in Psychology, 10, 3087.

2

It is, for example, a failing of many grading practices to issue letter grades without targeted feedback; and when targeted feedback is provided, it is often not accompanied by the time and opportunity to act on it.

3

See, for example, “Focus on Formative Feedback” by Valerie Shute, Mar 2008, Review of Educational Research, 78(1), 153 –189; and “A Review of Public Response to Short Message Alerts under Imminent Threat” by Erica Kuligowski and Jessica Doermann, Jan 2018, NIST Technical Note, 1982.

Whether feedback is system-to-person or person-to-person, there are two overarching principles essential to all effective feedback:

1. Orientation — Grabbing attention with a stimulus and then setting an appropriate affect and expectation of what’s next. For example, the system may provide feedback with a pop-up dialog and alert sound, complete with a warning icon and headline about the problem (system-to-person feedback); or a supervisor may request a meeting and tell their employee they have received a rating of “excellent” (person-to-person feedback). Common examples of orienting stimuli include color codes (e.g., red, yellow, green), mode indicators (e.g., drive, neutral, reverse), and letter grades (e.g., A, B, C).

2. Targeting — Providing guidance that is focused on what a person needs to do to remedy a problem or improve performance. This should follow an orienting stimulus and be short, plainly worded with no cryptic codes, and relevant to the context. For example, a pop-up dialog may read “Audio unit overheating. Power off for 30 minutes”. Or a supervisor may discuss improvements that will help the employee reach a “superior” rating next year. In both cases, the feedback provides guidance. Note that for targeted feedback to be effective, the recipient must have the time, opportunity, and ability to act on it.2

Common feedback errors include too little information (e.g., unintelligible error codes), too much information (e.g., multiple, unnecessary, or repetitive pop-up messages), or information at the wrong time (e.g., negative performance feedback a year after a work incident). Feedback that is too verbose, too frequent, or too delayed is ineffective. Therefore, systems should be designed to offer minimum-viable feedback and to offer it quickly when needed or requested.3

Feedback is the foundation for all usability and learning. Orient people, give them targeted guidance when needed, then repeat. The rule should be minimum-viable feedback delivered quickly. Note that if feedback is not useful or if it’s employed too frequently, people will habituate — and the feedback will become ineffective.

See also Anchoring; Feedback Loop; Habituation; Learnability
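The orientation-then-targeting structure maps directly onto how a system message can be composed. A minimal sketch, with hypothetical severity levels and wording:

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    severity: str  # orienting signal: "info", "warning", or "error"
    headline: str  # grabs attention and sets the expectation of what's next
    action: str    # targeted guidance: what to do, plainly worded

    def render(self) -> str:
        icons = {"info": "[i]", "warning": "[!]", "error": "[X]"}
        # Orientation first (icon and headline), then targeting (action).
        return f"{icons[self.severity]} {self.headline}\n    {self.action}"

# The overheating example from the text, restated in this form:
msg = Feedback("warning", "Audio unit overheating.", "Power off for 30 minutes.")
print(msg.render())
# An anti-pattern, for contrast: "ERR 0x83F2" orients and targets no one.
```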

Clarity of feedback is critical when managing complex control systems, especially during times of emergency. The Three Mile Island disaster began on March 28, 1979, with a pilot-operated relief valve (PORV) that was stuck open when it should have been closed. This open valve allowed large amounts of nuclear reactor coolant to escape. Operators were not aware of this problem because the PORV light was off, which they interpreted to mean closed. However, the light being off signaled that the system had merely sent a message to close the valve, not that the valve was closed. And then there was the inconsistency of the feedback systems: 14 different meanings for red lights and 11 different meanings for green lights.

But there was one essential thing whose failure loomed largest at TMI (Three Mile Island), one essential thing that we demand of any gadget in our lives: feedback…the machine just wasn’t telling the men what they needed to know. With every little thing they tried, they grabbed on to the wrong feedback, focusing on the wrong things. — Cliff Kuang and Robert Fabricant, User Friendly

065

Feedback Loop A cycle in which output feeds back into a system as input, changing subsequent output. Every action creates an equal and opposite reaction. When reactions loop back to affect themselves, a feedback loop is created. All dynamic, real-world systems are composed of many interacting feedback loops. Human physiology, traffic patterns, animal populations in the wild, and the spread of contagions are all examples of real-world systems with feedback loops.1

1

In terms of practical application, the seminal works on systems and feedback loops include Industrial Dynamics by Jay Forrester, 1961, MIT Press; Urban Dynamics by Jay Forrester, 1969, MIT Press; and World Dynamics by Jay Forrester, 1970, MIT Press.

There are two types of feedback loops:

2

See, for example, Why Things Bite Back: Technology and the Revenge of Unintended Consequences by Edward Tenner, 1997, Vintage Books.

3

See, for example, Macroscope: A New World Scientific System by Joël de Rosnay and Robert Edwards (Tr.), 1979, Harper & Row Publishers.

1. Positive feedback loops — Output is amplified, resulting in accelerated growth or decline. Positive feedback can therefore be useful for creating rapid change but generally results in negative consequences if not moderated by negative feedback loops. For example, in response to head and neck injuries in football in the 1950s, designers created plastic football helmets with internal padding to replace leather helmets. The helmets provided more protection but induced players to take increasingly greater risks when tackling. More head and neck injuries occurred than before. By concentrating on the problem in isolation (e.g., not considering changes in player behavior), designers inadvertently created a positive feedback loop in which players used their heads and necks in increasingly risky ways. This resulted in more injuries, which led to additional redesigns that made the helmet shells harder and more padded, and so on.2

2. Negative feedback loops — Output is dampened, resulting in equilibrium around a point. For example, the Segway Human Transporter uses negative feedback loops to maintain equilibrium. As a rider leans forward or backward, the Segway accelerates or decelerates to keep the system in equilibrium. To achieve this smoothly, the Segway makes 100 adjustments every second. Given the high adjustment rate, the oscillations around the point of equilibrium are so small as to be undetectable. However, if fewer adjustments were made per second, the oscillations would increase in size and the ride would become increasingly jerky (a toy simulation of this tradeoff appears below). Negative feedback can therefore be useful for stabilization but perilous because it can be difficult to manage.

Consider positive feedback loops to perturb systems to change, but include negative feedback loops to prevent runaway behaviors that can lead to system failure. Consider negative feedback loops to stabilize systems, but be cautious — too much negative feedback in a system can lead to stagnation.3

See also Archetypes, System; Iteration; Root Cause; Shaping; Social Trap
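The Segway example behaves like a proportional controller: each adjustment opposes the current error, and more frequent, smaller adjustments produce smoother behavior. A toy simulation of that idea follows; the gain and rates are illustrative, not Segway specifications.

```python
def max_overshoot(adjustments_per_second: int, k: float = 60.0,
                  seconds: float = 1.0, initial_lean: float = 10.0) -> float:
    """Discrete negative-feedback loop: each adjustment corrects the lean
    in proportion to its current size (lean -= k*dt*lean). With many small
    adjustments the lean decays smoothly; with few large ones it swings
    past vertical and oscillates -- the 'jerky ride' in the example."""
    dt = 1.0 / adjustments_per_second
    lean = initial_lean
    overshoot = 0.0
    for _ in range(int(seconds * adjustments_per_second)):
        lean -= k * dt * lean              # correction opposes the error
        overshoot = max(overshoot, -lean)  # lean past vertical = oscillation
    return overshoot

print(max_overshoot(100))  # 100 Hz: k*dt = 0.6, smooth decay  -> 0.0
print(max_overshoot(40))   #  40 Hz: k*dt = 1.5, oscillates    -> 5.0
```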

Bridges resist dynamic loads using structures and materials that create negative feedback. But the negative feedback built into the 1940 Tacoma Narrows Bridge was no match for the positive feedback between the bridge’s deflection and the wind. The Tacoma Narrows Bridge collapsed five months after it opened.

066

Fibonacci Sequence A sequence of numbers that forms patterns commonly found in nature. In a Fibonacci sequence of numbers, each number is the sum of the two preceding numbers (e.g., 1, 1, 2, 3, 5, 8, 13). Patterns exhibiting the sequence are commonly found in natural forms, such as the petals of flowers, spirals of galaxies, and bones in the human hand. The ubiquity of the sequence in nature has led many to conclude that patterns based on the Fibonacci sequence are intrinsically aesthetic and, therefore, worthy of consideration in design.1

Fibonacci patterns are found in many classic works, including poetry, art, music, and architecture. For example, it has been argued that Virgil used Fibonacci sequences to structure the poetry in The Aeneid. Fibonacci sequences are also found in the musical compositions of Mozart’s sonatas and Beethoven’s Fifth Symphony. Le Corbusier meshed key measures of the human body and Fibonacci sequences to develop the Modulor, a classic system of architectural proportions and measurements to aid designers in achieving practical and harmonious designs.2

Fibonacci sequences are generally used in concert with the golden ratio. The division of any two adjacent numbers in a Fibonacci sequence yields an approximation of the golden ratio. Approximations are rough for early numbers in the sequence but increasingly accurate as the sequence progresses.

As with the golden ratio, debate continues as to the aesthetic value of Fibonacci patterns. Are such patterns considered aesthetic because people find them to be more aesthetic or because people have been taught to believe they are aesthetic? Research on the aesthetics of the golden ratio indicates that we have a preference for golden proportions in linear or rectilinear forms, though the effect is small; also, many of the studies are old and their methodologies a bit weak. Little empirical research exists on the aesthetics of non-golden Fibonacci patterns.3

The Fibonacci sequence continues to be one of the most influential patterns in mathematics and design. Consider Fibonacci sequences when developing interesting compositions, geometric patterns, and organic motifs and contexts, especially when they involve rhythms and harmonies among multiple elements. Do not contrive designs to incorporate Fibonacci sequences, but do not forgo opportunities to explore Fibonacci relationships when other aspects of the design are not compromised.

See also Golden Ratio; Self-Similarity; Wabi-Sabi
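The relationship between the sequence and the golden ratio is easy to verify numerically. A minimal sketch:

```python
def fibonacci(n: int) -> list[int]:
    """First n Fibonacci numbers: each is the sum of the two before it."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

seq = fibonacci(12)
print(seq)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

# Dividing each number by its predecessor approximates the golden ratio
# (~1.618), roughly at first and increasingly accurately later on:
for a, b in zip(seq, seq[1:]):
    print(f"{b}/{a} = {b / a:.4f}")
```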

1

The seminal work on the Fibonacci sequence is Liber Abaci [Book of the Abacus] by Leonardo of Pisa, 1202. Contemporary seminal works include The Geometry of Art and Life by Matila Ghyka, 1946/1978, Dover Publications; and Elements of Dynamic Symmetry by Jay Hambidge, 1920/1978, Dover Publications.

2

See, for example, Structural Patterns and Proportions in Virgil’s Aeneid by George Eckel Duckworth, 1962, University of Michigan Press; “Did Mozart Use the Golden Section?” by Mike May, Mar – Apr 1996, American Scientist, 84(2), 118–120; and Le Modulor by Le Corbusier, 1948/2000, Birkhäuser.

3

“All That Glitters: A Review of Psychological Research on the Aesthetics of the Golden Section” by Christopher Green, 1995, Perception, 24, 937– 968.


Le Corbusier derived two Fibonacci sequences based on key features of the human form to create the Modulor. The sequences purportedly represent a set of ideal measurements to aid designers in achieving practical and harmonious proportions in design. Golden ratios were calculated by dividing each number in the sequence by its preceding number (indicated by horizontal lines).

067

Figure-Ground The brain automatically makes elements objects of focus or background. Figure-ground, one of the Gestalt principles of perception, asserts that the human perceptual system separates stimuli into either figure elements or ground elements. Figure elements are the objects of focus, and ground elements compose an undifferentiated background. This relationship can be demonstrated with visual stimuli, such as photographs, as well as auditory stimuli, such as soundtracks with dialogue and background music.1

1

The seminal work on the figure-ground relationship is “Synsoplevede Figurer” [Figure and Ground] by Edgar Rubin, 1915, Gyldendalske, translated and reprinted in Readings in Perception by David C. Beardslee and Michael Wertheimer (Eds.), 1958, D. Van Nostrand Company, 194–203.

Whenever visual elements are juxtaposed, we automatically perceive relationships among them. Elements are perceived as either figure or ground in accordance with the following visual cues:

2

“Lower Region: A New Cue for Figure-Ground Assignment” by Shaun P. Vecera et al., 2002, Journal of Experimental Psychology: General, 131(2), 194 – 205.

• The figure has a definite shape, whereas the ground is shapeless.
• The ground continues behind the figure.
• The figure seems closer, with a clear location in space, whereas the ground seems farther away and has no clear location in space.
• Elements below a horizon line are more likely to be perceived as figures, whereas elements above a horizon line are more likely to be perceived as ground.
• Elements in the lower regions of a design are more likely to be perceived as figures, whereas elements in the upper regions are more likely to be perceived as ground.2

When the figure and ground of a composition are clear, the relationship is considered stable; the figure element receives more attention and is better remembered than the ground. In unstable relationships, the figure and ground are neither clear nor stable. Unstable figure-ground relationships can be reversible or ambiguous. In reversible figure-ground relationships, the figure and ground attract attention equally, resulting in what some consider a dynamic design. In ambiguous figure-ground compositions, the interpretation of elements is not clear and may depend on the viewer.

Consider the figure-ground principle when the goal is to focus attention and minimize perceptual confusion. Ensure that designs have stable figure-ground relationships by incorporating appropriate visual cues. Increase the probability of recall of key elements by making them figures in the composition. If the goal is to create a dynamic visual design, consider using reversible figure-ground relationships.

See also Closure; Common Fate; Good Continuation; Perspective Cues; Proximity; Signal-to-Noise Ratio; Similarity; Uniform Connectedness

When logos have the company name low, below the horizon line (e.g., left column), the name becomes a figure element. Because of this, the name will receive more attention and be better remembered than designs that place the name at the top of the logo (e.g., Paramount [below]). And when graphical elements are cut out and high contrast (e.g., the adidas mountain), they are more likely to be perceived as figures versus elements that are low contrast (e.g., the Toblerone mountain) or inscribed in backgrounds (e.g., Patagonia).

068

First Principles Things we know for certain to be true and that are not derived from anything else. First principles are things about the world that we know to be true — or, at least, that we posit to be true or have very high confidence are true — and that aren’t derived or reasoned from other things.1 First principles are irreducible and immutable, like laws of nature, and as such represent the canonical set of constraints and rules governing problem solving and design. All other constraints and rules are derived or postulated — i.e., they are, in a sense, artificial. Examples of first principles include Newton’s laws of motion in physics, the proposition that all people are created equal in law, working memory limits in psychology, and the mechanical properties of materials in engineering.

Reasoning or designing from first principles means thinking in terms of what’s theoretically possible versus what’s currently known or done. The process involves identifying the first principles germane to a particular problem, deliberately removing all other assumptions from consideration, and then exploring solutions with just the first principles in mind. Designing from first principles liberates designers from the biases and conventions of the day, enabling them to see new opportunities and explore innovative approaches. For example, in considering whether to buy or build rockets, the founder of SpaceX, Elon Musk, estimated the cost of raw materials needed to build a rocket to be about 2% of the price of commercially available rockets. He also knew that conventional rockets were inefficiently manufactured and expensive because they were single use (akin to using a commercial airliner for one flight and then throwing it away), which meant plenty of room for improvement and cost reduction. A first-principles analysis laid bare the opportunity, which led to the formation of SpaceX.2

Design from first principles when significant innovation is the goal. The essence of designing from first principles is embodied in the Latin phrase nullius in verba (take nobody’s word for it) — i.e., question everything and trust only that which is irreducible and immutable. Disregard constraints not based on fundamental laws and principles. Note, however, that not all problems require (or have time and budget for) a first-principles approach; reserve it for contexts where significant innovation is required.

See also Archetypes, System; Iron Triangle; Kano Model; Levels of Invention

1

The concept of first principles is rooted in philosophy, introduced by Aristotle as “the first basis from which a thing is known” in Metaphysics.

2

“Elon Musk’s Mission to Mars” by Chris Anderson, Oct 21, 2012, Wired.

In the late 1930s, the British Air Ministry issued a request for proposals to create a twin-engine, medium-range bomber. Aviation firms responded with conventional designs featuring metal airframes, high-powered engines, and multiple defensive turrets. Geoffrey de Havilland and his design team knew that essential metals and metalworkers would be in short supply during wartime and that wood had a strength-to-weight ratio that was equal to or better than light alloys or steel. Designing from first principles, their solution: a streamlined bomber constructed mostly of wood. It wouldn’t need defensive armaments because it would be faster than enemy aircraft. Additional benefits of this approach included plentiful raw materials, speed of prototyping, rapid development and design iteration, and employment of an underutilized workforce: woodworkers. After initial resistance to its radical design, the de Havilland Mosquito was eventually approved for production. When it entered service in 1941, it was one of the fastest operational aircraft in the world and would become known affectionately as “Mossie” or the “Wooden Wonder”.

In 1940 I could at least fly as far as Glasgow in most of my aircraft, but not now! It makes me furious when I see the Mosquito. I turn green and yellow with envy. The British, who can afford aluminum better than we can, knock together a beautiful wooden aircraft that every piano factory over there is building, and they give it a speed which they have now increased yet again. What do you make of that? There is nothing the British do not have. They have the geniuses, and we have the nincompoops. — Hermann Göring, 1943

069

Fitts’ Law The time required to touch a target is a function of the target size and the distance to the target. Fitts’ law, proposed by the psychologist Paul Fitts, states that close, large targets can be accessed more quickly and with fewer errors than distant, small targets. In addition, the faster the required movement and the smaller the target, the greater the error rate due to a speed-accuracy tradeoff. Fitts’ law has implications for the design of controls, control layouts, and any device that facilitates movement to a target.1

1

The seminal work on Fitts’ law is “The Information Capacity of the Human Motor System in Controlling Amplitude of Movement” by Paul Fitts, 1954, Journal of Experimental Psychology, 4, 381–391. The Fitts’ law equation is MT = a + b log2(d/s + 1), where MT = movement time to a target, a = 0.230 sec, b = 0.166 sec, d = distance between pointing device and target, and s = size of the target. For example, assume the distance between the center of a screen and an icon of 1 inch (2.5 cm) diameter is 6 inches (15 cm). The time to acquire the icon would be MT = 0.230 sec + 0.166 sec × log2(6/1 + 1) ≈ 0.7 sec.

2

See “Human Performance Times in Microscope Work” by Gary Langolf and Walton Hancock, 1975, AIIE Transactions, 7(2), 110 –117; and “Application of Fitts’ Law to Foot-Pedal Design” by Colin Drury, 1975, Human Factors, 17(4), 368 – 373.

Fitts’ law is applicable to rapid pointing movements, as opposed to more continuous movements such as writing or drawing. Pointing movements typically consist of:

1. A ballistic movement — One large, quick movement toward a target.

2. Homing movements — Fine-adjustment movements that result in a resting position over (acquiring) the target. Homing movements are responsible for most of the movement time and cause most errors.

Fitts’ law has most often been used to model pointing to an object or computer screen using the finger or a pointing device. It has also been used to predict efficiency of movement for assembly work performed under a microscope, as well as movement of a foot to a car pedal. The law is predictive over a wide variety of conditions, devices, and people.2

Designers can decrease errors and improve usability by understanding the implications of Fitts’ law. For example, when pointing to an object on a computer screen, movement in the vertical or horizontal dimensions can be constrained, which dramatically increases the speed with which objects can be accurately acquired. This kind of constraint is commonly applied to controls such as scroll bars but less commonly to the edges of the screen, which also act as a barrier to cursor movement; positioning a button along a screen edge or in a screen corner significantly reduces the homing movements required, resulting in fewer errors and faster acquisitions.

Consider Fitts’ law when designing systems that involve pointing. Make sure that pointing targets are near or large, particularly when rapid movements are required and accuracy is important. Likewise, make targets more distant and smaller when they should not be frequently used or when they will cause problems if accidentally activated. Consider strategies that constrain movements to improve performance and reduce error.

See also Constraint; Error, Design; Error, Human; Hick’s Law; Performance Load
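The equation in the sidenote translates directly into code. A minimal sketch using the constants given there (a = 0.230 sec, b = 0.166 sec); the second call is an added illustration of how target size trades against acquisition time.

```python
from math import log2

def movement_time(distance: float, size: float,
                  a: float = 0.230, b: float = 0.166) -> float:
    """Fitts' law: MT = a + b * log2(d/s + 1). Seconds to acquire a
    target of width `size` at `distance` (any consistent units)."""
    return a + b * log2(distance / size + 1)

# The worked example from the sidenote: a 1-inch icon, 6 inches away.
print(round(movement_time(6, 1), 1))  # -> 0.7 seconds

# Doubling the target size lowers the index of difficulty, so the
# same-distance target is acquired faster:
print(round(movement_time(6, 2), 2))  # -> 0.56 seconds
```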

The time and error rate involved in whacking a mole is a function of the distance between the whacker and the mole.

070

Five Hat Racks A metaphor representing the five ways information can be organized. The term hat racks is built on an analogy — hats as information and racks as the ways to organize information. The organization of information is one of the most powerful factors influencing the way people think about and interact with a design.1

The five hat racks principle asserts that there are five organizational strategies, regardless of the specific application:

1. Category — Organization by similarity or relatedness. Examples include areas of study in a college catalog and types of retail merchandise on a website. Organize information by category when clusters of similarity exist within the information or when people will naturally seek out information by category. For example, a person desiring to purchase a stereo may seek a category for electronics.

2. Time — Organization by chronological sequence. Examples include historical timelines, meeting agendas, and television program guides. Organize information by time when presenting and comparing events over fixed durations or when a time-based sequence is involved.

3. Location — Organization by geographical or spatial reference. Examples include historic sites, emergency exit maps, and travel guides. Organize information by location when orientation and wayfinding are important or when information is meaningfully related to the geography of a place.

4. Alphabet — Organization by alphabetical sequence. Examples include dictionaries, encyclopedias, and lists of contact information, as well as this book. Organize information alphabetically when information is referential, when efficient nonlinear access to specific items is required, or when no other organizing strategy is appropriate.

5. Continuum — Organization by magnitude (e.g., highest to lowest, best to worst). Examples include baseball batting averages and Internet search engine results. Organize information by continuum when comparing things across a common measure.

Consider the five hat racks when organizing information for presentation. Be intentional with its application, as the architecture of information influences how it will be perceived and remembered. Ensure that the chosen hat rack is relevant to the goal and reinforces the aspects of the information that are most important.

See also Consistency; Framing; Hierarchy; Similarity; Storytelling; Wayfinding
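Each hat rack is, in effect, a different sort key over the same records. A minimal sketch with hypothetical catalog records; only the five organizing strategies come from the text.

```python
# Hypothetical records; names and values are illustrative.
items = [
    {"name": "Espresso Maker", "category": "Kitchen", "year": 2019,
     "aisle": "B2", "rating": 4.1},
    {"name": "Desk Lamp", "category": "Lighting", "year": 2021,
     "aisle": "A5", "rating": 4.7},
    {"name": "Air Purifier", "category": "Appliances", "year": 2020,
     "aisle": "C1", "rating": 3.9},
]

hat_racks = {
    "category": lambda i: i["category"],   # similarity or relatedness
    "time": lambda i: i["year"],           # chronological sequence
    "location": lambda i: i["aisle"],      # spatial reference
    "alphabet": lambda i: i["name"],       # referential lookup
    "continuum": lambda i: -i["rating"],   # magnitude, best first
}

for rack, key in hat_racks.items():
    ordered = [i["name"] for i in sorted(items, key=key)]
    print(f"{rack:>9}: {ordered}")
```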

1

The seminal work on the five hat racks is Information Anxiety by Richard Saul Wurman, 1990, Bantam Books. Note that Wurman changed the hat rack title of continuum to hierarchy in a later edition of the book, which permits the acronym LATCH. The original title continuum is presented here, as the authors believe it to be a more accurate description of the category.

[Infographic: six dog breeds (English Bulldog, French Bulldog, German Shepherd, Golden Retriever, Labrador Retriever, Poodle) organized five ways: alphabetically by name; by time (year recognized by the AKC, 1886 to 1925); by location (country of origin: Canada, England, Germany, Scotland); by continuum (popularity rank, 1 to 6); and by category (AKC group: herding, non-sporting, sporting).]

Six breeds of dog organized using the five hat racks. Note how the method of organization influences the story the data tell. Data are from the American Kennel Club (AKC), 2021.


071

Five Tenets of Queuing Five principles for improving the experience of waiting in lines. Average Americans spend more than two years of their lives waiting in lines. For those living in population-dense areas, the estimate jumps to as much as five years. It turns out that the experience of such waits has as much to do with the psychology of the design as with the physical layout.1

The five tenets of queuing offer guidance for enhancing the experience of people waiting in lines:

1. Occupied time feels shorter than unoccupied time — When the mind is occupied, it is distracted from the idleness of waiting. For example, mirrors installed near elevator doors have been used to reduce complaints about wait times. When people are primping in mirrors, they get distracted from delays and complain less.

2. People want to get started — A perceived lack of progress is frustrating; therefore, breaking an experience into progressive stages can be helpful. For example, it is common for doctors to have exam rooms that are essentially secondary waiting rooms — wait in the waiting room, your name is called, and then wait in the exam room.

3. Anxiety makes waits seem longer — When waiting in line for a scarce resource, anxiety can result, making wait times seem longer. Interventions that disassociate position in line from access to the offering can mitigate this. For example, a long line into a theater can create anxiety for those in the back, who fear they will get a low-quality seat. Solutions like assigned seating alleviate this anxiety and make wait times more tolerable.

4. Uncertain waits seem longer than known, finite waits — Uncertainty about when one will get served creates anxiety, which makes wait times seem longer. For example, when offices post signs like “We’ll be back soon”, customers get frustrated far more quickly than when the signs are specific, like “Back in 10 minutes”. And when such expectations are exceeded, delays are often perceived as a net positive.

5. Unfair waits seem longer than equitable waits — People prioritize fairness over efficiency (a toy simulation of this appears below). For example, people would rather wait in a long, single-line configuration than in shorter, multi-line configurations where the lines move at different rates. Line psychology is “first come, first served”; any deviation from this rule exacerbates perceived wait times.

Consider the five tenets of queuing when designing experiences and processes that involve people waiting in lines. Keep people busy, moving, relaxed, aware of their position, and served in order of arrival. Remember that perceived wait time is more important than actual wait time.

See also Entry Point; Nudge; Peak-End Rule; Progressive Disclosure; Visibility
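The fairness tenet lends itself to a toy simulation: the same customers and service times are handled either by one shared line or by separate per-register lines, and the separate lines let later arrivals overtake earlier ones. A minimal sketch; all parameters are arbitrary.

```python
import heapq
import random

rng = random.Random(7)
SERVERS = 3
services = [rng.uniform(1, 10) for _ in range(60)]  # minutes per customer
# Customers are indexed in arrival order: customer 0 arrived first.

def single_line(services, servers):
    """One shared queue: each arrival goes to whichever register frees
    up first, so service order matches arrival order."""
    free_at = [0.0] * servers
    heapq.heapify(free_at)
    starts = []
    for s in services:
        t = heapq.heappop(free_at)  # earliest-free register
        starts.append(t)
        heapq.heappush(free_at, t + s)
    return starts

def separate_lines(services, servers):
    """One queue per register, each arrival picks a line at random: a
    late arrival in a fast-moving line overtakes earlier arrivals."""
    backlog = [0.0] * servers
    starts = []
    for s in services:
        q = rng.randrange(servers)
        starts.append(backlog[q])
        backlog[q] += s
    return starts

def overtakes(starts):
    """Pairs served out of arrival order: a proxy for perceived unfairness."""
    return sum(1 for i in range(len(starts))
               for j in range(i + 1, len(starts))
               if starts[j] < starts[i])

for name, starts in [("single line", single_line(services, SERVERS)),
                     ("separate lines", separate_lines(services, SERVERS))]:
    mean_wait = sum(starts) / len(starts)
    print(f"{name:>14}: mean wait {mean_wait:5.1f} min, "
          f"out-of-order pairs {overtakes(starts)}")
```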

1

Proposed in “The Psychology of Waiting Lines” by David Maister, 1984, Background Note 684-064, Harvard Business School. See also “Designing Waits That Work” by Donald Norman, Jul 2009, MIT Sloan Management Review, 50(4), 23.

Occupied time feels shorter. “Hidden Mickeys” are strategically placed in queues throughout the Disney Parks. This gives people standing in line something to do.

People want to get started. Airline passengers show up at airports and wait, line up at gates and wait, board planes and wait, and then often wait some more on the runway. But as long as people perceive progress, they are happier.

Anxiety makes waits seem longer. Waiting for a table in a crowded, noisy restaurant can be stressful. Buzzers reduce this stress by allowing patrons to roam freely without fear of not hearing their name called.

Uncertain waits seem longer. Some elevator panels don’t indicate where elevators are or whether they are moving. This lack of status makes wait times seem longer.

Unfair waits seem longer. Everybody’s default rule is “first come, first served”. Issuing tickets with numbers ensures that people in the queue are served in the order in which they arrived.

072

Flexibility Tradeoffs As the flexibility of a design increases, the performance of the design decreases. The idea that flexible designs can perform as well as specialized designs has great appeal, but unless existing specialized designs are terribly inefficient, this is rarely the case. For example, no one car design is optimal for all types of races. When optimal performance is required, designs need to be specialized. When average performance across a wider range of contexts is required, designs need to be flexible. You can’t have both.

Compare the design of top-fuel dragsters to Formula One cars to off-road racers. Each vehicle is designed to perform optimally in a particular context. Now compare these to consumer cars, which perform well across a wide range of contexts but can’t be competitive in such specialized races. Flexibility tradeoffs are the reason why.1

Flexibility requires tradeoffs in usability and performance when the design requirements conflict or are in tension with one another. For example, off-road racers need a high ground clearance to clear obstacles, whereas a Formula One car needs to sit low for aerodynamics and cornering. These requirements are in tension, and one vehicle design that tries to meet both will be far from optimal.

When requirements are similar and can be combined without significant tradeoffs, there are potential opportunities for integrating multiple distinct functionalities into one design. If the answers to the following five questions are “yes”, then flexibility tradeoffs may be minimal and there is a potential opportunity for one general-purpose design to be successful:

1. Is suboptimal performance good enough?
2. Is the performance environment stable over time?
3. Are functional and usability requirements similar?
4. Can the functions be modularized in the design?
5. Does the value of flexibility justify the tradeoffs?

There is an old saying that holds true for design: “A jack of all trades is a master of none but oftentimes better than a master of one”. Consider flexibility tradeoffs in design strategy, especially when exploring the merging of distinct functional systems into a single integrated system. When seeking to optimize performance in a narrow context, pursue specialized designs. When seeking to optimize flexibility, set expectations that performance and usability will likely be compromised. The exception is when requirements are highly similar or when there is compensating merit for the integration.

See also Ackoff’s Law; Brooks’ Law; Convergence; Conway’s Law; Modularity; Reverse Salient

1

See, for example, The Invisible Computer by Donald Norman, 1999, MIT Press; and “The Visible Problems of the Invisible Computer: A Skeptical Look at Information Appliances” by Andrew Odlyzko, 1999, First Monday, 4(9), www.firstmonday.org.

The F-35 Joint Strike Fighter was conceived in the late 1990s as a replacement for several aircraft: the Air Force F-16 air superiority fighter, the A-10 ground support attack aircraft, the Navy F/A-18 multirole combat aircraft designed for carrier takeoff and landing, and the Marine Harrier vertical takeoff and landing ground attack aircraft. Each of these roles has distinct performance requirements, many of which are in tension with one another. For example, the optimal design to fly high and fast (air superiority) is significantly different from the optimal design to fly low and slow (ground support). To optimize for one compromises the other. The inevitable result is a very complex, expensive, and unreliable Swiss Army knife of an aircraft with suboptimal performance across its functional roles. The question to be answered: Do the benefits of integration outweigh the costs of such flexibility tradeoffs?

F-35A is “double inferior”…inferior acceleration, inferior climb, inferior sustained turn capability. Also has lower top speed. Can’t turn, can’t climb, can’t run. — John Stillion and Harold Scott Perdue, analysts at RAND, in a written report, “Air Combat Past, Present and Future”, August 2008

073

Flow A state of immersion so intense that awareness of the real world is lost. When people are not challenged, they become bored; and when challenged too much, they become anxious. Flow occurs in the Goldilocks zone between these two states, where people are challenged at or near their maximum skill level but not beyond. People in a state of flow lose track of time and experience feelings of joy and satisfaction. But not only are flow states pleasing to experience; they have also been shown to help people reach peak performance. One 10-year longitudinal study showed people in flow states were five times more productive. Despite this potential, flow states rarely occur in everyday life because challenges and skills are rarely matched long enough to sustain them.1 But if there are consistent factors that enable flow, they can be intentionally considered in the design of activities and experiences. Nine components of flow have been proposed: 1. Goals — People can achieve the goal with their abilities and skill set. 2. Concentration — People are able to focus deeply. 3. Loss of self — People lose awareness of themselves and their thoughts. 4. Time — People lose the sense of the passage of time. 5. Feedback — People receive clear and immediate feedback. 6. Balance — People feel a balance between skill level and the challenge. 7. Control — People have a sense control over the activity. 8. Rewarding — People find the activity enjoyable on its own merits. 9. Immersion — People become absorbed and lost in the activity.

The rare and multivariate nature of flow makes it difficult to verify experimentally. The construct was derived by interviewing hundreds of chess players, musicians, rock climbers, etc. and comparing descriptions of their best moments. As such, flow is not an everyday occurrence but, rather, a rare perfect storm of confluent factors. Not surprisingly, the hundreds of empirical studies that have been conducted trying to pin down the construct have yielded mixed results, with definitions of flow varying from study to study.2

Consider elements of flow in activities that benefit from intense focus, such as instruction, games, coding, writing, and playing music. Place special emphasis on setting clear and achievable goals, matching difficulty to skill level, and providing clear and immediate feedback.

See also Control; Depth of Processing; Gamification; Miller’s Law; Performance Load
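To make the matching of difficulty to skill level concrete, here is a minimal Python sketch in the spirit of dynamic difficulty adjustment in games. The function name, step size, and tolerance are illustrative assumptions, not values from the flow literature.

```python
def adjust_difficulty(difficulty, skill, step=0.1, tolerance=0.05):
    """Nudge the challenge level toward the performer's skill level,
    keeping the activity in the channel between boredom and anxiety.
    All thresholds here are illustrative, not empirical."""
    if skill - difficulty > tolerance:   # too easy: drifting toward boredom
        return difficulty + step
    if difficulty - skill > tolerance:   # too hard: drifting toward anxiety
        return difficulty - step
    return difficulty                    # balanced: leave the challenge alone

# As a player's measured skill grows, the difficulty tracks it.
level = 0.2
for skill in (0.3, 0.5, 0.6, 0.6):
    level = adjust_difficulty(level, skill)
    print(round(level, 1))  # -> 0.3, 0.4, 0.5, 0.6
```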

1. The seminal work is Beyond Boredom and Anxiety by Mihaly Csikszentmihalyi, 1975, Jossey-Bass. See also Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi, 1990, Cambridge University Press. For productivity, see “Increasing the meaning quotient of work” by Susie Cranston and Scott Keller, 2013, McKinsey Quarterly, 1, 48–59.

2. See, for example, “Investigating the ‘Flow’ Experience: Key Conceptual and Operational Issues” by Sami Abuhamdeh, 2020, Frontiers in Psychology, 11(158), 1–13.

[Diagram: the flow channel. Difficulty level is plotted against skill level; flow occupies the band where the two are matched, with boredom below it (challenge too low) and anxiety above it (challenge too high).]

[Flow is] being completely involved in an activity for its own sake. The ego falls away. Time flies. Every action, movement, and thought follows inevitably from the previous one, like playing jazz. Your whole being is involved, and you’re using your skills to the utmost.
— Mihaly Csikszentmihalyi, Wired, September 1, 1996

074

Forgiveness
Designs should help people avoid errors and protect them from harm when errors do occur.

Human error is inevitable, but it need not be catastrophic. Forgiveness in design helps prevent errors before they occur and minimizes the negative consequences of errors when they do occur. Forgiving designs provide a sense of security and stability, which, in turn, fosters a willingness to learn, explore, and use the design.1

Common strategies for incorporating forgiveness in designs include:
• Good affordances — Physical characteristics of the design that influence its correct use (e.g., a uniquely shaped plug that can only be inserted into the appropriate receptacle).
• Reversibility of actions — Actions can be reversed if an error occurs or the intent of the person changes (e.g., the undo function in software).
• Safety nets — A device or process that minimizes the negative consequences of a catastrophic error or failure (e.g., a pilot ejection seat in aircraft or the autosave function in software applications).
• Confirmation — Verification of intent is required before critical actions are allowed (e.g., a lock that must be opened before equipment is activated).
• Warnings — Signs, prompts, or alarms used to warn of imminent danger (e.g., road signs warning of a sharp turn ahead).
• Help — Information that assists in basic operations, troubleshooting, and error recovery (e.g., documentation or a helpline).

The preferred methods of achieving forgiveness in a design are affordances, reversibility of actions, and safety nets. Designs that use these strategies require minimal confirmations, warnings, and help — i.e., if the affordances are good, help is less necessary; if actions are reversible, confirmations are less necessary; if safety nets are strong, warnings are less necessary. When using confirmations, warnings, and help systems, avoid cryptic messages or icons. Ensure that messages clearly state the risk or problem and also what actions can or should be taken. Too many confirmations or warnings impede the flow of interaction and increase the likelihood that the message will be ignored. Be aware that the amount of help necessary to successfully interact with a design is inversely proportional to the quality of the design — if a lot of help is required, the design is poor.

See also Affordance; Confirmation; Constraint; Error, Design; Error, Human; Factor of Safety; Poka-Yoke; Weakest Link
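As a concrete illustration of the reversibility strategy, here is a minimal Python sketch of an undo stack, the mechanism behind the undo function mentioned above. The class and method names are hypothetical, invented for this example.

```python
class ForgivingEditor:
    """A minimal sketch of reversibility: each action records how to undo itself."""

    def __init__(self):
        self.text = ""
        self._undo_stack = []

    def insert(self, s):
        previous = self.text  # captured per call, so each undo is independent
        self.text += s
        self._undo_stack.append(lambda: setattr(self, "text", previous))

    def undo(self):
        if self._undo_stack:
            self._undo_stack.pop()()  # reverse the most recent action

editor = ForgivingEditor()
editor.insert("Hello, ")
editor.insert("wrold!")  # an inevitable slip
editor.undo()            # forgiven, with no warning or confirmation needed
print(editor.text)       # -> "Hello, "
```

Because errors are cheap to reverse, the design needs fewer confirmations and warnings, exactly the tradeoff described above.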

1. See, for example, Human Interface Guidelines: The Apple Desktop Interface, 1987, Apple Computer, Inc.

In case of a catastrophic failure, the ballistic recovery system acts as a safety net, enabling the pilot and craft to return safely to earth.

To err is human, to forgive design.
— Andrew Dillon, Designing Usable Electronic Text

075

Form Follows Function
Aesthetic considerations should be secondary to functional considerations.

The maxim form follows function was proposed by the architect Louis Sullivan, who wrote, “Form ever follows function. This is the law”. To Sullivan, the question was: What overriding priority should be used to drive a building’s design? His conclusion was that a building’s exterior design should reflect its interior functions or purpose — i.e., it should be designed from the inside out so that its form follows its function. This maxim was adopted and popularized by modernist architects and designers in the early twentieth century who emphasized function over ornamentation. The principle has since been embraced as a tried-and-true heuristic by designers.1

The notion of whether form should, indeed, follow function has been debated ever since its introduction. Modernist architects such as Le Corbusier, Mies van der Rohe, and Walter Gropius embraced the maxim, translating it into a new style of minimalist building design that emphasized functional forms without ornamentation. Others rejected this interpretation. For example, Frank Lloyd Wright argued that Sullivan had been misunderstood and that “form and function should be one, joined in spiritual union”. Numerous variants of the phrase have been proposed, including form follows emotion, function follows human needs, and form follows behavior. Much of the debate centers on confusion around the word function.
• Some use function to mean how a thing works.
• Others use function to mean a thing’s purpose, or raison d’être.

With respect to the first meaning, it is true that how a thing looks should be in harmony with how it works; but there is no essentiality to this. It is an arbitrary value judgment. Form follows function is no more valid than form follows tradition. By contrast, with respect to the second meaning, how a thing looks should always follow its purpose — i.e., what it seeks to accomplish. There is essentiality to this, as surely all aspects of a design should serve the goal for which it is being created. In this meaning, it is not a stylistic declaration but a design imperative: Form should always follow function.

Apply form follows function in the sense of ensuring that all aspects of design support its goal. Don’t invoke it as a dogmatic minimalist slogan but, rather, consider it as a guide to set design strategy and support decision-making. Remember, depending on the goal, beauty can be as functionally important as how something works.

See also Aesthetic-Usability Effect; Appeal to Nature; Faith Follows Function; KISS; Ockham’s Razor

1. The origin of the concept is attributed to the eighteenth-century Jesuit monk Carlo Lodoli. His theories on architecture likely influenced later designers like Horatio Greenough and Louis Sullivan who then articulated the concept in popular form. The seminal works on form follows function are “The Tall Office Building Artistically Considered” by Louis H. Sullivan, Mar 1896, Lippincott’s Magazine; and Form Follows Fiasco: Why Modern Architecture Hasn’t Worked by Peter Blake, 1977, Little, Brown, and Company.

In the aftermath of Hurricane Katrina in 2005, Brad Pitt’s nonprofit Make It Right Foundation committed to help rebuild houses in the Lower Ninth Ward of New Orleans. The houses were designed by renowned architects such as Frank Gehry, David Adjaye, and Shigeru Ban and certified LEED Platinum by the USGBC. The intentions were good, but the outcomes were not. The homes were not designed to properly withstand the climate of New Orleans, leading to rampant rot and mold. Many of the local features of the neighborhood were not taken into consideration, such as closely located shotgun cottages with generous front porches. Experimental materials were used that didn’t work as expected, leading to insulation and structural problems. Out of the 106 buildings built, 2 have been demolished, 6 abandoned, and about 90 have undergone significant structural repairs or renovations. In the end, the houses were designed by and for architects, not for the people living in them. Function followed form.

[Residents] say the houses were built too quickly, with low-quality materials, and that the designs didn’t take into account New Orleans’ humid, rainy climate.
— NBCNews.com, September 12, 2018

076

Framing
A method of presenting choices in specific ways to influence decision-making and judgment.

Framing is the use of images, words, and context to sway how people think about something. Information can emphasize the positive (e.g., glass is half-full) or the negative (e.g., glass is half-empty). The type of frame used to present information dramatically affects how people make decisions and judgments and is consequently a powerful influencer of behavior. News media, politicians, propagandists, and advertisers all commonly use framing (knowingly or unknowingly) with great effect.1

Headlines covering a tragic event in October 2002 illustrate the power of framing. Russian Special Forces used a sedating gas to knock out Chechen rebels who were holding over 750 hostages captive in the Moscow Theater. The gas prevented the rebels from setting off explosives and killing all of the hostages, but the gas itself caused the death of well over 100 hostages. Newspapers throughout the world reported the incident as either Gas Kills Over 100 Hostages or Gas Saves Over 500 Hostages. The negative frame presents the information in a way that suggests the Russians bungled the affair, while the positive frame suggests the Russians cleverly salvaged a seemingly intractable situation.

Advertising makes great use of the framing principle. For example, it is common to see yogurt advertised as 95% fat free rather than 5% fat rich; and tobacco legislation has been defeated more than once by framing the legislation as a matter of taxation instead of a matter of public health. Frames that emphasize benefits are most effective for audiences focused on aspiration and pleasure seeking. Frames that emphasize losses are most effective for audiences focused on security and pain avoidance. Positive frames result in proactive behaviors. Negative frames result in reactive behaviors. Stress and time pressures amplify these behaviors, a phenomenon frequently exploited in high-pressure sales. However, when people are exposed to multiple conflicting frames, the framing effect is neutralized and people think and act consistently with their own beliefs.

Use framing to elicit positive or negative feelings about a design and to influence behaviors and decision-making. Use positive frames to move people to action (e.g., make a purchase) and negative frames to move people to inaction (e.g., prevent use of illegal drugs). To maintain a strong framing effect, make sure that frames are not conflicting. Conversely, neutralize framing effects by presenting multiple conflicting frames.

See also Expectation Effects; Exposure Effect; Priming; Scarcity

1. The seminal work on framing is “The Framing of Decisions and the Psychology of Choice” by Amos Tversky and Daniel Kahneman, 1981, Science, 211, 453–458. A nice treatment of the subject is The Psychology of Judgment and Decision Making by Scott Plous, 1993, McGraw-Hill Education.

Littering is a problem for governments around the world: It looks bad, damages the environment, and takes significant resources to clean up. One strategy adopted by some U.S. states is to reframe littering as a deliberate act of disrespect, stigmatizing it as an offensive act to others living there. The strategy is enacted through ad campaigns presenting pithy ultimatums, made memorable with some clever wordplay. The strategy has proven effective. The “Don’t Mess with Texas” campaign is credited with reducing litter on Texas highways roughly 72% between 1987 and 1990.

077

Freeze-Flight-Fight-Forfeit
The ordered, instinctive response to acute stress.

When people are exposed to stressful or threatening situations, they respond in a manner often referred to as “fight or flight”. Less catchy but more accurate is the contemporary construction, “freeze-flight-fight-forfeit”, which not only describes the full set of responses but also reflects the general sequence in which they occur.1 The response set typically begins at stage 1 and escalates to subsequent stages as the level of threat increases:
1. Freeze — When a threat is believed to be imminent, the instinctive response is to stop, look, and listen to try to detect potential threats. This stage induces a mental state of hyperawareness and hypervigilance.
2. Flight — When a threat is detected, the instinctive response is to run away and try to escape from the threat. This stage induces a mental state of fear and panic.
3. Fight — When unable to escape from a threat, the instinctive response is to fight for your life and try to neutralize the threat. This stage induces a mental state of desperation and aggression.
4. Forfeit — When unable to neutralize the threat, the instinctive response is to play dead and yield to the threat. This stage induces a mental state of helplessness and surrender.

These stages are innate responses that operate in all humans (and mammals generally), though the triggers for each stage vary widely from person to person. Depending on the strength of the threat stimulus, the response can skip stages. For example, an unexpected explosion might immediately trigger a flight response in some and a forfeit response in others. Training can alter the sensitivity to triggers and the stage sequence. For example, soldiers are trained to freeze and then fight and, in some cases, to never engage in flight or forfeit.

Consider freeze-flight-fight-forfeit in the design of systems that involve performance under stress. Simplify tools, plans, and displays appropriately in anticipation of diminished performance capabilities. Employ tools and controls that require gross motor control only and incorporate forgiveness to prevent and minimize the effects of errors. Ensure the visibility of critical elements to mitigate the effects of tunnel vision. In contexts where complex decision-making is required, avoid overusing alerts and alarms, as they undermine concentration and further burden cognitive functions. It is critical to design systems and training to address each stage of the stress response differently, versus a one-strategy-fits-all approach.

See also Classical Conditioning; Feedback; Performance Load; Threat Detection; Visibility

1. The seminal work on “fight or flight” is Bodily Changes in Pain, Hunger, Fear and Rage: An Account of Recent Research into the Function of Emotional Excitement by Walter Cannon, 1916, D. Appleton & Company. The updated construction — freeze-flight-fight-forfeit — builds on proposals presented in The Psychology of Fear and Stress by Jeffrey Gray, 1971, Weidenfeld & Nicolson; and “Does ‘Fight or Flight’ Need Updating?” by H. Stefan Bracha et al., Oct 2004, Psychosomatics, 45, 448–449.

[Diagram of the four stress responses: freeze, flight, fight, and forfeit.]

People respond to extreme stress in four ways. It is important to design systems and training to address each of them.

078

Gall’s Law
All successful complex things begin as simple things and become complex through iteration.

Gall’s law, proposed by the systems theorist John Gall, states that all successful complex systems begin as simple systems and achieve complexity through iterative modification over time — i.e., they never start out as complex systems. This is always true in the evolution of complex systems, and it is generally true in the design of complex systems. The full quote by John Gall: “A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system”.1

Complexity can be understood as the number of elements comprising a system, or “summative complexity”, and the number of relations between those elements, or “constitutive complexity”. A system of high summative complexity can’t easily be created from scratch, as it’s not yet clear what essential elements and features are required for success. A system of high constitutive complexity cannot easily be created from scratch, as it’s not yet clear how a design will be used or how elements and features will interact. With summative complexity, the whole is the sum of the parts. But with constitutive complexity, the whole can be greater than the sum of the parts — i.e., parts can work together to create emergent phenomena and unintended consequences.2

A potential hack to Gall’s law is copying, or reverse engineering. It is, for example, no coincidence that the Soviet Buran spacecraft looked like the space shuttle, or that the supersonic Soviet airliner Tu-144 looked like the Concorde. It is far easier to copy a complex system than to design one from scratch. And standing on the shoulders of a working design can save thousands of iterations of development, enabling copycat teams to quickly catch up and even surpass the original.3

There is no royal road to complex systems. Start with simple systems, make them work, and then add complexity iteratively over time. Knowing that something is possible is about a third of the challenge, knowing how to achieve something is about another third, and building something and making it work is the remaining third. Therefore, consider copying or reverse engineering successful systems (when legal to do so) to effectively shortcut the development cycle.

See also Iteration; Kano Model; Minimum-Viable Product; Modularity; Prototyping; Testing Pyramid

1. General Systemantics by John Gall, 1975, General Systemantics Press.
2. General System Theory by Ludwig von Bertalanffy, 1968, George Braziller.
3. For an introduction to reverse engineering, see Reverse Engineering: Mechanisms, Structures, Systems & Materials by Robert Messler, 2014, McGraw-Hill Education.

The similarities between the Soviet space shuttle Buran and the U.S. space shuttles were no coincidence. When the United States decided not to classify the technology behind the space shuttle, the temptation to copy versus invent was too good to pass up. It also allowed the Soviets, who had a world-class space program with top scientists and engineers, to build upon and improve the basic design of the Americans, including increased payload, automated and remote-controlled flight, and emergency ejection seats for all crew members. These enhancements made the Buran more capable, more versatile, and safer than the U.S. space shuttle; but the project lost political support as the Cold War was coming to an end.

The Buran flew just once, but it was the first spaceplane to perform an uncrewed flight and fully automated landing. The lesson: If designing from scratch, start with simple systems and iterate toward complexity; but if there is a functioning complex system, reverse engineer it.

079

Gamification
Using gaming strategies in nongame contexts to enhance experience and modify behavior.

Gamification involves making experiences “game-like” by rewarding desired behaviors, providing frequent feedback, and illustrating achievements in highly visible ways. By weaving fun and entertaining features into an experience, gamified environments tap intrinsic motivation when it exists and help develop intrinsic motivation when it doesn’t.1

The SAPS model (an acronym that stands for Status, Access, Power, and Stuff) purports to list in order the things people most desire:
1. Status — A person’s standing relative to other people. Examples include public recognition, impressive titles, and awards such as badges, plaques, and medals.
2. Access — A person’s ability to get to desirable things. Examples include entry into private and select venues, elite clubs and institutions, and unlocked game levels.
3. Power — A person’s ability to do things. Examples include increased decision-making authority, more flexibility to work remotely, and more options from which to choose.
4. Stuff — A person’s ability to get more things. Examples include point systems like frequent-flyer miles, more money through bonuses and commissions, and special gifts or tokens.2

An oft-cited concern with gamification is that it risks undermining the intrinsic joy associated with performing tasks. This concern derives from psychological research that shows that extrinsic reinforcers can undermine intrinsic motivation. For example, a mouse will run on a wheel for fun. If you start rewarding the mouse for running and then stop the rewards, it will no longer run for fun. The intrinsic motivation to run is undermined by the treats. It is true that gamification, poorly applied, can be counterproductive in this way. However, when gamification is properly applied, there is an experiential reward — i.e., fun — that complements the incentive effects and enhances intrinsic motivation.3

Measure behaviors you want to increase and provide immediate visual feedback about that behavior. Consider the SAPS model in the design of rewards systems. Use a mix of status, access, power, and stuff to create compelling and durable engagements. Ensure that gamified experiences are fun independent of rewards or other incentives. This is what builds and reinforces intrinsic motivation.

See also Classical Conditioning; Flow; Nudge; Operant Conditioning; Progressive Disclosure; Shaping

1. See, for example, “A Systematic Review of Gamification Research: In Pursuit of Homo Ludens” by Aras Bozkurt et al., 2018, International Journal of Game-Based Learning, 8(3), 15–33; and Play at Work: How Games Inspire Breakthrough Thinking by Adam Penenberg, 2013, Portfolio.
2. See Gamification by Design by Gabe Zichermann and Christopher Cunningham, 2011, O’Reilly Media.
3. The seminal work in this area is “Effects of externally mediated rewards on intrinsic motivation” by Edward Deci, 1971, Journal of Personality and Social Psychology, 18(1), 105–115. But this research does not reliably generalize to gamification because of the “fun factor”. See, for example, “Science to practice: Does gamification enhance intrinsic motivation?” by Matthew Jones et al., 2022, Active Learning in Higher Education.
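As a toy illustration of the guidance above (measure the behavior, give immediate feedback, award status at visible milestones), here is a minimal Python sketch. The class, thresholds, and badge names are invented for this example, not taken from the SAPS literature.

```python
# Illustrative badge thresholds: status, the S in SAPS.
BADGES = {5: "Bronze", 15: "Silver", 30: "Gold"}

class StairTracker:
    """Measures one target behavior and responds to it immediately."""

    def __init__(self):
        self.count = 0
        self.badges = []

    def record_climb(self):
        self.count += 1
        # Immediate, visible feedback about the measured behavior.
        print(f"{self.count} flights of stairs climbed!")
        badge = BADGES.get(self.count)
        if badge:
            self.badges.append(badge)
            print(f"New badge earned: {badge}")

tracker = StairTracker()
for _ in range(5):
    tracker.record_climb()  # the fifth climb awards the Bronze badge
```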

How do you get people to start taking the stairs in Sweden? Convert the stairs into a piano that plays music with each step.

Now, imagine a world where there is no longer a divide between what you need to do and what you want to do…This is the promise and vision that good gamification design can create.
— Yu-Kai Chou, Actionable Gamification

080

Garbage In–Garbage Out
The quality of system output is largely dependent on the quality of system input.

The garbage in–garbage out principle is based on the observation that good inputs generally result in good outputs, and bad inputs, barring design intervention, generally result in bad outputs. The rule has been generalized over time to apply to all systems and is commonly invoked in domains such as business, education, nutrition, and engineering.1

The garbage-in metaphor refers to one of two kinds of input problems:
• Problems of type — Occur when the incorrect type of input is fed into a system, such as entering a phone number into a credit card number field. Problems of type are serious because the input provided could be radically different from the input expected. This can be advantageous in that problems of type are relatively easy to detect but problematic in that they represent the maximum form of garbage if undetected. Problems of type are generally caused by a class of errors called mistakes — incorrect actions caused by conscious actions. The primary strategies for minimizing problems of type are affordances and constraints. These strategies structure input and minimize the frequency and magnitude of garbage input.
• Problems of quality — Occur when the correct type of input is fed into a system but with defects, such as entering a phone number into a phone number field but entering the wrong number. Depending on the frequency and severity of these defects, problems of quality may or may not be serious. Mistyping one letter in a name may have minor consequences (e.g., search item not found); trying to request a download of 50 records but typing 5,000 might lock up the system. Problems of quality are generally caused by a class of errors called slips — incorrect actions caused by unconscious, accidental actions. The primary strategies for minimizing problems of quality are previews and confirmations. These strategies allow the consequences of actions to be reviewed and verified prior to input.

Avoid garbage out by preventing garbage in. Use affordances and constraints to minimize problems of type. Use previews and confirmations to minimize problems of quality. When input integrity is critical, use validation tests to check integrity prior to input and consider confirmation steps that require the independent verification of multiple people. Consider mechanisms to automatically flag and possibly autocorrect bad input (e.g., automatic spelling correction in word processors).

See also Affordance; Confirmation; Constraint; Error, Design; Error, Human; Feedback; Feedback Loop; Forgiveness; Signal-to-Noise Ratio
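A minimal Python sketch of the two guards described above: a format constraint that rejects problems of type, and a confirmation step that catches problems of quality. The field format, function names, and messages are hypothetical.

```python
import re

# Hypothetical constraint: the only shape of input the field will accept.
PHONE_PATTERN = re.compile(r"\d{3}-\d{3}-\d{4}")

def read_phone_number(raw, confirm):
    """Guard against garbage in before it can become garbage out."""
    value = raw.strip()
    if not PHONE_PATTERN.fullmatch(value):
        # Problem of type: the wrong kind of input entirely.
        raise ValueError(f"{value!r} is not a phone number")
    # Preview and confirm to catch slips: a problem of quality.
    if not confirm(f"Use phone number {value}?"):
        raise ValueError("Entry not confirmed; please re-enter")
    return value

# Usage: confirm would normally prompt the person; here it is stubbed out.
number = read_phone_number("555-867-5309", confirm=lambda prompt: True)
print(number)
```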

1. While the garbage in–garbage out concept dates back to Charles Babbage (1864) or earlier, the term is attributed to George Fuechsel, a programming instructor who used it as a teaching device in the late 1950s. It should be noted that Fuechsel used the principle to emphasize that “garbage out” is not the inevitable result of “garbage in” but, rather, a condition that should be addressed through design. This principle is also known as GIGO.

The Mars Climate Orbiter disintegrated in the Martian atmosphere in 1999. The cause? Garbage in–garbage out. Trajectory corrections were entered in English units versus the required metric units, dooming the craft.

[Diagram: the Mars Climate Orbiter’s approach to Mars, showing the planned trajectory at 226 km (140 miles), the minimum survivable trajectory at 80 km (50 miles), and the actual trajectory at 57 km (35 miles).]

081

Gates’ Rule of Automation
Automation applied to an operation will magnify both its efficiencies and deficiencies.

Bill Gates proposed two rules about the use of technology to automate business operations. The point of the rules is that automation is not a panacea and, in some cases, can make a bad business situation worse. Automation requires significant up-front investment and adds additional complexity to an operation. To be successful, it requires the presence of a sound strategy, well-designed business processes, and tasks that lend themselves to being automated.1
1. Automation applied to an efficient operation will magnify the efficiency of that operation. Gates’ first rule is generally true as written, though perhaps oversimple. The framing puts forth a binary condition in which an operation is presented as either efficient or inefficient, when in reality most operations have elements of both — and, therefore, automation can both improve and worsen different aspects of an operation at the same time. For example, automating an order ticketing system can improve the efficiency of placing orders, but the resulting increase in order flow can overwhelm fulfillment, creating backlogs and frustrating customers. One area of an operation can be made more efficient, while another area is made more deficient.
2. Automation applied to an inefficient operation will magnify the inefficiency of that operation. Gates’ second rule is more problematic. It is true that an inefficient operation executing bad strategy can’t be made more efficient through automation — the more efficient the operation becomes, the better the bad strategy is executed and the worse things get. But it is not true that an inefficient operation executing good strategy can’t be made more efficient through automation. If the strategy is good, automating manual processes and even poorly designed processes can speed execution and improve efficiency.

Gates’ rules are here consolidated into a single rule: Automation applied to an operation will magnify both the efficiencies and deficiencies of that operation. This phrasing captures the spirit presented in the original context and remedies the problems with the original framing.

Avoid automating things when strategy or business processes are ill defined or poorly designed. Prototype processes manually prior to automating. Consider what humans do best and what machines do best, and then automate, or not, accordingly.

See also Ackoff’s Law; Feedback Loop; Process Eats Goal; Prototyping

1. The Road Ahead by Bill Gates et al., 1995, Viking Press.

In 2018, Tesla sought to greatly ramp up production of its Model 3, a car critical to the company’s survival and future. Key to this ramp-up was a strategy of hyper-automation: a highly automated production line with more than a thousand robots. Established car companies typically master assembly processes with humans first and then phase in automation technology to take over tasks well suited for machines. Tesla did the opposite and discovered Gates’ rule of automation the hard way: The production line bogged down rather than accelerated. Tesla quickly pivoted, hired more than 400 new line workers, and opened a more traditional human-driven production line under a massive tent outside its factory. The Tesla Model 3 would go on to become one of the best-selling cars in the world.

082

Gloss Bias
A preference for glossy versus dull objects.

People find glossy objects more interesting and appealing than dull objects. For example, people generally prefer glossy lipsticks, jewelry, paper, and paints to their matte counterparts. Young children presented with glossy objects lick them significantly more than dull objects, a behavior that appears to be triggered by optical cues of reflection and gleaming highlights.1

In 2008, Apple made the controversial decision to stop shipping MacBook laptop computers with matte displays. Many people in the Apple community were incredulous. From a human factors and ergonomics perspective, the matter is open and shut: Matte displays are superior to glossy. However, glossy displays outsell matte displays every time. Even consumers who know the usability advantages of matte displays, when presented with two MacBooks, one with a matte display and one with a glossy display, would often buy the glossy.2

The preference for glossy objects is likely an evolutionary artifact, as glossy surfaces suggested nearby water sources. The ability to find water sources provided our early human ancestors with an adaptive advantage, which means they were more likely to survive and thrive. The preference has been passed down in the form of an innate, unconscious bias, meaning that our visceral, unthinking preference is for glossy, even if our conscious, thinking brain favors matte. Because the gloss bias is instinctive, the preference for glossy finishes is especially strong when the purchases are “impulse buys” or when two items — one glossy and one matte — are presented side by side.

Consider the gloss bias when selecting finishes for objects and when selecting images of objects. The bias is stronger in general audiences and weaker in audiences that have experience with different finishes. So, when designing for the mass market, the default finish should be glossy. When designing for a niche audience with experience with different finishes, consider matte if it helps to differentiate your product. When in doubt, or when all other variables are equal, the default rule should be to choose glossy. A clear exception to this rule is when designing objects that pose a potential mouthing or swallowing hazard to young children. In these cases, glossy, reflective finishes should be avoided. Conversely, if you are designing objects that you want young children to mouth — like teething rings or pacifiers — a glossy, wet look is best.

See also Archetypes, Psychological; Biophilia Effect; Savanna Preference; Scarcity; Supernormal Stimulus

1. The seminal work is “Taking a shine to it: How the preference for glossy stems from an innate need for water” by Katrien Meert et al., 2014, Journal of Consumer Psychology, 24(2), 195–206.
2. See, for example, “Ars reviews the 2008 MacBook Pro, Part I: aluminum & glass” by Clint Ecker, 2008, Ars Technica.

Humans have evolved to find glossy things appealing. You might even say we thirst for them.

083

Golden Ratio
A ratio within the elements of a form, such as height to width, approximating 0.618.

The golden ratio is commonly believed to be an aesthetically pleasing proportion, primarily due to its unique mathematical properties, prevalence in nature, and use in great artistic and architectural works. Pinecones, seashells, and the human body all exhibit the golden ratio. Piet Mondrian and Leonardo da Vinci commonly incorporated the golden ratio into their paintings. Stradivari utilized the golden ratio in the construction of his violins. The Parthenon, the Great Pyramid of Giza, Stonehenge, and the Chartres Cathedral all exhibit the golden ratio.1

Many manifestations of the golden ratio in early art and architecture were likely caused by processes not involving formal knowledge of the golden ratio — it may be that these manifestations were coincidental or resulted from a subconscious preference for the ratio. Is there any merit to the idea that we have a preference for golden proportions? For linear and rectilinear forms, the answer appears to be yes, though the effect is small. In more complex shapes and geometries, such as golden triangles, spirals, etc., there is no credible evidence that golden ratio proportions are preferred.2

Why would people find lines and rectangles based on the golden ratio more aesthetic? Is there a plausible, nonmystical basis? One possible explanation implicates the Old Masters and the field of publishing: The golden ratio has interesting mathematical properties and is pervasive in nature, which led early designers such as Leonardo da Vinci to use it in their works. This influenced later designers, such as mass-market book designers, to incorporate the ratio into the dimensions of their books, making them golden rectangles. The ubiquity of these books created a mainstream familiarity with the rectangular proportion. This familiarity led to a culturally based preference for the ratio in certain forms such as rectangles.3

Consider golden ratio proportions in linear and rectangular designs, especially when its application does not come at the expense of other design objectives. Note that the ratio does not need to be precisely expressed visually but just reasonably approximated — the untrained eye can’t detect the difference between 0.618 and 0.6. Unless a design seeks to leverage the narrative appeal of the golden ratio, do not waste time incorporating it into complex compositions or forms.

See also Confirmation Bias; Fibonacci Sequence; Rule of Thirds; Selection Bias; Waist-to-Hip Ratio

1. The seminal work on the golden ratio is Über die Frage des goldenen Schnitts [On the question of the golden section] by Gustav T. Fechner, 1865, Archiv für die zeichnenden Künste [Archive for the Drawn/Graphic Arts], 11, 100–112. The golden ratio is an irrational number (a never-ending decimal) and can be computed with the equation (√5 – 1) / 2. Adding 1 to the golden ratio yields 1.618…, referred to as Phi (φ). The values 0.618 and 1.618 are used interchangeably to define the golden ratio, as they represent the same basic geometric relationship. The golden ratio is also known as golden mean, golden number, golden section, golden proportion, divine proportion, and sectio aurea.
2. See, for example, “All That Glitters: A Review of Psychological Research on the Aesthetics of the Golden Section” by Christopher D. Green, 1995, Perception, 24, 937–968. For a critical examination of the golden ratio thesis, see “The Cult of the Golden Ratio” in Weird Water & Fuzzy Logic by Martin Gardner, 1996, Prometheus Books, 90–96.
3. In 1932, designer Hans Mardersteig worked with German publisher Albatross and standardized the physical dimensions for their paperback books to closely approximate the golden rectangle. This was a decision Mardersteig made after reading Leonardo da Vinci’s reflections on the ideal page size. In the 1940s, Penguin Books founder Sir Allen Lane and designer Jan Tschichold copied and built upon Albatross’ innovations, including the golden rectangle dimensions of their books.
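A quick check of the footnote’s arithmetic, as a small Python sketch; the variable names are ours:

```python
import math

# The golden ratio as defined in the footnote: (sqrt(5) - 1) / 2.
golden_ratio = (math.sqrt(5) - 1) / 2
phi = golden_ratio + 1  # adding 1 yields Phi, 1.618...

print(round(golden_ratio, 3))  # -> 0.618
print(round(phi, 3))           # -> 1.618
# The two values describe the same relationship: 1 / 0.618... = 1.618...
print(math.isclose(1 / golden_ratio, phi))  # -> True
```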

[Diagram: a line divided into segments A and B at the golden section, where A/B = 1.618 and B/A = 0.618.]

In each example, the ratio between the blue and red segments approximates the golden ratio. Note how the ratio corresponds with a significant feature or alteration of the form. Examples are the Parthenon, Stradivarius violin, Notre-Dame Cathedral, nautilus shell, Eames LCW chair, Apple iPod MP3 player, and da Vinci’s Vitruvian Man.

084

Good Continuation
The brain automatically assumes elements in motion continue in their established directions.

Good continuation, one of the Gestalt principles of perception, asserts that elements will be perceived as a group when they lie along continuous lines with few interruptions or changes. In other words, elements arranged in a straight line or along a smooth curve are perceived as a unit (more related than unaligned elements). This automatic grouping makes the elements easier to visually process and remember. For example, speed markings on a speedometer are easily interpreted as a group because they are aligned along a linear or circular path.1

The principle of good continuation also explains why lines will generally be perceived as maintaining their established directions versus branching or bending abruptly. For example, two V-shaped lines side by side appear simply as two V-shaped lines. When one V-shaped line is inverted and the other is placed above it (forming an X), the shape is interpreted as two opposing diagonal lines instead of two V-shaped lines — the less abrupt interpretation of the lines is dominant. A bar graph in which the bars are arranged in increasing or decreasing order, so that the tops of the bars form a continuous line, is more easily processed than bar arrangements in which the tops of the bars form a discontinuous, abrupt line.2

The ability to accurately perceive objects depends largely on the perceptibility of the corners and sharp curves that make up their shape. When sections of a line or shape are hidden from view, good continuation leads the eye to continue along the visible segments. If extensions of these segments intersect with minimal disruption, the elements along the line will be perceived as related. As the angle of disruption becomes more acute, the elements will be perceived as less related.3

Consider the good continuation principle when the goal is to indicate relatedness among elements in a design. Locate elements such that their alignment corresponds to their relatedness, and locate unrelated or ambiguously related items on different alignment paths. Ensure that line extensions of related objects intersect with minimum line disruption. Arrange elements in graphs and displays such that endpoints of elements form continuous rather than abrupt lines.

See also Alignment; Closure; Common Fate; Figure-Ground; Miller’s Law; Proximity; Similarity; Uniform Connectedness

1. The seminal work on good continuation is “Untersuchungen zur Lehre von der Gestalt, II” [Laws of Organization in Perceptual Forms] by Max Wertheimer, 1923, Psychologische Forschung, 4, 301–350, reprinted in A Source Book of Gestalt Psychology by Willis Ellis (Ed.), 1999, Routledge & Kegan Paul, 71–88. See also Principles of Gestalt Psychology by Kurt Koffka, 1935, Harcourt Brace.
2. See, for example, Elements of Graph Design by Stephen Kosslyn, 1994, W.H. Freeman and Company.
3. See, for example, “Convexity in Perceptual Completion: Beyond Good Continuation” by Zili Liu et al., 1999, Vision Research, 39, 4244–4257.

Artist Norman Wilkinson’s renderings show a German U-boat commander’s periscope view of a merchant ship in dazzle camouflage (left) and the same ship uncamouflaged (right). The varying line configurations were designed to make the type and heading of the camouflaged ship difficult to determine.

Dazzle camouflage was also applied to ships of war to make them a more difficult target for submarines. The photograph shows the French light cruiser Gloire during exercises off the North African coast in 1943 or 1944.

085

Groupthink
A decision-making phenomenon that occurs when group harmony is overprioritized.

Groupthink is a dysfunctional group dynamic that occurs when agreement and conformity are prioritized over critical analysis and debate, often leading to poor decision-making outcomes. The phenomenon is most likely to occur when groups are highly cohesive, are insulated from experts and people with differing views, have limited access to information, operate under a directive style of leadership, have low confidence or low self-esteem, and are working under conditions of high stress. When one or more of these conditions are present, groups tend toward consensus-seeking behaviors — i.e., groupthink.1

Designers should guard against groupthink, as bad decisions beget bad designs. Proposed symptoms of groupthink include:
• Culture of compliance — Members tend to ignore or stifle dissenting voices, pressuring others to conform.
• Self-censorship — Members feel compelled to censor their own contrary thoughts and opinions.
• Mindguards — One or more members act as enforcers, filtering contrary information and forcing group compliance.
• Apparent unanimity — Silence is interpreted as agreement, creating the impression of unanimity.
• Illusion of invulnerability — Groups without dissent believe they are performing well and grow overconfident in their abilities.
• Illusion of morality — Members subordinate their individual sense of right and wrong to a greater group morality, distancing themselves from personal responsibility for decisions.
• Outgroup bias — Members of cohesive groups tend to perceive nongroup members as outsiders, discrediting their capabilities.
• Confirmation bias — Members embrace supporting evidence and dismiss contrary evidence.

Consider groupthink in decision-making contexts. Groupthink can be minimized through a range of preventative strategies. For example, many people are resistant to proactively speaking up in groups; therefore, ask each individual person for their opinions; ritualize playing devil’s advocate to air contrary opinions; discuss ideas with trusted people outside of the group; have multiple teams work on the same problem independently; and reduce the role of leaders in meetings, having them volunteer ideas last or abstain from recommendation-formulating meetings altogether.3

See also Crowd Intelligence; Death Spiral; Design by Committee; Dunning-Kruger Effect; Not Invented Here; Social Proof

1. “Groupthink” by Irving Janis, Nov 1971, Psychology Today; and Victims of Groupthink by Irving Janis, 1972, Houghton Mifflin. The empirical evidence for groupthink is scant, in part due to challenges of measurability; but there is no denying its intuitive appeal and practical utility. See, for example, “Twenty-Five Years of Groupthink Theory and Research: Lessons from the Evaluation of a Theory” by Marlene Turner and Anthony Pratkanis, Feb/Mar 1998, Organizational Behavior and Human Decision Processes, 73(2/3), 105–115. Classic decision-making fiascos attributed to groupthink include the Japanese attack on Pearl Harbor, the Bay of Pigs invasion, Watergate, the Vietnam War escalation, Chemie Grünenthal’s decision to market the drug thalidomide, and the launch decision leading to the space shuttle Challenger disaster.
3. See, for example, Wiser: Getting Beyond Groupthink to Make Groups Smarter by Cass Sunstein and Reid Hastie, 2014, Harvard Business Review Press.

Inventor Steven Sasson shows his prototype digital camera, built for Kodak in 1975, next to the Kodak EasyShare One, built in the aughts. Despite being at the forefront of the digital photography revolution, Kodak was reluctant to cannibalize its profitable film business. Sasson described management’s reaction to his work as, “that’s cute — but don’t tell anyone about it”. Kodak had invented the digital camera, invested billions in the technology, and even foresaw that photos would be shared online; but the company culturally resisted innovations that did not align with its film and printing businesses. Kodak filed for bankruptcy in 2012.

The important thing about groupthink is that it works not so much by censoring dissent as by making dissent seem somehow improbable.
— James Surowiecki, The Wisdom of Crowds

086

Gutenberg Diagram
A diagram that describes the pattern followed by the eyes when looking at a page of information.

The Gutenberg diagram divides a display medium into four quadrants:
1. Top left — Primary optical area
2. Bottom right — Terminal area
3. Top right — Strong fallow area
4. Bottom left — Weak fallow area

According to the diagram, Western readers naturally begin at the primary optical area (top left) and move across and down the display medium in a series of sweeps to the terminal area (bottom right). Each sweep begins along an axis of orientation and proceeds in a left-to-right direction. The strong and weak fallow areas lie outside this path and receive minimal attention unless visually emphasized. The tendency to follow this path is attributed to reading gravity — the left-right, top-bottom habit formed from reading.1

Designs that follow the Gutenberg diagram are said to work in harmony with reading gravity and return readers to a logical axis of orientation, purportedly improving reading rhythm and comprehension. For example, a layout following the Gutenberg diagram would place key elements at the top left (e.g., headline), middle (e.g., image), and bottom right (e.g., contact information). Though designs based directly or indirectly on the Gutenberg diagram are widespread, there is little empirical evidence that following it improves reading rates or comprehension.

The Gutenberg diagram is likely only predictive of eye movement for heavy text information, evenly distributed and homogeneous information, and blank pages or displays. In all other cases, the weight of the elements of the design in concert with their layout and composition will direct eye movements. For example, if a newspaper has a very heavy headline and photograph in its center, the center will be the primary optical area. Familiarity with the information and medium also influences eye movements. For example, a person who regularly views information presented in a consistent way is more likely to first look at areas that are often changing and then at areas that are the same.

Consider the Gutenberg diagram to assist in layout and composition when the elements are evenly distributed and homogeneous, or the design contains heavy use of text. Otherwise, use the weight and composition of elements to lead the eye.

See also Alignment; Entry Point; Legibility; Progressive Disclosure; Readability; Serial Position Effects

1. The seminal work on the Gutenberg diagram is attributed to the typographer Edmund Arnold, who is said to have developed the concept in the 1950s. See, for example, Type & Layout: How Typography and Design Can Get Your Message Across or Get in the Way by Colin Wheildon, 1995, Strathmoor Press. This principle is also known as the Gutenberg rule and the Z pattern of processing.

[Diagram: the Gutenberg diagram, with the primary optical area at the top left, the strong fallow area at the top right, the weak fallow area at the bottom left, the terminal area at the bottom right, and the axis of orientation sweeping left to right between them.]

The composition of the pages below illustrates the application of the Gutenberg diagram. The first page is all text, and it is, therefore, safe to assume readers will begin at the top left and stop at the bottom right of the page. The pull quote, placed between these areas, reinforces reading gravity. The placement of the image on the second page similarly reinforces reading gravity, which it would not do if it were positioned at the top right or bottom left of the page.

087

Habituation
Repeated exposure to a stimulus reduces the response to that stimulus.

Habituation is the diminishing of a physiological or emotional response to a stimulus upon repeated exposure. For example, people living in dense cities habituate to the sounds of city noise such as cars honking, sirens blaring, etc., which means they decreasingly notice such sounds over time with repeated exposure. The phenomenon is likely an evolved mechanism to attend to stimuli when they are novel but then to increasingly filter those stimuli as it becomes clear that they pose no threat — i.e., upon repeated exposures, nothing bad occurs.1

Almost any response or behavior can become habituated: the fleeting thrill of winning a contest, fear of a neighborhood dog, being startled at the sight of spiders, and feelings of enmity toward others. Frequent exposure to such stimuli will decrease the strength of the response. In cases where the intention is to reduce a counterproductive response, habituation is a good thing. For example, exposure therapy uses habituation to treat phobias. However, in cases where the intention is to create or maintain a productive response, habituation is a challenge. For example, excessive alerts or warnings from software decrease the likelihood that they will be read or acted upon.2

Habituation occurs primarily with weaker or less intense stimuli. Most people would habituate to daily sounds of fireworks in the distance but likely would never habituate to the daily sounds of fireworks being set off right next to them. And the diminishing response applies not only to the original stimulus but to all similar stimuli as well, a phenomenon known as stimulus generalization. The stronger the similarity, the stronger the generalization. So, one who habituates to the sound of fireworks in the distance would also be habituated to the sound of gunshots in the distance.3

Consider habituation in the design of systems seeking to elicit a particular kind of response, especially in health care and emergency-response contexts. When the goal is to moderate an unproductive behavior or response, design experiences that increase exposure to triggering stimuli. When the goal is to increase or maintain a productive behavior or response, design experiences that minimize exposure to triggering stimuli. In systems where everything is an alert, people will respond as if nothing is an alert.

See also Classical Conditioning; Error, Design; Error, Human; Operant Conditioning
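One way a designer might act on that last point is to rate-limit noncritical alerts so that repetition does not train people to ignore them. A minimal, hypothetical Python sketch (the class name, quiet period, and alert IDs are invented):

```python
import time

class AlertGate:
    """Suppress repeats of the same noncritical alert within a quiet
    period, so the alerts that do fire retain their meaning."""

    def __init__(self, quiet_seconds=300):
        self.quiet_seconds = quiet_seconds
        self._last_fired = {}

    def should_fire(self, alert_id, critical=False, now=None):
        now = time.monotonic() if now is None else now
        if critical:
            return True  # never suppress genuine emergencies
        last = self._last_fired.get(alert_id)
        if last is not None and now - last < self.quiet_seconds:
            # A repeated stimulus: filter it in the system, rather than
            # letting people habituate to it.
            return False
        self._last_fired[alert_id] = now
        return True

gate = AlertGate()
print(gate.should_fire("low-battery", now=0))    # True: first exposure
print(gate.should_fire("low-battery", now=60))   # False: within quiet period
print(gate.should_fire("low-battery", now=400))  # True: quiet period elapsed
```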

1. A seminal review is “Habituatory response decrement in the intact organism” by J.D. Harris, 1943, Psychological Bulletin, 40, 385–422.
2. See, for example, “Alert override as a habitual behavior — a new perspective on a persistent problem” by Melissa Baysari et al., 2017, Journal of the American Medical Informatics Association, 24(2), 409–412.
3. See, for example, “Habituation to repeated stress: Get used to it” by Nicola Grissom and Seema Bhatnagar, Sep 2009, Neurobiology of Learning and Memory, 92(2), 215–224.

In December 2017, RaDonda Vaught, a nurse at Vanderbilt University Medical Center in Nashville, Tennessee, intended to administer a sedative to patient Charlene Murphey before a PET scan. Instead of injecting the sedative, Versed, she mistakenly gave Murphey vecuronium, a known paralyzing agent that resulted in Murphey’s death. Vaught obtained the lethal drug from the hospital’s automated medication cabinet after overriding several warnings in the cabinet’s computer user interface. Why did she do this? She had likely become habituated to the frequent alerts and overrides triggered by the system, a situation exacerbated by cryptic codes, poor interface design, hospital policy, and the bustle of a hospital environment. Vaught was charged and found guilty of criminally negligent homicide and abuse of an impaired adult and sentenced to three years of supervised probation.

Overriding was something we did as a part of our practice every day. You couldn’t get a bag of fluids for a patient without using an override function.
— RaDonda Vaught, court testimony, July 2021

088

Hanlon’s Razor
Never attribute to malice what can be adequately explained by incompetence.

Hanlon’s razor asserts that when bad things happen that are human caused, it is far more likely to be the result of ignorance or bureaucracy than conspiracy or malice. For example, when Apple’s Siri search was unable to find abortion clinics, many claimed Apple purposefully excluded them from the search results. It is possible that Apple was covertly promoting a political agenda, but it is more likely that Siri was incomplete or buggy.1

People are especially quick to accuse groups, organizations, and governments of mischief, personifying them and treating them monolithically as one. But as the number of people involved gets larger, the probability that bad happenings are the consequence of bureaucracy, incompetence, or mediocrity actually increases. It also means that the probability of keeping secrets decreases — i.e., when many people are involved, somebody will talk and reveal the malfeasance. An exception to this rule is when groups are governed by autocratic rulers, especially rulers with a history of bad behavior and who preside over compliant or oppressed cultures. In such cases, Hanlon’s razor does not apply.2

Hanlon’s razor is a deliberative override of at least three cognitive biases working in combination:
1. Attribution bias — The tendency to attribute motives and reasons behind why people act the way they do.
2. Spotlight effect — The tendency for people to assume that more attention or care is focused on them than is actually the case.
3. Affect heuristic — The tendency to reach conclusions based on how people feel rather than based on rational analysis.

Taken together, these biases lead people to believe that there is malice behind unfortunate acts and events, even when there is no rational basis to support this belief.

Keep Hanlon’s razor in mind when bad things happen. Give the benefit of the doubt to individuals and groups, especially when they are diverse, have freedom of expression, and have histories of reputable behavior. Be aware that the principle does not exclude the possibility of malice — sometimes bad things are caused by bad people — but, in general, malice is less probable.

See also Archetypes, Psychological; Confirmation Bias; Ockham’s Razor; Paradox of Unanimity

1. Hanlon’s razor is a variant of Ockham’s razor. It has been attributed both to Robert Hanlon and to science fiction author Robert Heinlein.
2. Slava Ukraini.

What caused the post-Katrina New Orleans levees to fail? Inadequate engineering is more likely than government conspiracy — though, the government did blow up the levees in 1927 under similar circumstances.

…misunderstandings and lethargy perhaps produce more wrong in the world than deceit and malice do. At least the latter two are certainly rarer.
— Johann Wolfgang von Goethe, The Sorrows of Young Werther

089

Hick’s Law
Time to make a decision increases as the number of decision options increases.

Hick’s law, proposed by the psychologist W.E. Hick, states that the time required to make a decision is a function of the number of available options. For example, when a pilot has to press a particular button in response to some event, such as an alarm, Hick’s law predicts that the greater the number of alternative buttons, the longer it will take to make the decision and select the correct one. Hick’s law has implications for the design of any system or process that requires simple decisions to be made based on multiple options; for example, if A happens, press button one; if B happens, press button two.1

All tasks consist of four basic steps:
1. Identify a problem or goal.
2. Assess the available options to solve the problem or achieve the goal.
3. Decide on an option.
4. Implement the option.

Hick’s law applies to the third step: Decide on an option. However, the law does not apply to decisions that involve significant levels of searching, reading, or complex problem solving. The law is decreasingly applicable as the complexity of tasks increases.2

Designers can improve the efficiency of a design by understanding the implications of Hick’s law. For example, the law applies to the design of software menus, control displays, wayfinding layout and signage, and emergency response training — as long as the decisions involved are simple. Hick’s law does not apply to complex menus or hierarchies of options. Menu selection of this type is not a simple decision-making task, since it typically involves reading sentences, searching and scanning for options, and some level of problem solving.

Consider Hick’s law when designing systems that involve decisions based on a set of options. When designing for time-critical tasks, minimize the number of options involved in a decision to reduce response times and minimize errors. When designs require complex interactions, do not rely on Hick’s law to make design decisions; rather, test designs on the target population using realistic scenarios. In training people to perform time-critical procedures, train the fewest possible responses for a given scenario. This will minimize response times, error rates, and training costs.

See also Error, Design; Error, Human; Fitts’ Law; Interference Effects; Miller’s Law; Signal-to-Noise Ratio; Wayfinding

1

The seminal work on Hick’s law is “On the Rate of Gain of Information” by W.E. Hick, 1952, Quarterly Journal of Experimental Psychology, 4, 11–26; and “Stimulus Information as a Determinant of Reaction Time” by Ray Hyman, 1953, Journal of Experimental Psychology, 45, 188–196. Hick’s law is also known as the Hick-Hyman law.

2

The Hick’s law equation is RT = a + b log₂(n), where RT = response time, a = the total time that is not involved with decision-making, b = an empirically derived constant based on the cognitive processing time for each option (in this case ≈ 0.155 second for humans), and n = number of equally probable alternatives. For example, assume it takes 2 seconds to detect an alarm and understand its meaning. Further, assume that pressing one of five buttons will solve the problem caused by the alarm. The time to respond would be RT = (2 sec) + (0.155 sec)(log₂(5)) ≈ 2.36 seconds.
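To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The function name is an invented placeholder; the 2-second detection time, five options, and b ≈ 0.155 come from the footnote’s worked example:

import math

def hick_response_time(n_options, base_time, b=0.155):
    # Hick's law: RT = a + b * log2(n), with 'a' the time not spent
    # deciding (e.g., detecting and interpreting the alarm) and 'b'
    # the per-option cognitive processing constant (~0.155 s).
    return base_time + b * math.log2(n_options)

# Footnote example: 2 seconds to detect the alarm, five candidate buttons.
print(round(hick_response_time(5, base_time=2.0), 2))  # -> 2.36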

Menus The time to select an item from a simple software menu (e.g., Locations, Messages, Settings, Favorites) increases with the number of items.

Test Options Hick’s law does not apply to tasks that involve significant levels of reading and problem solving, like taking a test.

Braking The time to press the brakes to avoid hitting an obstacle increases if there is an opportunity to steer around it.

Predatory Behavior The time for a predator to target prey increases with the number of prey.

Device Settings The time to make simple decisions about adjustments on a device increases with the number of controls.

Road Signs The time for a driver to make a turn based on a particular road sign increases with the total number of road signs.

Simple Tasks The time to press the button that matches a changing light color increases with the number of colors.

Martial Arts The time for a martial artist to block a punch increases with the number of known blocking techniques.

090

Hierarchy of Needs A hierarchy of user-centered goals that a design must satisfy to achieve optimal success. The hierarchy of needs specifies that a design must serve low-level needs (e.g., it must function) before the higher-level needs, such as creativity, can begin to be addressed. Good designs follow the hierarchy of needs principle and are generally most successful in the marketplace. Designs that attempt to meet needs from the various levels without building on the lower levels of the hierarchy first are generally unsuccessful.1 The five key levels of needs in the hierarchy from lowest to highest are:
1. Functionality — The design meets basic functional needs, fostering satisfaction. For example, a video recorder must, at minimum, be able to record, play, and rewind recorded programs. Designs at this level are perceived to be of minimal value.
2. Reliability — The design has consistent and reliable performance over time, fostering trust. For example, a video recorder should perform consistently and play back recorded programs at an acceptable level of quality. If the design performs erratically or is subject to frequent failure, reliability needs are not satisfied. Designs at this level are perceived to be of low value.
3. Usability — The design is easy to use, fostering fondness. For example, configuring a video recorder to record programs at a later time should be easily accomplished, and the recorder should be tolerant of mistakes. If the difficulty of use is too great or the consequences of simple errors too severe, usability needs are not satisfied. Designs at this level are perceived to be of moderate value.
4. Proficiency — The design leads to increased productivity and empowerment, fostering pride and status. For example, a video recorder that can seek out and record programs based on keywords enables people to do things not previously possible. Designs at this level are perceived to be of high value.
5. Creativity — The design satisfies all needs, and people begin interacting with the product in innovative ways. Designs are used to create and explore areas that extend both the design and the person using the design. Products at this level are perceived to be of the highest value and can have a cult-like following.
Consider the hierarchy of needs in design, and ensure that lower-level needs are satisfied before resources are devoted to serving higher-level needs. Evaluate existing designs with respect to the hierarchy to determine where modifications should be made. See also Aesthetic-Usability Effect; Form Follows Function; Kano Model;

Pareto Principle; Product Life Cycle
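As a rough way to operationalize the “lower levels first” guidance, here is a minimal sketch in Python, assuming a hypothetical 0-to-1 satisfaction score per level and an arbitrary threshold; none of these names or values come from the text:

LEVELS = ["functionality", "reliability", "usability", "proficiency", "creativity"]

def next_level_to_address(scores, threshold=0.8):
    # Walk the hierarchy from lowest to highest and return the first
    # level whose score falls short; resources go there first.
    for level in LEVELS:
        if scores.get(level, 0.0) < threshold:
            return level
    return None  # all levels satisfied

# Example: functional and reliable, but hard to use.
print(next_level_to_address(
    {"functionality": 0.9, "reliability": 0.85, "usability": 0.5}
))  # -> usability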

1

This principle was modeled after Maslow’s Hierarchy of Needs, described in Motivation and Personality by Abraham Maslow, 1954/1987, Addison-Wesley.

GoPro cameras were designed to be small and rugged, but who could have anticipated the creativity that they would unleash?

[Pyramid diagram: five levels stacked bottom to top as Functionality, Reliability, Usability, Proficiency, Creativity, with an axis indicating increasing Perceived Value.]

The hierarchy of needs specifies that a design must address lower-level needs before higher-level needs can be addressed. The perceived value of a design corresponds to its place in the hierarchy — i.e., higher levels in the hierarchy correspond to higher levels of perceived value. The levels of hierarchy are adapted from Maslow’s Hierarchy of Needs.

091

Highlighting A technique for focusing attention on an area of text or image. Highlighting is an effective technique for bringing attention to elements of a design. Applied correctly, it lets the viewer get to the most important information quickly. If applied improperly, however, highlighting can be ineffective and actually reduce performance. For example, highlighting more than 10% of the visible design dilutes the benefit. Use a small number of highlighting techniques applied consistently throughout the design.1 Common highlighting techniques include:
• Bold, italics, and underlining — An effective highlighting technique for titles, labels, captions, and short word sequences when the elements need to be subtly differentiated. Bolding is generally preferred over other techniques, as it adds minimal noise to the design and clearly highlights target elements. Italics adds minimal noise to a design but is less detectable and legible. Underlining adds considerable noise and compromises legibility and should be used sparingly, if at all.2
• Typeface — Avoid using different fonts as a highlighting technique. A detectable difference between fonts is difficult to achieve without also disrupting the aesthetics of the typography. Uppercase text in short word sequences is easily scanned and thus can be advantageous when applied to labels and keywords within a busy display.
• Color — A potentially effective highlighting technique but one that should be used sparingly and only in concert with other highlighting techniques. Highlight using a few desaturated colors, and ensure high contrast.
• Inversing — An effective highlighting technique that works well with text but may not work as well with icons or shapes. Inversing foreground and background elements (e.g., white text on a black background) is effective at attracting attention but adds considerable noise to the design and, therefore, should be used sparingly.
• Blinking — An effective highlighting technique that should be reserved for highly critical information requiring an immediate response, such as an emergency status light. Flashing an element between two states is a powerful technique for attracting attention, but it is important to be able to turn off the blinking once it is acknowledged, as it compromises legibility and distracts from other tasks.
Consider highlighting to grab and focus attention. Use highlights that are effective at attracting attention but that add minimal noise to the overall display. Do not highlight more than 10% of a display or else the highlighting effect is weakened: When everything is highlighted, nothing is highlighted. See also Color Effects; Interference Effects; Legibility; Readability;

Signal-to-Noise Ratio; von Restorff Effect
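The 10% guideline lends itself to a simple check. A minimal sketch in Python, assuming display elements are tagged with a hypothetical highlight flag (the data model is illustrative, not from the text):

def highlight_ratio(elements):
    # Fraction of display elements carrying any highlight.
    if not elements:
        return 0.0
    return sum(1 for e in elements if e["highlight"]) / len(elements)

display = [{"text": "Warning", "highlight": True}] + \
          [{"text": f"item {i}", "highlight": False} for i in range(19)]
ratio = highlight_ratio(display)
print(f"{ratio:.0%} highlighted")  # -> 5%, within the 10% guideline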

1

See, for example, “A Review of Human Factors Guidelines and Techniques for the Design of Graphical Human-Computer Interfaces” by Martin Maguire, 1982, International Journal of Man-Machine Studies, 16(3), 237–261.

2

A concise summary of typographic principles of this kind is found in The Mac Is Not a Typewriter by Robin Williams, 1990, Peachpit Press. Despite the title, the book is of value to non-Macintosh owners as well.

[Highlighting examples, each shown with and without the technique, using passages from Alice’s Adventures in Wonderland: Highlight 10% or Less; Bold, Italics, and Underlining (“Advice from a Caterpillar”); Typeface (uppercase emphasis in “What IS a Caucus-race?”); Color; Inversing (“Who Stole the Tarts?”).]

092

Horror Vacui A tendency to fill blank spaces with things rather than leaving spaces empty. Horror vacui — a Latin expression meaning fear of emptiness — refers to the desire to fill empty spaces with information or objects. In style, it is the opposite of minimalism. Though the term has varied meanings across different disciplines dating back to Aristotle, today it is principally used to describe a style of art and design that leaves no empty space. Examples include the paintings of artists Jean Dubuffet and Adolf Wölfli, works of graphic designers David Carson and Vaughan Oliver, and the cartoons of S. Clay Wilson and Robert Crumb. The style is also commonly employed in commercial media such as newspapers, comic books, and websites.1

Horror vacui can be particularly problematic for safety-critical displays. Well-intentioned designers may believe there is value in providing as much information as possible on a display, without considering the detrimental effects of that approach. Insufficient white space prevents visual grouping and organization, and unnecessary information takes attentional resources away from key information required to perform a task.

Recent research into how horror vacui is perceived suggests an inverse relationship between horror vacui and value perception — i.e., as horror vacui increases, perceived value decreases. In a survey of more than 100 clothing stores that display merchandise in shop windows, the degree to which the windows were filled with stuff was inversely related to the average price of the clothing and brand prestige of the store. Bulk sales shops and chain stores tended to fill windows to the maximum degree possible, whereas high-end boutiques often displayed only a single mannequin. It may be that the inverse relationship is actually between the affluence of a society and the perceived value associated with horror vacui — i.e., for those accustomed to having more, less is more; and for those accustomed to having less, more is more. Others have speculated that the relationship is more a function of education. This area of research is immature and follow-up is needed, but initial findings are compelling.2

Consider horror vacui in the design of displays and advertising. Favor minimalism when offering quality-driven or luxury goods, focusing on just a few choices with ample negative space. Favor horror vacui when offering price-driven or lower-quality goods, focusing on many choices with little negative space. For information-rich media such as newspapers, websites, and safety-critical displays, employ information-organizing principles such as alignment and chunking to retain the benefits of information density while mitigating the noisiness of horror vacui. See also Entry Point; Inattentional Blindness; Interference Effects; KISS;

Ockham’s Razor; Performance Load; Signal-to-Noise Ratio

1

Horror vacui is most notably associated with the Italian-born critic Mario Praz, who used the term to describe the cluttered interior design of the Victorian age.

2

“Visualizing Emptiness” by Dimitri Mortelmans, 2005, Visual Anthropology, 18, 19–45. See also The Sense of Order: A Study in the Psychology of Decorative Art by Ernst Gombrich, 1970, Phaidon.

Three shop windows with varying levels of merchandise on display. The perceived value of the merchandise and prestige of the store are generally inversely related to the visual complexity of the display.

White space is to be regarded as an active element, not a passive background. — Jan Tschichold, 1930

093

Icarus Matrix A 2 × 2 matrix representing the possible success-failure outcomes of a design iteration. It has become fashionable to embrace pithy slogans like “Fail fast, fail often” and “Move fast and break things”. But such slogans embody an oversimplified understanding of possible outcomes. It is more productive to think of success and failure in terms of a 2 × 2 matrix, here referred to as an Icarus matrix. The Icarus matrix teaches that not all successes are good and not all failures are bad: Just as there are beautiful successes, there are beautiful failures; and just as there are ugly failures, there are ugly successes.1
• Ugly successes — The goal is achieved, but the costs are so high and the learning so minimal that success is effectively meaningless, akin to pyrrhic victories. Ugly successes typically occur in public, as the substantial resources they consume make them hard to conceal. Earmarks of ugly successes include bad strategy, overconfidence, undisciplined cost management, and the sunk cost effect.
• Beautiful successes — The goal is achieved, the costs are low, and significant learning occurs. Beautiful successes are the best-case scenario. They are shared publicly when there is high confidence of a positive outcome or after the fact when the outcome is uncertain. Earmarks of beautiful successes include realistic goals and the ability to accurately assess progress, comprehensive testing, responsiveness to data and feedback, and scaling in phases.
• Ugly failures — The goal is not achieved, the costs are high, and no learning results. Ugly failures are the worst-case scenario. They typically occur in public and are often exacerbated by attempts to conceal them. In extreme cases, ugly failures can cause bankruptcy, loss of life, and damage to property. Earmarks of ugly failures include delusional or wishful thinking, inadequate testing, groupthink, and premature scaling.
• Beautiful failures — The goal is not achieved, but the costs are low and significant learning results. While not the best-case scenario, beautiful failures are positive outcomes worthy of celebration. They typically occur in private, often in the form of camouflaged prototypes and confidential pilot programs. When it is not possible to keep failures private, beautiful failures tend to be highly public, with the stakeholders involved owning the spectacle and engaging the public in the experience. Earmarks of beautiful failures are identical to those of beautiful successes.
Consider the Icarus matrix to promote understanding of productive (and unproductive) success and failure. Use it to guide product development, testing, and marketing strategy. Do not seek to indiscriminately “fail fast” or “break things”, but, rather, seek to succeed and fail beautifully, celebrating both beautiful successes and beautiful failures equally. See also Cost-Benefit; Knowing-Doing Gap; Sunk Cost Effect; Testing Pyramid

1

See, for example, “Why ‘Fail Fast, Fail Often’ Is All Hype” by Steve Tobak, Jan 2017, Entrepreneur; and “The Era of ‘Move Fast and Break Things’ Is Over” by Hemant Taneja, Jan 2019, Harvard Business Review, 21. It has long been said that success has many parents and that failure is an orphan. The Icarus matrix version: Beautiful successes and beautiful failures have many parents, and ugly successes and ugly failures are orphans.

[Icarus matrix:

             HIGH COST                            LOW COST
SUCCESS      Ugly Success (Pyrrhic Victory)       Beautiful Success (Best-Case Scenario)
FAILURE      Ugly Failure (Worst-Case Scenario)   Beautiful Failure (Queen Sacrifice)]

Most people think about success and failure in binary terms, but in iterative contexts like design, it is more productive to think of success and failure in terms of an Icarus matrix. In an Icarus matrix, there are the all-too-familiar beautiful successes and ugly failures; but there are also the more interesting ugly successes and beautiful failures. What counts are the costs incurred and the learnings gained, as these are the fuel of subsequent iterations.

Testing new launch vehicles is impossible to do privately. SpaceX therefore embraces the spectacle of potential failures, live streaming the launches and return landings with multiple camera views and color commentary. Significant amounts of data are collected with each test, and the learnings are rolled into subsequent iterations. In these photographs, Starship SN8 (meaning the eighth major iteration of this vehicle design) fails beautifully.

094

Iconic Representation The use of pictorial images to improve recognition and recall. Iconic representation is the use of pictorial images to make actions, objects, and concepts in a display easier to find, recognize, learn, and remember. Icons can be used to identify (company logo), to serve as a space-efficient alternative to text (road signs), or to draw attention to an item within a display (error icons appearing next to items in a list).1 There are four types of iconic representation:
1. Similar icons — Images that are visually analogous to an action, object, or concept. They are most effective at representing simple actions, objects, or concepts and less effective when the complexity increases. For example, a sign indicating a sharp curve ahead can be represented by a similar icon (e.g., curved line). A sign to reduce speed, however, is an action not easily represented by similar icons.
2. Example icons — Images of things that exemplify or are commonly associated with an action, object, or concept. They are particularly effective at representing complex actions, objects, or concepts. For example, a sign indicating the location of an airport uses an image of an airplane rather than an image representing an airport.
3. Symbolic icons — Images that represent an action, object, or concept at a higher level of abstraction. They are effective when actions, objects, or concepts can be represented by simple, well-established, and easily recognizable forms of the action, object, or concept. For example, a door lock control on a car door uses an image of a padlock to indicate its function, even though the padlock looks nothing like the actual control.
4. Arbitrary icons — Images that bear little or no relationship to the action, object, or concept — i.e., the relationship has to be learned. Generally, arbitrary icons should only be used when the action, object, or concept cannot be represented using other approaches and when developing cross-cultural or industry standards. For example, the icon for radiation must be learned, as nothing intrinsic to the image indicates radiation. Those who work with radiation, however, recognize the symbol all over the world.
Consider iconic representation to aid recognition and recall, overcome language barriers, and enhance the aesthetics of communication. Generally, icons should be labeled and share a common visual motif (style and color) for optimal performance. See also Miller’s Law; Performance Load; Picture Superiority Effect;

Recognition over Recall; Rosetta Stone

1

The seminal work in iconic representation is Symbol Sourcebook by Henry Dreyfuss, 1984, Van Nostrand Reinhold. The four kinds of iconic representation are derived from “Icons at the Interface: Their Usefulness” by Yvonne Rogers, Apr 1989, Interacting With Computers, 1, 105–118.

[Sixteen example icons in four rows. Similar: Falling Rocks, Fire, Right Turn, Sharp Curve. Example: Airport, Hiking Trail, Picnic Area, Restaurant. Symbolic: Electricity, Fragile, Unlock, Water. Arbitrary: Collate, Female, Radioactive, Resistor.]

095

Identifiable Victim Effect A single, identifiable victim elicits more helping behaviors than a group of anonymous victims. The identifiable victim effect refers to the increased tendency to help a specific, identifiable victim versus a larger number of abstract or anonymous victims. For example, highlighting the plight of an individual using their name, photo, and story will generally elicit more helping behaviors than highlighting the same plight of a large group using statistics. The identifiable victim effect is sensitive to many factors and, therefore, can be tricky to reliably employ. Attempts to experimentally replicate the effect in laboratory contexts have yielded mixed results, from failing to show an effect altogether to showing weak effects.1

1

The seminal reference is “The life you save may be your own” by Thomas Schelling, 1968, in Problems in Public Expenditure Analysis by Samuel Chase (Ed.), The Brookings Institution, 127–162. For mixed results and replication issues, see, for example, “The identifiable victim effect: a meta-analytic review” by Seyoung Lee and Thomas Hugh Feeley, 2016, Social Influence, 11(3), 199–215; and “The elusive power of the individual victim: Failure to find a difference in the effectiveness of charitable appeals focused on one compared to many victims” by P. Sol Hart et al., 2018, PLoS One, 13(7), 1–15.

2

See, for example, “COVID-19 has killed a million Americans. Our minds can’t comprehend that number” by Sujata Gupta, May 18, 2022, Science News.

3

See, for example, “How far is the suffering? The role of psychological distance and victims’ identifiability in donation decisions” by Tehila Kogut et al., Sep 2018, Judgment and Decision Making, 13(5), 458–466.

4

See, for example, “Emotional reactions, perceived impact and perceived responsibility mediate the identifiable victim effect, proportion dominance effect and in-group effect respectively” by Arvid Erlandsson et al., Mar 2015, Organizational Behavior and Human Decision Processes, 127, 1–14.

5

See, for example, “The identifiable victim effect and public opinion toward immigration; a natural experiment study” by Odelia Heizler and Osnat Israeli, Aug 2021, Journal of Behavioral and Experimental Economics, 93(1), 101713.

The key factors influencing the effect seem to be:
• Number of victims — People are more likely to help when the victim is an individual or a small group (less than five). People feel empathy for individuals but not for large numbers or more abstract groups.2
• Familiarity of victims — People are more likely to help when the victims are familiar in some way. Names, photographs, background stories, and nearby geographic locations all increase familiarity. For example, people are more likely to render aid to specific people who live near them versus groups of people who live far away.3
• Impact of helping — People are more likely to help when they feel their help can have a meaningful impact. In most cases, the perceived ability to help an individual or a few is greater than for a large group, which seems overwhelming and beyond the scope of most people.4
• Authenticity of the victim and their plight — People are more likely to help when they perceive the victim and their plight to be authentic. This is likely one reason laboratory experiments have difficulty reproducing the effect. If the presentation of the victim or their plight seems staged or manipulated, people are less likely to feel empathy or render aid. Note that if presentations are too sad or emotional, they can reduce helping behavior by inducing people to tune out to avoid distress.5
Consider the identifiable victim effect when creating help appeals. The best appeals emphasize an individual victim, present information that brings that victim to life and makes them familiar, explain how the help will have a high impact, and ensure that the appeal is credible and verifiable. See also Cognitive Dissonance; Exposure Effect; Framing; Peak-End Rule;

Social Proof

Two fundraising ads: “Donate to help stray dogs in Harris County” and “Donate to help Millie”.

Humans are wired to empathize with and help identifiable individuals. As the number in need increases and becomes more anonymous, the tendency to empathize and help diminishes. Given these two fundraising ads, the identifiable victim effect suggests the “Donate to help Millie” ad would raise more money.

A single death is a tragedy; a million deaths is a statistic. — Joseph Stalin (attributed)

096

IKEA Effect The act of creating a thing increases the perceived value of that thing to the creator. The IKEA effect is the sense of emotional attachment resulting from creating or partially creating a thing (e.g., assembling furniture). People are willing to pay more for products they create than for equivalent preassembled products and value things they personally create — even if shoddily constructed — as much as if those things had been created by an expert. The effect only applies to projects that are completed and dissipates if individuals disassemble their creations. The IKEA effect is not unique to humans and has been observed in animals such as rats and starlings, which seem to prefer food from sources that required effort on their part.1

In general, the level of effort invested in creating something corresponds to the level of its valuation by the creator: High effort translates into high valuation, and low effort translates into low valuation. But if the effort required is too great, then people won’t finish and the effect becomes moot. The key, therefore, is designing experiences that require a minimal level of actual effort but enough perceived effort that people feel invested in their creation.

A classic experiment along these lines was conducted in the 1950s. American food manufacturers had mastered the art of instant cake mixes. All ingredients were included — just add water. Despite the simplicity and quality of the cakes, sales stalled, and nobody understood why. The psychologist Ernest Dichter investigated and concluded that the cake mixes were too instant — i.e., so instant that women did not feel like they were baking a cake at all. Dichter’s solution: Require a couple of fresh eggs to make the cake and play up its icing and decoration in the marketing. This would make women feel more connected with the creative aspects of the baking process, more like chefs. It worked. Sales of instant cake mix shot up.2

Consider the IKEA effect in product strategy and experience design. Engage users in the creation of products to increase their value perception. Seek the sweet spot between minimal actual effort and maximum perceived effort to realize the greatest benefit. Note that assembly processes must be clear and free of frustration or the negative experience can undermine the positive associations of the effect. See also Cognitive Dissonance; Not Invented Here; Sunk Cost Effect;

Zeigarnik Effect

1

The seminal research is “The IKEA effect: When labor leads to love” by Michael Norton et al., 2012, Journal of Consumer Psychology, 22(3), 453–460. This research was successfully replicated in “The IKEA Effect. A Conceptual Replication” by Marko Sarstedt et al., 2016, Journal of Marketing Behavior, 2, 307–312. For the IKEA Effect in animals, see, for example, “Cost can increase preference in starlings” by Alex Kacelnik and Barnaby Marsh, 2002, Animal Behaviour, 63(2), 245–250.

2

See, for example, Finding Betty Crocker by Susan Marks, 2005, Simon & Schuster.

The effort people expend when assembling IKEA furniture actually makes them value the furniture more.

097

Inattentional Blindness A failure to perceive an unexpected stimulus presented in clear view. When people are focused intently on performing a task, roughly 50% of them will be functionally blind to stimuli that are unexpected and unrelated to the task. Functionally blind means that the eyes see, but the brain does not process the visual inputs. For example, in 1972, an Eastern Airlines cockpit crew noticed that a landing gear indicator failed to illuminate. They became so fixated on the cause that they failed to notice their loss of altitude or respond to ground alarms. The resulting crash killed more than 100 people. Inattentional blindness is one reason talking on the phone (even a hands-free phone) while driving is unsafe — the eyes may be on the road, but the mind is occupied elsewhere.1

Inattentional blindness can result from both cognitive overload and the performance of highly practiced or automatized tasks. In the former case, the brain doesn’t have sufficient cognitive resources to process all of the visual stimuli and is therefore effectively blind to them. In the latter case, the brain slips into a kind of automatic-pilot mode and fails to detect stimuli outside of the parameters of the automatized task. For example, in 1996, Pennsylvania highway workers paved over a dead deer — they didn’t see it. It seems reasonable to think that a surprise such as seeing a deer in the road would have captured the workers’ attention, but, counterintuitively, when people are task-focused, unexpected stimuli are actually less likely to be noticed than anticipated stimuli.

Strategies to mitigate inattentional blindness include minimizing distractions in the environment (e.g., reducing auditory and visual noise), reducing cognitive load (e.g., chunking elements), strategically interrupting automatized procedures to heighten situational awareness and engage deliberative thought (e.g., using checklists to confirm steps and statuses), and expressing stimuli through different modalities (e.g., repeating instructions using auditory and textual stimuli).2

Inattentional blindness is the cornerstone of many of the tricks and misdirections employed by magicians and illusionists. Performers will frequently focus the audience’s attention on one thing while performing the sleight of hand where the audience is not looking. Consider inattentional blindness when situational awareness is key. Design tasks to focus attention on desired stimuli while reducing cognitive load and environmental distractions. Use alerts, confirmations, and multiple modalities of communication to interrupt processes and engage conscious thought. See also Confirmation; Flow; Habituation; Interference Effects;

Performance Load; Signal-to-Noise Ratio; von Restorff Effect

1

The seminal work on inattentional blindness is Inattentional Blindness by Arien Mack and Irvin Rock, 1998, MIT Press. See also “Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events” by Daniel Simons and Christopher Chabris, 1999, Perception, 28(9), 1059–1074; and “Selective Looking: Attending to Visually Specified Events” by Ulric Neisser and Robert Becklen, 1975, Cognitive Psychology, 7, 480–494. This principle is also known as perceptual blindness.

2

See, for example, “What You See Is What You Set: Sustained Inattentional Blindness and the Capture of Awareness” by Steven Most et al., 2005, Psychological Review, 112(1), 217–242.

In the early morning of January 13, 2018, a ballistic missile alert was issued via the Emergency Alert System and Wireless Emergency Alert System over television, radio, and cell phones in the state of Hawaii. The alert caused chaos and panic among residents and visitors. About 40 minutes later, a second alert was issued, communicating it had been a false alarm. What happened? A night-shift supervisor decided to run an unscheduled drill that relayed a phone call pretending to be from U.S. Pacific Command, which would normally warn the agency of a nuclear missile attack. The phone message used the disconcerting phrase, “This is not a drill”. Running unscheduled drills that used this phrase was not standard procedure for obvious reasons. Even though the message began and ended with “Exercise, exercise, exercise”, this language had become so routinized that it did not garner much attention. What did garner attention was the surprise phone call and the language, “This is not a drill”, which made at least one employee blind to everything else. He followed his training and transmitted the public alert. In the aftermath that followed, the employee was terminated, and the state emergency manager resigned. A number of policy and process changes were instituted, including prescheduling surprise drills with supervisors and requiring confirmation by two officers to send an alert, not just one.

I was 100% sure that it was the right decision; that it was real…I heard “This is not a drill”. I didn’t hear “exercise” at all. — Employee who issued the alert, interview with NBC News

098

Interference Effects Things that trigger conflicting thought processes, reducing reaction and thinking efficiency. Interference effects occur when nonessential mental processes are triggered and interfere with essential mental processes, increasing errors and slowing performance. Nonessential mental processes can be triggered by conflicting meanings, distractions in the environment, and memories that are irrelevant to the task at hand.1 Common types of interference effects include:
• Stroop interference — Two aspects of a design are incongruous and trigger competing mental processes. For example, a green “stop” button triggers a mental process for “go”, which is incongruous with its function.
• Distraction interference — Visual clutter or noise requires increased searching and filtering, complicating the task and adding cognitive load. For example, a cluster of irrelevant signs distracts from one relevant sign.
• Emotional interference — An aspect of a design triggers an emotional response that is incongruous with the intended outcome. For example, a logo with pointed features is emotionally incongruous with baby products.
• Proactive interference — Existing knowledge hinders new learning. For example, in learning a new language, errors are made trying to apply the grammar rules of a native language to the new language.
• Retroactive interference — New learning alters or makes you forget existing knowledge. For example, learning a new phone number interferes with similar phone numbers in memory.
All elements of a design should be congruous with the design goal, which also means congruous to one another. Minimize interference by eliminating incongruous or distracting elements from the design or environment. Abide by strong color and symbol conventions when they exist (e.g., red means stop, green means go). Use learning devices such as analogies and knowledge maps to minimize proactive interference, and mix the presentation modes of instruction (e.g., lecture, video, and computer activities) to minimize retroactive interference. See also Error, Design; Error, Human; Inattentional Blindness; Mapping;

Performance Load; Signal-to-Noise Ratio

1

The seminal works on interference effects include “Studies of Interference in Serial Verbal Reactions” by James Stroop, 1935, Journal of Experimental Psychology, 18(6), 643; “Stimulus Configuration in Selective Attention Tasks” by James Pomerantz and Wendell Garner, 1973, Perception & Psychophysics, 14(3), 565–569; and “Characteristics of Word Encoding” by Delos Wickens, in Coding Processes in Human Memory, A.W. Melton and E. Martin (Eds.), 1972, V.H. Winston.

Arrows mean “go”, but red arrows mean “stop”. When traffic signs and signals create interference effects, accidents increase.

099

Inverted Pyramid The presentation of information from most important to least important. The inverted pyramid refers to a method of information presentation in which key information is presented first, and then additional elaborative information is presented in descending order of importance.1 In the pyramid metaphor, the base represents the most important information, while the tip represents the least important information. To invert the pyramid is to present the important information first, and the supplemental information last. The inverted pyramid has been a standard in journalism for over 100 years and has found wide use in instructional design and technical writing.

The inverted pyramid consists of a lede (critical information) and a body (elaborative information). The lede is a concise summary of the “what”, “where”, “when”, “who”, “why”, and “how” of the information. The body consists of subsequent paragraphs or chunks of information that elaborate facts and details in descending order of importance. It is increasingly common in Internet publishing to present only the lede and make the body available upon request (e.g., with a “more…” link).

The inverted pyramid offers a number of benefits over traditional methods of information presentation: It conveys the key aspects of the information quickly; it establishes a context in which to interpret subsequent facts; initial chunks of information are more likely to be remembered than later chunks of information; it permits efficient searching and scanning of information; and information can be easily edited for length, knowing that the least important information will always be at the end.

The efficiency of the inverted pyramid is also its limiting factor. While it is a succinct, information-dense method of information presentation, it achieves its efficiency by sacrificing storytelling devices such as building suspense and surprise endings. As such, inverted pyramid accounts are often perceived as clinical and boring. Use the inverted pyramid when communication efficiency is key or when communication channels are unreliable. When it is not possible to use the inverted pyramid (e.g., scientific writing), consider an abstract or executive summary at the beginning to present the key findings. See also Entry Point; Form Follows Function; Ockham’s Razor;

Progressive Disclosure; Serial Position Effects
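Because chunks are ordered by descending importance, editing for length reduces to cutting from the end. A minimal sketch in Python (the function name and sample chunks are illustrative, not a prescribed implementation):

def trim_for_length(chunks, max_chars):
    # Keep chunks, ordered most to least important, until the
    # character budget is spent; the least important are cut first.
    kept, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > max_chars:
            break
        kept.append(chunk)
        used += len(chunk)
    return kept

article = [
    "Lede: who, what, where, when, why, how.",  # most important
    "Body: key supporting facts.",
    "Body: further details.",
    "Conclusion: nice-to-have color.",          # least important
]
print(trim_for_length(article, max_chars=80))  # drops the tail first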

1

The development of the inverted pyramid is attributed to Edwin Stanton, Abraham Lincoln’s Secretary of War (1865). See, for example, Just the Facts: How “Objectivity” Came to Define American Journalism by David Mindich, 2000, New York University Press.

LEDE: Information Readers Must Have to Know What Happened
This evening at about 9:30 PM, at Ford’s Theater, the President, while sitting in his private box with Mrs. Lincoln, Miss Harris, and Major Rathbone, was shot by an assassin who suddenly entered the box and approached the President.

BODY: Information That Helps Readers Understand but Isn’t Essential
General Grant and wife were advertised to be at the theater this evening, but he started for Burlington at six o’clock this evening. At a cabinet meeting at which General Grant was present, the subject of the state of the country, and the prospect of a speedy peace was discussed.

CONCLUSION: Information That’s Interesting or Nice to Have
All the members of the cabinet, except Mr. Seward, are now in attendance upon the President. I have seen Mr. Seward, but he and Frederick are both unconscious.

The report of President Lincoln’s assassination established the inverted pyramid style of writing. Its economy of style, a stark contrast to the lavish prose of the day, was developed for efficient communication by telegraph.

100

Iron Triangle A model that proposes three constraints for all projects: time, cost, and scope. The iron triangle is a project management model that proposes three governing constraints for all projects: time, cost, and scope. Time refers to the amount of time the project has to be completed. Cost refers to budget, people, and resources assigned to the project. And scope refers to the features, functionalities, and quality of the thing being developed. Altering any one of these constraints likely necessitates changes to the other two. For example, an increase in scope would likely necessitate an increase in both budget and time.1

The iron triangle is commonly introduced by the expression, You can have it good, fast, or cheap; pick two. The notion is that project owners should identify the two most important constraints for the project and be willing to compromise the third, if needed, to make the project a success. In most real-world cases, however, project owners can reliably preserve just one constraint, not two. They should declare this one constraint a priority at the outset and then be prepared to compromise the other two to ensure project success. For deadline-driven projects, there needs to be flexibility to increase budget and reduce scope. For budget-driven projects, there needs to be flexibility to increase time and reduce scope. For scope-driven projects, there needs to be flexibility to increase time and budget.2

Declaring one constraint the priority does not mean the other two constraints are unimportant, nor does it signal an intention at the outset to compromise the other two. It is basic contingency planning, aligning the project owner and the design and development teams on the constraint that is most important, which will save valuable time and resources mid-project if tradeoff decisions must be made.3

The iron triangle is a first principle of project management, akin to a law of physics. As such, consider it in all project design and project management contexts. Collaborate with project owners to identify the highest-priority constraint and then contingency plan tradeoffs for the other two. Manage tradeoffs rather than be managed by them. It is not uncommon for project owners to resist declaring one constraint the priority — they typically want good, fast, and cheap — but there can be only one: They can choose, or reality will choose for them.4 See also Box’s Law; Brooks’ Law; Constraint; First Principles;

Minimum-Viable Product; Process Eats Goal; Satisficing

1

Note that quality and performance are often introduced as an adjunct to scope, but the classic model has scope encapsulating capability and grade attributes. See, for example, “Theory of the Triple Constraint — a Conceptual Review” by C.J. Van Wyngaard et al., Dec 2012, Proceedings of the 2012 IEEE IEEM, 1991–1997. This principle is also known as the triple-constraint and project management triangle.

2

The “pick two” practice works for simple projects with well-understood risks and few unknowns, but projects complex enough to require project management should follow the “pick one” practice.

3

Note that satisfying the constraints of the iron triangle does not guarantee ultimate project success. Projects can fail to abide by time, cost, and scope constraints and still succeed in the long term. Conversely, projects can succeed in abiding by all three constraints and still fail in the long term. See, for example, “Beyond the Iron Triangle: Evaluating Aspects of Success and Failure using a Project Status Model” by Malcolm Bronte-Stewart, Nov 2015, Computing and Information Systems Journal, 9(2), 19–36.

4

Contrary to the opinion of many, the iron triangle applies to all projects, including those using Scrum/Agile/Lean. Certain of these approaches may be more effective at avoiding or navigating tradeoffs, but the dynamics of the iron triangle remain whether they are acknowledged or not.

[The Iron Triangle: a triangle diagram with its three vertices labeled Money, Time, and Scope.]

The Sydney Opera House was originally estimated to cost $7 million and take 4 years to complete. In the end, it cost $102 million and took 14 years. As a project, it was an abject failure; but as a product, it is an unequivocal success. Despite being extremely late and over budget, it has far exceeded expectations in terms of economic performance and international recognition. The iron triangle lesson? The Sydney Opera House was always a scope-driven project, though this was never explicitly acknowledged by stakeholders during the project. Not declaring and aligning to scope as the priority created extreme tension among stakeholders when cost and time tradeoffs needed to be made, putting the project at existential risk throughout the project life cycle. However, there is an alternative possibility: The project champions knew the opera house was scope-driven, but they also knew that the other stakeholders would never agree to the budget and timeline required to achieve the scope. They therefore intentionally set unrealistically low budget and timeline estimates to get approval and then relied on salesmanship and the sunk cost effect to ratchet costs and deadlines incrementally over time to bring the full scope to fruition.

…the Iron Triangle represents a kind of “project management physics”. That is, your project is going to follow the laws of the Iron Triangle whether you’re conscious of it or not. It’s like saying that regardless of whether or not you believe in gravity, Newton’s apple is still going to hit the ground! — Raj Nagappan, “The Iron Triangle and Agile”

101

Iteration Designing things in phases, each phase building on the last, until a desired result is achieved. In nature, iteration allows complex structures to form by progressively building on simpler structures. In design, iteration allows complex structures to be created by progressively exploring, testing, and tuning the design. The emergence of ordered complexity results from an accumulation of knowledge and experience that is then applied to the design. Iteration occurs in all development cycles and can be beneficial or detrimental, depending on when it is employed.1
• Design iteration (employed during the design stage) — The backbone of design and design thinking. Great design does not happen without it. Design iteration refers to repeating the basic steps of analysis, prototyping, and testing until a desired result is achieved. Each cycle in the design process narrows the wide range of possibilities. Prototypes of increasing fidelity are used throughout the process to test concepts and identify unknown variables. For example, a quality software user interface might begin as a paper prototype, be iterated and improved through user feedback, and after further iteration based on new understanding, finally mature to a fully interactive, high-fidelity product. Whether tests that occur throughout an iterative process are a success or failure is irrelevant, since both success and failure provide important information. The outcome of design iteration is a detailed and well-tested specification that can be developed into a final product.2
• Development iteration (employed during the development stage) — The unexpected iteration that occurs when building a product, which is undesirable. Unlike design iteration, development iteration is rework — i.e., unnecessary waste in the development cycle. Development iteration is costly and generally the result of either inadequate or incorrect design specifications or poor planning and management in the development process. Design unknowns and alternate ideas should be addressed during the design stage, not the development stage.
Plan for and employ design iteration. Establish clear criteria defining the degree to which design requirements must be satisfied for the design to be considered complete. One of the most effective methods of reducing development iteration is to ensure that all development members have a clear, high-level vision of the final product. This is often accomplished through well-written specifications, detailed comps, and high-fidelity models and prototypes. See also Development Cycle; Feedback Loop; KISS; Progressive Subtraction;

Prototyping; Satisficing; Self-Similarity
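The design-iteration cycle described above, together with the defined end point the footnotes call for, can be sketched as a loop. All of the callables here are hypothetical stand-ins for real analysis, prototyping, and testing activities:

def iterate_design(design, meets_criteria, prototype, test, refine,
                   max_iterations=20):
    # Repeat prototype-test-refine until predefined completion
    # criteria are met; the cap guards against a design process
    # with no defined end point.
    for _ in range(max_iterations):
        if meets_criteria(design):
            return design  # requirements satisfied; iteration ends
        findings = test(prototype(design))  # success or failure, both inform
        design = refine(design, findings)
    raise RuntimeError("Criteria never met; revisit the requirements")

# Toy usage: 'design' is a maturity score that each round improves.
final = iterate_design(
    design=1,
    meets_criteria=lambda d: d >= 5,
    prototype=lambda d: d,      # stand-in for building a prototype
    test=lambda p: 1,           # stand-in for user-testing findings
    refine=lambda d, f: d + f,
)
print(final)  # -> 5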

1

A seminal contemporary work on iteration in design is The Evolution of Useful Things by Henry Petroski, 1994, Vintage Books. See also Product Design and Development, 2nd ed., by Karl Ulrich and Steven D. Eppinger, 1999, McGraw-Hill Higher Education. See also “Positive vs. Negative Iteration in Design” by Glenn Ballard, 2000, Proceedings of the Eighth Annual Conference of the International Group for Lean Construction.

2

A common problem with design iteration is the absence of a defined end point — i.e., each iteration refines the design but also reveals additional opportunities for refinement, resulting in a design process that never ends. To avoid this, establish clear criteria defining the degree to which design requirements must be satisfied for the design to be considered complete.

By focusing on designing a plane that could be rebuilt in hours versus months, engineer Paul MacCready enabled his team to dramatically speed up iteration. Within a year, the Gossamer Condor flew 1.35 miles (2.17 km) from takeoff to landing, winning the first Kremer Prize.

…because the problem [MacCready] set out to solve was creating a plane he could fix in hours, he was able to quickly iterate. Sometimes he would fly three or four different planes in a single day. The rebuild, re-test, and re-learn cycle went from months and years to hours and days. — Aza Raskin, Fast Company

102

Kano Model A model for understanding customer needs and then prioritizing design features accordingly. The Kano model, proposed by the professor Noriaki Kano, describes the relationship between product features and customer satisfaction. The English translations of his research vary, but the model basically proposes the existence of three key feature categories.1
1. Delighter features — Product features that create surprise and delight. Customers are typically unaware of delighters or the problems they solve until they are experienced, after which they can’t imagine life without them. This also means that designers can’t identify delighters by asking or surveying users because customers have no frame of reference for the features. Because delighters are unexpected, they create customer satisfaction even when not fully implemented or refined and, for this same reason, can be offered as options or upgrades.
2. Performance features — Product features that are known and sought out by customers. Customers typically research and compare products based on performance features to make their buying decisions. Accordingly, performance features can be identified through focus groups and surveys. Some competitive baseline level of performance features must be present for a product to be successful; enhancements or extensions of these features can be offered as options or upgrades.
3. Threshold features — Product features that are assumed to be present and to perform well. Customers typically do not consider threshold features, as they are considered too basic and fundamental to merit attention. Products get no credit for having threshold features that work well, but they get severely punished for not having them.
Features tend to migrate downward across Kano categories over time, from delighters to performance to threshold. Why? A delighter feature is initially differentiated, but if successful, it is widely copied and becomes a performance feature. As performance features become common, they become increasingly expected until they become threshold features. Consider the Kano model to analyze the competitive landscape and to decide which features to include in an offering. Remember, customers can’t tell you what delighters are and won’t tell you what threshold features are; so use appropriate research methods by feature category. Ensure that products have at least one delighter feature (however unrefined), a competitive set of performance features (fairly refined), and a complete set of threshold features (very refined). See also Development Cycle; Expectation Effects; Habituation;

Hierarchy of Needs; Minimum-Viable Product
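The closing guidance can be expressed as a simple checklist. A minimal sketch in Python, with hypothetical category tags and feature names drawn loosely from the Rivian caption below (deriving the categories themselves requires Kano-style customer research, which this sketch does not attempt):

def kano_gaps(features):
    # Flag violations of the guidance: at least one delighter,
    # and no missing threshold features.
    gaps = []
    if not any(f["category"] == "delighter" and f["included"] for f in features):
        gaps.append("no delighter feature")
    missing = [f["name"] for f in features
               if f["category"] == "threshold" and not f["included"]]
    if missing:
        gaps.append("missing threshold features: " + ", ".join(missing))
    return gaps

truck = [
    {"name": "camp kitchen", "category": "delighter", "included": True},
    {"name": "0-60 in 3 s", "category": "performance", "included": True},
    {"name": "adaptive cruise control", "category": "threshold", "included": False},
]
print(kano_gaps(truck))  # -> ['missing threshold features: adaptive cruise control']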

1

“Attractive quality and must-be quality” by Noriaki Kano et al., 1984, The Journal of the Japanese Society for Quality Control, 14(2), 39–48. Note that Kano identifies two additional categories, unimportant and undesired features, which have no impact and a negative impact on customer satisfaction, respectively.

[Kano model diagram: customer satisfaction (dissatisfied to satisfied) plotted against feature implementation (not implemented to fully implemented), with curves for delighter features, performance features, and threshold features.]

Kano examples per the Rivian electric truck: Delighters include a gear tunnel with optional pull-out camp kitchen and rooftop tent; performance features include 0 to 60 miles per hour (0 to 96.5 km/hr) in 3 seconds and range of 260 to 400+ miles (418 to 644+ km); threshold features include adaptive cruise control and warranty of 5 years or 60,000 miles (96,560 km).

The Kano model teaches that there are three product-feature categories from a customer perspective. Delighter features surprise customers and differentiate offerings. Customers don’t ask for them and don’t expect them but are delighted to learn about them. Performance features are what customers know and seek out when making buying decisions. Customers compare products based on these features, and they influence the bulk of the buying decision. Threshold features are the basic features a product category should have. Customers don’t ask for them because the assumption is that these features are there.

103

KISS Simple designs work better and are more reliable than complex designs. The acronym KISS — Keep It Simple, Stupid — was proposed by Kelly Johnson, lead engineer at the Lockheed Skunk Works in the 1950s. Johnson designed aircraft for war, which meant planes had to perform reliably, and they had to be easy to fix in rough, stressful field conditions. One of the most powerful principles he employed to do this was KISS. KISS asserts that simple systems work better over the long term, benefiting from increased maintainability and reliability. Accordingly, applying KISS means designing systems that use a minimal number of parts with a minimal number of interactions between those parts and doing so with a mind for ease of maintenance. Given the choice between two systems of equal performance — car engines, algorithms, workflows — the ones with fewer parts and interactions between parts will be easier to manufacture, more reliable, and easier to troubleshoot and maintain.

Design is inherently messy. The process of iteration and prototyping is experimental and nonlinear, and designs accrue inessential elements and interactions as they are developed. KISS is a guiding principle to help prune such complexity. Rather than always being additive, designers following KISS seek to simplify and subtract with each iteration. Good designers ask what can be removed without hurting performance and usability, and they keep simplifying and subtracting until they can’t simplify any further — until subtracting something hurts a design requirement.

One of the greatest examples of applying KISS was the AK-47 assault rifle, invented by Mikhail Kalashnikov. The AK-47 began as a kind of Frankenstein rifle, borrowing from a variety of existing designs. Through a process of iterative subtraction, it was simplified to having only eight moving parts. It is objectively the most successful firearm in history, with an estimated 70 to 100 million of the weapons in circulation. In Afghanistan, an AK-47 is reputed to cost as little as $10. Tragically, these very qualities have led the rifle to become the preferred weapon of criminals and terrorists around the world.1

Consider KISS in all aspects of design. Ritualize simplification and progressive subtraction in iterations. Use the fewest parts possible, and minimize the interactions between those parts. Design systems to be easy to comprehend, maintain, and repair. Continue to simplify and reduce until performance or usability is compromised. See also Feature Creep; Iteration; Maintainability; Modularity; Ockham’s Razor;

Prototyping; Progressive Subtraction

1

A year before he died, inventor Mikhail Kalashnikov said, “It is painful for me to see when criminal elements of all kinds fire from my weapon, I created this weapon primarily to safeguard our fatherland”. This serves as a healthy reminder to designers: The KISS principle gives us good design, but it doesn’t give us wisdom.

When working to solve complex problems, Keep It Simple, Stupid. For example, the mobility of astronauts is limited by their seat restraints, space suits, and complex cabin configurations, which can make accessing and manipulating controls difficult. The solution? They use a stick, or, more properly, what they call a “swizzle stick”, to extend their reach. This is NASA Astronaut Doug Wheelock, swizzle stick in hand, aboard the Russian Soyuz spacecraft.

Simplicity is about subtracting the obvious, and adding the meaningful. — John Maeda The Laws of Simplicity

104

Knowing-Doing Gap The divide that exists between knowing how to do something and actually doing it.

The knowing-doing gap refers to a common disconnect in organizations between know-how and practice: They know better, but they fail to do better. The question is why. Know-how includes everything from management skills to people practices, from business strategy to line-level training and development. Why are the benefits of research-based practices so difficult to realize? The answer lies in the knowing-doing gap.1

The proposed causes of the knowing-doing gap are social-psychological in nature: (1) talking is mistaken for doing, (2) resistance to change, (3) fear of failure, (4) measuring the wrong things, and (5) internal competition. These causes, singularly or in combination, inhibit people in organizations from translating their know-how into appropriate actions. Bridging the knowing-doing gap is difficult because doing so itself requires translating knowledge about the problem into actions that resolve it. A number of guidelines for action have been proposed:2

• Lead with why — Explain why people are being asked to do something before explaining how to do it. Without the greater context and rationale, people tend to default to the status quo.

• Be intentional about execution — Classroom instruction and abstract presentations tend not to transfer to the field. Situate change and learning interventions in authentic contexts, doing real things and solving real problems.

• Forgive errors for action — Create a bias for action that is tolerant of mistakes and failure. It is generally better to try things out and iterate than to spend extensive time planning or hedging.

• Eliminate internal competition — Avoid teams competing with one another, especially when compensation or incentives are involved. Internal competition creates barriers to collaboration and invariably renders customer interests secondary to team interests.

• Measure what matters — Focus on measuring the few things that matter most. In the age of big data, it is tempting to engage in kitchen-sink analytics, but this risks distracting from what’s important.

Consider the knowing-doing gap in the implementation of new initiatives, processes, and programs. Beware the common causes of the knowing-doing gap and practice the guidelines for action. Recognition is the first step to recovery, but it is not enough to bridge the gap: Be intentional about doing — walk the talk.

See also Ackoff’s Law; Death Spiral; Icarus Matrix; Not Invented Here; Reverse Salient

1

The seminal work is The Knowing-Doing Gap by Jeffrey Pfeffer and Robert Sutton, 2000, Harvard Business School Press.

2

Pfeffer and Sutton identify eight guidelines for action, which have been consolidated here to five guidelines for brevity.

New United Motor Manufacturing Inc. (NUMMI), a $400 million joint venture between General Motors Corp. and Toyota Motor Corp., was inaugurated with a dedication ceremony at the Fremont, California, plant in 1985. The joint venture was conceived to help GM learn the Toyota Production System to improve car quality and to help Toyota get a manufacturing foothold in North America. But it would not be that easy. The NUMMI plant was viewed as internal competition by people within GM, a view exacerbated by the fact that the senior leadership of NUMMI was from Japan. Despite these challenges, the NUMMI factory was soon producing cars at the same speed and quality as Japanese factories. GM executives attempted to spread the Toyota Production System and learnings from NUMMI to other assembly plants for 25 years, but these efforts proved largely unsuccessful. GM knew what to do; they just couldn’t do it. In 2009, GM filed for bankruptcy. One year later, the NUMMI plant was closed.

The problem is that there are too many organizations where having a mission or values statement written down somewhere is confused with implementing those values. — Jeffrey Pfeffer The Knowing-Doing Gap

105

Learnability The ease with which a new thing can be understood and productively used.

Learnability refers to the ease with which a person can understand a new thing and perform essential functions with it. For example, a first-time user picking up an iPhone is greeted with the prompt “Swipe to open” after a brief delay. This prompt instructs new users how to begin interactions with the device but is only presented when users hold the device for a few seconds and don’t swipe, suggesting they may not know what to do. Once learned, the prompt is rarely, if ever, seen again.1

The four pillars of learnability are:

1. Consistency — Form and function (e.g., icons, labels, colors, layout, behaviors) are applied the same way throughout, allowing learning from one part of a system to efficiently transfer to other parts of the system. When conventions exist, they are observed versus reinvented. If there is a compelling reason for a thing to be inconsistent, then it is inconsistent; otherwise, the rule is consistency.

2. Discoverability — It is clear how to begin and perform basic functions quickly. Key controls are clearly actionable. Errant actions and experiments are not allowed to cause irreversible harm. Complex processes are supported using wizards or similar strategies, providing step-by-step explanations and guidance. Manuals, tutorials, and similar aids are available upon request.

3. Responsiveness — The system provides just the amount of feedback needed, when needed, and no more. Information is strategically located to support points of known difficulty. The system alerts and confirms the intent of impactful operations before they occur. In cases where it is difficult to determine if people need help, the system uses subtle, nondisruptive prompting.

4. Simplicity — Controls are clear and easy to use. The system provides a minimal set of essential information and functions that are highly visible. Basic operations are intuitive to perform. Complexity is progressively revealed as people are ready and able to receive it.

Incorporate learnability in all designs but especially in those that are used infrequently (e.g., information kiosks). Verify learnability by testing with real users, seeking to minimize dependence on documentation and help. Learnability does not mean leading with tutorials or instruction. Just the opposite: If a system is learnable, instructions and tutorials aren’t needed.

See also Confirmation; Consistency; Feedback; KISS; Ockham’s Razor; Progressive Disclosure; Visibility
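The “Swipe to open” prompt described above follows a simple, reusable pattern: offer help only after evidence that the user is stuck, and offer it only once. A minimal sketch of that pattern follows; the function names, event strings, and the 3-second threshold are assumptions for illustration, not Apple’s implementation.

```python
# A minimal sketch of delayed, nondisruptive hinting; the names and
# the 3-second threshold are illustrative assumptions.
import time

HINT_DELAY_S = 3.0  # how long to wait before assuming the user is stuck

def run_entry_screen(poll_event, show_hint) -> None:
    """Show a hint once, and only if the user holds without acting."""
    start = time.monotonic()
    hinted = False
    while True:
        event = poll_event()  # returns "swipe" or None (nothing yet)
        if event == "swipe":
            return  # user knows what to do; the hint is never shown
        if not hinted and time.monotonic() - start > HINT_DELAY_S:
            show_hint("Swipe to open")  # subtle prompt, shown once
            hinted = True
        time.sleep(0.05)  # avoid a busy loop while polling
```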

1

Learnability is generally considered a dimension of usability but is unique in that it requires some level of deliberative thought versus intuitive action. If usability can be summarized as “Don’t make me think”, learnability can be summarized as “Help me to think (when I need it)”.

Pac-Man is one of the best-selling games of all time. In America, it earned more than $1 billion in quarters and became a nationwide success as soon as it was released. What are its secrets? One is its extreme learnability. One joystick control that moves up, down, left, and right. One maze across levels. Players receive rewards by simply moving in any direction and eating dots. One set of ghostly antagonists that behave in a predictable manner. Clear auditory and visual feedback. Pac-Man is consistent, discoverable, responsive, and simple. Waka Waka Waka!

106

Left-Digit Effect People give more weight to the left-most digits of prices in buying decisions.

People judge the difference between $5.00 and $4.99 to be larger than between $5.01 and $5.00, even though the differences are the same. Why? When viewing a price, people have an outsized emotional reaction to the first digit in the sequence, giving more weight to its value. In cultures that read from left to right, this reaction is based on the left-most digit in prices. In cultures that read top to bottom or right to left, the effect is based on the topmost or right-most digit, respectively. The effect applies to all numbers or measurements, including weight on a scale, available bytes of disk space, running times in a race, and driving speeds relative to posted limits.1

1

The seminal work is “Penny Wise and Pound Foolish: The Left-Digit Effect in Price Cognition” by Manoj Thomas and Vicki Morwitz, 2005, Journal of Consumer Research, 32(1), 54 – 64. For an example of the effect in non-pricing contexts, see “The left digit effect in a complex judgment task: Evaluating hypothetical college applicants” by Andrea Patalano et al., 2021, Journal of Behavioral Decision Making, 35(1), 1–14.

2

See “The Left-Digit Bias: When and Why Are Consumers Penny Wise and Pound Foolish?” by Tatiana Sokolova et al., 2020, Journal of Marketing Research, 57(4), 771–788; and “Distortion of price discount perceptions through the left-digit effect” by Chien-Huang Lin and Jyh-Wen Wang, 2017, Marketing Letters, Springer, 28(1), 99 –112.

3

See, for example, “Israel to Abolish Deceptive Pricing Ending in .99 Shekels” by Gabriela Davidovich-Weisberg, Oct 20, 2013, Haaretz.

Given two prices, $29.99 and $30.00, people react as if the difference is not one cent but one dollar. The effect is strongest at the dollar and 50-cent marks. When people consciously think about the pricing, they know the difference is only a penny — but the subconscious emotional reaction weights the difference as more. The left-digit effect can increase sales by up to 15%.

The left-digit effect is strongest when a price is presented next to a reference price versus stand-alone. When reference prices are presented — e.g., “was $4.00, now $2.99” — less conscious deliberation is required to evaluate the price and the emotional reaction is strongest. When prices are presented stand-alone — e.g., “now $2.99” — more conscious deliberation is required to evaluate the price. This increases the likelihood of recalling simplified, rounded prices from memory, which weakens the effect. Additionally, the effect is stronger when the left digit is less than 5 and the price is fewer than four digits.2

Prices ending in 99s indicate low prices but also lower quality. Prices ending in round numbers indicate high prices but also higher quality. In many non-Western cultures, the additional complexity of 99s in prices is perceived as the retailer being tricky or deceptive.3

Consider the left-digit effect when developing pricing strategy. When selling commodities or when price is the primary driver of buying behavior, favor pricing a few cents below a rounded amount. Present higher reference prices alongside actual prices to maximize the effect. When selling high-quality products or luxury goods, favor pricing that uses round numbers. For international audiences, research how different pricing strategies are interpreted before setting them.

See also Anchoring; Framing; Number-Space Associations; Priming; Serial Position Effects
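The two pricing strategies reduce to simple arithmetic. The sketch below is illustrative only; the function names are hypothetical, not from any pricing library.

```python
# Illustrative helpers for the two strategies discussed above;
# function names are hypothetical assumptions.
import math

def charm_price(price: float) -> float:
    """Drop just below the next round dollar, e.g., 30.00 -> 29.99."""
    return round(math.ceil(price) - 0.01, 2)

def prestige_price(price: float, step: float = 5.0) -> float:
    """Round up to a clean number signaling quality, e.g., 27.40 -> 30.00."""
    return math.ceil(price / step) * step

print(charm_price(30.00))     # 29.99 -- commodity pricing
print(prestige_price(27.40))  # 30.0  -- luxury pricing
```

Per the research cited above, pairing a charm price with a higher reference price (“was $4.00, now $2.99”) strengthens the effect further.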

Ending prices with nines is a good strategy for commodity products like gasoline. This permits the highest possible price for the lowest perceived cost, which drives buying behaviors.

But for differentiated products like luxury goods, brand and quality drive buying behaviors. In these cases, round-number pricing is preferred.

107

Legibility The visual clarity of text, generally based on size, typeface, contrast, line length, and spacing. Confusion regarding the research on legibility is widespread. The rapid growth and advancement of modern desktop publishing, Web-based publishing, and multimedia presentation compound the confusion with increasing font and layout capabilities, display and print options, and the need to effectively integrate with other media.1

1

The seminal empirical works on legibility for print are Legibility of Print by Miles Tinker, 1963, Iowa State University Press; and Bases for Effective Reading by Miles Tinker, 1965, University of Minnesota Press. A classic reference from a typographic perspective is The Elements of Typographic Style by Robert Bringhurst, 1992, Hartley & Marks.

2

Legibility research on low-resolution computer displays continues to yield mixed results but generally supports Tinker’s original findings.

3

On low-resolution displays and for type smaller than 12 point, use sans serif typefaces without antialiasing. Serifs and antialiasing blur the characters of smaller type.

4

High-contrast, inverse text can “visually bleed” into the background and dramatically reduce legibility. In addition to legibility, other factors should also be considered when selecting foreground/background color combinations (e.g., color blindness and fatigue); so select carefully and test atypical combinations.

5

The speed with which text can be visually processed is greatest on long text lines of 80 characters or more. However, readers prefer short text lines of 35 to 55 characters. Therefore, a middle ground of 50 to 75 characters per line is recommended. See, for example, “The Effects of Line Length and Method of Movement on Patterns of Reading from Screen” by Mary Dyson and Gary Kipping, 1998, Visible Language, 32(2), 150 –181.

The following guidelines address common issues regarding text legibility:

• Use 9- to 12-point type for high-resolution media such as print. Smaller type is acceptable when limited to captions. Use larger type for low-resolution media. Favor larger type for very young or elderly readers.2

• Use clear typefaces that can be easily read. There is no performance difference between serif and sans serif typefaces, so select based on prima facie legibility and aesthetics. Favor sentence case for text blocks. Title case and uppercase text should be reserved for headlines and labels. Acronyms and initialisms should be set using uppercase text and a smaller type size; periods are not needed after each letter. Environments with variable or low lighting may need larger text sizes.3

• Performance is optimal when contrast levels between text and background exceed 70% and generally favors dark text on light backgrounds versus light text on dark backgrounds. Avoid text on patterned or textured backgrounds.4

• Favor flush-left, ragged-right text blocks over justified text blocks. Justified text spaces words unevenly, which compromises legibility. Justified text can be used when text is short and aesthetics are paramount. Signal new paragraphs with either a blank line or by indenting the first line of the new paragraph.

• Optimal line lengths are 50 to 75 characters per line for body text and 25 to 35 characters per line for captions. Separate sentences with one space, not two. Using two spaces is a convention based on the limits of typewriters and is no longer necessary. Avoid line breaks that isolate single words on a line or single lines on a page, or that break hyphenated words to another line.5

• Set leading (space between text lines, baseline to baseline) to the type size plus 1 to 4 points. Favor proportionally spaced typefaces over monospaced typefaces.

Consider legibility in the design of anything text related. Be intentional about the selection and use of typefaces, following the guidelines presented.

See also Alignment; Depth of Processing; Miller’s Law; Readability
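For screen work, the numeric guidelines above (type size, leading, line length, contrast) can be encoded as a simple preflight check. The sketch below is one way to do that; the function and parameter names are assumptions, while the thresholds are the ones given in this entry.

```python
# A sketch of a body-text preflight check; thresholds come from the
# guidelines above, while the function and parameter names are assumed.
def check_body_text(type_size_pt: float, leading_pt: float,
                    chars_per_line: int, contrast_pct: float) -> list:
    """Return a list of guideline violations for a block of body text."""
    issues = []
    if not 9 <= type_size_pt <= 12:
        issues.append("use 9- to 12-point type for high-resolution media")
    if not type_size_pt + 1 <= leading_pt <= type_size_pt + 4:
        issues.append("set leading to the type size plus 1 to 4 points")
    if not 50 <= chars_per_line <= 75:
        issues.append("use 50 to 75 characters per line for body text")
    if contrast_pct <= 70:
        issues.append("keep text-background contrast above 70%")
    return issues

print(check_body_text(12, 16, 62, 85))  # [] -- all guidelines met
print(check_body_text(8, 9, 90, 60))    # flags size, line length, contrast
```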

The accompanying specimens illustrate the guidelines: type size (9-, 10-, and 12-point Trade Gothic); typeface and text case, including serif versus sans serif (serif typefaces have small “feet” at the ends of the letters), acronyms and initialisms (“Both the FBI director and IBM CEO spoke to NATO leaders.”), and sentence case, title case, lowercase, and uppercase text; text-background contrast at several levels; text blocks set flush left/ragged right, justified, and flush right/ragged left; line length in body text; and leading. Leading (rhymes with sledding) is the amount of vertical space from the baseline of one line of text to the baseline of the next line of text; in the example shown, the type size is 12 points and the leading is 16 points.

108

Levels of Invention A model that classifies inventions based on complexity, nonobviousness, and impact.

The levels of invention model is the centerpiece of an invention methodology referred to as TRIZ, a Russian acronym that translates to Theory of Inventive Problem Solving. The model is the product of a systematic review of 200,000 patent abstracts, from which 40,000 exemplars were selected, and from which five categories of inventions were derived. The levels of invention are ranked based on their ability to eliminate performance tradeoffs.1

• Level 1: Minor Improvements — Solves a simple problem by refining existing designs, leveraging common knowledge and common sense within a domain. These improvements are generally not sufficiently novel or nonobvious to be considered inventions. Requires 1 to 10 iterations.2

• Level 2: Major Improvement — Solves a complex problem by improving upon one key aspect of that problem, leveraging expertise and best practices within a domain. Designs at this level and above are considered sufficiently novel and nonobvious to be inventions. Requires 10 to 100 iterations.

• Level 3: Major Innovation — Solves a complex problem by conceptualizing it in a new way, leveraging knowledge and best practices from other domains. Requires multiple level 2 inventions to be in place and 100 to 1,000 iterations.

• Level 4: Significant Innovation — Solves a newly discovered problem, requiring a design approach that leverages new science and technology. Requires multiple level 3 inventions to be in place and 1,000 to 100,000 iterations.3

• Level 5: Revolutionary Breakthrough — Solves a newly discovered problem of great significance, requiring an approach that leverages a new scientific breakthrough. Requires multiple level 4 inventions to be in place and 100,000+ iterations.4

Consider the levels of invention model when contemplating innovation, new product development, and research and development. Use the model’s benchmarks (e.g., risk-reward, timeline, iterations) as general reference points versus values to be taken literally. Note the dependence of higher-level innovations on earlier levels as well as the estimated iterations required: There is no royal road to level 4 and 5 inventions!

See also Box’s Law; First Principles; Iron Triangle; Iteration; Kano Model; Reverse Salient
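For planning purposes, the model’s benchmarks can be tabulated as data. The sketch below combines the iteration ranges from the text with the risk, reward, and timeline figures from the accompanying chart; treat the values as the general reference points the entry describes, not precise constants.

```python
# The TRIZ levels as a lookup table; values are the reference points
# given in this entry and its chart, not precise constants.
LEVELS = {
    1: ("Minor Improvements", "lowest risk, lowest reward, 1-2 years", 1, 10),
    2: ("Major Improvement", "low risk, medium reward, 2-5 years", 10, 100),
    3: ("Major Innovation", "medium risk, high reward, 5-10 years", 100, 1_000),
    4: ("Significant Innovation", "high risk, high reward, 10-15 years", 1_000, 100_000),
    5: ("Revolutionary Breakthrough", "highest risk, highest reward, 15+ years", 100_000, None),
}

for level, (name, profile, lo, hi) in sorted(LEVELS.items()):
    span = f"{lo:,}-{hi:,}" if hi else f"{lo:,}+"
    print(f"Level {level}: {name} ({profile}); ~{span} iterations")
```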

1

The seminal work is “About a Technology of Creativity” by Genrich Altshuller and Rafael Shapiro, 1956, Questions of Psychology, 6, 37– 49; see also Creativity as an Exact Science: The Theory of the Solution of Inventive Problems by Genrich Altshuller and A. Williams (Tr.), 1984, Gordon and Breach.

2

Note that the term design iterations is used here. Other translations have used trials, solutions, and trial-and-error experiments. See, for example, “Levels of Invention and Intellectual Property Strategies” by Boris Zlotin and Alla Zusman, 2003, Ideation International.

3

Level 4 innovations are sometimes called moonshots.

4

Level 5 innovations are sometimes called loonshots — inventions with the ambition of moonshots but crazier. See, for example, Loonshots: How to Nurture the Crazy Ideas That Win Wars, Cure Diseases, and Transform Industries by Safi Bahcall, 2019, St. Martin’s Press.

Not all inventions are created equal, nor do they require the same level of effort to achieve. Simple inventions can be achieved quickly in just a few iterations, whereas revolutionary inventions can require decades and 100,000+ iterations or more. The TRIZ levels of invention framework helps us understand how innovation occurs, the different types along with their respective costs and benefits, and their implications for planning and road mapping. The accompanying chart plots number of inventions by level, annotated as follows:

Level 1, Minor Improvements — Risk: Lowest; Reward: Lowest; Timeline: 1–2 years
Level 2, Major Improvement — Risk: Low; Reward: Medium; Timeline: 2–5 years
Level 3, Major Innovation — Risk: Medium; Reward: High; Timeline: 5–10 years
Level 4, Significant Innovation — Risk: High; Reward: High; Timeline: 10–15 years
Level 5, Revolutionary Breakthrough — Risk: Highest; Reward: Highest; Timeline: 15+ years

109

Leverage Point A place within a system where small changes produce big effects.

Leverage points are variables within systems where small changes can produce significant system-level effects. The concept is commonly known by metaphors such as silver bullets, unicorns, miracle cures, etc.; and the idea is essentially the same: a single variable that can change everything. While leverage points are specific to the systems of which they are part, there are generic categories that can speed identification and intervention. In order of their effectiveness and difficulty to change, these categories are listed below:1

1. Mindsets — Changing mindsets is the highest leverage action, as it affects all other leverage points below (e.g., shifting a country’s aspiration from gross domestic product [GDP] to gross national well-being [GNW]).

2. Goals — Changing goals is effective at bending and redirecting system structures to conform to those goals (e.g., setting a goal of landing on the moon in less than a decade forced NASA to rapidly reengineer itself).

3. Strategy — Changing strategies to achieve a goal is the most effective means of moving a system, assuming the mindsets and goals are fixed (e.g., applying one or more principles from this book to change a design or product strategy).

4. Structure — Changing the system by adding, subtracting, or modifying system structures (e.g., making micro-loans available to small businesses in developing countries to stimulate economic growth).

5. Information — Changing the flow of information within the system to influence it to change (e.g., making the amount and cost of energy a home consumes more visible to consumers to influence them to consume less).

Leverage points are not easily accessible, changeable, or predictable; and when they are changed, they often produce counterintuitive and unintended results. To make matters worse, the greater a leverage point’s potential for change, the more a system will fight back — like an immune response. This makes the manipulation of leverage points a long-term learning endeavor, not a quick fix.

Consider leverage points when seeking to modify complex systems. Evaluate leverage points in order of effectiveness, finding the most effective levers for a given system. Conduct small-scale, controlled experiments before introducing large-scale interventions. Be open to counterintuitive results and prepared for unintended consequences.

See also Archetypes, System; Cost-Benefit; Feedback Loop; Levels of Invention; Pareto Principle; Perverse Incentives

1

The seminal work is “Leverage Points: Places to Intervene in a System” by Donella Meadows, 1999, The Sustainability Institute. Note that the five categories are a consolidation of the high-impact subset of Meadows’ 12 leverage points. See also Urban Dynamics by Jay Forrester, 1969, MIT Press.

Every complex system has within it points of disproportionate influence. In the U.S. health care system, one such leverage point is the point of prescription — i.e., doctors. Drug manufacturers have long known this, which is why they hire attractive representatives to visit doctors’ offices, woo them with gifts, and sell to them directly. But with regard to opioid drugs like OxyContin, drug manufacturers sought to change the system at its maximum leverage point: the mindsets of doctors.

Prior to the epidemic, pain medications were prescribed conservatively, with doctors wary of the risks of addiction. But new marketing campaigns normalized the idea that pain should be considered the “fifth vital sign” (the standard four vitals being temperature, heart rate, respiratory rate, and blood pressure) while also downplaying the risks of addiction of new opioid drugs. Armed with a patient-friendly pain scale and an increased emphasis on improving the patient experience, doctors dramatically increased pain medication prescriptions. The opioid epidemic was born.

But interestingly, the crisis spread unevenly. It turns out certain state laws also targeted doctors as the leverage point in the system. California, Idaho, Illinois, New York, and Texas required doctors to submit prescription paperwork for all controlled substances in triplicate — one copy for the patient, one for the pharmacist, and one to be sent to the state drug monitoring agency. As a result, these “triplicate states” had significantly lower rates of prescription and addiction.

The accompanying figures show a patient-friendly pain scale, ranging from No Pain through Mild, Moderate, Severe, and Very Severe Pain to Worst Pain Possible, and a chart of OxyContin prescriptions per 1,000 beneficiaries by year, comparing non-triplicate and triplicate states.

110

MAFA Effect The average face of a local population is more attractive than any individual face.

People tend to find the most average facial appearance (MAFA) of their population group more attractive than faces that deviate from the average. Population group refers to the group in which a person lives or was raised. For example, when pictures of many faces within a population group are combined to create a composite image, the composite image is more attractive than the individual source images and similar in appearance to professional models in that population.1

The MAFA effect is likely due to some combination of evolution and cognitive prototypes. Average faces tend to be more symmetrical, and symmetry has long been viewed as an indicator of health and fitness. Asymmetric members of all species tend to have fewer offspring and live shorter lives — generally, the asymmetry is the result of disease, malnutrition, or bad genes. Therefore, a preference for average facial features may have evolved as an indicator of fitness. Cognitive prototypes are mental representations that are formed through experience. As people see the faces of other people, their mental representation of what a face is may be updated through a process similar to compositing. If this is the case, average faces pattern-match easily with cognitive prototypes and contribute to a learned preference.2

An exception to the MAFA effect is when certain facial features represent exaggerations of universal facial preferences or local aesthetic ideals, representing a supernormal stimulus (e.g., larger-than-average eyes). In such cases, the most beautiful or hyperattractive faces will possess generally average features but with one or two distinctive, non-average features that make their appearance unique.

The most average facial appearance of a population is a benchmark of beauty for that population. There are other elements that contribute to attractiveness (e.g., smile versus scowl), but in general, faces that deviate from the average are not perceived as attractive. Consider composite images of faces created from randomly sampled faces of target populations to indicate local perceptions of beauty. Consider the use of digital compositing and morphing software to develop attractive faces from common faces for advertising and marketing campaigns, especially when real models are unavailable or budgetary resources are limited.

See also Attractiveness Bias; Baby-Face Bias; Symmetry
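The compositing described above is, at its core, pixel averaging. The sketch below shows the idea, assuming a folder of pre-aligned, same-size face images; real morphing software also aligns facial landmarks first, which this sketch omits. The NumPy and Pillow libraries are assumed to be available, and the folder name is hypothetical.

```python
# A minimal sketch of photographic compositing: average pre-aligned,
# same-size face images pixel by pixel. Landmark alignment, which
# real morphing tools perform first, is omitted here.
from pathlib import Path

import numpy as np
from PIL import Image

def composite_faces(folder: str, out_path: str = "composite.png") -> None:
    paths = sorted(Path(folder).glob("*.png"))
    if not paths:
        raise ValueError("no images found in " + folder)
    acc = None
    for p in paths:
        # Accumulate in float to avoid 8-bit overflow while summing.
        img = np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
        acc = img if acc is None else acc + img
    mean = (acc / len(paths)).astype(np.uint8)
    Image.fromarray(mean).save(out_path)

composite_faces("faces/")  # hypothetical folder; unique features average out
```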

1

The seminal work on the most average facial appearance effect is “Attractive Faces Are Only Average” by Judith Langlois and Lori Roggman, 1990, Psychological Science, 1, 115–121. See also “Facial Attractiveness: Evolutionary Based Research” by Anthony Little et al., 2011, Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1571), 1638–1659.

2

See, for example, “Developmental Stability, Disease, and Medicine” by Randy Thornhill and Anders P. Møller, 1997, Biological Reviews, 72, 497– 548.


The most average facial appearance for a population is also the most attractive. In this population of four men and four women, two generations of composites were created to demonstrate the effect. In each composite image, unique and idiosyncratic facial features are minimized and symmetry is improved.

111

Magic Triangle A triangular relationship between facial features that creates the illusion of sentience.

The magic triangle is a triangular relationship between the eyes, nose, and mouth, developed by Don Sahlin, chief designer of the Muppets. Eye placement is considered the most important aspect of the magic triangle: Pupils are slightly crossed, staring toward the tip of the nose. This makes it appear that muppets are staring at a point in space about 18 to 24 inches (45.7 to 61 cm) in front of their faces, creating the appearance of intentional focus — like the character is looking at something specific versus staring into space. This helps bring muppets to life and makes them appear sentient.1

Although the magic triangle is an artistic principle used to guide the creation of muppets, there is an empirical basis to believe the muppeteers were on to something. Large eyes make faces seem cuter across facial expressions and ages. This is somewhat related to the baby-face bias, but the eyes as a facial feature are unique. Eyes contain more important information about face identity and emotional state than other facial features. Humans notice eyes at a very early age, even before they can recognize faces. And the gaze of the eyes communicates much about a person’s emotional state. When eyes gaze into space, they appear emotionless, lifeless. When eyes gaze downward or to the side, they appear fearful or shy. When eyes gaze upward, they appear deceptive or evasive. When eyes gaze directly at another, they can appear threatening. But when eyes gaze at a forward point in space, they appear to be looking at something, in a somewhat cross-eyed fashion — not lifeless, fearful, evasive, or threatening but focused and interested.2

Pupil size also affects perceptions of intelligence, emotional state, and maturity. Pupils dilate when people are aroused, thinking hard about something, or surprised. In general, smaller pupils make characters look older, sophisticated, and calm. Larger pupils make characters look younger, unsophisticated, and lively.3 Eyes really are the window to the soul — or at least to the personality and emotional state.

Consider the magic triangle in the design of physical and illustrated characters. Align the size of the eyes and pupils to the age of characters, their emotional state, and their personality. Ensure that pupil size and eye alignment are symmetrical unless the intent is to make a character appear crazy or impaired.

See also Anthropomorphism; Baby-Face Bias; Face Detection; Uncanny Valley
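The “slightly crossed” look is simple geometry: each pupil rotates inward by the angle whose tangent is half the eye spacing over the focal distance. A small worked example follows; the 2.5-inch eye spacing is an assumed value, while the 18-to-24-inch focal range is the one cited above.

```python
# A worked example of the focal-point geometry; the 2.5 in eye spacing
# is an assumption, the 18-24 in focus distances are the range cited
# in this entry.
import math

def convergence_angle_deg(eye_spacing_in: float, focus_dist_in: float) -> float:
    """Inward rotation of each pupil toward a point straight ahead."""
    return math.degrees(math.atan2(eye_spacing_in / 2, focus_dist_in))

for d in (18, 24):
    print(f"{d} in: {convergence_angle_deg(2.5, d):.1f} degrees inward")
# ~4.0 and ~3.0 degrees: enough to read as focused, not obviously cross-eyed
```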

1

The technique is referenced in The Art of the Muppets by Henson Associates, 1980, Bantam Books.

2

See, for example, “The eyes have it: the neuroethology, function and evolution of social gaze” by N.J. Emery, 2000, Neuroscience & Biobehavioral Reviews, 24, 581–604; and “Eye Size Affects Cuteness in Different Facial Expressions and Ages” by Lichang Yao et al., 2022, Frontiers in Psychology, 12.

3

See, for example, “Eye-Opener: Why Do Pupils Dilate in Response to Emotional States?” by Joss Fong, Dec 7, 2012, Scientific American.

Perhaps the single most important aspect of the Muppet look is the set of the eyes in relation to the nose and mouth. The Muppet people call this the “magic triangle”: correctly positioned, it creates a central focal point essential to bringing a puppet to life in the eye of the camera — and therefore the viewer. — Henson Associates The Art of the Muppets

112

Maintainability The ease with which a thing can be accessed, inspected, repaired, and serviced.

Maintainability is the ease with which a thing can be repaired, serviced, or upgraded. It is one of the most neglected aspects of design, often considered an afterthought, if at all — until, that is, something breaks or there is a catastrophic failure, at which point everyone acts incredulous about how bad the design is.1

There are five factors for achieving maintainability in design:

1. Accessibility — Essential components can be accessed and serviced with minimal effort. Examples in this context: well-placed access panels, junctions, and control panels; assemblies designed with sufficient space for human hands and tools; easily findable or searchable elements.

2. Comprehensibility — Components can be easily identified and fault conditions diagnosed. Examples: clearly tagged and labeled components; helpful and noncryptic error codes; readable part numbers and serial numbers; embedded instructions and warnings; clear checklists, manuals, and related documentation.

3. Interchangeability — Components are standardized with minimum variation across the system. Examples: the use of generic components rather than custom-fit parts; components in one assembly are common and can be used in other assemblies.

4. Modularity — Components are logically grouped in modules or subassemblies, which can be treated as self-contained components. Examples: the ability to replace or upgrade entire modules; hierarchically nesting modules within other modules.

5. Visibility — Essential components can be evaluated through direct inspection or testing. Examples: components that can be inspected by looking at them; components that can be tested to detect faults or failure; systems that automatically alert when there is a problem.

Designing for maintainability requires imagining how people will interact with assemblies when performing maintenance and repairs, verifying the feasibility of service procedures through testing, and doing this considering field conditions and any restrictive clothing or special equipment the maintainer might be wearing. There is perhaps no higher indication of designer care and craftsmanship than when systems are clearly designed for ease of maintenance and repair.

Prioritize maintainability from the beginning of the design process. Don’t let it be an afterthought. Keep humans in mind as the maintainers, verifying service experiences through testing. Design for maintenance today and avoid calamity tomorrow.

See also Back of the Dresser; Learnability; Modularity; Redundancy; Visibility

1

See, for example, Design for Maintainability by Jack Dixon and Louis Gullo (Eds.), 2021, Wiley.

In 2018, the Morandi Bridge in northwestern Italy collapsed, killing 43 people. The bridge was a cable-stayed design, which featured stays constructed from steel cables with prestressed concrete shells poured around them. Hiding the cables in this way produced a minimalist bridge design that was heralded as an exemplar of Italian innovation, but it also made inspecting and maintaining the cables difficult. The bridge designer and engineer, Riccardo Morandi, was aware that the steel cables were vulnerable to corrosion and that some mistakes were made during construction that could expose them to the elements. He recommended a set of testing and maintenance strategies in 1985, but these were never carried out by the bridge’s operator, likely due to their cost and complexity. The cables had been corroding all along and finally snapped on the morning of August 14, 2018, taking down the deck as dozens of vehicles were crossing.

Diagram: the Morandi Bridge, approximately 3,878 feet (1,182 m) in overall length, with steel cables encased in concrete shells; the collapsed section spanned approximately 657 feet (200 m).

113

Mapping A correspondence in layout and movement between controls and the things they control.

Swipe a touchscreen, flip a switch, or push a button, and you expect some kind of effect. When the effect corresponds to expectation, the mapping is considered to be good or natural. When the effect does not correspond to expectation, the mapping is considered to be poor. For example, an electric window control on a car door can be oriented so that raising the control switch corresponds to raising the window and lowering the control switch lowers the window — good mapping. Compare this to an orientation of the control switch on the surface of an armrest, such that the control motion is forward and backward. The relationship between the control and the raising and lowering of the window is no longer obvious: Does pushing the control switch forward correspond to raising or lowering the window?1

Good mapping is primarily a function of:

• Similarity of layout — The layout of stovetop controls corresponds to the layout of burners.

• Similarity of behavior — Turning a steering wheel left turns the car left.

• Similarity of meaning — An emergency shutoff button is colored red (most people associate red with stop).

In each case, similarity makes the control-effect relationship predictable (good mapping); and therefore the design is easy to use. When the control-effect relationship is not predictable (bad mapping), designs are counterintuitive and hard to use.2

What makes a control-effect relationship intuitive? The answer is not always simple and may depend on regional or cultural norms. For example, in England, flipping a light switch up turns it off, and flipping it down turns it on; whereas, in the United States, the opposite is true. It is therefore prudent to test and confirm mappings with target audiences prior to manufacturing.

Consider the importance of good mapping whenever designing a system that has controls. Ensure good mapping in your designs to minimize errors and make things easy to use. Position controls so that their locations and behaviors correspond to the layout and behavior of the device. Simple control-effect relationships work best. Avoid using a single control for multiple functions whenever possible; it is difficult to achieve good mappings for a one control–multiple effect relationship. In cases where this is not possible, use visually distinct modes (e.g., different colors) to indicate active functions.

See also Affordance; Constraint; Nudge; Proximity; Similarity; Visibility

1

The seminal work on mapping is The Design of Everyday Things by Donald Norman, 1990, Doubleday. This principle is also known as control-display relationship and stimulus-response compatibility.

2

For a review of these kinds of issues, see Spatial Schemas and Abstract Thought by Merideth Gattis (Ed.), 2001, MIT Press.

Poor Mapping The relationship between stovetop controls and burners is ambiguous when the controls are horizontally oriented and equally spaced.

Poor but Improved Mapping The relationship becomes clearer when the controls are grouped with the burners, but the horizontal orientation still confuses which control goes with which burner.

Good Mapping The control-burner relationships are clear when the layout of the controls aligns to the layout of the burners.

114

Maslow’s Hammer The tendency to approach problems based on the tools and expertise at hand. Maslow’s hammer is commonly known by the phrase “If the only tool you have is a hammer, [it is tempting] to treat everything as if it were a nail”.1

1

The Psychology of Science: A Reconnaissance by Abraham Maslow, 1966, Harper & Row. This principle is also known as the Law of the Instrument, Law of the Hammer, and the Golden Hammer. The principle is related to a sentiment expressed by Winston Churchill: “We shape our tools, and thereafter our tools shape us”. Churchill’s original quote is, “We shape our buildings and afterwards our buildings shape us”. The phrase morphed over time to substitute “buildings” with “tools” and is often attributed to Marshall McLuhan and others.

2

The classic experiment on functional fixedness is the “Candle Problem” presented in “On Problem-Solving” by Karl Duncker, 1945, Psychological Monographs, 58:5.

3

The classic experiment on the Einstellung effect is the “Luchins water jar experiment” presented in “Mechanization in Problem Solving: The Effect of Einstellung” by Abraham Luchins, 1942, Psychological Monographs, 54(6), i – 95. In the context of biases related to expertise, the effect is closely related to déformation professionnelle, or professional deformation.

There are two related but distinct meanings offered by the principle. First, the literal tools at our disposal bias how we approach problem solving. If the only tool available is a hammer, the bias will be to bang on things; whereas if the only tool is a screwdriver, the bias will be to twist and pry things. Second, the metaphorical tools at our disposal — e.g., area of expertise, team members, resources, etc.— also bias how we approach problem solving. If the goal is to transport people across a river, a bridge engineer will be biased toward a bridge, a tunnel engineer toward a tunnel, and a helmsman toward a ferry.

The literal sense of the principle is related to a cognitive bias known as functional fixedness, which limits a person’s ability to “think outside the tool”— i.e., to use a familiar tool in an unfamiliar way. For example, a hammer has a well-established manner of use; and this prior knowledge impedes thinking about unconventional ways that it could be used, such as a doorstop or instrument of measure.2

The metaphorical sense of the principle is related to a cognitive bias known as the Einstellung effect, which limits a person’s ability to think outside of their experience. The Einstellung effect typically refers to an expertise bias, but it can also refer to things like generational preferences. For example, designers raised with video games tend to favor video game-like controls and displays (e.g., integrated touch screens), even when they perform less well than conventional analog controls and displays.3

Maslow’s hammer teaches that the tools available to us, both literal and metaphorical, can impede our ability to solve problems. To mitigate the bias, consider analogical problems and solutions in other domains. Engage people with a range of experiences and different expertise. Do things that delay and interrupt automatic thinking processes, such as taking a short walk or “sleeping on it”. Develop performance criteria to compare different solutions, put them to the test, and then let the best design win.

See also Anchoring; Clarke’s Laws; Creator Blindness; Groupthink; Priming; Testing Pyramid; Visibility

What do you get when you unleash one of the world’s leading industrial designers to build the ultimate cold-press juicer? Perhaps the most overdesigned and overengineered mechanism to perform a simple function since the south-pointing chariot. Juicero offered pre-sold packets of diced fruits and vegetables that users plugged into their $700 machines (later reduced to $400). The industrial design was a work of art with over 400 custom parts: aircraft-grade aluminum components, a custom gearbox with hardened-steel spur gears, full Wi-Fi connectivity, an optics/camera assembly for scanning QR codes on the juice packs, and all of it managed by its own ARM processor. The beginning of the end came when it was revealed that a person could simply squeeze the produce packs with their hands and get the same quality of juice, rendering the pricey juicer unnecessary. Juicero became a symbol of Silicon Valley excess and shut down operations in 2017.

Juicero’s Press is an incredibly complicated piece of engineering. Of the hundreds of consumer products I’ve taken apart over the years, this is easily among the top 5% on the complexity scale. — Ben Einstein Founder of Bolt

115

MAYA A strategy for determining the most commercially viable aesthetic for a design.

While some define design success in terms of aesthetics, others in terms of function, and still others in terms of usability, Raymond Loewy, the father of industrial design, defined success in terms of commercial performance — i.e., sales. In 1951, he proposed the Most Advanced Yet Acceptable (MAYA) principle, which asserts that the most advanced form of an object or environment that is still recognizable as something familiar will have the best prospects for commercial success. Loewy believed that aesthetic appeal was essentially a balancing act between two variables: familiarity and uniqueness — or, in modern psychological parlance, typicality and novelty — and to find the optimal balance between these variables was to find the commercial sweet spot for success.1

Although MAYA clearly has pragmatic appeal, the question as to its correctness is an empirical one; and a growing body of research supports the principle. People like the familiar, an observation supported by the exposure effect, which claims that the appeal of objects and environments increases with repeated exposures. People also like the novel, especially within design and fine art circles — two communities that tend to value originality above all else. Additionally, people tend to notice and remember novelty more than typicality, a phenomenon known as the von Restorff effect. Research assessing the relative value of typicality and novelty suggests that the two variables weigh equally in influencing perceptions of aesthetic appeal. Another empirical question is whether MAYA’s proposed point along the familiarity-novelty continuum is the ideal one, and there is good evidence that Loewy got it pretty much right. When dealing with everyone but design and art experts, the most novel design that is still recognizable as a familiar object or environment is perceived to have the greatest aesthetic appeal.2

Consider MAYA when designing commercially for mass audiences. Be careful not to introduce radical innovations all at once but, rather, in gradual steps over multiple product releases. You still want to release products and experiences that break new ground, but they should contain enough that is familiar to still be acceptable to most people. In contexts where aesthetic assessments are made by design or art experts (e.g., design competitions refereed by expert judges or when dealing with aesthetically sophisticated clients), MAYA does not apply — emphasize novelty; it will be weighed more heavily than typicality.3

See also Expectation Effects; Exposure Effect; Form Follows Function; IKEA Effect; von Restorff Effect

1

The seminal work on MAYA is Never Leave Well Enough Alone by Raymond Loewy, 1951, The Johns Hopkins University Press.

2

“‘Most Advanced, Yet Acceptable’: Typicality and Novelty as Joint Predictors of Aesthetic Preference in Industrial Design” by Paul Hekkert et al., 2003, British Journal of Psychology, 94, 111–124. See also “Exposure and Affect: Overview and Meta-Analysis of Research, 1968 –1987” by Robert Bornstein, 1989, Psychological Bulletin, 106, 265 – 289.

3

Though familiarity and typicality are similar and generally correlate, they are distinct. Familiarity refers to the level of past exposure (e.g., a person sees a juicer every day). Typicality refers to how recognizable a thing is for its type (e.g., a new juicer is recognizable as a juicer based on the appearance of past juicers).

MAYA explorations by Raymond Loewy demonstrating how the ideal aesthetic form of common products changes over time.

The adult public’s taste is not necessarily ready to accept the logical solutions to their requirements if the solution implies too vast a departure from what they have been conditioned into accepting as the norm. — Raymond Loewy Never Leave Well Enough Alone

116

Mental Model A mental simulation of how things work.

Mental models are internal representations of systems and environments derived from experience. People understand and interact with systems and environments by comparing the outcomes of their mental models with the real-world systems and environments. When the outcomes correspond, a mental model is accurate and complete. When the outcomes do not correspond, the mental model is inaccurate or incomplete. With regard to design, there are two basic types of mental models: mental models of how systems work (system models) and mental models of how people interact with systems (interaction models).1

Engineers necessarily have strong system models — i.e., they know much about how a system works but little about how people will interact with the system. Conversely, users of a design tend to have sparse and inaccurate system models but, through use and experience, commonly attain strong interaction models. Designers must develop strong system and interaction models. Optimal design results only when designers have an accurate and complete system model, attain an accurate and complete interaction model, and then design a system interface that merges the two.2

Designers can obtain accurate and complete system models by collaborating with engineers early in a design to make sure there is solid understanding of how the system works and the boundaries of functionality. Time spent gaining this understanding will prevent expensive rework down the road if ideas for innovative design features are technically infeasible. Designers can obtain accurate interaction models through direct observation of people interacting with the system, laboratory testing (e.g., focus groups and usability testing), and personal use of the system. Watching people use the system in the target environment and testing with representative users are the preferred methods for acquiring accurate information about how people interact with systems. Personal use of the system will provide insight, but a complete interaction model is dependent on the user’s perspective.3

Consider the mental models of different stakeholders in the design process. Do not impose the mental models of designers and engineers on users. Leverage background knowledge and experience that draw on common mental models when available, but note the limitations imposed by these mental models — i.e., these models are resistant to change and may work against new or innovative kinds of interactions. Conduct user testing and field observation to develop empathy, humility, and understanding of the range of mental models at play.

See also Affordance; Box’s Law; Expectation Effects; Visibility

1

The seminal works on mental models are The Nature of Explanation by Kenneth Craik, 1943, Cambridge University Press; and Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness by Philip N. Johnson-Laird, 1983, Cambridge University Press. For a design perspective, see “Surrogates and Mappings: Two Kinds of Conceptual Models for Interactive Devices” by Richard M. Young, and “Some Observations on Mental Models” by Donald Norman, both in Mental Models by Dedre Gentner and Albert Stevens (Eds.), 1983, Lawrence Erlbaum Associates.

2

Note that an efficient merging does not simply mean revealing the system model. It may mean concealing the system model from users, revealing the system model to users, or some combination thereof.

3

The quote, Know thy user — for they are not you, is the backbone of human-centered design. It highlights the importance of not making the assumption that what works for you will work for your users. Observing, interviewing, and testing representative users will provide the most accurate interaction model.

For decades, automatic shift levers used the same basic design, with gears in the same order and the ability for drivers to confirm what gear was selected by feel alone. Accordingly, drivers have robust mental models based on how to interact with this design, reinforced by years of experience and practice. So, when Fiat Chrysler introduced a new electronic rocker shift–style gearshift that deviated from this standard, it was not surprising that there could be usability issues. In this new “monostable” design, the shifter always pops back to its center position after the driver moves the spring-loaded lever rather than staying in different positions for each selected mode. As a consequence, there were 41 reported injuries from cars rolling away after drivers believed they had put them in Park, before Fiat Chrysler voluntarily recalled cars with the shifter. Whenever mental models are well established around standards, designers should think twice before reinventing the wheel — or, in this case, the shifter.

117

Miller’s Law The number of objects an average person can hold in working memory is 7 ± 2.

Miller’s law, proposed by the psychologist George Miller, states that the maximum number of novel things a person can remember after a brief exposure is 7 ± 2. More recent studies put the number of things one can remember at 4 ± 1. The most important aspect of Miller’s law is the concept of chunking. The term chunk refers to a unit of information in short-term memory — a string of letters, a word, or a series of numbers. The technique of chunking accommodates short-term memory limits by formatting information into four or five units. For example, few people can remember a list of 10 words for 30 seconds. Group the list of 10 words into chunks of three or four words, and recall performance becomes roughly equivalent to remembering a list of five words.1

Information that is chunked is easier to manipulate and remember. And with repeated exposure and practice, chunks can grow and become more complex. For example, to a novice, each piece and its position on a chess board is a chunk. This can be overwhelming to a beginner, as there are well over 4 ± 1 pieces on the board. However, with practice, the size and complexity of the chunks in memory grow. Chess masters can store entire board positions — i.e., all the pieces and their respective positions — in memory as a single chunk, freeing their minds to concentrate on other aspects of the game.2

Chunking is often invoked as a rationale to simplify designs. This is a potential misapplication of the principle. Miller’s law applies specifically to tasks involving memory. For example, it would be productive to limit the number of instructional chunks on a page to four or five but counterproductive to limit the number of dictionary entries on a page to the same. Reference-related tasks consist of searching for items. Applying Miller’s law would, in this case, dramatically increase the number of pages required to present the information, as well as the scan time and effort required, all to no benefit.

Observe Miller’s law when information needs to be remembered. Do not chunk information that is to be searched or scanned. In high-stress environments, consider chunking critical controls and information in anticipation of diminished cognitive performance.

See also Mnemonic Device; Performance Load; Recognition over Recall; Uniform Connectedness
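The display-side half of this principle, grouping long codes into three- or four-character chunks, is a one-line transformation. A minimal sketch follows; the function name and the default group size of four are assumptions for illustration.

```python
# A minimal sketch of chunking a long code for display; the function
# name and the group size of 4 are illustrative assumptions.
def chunk(code: str, size: int = 4, sep: str = " ") -> str:
    """Group a string into fixed-size chunks, e.g., for display."""
    return sep.join(code[i:i + size] for i in range(0, len(code), size))

print(chunk("4111111111111111"))      # 4111 1111 1111 1111
print(chunk("314159265358", size=3))  # 314 159 265 358
```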

1

The seminal works on short-term memory limits are “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” by George Miller, 1956, The Psychological Review, 63, 81–97; and “The Magical Number Four in Short-Term Memory: A Reconsideration of Mental Storage Capacity” by Nelson Cowan, 2001, Behavioral and Brain Sciences, 24, 87–114.

2

See, for example, “Expert Chess Memory: Revisiting the Chunking Hypothesis” by Fernand Gobet and Herbert Simon, 1998, Memory, 6(3), 225 – 255.

Why are large personal codes and numbers often presented in groups of three or four characters? Miller’s law. Chunking the numbers makes them easier to work with and remember.


118

Mimicry
Copying properties from familiar things in order to realize benefits of those properties.

In nature, mimicry refers to the copying of properties of familiar objects, organisms, or environments in order to hide from or deter other organisms. For example, katydids and walking sticks mimic the leaves and branches of plants to hide from predators; and the viceroy butterfly mimics the less tasty monarch butterfly to deter predators. In design, mimicry refers to copying properties of familiar objects, organisms, or environments in order to improve the usability, likeability, or functionality of an object.1

1. The history of mimicry in design likely predates the development of tools by early humans. The seminal work on mimicry in plants and animals was performed by Henry Bates and Fritz Müller in the late 1800s.

2. See, for example, The Design of Everyday Things by Donald Norman, 1990, Doubleday.

3. See, for example, Designing Sociable Robots by Cynthia L. Breazeal, 2002, MIT Press; and “The Lovable Cat: Mimicry Strikes Again” by William Calvin, in The Throwing Madonna, 1983, McGraw-Hill.

4. See, for example, Biomimicry: Innovation Inspired by Nature by Janine Benyus, 1998, William Morrow & Company; and Cats’ Paws and Catapults by Steven Vogel, 2000, W.W. Norton & Company.

There are three basic kinds of mimicry in design:

1. Surface mimicry — Copying the way things look (e.g., designing software icons to look like folders and documents). When a design mimics the surface aspects of a familiar object, the design implies (by its familiar appearance) the way it will function or can be used.2

2. Behavioral mimicry — Copying the way things act (e.g., making a robotic dog act like a real dog). Behavioral mimicry is useful for improving likeability but should be used with caution when mimicking complex behaviors. For example, mimicking behaviors like smiling generally elicits positive responses but can give the impression of artificiality or deceit if inconsistent with other cues (e.g., a baby doll that smiles when touched — or spanked).3

3. Functional mimicry — Copying the way things work (e.g., mimicking the keypad of an adding machine in the design of a telephone keypad). Functional mimicry is useful for solving mechanical and structural problems. Significant insights and rapid progress can be achieved by mimicking existing solutions and design analogs; however, functional mimicry must be performed with caution, since the physical principles governing function may not transfer from one context to another (e.g., early attempts at human flight by flapping wings).4

Mimicry is perhaps the oldest and most efficient method for achieving major advances in design. Consider surface mimicry to improve usability, ensuring that the perception of the design corresponds to how it functions or is to be used. Consider behavioral mimicry to improve likeability, but exercise caution when mimicking complex behaviors. Consider functional mimicry to assist in solving mechanical and structural problems, but also consider transfer and scaling effects that may undermine the success of the mimicked properties.

See also Anthropomorphism; Baby-Face Bias; Savanna Preference; Scaling Fallacy; Supernormal Stimulus

Scuba suits that mimic the black and white banding pattern of an Indo-Pacific sea snake show promise as a way to deter sharks.

119

Minimum-Viable Product
A version of a product with just enough features to evaluate customer demand.

A minimum-viable product (MVP) is a product used to gauge demand and inform business and product strategy. The goal is to invest the minimum amount of energy, money, and time to validate that there is product demand — i.e., that people will buy it — and then to either use what’s learned to cut bait or iterate the design into a mature offering. MVPs are typically more complete and refined than functional prototypes but less complete and refined than production products.1

MVPs are typically tested by employees, trusted friends, and existing customers who are early adopters — people who can understand the long-term vision, forgive the shortcomings of the MVP, and provide valuable feedback. The point is as much about learning what customers don’t want as it is about learning what they do want, investing as little as possible to gain this understanding as early as possible. The MVP approach contrasts with traditional market-testing and stealth-development approaches, which typically don’t share or test products with customers until they are fairly mature and ready to bring to market.

The concept of an MVP is often misapplied, which can lead to bad business and design strategy. The most common error is confusing business viability with functional viability: MVPs are used to test business viability, not functional viability. For example, a common cartoon depicts a skateboard as an MVP for other forms of transportation, such as a motorcycle or car. Since consumer demand for skateboards tells us nothing about consumer demand for motorcycles or cars, it is not a minimum-viable product. One could argue that a skateboard is a minimal-functional prototype or precursor to a motorcycle or car, but even that connection is tenuous. A simple skateboard is an MVP for a more advanced skateboard — e.g., an electric skateboard — but not for other products that just happen to have wheels or share some functionality.

Consider using MVPs in design strategy and product development. Focus on business viability, not functional viability — i.e., whether people will buy, not whether it will work. Test MVPs early to determine if they merit further investment and development, removing features or processes that confound customer valuation. Ensure MVPs are of sufficient quality to provide an accurate assessment of whether customers will use and value the product.

See also Diffusion of Innovations; Iteration; Levels of Invention; Nirvana Fallacy; Prototyping; Satisficing; Sunk Cost Effect

1. The term was coined by Frank Robinson in 2001, but the popular understanding is attributable to The Lean Startup by Eric Ries, 2011, Crown Business. The specific definition has evolved and continues to do so. See, for example, “MVP Explained: A Systematic Mapping Study on the Definitions of Minimal Viable Product” by Valentina Lenarduzzi and Davide Taibi, 2016, Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Cyprus.

In 1999, Zappos founder Nick Swinmurn had an idea. Frustrated by his inability to find a pair of shoes he liked locally, Swinmurn came up with the idea of selling shoes online. But before he invested the time and money to start a company and buy inventory, he wanted to test whether the idea was viable. He built a simple website and filled it with pictures of shoes he took from a local mall. If customers bought shoes, he would head back to the mall to buy the shoes and send them to the customer. This was not a viable way to run a business long-term, but it was a minimum-viable product to test the feasibility of the business concept. The strategy worked perfectly and positioned the company to raise money and grow quickly. In July 2009, Amazon acquired Zappos for $1.2 billion in stock.

120

Mnemonic Device
Techniques for making things more meaningful and memorable.

Mnemonic devices are used to reorganize information so that the information is simpler and more meaningful and, therefore, more easily remembered. They involve the use of imagery or words in specific ways to link unfamiliar information to familiar information that resides in memory. Mnemonic devices that involve imagery are strongest when the images are vivid, peculiar, and exaggerated in size or quantity. Mnemonic devices that involve words are strongest when the words are familiar and clearly related. Mnemonic devices are useful for remembering names of new things, large amounts of rote information, and sequences of events or procedures.1

Types of mnemonic devices include:

• First letter — The first letters of the items to be recalled are used to form the first letters in a meaningful phrase or combined to form an acronym. For example, Please Excuse My Dear Aunt Sally is used to assist in the recall of the arithmetic order of operations: parentheses, exponents, multiplication, division, addition, subtraction; or AIDS as a means of referring to and remembering acquired immune deficiency syndrome. (A code sketch of this device follows the entry.)

• Keyword — A word that is similar to, or a subset of, a word or phrase that is linked to a familiar bridging image to aid in recall. For example, the insurance company AFLAC makes its company name more memorable by reinforcing the similarity of the pronunciation of AFLAC and the quack of a duck. In its logo and advertising, the duck is the bridging image.

• Rhyme — One or more words in a phrase are linked to other words in the phrase through rhyming schemes to aid in recall. For example, red touches yellow, kill a fellow is a popular mnemonic to distinguish the venomous coral snake from the nonvenomous king snake.

• Feature name — A word that is related to one or more features of something that is linked to a familiar bridging image to aid in recall. For example, the rounded shape of the Volkswagen Beetle is a key feature of its biological namesake, which serves as the bridging image.

Consider mnemonic devices when developing corporate and product identities, slogans and logos for advertising campaigns, instructional materials dealing with rote information and complex procedures, and other contexts in which ease of recall is critical to success. Use vivid and concrete imagery and words that leverage familiar and related concepts.

See also Recognition over Recall; Serial Position Effects; Stickiness; von Restorff Effect
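The first-letter device is mechanical enough to sketch in a few lines of Python, here applied to the order-of-operations example and producing the familiar PEMDAS acronym; the function name is invented:

def first_letter_mnemonic(items):
    # Combine the first letter of each item into an acronym.
    return "".join(item[0].upper() for item in items)

operations = ["parentheses", "exponents", "multiplication",
              "division", "addition", "subtraction"]
print(first_letter_mnemonic(operations))  # PEMDAS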

1. The seminal contemporary work on mnemonics is The Art of Memory by Frances Yates, 1974, University of Chicago Press.

Clever use of mnemonic devices can dramatically influence recall. These company names and logos employ various combinations of mnemonic devices to make them more meaningful and memorable.

121

Modularity
Managing system complexity by dividing large systems into smaller, self-contained systems.

Modularity is a structural principle used to manage complexity in systems. It involves identifying related groups of functions in systems and then transforming those groups into independent, self-contained units or modules. For example, the modular design of computer memory chips gives computer owners the option of increasing the memory in their computer without replacing the entire computer. The option to easily and inexpensively improve a system gives modular designs an intrinsic advantage over nonmodular designs.1

Modules should be designed to hide their internal complexity and interact with other modules through simple interfaces. The result is an overall reduction in system complexity and a decentralization of system architecture, which improves reliability, flexibility, and maintainability. Additionally, a modular design encourages innovation of modules, as well as competition regarding their design and manufacture; it creates an opportunity for third parties to compete to develop better modules.

The benefits of modular design are not without costs: Modular systems are significantly more complex to design than nonmodular systems. Designers must have significant knowledge of the inner workings of a system and its environment to decompose the system into modules and then make those modules function as a whole. Consequently, most modular systems that exist today did not begin that way — they have been incrementally transformed to be more modular as knowledge of the system increased and function sets matured.

Consider modularity when designing or modifying complex systems. Identify functional clusters of similarity in systems and clearly define their relationships with other system elements. If feasible, create modules that conceal their complexity and communicate with other modules through simple, standard interfaces. Do not attempt complex modular designs without experienced designers and a thorough understanding of the system. However, consider the incremental modularization of existing systems, especially during maintenance and product updates.2

See also Cost-Benefit; Flexibility Tradeoffs; Pareto Principle; Self-Similarity
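In software, the “simple interfaces” described above map directly onto explicit interface types. A minimal Python sketch; the names Storage, InMemoryStorage, and archive are hypothetical:

from typing import Protocol

class Storage(Protocol):
    # The module's entire public surface: a simple, stable interface.
    def save(self, key: str, data: bytes) -> None: ...
    def load(self, key: str) -> bytes: ...

class InMemoryStorage:
    # One interchangeable module; its internals stay hidden behind the interface.
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def save(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def load(self, key: str) -> bytes:
        return self._store[key]

def archive(storage: Storage, key: str, data: bytes) -> None:
    # Client code depends only on the interface, so modules can be swapped,
    # upgraded, or supplied by third parties independently.
    storage.save(key, data)

Because archive depends only on the Storage interface, a module backed by disk or a network service could be substituted without touching client code, which is precisely the decentralization the principle describes.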

1. The seminal work on modularity is Design Rules: Volume I. The Power of Modularity by Carliss Y. Baldwin and Kim B. Clark, 2000, MIT Press.

2. Many designers resist modularity for fear of limiting creativity. However, modules applied at the appropriate level will liberate designers from useless activity and allow them to focus creativity where it is most needed.

Google’s Project Ara attempted to reinvent the cell phone using a highly modular design. The product consisted of hardware modules that provided common smartphone functionalities, such as batteries, cameras, and displays, with the flexibility to add more specialized components. The goal was to make smartphones more affordable, reduce electronic waste, and open up innovation. But modularity comes with costs. First, a modular phone is far more complex to design, engineer, and support. Second, it presumes that users want flexibility and upgradability over the simplicity of an integrated solution. Third, it results in a larger and heavier overall product than integrated alternatives. And fourth, Project Ara was resisted by the large mobile carriers as a potential disruptor to their markets. Project Ara was an interesting study in the potential for user-managed hardware modularity, but it was better suited to early adopters than the mainstream audiences for which it was intended. For these reasons and others, Project Ara was canceled in September 2016.

122

Nirvana Fallacy
The tendency to disregard good solutions because they fall short of perfection.

The nirvana fallacy is the tendency to criticize, devalue, and delay otherwise good solutions because they compare unfavorably to idealized solutions. It typically manifests as a choice between a realistic, imperfect solution and an unrealistic, perfect solution. Since nothing, in reality, can ever compare to perfection — i.e., nirvana — the end result is usually to do nothing. This problem of doing nothing or not starting is common with people suffering from clinical perfectionism, who experience self-defeating thoughts and behaviors in pursuit of unrealistic goals.1

The nirvana fallacy can be understood as a bias or tendency, as it often occurs without conscious deliberation; but it is considered an informal fallacy of reasoning. The logical form of the fallacy is:

A is proposed as a solution.
B is proposed as an idealized solution.
A is dismissed because it is inferior to B.
Thus, the status quo is maintained.

This is fallacious reasoning because the cost-benefit of a proposed solution is incorrectly compared against an unachievable solution instead of against the status quo. For example, electric cars are more environmentally friendly than gasoline cars but often use electricity supplied by coal plants. As such, they have a carbon footprint and contribute to global warming — i.e., they are imperfect. The ideal electric car would use energy with no negative environmental impact. Therefore, the fallacious conclusion is that it is preferable to continue driving gasoline cars until electric cars achieve that idealized goal.2

The nirvana fallacy can be exacerbated by inspirationally and broadly worded vision statements, fanciful concept prototypes, and long-term goals. Vague language, sensationalist demos, and distant deadlines give rise to imaginative and overoptimistic thinking, which can make actualizing real progress challenging. Therefore, it is important to accompany such things with product road maps and project plans to ground stakeholder expectations while still pursuing ambitious goals.

Do not let perfect be the enemy of good. Seek attainable improvements over the status quo, resisting the temptation to dismiss proposed solutions because they fall short of perfection. Favor doing something over nothing, progress over perfection. Beware of comparisons involving unrealistic options and instead compare proposed solutions against the status quo. Use product road maps and project plans to anchor aspirational vision statements, visionary concept demos, and distant deadlines to reality.

See also Cost-Benefit; Faith Follows Function; Iron Triangle; Satisficing; Status Quo Bias; Sunk Cost Effect

1. The seminal work is “Information and Efficiency: Another Viewpoint” by Harold Demsetz, 1969, The Journal of Law & Economics, 12(1), 1–22. This principle is also known as the perfect solution fallacy.

2. See, for example, “Electric Vehicle Myths”, United States Environmental Protection Agency, www.epa.gov.

Masks are not 100% effective against airborne viruses. Therefore, many choose not to wear them at all, waiting for a more perfect alternative. But in terms of cost-benefit, masks are the cheapest and most effective solution available. Letting perfect be the enemy of good in public health contexts gets people killed.

Give them the third best to go on with; the second best comes too late, the best never comes. — Sir Robert Watson-Watt, quoted in A Radar History of World War II

123

No Single Point of Failure
The design of systems to be able to continue operating even when components fail.

A single point of failure is an element of a system that, if it fails, will cause the entire system to fail. Therefore, the no-single-point-of-failure principle (NSPF) refers to the design of systems such that they can continue operating at a safe level — i.e., without harm to people and property — despite the failure of one or more constituent elements.1

The key to eliminating single points of failure is adding redundancy, but this is not always possible. Adding redundancy invariably increases cost, complexity, size, and weight, which often cannot be accommodated in a design. For example, the James Webb Space Telescope has 344 single points of failure. The unique size and weight requirements of putting a space telescope at the Earth-Sun L2 Lagrange point, 994,000 miles (1.6 million km) away, required many tradeoffs. In cases like this where single points of failure cannot be eliminated, fault-tolerant design requires highly reliable components, exhaustive testing across a wide range of scenarios, and the ability to self-repair or repair remotely.

The primary goal of NSPF design is the preservation of life and property. This often does not require full operational performance in the event of element failure but, rather, just “good-enough” performance to stabilize the failure condition until repairs can be made. For example, the failure of one engine on a commercial airplane will not cause the plane to crash because it has redundant engines that allow it to fly “good enough” to land safely.

When employing redundancy to eliminate single points of failure, it is important that the redundant elements are functionally isolated — i.e., they are unable to fail due to a common cause. For example, if the engines of a commercial airplane all use the same fuel pump, the benefits of engine redundancy are lost because the fuel pump becomes a single point of failure.

Eliminating single points of failure is as much a mindset as a principle, often requiring few tradeoffs when sought from the outset. When tradeoffs are required, however, prioritize the preservation of life and property over performance: Good-enough performance to fail safely should be the threshold goal. Minimize dependencies with redundant systems, isolating common causes of failures. When redundancy cannot be used, increase the requirements for component reliability and test extensively across a wide range of scenarios.

See also Bus Factor; Error, Design; Error, Human; Factor of Safety; Forgiveness; Maintainability; Redundancy; Testing Pyramid
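As a rough illustration of redundancy with cross-checking, consider the following Python sketch, loosely inspired by the sensor-disagreement logic described in the 737 Max example that follows. The function name and tolerance value are illustrative assumptions, not any manufacturer's implementation:

def fused_angle_of_attack(readings, tolerance=5.5):
    # A single sensor would itself be a single point of failure.
    if len(readings) < 2:
        return None
    # Disagreement beyond tolerance: fail safe and hand control to the
    # crew rather than act on a possibly faulty input.
    if max(readings) - min(readings) > tolerance:
        return None
    return sum(readings) / len(readings)

print(fused_angle_of_attack([4.9, 5.1]))   # 5.0 -- sensors agree
print(fused_angle_of_attack([4.9, 21.7]))  # None -- cross-check fails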

1. See, for example, “Identifying single points of failure in your organisation” by Robby Bryant, 2013, Journal of Business Continuity & Emergency Planning, 7(1), 26–32; and “Preventing failure: The value of performing a single point of failure analysis for critical applications and systems” by Laurence Wolf, 2004, EDPACS, 31(12), 14–18.

There is an angle-of-attack (AoA) sensor on each side of the plane’s nose. MCAS adjusts the horizontal stabilizer when it receives false signals from the AoA sensor that the plane’s angle of attack is too high — suggesting an approaching stall. This raises the tail and pushes the plane downward into a dive.

In March 2019, a Boeing 737 Max crashed shortly after takeoff. It was the second 737 Max crash in five months, resulting in the deaths of 346 people. In both cases, the crashes were initiated by a malfunctioning angle-of-attack (AoA) sensor in combination with a new piece of software called the Maneuvering Characteristics Augmentation System (MCAS). The AoA sensor sent false signals about the airplane’s angle of attack, which the MCAS tried to automatically correct, ultimately causing the crash. Even though the planes were equipped with two AoA sensors, only one was used to trigger MCAS on a flight, creating a single point of failure in the system. After being grounded for 20 months, the Boeing 737 Max returned to operational service in November 2020. The MCAS was redesigned to take input from both AoA sensors rather than one, eliminating it as a single point of failure.

124

Normal Distribution
A bell-shaped curve formed by plotting the frequency of a variable within a population.

Normal distributions result when many independently measured values of a variable are plotted. The resulting bell-shaped curve is symmetrical, rising from a small number of cases at both extremes to a large number of cases in the middle. Normal distributions are found everywhere — annual temperature averages, stock market fluctuations, student test scores — and are thus commonly used to determine the parameters of a design.1

In a normal distribution, the average of the variable measured is also the most common. As the variable deviates from this average, its frequency diminishes in accordance with the area under the curve. It is a mistake to conclude that the average is the preferred design parameter because it is the most common. Generally, a range across the normal distribution must be considered in defining design parameters, since variance between the average and the rest of the population translates to the variance the design must accommodate. Additionally, real-world design parameters are multivariate, which means that a combination of varying dimensions needs to be considered. For example, a shoe designed for the average foot length of a population would theoretically fit about 68% of that population; but this does not take into account varying foot widths, which means the shoe would fit far fewer than this percentage.2

It is important to avoid trying to create something that is average in all dimensions. A person average in one measure will not be average in other measures. The probability that a person will match the average of their population group in two measures is approximately 7%; this falls to less than 1% for eight measures. The average person fallacy is the belief that average people exist and are the standard to which designers should design.3

Where possible, create designs that will accommodate 98% of the population. Emphasize adjustability in the design, making the environment fit the person versus making the person fit the environment. While design considerations can be expanded to accommodate a larger portion of the population, generally, the larger the audience accommodated, the greater the costs. Consideration of the target population is key. When designing for a narrow portion of the population (e.g., playground equipment that will accommodate 98% of American children), it is crucial to obtain the measurement data for the specific subgroups.

See also Convergence; MAFA Effect; Selection Bias; Streetlight Effect
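The multivariate point can be checked with a quick Monte Carlo simulation. This sketch assumes independent measurements, so its result for two measures (about 9%) lands near, not exactly on, the ~7% figure above, which presumably reflects correlated body measurements:

import random

def share_average_in_all(k, trials=100_000):
    # Fraction of a simulated population falling in the middle ~30% of
    # k independent, normally distributed measurements
    # (|z| < 0.385 covers roughly the central 30% of a normal curve).
    hits = sum(
        all(abs(random.gauss(0, 1)) < 0.385 for _ in range(k))
        for _ in range(trials)
    )
    return hits / trials

for k in (1, 2, 8):
    print(k, round(share_average_in_all(k), 4))
# roughly 0.30, 0.09, and well under 0.01:
# being "average in everything" is vanishingly rare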

1. For an overview, see “Why are Normal Distributions Normal?” by Aidan Lyon, 2014, British Journal for the Philosophy of Science, 65, 621–649. This principle is also known as standard normal distribution, Gaussian distribution, and bell curve.

2. See, for example, The Measure of Man and Woman: Human Factors in Design by Henry Dreyfuss Associates, 2001, Wiley.

3. See, for example, The End of Average by Todd Rose, 2016, HarperCollins. The image caption account is drawn from this book.

It is important to distinguish between statistical traits and actual traits. For example, the average height of a population could be 6 feet tall without anyone in that population actually being 6 feet tall. This lesson was learned by the U.S. Air Force in the late 1940s. The USAF had been experiencing many noncombat plane crashes. After investigating the incidents, it ruled out mechanical failure and human error and turned its attention to the cockpit design. In the early days of flight, cockpit design was based on the average physical measurements of male pilots. A researcher by the name of Lt. Gilbert S. Daniels suspected this might be the problem. Using the size data gathered from over 4,000 pilots, he discovered that there were no average pilots. A cockpit designed to fit the average statistical pilot did not fit any actual living pilot. As a result of his research, the Air Force changed its design philosophy to make the cockpit fit the pilots, using adjustable seats, foot pedals, helmet straps, and flight suits. Once in place, pilot performance increased, and the number of unexplained accidents sharply declined.

Non-average pilots of the 37th Pursuit Group, Albrook Field, Panama, 1941 (top). Cockpit of USAAF P-61 Black Widow Night Fighter (bottom).

There was no such thing as an average pilot. If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one. — Todd Rose, The End of Average

125

Not Invented Here
A tendency to oppose ideas and innovations that originate outside of your social group.

The not-invented-here (NIH) syndrome is an organizational phenomenon in which groups resist ideas and inputs from external sources, often resulting in subpar performance and redundant effort (i.e., “reinventing the wheel”). The “external sources” can be from outside the organization, or from inside the organization but from a different team, department, or office at a different geographic location. In this way, NIH manifests as a form of tribalism, resisting things simply because they originate from an outgroup or “other”.1

Four social dynamics underlie NIH:2

1. The seminal works on NIH are “Receptivity to Innovation — Overcoming NIH” by Robert Clagett, 1967, Master’s Thesis, MIT; and “Investigating the Not-Invented-Here (NIH) Syndrome: A Look at Performance, Tenure and Communication Patterns of 50 R&D Project Groups” by Ralph Katz and Thomas Allen, 1982, R&D Management, 12, 7–19.

2. See, for example, “New Product Development: Strategies for Supplier Integration” by Robert Monczka et al., 2000, American Society for Quality, 178–179; and Management of Research and Development Organizations by Ravinder Jain and Harry Triandis, 1997, Wiley, 36–38.

3. See, for example, Open Business Models: How to Thrive in the New Innovation by Henry Chesbrough, 2006, Harvard Business School Press.

1. Belief that internal capabilities are superior to external capabilities — Often pervasive in organizations with a proud legacy of successful innovation; their past successes effectively sabotage their capacity to consider external sources. Correction typically requires a significant failure to humble the organization and reset the culture.

2. Fear of losing control — Common when groups worry about a loss of organizational authority or responsibility. Correction typically requires clarification around the processes that will be used to adopt and integrate external solutions and the roles and responsibilities needed to be successful.

3. Desire for credit and status — Status is a major driver of behavior, and fear of ceding credit or status to an external party can lead teams to resist new solutions. Correction typically requires finding a way for internal teams to meaningfully participate or a clear mandate from leadership that adoption is nonnegotiable.

4. Significant emotional and financial investment in internal initiatives — Difficult to overcome, as it relates to overcoming the sunk cost effect, which invariably requires a loss of political face and cutting bait on current projects. Correction typically requires significant organizational change and a change in leadership.

The best way to address NIH is through prevention. Rotate and cross-pollinate team members on a project basis. Engage outsiders in both the strategy and the evaluation stages of the design process to ensure fresh perspectives and new thinking. Encourage team members to regularly interact with the wider community (e.g., conferences). Formalize regular competitor reviews and environmental scanning to stay abreast of the activities of competitors and the industry in general. Consider open innovation models, competitions, and outside collaborations to institutionalize a meritocratic approach to new ideas. Lastly, teach team members about the causes, costs, and remedies for NIH.3

See also Creator Blindness; Death Spiral; Design by Committee; Gamification; IKEA Effect; Status Quo Bias; Sunk Cost Effect

In 1982, the Sinclair ZX81 was licensed to Timex for resale in the United States as the Timex Sinclair 1000. The computers were identical except for the name on the case and minor motherboard differences. Sales were strong. With subsequent models, however, NIH syndrome inclined Timex to introduce more and more changes. Eventually, the product divergence created issues of software compatibility — costs went up, sales went down. Timex dropped out of the computer market in 1984.

In the scientific world, the Not-Invented-Here bias is fondly called the toothbrush theory. The idea is that everyone wants a toothbrush, everyone needs one, everyone has one, but no one wants to use anyone else’s. — Dan Ariely, The Upside of Irrationality

126

Nudge
A method of influencing behavior without restricting options or changing incentives.

People prefer the path of least resistance when making decisions. When the path leads to favorable outcomes, everyone is happy; when the path leads to unfavorable outcomes, the results are problematic. For example, when a pension program does not automatically register new employees, savings rates are very low. However, when the default option is to enroll employees automatically, rates increase dramatically. In both cases, employees are free to join, change plans, or not join; but intelligent defaults nudge employees to make the most responsible decision.1

Common nudging techniques include:

• Smart defaults — Select defaults that do the least harm and greatest good. Set default states that correspond to the most generally desired option, not the most conservative option. For example, many lives are lost due to lack of available organ donations, a shortage that could be addressed by changing donor enrollment from an opt-in default to an opt-out default. (A code sketch of a smart default follows this entry.)

• Clear feedback — Provide clear, visible, and immediate feedback to reinforce desired actions and mildly punish undesired behaviors. For example, many modern automobiles have alert lights on the dashboard that stay on until the seat belt is fastened, increasing seat belt usage.

• Aligned incentives — Align incentives with desired behaviors, being careful to avoid incentive conflict. For example, the Cash for Clunkers legislation passed in the United States in 2009 provided a cash incentive for people to trade in older cars for new cars, boosting sales for the ailing automotive industry and reducing energy consumption and pollution.

• Structured choices — Simplify and filter complexity by providing structured choices to facilitate decision-making. For example, Netflix structures choices to help customers find shows, enabling them to search and browse based on titles, actors, genres, what is currently being watched, and recommendations of other customers.

• Visible goals — Make goals and performance status clearly visible so that people can immediately assess their performance against a goal state. For example, clearly displaying manufacturing output and goals in factories is often, by itself, sufficient to increase productivity.

Consider nudges when behavior modification is key. Set defaults that correspond to the most generally desired option, not the most conservative option. Provide clear, visible, and immediate feedback to reinforce desired actions. Align incentives with desired behaviors, being careful to avoid incentive conflict. Simplify and structure choices when decision-making parameters are complex. Make goals and performance status clearly visible.

See also Constraint; Framing; Gamification; Hick’s Law; Perverse Incentive
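As promised above, a minimal sketch of a smart default, using the pension example; the class, field names, and default rate are hypothetical:

from dataclasses import dataclass

@dataclass
class PensionEnrollment:
    # Smart default: new employees are enrolled unless they opt out.
    # Freedom of choice is preserved; only the default changes.
    enrolled: bool = True
    contribution_rate: float = 0.05  # illustrative default rate

def onboard(opt_out=False):
    form = PensionEnrollment()
    if opt_out:
        form.enrolled = False
    return form

print(onboard())              # enrolled=True by default
print(onboard(opt_out=True))  # enrolled=False, chosen freely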

1. The seminal work on nudges is Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein, 2008, Penguin. See also Choices, Values, and Frames by Daniel Kahneman and Amos Tversky, 2000, Cambridge University Press. This principle is also known as choice architecture.

To reduce the cleaning burden of the men’s restrooms in the Schiphol airport in Amsterdam, the image of a fly was etched into each of the bowls just above the drains. The result was an 80% reduction in “spillage”. When people see a target, they try to hit it.

The first misconception is that it is possible to avoid influencing people’s choices. — Richard H. Thaler and Cass R. Sunstein, Nudge

127

Number-Space Associations
The intuition that serial information is spatially organized by magnitude along an axis.

People intuitively think of serial information — e.g., a string of numbers, letters of an alphabet, months of a year — in spatial terms, placed in sequential order along a “mental number line”. For example, Western-oriented cultures intuit that number sequences are spatially organized from left to right, small to large. Strong support for this association comes from the SNARC effect (spatial-numerical association of response codes), in which responses to small/large numbers are faster on the left/right side of space, respectively.1

The orientation of this mental number line depends primarily on the direction of writing and reading habits present in a given culture. For example, English-reading cultures intuit small-to-large representations as left-to-right, whereas Arabic-reading cultures intuit small-to-large representations as right-to-left.2

Spatial intuitions exist not only on the horizontal axis but on any axis for which there is a well-practiced sequence, including vertical, diagonal, and radial axes. For example, in the context of floors of a building, people intuit small numbers at the bottom and large numbers at the top; in the context of a list or depth in a pool, people intuit small numbers at the top and large numbers at the bottom; in the context of a line graph, people intuit small numbers at the bottom-left and large numbers at the top-right; and in the context of an analog clock, people intuit small numbers on the right and large numbers on the left, and so on.3

Aspects of the effect have been observed in preverbal human infants and nonhuman animals, suggesting that the effect likely has a biological component, seemingly biased toward left-to-right. For example, day-old birds have been trained to detect the fourth element in a series irrespective of element spacing and orientation, indicating that even chicks are able to identify an ordinal position solely on the basis of numerical information.4

Consider number-space associations in the design of any sequential process, procedure, or layout. In general, designs should agree with intuitive directionality. For horizontal layouts, map serial information and controls to the reading direction of the target audience. For vertical, diagonal, and radial layouts, map based on the context and practiced conventions. In all cases, present serial information and controls in sequential versus mixed order.

See also Learnability; Left-Digit Effect; Mapping; Mental Model; Serial Position Effects

1. See, for example, “On the cognitive link between space and number: a meta-analysis of the SNARC effect” by Guilherme Wood et al., 2008, Psychology Science, 50(4), 489–525; and “Digits affect actions: The SNARC effect and response selection” by Marwan Daar and Jay Pratt, 2008, Cortex, 44(4), 400–405.

2. “How Culturally Predominant Reading Direction and Number Word Construction Influence Numerical Cognition and Mathematics Performance” by Silke M. Göbel, in Mathematical Cognition and Learning: Language and Culture in Mathematical Cognition, Daniel B. Berch et al. (Eds.), 2018, Academic Press, 229–256.

3. “Stimulus-response compatibility in representational space” by Daniel Bächtold et al., 1998, Neuropsychologia, 36, 731–735.

4. “Number-space associations without language: Evidence from preverbal human infants and non-human animal species” by Rosa Rugani and Maria-Dolores de Hevia, 2017, Psychonomic Bulletin & Review, 24, 352–369.

The space shuttle’s Payload Deployment and Retrieval System included three fuel cells (FC) that needed to be opened or closed depending on operational conditions. The physical fuel cells were plumbed vertically as FC1, FC3, and FC2. The crew controls were then configured to match the plumbed layout versus the more intuitive FC1, FC2, and FC3 layout. As a result, when the crew needed to open or close Fuel Cells 2 or 3, they frequently activated the wrong switch. People intuit numerical sequences on spatial axes. When layouts do not match these spatial expectations, errors result.

128

Ockham’s Razor
Given a choice between functionally equivalent designs, the simplest design should be selected.

Ockham’s razor asserts that given competing explanations, the simplest explanation is preferred. The principle is named after William of Ockham, a fourteenth-century Franciscan friar and logician who reputedly made abundant use of the principle. There is an aesthetic appeal to the principle, which likens the “cutting” of unnecessary complexity to getting closer to the truth.1

Many variations of Ockham’s razor exist, each adapted to address the particulars of a field or domain of knowledge. A few examples include:

Entities should not be multiplied without necessity. — William of Ockham

Nature operates in the shortest way possible. — Aristotle

Everything should be made as simple as possible but not simpler. — Albert Einstein

A common misunderstanding of Ockham’s razor is that the simplest explanation should be preferred. This is, itself, an oversimplification. Ockham’s razor should be applied when having to choose between multiple functionally equivalent explanations for something, which means explanations that have the same explanatory power and predictive accuracy. Thus, given multiple functionally equivalent explanations, the simplest of them should be preferred. In other words, all other things being equal, favor the simplest thing. By analogy in design, the principle can be stated as: Given multiple functionally equivalent designs — i.e., equivalent in cost, performance, usability, weight, etc. — the simplest should be preferred.

Implicit in Ockham’s razor is the idea that unnecessary complexity should be removed. A designer is frequently choosing between multiple functionally equivalent designs when iterating — i.e., the current design and a potentially simplified design if elements are removed or simplified — and Ockham’s razor suggests that the unnecessary complexity should be cleaved off. Then, the remaining elements should be further reduced and simplified as much as possible without compromising their function. Even unnecessary visual elements in a design consume attentional resources and distract from the most important elements. Unnecessary elements, whether physical, visual, or cognitive, exact a toll on overall design performance.

Consider Ockham’s razor when choosing among designs or making decisions about design strategy. Given multiple equivalent designs or strategies, favor the simplest of them. When iterating and prototyping, favor simpler options and directions. Minimize elements to their most basic and essential forms.

See also Feature Creep; Form Follows Function; Horror Vacui; Iteration; KISS; Progressive Subtraction; Signal-to-Noise Ratio

1. The Ockham’s razor principle does not actually appear in any of William of Ockham’s extant writings, and, in truth, little is known about either the origin of the principle or its originator. See, for example, “The Myth of Occam’s Razor” by W.M. Thorburn, 1918, Mind, 27, 345–353. This principle is also known as law of parsimony, law of economy, and principle of simplicity.

The evolution of the iMac (2000, 2002, 2004, 2005, 2007, 2009, and 2013 models pictured) proves that no company wields Ockham’s razor with the skill and aggression of Apple.

129

Operant Conditioning
Using rewards and punishments to change the frequency and durability of a behavior.

Operant conditioning is a technique used to change the frequency and durability of a behavior, generally by following the occurrence of the behavior with rewards or punishments. It is commonly used in animal training, behavior modification, and incentive programs.1

1. The seminal work on operant conditioning is The Behavior of Organisms: An Experimental Analysis by B.F. Skinner, 1938, Appleton-Century. A more popular treatment is Don’t Shoot the Dog: The New Art of Teaching and Training by Karen Pryor, 1999, Bantam. This principle is also known as instrumental conditioning.

2. If punishment is used, it should be administered immediately after the harmful behavior and should be severe. Lax punishments are ineffective at extinguishing harmful behaviors and risk habituation to the punishment, which means the harmful behavior doesn’t stop and the effectiveness of punishments going forward is reduced.

3. There are also fixed-interval and variable-interval schedules of reinforcement. In these schedules, the relationship between a behavior and a period of time is what is manipulated. In a fixed-interval schedule, a reward is administered after a behavior and after a fixed period of time — e.g., one lever press per 30 seconds, one reward. In a variable-interval schedule, a reward is administered after a behavior and after a variable period of time — e.g., one lever press per 30 to 60 seconds, one reward. These schedules are useful in many contexts but tend to be less effective and less durable than fixed-ratio and variable-ratio schedules.

There are three basic operant conditioning techniques:

1. Positive reinforcement — Associating the behavior with a positive condition or reward increases the probability of a behavior (e.g., pulling the lever on a slot machine results in positive visual and auditory feedback and a possible monetary reward).

2. Negative reinforcement — Associating the behavior with the removal of a negative condition or punishment increases the probability of a behavior (e.g., fastening a seat belt in a car silences an annoying buzzer).

3. Punishment — Associating the behavior with a negative condition decreases the probability of a behavior (e.g., touching a poison mushroom in a video game reduces the score).

Positive and negative reinforcement should be favored over punishment whenever possible. Punishment should be reserved for rapidly extinguishing harmful behaviors, or it should not be used at all.2

The fastest behavior change occurs when there is a fixed-ratio relationship between a behavior and a reward — e.g., one lever press, one reward. This relationship also creates the least durable change: When the rewards stop, the behaviors stop. The most durable behavior change occurs when there is a variable-ratio relationship between a behavior and a reward — e.g., one lever press, 25% chance of a reward. Because the timing of the reward is unpredictable, the behavioral change is more persistent when the rewards stop. Behavior modification plans often use fixed-ratio programs early in training to achieve rapid behavioral change and then transition to variable-ratio programs to increase the durability of that change. Note that over-rewarding can undermine intrinsic motivation. For example, lavishly rewarding people to perform tasks they enjoy will diminish their enjoyment of the tasks.3

Consider operant conditioning to modify behavior. Use fixed-ratio schedules to achieve rapid change and variable-ratio schedules to achieve durable change. Favor reward systems over punishment, and avoid over-rewarding behaviors when intrinsic motivation exists.

See also Classical Conditioning; Gamification; Shaping
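The difference between the two reward schedules can be sketched in a few lines of Python; the function names and parameter values are illustrative:

import random

def rewards_fixed_ratio(presses, ratio=1):
    # Fixed ratio: one reward per `ratio` presses.
    # Produces the fastest learning, but behavior stops when rewards stop.
    return presses // ratio

def rewards_variable_ratio(presses, p=0.25):
    # Variable ratio: each press is rewarded with probability p.
    # Unpredictable timing makes the learned behavior far more durable.
    return sum(1 for _ in range(presses) if random.random() < p)

print(rewards_fixed_ratio(100))     # 100
print(rewards_variable_ratio(100))  # about 25, varies run to run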

Whether a lever-pressing rat or a slot-playing human, rewards delivered right after the behavior make it addictive by design.

130

Orientation Sensitivity
Certain line orientations are more quickly and easily processed and discriminated than others.

The efficiency with which people can perceive and make judgments about the orientation of lines is influenced by a number of factors. For example, the time displayed on a standard analog clock can be quickly interpreted because the numbers are positioned at 30-degree increments around the center. The 30-degree increment happens to correspond to the minimum recommended difference in line orientation required to be easily detectable — i.e., differences in line orientation of less than 30 degrees require more effort to detect.1

Orientation sensitivity is based on two phenomena:

1. The seminal works on orientation sensitivity include “On the Judgment of Angles and Positions of Lines” by Joseph Jastrow, 1893, American Journal of Psychology, 5, 214–248; and “Perception and Discrimination As a Function of Stimulus Orientation: The ‘Oblique Effect’ in Man and Animals” by Stuart Appelle, 1972, Psychological Bulletin, 78, 266–278.

2. “An Oblique Effect in Aesthetics: Homage to Mondrian (1872–1944)” by Richard Latto, Douglas Brain, and Brian Kelly, 2000, Perception, 29(8), 981–987.

3. See, for example, “Texture Segmentation and Pop-Out from Orientation Contrast” by Christoph Nothdurft, 1991, Vision Research, 31, 1073–1078.

1. Oblique effect — The ability to more accurately perceive and judge line orientations that are close to vertical and horizontal than line orientations that are oblique (i.e., diagonal). For example, in tasks where people have to estimate the relative orientation of a line by any number of methods (e.g., redrawing from memory), the most accurate judgments are for horizontal and vertical lines, and the least accurate judgments are for oblique lines. Additionally, lines oriented close to the vertical or horizontal axis will often be perceived or recalled as truly vertical or horizontal. Designs in which the primary elements are vertical or horizontal are considered more generally aesthetic than designs in which primary elements are oblique.2

2. Pop-out effect — The tendency of certain elements in a display to pop out as figure elements and, as a result, be quickly and easily detected. For example, in tasks where people have to identify a target line against a background of lines of a common orientation, the target line is easily detected when it differs from the background lines by 30 degrees or more. The effect is strongest when combined with the oblique effect.3

Consider orientation sensitivity in compositions requiring discrimination between different lines or textures, or decisions based on the relative position of elements. Facilitate discrimination between linear elements by making their orientation differ by more than 30 degrees. In displays requiring estimates of orientation or angle, provide visual indicators at 30-degree increments to improve accuracy in oblique regions. Use horizontal and vertical lines as visual anchors to enhance aesthetics and maximize discrimination with oblique elements.

See also Alignment; Figure-Ground; Good Continuation; Highlighting; Signal-to-Noise Ratio
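The 30-degree guideline lends itself to a simple check. A minimal Python sketch (the function name is invented); note that line orientation is periodic over 180 degrees, so lines at 175 and 5 degrees are only 10 degrees apart:

def easily_discriminable(angle_a, angle_b):
    # Line orientation repeats every 180 degrees, so compare on that cycle.
    diff = abs(angle_a - angle_b) % 180
    # At least 30 degrees of separation is the recommended minimum
    # for effortless discrimination.
    return min(diff, 180 - diff) >= 30

print(easily_discriminable(0, 45))   # True
print(easily_discriminable(80, 95))  # False: only 15 degrees apart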

Makers of the original 1908 London Tube map (top) attempted to realistically represent the physical locations of the stations, the irregular shape of the tracks, and other geographic details. Over the years, the rail system grew large and complex and its renderings became increasingly difficult to comprehend in map form. In 1931, Harry Beck redesigned the map and traded realism for readability. Beck’s classic London Tube map (bottom) is both aesthetically pleasing and easy to read because railway lines are only represented in vertical, horizontal, and 45-degree orientations.

131

Paradox of Great Ideas
Great ideas are indistinguishable from crazy ideas when first introduced.

Based on a variant of a quote by the philosopher Arthur Schopenhauer: “All great ideas pass through three stages. First, they are ridiculed. Second, they are violently opposed. Third, they are accepted as great”.1 The problem is that all crazy ideas also pass through the first two stages — i.e., they are first ridiculed and then violently opposed.

For an idea to be “great”, it must be nonobvious to someone skilled in the area of innovation and have the potential to make a significant impact. By definition, ideas that are nonobvious to someone skilled in the area of innovation defy orthodox thinking, which is why they initially appear crazy. But crazy ideas also defy orthodox thinking, which is why they are nonobvious and initially appear crazy. The paradox, therefore, is that crazy ideas and great ideas are indistinguishable in their early stages.

The paradox of great ideas explains why small firms typically out-innovate large organizations and why large organizations often innovate by acquiring small firms. Large organizations achieve their scale by reducing variability and increasing efficiency. This means they have become adept at filtering out crazy ideas but, in so doing, adept at filtering out the great ideas as well. As a result, they fail less but are incapable of developing breakthrough innovations. By contrast, small organizations have fewer internal filters, enabling them to pursue crazy ideas, which also means great ideas. As a result, they fail more but are able to develop breakthrough innovations.2

The paradox of great ideas can be used as a test of idea potential: If a new idea is widely recognized as great when first introduced, it is likely neither crazy nor great. Conversely, if a new idea is widely recognized as crazy when first introduced, it is likely either crazy or great.

Consider the paradox of great ideas in the development and evaluation of innovative ideas and designs. Large organizations seeking breakthrough innovations should support mechanisms to explore and protect what appear to be crazy ideas (e.g., R&D teams). Individuals and small firms seeking breakthrough innovations should remind themselves that ridicule and rejection are the early earmarks of great ideas — but to stay humble, as they are the early earmarks of crazy ideas as well.

See also Ackoff’s Law; Levels of Invention; Maslow’s Hammer; Pareto Principle

1. Schopenhauer’s original quote reads: “All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident”. The World as Will and Representation (Volume I) by Arthur Schopenhauer, E.F.J. Payne (Tr.), 1966, Dover Publications.

2. See, for example, Loonshots: How to Nurture the Crazy Ideas That Win Wars, Cure Diseases, and Transform Industries by Safi Bahcall, 2020, Griffin.

Crazy to some at the time and genius to others: Henry Ford sits in his first automobile, the Ford Quadricycle, 1896. The paradox of great ideas teaches that crazy ideas are indistinguishable from great ideas in their early stages.

The horse is here to stay, but the automobile is only a novelty — a fad. — Advice given to Henry Ford’s lawyer, Horace Rackham, by an unnamed president of Michigan Savings Bank, 1903

132

Paradox of Unanimity
Extreme agreement in a diverse population is more likely the result of error than of consensus.

The paradox of unanimity occurs when data are, in effect, too uniform to be true. Paradoxically, when large groups of people agree on something too much, or the data from complex systems are too consistent, or the results from analysis conform to expectations too well, then it is more likely to be evidence of systemic error, bias, or corruption than to actually be true. For example, from 2009 to 2015, Volkswagen sold cars in the United States equipped with a secret software “defeat device” that enabled its vehicles to cheat and pass emission tests. The fraud was discovered when experts found that emissions from older Volkswagens were the same as brand-new cars. The consistency of the data was too amazing to be true; and that’s how they got caught.1

The reasoning behind the paradox of unanimity is counterintuitive, but the effect has been demonstrated in a variety of contexts. For example, in police lineups, the probability that police have arrested the person guilty of a crime decreases after three unanimous identifications — i.e., if more than three witnesses identify the same person, then the probability that they arrested the right person starts to go down. Counterintuitively, if one of the witnesses were to identify a different suspect, then the probability that the other witnesses were correct would increase. When unanimity increases past a certain threshold, the probability that the conclusion is correct actually decreases.2

In research and testing contexts, it is not uncommon for results from interviews, usability testing, and A/B testing to return unanimous or perfectly conclusive results. The paradox of unanimity explains why we should be highly skeptical of such outcomes: They almost always indicate major error or bias in the system or methods. In decision-making contexts, a trend in many organizations is to try to drive decisions to consensus. But the paradox of unanimity suggests that unanimity in complex situations is highly unlikely, and therefore driving groups to consensus can undermine the quality of the decision-making.

The paradox of unanimity teaches us that perfect agreement is the enemy of the truth. Accordingly, use the paradox to identify systemic errors, bias, or malfeasance in systems and to improve the quality of decision-making and group governance.

See also Confirmation Bias; Crowd Intelligence; Design by Committee; Error, Human; Groupthink
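A minimal Bayesian sketch, in the spirit of the Gunn et al. model cited below, shows the lineup result numerically; all parameter values are illustrative assumptions:

def p_guilty(n_unanimous, prior=0.5, accuracy=0.9,
             false_id=0.1, p_bias=0.01):
    # Likelihood of n unanimous identifications, allowing a small chance
    # p_bias that the whole process is biased (everyone agrees
    # regardless of the truth).
    like_guilty = p_bias + (1 - p_bias) * accuracy ** n_unanimous
    like_innocent = p_bias + (1 - p_bias) * false_id ** n_unanimous
    numerator = prior * like_guilty
    return numerator / (numerator + (1 - prior) * like_innocent)

for n in (1, 3, 10, 50):
    print(n, round(p_guilty(n), 3))
# confidence rises, peaks around n = 3, then slowly falls toward the prior

With these assumptions, confidence in guilt climbs to about 0.99 by the third unanimous identification and then declines as unanimity grows, because overwhelming agreement becomes better explained by systemic bias than by independent accuracy.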

1. Based on a principle observed in ancient Jewish legal proceedings. The seminal scientific work is “Too good to be true: when overwhelming evidence fails to convince” by Lachlan J. Gunn et al., Mar 2016, Proceedings of the Royal Society A, 472.

2. This effect is often observed in the electoral results of autocrats. For example, in the 2002 Iraqi presidential referendum, Saddam Hussein received 100% of the votes.

The primary source of Jewish religious law, the Talmud, recognizes the paradox of unanimity. In one passage, the Talmud rules that if a suspect is found unanimously guilty by the Sanhedrin (Jewish court), then they cannot be convicted and must be acquitted of all charges.

133

Pareto Principle
A small percentage of variables in any large system is responsible for most of its behaviors.

The Pareto principle, proposed by the economist Vilfredo Pareto, states that approximately 80% of the dynamics of any large system is caused by 20% of the variables comprising that system. The Pareto principle is observed in all large, complex systems — i.e., systems that involve many independent variables interacting with one another in different ways — including those in economics, management, product design, quality control, and engineering, to name a few. For example, 20% of a product’s features are used 80% of the time; 20% of a codebase is responsible for 80% of its bugs; 20% of a town’s roads host 80% of its traffic.1

The Pareto principle is useful for focusing resources and, in turn, realizing greater efficiencies in design. For example, if 20% of a product’s features are used 80% of the time, design and testing resources should prioritize those features. The remaining 80% of the features should be reevaluated to verify their value in the design. Similarly, when redesigning systems to make them more efficient, focusing on aspects of the system beyond the key 20% yields diminishing returns. Improvements beyond the 20% will result in less substantial gains, and the value of those gains is often offset by the risks of introducing errors or new problems into the system.

Note that the actual ratio can vary (e.g., 30/70, 10/90). And because the two values refer to different things — the small value to the number of variables involved and the large value to the corresponding behaviors — they need not sum to 100. For example, 20% of the roads could be responsible for 95% of the traffic.

Not all elements in a design are created equal. Use the Pareto principle to assess the relative value and priority of elements, target areas of redesign and optimization, and focus resources in an efficient manner. Noncritical functions that are part of the less important 80% should be minimized or removed altogether from the design. When time and resources are limited, resist efforts to correct and optimize designs beyond the more important 20%, as such efforts yield diminishing returns. Generally, limit the application of the Pareto principle to complex systems that are influenced by many small and unrelated effects.

See also Cost-Benefit; Feature Creep; Hanlon’s Razor; KISS; Normal Distribution; Ockham’s Razor
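In practice, a quick cumulative-share calculation can surface the “vital few” in usage data. The feature names and counts below are hypothetical:

```python
# Find the smallest set of features that accounts for ~80% of observed use.
usage = {"search": 4200, "open": 3100, "save": 2500, "print": 600,
         "export": 280, "macros": 120, "themes": 90, "plugins": 60,
         "scripting": 30, "kiosk mode": 20}

total = sum(usage.values())
running, vital_few = 0, []
for feature, count in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    vital_few.append(feature)
    if running / total >= 0.80:
        break

print(f"{len(vital_few)}/{len(usage)} features -> {running / total:.0%} of usage")
print(vital_few)
```

With these invented counts, 3 of 10 features cover roughly 89% of usage, which is where design and testing resources would be focused first.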

1. In the early 1900s, Pareto observed that 20% of the Italian people possessed 80% of the wealth. The seminal work on the Pareto principle is Quality Control Handbook by Joseph M. Juran (Ed.), 1951, McGraw-Hill. This principle is also known as the 80/20 rule, Juran’s principle, and the vital few and trivial many rule.

This heat map of a Lady Gaga fan page indicates where people are looking and clicking, which can help identify the essential variables that drive an experience. When Lady Gaga is involved, her photos and videos are the critical 20%.

134

Peak-End Rule
People remember and judge an experience based on its most intense moment and its end.

The peak-end rule is a cognitive bias that influences how people remember past experiences. An experience is defined as any event with a clear beginning and end. The peak refers to the moment that elicits the strongest emotional response in an experience, positive or negative; and the end refers to the conclusion of that experience. The peaks and ends of experiences are overweighted in how we recall and feel about the experience as a whole.1

The peak-end rule should be considered in conjunction with serial position effects, which indicate that first, last, and different experiences are the most likely to be recalled and to influence judgment. Designers should therefore prioritize the beginnings, peaks, and ends of experiences, disproportionately investing in them to ensure people will remember the overall experience more fondly. For example, sharing photographs of peak moments after a theme park ride or giving surprise gifts during a ceremony will increase the probability that the experiences will be recalled favorably.

In cases where a person has a negative experience, designing interventions based on the peak-end rule can reframe the negative memories as more positive ones. The more times a memory is retrieved, the more it will change. Therefore, interventions that engage people to repeatedly recall positive moments — e.g., moments highlighted in a follow-up email, celebratory pictures online, etc. — can help mitigate poor experiences.

It is important to note that people remember negative experiences more vividly than positive ones, by some estimates as much as two to three times. And once a negative valence is established, it can contaminate otherwise positive and neutral events. Therefore, the peak-end rule tells us not only where to emphasize the positive but where not to emphasize the negative. For example, ending an event with a question-and-answer session risks ending on a negative note, which could taint the greater experience.2

Consider the peak-end rule in conjunction with serial position effects in the design of experiences, especially when delayed judgments and decisions are involved. Prioritize the quality of beginnings, peaks, and ends of experiences. Remediate poor experiences with repeated exposures to positive aspects of the experience. Design experiences to end on a high note, and avoid concluding moments that risk ending poorly.

See also Entry Point; Exposure Effect; Serial Position Effects; Stickiness; von Restorff Effect; Zeigarnik Effect
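A common simplification of Kahneman’s findings approximates remembered affect as the average of the peak moment and the end, largely ignoring duration and the other moments. The sketch below applies that simplification to the two restaurant experiences illustrated later in this entry; the moment-by-moment ratings are invented:

```python
# Remembered affect ~ mean(peak, end): a simplification of the
# peak-end rule, not an exact formula from the source.
def remembered_affect(moments: list[float]) -> float:
    """moments: affect ratings over an experience, -10 (awful) to +10 (great)."""
    peak = max(moments, key=abs)   # most intense moment, positive or negative
    end = moments[-1]
    return (peak + end) / 2

experience_1 = [-6, 2, 3, -7, 1, -2]   # rude greeting, absent server, flat end
experience_2 = [5, 2, 3, 8, 2, 6]      # warm greeting, allergy icons, surprise rolls
print(remembered_affect(experience_1))  # -4.5: "okay... service was not great"
print(remembered_affect(experience_2))  #  7.0: "the restaurant was great"
```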

1. The seminal work on the peak-end rule is “When More Pain Is Preferred to Less: Adding a Better End” by Daniel Kahneman et al., 1993, Psychological Science, 401–405.

2. See, for example, “Choices, Values, and Frames” by Daniel Kahneman and Amos Tversky, 1984, American Psychologist, 39, 341–350; and “Bad Is Stronger than Good” by Roy Baumeister et al., 2001, Review of General Psychology, 5, 323–370.

Event: You are meeting a friend at a restaurant for dinner. (The accompanying figure plots the salience of each moment over time.)

EXPERIENCE 1
LIKELY MEMORY: The restaurant was okay. Service was not great.
Beginning: You arrive at the almost-empty restaurant before your friend and ask for a table for two. The greeter replies, “We don’t seat incomplete parties. Let me know when your friend gets here.”
Peak: You have a food allergy and would like to ask your server a few questions about the menu. However, they are nowhere to be found; you don’t see them for another 10 minutes.
End: Your server brings the receipt, hands you your packaged leftovers, and thanks you for visiting.

EXPERIENCE 2
LIKELY MEMORY: The restaurant was great. They have the best rolls.
Beginning: You arrive at the almost-empty restaurant before your friend and ask for a table for two. The greeter replies, “Please follow me. Can I bring you something to eat or drink while you wait?”
Peak: You have a food allergy and notice that each item on the menu is clearly marked with an icon if it contains common allergens.
End: Your server brings the receipt and hands you your packaged leftovers. You notice that they included four complimentary dinner rolls; they must have overheard you tell your friend how much you enjoyed the rolls during dinner.

135

Performance Load
The greater the effort required to complete a task, the less likely the task will be completed.

Performance load is the degree of mental and physical effort required to achieve a goal. If performance load is high, the probability of successfully accomplishing the goal decreases; if performance load is low, the probability increases.1

1. The seminal works on cognitive load are “Cognitive Load During Problem Solving: Effects on Learning” by John Sweller, 1988, Cognitive Science, 12, 257–285; “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” by George Miller, 1956, The Psychological Review, 63, 81–97; and Human Behavior and the Principle of Least Effort by George Zipf, 1949, Addison-Wesley. The seminal works on kinematic load are Bricklaying System by Frank Gilbreth, 1909, M.C. Clark Publishing Company; and Motion Study: A Method for Increasing the Efficiency of the Workman by Frank Gilbreth, 1911, Constable and Company, Ltd. Performance load is also known as the path-of-least-resistance principle and the principle of least effort.

2. The title of Steve Krug’s classic on usability sums it up: Don’t Make Me Think, 2000, New Riders Press.

Performance load is the sum of two types of load:

1. Cognitive load — The mental effort (i.e., attention, perception, memory, problem solving) required to accomplish a goal (e.g., recalling a new phone number). Cognitive load can be reduced by eliminating unnecessary information from displays, chunking information that is to be remembered, providing memory aids to assist in complex tasks, and automating computation-intensive and memory-intensive tasks.2

2. Kinematic load — The physical effort (i.e., number of steps or movements, or amount of force) required to accomplish a goal (e.g., dialing a phone number). Kinematic load can be reduced by eliminating unnecessary steps in tasks, reducing overall motion and energy expended, and automating repetitive tasks.

There are cases in which it is desirable to increase performance load. For example, video games are designed to progressively increase cognitive load to keep the experience challenging and fun; fitness equipment is designed to increase kinematic load through additional repetitions or resistance to improve endurance and strength; user interface designs increase both cognitive and kinematic load with alert messages and confirmation steps to verify intent on critical actions.

People make decisions based on perceived versus actual performance load. For example, given equal visibility of two paths to a nearby destination, one shorter and one longer, people will overwhelmingly choose the shorter. However, given visibility of two paths to a nearby destination of equal length but one a moving walkway, people only marginally favor the moving walkway. Why? The perceived distances are the same, and there is additional performance load associated with entering and exiting the moving walkway. Preference for the moving walkway increases as the distance to the destination increases.

Consider performance load in the design of interactions and experiences. Generally, make the desired manner of interaction the path of least resistance, unless overcoming challenges or problem-solving are essential aspects of the experience. Reduce performance load by minimizing the number of steps in an interaction and the effort required by those steps.

See also Accessibility; Cost-Benefit; Depth of Processing; Desire Line; IKEA Effect; Nudge; Zeigarnik Effect
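The relationship between load and completion can be illustrated with a toy abandonment model. The 10%-per-step drop-off rate and the two flows below are assumptions for illustration, not empirical constants:

```python
# Toy model: each additional required step loses a fixed fraction of users.
DROP_PER_STEP = 0.10  # assumed abandonment per step

def completion_rate(steps: int) -> float:
    return (1 - DROP_PER_STEP) ** steps

for flow, steps in [("guest checkout", 3), ("checkout with account signup", 8)]:
    print(f"{flow}: {steps} steps -> {completion_rate(steps):.0%} completion")
```

Under these assumptions, the 3-step flow completes about 73% of the time versus about 43% for the 8-step flow, which is why removing steps is usually the first lever for reducing kinematic load.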

Classic photos from a time-and-motion study conducted by Frank and Lillian Gilbreth, aimed at reducing the time and effort required to perform tasks.

136

Performance vs. Preference
Increasing performance does not necessarily increase desirability.

What helps people perform well and what people like are often not the same thing. The Dvorak keyboard is estimated to improve typing efficiency by more than 30% but has failed to rise in popularity because people prefer the more familiar QWERTY keyboard. If you asked people whether they would like to be able to type 30% faster with fewer errors, most would answer in the affirmative. Despite this, more than 50 years have passed since the introduction of the Dvorak keyboard, and it is still more of a novelty than a practical alternative.1

This underscores an important lesson for designers: The reasons people prefer one design to another are a combination of many factors and may have nothing to do with performance. Is the design pleasing to look at? Does it compete with long-standing designs or standards of use? Does it contribute to the well-being or self-esteem of the user? These are all factors that must be carefully balanced in the development of the design requirements.

If a superbly performing design is never bought or used because people (for whatever reason) do not prefer it to alternatives, the performance benefits are moot. If a well-liked design does not help people perform at the level required, the preference benefits are moot. The best way to correctly balance performance and preference in design is to accurately determine the importance of each. While surveys, interviews, and focus groups try to find out what people want or like, they are unreliable indicators of what people will actually do, especially for new or unfamiliar designs. Additionally, people are poor at discriminating between features they like and features that actually enhance their performance.2

The best method of obtaining accurate performance and preference requirements is to observe people interacting with the design (or a similar design) in real contexts. When this is not feasible, test using structured tasks that approximate key aspects of the way the design will be used. It is important to obtain preference information in context while the task is being performed and not afterward. Do not rely on reports of what people say they have done, will do, or are planning to do in the future regarding the use of a design; such reports are unreliable.

See also Aesthetic-Usability Effect; Control; Desire Line; Flexibility Tradeoffs; Hierarchy of Needs

1. See, for example, “Performance Versus Preference” by Robert Bailey, 1993, Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting, 282–286.

2. See, for example, “Measuring Usability: Preference vs. Performance” by Jakob Nielsen and Jonathan Levy, 1994, Communications of the ACM, 37(4), 66–75.

(The accompanying diagrams show the QWERTY and Dvorak keyboard layouts.)

The QWERTY layout was designed to prevent the jamming of mechanical arms on early typewriters. The Dvorak layout, by contrast, was designed to maximize typing efficiency: It grouped keys based on frequency of use and positioned keys to promote alternating keystrokes between hands, among other refinements. The result is a 30% improvement in typing efficiency and a claim to most of the world records for speed typing. Despite the clear advantages of the Dvorak design, QWERTY enjoys the following of generations of people trained on the layout, which in turn drives manufacturers to continue perpetuating the standard. Dvorak wins on performance, but QWERTY wins on preference.

137

Perspective Cues
Visual properties that create the perception of depth and three-dimensionality.

People have evolved to see things as three-dimensional whenever possible — even when the things are clearly not three-dimensional. The following visual cues are commonly used to encourage the perception of three-dimensional relationships.1

• Interposition — When overlapping objects are presented, the overlapped object is perceived to be farther away than the overlapping object.
• Size — When two similar objects of different size are presented together, the smaller object is perceived to be farther away than the larger object. The size of familiar objects can be used to indicate the size and depth of unfamiliar objects.
• Elevation — When two objects are presented at different vertical locations, the object at the higher elevation is perceived to be farther away. An exception to this is when a strong horizontal element is present, which tends to be perceived as a horizon line. In this case, objects that are closer to the horizon line are perceived as farther away than objects that are distant from the horizon line.
• Linear perspective — When two vertical lines converge near their top ends, the converging ends of the lines are perceived to be farther away than the diverging ends.
• Texture gradient — When the texture of a surface varies in density, the areas of greater density are perceived to be farther away than areas of lesser density.
• Shading — When an object has shading or shadows, the shaded areas are perceived to be farthest from the light source, and the light areas are interpreted as being closest to the light source.
• Atmospheric perspective — When multiple objects are presented together, the objects that are bluer and blurrier are perceived to be farther away than the objects that are less blue and blurry.2

Consider these visual cues in the depiction of 3D elements and environments. The strongest depth effects are achieved when visual cues are used in combination; therefore, use as many of the cues as possible, making sure the cues are appropriate for the context.

See also Common Fate; Figure-Ground; Top-Down Lighting Bias; Visuospatial Resonance
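Several of these cues are easy to combine in a rendering sketch. The following example (illustrative only, assuming matplotlib is installed) draws three identical shapes that read at different depths by combining the size, elevation, and atmospheric-perspective cues:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

fig, ax = plt.subplots(figsize=(5, 5))
ax.set_xlim(0, 10); ax.set_ylim(0, 10); ax.set_aspect("equal")

# Near-to-far: smaller (size), higher on the page (elevation), and
# bluer/fainter (atmospheric perspective) all signal greater distance.
for depth, (x, y, radius) in enumerate([(3.0, 2.5, 1.6),
                                        (5.5, 5.0, 1.0),
                                        (7.5, 7.0, 0.55)]):
    fade = depth / 2                                   # 0 = near, 1 = far
    color = (0.2 + 0.3 * fade, 0.2 + 0.3 * fade, 0.4 + 0.6 * fade)
    ax.add_patch(Circle((x, y), radius, color=color, alpha=1 - 0.3 * fade))

ax.axis("off")
plt.show()
```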

1. Note that only static cues (as opposed to motion cues) are presented here. A nice review of the various depth cues is found in Sensation and Perception by Margaret Matlin and Hugh Foley, 1997, Allyn & Bacon, 165–193.

2. The relationship of blueness and blurriness to distance is a function of experience — i.e., people who live in a smoggy city will have a different sense of atmospheric perspective than people who live in less polluted rural areas.

Cinderella Castle at Walt Disney World in Orlando, Florida, opened in July 1971. At the time the castle was built, any structure that was 190 feet or taller required a red, flashing aircraft beacon on top. To avoid this, the castle stands 189 feet (58 m) tall but is designed to look taller. An old set-design trick known as forced perspective makes the castle appear larger than it is. At higher elevations, proportions are reduced for elements such as stones, windows, and doors. The higher you go, the smaller these elements, creating the illusion of distance. Many buildings at Disney parks employ this technique, including the shops on Main Street at Disney World and Disneyland. Imagineers design Disney buildings to a 1 : 5/8 : 1/2 scale. The first floor of a Disney building is to scale (100%), but the second floor of the building façade is 5/8 the size of the first floor. And if there is a third floor, it stands at 1/2 the size of the base floor. When guests standing on the ground look up, the building looks like it stands three stories tall, when in fact it shrinks with each floor.

(The accompanying diagram includes a 6-foot-tall human for scale.)


138

Perverse Incentive
An incentive that unintentionally worsens the problem it seeks to solve.

An incentive is anything that motivates or encourages someone to behave in a particular way. A perverse incentive, then, is an incentive gone awry. It unintentionally motivates or encourages someone to behave in a contrary way, or in a way that results in unintended negative consequences.1

1. The case for recognizing and understanding perverse incentives as a recurring theme behind economic crises is eloquently presented in “Teaching History in Business Schools: An Insider’s View” by Robert Wright, Dec 2010, Academy of Management Learning & Education, 9(4), 697–700.

2. Freakonomics by Stephen Dubner and Steven Levitt, 2005/2010, William Morrow.

3. Der Kobra-Effekt. Wie man Irrwege der Wirtschaftspolitik vermeidet by Horst Siebert, 2001, Deutsche Verlags-Anstalt.

4. See, for example, “Thresholds, Perverse Incentives, and Preemptive Conservation of Endangered Species” by Christian Langpap and JunJie Wu, 2017, Journal of the Association of Environmental and Resource Economists, 4(S1), S227–S259.

Perverse incentives manifest in three ways:

1. Boomerang effects — When an incentive is offered to modify behavior but the opposite behavior results. For example, an Israeli daycare wanted to reduce the number of parents picking their children up late, so it introduced a small fine. The result? Late pickups doubled. Before the fine, parents would try to pick up on time because it was the rule. After the fine, however, parents felt they had permission to be late if they were willing to pay the fine.2

2. Cheating — When an incentive is offered to modify behavior but people game the system (i.e., cheat) to get the incentive. For example, in the early 1900s, British officials in Delhi, India, offered a bounty for cobras to reduce the venomous snake population. The system seemed to work; large numbers of snakes were killed and turned in for a reward. It was later discovered that people were breeding cobras just to get the bounty, and the program was shut down. The cobra breeders set their then-worthless snakes free, which increased the total cobra population.3

3. Collateral damage — When an incentive is offered to modify behavior and it works, but the collateral damage that results defeats the point. For example, some environmental protection laws impose development restrictions on landowners who find endangered species on their property. While such laws can protect endangered wildlife when discovered, they also encourage preemptive habitat destruction: Landowners are incentivized to quickly develop land, protecting the commercial value of their property by eliminating the possibility of finding endangered species.4

Consider the perils of perverse incentives when trying to modify behavior. Ensure incentives are appropriately aligned and weighted to the goal and that the overall cost-benefit of goal behaviors is more favorable than that of alternatives. Beware the use of proxy measures — e.g., bounties, quotas, test scores, etc. — which often induce cheating and permit the illusion of progress.

See also Cost-Benefit; Desire Line; Gamification; Operant Conditioning; Process Eats Goal; Shaping

In the 1860s, the United States began building the transcontinental railroad. The top image shows the approved route for track to be laid west of Omaha, Nebraska. Thomas C. Durant, vice president of Union Pacific Railroad Company, changed the route indicated in the bottom image. Why? The U.S. government contract for this portion of the railroad paid $50,000 for every mile of track. By extending the route to include the “oxbow”, Durant and his team added 9 miles of track, which increased their fee by $450,000 (over $8 million in 2022).

Once in Hartford the flies were so numerous for a time, and so troublesome, that Mrs. Clemens conceived the idea of paying George a bounty on all the flies he might kill. The children saw an opportunity here for the acquisition of sudden wealth…Any Government could have told her that the best way to increase wolves in America, rabbits in Australia, and snakes in India, is to pay a bounty on their scalps. Then every patriot goes to raising them. — Mark Twain Autobiographical Writings

139

Phonetic Symbolism
The meaning conveyed by the sounds of words.

Words convey meaning based on their assigned definitions but also based on how they sound when spoken. The sounds of consonants and vowels convey basic concepts like size, gender, and aggression — and reinforce these concepts in words. While cross-cultural research on phonetic symbolism is ongoing, there is suggestive evidence of common sound-meaning associations across many languages.1

Consonant sounds with a constant flow of air — such as the letters s, f, v, z — are associated with smallness, femininity, and passivity. Consonant sounds where the air is blocked — like the letters p, k, t, b, g, d, hard c — are associated with largeness, masculinity, and aggression. Vowel sounds that widen the mouth, as with a smile — e (bee), i (sit), a (hate), e (best) — are associated with smallness, femininity, and passivity. Vowel sounds that bring the mouth into a circle — o (dome), o (cot), a (can), u (food), u (put), u (luck), a (father) — are associated with largeness, masculinity, and aggression.

One implication of phonetic symbolism regards brand perception. Perceptions of brands may be enhanced when the sound symbolism and product attributes are congruous — i.e., harmonious with one another — or undermined when they are incongruous. For example, the brand “Kraft” has very large, masculine, and strong phonetic symbolism. This is beneficial for conveying dominance, but it could be counterproductive for products that are ideally perceived as soft or light, like bread or marshmallows.2

Phonetic symbolism applies to names of people as well as products. For example, one study rated the names of U.S. presidential candidates from 1824 to 1992 based on their vowel sound, consonant sound, and rhythm. The hypothesis was that names containing vowel sounds commonly used to express disgust (e.g., puke and Dewey) would be less favorably perceived than candidates with better phonetic symbolism. The candidates with better-sounding names — i.e., more positive phonetic symbolism — won the popular vote in 35 out of 42 elections, a whopping 83% of the time. The analysis was extended to congressional and local elections, and candidates with better-sounding names continued to have a significant edge, consistently winning over 60% of the time.3

The sounds of words matter. Consider phonetic symbolism in brand names, slogans, and even pricing. Ensure that the phonetic symbolism of names and important numbers is congruent with their identities and goals.

See also Left-Digit Effect; Priming; Propositional Density; Stickiness
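The sound classes above can be operationalized as a toy scorer. This is an illustrative sketch only: it inspects spelled letters rather than actual phonemes, so real names deserve a proper phonetic transcription.

```python
# Count "soft" vs. "hard" letters per the sound classes described above.
SOFT = set("sfvzei")     # continuous-airflow consonants + "smile" vowels
HARD = set("pktbgdcoua") # blocked-airflow consonants + "rounded" vowels

def sound_scores(name: str) -> tuple[int, int]:
    """Return (soft, hard) letter counts for a name."""
    letters = name.lower()
    return (sum(ch in SOFT for ch in letters),
            sum(ch in HARD for ch in letters))

for brand in ("Kraft", "Frish", "Frosh"):
    soft, hard = sound_scores(brand)
    lean = "large/aggressive" if hard > soft else "small/soft"
    print(f"{brand}: soft={soft}, hard={hard} -> leans {lean}")
    # Note: "Frosh" picks up a hard, rounded o that "Frish" lacks,
    # consistent with the Frosh/Frish study described in the caption below.
```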

1. The seminal work is “A study in phonetic symbolism” by Edward Sapir, 1929, Journal of Experimental Psychology, 12, 225–239.

2. “Phonetic Symbolism and Brand Name Preference” by Tina Lowrey and L.J. Shrum, 2007, Journal of Consumer Research, 34(3), 406–414.

3. “The political impact of name sounds” by Grant Smith, 1998, Communication Monographs, 65(2), 154–172.

In a 2004 study, a fictitious ice cream named Frosh (rhymes with posh) was compared to a fictitious competitor named Frish (rhymes with fish). Both names sound like variants of the word “fresh”, with the subtle difference in the vowel sound — o versus i. Phonetic symbolism suggests that vowel sounds that make the mouth form a circle, like an o-sound, would make the product sound larger, more dominant. And in ratings based on these names alone, Frosh ice cream was rated smoother, richer, and creamier than Frish ice cream.

140

Picture Superiority Effect
Pictures are remembered better than words.

It is said that a picture is worth a thousand words, and it turns out that in most cases, this is true. Pictures are generally more easily recognized and recalled than words, although memory for pictures and words together is superior to memory for words alone or pictures alone. For example, instructional materials and technical manuals that present textual information accompanied by supporting pictures enable information recall that is better than that produced by either the text or pictures alone. The picture superiority effect is commonly used in instructional design, advertising, technical writing, and other design contexts requiring easy and accurate recall of information.1

The picture superiority effect applies only when people are asked to recall something more than 30 seconds after exposure. When recall is measured immediately after the presentation of a series of pictures or words, recall performance for pictures and words is equal. The picture superiority effect is strongest when the pictures represent common, concrete things versus abstract things (such as a picture of a flag versus a picture depicting the concept of freedom) and when the pictures are distinct from one another (such as a mix of objects versus objects of a single type).

The advantage increases further when people are casually exposed to information and the exposure time is limited. For example, an advertisement for a clock repair shop that includes a picture of a clock will be better recalled than the same advertisement without the picture. People not interested in clock repair who see the advertisement with the picture will also be better able to recall the brand if the need for clock repair service arises at a later time. The strength of the picture superiority effect diminishes as the information becomes more complex. For example, people are able to recall events from a story presented as a silent movie as well as they recall events from the same story read as text.2

Use the picture superiority effect to improve the recognition and recall of key information. Use pictures and words together, and ensure that they reinforce the same information for optimal effect. Pictures and words that conflict create interference and dramatically inhibit recall. Consider the inclusion of meaningful pictures in advertising campaigns when possible, especially when the goal is to build company and brand awareness.

See also Exposure Effect; Iconic Representation; Inattentional Blindness; Recognition over Recall

1. The seminal work on the picture superiority effect is “Why Are Pictures Easier to Recall than Words?” by Allan Paivio et al., 1968, Psychonomic Science, 11(4), 137–138. This principle is also known as the pictorial superiority effect.

2. See, for example, “Conditions for a Picture-Superiority Effect on Consumer Memory” by Terry L. Childers and Michael J. Houston, 1984, Journal of Consumer Research, 11, 643–654.

Pictures are more easily recognized and recalled than words, but memory for pictures and words together is superior to either alone. The NIKE logotype is good. The NIKE swoosh is classic. But in terms of recall, the two together will be the most memorable.

141

Play Preferences
A tendency for male children and female children to like different kinds of play.

There are a number of innate cognitive-behavioral differences between males and females, one of which is early childhood play preferences. Male children tend to engage in play activities that emulate hunting-related behaviors, whereas female children tend to engage in play activities that emulate nurturing-related behaviors. Play preferences were long thought to be primarily a function of social and environmental factors, but research increasingly favors a more biologically based explanation. For example, the fact that male children tend to prefer stereotypically male toys (e.g., cars) and females tend to prefer stereotypically female toys (e.g., dolls) has long been established. However, in studies where male and female vervet monkeys are presented with the same human toys, the male vervets prefer to play with the male toys, and the female vervets prefer to play with the female toys. This suggests a deeply rooted, biologically based gender bias for certain play behaviors.1

Like play behaviors in other animals, these early childhood fixations likely had adaptive significance in preparing our hunter-gatherer ancestors for survival: male children for hunting and female children for child-rearing. Though these fixations are essentially vestigial in modern society, they continue to influence our preferences and behaviors from early childhood through adolescence.

Hunter fixation is characterized by activities involving object movement and location, weapons and tools, hunting and fighting, predators, and physical play. Nurturer fixation is characterized by activities involving form and color, facial expressions and interpersonal skills, nurturing and caretaking, babies, and verbal play.

Consider hunter-nurturer fixations in the design of objects and environments for children. When targeting male children, incorporate elements that involve object movement and tracking, angular forms, predators, and physical play. When targeting female children, incorporate elements that involve aesthetics and color, round forms, babies, and tasks requiring interpersonal interaction.

See also Archetypes, Psychological; Baby-Face Bias; Color Effects; Contour Bias; Supernormal Stimulus; Threat Detection

1. See, for example, “Sex Differences in Infants’ Visual Interest in Toys” by Gerianne Alexander et al., 2009, Archives of Sexual Behavior, 38, 427–433; “Sex Differences in Interest in Infants Across the Lifespan: A Biological Adaptation for Parenting?” by Dario Maestripieri and Suzanne Pelka, Sep 2002, Human Nature, 13(3), 327–344; and “Sex Differences in Human Neonatal Social Perception” by Jennifer Connellan et al., 2000, Infant Behavior & Development, 23, 113–118.

When vervets are presented with human toys, female vervets prefer stereotypically female toys, and male vervets prefer stereotypically male toys. This suggests a biological basis for gender-based play preferences in primates — including humans.

142

Poka-Yoke
A thing designed to prevent defects and errors.

Poka-yoke is a Japanese term that means mistake-proofing and broadly refers to any mechanism or process that prevents design defects. The concept was developed as part of the Toyota Production System to prevent defects — the notion being that human errors are inevitable but defects can be detected and corrected before they reach the customer. For example, the feature in many word processors that automatically highlights misspelled words is a poka-yoke.1

The systematic application of poka-yoke involves four steps: (1) identifying defects, which are defined broadly as any cause of customer dissatisfaction; (2) identifying the causes of those defects, focusing on root versus proximal causes; (3) designing and implementing poka-yokes in accordance with the six-strategy hierarchy; and (4) maintaining improvements while building upon them (iterating back to step 1). This approach is reactive in nature and works best for systems that can iterate and mature over time without risk of catastrophic failure. For systems with no such luxury — e.g., prototypes, experimental designs, and innovative designs — a more proactive approach is required: borrowing tried-and-true poka-yokes from analogous systems, coupled with extensive testing. Novel functions without strong analogs pose the greatest risks and should be treated with particular humility.2

Poka-yokes are applied using six strategies, in order of effectiveness:

1. Elimination — Redesign the system so that an error-prone element or procedure is no longer needed.
2. Prevention — Redesign the system to make critical errors impossible.
3. Replacement — Redesign the system so that an error-prone element or procedure can be substituted with something more reliable.
4. Facilitation — Redesign the system to make elements easier to work with and procedures easier to perform.
5. Detection — Redesign the system to quickly detect and contain errors before further processing occurs.
6. Mitigation — Redesign the system to minimize the impact of errors when they do occur.3

Embrace the poka-yoke philosophy in the design of all systems, especially manufacturing and production processes. Consider the six strategies, favoring prevention over mitigation. For innovative designs, borrow poka-yoke strategies from analogous systems when possible. For truly novel functions that are high risk and have a low tolerance for error, never rely solely on human judgment or training: Preemptively poka-yoke and test exhaustively.

See also Error, Design; Error, Human; Factor of Safety; Forgiveness; Iteration; Root Cause; Testing Pyramid
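In software, the prevention and detection strategies can be sketched with types and boundary validation. The example below is hypothetical, not drawn from the text:

```python
from dataclasses import dataclass
from enum import Enum

class Units(Enum):
    CELSIUS = "C"
    FAHRENHEIT = "F"

@dataclass(frozen=True)
class Temperature:
    value: float
    units: Units  # prevention: a bare, ambiguous number cannot be passed

    def __post_init__(self):
        # detection: reject impossible readings at the source, before
        # they propagate into downstream calculations
        if self.units is Units.CELSIUS and self.value < -273.15:
            raise ValueError("temperature below absolute zero")

set_point = Temperature(21.5, Units.CELSIUS)  # OK
Temperature(-300, Units.CELSIUS)              # raises ValueError immediately
```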

1. The original phrase was introduced as bakayoke or idiot-proofing, but this reduced certain plant employees to tears. It was then recast as the gentler poka-yoke or “mistake-proofing”. The seminal work is Zero Quality Control: Source Inspection and the Poka-Yoke System by Shigeo Shingo and A.P. Dillon (Tr.), 1986, Productivity Press.

2. See, for example, “Quality improvement through Poka-Yoke: From engineering design to information system design” by Abraham Zhang, Jan 2014, International Journal of Six Sigma and Competitive Advantage, 8(2), 147–159.

3. “Principles of Poka-Yoke” by Rashmi Sattigeri and D.G. Kulkarni, Apr 2021, International Journal of New Innovations in Engineering and Technology, 16(3), 17–19.

On October 31, 2014, the VSS Enterprise was lifted to about 46,400 feet (14.1 km) by the White Knight Two carrier aircraft. Enterprise separated normally, and its hybrid rocket engine fired as planned. Nine seconds after rocket ignition, the copilot unexpectedly swung the tail booms up into a feathered position, increasing drag while the rocket continued to accelerate. There was no lockout device to prevent incorrect deployment of the feather. Two seconds later, Enterprise experienced a catastrophic structural failure and broke into multiple pieces, killing the copilot. Subsequent versions of the spacecraft were equipped with mechanical inhibit devices to prevent locking or unlocking of the feather during safety-critical phases.

143

Premature Optimization
Making elements of a design efficient before they are recognized as important or even needed.

Premature optimization is based on an observation by the computer scientist Donald Knuth that programmers often spend significant time optimizing code too early in the development cycle. They expend resources on components that end up playing a trivial role in overall performance or on components that ultimately get cut from the final product. The intentions behind premature optimization are invariably good — i.e., programmers trying to create efficient code — but the result of such efforts is wasted energy, time, and money. For this reason, Knuth suggests that “small efficiencies” should be ignored 97% of the time and declares premature optimization “the root of all evil”.1

The tendency to prematurely optimize is a problem that extends beyond computer science to all areas of design. Premature optimization is in some ways the obverse of the Pareto principle — i.e., it involves focusing on the many things that don’t matter versus the few things that do. For example, it is common for startups to spend considerable resources on the design of their websites: optimizing the branding, the usability of forms, meta tags for search engine rankings, site bandwidth for high traffic, etc. In most cases, these things are not needed at launch and carry significant costs in time and money. What is needed at launch, however, is a working website that presents the offering in a clear and compelling manner. This invariably requires experimentation and iteration, which will leave many of the early design elements and their refinements on the cutting-room floor.

The lesson of premature optimization is not that optimization is bad but that the cost-benefit of what and when to optimize changes throughout the development cycle. Designers should prioritize optimizing the things that matter most at a given time versus trying to optimize everything.

Premature optimization is a cautionary principle of design: a practice to be avoided rather than followed. While the pursuit of perfection may seem laudable, it comes at a high price. Consider the Pareto principle when optimizing, and educate teammates about the costs of overoptimizing. Stakeholders often resist releasing designs with incomplete or imperfect elements. Remind them that design is iterative: Noncritical things not included or optimized in one iteration can be included and optimized in later iterations. Do not let perfect be the enemy of shipping.

See also Ackoff’s Law; Gall’s Law; KISS; Minimum-Viable Product; Pareto Principle; Satisficing
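In code, the practical antidote is to measure before optimizing. A minimal sketch using Python's built-in profiler (the functions are hypothetical stand-ins for a real pipeline):

```python
import cProfile

def parse(records):                 # suspected bottleneck?
    return [r.strip().split(",") for r in records]

def summarize(rows):                # or is it this?
    return sum(len(row) for row in rows)

def pipeline():
    records = [f"a,b,c,{i}\n" for i in range(100_000)]
    summarize(parse(records))

# Let the profile, not intuition, identify the few hot spots worth optimizing.
cProfile.run("pipeline()", sort="cumulative")
```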

1. “Structured Programming With go to Statements” by Donald Knuth, Dec 1974, Computing Surveys, 6(4), 261–301.

In April 1997, 3D Realms announced the upcoming release of Duke Nukem Forever, the fourth installment in the hit video game franchise, to be available by Christmas 1998. With intentions to make the best game possible, the team repeatedly tried to incorporate new features and technologies into the game, ensuring that it would be at the cutting edge of game play when released. But gaming technology was evolving at a rapid pace, and the result was a continuous resetting of the design and development cycle. Christmas 1998 came and went. Then 1999 and 2000. In 2001, Duke Nukem Forever earned the number one spot on the Wired list “Vaporware 2001: Empty Promises”. The game was finally released in June 2011, after more than 14 years in development, earning it the Guinness World Record for the longest development cycle of a video game.

I just hope that 3D Realms understands that if this game doesn’t turn out to be history’s greatest contribution to human culture and a cure for at least one type of cancer, then I and every other reviewer on Earth are going to saw its b******* off. — Ben “Yahtzee” Croshaw Zero Punctuation, February 2008

144

Priming
Activating specific concepts in memory to influence subsequent thoughts and behaviors.

Whenever stimuli are received by the senses — sights, sounds, smells, touches, tastes — concepts are automatically activated in memory. Once concepts are activated, they stay activated for a period of time, capable of influencing subsequent thoughts, reactions, emotions, and behaviors. This phenomenon was long thought to be robust and a significant influence on human behavior. However, many of the foundational priming experiments have failed independent replication, casting doubt on the phenomenon. While the priming effect seems to be real, its ability to subconsciously influence thoughts and behaviors is significantly weaker than previously believed.1

Priming research exposes people to a stimulus that seeks to activate attitudes, traits, goals, or other concepts to subconsciously influence subsequent behavior and judgment. Examples of prominent priming research results that have failed to replicate include:

• Reading sentences containing words related to old age induced people to walk slower.
• Wearing a heavy backpack led people to judge a hill as steeper.
• Thinking about the attributes of a professor enabled people to do better on a test than people thinking about the attributes of soccer hooligans.

To repeat, these experiments failed replication, which means the results are likely not valid and similar claims about priming are not likely to be credible.2

Priming can subtly influence behavior when the stimulus activates concepts that are consistent with a preexisting need or goal. Showing an image of a person drinking soda before a movie would have a small effect on thirsty people, inducing more of them to buy soda than would otherwise; but it would have no effect on nonthirsty people. However, since everyone in the audience shares the goal of wanting to see a good movie, showing positive imagery and movie previews will prime the audience to react more favorably to the movie than they would otherwise. It is this latter kind of indirect (unnoticed) prime that is the most effective.

Consider priming in design, but as a complementary strategy versus a principal driver. First impressions, contexts, and antecedent events are all opportunities to influence subsequent reactions and behaviors — this includes the way products are presented in packaging, the articles adjacent to advertisements in newspapers, and the experiences leading from the parking lot to the entryway of a retail store. Expect small effects, if any at all.

See also Expectation Effects; Framing; Nudge; Serial Position Effects

1. The seminal works on priming are “Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action” by John Bargh et al., 1996, Journal of Personality and Social Psychology, 71(2), 230–244; and “Losing Consciousness: Automatic Influences on Consumer Judgment, Behavior, and Motivation” by John Bargh, Sep 2002, The Journal of Consumer Research, 29(2), 280–285. Note that some of the experiments in these studies have failed to replicate.

2. See, for example, “Priming Intelligent Behavior: An Elusive Phenomenon” by David Shanks et al., 2013, PLoS One, 8(4). Regarding the replication crisis around priming, see “What’s next for psychology’s embattled field of social priming” by Tom Chivers, 2019, Nature, 576, 200–202.

When people believe they are being watched, they behave differently than when in private. What environmental cues are sufficient to make people think they are being watched? In one experiment, researchers wanted to see if the mere presence of eyes in the environment would prime a sense of being watched, increasing donations to an office coffee fund. For 10 weeks, they alternately taped two posters over the coffee station: The first poster featured a generic image, and the second poster featured staring eyes. The reported results were striking. When the eyes poster was displayed, people contributed almost three times more money than they did when the generic poster was displayed. Too striking, it turns out. This experiment also failed replication. People do change their behavior when being watched, but it likely takes more than a poster of eyes to trigger the change. A security camera at the coffee station would likely generate more of an effect.

145

Process Eats Goal
A situation in which people follow a process that undermines the greater goal.

“Process eats goal” describes a situation in which people dutifully follow an authorized way of doing things at the expense of the greater goal. For example, a retirement community nurse calls 911 after a resident collapses and has trouble breathing. The dispatcher advises the nurse to administer CPR as firefighters rush to the scene, but the nurse refuses because it is against policy. The dispatcher pleads with the nurse to have somebody there assist, but everyone refuses. The resident dies. Process eats goal.1

1. See, for example, “Facility’s no-CPR policy takes heat after woman’s death” by Janice Lloyd, Mar 4, 2013, USA Today.

2. See, for example, “Leaks and the Law: The Story of Thomas Drake” by David Wise, Aug 2011, Smithsonian Magazine.

3. See, for example, “An American Atrocity: The My Lai Massacre Concretized in a Victim’s Face” by Claude Cookman, 2007, The Journal of American History, 94(1), 154–162.

4. See, for example, “Six Dangerous Myths About Pay” by Jeffrey Pfeffer, May–Jun 1998, Harvard Business Review, 5.

The three common causes of process eating goal are:

1. Inflexible, rule-based cultures — Organizational cultures that prioritize rule compliance and punish deviations even when they ultimately benefit the organization. For example, it is common for organizations and governments to offer whistleblower protection to encourage reporting of illegal or unethical practices. But rather than being celebrated as heroes, whistleblowers are often blacklisted, smeared, and legally prosecuted.2

2. Process focus versus goal focus — Situations in which a practice or process is highly trained but with little attention or training spent on the greater goal. Not only does this make following scripts increasingly habitual with practice; it makes people feel less culpable for their actions. For example, the “just following orders” defense, made famous in the post-WWII Nuremberg trials, continues to be invoked when people are trained to follow orders without consideration of the greater ethics or goals involved.3

3. Misaligned incentives — Incentives that improve process-specific outcomes but, in so doing, undermine strategic goals. This often occurs with commission and quota systems. For example, a retail store institutes a new commission structure to improve sales. The commissions motivate the staff to sell but also promote aggressive selling and increase internal competition, which reduces customer satisfaction. The number of customers declines, and overall sales decrease as a result.4

Employ processes, but don’t be enslaved by them. Make goal deliberation and reflection an integral part of training, practicing scenarios that require deviation from standard processes and procedures. Ensure that everyone understands that processes are means to ends and not ends in and of themselves. Celebrate process deviations when they serve the greater goal.

See also Ackoff’s Law; Don’t Eat the Daisies; Knowing-Doing Gap; Perverse Incentive; Status Quo Bias

It has become a standard protocol in many retail stores to require cashiers to greet customers as they enter the store. The goal is to make customers feel welcome and, because customers feel less anonymous, reduce the likelihood of theft. In practice, however, cashiers are often too busy with customers to be able to do this effectively, resulting in insincere, robotic welcomes blurted out without eye contact. This undermines the goal and creates awkward interactions for customers checking out and entering the store. Process eats goal.

146

Product Life Cycle
The common stages of life for products.

All products progress through stages of existence that roughly correspond to birth, life, and death. For example, a new type of electronic device is envisioned and developed; its popularity grows; after a while its sales plateau; and then finally, the sales decline. Understanding the implications of each of the stages allows designers to prepare for the unique and evolving requirements of a product over its lifetime.1

There are four basic stages of life for all products:

1. Introduction stage — The official birth of a product. This stage will at times overlap with the late testing stage of the development cycle. The design focus is to monitor early use of the design to ensure proper performance, working closely with customers to tune or patch the design as necessary.

2. Growth stage — The challenging stage, where most products fail. The design focus is to scale the supply and performance of the product to meet the growing demand and to provide the level of support necessary to maintain customer satisfaction and growth. Efforts to gather requirements for the next-generation product should be underway.

3. Maturity stage — The peak of the product life cycle. Product sales have begun to diminish, and competition is strong. The design focus at this stage is to enhance and refine the product to maximize customer satisfaction and retention. Design and development of the next-generation product should be well underway.

4. Decline stage — The end of the life cycle. Product sales continue to decline, and core market share is at risk. The design focus is to minimize maintenance costs and develop transition strategies to migrate customers to new products. Testing of the next-generation product should begin.

Consider the life cycle of a product when planning and preparing for the future. During the introduction stage, work closely with early adopters to refine and tune products. During the growth stage, focus on scaling product supply and performance. During the maturity stage, focus on customer satisfaction through performance enhancements and support. During decline, focus on facilitating the transition to next-generation products. Note that the development cycle for the next-generation product begins during the growth stage of a current-generation product.

See also Development Cycle; Hierarchy of Needs; Iteration; Levels of Invention

1. The seminal work on the product life cycle is “International Investment and International Trade in the Product Cycle” by Raymond Vernon, 1966, Quarterly Journal of Economics, 80, 190–207. A contemporary review of the product life cycle is found in Marketing Management, 11th ed., by Philip Kotler, 2002, Prentice-Hall.

(The accompanying figure charts product sales over time across the four stages. The stage characteristics shown in the figure are summarized in the table below.)

                 Introduction     Growth         Maturity            Decline
AUDIENCE         Early Adopters   Mainstream     Late Adopters       Laggards
MARKET           Small            Growing        Large               Contracting
SALES            Low              High           Flattening          Moderate
COMPETITION      Low              Moderate       High                Moderate
BUSINESS FOCUS   Awareness        Market Share   Customer Retention  Transition
DESIGN FOCUS     Tuning           Scaling        Support             Transition

The needs of a product change over the course of its life cycle. It is important to understand the dynamics of these changes in order to focus business and design resources accordingly. Failure to do so shortens the life cycle of a product.

147

Progressive Disclosure
A method of managing complexity in which only necessary or requested information is displayed.

Progressive disclosure involves separating information into multiple layers and only presenting layers that are necessary or relevant. It is primarily used to prevent information overload and is employed in computer user interfaces, instructional materials, and the design of physical spaces.1

Progressive disclosure keeps displays clean and uncluttered and helps people manage complexity without becoming confused, frustrated, or disoriented. For example, infrequently used controls in software interfaces are often concealed in dialog boxes that are invoked by clicking a More button. People who do not need to use the controls never see them. For more advanced users, the options are readily available. In either case, the design is simplified by showing only the most frequently required controls by default and making additional controls available on request.2

Learning efficiency benefits greatly from the use of progressive disclosure. Information presented to a person who is not interested or ready to process it is effectively noise. Information that is gradually and progressively disclosed to a learner, as they need or request it, is better processed and perceived as more relevant. The number of errors is significantly reduced using this method, and consequently the amount of time and frustration spent recovering from errors is also reduced.3

Progressive disclosure is also used in the physical world to manage the perception of complexity and activity. For example, progressive disclosure is found in queue design for modern theme park rides. Exceedingly long lines not only frustrate people in line but also discourage new people from joining the queue. Theme park designers progressively disclose discrete segments of the line (sometimes supplemented with entertainment) so that no one, in or out of the line, ever sees the line in its entirety.

Use progressive disclosure to reduce information complexity and improve learning efficiency. Hide infrequently used controls or information, but make them readily available through some simple operation, such as pressing a More button. Progressive disclosure is also an effective method for leading people through complex procedures step by step and should be considered when such procedures are a part of a design.

See also Control; Error, Design; Error, Human; Five Tenets of Queuing; Interference Effects; Performance Load; Signal-to-Noise Ratio
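A minimal interface sketch of the More-button pattern described above, using Python's built-in tkinter (illustrative only; the dialog and control names are invented):

```python
import tkinter as tk

root = tk.Tk()
root.title("Export")

tk.Label(root, text="File name:").pack(anchor="w")
tk.Entry(root).pack(fill="x")

advanced = tk.Frame(root)                      # hidden by default
tk.Label(advanced, text="Compression level:").pack(anchor="w")
tk.Scale(advanced, from_=0, to=9, orient="horizontal").pack(fill="x")

def toggle():
    if advanced.winfo_ismapped():
        advanced.pack_forget()                 # conceal again
        more.config(text="More options...")
    else:
        advanced.pack(fill="x")                # disclosed only on request
        more.config(text="Fewer options")

more = tk.Button(root, text="More options...", command=toggle)
more.pack(anchor="e")

root.mainloop()
```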

1. The seminal applied work on progressive disclosure is the user interface for the Xerox Star computer. See “The Xerox Star: A Retrospective” by Jeff Johnson et al., in Readings in Human Computer Interaction: Toward the Year 2000 by Ronald Baecker et al. (Eds.), 1995, Morgan Kaufman Publishers, Inc., 53–70.

2. A common mistake is to present all available information and options at once with the rationale that it reduces kinematic load. Since progressive disclosure affects only infrequently used elements and elements for which a person may not be ready, it will generally have minimal effect on kinematic load. Conversely, presenting everything at once will significantly increase cognitive load.

3. See, for example, “Training Wheels in a User Interface” by John M. Carroll and Caroline Carrithers, 1984, Communications of the ACM, 27(8), 800–806; and The Nurnberg Funnel by John Carroll, 1990, MIT Press.

Theme park rides often have very long lines — so long that seeing the lines in their entirety would scare away many would-be visitors. Therefore, modern theme park rides progressively disclose the length of the line so that only small segments of the line can be seen from any particular vantage point. Additional distractions are provided in the form of video screens, signage, and partial glimpses of people on the ride.

Low walls allow visitors near the end of the line to see they are getting close to the end; high walls prevent visitors at the beginning of the line from seeing the length of the line. Video screens entertain visitors while they wait, windows allow visitors at the end of the line to see the ride, and status signs indicate wait time.

148

Progressive Subtraction
Reducing the number of perceptible elements in a design over time.

Progressive subtraction is the systematic and deliberate simplification of a design over the course of its product life cycle. The principle runs counter to the natural evolution of products and systems generally, which tend toward progressive addition — i.e., adding more features and elements with each iteration. Note that progressive subtraction refers primarily to the perceptible complexity experienced by users and less to the complexity concealed within a product’s case or codebase. For example, a new sensor could be added to the interior of a smartphone without increasing the perceptible complexity of the phone or its interface.1

All systems tend to accrete complexity as they evolve, and this accretion can often compromise the success of the system as a whole. This is why, for example, trees benefit from pruning — trees here also serve as a metaphor for logos, products, websites, policies and laws, and so on — retaining and refining the essential branches while removing the dead and diseased wood. In fact, the key difference between naturally evolved systems and intentionally designed systems is the progressive subtraction of elements as they evolve.

In designed systems, progressive addition tends to be the norm because it is relatively safe and politically expedient to add elements and features but comparatively risky and politically challenging to subtract them. Existing features invariably have champions — often small in number but vocal in their advocacy — which makes subtraction difficult, unpopular, and sometimes career damaging. This is why scope creep or creeping featurism occurs: It’s easy to add and hard to subtract.2

A hallmark of great design is that it grows simpler and performs better over time from a user perspective — i.e., version 10 has fewer elements, requires less effort to use, and is more capable in essential areas than version 1. As such, use progressive subtraction as a heuristic for guiding iteration and assessing design quality. Progressive subtraction should be considered in the long-term arc of a product’s life cycle, especially in the mid to late phases. New products and services may require additional elements and features to become successful — e.g., in cases where the early versions did not meet the essential requirements — and in such cases, progressive subtraction should not be applied.

See also Feature Creep; Iteration; KISS; MAYA; Ockham’s Razor; Pareto Principle; Prototyping
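The “Muntzing” procedure mentioned in the note below has a natural software analogue: try subtracting each nonessential element and keep the subtraction only if the essential behavior, as defined by tests, survives. The helpers in this sketch are hypothetical:

```python
# Keep removing optional elements while the essential behavior survives.
def muntz(elements: set[str], passes_tests) -> set[str]:
    kept = set(elements)
    for element in sorted(elements):
        candidate = kept - {element}
        if passes_tests(candidate):     # essentials still work without it?
            kept = candidate            # then it was not essential: subtract
    return kept

# Example: only 'core' and 'save' are truly required by the test predicate.
required = {"core", "save"}
features = {"core", "save", "wizard", "toolbar2", "splash"}
print(muntz(features, lambda fs: required <= fs))   # {'core', 'save'}
```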

1. Within various design specializations, there are related principles — for example, the concept of “Muntzing” in electrical engineering, which refers to the process of progressively removing components from an electronic appliance until it stops functioning to determine the minimum viable configuration; and Lauer’s Law in software engineering, which asserts that “less code is better code”.

2. See, for example, “People systematically overlook subtractive changes” by Gabrielle Adams et al., 2021, Nature, 592, 258–261.

In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away, when a body has been stripped down to its nakedness. — Antoine de Saint-Exupery Wind, Sand and Stars, translated by Lewis Galantiere

149

Propositional Density
The number of independent meanings conveyed by a design.

Propositional density is the amount of information conveyed in an object or environment per unit element. High propositional density is a key factor in making designs engaging and memorable — it is what makes double entendres interesting and puns funny (i.e., they express multiple meanings with a single phrase). For present purposes, a proposition is an elementary statement about an object or environment that cannot be easily broken down into constituent propositions.1

There are two types of propositions:
1. Surface propositions — The salient, perceptible elements.
2. Deep propositions — The underlying and often hidden meanings of the salient, perceptible elements (i.e., surface propositions).

Propositional density (PD) can be estimated by dividing the number of deep propositions by the number of surface propositions. Objects and environments with high propositional density (PD > 1) are perceived to be more interesting and engaging than objects and environments with low propositional density (PD < 1). Simple objects and environments (i.e., few surface propositions) that are rich in meaning (i.e., many deep propositions) are perceived to be the most compelling.

Consider, for example, the modern Apple, Inc., logo. The surface propositions expressed by the logo are the body of the apple, the top leaf, and the missing chunk. The deep propositions include, but are not limited to, the following: The apple is a healthy fruit; the apple tree is the biblical tree of knowledge; a bite from the apple represents a means to attain knowledge; Sir Isaac Newton’s epiphany about gravity came from a falling apple; an apple a day keeps the doctor away; an apple is an appropriate gift for a teacher; and so on. With just the propositions listed, the Apple logo would have a PD ≈ 2, a high propositional density that makes the logo interesting to look at and easy to remember.

Consider propositional density in all aspects of design. Favor simple elements that are rich in meaning. Aspire to achieve the highest propositional density possible, but make sure the deep propositions are complementary. Contradictory deep propositions can confuse the message and nullify prospective benefits.

See also Framing; Interference Effects; Inverted Pyramid; Signal-to-Noise Ratio; Stickiness; von Restorff Effect

1

The seminal theoretical work on propositional density is Syntactic Structures by Noam Chomsky, 1957, Mouton & Company. For other examples of practical applications, see “Building Great Sentences: Exploring the Writer’s Craft” by Brooks Landon, 2008, The Teaching Company, Course No. 2368; “A Plain Man’s Guide to the Theory of Signs in Architecture” by Geoffrey Broadbent, in Theorizing a New Agenda for Architecture by Kate Nesbitt, 1996, Princeton Architectural Press, 124 –141.

The logo of Barack Obama’s 2008 U.S. presidential campaign received wide acclaim for its design, but the logo’s high propositional density is the prime reason for its success.

DEEP PROPOSITIONS
• The blue represents sky.
• The sun rising represents change.
• The circle represents stability.
• The circle represents unity.
• The circle represents an O for Obama.
• The center of the circle represents a sun rising.
• The red and white lines represent amber waves of grain.
• The red and white lines represent a landscape.
• The red and white lines represent the American flag.
• The red, white, and blue represent American patriotism.

SURFACE PROPOSITIONS
• The logo contains a blue circle.
• The logo contains red and white lines.
• The red and white lines cut across the lower half of the circle.

Deep Propositions ÷ Surface Propositions = Propositional Density: 10 ÷ 3 ≈ 3.33
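The estimate reduces to a simple ratio. Below is a minimal sketch in code (illustrative only; the function name and the shorthand proposition lists are assumptions, not from the text), applied to the Apple logo example:

```python
# Illustrative sketch: estimating propositional density (PD) as the number
# of deep propositions divided by the number of surface propositions.

def propositional_density(deep, surface):
    """PD > 1 suggests a design rich in meaning per perceptible element."""
    if not surface:
        raise ValueError("at least one surface proposition is required")
    return len(deep) / len(surface)

surface = ["apple body", "top leaf", "missing chunk"]
deep = ["healthy fruit", "biblical tree of knowledge",
        "bite as a means to attain knowledge", "Newton's falling apple",
        "an apple a day keeps the doctor away", "gift for a teacher"]

print(propositional_density(deep, surface))  # 2.0, matching PD of about 2 above
```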

150

Prospect-Refuge
A preference for environments where people can see without being seen.

People prefer environments where they can easily survey their surroundings (via prospects) and quickly hide or retreat to safety if necessary (via refuges). Environments with both prospect and refuge elements are perceived as safe places to explore and dwell and consequently are considered more aesthetic than environments without these elements. The principle is based on the evolutionary history of humans: Environments with ample prospects and refuges increased the probability of survival for early humans.1

The design goal of prospect-refuge can be summarized as the development of spaces where people can see without being seen. The prospect-refuge principle suggests that people prefer:
• The edges, rather than the middles, of spaces
• Spaces with ceilings or covers overhead
• Spaces with few access points (protected at the back or side)
• Spaces that provide unobstructed views from multiple vantage points
• Spaces that provide a sense of safety and concealment

The preference for these elements is heightened if the environment is perceived to be hazardous or potentially hazardous. Environments that achieve a balance between prospects and refuges are the most preferred.
• In natural environments, prospects include hills, mountains, and trees near open settings. Refuges include enclosed spaces such as caves, dense vegetation, and climbable trees with dense canopies nearby.
• In human-created environments, prospects include deep terraces and balconies and generous use of windows and glass doors. Refuges include alcoves with lowered ceilings and external barriers, such as gates and fences.2

Consider prospect-refuge in the creation of landscapes, residences, offices, and communities. Create multiple vantage points within a space so that the internal and external areas can be easily surveyed. Make large, open areas more appealing by using screening elements to create partial refuges with side and back barriers while maintaining clear lines of sight (e.g., shrubbery, partitions). Balance the use of prospect and refuge elements for optimal effect (e.g., sunken floors and ceilings that open to larger spaces enclosed by windows and glass doors).

See also Biophilia Effect; Cathedral Effect; Defensible Space; Savanna Preference; Threat Detection; Wayfinding

1

The seminal work on prospect-refuge theory is The Experience of Landscape by Jay Appleton, 1975, John Wiley & Sons.

2

See, for example, The Wright Space: Pattern and Meaning in Frank Lloyd Wright’s Houses by Grant Hildebrand, 1991, University of Washington Press.


This section of an imaginary café highlights many of the practical applications of the prospect-refuge principle. The entry is separated from the interior by a greeting station, and the ceiling is lowered to create a temporary refuge for waiting patrons. As the interior is accessed, the ceiling rises and the room opens up with multiple, clear lines of sight. A bar area is set against the far wall with a raised floor and lowered ceilings, creating a protected perch from which to view interior and exterior areas. High-backed booths and partial screens provide refuge with minimal impediment to prospect. Windows are tinted or mirrored, allowing patrons to survey the exterior without being seen. Shrubbery serves as a practical and symbolic barrier, preventing outsiders from getting too close.

I once worked in an office full of software engineers who were obsessed with what they called ninja-proof seats. A ninja-proof seat is one with its back to the wall so you can be sure no ninjas can sneak up from behind. — Lily Bernheimer The Shaping of Us

151

Prototyping
Building low-fidelity models to explore ideas and deeply understand problems.

Prototyping is the creation of simple, incomplete models or mockups of a design. Prototyping provides designers with key insights into real-world design requirements and gives them a method to visualize, evaluate, learn, and improve design specifications prior to delivery.1

1

See, for example, Human-Computer Interaction by Jenny Preece et al., 1994, Addison-Wesley, 537– 563; The Art of Innovation by Tom Kelley and Jonathan Littman, 2001, Doubleday; and Serious Play: How the World’s Best Companies Simulate to Innovate by Michael Schrage, 1999, Harvard Business School Press.

2

Evolutionary prototyping is often contrasted with incremental prototyping, which is the decomposition of a design into multiple stages that are then delivered one at a time. They are combined here because they are invariably combined in practice.

There are three basic kinds of prototyping:

1. Concept prototyping — Useful for exploring preliminary design ideas quickly and inexpensively. For example, concept sketches and storyboards are used to develop the appearance and personality of characters in animated films well before the costly processes of animation and rendering take place. This approach helps communicate the concepts to others, reveals design requirements and problems, and allows for evaluation by a target audience. Common problem: the plausible presentation of an implausible design; a good artist or modeler can make almost any design look like it will work.

2. Throwaway prototyping — Useful for collecting information about the functionality and performance of certain aspects of a system. For example, models of new automobile designs are used in wind tunnels to better understand and improve the aerodynamics of their form. The prototypes are discarded once the needed information is obtained. Common problem: the assumption that the functionality will scale or integrate properly in the final design, which it often does not.

3. Evolutionary prototyping — Useful when many design specifications are uncertain or changing. In evolutionary prototyping, the initial prototype is developed, evaluated, and refined continuously until it evolves into the final system. Design requirements and specifications never define a final product but merely the next iteration of the design. For example, software developers invariably use evolutionary prototyping to manage the rapid and volatile changes in design requirements. Common problem: Designers tend to get tunnel vision, focusing on tuning existing specifications rather than exploring design alternatives.2

Incorporate prototyping into the design process. Use concept prototypes to develop and evaluate preliminary ideas, and throwaway prototypes to explore and test design functionalities and performance. Schedule time for prototype evaluation and iteration. When design requirements are unclear or volatile, consider evolutionary prototyping in lieu of traditional approaches. When evaluating prototypes and design alternatives, consider problems of artificial realities, scaling and integration, and tunnel vision.

See also Iteration; KISS; Satisficing; Scaling Fallacy; Testing Pyramid

What’s a designer to do when they tire of getting parking tickets due to incomprehensible parking signs? Nikki Sylianteng decided to make a better sign. She designed a paper prototype at home and taped it to a pole outside her New York City apartment. To invite public comment, she hung a permanent marker by a string next to the sign. She collected feedback, iterated the design, and reposted the next prototype. What started as a personal passion project turned into a guerrilla project to rethink and redesign confusing parking signs everywhere. She has since collaborated with drivers, city officials, and the colorblind community. Her redesigned signs have been piloted in Los Angeles, New Haven, Boston, and Brisbane. Sylianteng’s work teaches that prototypes need not always be elaborate or expensive and that rapid prototyping and user-informed iteration are the keys to great design.

If a picture is worth 1,000 words, a prototype is worth 1,000 meetings. — Saying at IDEO

152

Proximity
The brain automatically assumes elements that are close together are related.

Proximity, one of the Gestalt principles of perception, asserts that elements that are displayed close together are perceived as a single group or chunk and are interpreted as being more related than elements that are farther apart. For example, a simple matrix of dots can be interpreted as consisting of multiple rows, multiple columns, or a uniform matrix, depending on the relative horizontal and vertical proximities of the dots.1

The grouping resulting from proximity reduces the complexity of designs and reinforces the relatedness of the elements. Conversely, a lack of proximity results in the perception of multiple, disparate chunks and reinforces differences among elements. For example, moving text labels even a few pixels closer to the related data beneath them, and a few pixels farther from unrelated elements above them, can make a significant improvement in the organization and usability of a display. Ambiguity regarding the relationship between labels and data can result in increased cognitive overhead and task time, as the viewer has to expend unnecessary effort to determine the relatedness.

The use of proximity to create meaningful groups is employed regularly when writing narrative text. Consider how lines of text on a page (like this one) are typically grouped into paragraphs separated by white space. This grouping of text and use of white space communicates that the information within the paragraph is related and different from the paragraphs above and below. Certain proximal layouts imply specific kinds of relationships and should be considered in layout design. For example, connecting or overlapping elements are commonly interpreted as sharing attributes, whereas proximal but noncontacting elements are interpreted as related but independent.2

Consider the proximity principle when the goal is to indicate relatedness in a design. It is one of the most powerful principles and will generally overwhelm competing visual cues (e.g., similarity). Arrange elements such that their proximity corresponds to their relatedness. Ensure that labels and supporting information are near the elements that they describe, opting for direct labeling on graphs over legends or keys. Locate unrelated or ambiguously related items relatively far from one another.

See also Alignment; Closure; Common Fate; Figure-Ground; Good Continuation; Similarity; Uniform Connectedness

1

The seminal work on proximity is “Untersuchungen zur Lehre von der Gestalt, II” [Laws of Organization in Perceptual Forms] by Max Wertheimer, 1923, Psychologische Forschung, 4, 301–350, reprinted in A Source Book of Gestalt Psychology by Willis Ellis (Ed.), 1999, Routledge & Kegan Paul, 71–88. See also Principles of Gestalt Psychology by Kurt Koffka, 1935, Harcourt Brace.

2

Euler circles and Venn diagrams (methods of illustrating the relationships between sets of things in logic and mathematics) utilize this principle.

The sets of shapes (left) demonstrate the efficacy of proximity as a grouping strategy, which overrides color. This sign at Big Bend National Park (top) misleads hikers by using proximity incorrectly. The redesign (bottom) fixes the problem.
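The dot-matrix demonstration can be generated in a few lines. The sketch below is illustrative only (it assumes matplotlib is available; the variable names are my own) and renders the same grid of dots twice, varying only the relative spacing:

```python
# Illustrative sketch: the same 6 x 6 dot matrix reads as rows or as columns
# depending only on the relative horizontal and vertical spacing of the dots.

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(7, 3.5))
layouts = [
    ((1.0, 2.0), "Tighter horizontal spacing: perceived as rows"),
    ((2.0, 1.0), "Tighter vertical spacing: perceived as columns"),
]
for ax, ((dx, dy), title) in zip(axes, layouts):
    for i in range(6):
        for j in range(6):
            ax.plot(i * dx, j * dy, "ko")  # one black dot per grid position
    ax.set_title(title, fontsize=9)
    ax.set_aspect("equal")
    ax.axis("off")
plt.show()
```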

153

Readability
The ease with which text can be understood, based on the complexity of words and sentences.

Readability is determined by factors such as word length, word commonality, sentence length, number of clauses in a sentence, and number of syllables in a sentence. It is an attribute that is seldom considered — either because designers are not sensitive to or aware of its importance or because of the common belief that complex information requires complex presentation. In fact, complex information requires the simplest presentation possible so that the focus is on the information rather than the way it is presented.

1

Fry’s Readability Graph (right) is one of many readability formulas. Other popular measures include the Flesch formula, the Dale-Chall formula, the Farr-Jenkins-Paterson formula, the Kincaid formula, the Gunning Fog Index, and the Linsear Write Index.

2

“Use [readability formulas] as a guide after you have written, but not as a pattern before you write. Good writing must be alive; don’t kill it with systems”. The Technique of Clear Writing by Robert Gunning, 1968, McGraw-Hill.

3

For additional writing guidelines, see The Elements of Style 4th edition by William Strunk Jr. and E.B. White, 2000, Allyn & Bacon.

For enhanced readability:
• Omit needless words and punctuation, but be careful not to sacrifice meaning or clarity in the process.
• Avoid acronyms, jargon, and untranslated phrases.
• Keep sentence length appropriate for the intended audience.
• Generally, use active voice, but consider passive voice when the emphasis is on the message and not the messenger.

A variety of published readability formulas and software applications are available to assist designers in producing prose with specific readability levels. The readability rating is usually represented in the form of school levels ranging from first to twelfth grade and college. While different tools may use slightly different approaches for calculating readability, they all generally use the same combination of core readability factors mentioned above.1

Use readability formulas to verify that the textual components of a design are at the approximate reading level of the intended audience. However, do not write for the formulas. Readability formulas are primitive guides and should not outweigh all other considerations. For example, more sentences per paragraph may increase readability for lower-level readers but frustrate advanced readers, who are distracted by the lack of continuity. Simple language is preferred, but overly simple language obscures meaning.2

Consider readability when creating designs that involve prose. Express complex material in the simplest way possible, using plain language. Follow guidelines for enhancing readability, and verify that the readability level approximates the level of the intended audience.3

See also Inverted Pyramid; KISS; Legibility; Stickiness; Storytelling

[Fry’s Readability Graph: a plot of the average number of syllables per 100 words against the average number of sentences per 100 words (long sentences to short sentences). The intersection of the two averages indicates an approximate grade level, from 1st grade through college; regions marked invalid indicate that no reading level can be estimated.]

Edward Fry’s Readability Graph
1. Randomly select three sample passages from a text.
2. Count 100 words starting at the beginning of these passages (count proper nouns but not numbers).
3. Count the number of sentences in each 100-word passage, estimating the length of the last sentence to the nearest one-tenth.
4. Count the total number of syllables in each 100-word passage.
5. Calculate the average number of sentences and average number of syllables for the 100-word passages. If a great deal of variability is found, sample additional passages.
6. The area of intersection on the graph between the average number of sentences and the average number of syllables indicates the estimated grade level. Invalid regions indicate that a reading level could not be estimated.

Sample text written at a college reading level. In the first 100 words of this passage, there are 187 syllables and almost six sentences. Chicken pox, or varicella, is an infectious disease usually occurring in young children. Chicken pox is believed to be caused by the same herpes virus that produces shingles. Chicken pox is highly communicable and is characterized by an easily recognizable rash consisting of blisterlike lesions that appear two to three weeks after infection. Usually there are also low fever and headache. When the lesions have crusted over, the disease is believed to be no longer communicable; however, most patients simultaneously exhibit lesions at different stages of eruption. Chicken pox is usually a mild disease requiring little treatment other than medication to relieve the troublesome itching, but care must be taken so that the rash does not become infected by bacteria.

Sample text written at a fourth-grade reading level. In the first 100 words of this passage, there are 137 syllables and almost 12 sentences. Not too long ago, almost everyone got chicken pox. Chicken pox is caused by a virus. This virus spreads easily. The virus spreads when an infected person coughs or sneezes. People with chicken pox get a rash on their skin. The rash is made up of clear blisters. These blisters are very itchy. It is hard not to scratch them. The blisters form scabs when they dry. Sometimes these scabs cause scars. Many people with chicken pox must stay in bed until they feel better. Until recently, almost all children in the U.S. got chicken pox between the ages of 1 and 10. In 1995, the Food and Drug Administration approved a vaccine that keeps the virus from spreading. Today, most people will never get chicken pox because of this vaccine.
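As a rough check on passages like the two samples above, the Fry inputs can be computed programmatically. The sketch below is illustrative only; the vowel-group syllable counter is a crude assumption of this sketch, whereas real readability tools use pronunciation dictionaries:

```python
# Illustrative sketch: computing the two inputs to Fry's Readability Graph
# (sentence count and syllable count for a ~100-word sample passage).

import re

def count_syllables(word):
    """Very rough syllable count: runs of vowels, minimum of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fry_inputs(passage):
    """Return (sentences, syllables) for a ~100-word sample passage."""
    words = re.findall(r"[A-Za-z']+", passage)
    sentences = len(re.findall(r"[.!?]+", passage))
    syllables = sum(count_syllables(w) for w in words)
    return sentences, syllables

sample = ("Not too long ago, almost everyone got chicken pox. "
          "Chicken pox is caused by a virus. This virus spreads easily.")
print(fry_inputs(sample))  # three sentences; syllables are approximate
```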

154

Reciprocity
The tendency for people to give back to those who have given to them.

Reciprocity is a tendency to respond to kindness with kindness and cruelty with cruelty. It is an instinctive behavior deeply rooted in our evolutionary past; it lies at the heart of social behaviors such as cooperation, sharing, and trade. As such, reciprocity occurs across cultures, though the rules and traditions governing ceremony, equality of the exchange, and repayment vary. In most design contexts, it involves offering a small gift, gesture, or concession to elicit a desired, targeted response such as donating to a cause or purchasing a product. For example, a nonprofit organization sending mailouts that contain useful, personalized gifts — such as blank greeting cards — will receive significantly more donations in greater amounts than one sending just a solicitation letter.1

Acts that promote reciprocity most effectively are meaningful, personalized, and unexpected. It does not matter if an initiating act is wanted or not or if it is forced onto the receiver; people will still feel compelled to reciprocate. For example, a server delivering a mint with the check will typically receive a small 1 to 3% increase in gratuity. But if the server makes a special moment of presenting the check — e.g., telling the table how much they enjoyed serving them and then conspicuously leaving them a pile of mints — gratuities will increase more than 20%, whether the mints are consumed or not.2

Reciprocity occurs whether done publicly or privately. This means that even when a gift giver is unable to tell if a gift is reciprocated, people will still feel inclined to give back. This suggests that the tendency is truly reflexive and not done in response to social signaling or relationship building. The only known exceptions to this reflexive response are circumstances in which people feel they are being manipulated or are explicitly asked to do something antisocial or illegal, in which cases people tend to ignore the initiating act.3

Consider reciprocity to promote goodwill, garner attention and consideration, and move people to action. Ensure that initiating acts are meaningful, personalized, and unexpected for maximum effect. Apply the principle sincerely and sparingly; otherwise, it will be perceived as manipulative and ignored. Research the differing gift-giving and gift-receiving traditions of target audiences, especially when designing for international audiences.

See also Cognitive Dissonance; Framing; Gamification; Habituation; IKEA Effect; Nudge; Social Proof

1

Reciprocity as a human tendency has been recognized for thousands of years. The seminal popular work is Influence: The Psychology of Persuasion by Robert Cialdini, 1984, William Morrow and Company.

2

See, for example, “Sweetening the Till: The Use of Candy to Increase Restaurant Tipping” by David Strohmetz et al., 2002, Journal of Applied Social Psychology, 32(2), 300–309.

3

See, for example, “The Effect of a Favor on Public and Private Compliance: How Internalized is the Norm of Reciprocity?” by Mark Whatley et al., 1999, Basic and Applied Social Psychology, 21(3), 251– 259. Note that when circumstances around antisocial or illegal acts are ambiguous, reciprocity does work. This is the “I’ll do you a favor now, and one day you’ll do me a favor” technique so often employed in crime and politics.

Giving out free samples is as much about reciprocity as it is about introducing new products and getting customer feedback. When people are given something, they want to give back. In this case, giving people free samples of a new drink increases sales not just of the new drink but of all products.

Reciprocity is a deep instinct; it is the basic currency of social life. — Jonathan Haidt The Happiness Hypothesis

155

Recognition over Recall
Memory for recognizing things is better than memory for recalling things.

People are better at recognizing things they have previously experienced than recalling those things from memory. This is because recognition tasks provide memory cues that facilitate searching through memory. For example, it is easier to correctly answer a multiple-choice question than a short-answer question because multiple-choice questions provide a list of possible answers; the range of search possibilities is narrowed to the list of options, and the test taker only needs to recognize the correct answer.1

Recall memory is much harder to develop than recognition memory. Recall memory is attained through learning, usually involving some combination of memorization, practice, and application. Recognition memory is attained through exposure and does not necessarily involve any memory of origin, context, or relevance. It is simply memory that something has been experienced before. Recognition memory is retained for longer periods of time than recall memory. For example, the name of an acquaintance is often quickly forgotten but easily recognized when heard.

The advantages of recognition over recall are often exploited in the design of interfaces for complex systems. For example, early computer systems used a command line interface, which required recall memory for hundreds of commands. The effort associated with learning the commands made computers difficult to use. The contemporary graphical user interface, which presents commands in menus, allows users to browse the possible options and select from them accordingly. This eliminates the need to hold the commands in recall memory and greatly simplifies the usability of computers.

Decision-making is also strongly influenced by recognition. A familiar option is often selected over an unfamiliar option, even when the unfamiliar option may be the best choice. For example, in a consumer study, people participating in a taste test rated a known brand of peanut butter as superior to two unknown brands, even though one of the unknown brands was objectively better (as determined by earlier blind taste tests). Recognition of an option is often a sufficient condition for making a choice.2

Consider recognition over recall in design. Minimize the need to rely on recall memory whenever possible. Use readily accessible menus, decision aids, and similar devices to make available options clearly visible. Favor such recognition-based memory aids in performance support contexts, training programs, and advertising campaigns.

See also Dunbar’s Number; Exposure Effect; Miller’s Law; Performance Load; Stickiness; Visibility

1

The seminal applied work on recognition over recall is the user interface for the Xerox Star computer. See “The Xerox Star: A Retrospective” by Jeff Johnson et al., in Readings in Human Computer Interaction: Toward the Year 2000 by Ronald Baecker et al. (Eds.), 1995, Morgan Kaufmann Publishers, Inc., 53–70.

2

Note that none of the participants had previously bought or used the known brand. See “Effects of Brand Awareness on Choice for a Common, Repeat-Purchase Product” by Wayne D. Hoyer and Steven P. Brown, 1990, Journal of Consumer Research, 17, 141–148.

[Smartphone contacts app: a searchable, alphabetized list of names and businesses with an A–Z index for jumping directly to a letter.]

Remembering the number of a person you want to call and dialing it digit by digit is challenging. Recognizing the number of a person you want to call in a list and dialing it digit by digit is much easier. Recognizing the name of the person you want to call in a list and having the number dialed automatically is the easiest of all.

156

Redundancy
Using backup or fail-safe elements to maintain system performance in the event of failure.

Redundancy is the most reliable method of preventing catastrophic failure. The principle applies widely across a range of contexts, including archiving, communication, design, engineering, software, staffing, etc. For systems requiring high reliability, there should be no single points of failure — i.e., all critical elements should have backups of some type.1

There are four basic types of redundancy:

1. Mixed redundancy — When causes of failure cannot be anticipated and the consequences of failure are catastrophic, different kinds of redundancy should be considered — e.g., having both a hydraulic and a mechanical brake. Mixed redundancy of this type is the most complex and expensive to employ but also the most effective.

2. Homogeneous redundancy — When causes of failure can be anticipated, more of the same kind of redundancy can be employed — e.g., using many independent strands of fiber to weave a rope. Homogeneous redundancy of this type is simple and inexpensive but susceptible to cascade failures — i.e., the thing that can cut one strand of a rope can cut all strands.

3. Active redundancy — When performance interruptions are not tolerable, redundant elements should be active at all times — e.g., using additional columns to support a roof. Active redundancy of this type is more complex and expensive but allows for element failure, repair, and substitution with minimal disruption of system performance.

4. Standby redundancy — When performance interruptions are tolerable, redundant elements can be passive but available — e.g., having a spare in the event of a flat tire. Standby redundancy of this type is simple and inexpensive but results in operational disruption. Standby systems can be designed to automatically switch to redundant elements in a failure condition, but such switchovers often themselves fail because the standby elements have not been maintained or sufficiently tested.

Consider redundancy to increase the reliability of systems. Weigh the costs of increased redundancy — i.e., more materials, more weight, more complexity, etc. — against the benefits. Favor mixed redundancy when designs are novel, when there are many unknowns, or when failure results in catastrophic outcomes; otherwise, favor homogeneous redundancy. When service interruptions are not tolerable, favor active redundancy; otherwise, favor standby redundancy.

See also Factor of Safety; Modularity; No Single Point of Failure; Saint-Venant’s Principle; Weakest Link

1

See, for example, Why Buildings Fall Down by Matthys Levy and Mario Salvadori, 1992, W.W. Norton & Company; and “Achieving Reliability: The Evolution of Redundancy in American Manned Spacecraft” by James Tomayko, 1985, Journal of the British Interplanetary Society, 38, 545 – 552.

[Diagram: hanger assembly of the Silver Bridge showing the pin, paired eyebars, and the cleavage fracture; map locating the bridge across the Ohio River between Ohio and West Virginia.]

On December 15, 1967, the Silver Bridge collapsed into the Ohio River, killing 46 people. The bridge was an eyebar-chain suspension bridge, meaning it was suspended by long lengths of steel with holes at the ends called eyebars. The eyebars were assembled like a bicycle chain, with pins connecting the eyebars together in sets of two. When corrosion and wear caused one of the eyebars to crack and fail, the asymmetric load on the opposing eyebar caused it to twist off its pin, which then caused a cascade failure of the bridge. Other bridges built at the same time used this same basic design and are still standing today. The difference? Instead of two eyebars per length like the Silver Bridge, they have twice that number — i.e., they have greater redundancy.
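The payoff of adding redundant elements can be made concrete with a standard reliability calculation. The sketch below is illustrative only; it assumes independent failures, an assumption the Silver Bridge cascade shows real structures can violate, and the per-eyebar reliability figure is hypothetical:

```python
# Illustrative sketch: reliability of series vs. redundant (parallel)
# arrangements of independent components. A series system fails if ANY
# component fails; an active-redundant system fails only if ALL do.

from math import prod

def series_reliability(rs):
    """System works only if every component works."""
    return prod(rs)

def parallel_reliability(rs):
    """System works if at least one redundant component works."""
    return 1 - prod(1 - r for r in rs)

r = 0.999  # hypothetical reliability of a single eyebar over some period
print(parallel_reliability([r] * 2))  # two eyebars per link:  ~0.999999
print(parallel_reliability([r] * 4))  # four eyebars per link: far closer to 1
# Caveat: shared corrosion or load redistribution (as in the Silver Bridge)
# breaks the independence assumption and erodes these gains.
```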

157

Reverse Salient
An element that limits the overall performance of the system of which it is part.

A reverse salient is a metaphor borrowed from military parlance, referring to a section of an advancing military force that lags behind and impedes the rest of the force from achieving its objective. In design contexts, a reverse salient is any element — e.g., component, mechanism, person, team — that limits the performance of an overall system. For example, the limited ability of direct-current power to be transmitted over long distances acted as a reverse salient on large-scale, direct-current power distribution.1

Reverse salients form because technological advances occur unevenly. Complex technological systems are composed of many parts, and invariably some parts will advance at a faster rate than others. In such cases, there is both opportunity and risk: opportunity to recognize and remedy reverse salients and risk that someone else will beat you to it and render status-quo systems obsolete. Telltale signs of reverse salients include bottlenecks, desire lines, field hacks, friction points, latency, queues, and elements of high cost or low efficiency.

Identifying and overcoming reverse salients typically occurs using either a “closed” or an “open” strategy.2
• With a closed strategy, a person or company seeks to overcome reverse salients privately and then leverages the innovations as proprietary technology. For example, Thomas Edison attempted this with the creation of the electric utility, inventing all of the components needed for power distribution and usage.
• With an open strategy, the public is engaged to participate in overcoming reverse salients, and the innovations are shared to the benefit of all. For example, DARPA or XPrize competitions openly engage teams to solve problems and share know-how for the promise of prize money and glory.

Seek out reverse salients as a means to achieve dramatic performance gains, noting telltale indicators such as points of high cost, inefficiency, and user shortcuts and workarounds. Consider closed strategies when the number of reverse salients is low, the internal capability is high, and the goal is to advance private interests. Consider open strategies when the number of reverse salients is high, the internal capability is low, and the goal is to advance public interests. Once identified, reverse salients are likely to be overcome; therefore, consider the implications and plan for the inevitability.

See also Constraint; Desire Line; Habituation; Leverage Point; Status Quo Bias

1

The seminal work is Networks of Power: Electrification in Western Society, 1880 –1930 by Thomas Parke Hughes, 1983, Johns Hopkins University Press. See also “Introductory Essay” in The Social Shaping of Technology, 2nd ed., by Donald MacKenzie and Judy Wajcman (Eds.), 1999, McGraw-Hill Education.

2

For discussion on open versus closed strategies, see “The Weakest Link” by Nicholas Carr, Nov 2006, Strategy + Business, (45).

Near the end of the 1920s, aircraft manufacturers were focused on making planes fly faster. They knew how to make more powerful engines, stronger airframes, and more robust controls; but what they didn’t know how to do was overcome the wind resistance created by the landing gear. Landing gear at that time hung in a fixed position beneath the fuselage or wings, creating wind resistance that limited how fast planes could fly. It was the reverse salient to high speeds. There were two possible solutions to this problem: Improve the aerodynamics of the fixed landing gear, or retract the landing gear into the body of the plane after takeoff. Prior attempts at retractable landing gear had been heavy and unreliable. This led aeronautical engineer John Northrop to pursue option one, developing aerodynamic covers called pants to reduce wind resistance. These teardrop spats worked remarkably well, but they added a lot of weight. While Northrop continued to refine his wheel cowlings, others worked on option two. It turns out that retractable landing gear had its own reverse salient: the O-ring. Once O-rings were invented, simple hydraulic systems became practical for retracting wheels, which reduced weight and eliminated landing gear wind resistance altogether. Wheel pants became a thing of the past.


158

Root Cause
The key initiating cause in a sequence of events that leads to an event of interest.

The human tendency is to presume that problems have single causes, but most problems, especially difficult problems, have multiple causes. A proximal cause is a cause that immediately precedes an effect. Proximal causes are strung together into a cause-event sequence; the root cause is the key event in this sequence that leads to a problem. When solving a problem, it is important to understand the problem deeply and ask the right questions to make sure you are actually fixing the problem by addressing the root cause.1

Root cause analysis is a way of understanding problems in terms of their causes, with the goal of finding the first event that caused everything else — the root cause. Once you find the root cause, you can figure out how to treat it and fix the problem. Events before and after the root cause are proximal causes, symptoms, or aftereffects. Treating these upstream and downstream issues may be necessary but will not fix the problem. One way to effectively analyze root cause is a technique called the five whys — i.e., asking why an event occurred (five times, plus or minus).2

For example, imagine there was a workplace accident in which a welder was burned. Using the five whys to understand how this accident occurred might look like this:
1. Why did the welder get burned? The welder wasn’t wearing full protective clothing.
2. Why wasn’t the welder wearing full protective clothing? It is hot in the work room.
3. Why is it so hot in the work room? The air conditioner is broken.

The root cause of the accident is a broken air conditioner. You may need to address other problems or proximal causes in the causal chain — perhaps the welder needs better safety training — but you should first focus on fixing the root cause: the broken air conditioner.

Focus on root causes when troubleshooting problems. Use the five whys technique to identify root causes and other elements in the causal chain. Since asking why can lead to infinite regress, focus on actionable causes that create the majority of the effects.

See also Confirmation Bias; Error, Design; Error, Human; Leverage Point; Pareto Principle; Visibility

1

See, for example, Root Cause Analysis by Duke Okes, 2009, ASQ Quality Press.

2

The five whys technique was developed in the 1930s by Sakichi Toyoda, the acclaimed inventor and founder of Toyota Industries Corporation, which would later become Toyota Motor Corporation.

WHAT HAPPENED? In the early morning hours of April 15, 1912, 1,504 people died when the “unsinkable” RMS Titanic sank in the North Atlantic Ocean, four days into her maiden voyage from Southampton to New York City.

WHY DID THE TITANIC SINK? The ship’s hull filled with water.
• Why? The bulkheads were not sealed.
• Why? There was an opening in the hull.
  • Why? The steel plates on the ship’s hull buckled.
    • Why? The hull was weak.
      • Why? The ship builders used low-quality, weak rivets.
    • Why? The ship hit an iceberg.
      • Why? The ship did not turn fast enough to avoid the iceberg.
        • Why? The ship was traveling too fast.
          • Why? The crew wanted to make a quick trip to the United States.
            • Why? They wanted to beat another ship, the Olympic, to New York.
        • Why? The crew did not see the iceberg in time to make the turn.
          • Why? The crew didn’t have any binoculars.
            • Why? The binoculars were locked in a locker and not accessible.

A partial root cause analysis exploring the events that led to the sinking of the Titanic. It is oversimple to blame the iceberg.
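The five whys lends itself to a simple data walk. The sketch below is illustrative only (the list structure and function name are assumptions, not from the text) and mirrors the welder example from this entry:

```python
# Illustrative sketch: a five-whys chain as a list walked back to the
# candidate root cause. Entry 0 is the problem; each later entry answers
# "why?" for the entry before it.

causal_chain = [
    "The welder was burned",
    "The welder wasn't wearing full protective clothing",
    "It is hot in the work room",
    "The air conditioner is broken",  # actionable root cause
]

def five_whys(chain):
    """Print each why/because step and return the deepest cause found."""
    for effect, cause in zip(chain, chain[1:]):
        print(f"Why: {effect}? Because: {cause}.")
    return chain[-1]

print("Root cause:", five_whys(causal_chain))
```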

159

Rosetta Stone
A strategy for communicating novel information using elements of common understanding.

The Rosetta Stone is an Egyptian artifact inscribed with one message in three scripts (Demotic, hieroglyphic, and Greek). Modern knowledge of classical Greek enabled scholars to decipher the Egyptian texts, written in hieroglyphics and Demotic. Before the discovery of the Rosetta Stone, the Egyptian texts were undecipherable because knowledge of these languages had been lost.

The Rosetta Stone illustrates the power of embedding elements of common understanding in messages to ensure that their meaning can be unlocked by a receiver who may not understand the language of transmission. The principle has broad applications, ranging from the design of effective instruction (e.g., using familiarity with one concept to teach another) to the development of games and puzzles (e.g., crossword puzzles) to devising communications for extraterrestrial intelligences (e.g., plaques designed for the Pioneer 10 and Pioneer 11 space probes).1

Applying the Rosetta Stone principle involves two basic but nontrivial steps:

1. Identify and embed a key — A key is an element of common understanding that the receiver will understand. For example, researchers in extraterrestrial communication speculate that mathematical concepts (e.g., prime numbers or pi) are strong candidates for keys in any attempted extraterrestrial communication because of their universality. Any civilization advanced enough to send or receive radio signals or recover a space probe will have an understanding of fundamental mathematical concepts. It is critical to make the key identifiable as a key.

2. Construct the message to be revealed in stages — Each stage should act as a supporting key for subsequent stages. For example, in crossword puzzles, there are words that are relatively straightforward based on the clues provided, and then there are words that are difficult to solve until at least some of the intersecting words are completed.

Consider the Rosetta Stone principle to lay the foundation for learning and communication. Incorporate an element of common understanding to be used as a key for the receiver. Make it clear that the key is a key. Generally, favor keys that reference concrete objects that can be detected by the senses versus abstract concepts. When no verifiable element of common understanding can be identified, consider embedding numerous keys in the message that reference archetypal and universal concepts.

See also Archetypes, Psychological; Comparison; Iconic Representation; Propositional Density; Uniform Connectedness

1

See, for example, The Rosetta Stone and the Rebirth of Ancient Egypt by John Ray, 2007, Harvard University Press; and “A Message from Earth” by Carl Sagan et al., Feb 25, 1972, Science, 175(4024), 881– 884.

Carl Sagan, Frank Drake, and Linda Salzman designed this plaque for the Pioneer 10 and Pioneer 11 space probes. The plaque utilizes a number of keys to help extraterrestrials understand the “who, when, and where” of the probes. The most effective key, the image of the craft itself, gives the receiver an easily decipherable comparative to determine the appearance and scale of the senders as well as the solar system from which it came. Less effective are the abstract keys representing the hyperfine transition of hydrogen (top left) and the relative position of our solar system to fourteen pulsars (middle left).

What intelligent species, if any, will be around 10,000 years from now? How will they decipher the many artifacts we are leaving behind? The Rosetta Disk is a durable titanium-nickel human language archive designed by the Long Now Foundation to survive for 10,000 years. It contains more than 1,500 languages and 13,000 documents micro-etched onto its 3-inch (7.6 cm) surface. When knowledge about the audience is in doubt, use lots of keys.

160

Rule of Thirds
A technique of composition in which a medium is divided into thirds.

The rule of thirds is a technique derived from the use of early grid systems in composition. It is applied by dividing a medium into thirds, both vertically and horizontally, creating an invisible grid of nine rectangles and four intersections. The primary element within a design is then positioned on an intersection of the grid.1

The rule of thirds has a loyal following in art circles, likely due to its use by the Renaissance masters and its rough relationship to the golden ratio. Although dividing a design into thirds yields a ratio different from the golden ratio (i.e., the 2/3 section = 0.666 versus the golden ratio = 0.618), users of the technique may have decided that the simplicity of its application compensated for its rough approximation.2

Despite the popularity of the convention, there is little empirical evidence that following the rule of thirds makes images more aesthetic. One study comparing images that followed and did not follow the rule of thirds found only a weak correlation with aesthetic ratings. There is evidence, however, that the rule is effective at focusing attention, increasing the probability that intended feelings and messages are gleaned from an image. Thus, the rule of thirds plays, at best, a minor role in creating aesthetic compositions but a valuable role in focusing attention.3

Four guidelines suggested by the rule of thirds are worthy of consideration:
1. When there is a strong primary element that benefits from background context, center the element rather than using the rule of thirds.
2. When there are two strong elements, position them at opposing intersections to achieve balance.
3. When there is a strong vertical or horizontal element, align the element along one of the grid lines of corresponding orientation.
4. When there are many elements of equal visual weight, do not use the rule of thirds.

Consider the rule of thirds when developing a compositional strategy. Use it as a rule of thumb, not as a hard rule. Do not rely on the rule to achieve aesthetics, but do use it to focus attention on desired elements. Use the rule of thirds as a scaffold to develop compositional instincts, but once developed, do not be afraid to cast it away and make your own rules.

See also Fibonacci Sequence; Golden Ratio; Symmetry; Wabi-Sabi

1

The thirds grid system likely dates back to the Renaissance or before, but formal reference by name was made in Remarks on Rural Scenery by John Smith, 1797, Nathaniel Smith.

2

The rule of thirds has its critics in art circles as well. For example, in his newsletter, the photographer Michael Freeman refers to the rule of thirds as “probably the worst piece of compositional advice I can imagine”.

3

See, for example, “Evaluating the Rule of Thirds in Photographs and Paintings” by Seyed Ali Amirshahi et al., 2014, Art & Perception, 2(1– 2), 163 –182; and “Guided by the Grid: Raising Attention with the Rule of Thirds” by Michael Koliska and Klive (Soo-Kwang) Oh, Apr 2021, Journalism Practice.

This photograph (above) from the Muhammad Ali–Joe Frazier fight in Manila, Philippines (1975), makes excellent use of the rule of thirds, placing the heads of both fighters at opposing intersections on the grid. This photograph (right) from the Muhammad Ali–Sonny Liston fight in Lewiston, Maine (1965), by contrast, is an excellent example of when not to use the rule of thirds — a strong primary element that is reinforced by the surrounding space.
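Computing the grid itself is simple arithmetic. A minimal sketch (illustrative only; the function and variable names are assumptions) that returns the grid lines and the four intersections for a given frame size:

```python
# Illustrative sketch: the rule-of-thirds grid lines and the four
# intersections for a frame of the given dimensions.

def rule_of_thirds(width, height):
    """Return the vertical lines, horizontal lines, and intersections."""
    verticals = (width / 3, 2 * width / 3)
    horizontals = (height / 3, 2 * height / 3)
    intersections = [(x, y) for x in verticals for y in horizontals]
    return verticals, horizontals, intersections

v, h, points = rule_of_thirds(1920, 1080)
print(points)
# [(640.0, 360.0), (640.0, 720.0), (1280.0, 360.0), (1280.0, 720.0)]
```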

161

Saint-Venant’s Principle
Local effects of loads on structures have negligible global effects.

Saint-Venant’s principle, proposed by the mathematician and engineer Adhémar Barré de Saint-Venant, states that the detailed effects of applying loads to structures do not travel very far from their point of application — i.e., the effects are localized and quickly become spread out as they propagate through the structure. For example, attach two steel beams perpendicular to a wall such that they extend out in a cantilevered fashion. One beam has a single hole at the end, and the other beam has three holes at the end. Attach a chain through the holes in both beams and pull with a truck. The effects of the pull force on each beam where the holes are will be locally different, but the effects of the pull force where the beams attach to the wall will be basically the same. The principle is used to estimate and model loads in complex structures and as a rule of thumb in laying out mechanical and structural systems.1

Saint-Venant’s principle is based on the fact that forces travel through elastic bodies in predictable ways, quickly becoming evenly distributed. An elastic body is defined as a body that can regain its original configuration immediately after the removal of a deforming force. At what distance do the local effects cease to be significant in elastic bodies? Saint-Venant’s principle states that the effect of a force substantially dissipates beyond one characteristic dimension from the source and becomes virtually undetectable three to five characteristic dimensions from the source. So, in a square beam with 6-inch (15.2 cm) sides, local effects would become negligible beyond about 6 inches (15.2 cm) along its length and basically undetectable beyond 18 inches (45.7 cm).

Note that the principle assumes that the elements being connected are substantial, solid objects. It does not apply in the same way to thinner structures such as cylinders, shells, or trusses. In thin-walled structures, localized stresses and deformations can travel a considerable distance away from the point of loading.2

Consider Saint-Venant’s principle in estimating and modeling the effects of loads and in laying out mechanical and structural systems. To safely minimize local load effects, space things three to five characteristic dimensions apart. The 3x-to-5x spacing is a rule of thumb that can be applied broadly in engineering contexts.

See also Abbe Principle; Factor of Safety; Redundancy; Structural Forms

1

The seminal work is Mém. savants étrangers by B. de Saint-Venant, 1855. His original statement of the principle is: “If the forces acting on a small portion of the surface of an elastic body are replaced by another statically equivalent system of forces acting on the same portion of the surface, this redistribution of loading produces substantial changes in the stresses locally but has a negligible effect on the stresses at distances which are large in comparison with the linear dimensions of the surface on which the forces are changed”.

2

See, for example, “The Applicability of Saint-Venant’s Principle to Airplane Structures” by N.J. Hoff, 1945, Journal of the Aeronautical Sciences, 12(4), 455–460.

Bolts spaced within three to five bolt diameters apart (top) have overlapping stress cones, forming a weld-like connection. Bolts spaced farther apart (bottom) have negligible impact on one another. Both effects are due to Saint-Venant’s principle.
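The bolt-spacing guidance above reduces to one multiplication. A minimal sketch of the 3x-to-5x rule of thumb (illustrative only; the function name is an assumption):

```python
# Illustrative sketch: the 3x-to-5x rule of thumb from Saint-Venant's
# principle. Local load effects substantially dissipate beyond one
# characteristic dimension and are virtually undetectable beyond 3 to 5.

def decoupling_range(characteristic_dim):
    """Spacing range (same units) beyond which local effects decouple."""
    return 3 * characteristic_dim, 5 * characteristic_dim

# Square beam with 6 in sides: effects fade between roughly 18 and 30 in.
print(decoupling_range(6.0))  # (18.0, 30.0)

# 0.5 in bolts: spacing under ~1.5-2.5 in gives overlapping stress cones.
print(decoupling_range(0.5))  # (1.5, 2.5)
```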

162

Satisficing
A problem-solving strategy that seeks a satisfactory versus optimal solution.

The best design decision is not always the optimal design decision. In certain circumstances, the success of a design is better served by design decisions that roughly satisfy (i.e., satisfice), rather than optimally satisfy, design requirements. Satisficing is the basis for iterative prototyping, design thinking, the concept of minimum-viable products, and most real-world problem solving. The idea is that it is better to get something working quickly that you can then learn from and build on than to spend a much longer time trying to design and build perfection.1

1

The seminal works on satisficing are Models of Man by Herbert Simon, 1957, John Wiley & Sons; and The Sciences of the Artificial, by Herbert Simon, 1969, MIT Press. This principle is also known as best is the enemy of the good.

2

In many time-limited contexts, the time limits are artificial (i.e., set by management), whereas the consequences of low-quality design and system failure are real. See, for example, Crucial Decisions: Leadership in Policymaking and Crisis Management by Irving Janis, 1989, Free Press.

3

For example, designers at Swatch realized that watches of increasing accuracy were no longer of value to consumers — i.e., accuracy to within one minute a day was accurate enough. This “good enough” standard allowed the designers of Swatch to focus their efforts on style and cost reduction rather than on further optimizing the timekeeping of their watches.

Satisficing should be considered for three kinds of design problems:

1. Problems that are complex — Characterized by a large number of interacting variables and a large number of unknowns. A satisficer recognizes that the combination of complexity and unknowns makes an optimal solution unlikely (if not impossible). A satisficer seeks a satisfactory solution that is just better than existing alternatives; the satisficer seeks only to incrementally improve upon the current design rather than to achieve an optimal design.

2. Problems that are time limited — Characterized by time frames that do not permit adequate analysis or development of an optimal solution. In cases where optimality is secondary to urgency, a satisficer selects the first solution that satisfactorily meets given design requirements. Satisficing should be cautiously applied in time-limited contexts, especially when the consequences of a suboptimal solution can be life- or mission-critical.2

3. Problems for which anything beyond a satisfactory solution yields diminishing returns — Characterized by problems for which satisfactory solutions are better than optimal solutions. Determining when satisfactory is best requires accurate knowledge of design requirements and accurate knowledge of users’ value perceptions. A satisficer weighs this value perception in the development of the design specification, ensuring that optimal specifications will not consume design resources unless they are both critical to success and valued by users.3

Consider satisficing in resource-limited and time-limited contexts, or when “good enough” is good enough. Do not let perfect be the enemy of progress. Embrace satisficing in rapid prototyping and the development of minimum-viable products.

See also Cost-Benefit; Iteration; KISS; Minimum-Viable Product; Pareto Principle; Premature Optimization; Prototyping; Root Cause

The adapted square carbon dioxide filter from the command module (center) and round filter receptacle of the lunar lander (lower right). The Apollo 13 mission to the moon launched at 2:13 PM EST on April 11, 1970. An electrical failure occurred in the command module of the spacecraft 56 hours into the flight, causing the mission to be aborted and forcing the three-person crew to take refuge in the lunar lander. The carbon dioxide filters aboard the lunar lander were designed to support two people for two days — the planned duration of a lunar landing — and not the three people for four days needed to return the crew safely to Earth. The square carbon dioxide filters of the abandoned command module had the capacity to filter the excess carbon dioxide but did not fit into the round filter receptacle of the lunar lander. Using materials available on the spacecraft such as plastic bags, cardboard from log books, and duct tape, NASA engineers designed a makeshift adapter for the square command module filters. The ground crew talked the astronauts through the construction process, and the adapted filters were put into service immediately thereafter. The solution was far from optimal, but it was satisfactory — it eliminated the immediate danger of carbon dioxide poisoning and allowed ground and flight crews to focus on other critical problems. The crew of Apollo 13 returned safely home at 1:07 PM EST on April 17, 1970.

Astronaut John L. Swigert Jr., hooking up the adapted carbon dioxide filters.

163

Savanna Preference
A preference for savanna-like environments over other types of environments.

People tend to prefer savanna-like environments — open areas, scattered trees, water, and uniform grassiness — to other natural environments that are simple, such as deserts; dense, such as jungles; or complex, such as mountains. The preference is based on the belief that early humans who preferred savanna-like environments enjoyed a survival advantage over humans who lived in other environments. This advantage ultimately resulted in the development of a genetic disposition favoring savanna-like environments that manifests itself today.1

The evolutionary link to savanna environments in early hominids is hotly debated; the preference may also be rooted in other causes. But whatever the cause, the preference for savanna landscapes is found across all age ranges and cultures. The savanna preference tends to be strongest in young children and grows weaker with age. The characteristics of savannas that people prefer include depth, openness, uniform grassy coverings, and scattered trees, as opposed to obstructed views, disordered high complexity, and rough textures. For example, in an experiment where people were presented with images of savannas, deciduous forests, coniferous forests, rain forests, and desert environments, lush savannas were consistently preferred over the other choices as a place to live or visit. The theory that the preference is related to the savanna’s perceived resource richness is supported by the finding that the least preferred environment is the arid desert landscape.2

A survey of art preferences of people living in countries in Asia, Africa, Europe, and the Americas found that respondents in all countries expressed a differential preference for realistic paintings including water, trees and other plants, human beings, and animals. Even people who had never seen such environments, like tribal peoples living in deserts and mountains, expressed this preference. People have a general landscape preference for savanna-like or parklike environments that is independent of culture.

Consider the savanna preference in the design of landscapes, advertising, and any other design that involves the creation or depiction of natural environments. The preference is strongest in young children; therefore, consider savanna-like environments in the design of settings for children’s stories and play environments.

See also Archetypes, Psychological; Biophilia Effect; Cathedral Effect; Defensible Space; Prospect-Refuge; Self-Similarity

1. The seminal article on the savanna preference is “Development of Visual Preference for Natural Environments” by John Balling and John Falk, 1982, Environment and Behavior, 14, 5–28. This principle is also known as the savanna hypothesis.

2. See, for example, “The Biological Basis for Human Values of Nature” by Stephen Kellert, in The Biophilia Hypothesis by Stephen R. Kellert and Edward O. Wilson (Eds.), 1993, Island Press.

Though adults generally do not share the fascination, the television series Teletubbies mesmerizes children in more than 60 countries and 35 languages. Simple stories played out by four baby-faced creatures on a lush savanna landscape equal excellent design for young children.

Wilderness gave us knowledge. Wilderness made us human. We came from here. Perhaps that is why so many of us feel a strong bond to this land called Serengeti; it is the land of our youth. — Boyd Norton, Serengeti

164 Scaling Fallacy
The assumption that designs that work at one scale will work at smaller or larger scales.

Much is made of the relative strength of small insects as compared to that of humans. For example, a leafcutter ant can carry about 50 times its weight, whereas an average human can carry about half its weight. The standard reasoning goes that an ant scaled to the size of a human would retain this strength-weight advantage, giving a 200-pound (91-kg) ant the ability to lift 10,000 pounds (4,536 kg). In actuality, however, an ant scaled to this size would only be able to lift about 50 pounds (23 kg), assuming it could move at all. The reason is the square-cube law: strength grows with the cross-sectional area of muscle (the square of linear size), while weight grows with volume (the cube of linear size), so weight rapidly outpaces strength as size increases. This underscores the basic lesson of the scaling fallacy — systems act differently at different scales.1

There are two basic kinds of scaling assumptions to avoid when growing or shrinking a design:

1. Load assumptions — Assuming that forces and materials will act the same when a design changes scale. For example, Trident missiles are submarine-launched ballistic missiles. Initial designs of the Trident 2 missile were based largely on the Trident 1 missile, which was much shorter than the Trident 2 and roughly half its weight. When the specifications for the Trident 1 were scaled to create the Trident 2, the result was multiple catastrophic failures in early tests and a major redesign of the missile.2

2. Interaction assumptions — Assuming that people will interact with a design the same way at different scales. For example, the design of very tall buildings involves many possible interactions that don’t exist for buildings of lesser height — problems of evacuation in case of fire, people seeking to base-jump off the roof, and terrorists looking to attack dramatic targets, to name a few.

The best way to avoid the scaling fallacy is to be aware of the tendency to make scaling assumptions. Raise awareness of load and interaction assumptions in the design process. Verify load assumptions through careful calculations, systematic testing, and appropriate factors of safety. Minimize incorrect interaction assumptions through careful research and by monitoring how the design is used once implemented. A worked example of the square-cube arithmetic follows.
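The square-cube arithmetic behind the ant example can be made concrete. The following sketch is illustrative only (the ant’s weight is an assumed round number, and real muscle, materials, and physiology are far messier), but it shows why the strength-to-weight advantage evaporates at human scale:

```python
# Square-cube law sketch: strength scales with area (k^2), weight with volume (k^3).
# The ant's weight below is an assumed round figure, not a measured value.

ant_weight_lb = 0.00001           # hypothetical small ant, in pounds
ant_lift_lb = 50 * ant_weight_lb  # "carries about 50 times its weight"

human_weight_lb = 200.0
k = (human_weight_lb / ant_weight_lb) ** (1 / 3)  # linear scale factor

scaled_lift_lb = ant_lift_lb * k ** 2      # strength grows with cross-sectional area
scaled_weight_lb = ant_weight_lb * k ** 3  # weight grows with volume (200 lb by construction)

print(f"Linear scale factor: {k:,.0f}x")
print(f"Scaled ant weight:   {scaled_weight_lb:,.0f} lb")
print(f"Scaled ant lift:     {scaled_lift_lb:,.1f} lb")  # same order as the ~50 lb cited above
print(f"Strength-to-weight:  {scaled_lift_lb / scaled_weight_lb:.2f}")  # ~0.18, not 50
```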

See also Appeal to Nature; Factor of Safety; Feedback Loop; Modularity; Prototyping; Redundancy; Satisficing

1. The seminal work on scaling is Dialogues Concerning Two New Sciences by Galileo Galilei, 1991 [reprint], Prometheus Books. This principle is also known as the cube law and the law of sizes.

2. “Design Flaw Seen as Failure Cause in Trident Tests” by Andrew Rosenthal, Aug 17, 1989, The New York Times.

(Illustration: flight strategies at scales from 10 meters down to 10⁻⁴ meters.)

The scaling fallacy is nowhere more apparent than with flight. For example, at very small and very large scales, flapping to fly is not a viable strategy. At very small scales, wings are too small to effectively displace air molecules. At very large scales, the effects of gravity are too great for flapping to work — a painful lesson learned by many early pioneers of human flight. The lesson is that designs can be effective at one scale and completely ineffective at another. The images from small to large: aeroplankton simply float about in air; baby spiders use tiny web sails to parachute; insects flap to fly; birds flap to fly; humans flap but do not fly.

165 Scarcity
Things become more desirable when they are in short supply or occur infrequently.

Few principles move humans to action more effectively than scarcity. When items and opportunities become scarce, their general desirability increases, and even people who are otherwise disinterested often find themselves motivated to act. The cause likely lies in scarcity acting as an indicator of quality, in combination with a strong preference for keeping options open whenever possible. The principle applies generally across the spectrum of human behavior, from mate attractiveness and selection (often referred to as the Romeo and Juliet Effect) to tactics of negotiation.1

Five tactics are commonly employed to apply the principle of scarcity:

1. Exclusive information — Supply is about to be depleted, and only a few people have this knowledge (e.g., don’t tell anyone, but a sugar shortage is about to dramatically reduce the supply of cookies).

2. Limited access — Access to supply is limited (e.g., cookies available only to elite passengers are more desirable than cookies available to all).

3. Limited time — Supply is available for a limited time (e.g., cookies sold once a year are more desirable than cookies sold every day of the year).

4. Limited number — Supply is limited by number (e.g., cookies on a plate of two cookies are more desirable than cookies on a plate of ten).

5. Suddenness — Supply is suddenly depleted (e.g., eight of ten cookies are suddenly sold, making the remaining two cookies highly desirable).

When competition for scarce resources is visible and direct, the effects can be contagious. This dynamic is commonly observed at auctions, where competing bidders become fixated on winning and consequently bid well over market value for an item. The effect is strongest when the desired object or opportunity is highly unique and not easily substituted by other means.2

Consider scarcity when designing advertising and promotion initiatives, especially when the objective is to move people to action. Scarce items are accorded higher value than plentiful items, so ensure that pricing and availability are aligned. In retail contexts, do not confuse having inventory with the need to display inventory — displays that show a lot of product will sell less quickly than retail displays that show small amounts. Make the effects of demand, especially sudden demand, clearly visible whenever possible to achieve maximum effect.

See also Expectation Effects; Framing; Left-Digit Effect; Social Proof; Supernormal Stimulus; Veblen Effect

1. The seminal works on scarcity are A Theory of Psychological Reactance by Jack Brehm, 1966, Academic Press; and “Implications of Commodity Theory for Value Change” by Timothy Brock, 1968, in Psychological Foundations of Attitudes by A.G. Greenwald et al. (Eds.), Academic Press. For a popular treatment of the principle, see Influence: The Psychology of Persuasion by Robert Cialdini, 1984, William Morrow and Company.

2. “Scarcity Effects on Value: A Quantitative Review of the Commodity Theory Literature” by Michael Lynn, 1991, Psychology & Marketing, 8(1), 43–57.

In a classic illustration of the power of scarcity, the “Running of the Brides” event at Filene’s Basement had brides-to-be coming from around the world to buy wedding dresses at bargain basement prices. The event was held once a year, one day only, from 1947 until 2011, when Filene’s Basement declared bankruptcy and suspended operations. Friends and family helped box out competitors and snatched up dresses as quickly as possible. Brides would try candidate dresses on in the aisles until they found that special dress. All of the factors of scarcity were at play: exclusive information, limited access, limited time, limited number, suddenness, and visibility of demand.

166 Selection Bias
A bias in the way evidence is collected that distorts analysis and conclusions.

Selection bias results from the nonrandom sampling of evidence. Accordingly, it over-represents certain aspects of the evidence and under-represents others, distorting analysis and conclusions. Humans are pattern-detecting and pattern-making machines. When we see dots, we try to connect them. It is only when we understand the perils of biases like selection bias that we pause and make sure the dots that have been collected are worthy of connecting. For example, if subscribers of a science magazine are surveyed and their responses generalized to the overall population, science-minded viewpoints will be over-represented in the analysis and results.1

It is important to avoid selection bias when designing experiments to collect data and also when evaluating conclusions based on statistical analysis. Always scrutinize the selected population and sampling methods when evaluating conclusions based on statistical analysis. Selection bias may occur through no bad intentions or faults — certain evidence simply may not be available for consideration. However, it is often the case that people who want to persuade will cherry-pick data that support their position and exclude data that negate it, resulting in evidence that appears convincing but that is not representative of the truth.

Avoid selection bias by collecting data (or making sure others have collected data) from entire populations when they are small and by randomly sampling (or making sure others have randomly sampled) from populations when they are large.

• When you’re dealing with a small population (meaning a small number of things about which you are collecting data, like a classroom of students), collect data from everyone. If all members of a population are represented in your analysis, there can be no selection bias.

• When you’re dealing with large populations or when members of a population are not available, it is not possible or practical to collect data from everyone. In these cases, you must randomly select members from the available population. A truly random sample prevents selection bias, but you need to make sure that you sample from the full set of things you are generalizing about. For example, if you survey a random group of Macintosh computer users, you can’t generalize the results to all computer users, because not everyone uses a Mac.

Consider selection bias when collecting data, doing design research, and when using data and statistics to persuade and drive decision-making. A simple simulation of the magazine-survey example follows.
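The sketch below uses assumed round numbers (a 10% true rate in the population, subscribers skewed to 80%) to show how a convenience sample distorts an estimate while a random sample does not:

```python
import random

random.seed(42)

# Toy population: 100,000 people, 10% of whom hold a "science-minded" view.
population = [1] * 10_000 + [0] * 90_000
random.shuffle(population)

# Convenience sample: magazine subscribers, assumed here to skew 80% science-minded.
subscribers = [1] * 800 + [0] * 200

# Random sample: drawn uniformly from the full population.
random_sample = random.sample(population, 1_000)

print(f"True rate:          {sum(population) / len(population):.1%}")        # 10.0%
print(f"Convenience sample: {sum(subscribers) / len(subscribers):.1%}")      # 80.0%
print(f"Random sample:      {sum(random_sample) / len(random_sample):.1%}")  # near 10%
```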

See also Confirmation Bias; Garbage In–Garbage Out; Normal Distribution; Uncertainty Principle

1. See, for example, “The Effect of Selection Bias in Studies of Fads and Fashions” by Jerker Denrell and Balázs Kovács, 2015, PLoS ONE, 10(4); and “The Importance of Selection Bias in Internet Surveys” by Zerrin Asan Greenacre, 2016, Open Journal of Statistics, 6(3), 397–404.

The Arch Deluxe was launched by McDonald’s in 1996. It was a bold and upscale burger, featuring crisp lettuce, a bakery-style split-top potato roll with sesame seeds, peppered bacon, tomato, and a mustard-mayonnaise sauce. It was designed to appeal to more sophisticated, adult palates. And perhaps it did, but it was also the biggest flop in McDonald’s history, with the brand spending a couple of years and hundreds of millions of dollars to research, develop, and advertise a burger that very few people ordered. What happened? McDonald’s had tested the new burger in focus groups and had gotten rave reviews. Unfortunately, the people in their focus groups were McDonald’s loyalists and hamburger aficionados — i.e., their tastes did not represent the tastes of the typical McDonald’s customer. Within a year, the Arch Deluxe was taken off the menu at most locations.

The people who participated in the [Arch Deluxe] focus groups weren’t a faithful reflection of McDonald’s customers as a whole… The lesson here should be clear: don’t assume that your initial audience is necessarily representative of the population as a whole. — John A. List, The Voltage Effect

167 Self-Similarity
A property in which a thing is composed of similar patterns at multiple levels of scale.

Many forms in nature exhibit self-similarity, and as a result it is commonly held to be an intrinsically aesthetic property. People find self-similar forms beautiful, especially when the mathematical density of the pattern resembles that of natural forms such as savanna-like environments and trees.1 Natural forms tend to exhibit self-similarity at many different levels of scale, whereas human-created forms generally do not. For example, a coastline reveals the same basic edge pattern whether viewed from the water’s edge or from low-Earth orbit. Although varying levels of detail are seen, the same pattern emerges — the detail is a self-similar mosaic of smaller wholes.

Naturally occurring self-similarity is usually the result of a basic algorithmic process called recursion. Recursion occurs when a system receives input, modifies it slightly, and then feeds the output back into the system as input. This recursive loop results in subtle variations in the form — perhaps smaller, skewed, or rearranged — but the result is still recognizable as an approximation of the basic form. An example of recursion is a person standing between two mirrors facing each other, which yields an infinite sequence of smaller reflections of the person in the opposing mirror. Recursion occurs with the looping of the light between the two mirrors; self-similarity is evident in the successively smaller images in the mirrors.

The ubiquity of self-similarity in nature hints at an underlying order and algorithm and suggests ways to enhance the aesthetic (and perhaps structural) composition of human-created forms. Self-similar modularity is an effective means of scaling systems and managing complexity. Consider, for example, the self-similarity of form and function found in the compound arch structures of the Roman aqueducts and the flying buttresses of gothic cathedrals, structures that are beautiful in form and rarely equaled in their structural strength and longevity. The self-similarity in these structures exists at only a few levels of scale, but the resulting aesthetic and structural integrity are dramatic.

Consider self-similarity in all aspects of a design: story plots, visual displays, and structural compositions. The reuse of a single, basic form to create many levels of metaforms mimics nature’s tendency toward parsimony and redundancy. Explore the use of basic, self-similar elements in a design to create interesting organizations at multiple levels of scale.
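Recursion, as described above, is easy to demonstrate in code. This minimal sketch builds a text Sierpinski triangle, a form in which each triangle is composed of three half-scale copies of itself:

```python
def sierpinski(order):
    """Build a self-similar triangle recursively: each level is three
    half-scale copies of the previous level (one on top, two below)."""
    if order == 0:
        return ["*"]
    prev = sierpinski(order - 1)
    pad = " " * 2 ** (order - 1)
    top = [pad + row + pad for row in prev]     # one centered copy on top
    bottom = [row + " " + row for row in prev]  # two copies side by side
    return top + bottom

for row in sierpinski(4):
    print(row)
```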

See also Archetypes, Psychological; Hierarchy; Modularity; Savanna Preference; Similarity; Symmetry

1. The seminal work on self-similarity is The Fractal Geometry of Nature by Benoit Mandelbrot, 1988, W.H. Freeman & Company.

The Mona Lisa photomosaic, the acacia tree, fractals, and Roman aqueducts all exhibit self-similarity.

168 Serial Position Effects
Things at the beginnings and ends of sequences are more memorable than things in the middle.

Serial position effects occur when people try to recall items from a sequence; items at the beginning and end are better recalled than items in the middle. Improved recall for items at the beginning of a sequence is called a primacy effect. Improved recall for items at the end of a sequence is called a recency effect.1

• Primacy effects — The initial items in a sequence are stored in long-term memory more efficiently than later items. When a sequence of items is rapidly presented, the primacy effect is weaker — people have less time to store the initial items in long-term memory. When a sequence of items is slowly presented, the primacy effect is stronger — people have more time to store the initial items in long-term memory.2

• Recency effects — The last few items in a sequence are still in working memory and readily available. The strength of the recency effect is dramatically affected by the passage of time and the presentation of additional information. For example, the recency effect disappears when people think about other matters for 30 seconds after the last item in the sequence is presented.3

For visual stimuli, items presented early in a sequence have the greatest influence; they are not only better recalled but also influence the interpretation of later items. For auditory stimuli, items late in a sequence have the greatest influence. In either case, if multiple presentations of information are separated in time and a person must make a selection decision soon after the last presentation, the recency effect has the greatest influence on the decision. These effects describe a general selection preference known as order effects — first and last items in a sequence are more likely to be selected than items in the middle (e.g., the order of candidates on a ballot).4

Present important items at the beginning or end of a sequence (versus the middle) in order to maximize recall. When items are visual, present important items at the beginning. When items are auditory, present important items at the end. In decision-making situations, if the decision is to be made immediately after the presentation of the last item, increase the probability of an item being selected by presenting it at the end of the sequence; otherwise, present it at the beginning.
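The shape of the serial position curve can be illustrated with a toy model. The sketch below is not a fitted psychological model; every constant is an arbitrary choice for illustration. It simply reproduces the qualitative pattern described above: a U-shaped curve with immediate recall, and a recency effect that collapses after a 30-second delay.

```python
import math

def recall_probability(position, n_items, delay_s, presentation_s=2.0):
    """Toy model: primacy falls off with position (earlier items get more
    rehearsal into long-term memory), recency decays with the time elapsed
    since an item was shown. All constants are arbitrary illustrations."""
    primacy = 0.5 * presentation_s / (presentation_s + position)
    time_since = (n_items - 1 - position) * presentation_s + delay_s
    recency = 0.6 * math.exp(-time_since / 10.0)
    return min(1.0, 0.15 + primacy + recency)  # 0.15 baseline for middle items

n = 15
for delay in (0, 30):
    curve = [recall_probability(i, n, delay) for i in range(n)]
    print(f"{delay:>2}s delay:", " ".join(f"{p:.2f}" for p in curve))
```

With no delay the printed curve is U-shaped; with a 30-second delay the right side flattens while the left side persists, matching the chart shown with this entry.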

See also Interference Effects; Left-Digit Effect; Miller’s Law; Peak-End Rule; von Restorff Effect

1. The seminal work on serial position effects is Memory: A Contribution to Experimental Psychology by Hermann Ebbinghaus, 1885, H.A. Ruger and C.E. Bussenius (Tr.), 1913, Teachers College, Columbia University.

2. “Storage Mechanisms in Recall” by Murray Glanzer, in The Psychology of Learning and Motivation by G.H. Bower and J.T. Spence (Eds.), 1972, Academic Press, 5, 129–193.

3. “Two Storage Mechanisms in Free Recall” by Murray Glanzer and Anita Cunitz, 1966, Journal of Verbal Learning and Verbal Behavior, 5, 351–360.

4. See “Forming Impressions of Personality” by Solomon Asch, 1946, Journal of Abnormal and Social Psychology, 41, 258–290; and “First Guys Finish First: The Effects of Ballot Position on Election Outcomes” by Jennifer Steen and Jonathan Koppell, 2001, Presentation at the 2001 Annual Meeting of the American Political Science Association, San Francisco, Aug 30–Sep 2.

(Chart: probability of recall by serial position, comparing a 10-second delay with a 30-second delay before recall.)

Serial position effects influence how performances and presentations are evaluated. In contests where a winner is selected right after the last presentation, it is advantageous to be last. In contests where a winner is selected after some passage of time, it is advantageous to be first. In both cases, a presentation that falls somewhere in the middle must stand out significantly from the rest to be remembered.

Items at the beginning and end of a list or a sequence are easier to remember than items in the middle. If recall is attempted immediately after the presentation of the list, the primacy effect and recency effect are roughly equal in strength. If recall is attempted more than 30 seconds after the presentation of the list, the primacy effect persists, whereas the recency effect quickly diminishes.

169 Shaping
Training a target behavior by reinforcing successive approximations of that behavior.

Complex behaviors can be difficult to teach. Shaping is a strategy whereby complex behaviors are broken down into smaller, simpler sub-behaviors and taught one by one. The sub-behaviors are reinforced with rewards (e.g., trainees are given food) and ultimately chained together to achieve the complex behavior. For example, to teach a mouse to press a lever, the mouse is first reinforced for moving close to the lever; then reinforced only when it makes contact with the lever; and eventually reinforced only when it presses the lever. The bar for getting a reward is continuously raised as the behavior gets closer and closer to the target behavior.1

Shaping can be used to train all animals, human and nonhuman, as well as computer programs. For example, rehabilitation involves shaping complex behaviors: Relearning to walk involves first standing, then lifting one foot, then lifting one foot and shifting forward, and so on. Roboticists are using a form of accelerated shaping to teach robots to do everything from walking to manufacturing parts to driving our cars. Shaping often occurs without awareness. For example, video games use shaping when initial game levels require simple inputs in order to “beat” the level (obtain the reinforcement) and then require increasingly difficult controller actions to master higher levels of the game.

During shaping, behaviors that have nothing to do with the desired behavior can get incidentally reinforced. For example, when training a mouse to press a lever, the mouse may incidentally press the lever with one foot in the air. The reinforcement for the lever press may also inadvertently reinforce the fact that the foot was in the air. This behavior then becomes an integrated but unnecessary component of the desired behavior: The mouse lifts its foot whenever it presses the lever. The development of this kind of superstitious behavior is common with humans as well.

Use shaping to train complex behaviors in games, simulations, and learning environments. Shaping does not address the “hows” or “whys” of a task and should, therefore, primarily be used to teach rote procedures and refine complex motor tasks. Shaping is increasingly used to train complex behaviors in artificial beings and should be considered when developing adaptive and intelligent systems.2

See also Classical Conditioning; Gamification; Nudge; Operant Conditioning
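The mechanics of shaping (reinforce anything that meets the current criterion, then raise the criterion) can be sketched as a simple program. This is a toy model with arbitrary constants, not a description of any real training protocol or learning algorithm:

```python
import random

def shape(target=1.0, threshold=0.2, trials=2000, seed=1):
    """Toy shaping loop: 'behavior' is a number the learner emits around its
    current mean. Behavior that meets the criterion is reinforced, nudging
    the mean toward the rewarded behavior; each reinforcement also raises
    the criterion slightly, until it reaches the target."""
    rng = random.Random(seed)
    mean = 0.0
    for _ in range(trials):
        behavior = rng.gauss(mean, 0.15)       # behavior varies around the mean
        if behavior >= threshold:              # meets the current criterion?
            mean += 0.05 * (behavior - mean)   # reinforcement shifts typical behavior
            threshold = min(target, threshold + 0.002)  # raise the bar
    return mean

print(f"Shaped behavior ends near {shape():.2f} (target 1.0)")
```

Raising the bar too fast starves the learner of reinforcement and progress stalls; raising it too slowly wastes trials. Trainers face the same tradeoff.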

1. The seminal work on shaping is The Behavior of Organisms: An Experimental Analysis by B.F. Skinner, 1938, Appleton-Century. An excellent account of Skinner’s early research and development is “Engineering Behavior: Project Pigeon, World War II, and the Conditioning of B.F. Skinner” by James Capshew, 1993, Technology and Culture, 34, 835–857. This principle is also known as approximation conditioning and conditioning by successive approximations.

2. See, for example, Robot Shaping: An Experiment in Behavior Engineering by Marco Dorigo and Marco Colombetti, 1997, MIT Press.

Project Pigeon
Project Pigeon was a classified research-and-development program during World War II. Developed at a time when electronic guidance systems did not exist, the project used shaping to train pigeons to guide bombs. Despite favorable performance tests, the National Defense Research Committee ended the project — it seems they couldn’t get over the idea that pigeons would be guiding their bombs.

Pigeons were trained to peck at targets on aerial photographs. Once a certain level of proficiency was obtained, pigeons were jacketed and mounted inside tubes.

The pigeons in their tubes were inserted into the nosecone of the bomb. Each nosecone used three pigeons in a type of voting system, whereby the pigeon pecks of two birds in agreement would overrule the errant pecks of a single bird. Sealed in the bomb, the pigeons could see through glass lenses at the front of the nosecone.

Once the bomb was released, the pigeons would begin pecking at their view of the target. Their pecks shifted the glass lens off-center, which adjusted the bomb’s tail surfaces and, correspondingly, its trajectory.

170 Signal-to-Noise Ratio
The ratio of relevant to irrelevant information. Good designs have high signal-to-noise ratios.

All communication involves the creation, transmission, and reception of information. During each stage of this process, the form of the information (i.e., the signal) is degraded, and extraneous information (i.e., noise) is added. Degradation reduces the amount of useful information by altering its form. Noise reduces clarity by diluting useful information with useless information. Clarity of information can be understood as the ratio of remaining signal to added noise. For example, a graph with no extraneous elements would have a high signal-to-noise ratio, whereas a graph with many extraneous elements would have a low signal-to-noise ratio.1 The goal of good design is to maximize signal and minimize noise, thereby producing a high signal-to-noise ratio.

• Maximizing signal — Clearly communicating information with minimal degradation. Simple designs incur minimal performance loads, enabling people to better focus on the meaning of the information. Signal clarity is improved through simple and concise presentation of information. Signal degradation occurs when information is presented inefficiently: unclear writing, inappropriate graphs, or ambiguous icons and labels. Signal degradation is minimized through research and careful decision-making. Emphasizing key aspects of the information can also reduce signal degradation — e.g., highlighting or redundantly coding important elements in a design.

• Minimizing noise — Removing unnecessary elements, and minimizing the expression of necessary elements. Every unnecessary data item, graphic, line, or symbol steals attention away from relevant elements. Such unnecessary elements should be avoided or eliminated. Necessary elements should be minimized to the degree possible without compromising function. For example, the expression of lines in grids and tables should be thinned, lightened, and possibly even removed. Every element in a design should be expressed to the extent necessary but not beyond the extent necessary. Excess is noise.

Seek to maximize the signal-to-noise ratio in design. Increase signal by keeping designs simple and selecting design strategies carefully. Consider enhancing key aspects of information through techniques like redundant coding and highlighting. Use well-accepted standards and guidelines when available to leverage conventions and promote consistent implementation. Minimize noise by removing unnecessary elements and minimizing the expression of elements.
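The term comes from engineering, where the ratio has an exact form: signal power divided by noise power, usually expressed in decibels. The design usage is the metaphorical analogue. A minimal sketch of both readings, with the chart element counts being assumed numbers for illustration:

```python
import math

def snr_db(signal_power, noise_power):
    """Engineering signal-to-noise ratio in decibels: 10 * log10(Ps / Pn)."""
    return 10 * math.log10(signal_power / noise_power)

print(f"SNR of 100:1 power ratio: {snr_db(100.0, 1.0):.0f} dB")  # 20 dB

# The design analogue, with assumed counts: elements of a chart that carry
# information versus decoration (gridlines, 3D effects, backgrounds, logos).
data_elements, decoration_elements = 12, 48
signal_fraction = data_elements / (data_elements + decoration_elements)
print(f"Chart 'signal' fraction: {signal_fraction:.0%}")  # 20%, a noisy chart
```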

See also Highlighting; Interference Effects; KISS; Ockham’s Razor; Performance Load; Progressive Subtraction; Redundancy

1. The seminal works on signal-to-noise ratio in information design are “A Decision-Making Theory of Visual Detection” by Wilson Tanner Jr. and John Swets, 1954, Psychological Review, 61, 401–409; and The Visual Display of Quantitative Information by Edward Tufte, 1983, Graphics Press.

(Figures: before-and-after redesigns of three charts — a “1997 U.S. Cheese Production % by Type” pie chart with segments for Cheddar 36.0, Mozzarella 30.6, All Other 13.1, Other American 8.8, Other Italian 8.7, and Swiss 2.8; a soybeans chart plotting production in billions of bushels against harvested acreage in millions of acres; and a “Regular Ice Cream, U.S.” chart of millions of gallons by year.)

The signal-to-noise ratio of each of these representations on the left is improved by removing elements that do not convey information, minimizing the expression of remaining elements, and highlighting essential information.

171 Similarity
The brain automatically assumes elements that look alike are related.

Similarity, one of the Gestalt principles of perception, asserts that similar elements are perceived as a single group or chunk and are interpreted as being more related than dissimilar elements. For example, a simple matrix comprising alternating columns of dots and squares will be interpreted as a set of columns only, because the similar elements group together to form vertical lines.1 The grouping resulting from similarity reduces complexity and reinforces the relatedness of design elements. Conversely, a lack of similarity results in the perception of disparate chunks and reinforces differences among the elements. Similar elements are interpreted as being relevant to one another. A complex visual display is interpreted as having different areas and types of information depending on the similarity of color, size, and shape of its elements. Certain kinds of similarity work better than others:

• Similarity of color — The strongest grouping effect; can result in elements being perceived as related even when the elements are spread out across a large area (e.g., red text indicating required entries on a form). It is strongest when the number of colors is small and becomes decreasingly effective as the number of colors increases.2

• Similarity of size — Effective when the sizes of elements are clearly distinguishable from one another; an especially appropriate grouping strategy when the size of elements has additional benefits (e.g., large buttons are easier to press).

• Similarity of shape — The weakest grouping strategy; best used when the color and size of other elements is uniform or when used in conjunction with size or color. Similarity of shape can be an important factor in the design of icons. Related icons may employ similar shapes, but if the shapes are too similar, viewers may miss subtle details or labels, resulting in errors and frustration.

Consider the similarity principle when the goal is to indicate relatedness among elements in a design. Represent elements such that their similarity corresponds to their relatedness. Represent unrelated or ambiguously related items using different colors, sizes, and shapes. Use the fewest colors and simplest shapes possible for the strongest grouping effects, ensuring that elements are sufficiently distinct to be easily detectable. The columns-of-shapes example from the opening paragraph is sketched below.
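This short sketch (using matplotlib, an assumed dependency) draws alternating columns of circles and squares; viewers read the display as vertical columns because similar shapes group together:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))
for col in range(6):
    marker = "o" if col % 2 == 0 else "s"  # circles and squares alternate by column
    ax.scatter([col] * 6, range(6), marker=marker, s=120, color="black")
ax.set_axis_off()
plt.show()
```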

See also Closure; Common Fate; Figure-Ground; Good Continuation; Miller’s Law; Proximity; Self-Similarity; Uniform Connectedness

1. The seminal work on similarity is “Untersuchungen zur Lehre von der Gestalt, II” [Laws of Organization in Perceptual Forms] by Max Wertheimer, 1923, Psychologische Forschung, 4, 301–350, reprinted in A Source Book of Gestalt Psychology by Willis Ellis (Ed.), 1999, Routledge & Kegan Paul, 71–88. See also Principles of Gestalt Psychology by Kurt Koffka, 1935, Harcourt Brace.

2. Note that a significant portion of the population is color blind, limiting the strategy of using color alone. Therefore, consider using an additional grouping strategy when using color.

The sets of shapes (left) demonstrate the efficacy of different similarity strategies, varying elements by shape, color, shape and color, and size. All can be effective strategies, but the grouping power of color and size is greater than that of shape alone. Both video game controllers (right) use a mix of these strategies to reduce complexity and improve usability.

172 Social Proof
When people don’t know how to act, think, or feel, they tend to copy others.

People are persuaded more by the actions of others than by anything else. When people are uncertain about what to do, they look to others for cues, or social proof, of what to do. For example, many sitcoms use laugh tracks to influence audience reaction, despite the objections of artists, directors, writers, and many viewers. Why do they do it? Social proof works: Laugh tracks make people laugh more. Two conditions maximize the influence of social proof: uncertainty and similarity.1

When people are uncertain about how to act, think, or feel, they tend to copy others. In general, when a lot of people are doing a thing, it is likely a safe and good thing, and the probability of it being safe and good increases with the number of people doing it. For example, in an experiment to deter theft at the Arizona Petrified Forest, a sign was posted that read, “Many past visitors have removed the petrified wood from the park, destroying the natural state of the Petrified Forest”. The sign unintentionally provided social proof that it was safe and good to take petrified wood, and theft at the park almost tripled.2

People are more likely to use the actions of others to decide how to act when those others are perceived to be similar — e.g., similar in age, appearance, beliefs, values, etc. The effect is strongest when the observer perceives similar people to be more familiar or knowledgeable than themselves about a particular situation, and it is amplified when those people are attractive, popular, or in positions of authority. For example, social media platforms attempt to quantify social proof for public display in the form of friends and followers, signaling their influence within the community. This enables the most popular participants to monetize their social network through advertising but also creates incentives for both people and platforms to game the system in unethical ways.3

Consider social proof in advertising, marketing, and behavior modification contexts. Leverage uncertainty and similarity in the design, noting the amplifying effects of attractiveness, popularity, and authority. The most ethical use of social proof is when the proof is real and simply shared; it becomes increasingly questionable as social proof is embellished or fabricated.

See also Cognitive Dissonance; Diffusion of Innovations; Groupthink; Nudge; Perverse Incentive

1. The seminal academic work is “A Study of Some Social Factors in Perception” by Muzafer Sherif, 1935, Archives of Psychology, 27, 187. The seminal popular work is Influence: The Psychology of Persuasion by Robert Cialdini, 1984, William Morrow and Company. This principle is also known as herd behavior and informational social influence.

2. Yes!: 50 Scientifically Proven Ways to Be Persuasive by Noah Goldstein et al., 2008, Free Press.

3. See, for example, “Social Proof: Why People Buy Twitter Followers From Devumi” by Matthew Y, 2014, Techsling. Note that, contrary to appearances, the number of social media friends and followers does not equal influence. See, for example, “Measuring User Influence in Twitter: The Million Follower Fallacy” by Meeyoung Cha et al., 2010, Proceedings of the Fourth International Conference on Weblogs and Social Media.

People wait in line outside the Apple Store hoping to buy iPhone Xs at Apple Schildergasse in Cologne, Germany. Conspicuous queues like this are social proof that something compelling is going on inside, which is why establishments sometimes artificially create them by limiting the number of people who can enter or by hiring confederates to stand in line.

The principle of social proof says so: The greater the number of people who find any idea correct, the more the idea will be correct. — Robert B. Cialdini, Influence: The Psychology of Persuasion

173 Social Trap
A tendency to pursue short-term gains that create long-term losses for the greater group.

A social trap is a situation in which people act to obtain short-term gains and, in so doing, create losses for everyone. It is an extension of the tragedy of the commons, a scenario in which ranchers overgraze cattle on public land, depleting the land of grasses faster than they can replenish, which then starves all the ranchers’ cattle, including those of the original overgrazers. The short-term gain was cattle grazing for free, but this behavior set in motion a positive feedback loop of events that caused everyone, ranchers and cattle, to suffer as a result. Social traps are the root cause of numerous economic, environmental, and societal problems, including deforestation, overfishing, toxic waste disposal, traffic, and climate change.1

Social traps are most problematic when a resource is readily available and highly desirable, when people compete to access and use that resource, and when the long-term costs are not visible or easily monitored. A number of strategies have been proposed to avoid and break the social trap pattern (a toy simulation follows the list):

• Turn long-term costs into here-and-now costs — When people pay for the long-term costs as they go, it acts as a negative feedback loop and moderates the offending behavior. For example, charging a small toll to use select stretches of highway reduces traffic.

• Set quotas and penalize freeloaders — A common method of managing natural resources is to set quotas that limit consumption according to a system’s ability to function and regenerate, punishing those who exceed them. For example, setting quota limits for seasonal hunting and fishing based on herd health and population.

• Create substitute behaviors — Modify or replace offending behaviors with a substitute that alleviates the long-term costs. For example, substituting other chemicals for chlorofluorocarbons, which were used in aerosols and cooling devices and damaged the ozone layer.

Mitigate the effects of social traps by enforcing sustainable limits on resource use, rewarding cooperation and punishing freeloading, and increasing the visibility of long-term costs. The most dangerous social traps are those with significant time delays and severe consequences, which is why global warming is such a wicked problem. If history is any indication, substitute behaviors and technologies are the only way to solve global-scale social traps of this kind.
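The tragedy-of-the-commons dynamic and the quota countermeasure can both be seen in a toy simulation. All numbers below are illustrative, not calibrated to any real grazing system; the point is only the shape of the outcome:

```python
def graze(seasons=50, ranchers=10, take_per_rancher=20.0):
    """Toy commons: grass regrows logistically (carrying capacity 1,000,
    growth rate 0.5), and each rancher harvests a fixed amount per season."""
    grass, total = 1000.0, 0.0
    for _ in range(seasons):
        eaten = min(grass, ranchers * take_per_rancher)
        total += eaten
        grass -= eaten
        grass += 0.5 * grass * (1 - grass / 1000.0)  # logistic regrowth
    return total, grass

for take, label in ((20.0, "no limits "), (10.0, "with quota")):
    total, left = graze(take_per_rancher=take)
    print(f"{label}: total grazed {total:7.1f}, grass remaining {left:6.1f}")
```

Unlimited grazing collapses the commons within a few seasons and yields far less grass in total than the quota does over the same fifty seasons.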

See also Feedback Loop; Gamification; Leverage Point; Nirvana Fallacy; Nudge; Sunk Cost Effect

1. The seminal works are “Social Traps” by John Platt, 1973, American Psychologist, 28(8), 641–651; and “The Tragedy of the Commons” by Garrett Hardin, 1968, Science, New Series, 162(3859), 1243–1248.

Despite everyone wanting to get home as fast as possible, traffic jams occur often on free roads but rarely on tollways. Having to pay a toll to drive on the road moderates use, mitigating the social trap.

174 Status Quo Bias
A preference for things as they are, even when a change would improve things.

The status quo bias refers to a preference for avoiding change and maintaining the current situation. The bias is a complex mix of emotional, perceptual, and rational phenomena, including anchoring, inertia, loss aversion, perceived threat, regret avoidance, and sunk costs, to name a few. In sum, this means people like familiar things more than unfamiliar things; they weight what could go wrong with change more heavily than what could go right; they trust things that exist more than things that don’t; they prefer the roads well traveled and the paths of least resistance; they resist change when feeling overwhelmed or presented with too many options; and they double down to justify old decisions and to avoid making new bad decisions. Several countermeasures can be employed to overcome the status quo bias in individuals and groups.1

When the status quo bias is due to loss aversion — i.e., fear of loss is overweighted against the potential for gains — countermeasures include framing the desired action as the default, rephrasing the problem to emphasize the costs of doing nothing, and bringing in outside perspectives to give a fresh assessment of the tradeoffs.2

When the status quo bias is due to uncertainty or the costs associated with switching or transitioning, countermeasures include sharing stories and case studies of success, reframing the new in a way that reduces the perceived change, using demos and visualizations to help people imagine successful change, and employing financial incentives.

When the status quo bias is due to maintaining prior decisions or investments, countermeasures include providing information about the reasons why change is necessary, encouraging early adopters to influence their colleagues to change, and highlighting the perils of maintaining the status quo due to sunk costs.

People wrongly associate the status quo with stability and safety. To paraphrase an old saying, people are biased to like the devil they know better than the angel they don’t. Consider the status quo bias when contemplating the introduction of change. Employ countermeasures appropriate to the causes of the bias in each context. Run experiments. Try things. Pursue continuous improvement. In the long run, a culture of “Never leave well enough alone” beats a culture of “If it ain’t broke, don’t fix it” every time.
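The loss-aversion component of the bias can be expressed as arithmetic. In prospect-theory research, losses are weighted roughly 2 to 2.5 times as heavily as equivalent gains; the sketch below uses that multiplier with hypothetical numbers to show why an objectively favorable change can still feel like a bad deal:

```python
def perceived_value(gains, losses, loss_aversion=2.25):
    """Loss-averse evaluation: losses loom larger than gains.
    The 2.25 multiplier is a rough figure from prospect-theory research."""
    return gains - loss_aversion * losses

change = perceived_value(gains=30, losses=20)    # objectively +10 on net
status_quo = perceived_value(gains=0, losses=0)  # doing nothing feels neutral

print(f"Perceived value of change:     {change:+.1f}")  # -15.0: feels like a loss
print(f"Perceived value of status quo: {status_quo:+.1f}")
print("Choose change?", change > status_quo)             # False
```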

See also Diffusion of Innovations; Exposure Effect; Framing; Nirvana Fallacy; Nudge; Sunk Cost Effect

1. The seminal work is “Status Quo Bias in Decision Making” by William Samuelson and Richard Zeckhauser, 1988, Journal of Risk and Uncertainty, 1, 7–59. See also “Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias” by Daniel Kahneman et al., 1991, Journal of Economic Perspectives, 5(1), 193–206.

2. “How to Measure the Status Quo Bias? A Review of Current Literature” by Marie Godefroid et al., 2022, Management Review Quarterly.

In 1979, the status quo for portable music players was the boombox: large, portable, battery-operated radio/cassette players with recording capability and large speakers. Not surprisingly, when Akio Morita, chairman of Sony, advocated for the Walkman — a diminutive portable player with no radio, no speakers, no recording function, and small headphones — many at Sony thought him crazy. Not only was the Walkman the opposite of what the market was accustomed to; it challenged the established notion that bigger was better. Morita had to sell the Walkman internally within Sony to give the product a chance, helping colleagues see both the vision and how it solved a problem consumers didn’t even realize they had. By the time the Walkman was 60 days old, the company had sold 50,000 units in Japan. By 1989, 10 years after the launch of the first model, over 100 million Walkmans had been sold worldwide.

I hope the reader will not consider it too much of a boast if I refer to my hunch that the portable stereo player, the Walkman, would be a successful and popular product despite a lot of skepticism within my own company. I was so certain of this that I said, “If we don’t sell one hundred thousand pieces by the end of this year, I will resign my chairmanship of this company”. Of course I had no intention of doing that; I just knew this product would be successful. — Akio Morita, Made in Japan

175 Stickiness
Properties of information that increase recognition, recall, and voluntary sharing.

Stickiness refers to the ability of certain ideas to become lodged in the cultural consciousness. Stickiness applies to anything that can be seen, heard, or touched — slogans, advertisements, and products. Consider stickiness in the design of instruction, advertising, products, and other contexts involving memory.1

Six variables appear to be key in the creation of sticky ideas:

1. Simplicity — The idea can be expressed simply, without sacrificing depth (e.g., “It’s the economy, stupid”, used during Bill Clinton’s 1992 U.S. presidential campaign). Keep messages succinct but profound.

2. Surprise — The idea contains an element of surprise, which grabs attention (e.g., the Center for Science in the Public Interest wanted to alarm consumers about the amount of fat in movie popcorn, so they noted that it had more fat than “a bacon-and-eggs breakfast, a Big Mac and fries for lunch, and a steak dinner with all the trimmings: combined!”). Employ surprise to capture attention and motivate sharing.

3. Concreteness — The idea is specific and concrete, using plain language or imagery (e.g., John F. Kennedy’s 1962 moon speech: “We choose to go to the moon in this decade…”). Express ideas using specific time frames and objects or events that are available to the senses.

4. Credibility — The idea is believable, often communicated by a trusted source or as an appeal to common sense (e.g., in 2007, Blendtec’s founder Tom Dickson created and shared videos in which he tested his blenders against everything from cell phones to wooden rake handles). Favor presenting evidence; let people draw their own conclusions.

5. Emotion — The idea elicits an emotional reaction (e.g., on Halloween in the 1960s and 1970s, false rumors circulated that sadists were putting razor blades in apples, panicking parents and effectively shutting down the tradition of trick-or-treating for much of the United States). Incorporate affective triggers to evoke strong emotional responses.

6. Story — The idea is expressed in the context of a story, increasing its memorability and retelling (e.g., Dyson vacuum cleaners come with a booklet called The Story of Dyson, which recounts the story of James Dyson and educates customers about the technical “hows” and “whys” of the product’s design).

Consider stickiness to make things more memorable and increase spontaneous sharing. Present sticky ideas early to frame the rest of the message. Remember the acronym SUCCESs: Simple, Unexpected, Concrete, Credible, Emotional, Story. Follow behavioral desire lines: Use what sticks.

See also Desire Line; Inverted Pyramid; KISS; Storytelling; von Restorff Effect

1. The term stickiness was popularized by Malcolm Gladwell in his book The Tipping Point, 2000, Little, Brown and Company. The seminal work on stickiness is Made to Stick: Why Some Ideas Survive and Others Die by Chip Heath and Dan Heath, 2007, Random House.

From WWII British motivational poster to modern-day meme, the enduring popularity of this simple message owes to its stickiness.

The Stickiness Factor says that there are specific ways of making a contagious message memorable; there are relatively simple changes in the presentation and structuring of information that can make a big difference in how much of an impact it makes. — Malcolm Gladwell, The Tipping Point

176 Storytelling
Evoking imagery, emotions, and understanding through the presentation of events.

Storytelling is uniquely human. It is the original method of passing knowledge from one generation to the next and remains one of the most compelling methods for richly communicating knowledge. Storytelling can be oral, as in the telling of a tale; visual, as in an information graph or movie; or textual, as in a poem or novel. More recently, digital storytelling has emerged, which involves telling a story using digital media, including computerized slide shows, digital videos, and educational software. A storyteller can be any instrument of information presentation that engages an audience to experience a set of events.1

Good storytelling generally requires these fundamental elements:

• Setting — Orients the audience, providing a sense of time and place.
• Characters — Make the story relevant; the audience becomes involved by identifying with the characters.
• Plot — Ties events together; the channel through which the story flows.
• Invisibility — Awareness of the storyteller fades as the audience focuses.
• Mood — Music, lighting, and style of prose create the emotional tone.
• Movement — The sequence and flow of events is clear and interesting; the storyline doesn’t stall.

When these elements are thoughtfully combined, you can achieve the suspension of disbelief with your audience — they willingly get lost in your story. This is the strategic goal of virtually all design. For example, Cabbage Patch Kids are simple dolls that are often described as ugly, but retailers could not keep them on the shelves in 1983. The key is that people weren’t buying dolls; they were buying stories — each Cabbage Patch Kid has a unique story told with documents like adoption papers and birth certificates with inked footprints. This creative storytelling — which was a new thing for toys in the 1980s — delighted children of all ages and made Cabbage Patch Kids a phenomenon.

Use storytelling to engage an audience in a design, evoke a specific emotional response, or provide a rich context to enhance learning. When successfully employed, storytelling will cause audiences to experience and recall events of the story in a personal way — the story becomes a part of them. Great storytelling is what makes great design.

See also Archetypes, Psychological; Stickiness; von Restorff Effect; Wayfinding; Zeigarnik Effect

1. The seminal work on storytelling is Aristotle’s Poetics. Additional seminal references include The Hero with a Thousand Faces by Joseph Campbell, 1960, Princeton University Press; and How to Tell a Story and Other Essays by Mark Twain, 1996, Oxford University Press. A nice contemporary reference on visual storytelling is Graphic Storytelling by Will Eisner, 1996, Poorhouse Press.

Setting — Milestone events of the civil rights movement are presented with their dates and places. The memorial sits within the greater, historically relevant context of the Southern Poverty Law Center in Montgomery, Alabama.

Characters — The civil rights movement is a story of individual sacrifice toward the attainment of a greater good. Key activists and opponents are integral to the story and are listed by name.

Plot — Events are presented simply and concisely, listed in chronological order and aligned along a circular path. Progress in the civil rights movement is inferred from cause-effect relationships between events. No editorializing — just the facts.

Invisibility — The table is cantilevered to hide its structure. The black granite is minimal, providing maximum contrast with the platinum-inscribed lettering. The structure is further concealed through its interaction with water, which makes it a mirrored surface.

Mood — The table’s asymmetry suggests a theme of different but equal. The mirrored surface created by the water on black granite reveals the story in union with the reflected image of the viewer. The sound of water is calming and healing.

Movement — The flow of water against gravity suggests the struggle of the civil rights movement. As the water gently pours over the edge, the struggle is overcome. Simile becomes reality as water rolls down the back wall.

177 Streetlight Effect
A bias to search for things where it is easiest.

The streetlight effect is the tendency to focus on problems, questions, and data for reasons of convenience or availability rather than reasons of relevance or import. The effect borrows from a joke: A police officer sees a drunk searching for something under a streetlight. “What have you lost?” the officer asks. “My keys”, the drunk replies. The officer agrees to help look. After a few minutes, the officer asks, “Are you sure you lost them here?” The drunk replies, “No, I lost them in the park”. Incredulous, the officer asks, “Why are we searching here then?!” The drunk replies, “It’s too dark over there. This is where the light is”.

The streetlight effect is related to the aphorism, “Not everything that can be counted counts, and not everything that counts can be counted”. What is counted may simply be due to convenience, and what is not counted may simply be due to inconvenience. Two pervasive examples of the streetlight effect involve subject recruitment and experimental context.1

Recruiting subjects for experiments can be challenging, which is why it is common practice for researchers at universities to recruit their students. However, despite the convenience, findings based on research conducted with Western undergraduates limit the ability to generalize beyond that population — i.e., most people in the world are not from Western, Educated, Industrialized, Rich, and Democratic societies, or the acronym WEIRD. The rise of social media research and the ease of accessing online data has added a letter to this acronym: “O” for online, or WEIRDO.2

Studying phenomena in real contexts is also challenging. It is easier to conduct experiments in a lab, online, or in other artificial settings than in real-world contexts, which means that findings will often not generalize to the real world. For example, we know education is heavily influenced by what happens in the home, but despite this, most educational research is conducted in a lab or classroom.

Consider the streetlight effect when designing research or evaluating research findings. In design contexts, favor research that uses real customers or users in real contexts when possible. Ensure that problems, questions, and data center on the right population in authentic contexts versus easily accessed populations and convenient contexts. The path of least resistance isn’t always the wrong path, but it is often not the path to enlightenment.

See also Desire Line; Selection Bias; Survivorship Bias

1. The effect borrows from Muslim folklore, but its modern form is attributed to The Conduct of Inquiry by Abraham Kaplan, 1964, Chandler Publishing. This principle is also known as the drunkard’s search principle. The “counted counts” quote is often attributed to Einstein but likely originates with the sociologist William Bruce Cameron in his 1963 text Informal Sociology: A Casual Introduction to Sociological Thinking, Random House.

2. See, for example, “Big data’s ‘streetlight effect’: where and how we look affects what we see” by Mark Moritz, May 17, 2016, The Conversation, www.theconversation.com.

Robert McNamara applied the same data-driven analysis at the Pentagon that he used when he was president of Ford Motor Company. But there was one big difference: At Ford, McNamara could ask just about any question and get a data-driven answer to it; but in war, there are questions that you just can’t get the data to answer. This led McNamara to focus on things that could be easily measured (e.g., number of enemy killed) and to ignore the things that he couldn’t (e.g., feelings of the Vietnamese people) — which, in many cases, were the most important. In other words, he looked for his answers under the streetlight.

The first step is to measure whatever can easily be measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide. — Daniel Yankelovich, describing the McNamara Fallacy in his 1972 paper “Corporate Priorities: A continuing study of the new demands on business”

178 Structural Forms
There are three ways to create rigid things: solids, frames, and shells.

Structures are assemblages of elements used to support a load or contain and protect things. In many cases, the structure supports only itself (i.e., the load is the weight of the materials); in other cases, the structure supports itself and additional loads (e.g., a crane). Whether creating a museum exhibit, large sculpture, 3D billboard, or temporary shelter, a basic understanding of structure is essential to successful design.1

There are three basic types of structures:

1. Mass structures — Materials are put together to form a solid structure. The strength of a mass structure is a function of the weight and hardness of the materials. Examples of mass structures include dams, adobe walls, and mountains. Mass structures are robust (i.e., small amounts of the structure can be lost with little effect on the strength of the structure) but are limited in application to relatively simple designs. Consider mass structures for barriers, walls, and small shelters, especially in primitive environments where resources are limited.

2. Frame structures — Struts are joined to form a framework. The strength of a frame structure is a function of the strength of the elements and joints and their organization. Often a cladding or skin is added to the frame, but this rarely adds strength to the structure. Examples of frame structures include most modern homes, bicycles, and skeletons. Frame structures are relatively light, flexible, and easy to construct. Consider frame structures for most large design applications.

3. Shell structures — A thin material is wrapped around to contain a volume. Shell structures maintain their form and support loads without a frame or solid mass inside. Their strength is a function of their ability to distribute loads throughout the whole structure. Examples of shell structures include bottles, airplane fuselages, domes, and eggs. Shell structures are lightweight and economical with regard to material but are complex to design and vulnerable to catastrophic failure if the structure has imperfections or is damaged. Consider shell structures for containers, small cast structures, shelters, and designs requiring very large and lightweight spans.2

Consider structural forms in the structural aspects of designs. Favor mass structures when simplicity and robustness are priorities. Favor frame structures when efficiency and strength are priorities. Favor shell structures when minimizing weight and enclosing volumes are priorities.

See also Factor of Safety; Modularity; Redundancy; Saint-Venant’s Principle; Scaling Fallacy

1. Excellent introductions to the dynamics of structural forms are Why Buildings Stand Up by Mario Salvadori, 1990, W.W. Norton & Company; and Why Buildings Fall Down by Matthys Levy and Mario Salvadori, 1992, W.W. Norton & Company.

2. Note that shell structures can be reinforced to better withstand dynamic forces. For example, monolithic dome structures apply concrete over a rebar-reinforced foam shell structure. The resulting structural form is likely the most disaster-resistant structure available short of moving into a mountain.

The Geocell Rapid Deployment Flood Wall is a modular plastic grid that can be quickly assembled and filled with dirt by earthmoving equipment. The resulting mass structure forms an efficient barrier to flood waters at a fraction of the time and cost of more traditional methods (e.g., sandbag walls).

The Statue of Liberty demonstrates the flexibility and strength of frame structures. Its iron frame structure supports both itself (125 tons [113.4 MT]) and its copper cladding (100 tons [90.7 MT]). Any resemblance of the frame structure to the Eiffel Tower is more than coincidence, as the designer for both structures was Gustave Eiffel.

Icosa Shelters exploit many intrinsic benefits of shell structures: They are inexpensive, lightweight, and strong. Designed as temporary shelters for the homeless, Icosa Shelters are easily assembled by folding sheets of precision die-cut material together and sealing them with tape.


Sunk Cost Effect The tendency to keep investing in an endeavor because of past investments in that endeavor. People frequently let past investments influence their decision-making. For example, you buy a nonrefundable ticket to a show, decide you no longer want to go, but you go anyway because you paid for it. Rationally speaking, past investments, or sunk costs, should not influence decision-making. Only the cost-benefits of current options should influence decisions.1

Sunk costs refer to any type of prior investment, including money, time, effort, energy, and emotion. For example, the longer one has invested emotionally in a relationship, the harder it is to break up. When designers invest time and energy in a particular feature or direction, the resistance to editing or removing the feature or to materially changing direction is a manifestation of the sunk cost effect. The aphorism "[always be prepared to] murder your darlings" was coined to help designers (in this case, writers) override the sunk costs of time, energy, and emotion invested in things they have created.2

People are susceptible to the sunk cost effect because they are motivated to believe they made competent decisions, because they fear losses more than they desire gains, and because they do not want to feel or appear wasteful. The effect leads people and organizations to make irrational decisions, throwing good money after bad. In cases where investments extend over long periods, people and organizations often continue failing investments to avoid reckoning with the failure. This delaying tactic is employed to avoid liability, to hope for a change in the organizational or political landscape that would avert failure, or to push responsibility for the failure onto successors.

Recognizing the sunk cost effect is the first step to managing it. In decision-making contexts, focus on current cost-benefits only. Set clear exit criteria from the beginning to disrupt the effect and establish when investments should stop. Beware the words "We have too much invested to quit". Beware programs that start by soliciting small commitments or investments, as they are more successful at later soliciting large commitments or investments. And beware of pitches for new investments based on past investments.

See also Cognitive Dissonance; Cost-Benefit; Framing; Status Quo Bias
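A minimal sketch of sunk-cost-free decision-making follows; the project names and every dollar figure are invented for illustration. The amount already spent appears in the data but deliberately never enters the comparison.

```python
# Compare options on future costs and benefits only; sunk costs are
# recorded but intentionally ignored. All numbers are hypothetical.

def net_benefit(option: dict) -> float:
    """Forward-looking comparison: expected benefit minus remaining cost."""
    return option["expected_benefit"] - option["remaining_cost"]

continue_project = {
    "name": "continue current project",
    "sunk_cost": 900_000,        # already spent; irrelevant either way
    "remaining_cost": 400_000,
    "expected_benefit": 500_000,
}
switch_project = {
    "name": "switch to alternative",
    "sunk_cost": 0,
    "remaining_cost": 300_000,
    "expected_benefit": 600_000,
}

best = max((continue_project, switch_project), key=net_benefit)
print(f"rational choice: {best['name']} (net {net_benefit(best):,.0f})")
```

Note that "we already spent 900,000" never appears in `net_benefit`; that omission is the entire discipline the principle asks for.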

1. The seminal works on the sunk cost effect are "Toward a Positive Theory of Consumer Choice" by Richard Thaler, 1980, Journal of Economic Behavior & Organization, 1(1), 39–60; and "The Psychology of Sunk Costs" by Hal Arkes and Catherine Blumer, 1985, Organizational Behavior and Human Decision Processes, 35, 124–140.

2. This adage is commonly misattributed to William Faulkner and Robert Louis Stevenson. The phrase seems to have been first written by the English writer Sir Arthur Quiller-Couch in "On the Art of Writing", 1914.

The British and French governments funded development of the Concorde SST long after they knew it would be an economic failure. Why? They had invested too much to quit.

Don’t cling to a mistake just because you spent a lot of time making it. — Aubrey de Grey (attributed)


Supernormal Stimulus An exaggerated feature that elicits a stronger response than the real feature. A supernormal stimulus is a variation of a familiar stimulus that elicits a response stronger than the stimulus from which it evolved. For example, female cuckoos sneak into the nests of other birds to lay their eggs. Because the cuckoo egg is typically larger and brighter than the other eggs, the nest's owner gives it preferential attention. The size and brightness of the egg are supernormal stimuli to the unwitting adoptive mother.1

Supernormal stimuli dramatically influence the way people respond to brands, products, and services. It is generally the case that anything that sells like crazy is probably some type of supernormal stimulus.2 For humans, supernormal stimuli include foods rich in salt, sugar, and fats; attention-grabbing quick cuts and motion cues in movies and video games; augmented chest-waist-hip ratios through surgery or corseting; highlighted facial features using cosmetics; exaggerated body and facial features in dolls; propaganda about menacing enemies; and contexts like the extreme scarcity of a product. All of these things appeal to humans because they manipulate instinctive responses in our brains. Our only sense of this manipulation is the awareness that we are drawn to these things. People who lack the ability to inhibit their impulsive drives — e.g., young children — are more susceptible to these kinds of stimuli.

When stimuli are involved in well-established biases and preferences, responses are stronger. For example:

• Baby-faced products like Cabbage Patch Dolls, Disney characters, and Volkswagen Beetles are highly appealing because humans, especially females, evolved to like baby faces.

• Characters with exaggerated proportions, like superheroes and Barbie dolls, are highly appealing because humans evolved to see V-shaped torsos in men as a sign of strength and dominance and hourglass-shaped torsos in women as a sign of fertility.

• Foods like Double Stuf Oreos and McDonald's french fries are highly appealing because humans evolved to like fat and sugar due to their caloric density.

Consider supernormal stimuli to grab attention and hold interest in logos, brands, products, and advertising. Explore stimuli involved in well-established biases and preferences for greatest effect.

See also Archetypes, Psychological; Baby-Face Bias; Gloss Bias; Scarcity; Waist-to-Hip Ratio

1. The seminal work is The Study of Instinct by Niko Tinbergen, 1951, Clarendon Press. For a popular overview, see Supernormal Stimuli by Deirdre Barrett, 2010, W.W. Norton & Co.

2. The fake beauty filters available on image- and video-based social media are supernormal stimuli enhancers, which is what makes them so addictive and potentially harmful to children.

Exaggerations of things we have evolved to like — e.g., attractive features, fat and sugar, baby faces — grab our attention.


Survivorship Bias The tendency to overemphasize things that survive a selection process. The survivorship bias is a type of selection bias that occurs when conclusions are drawn from the group that survived some selection process, ignoring the group that didn't survive. The bias can lead to overly optimistic beliefs about prospects for success and often leads people to infer causation from correlation. For example, many have observed that Bill Gates, Steve Jobs, and Mark Zuckerberg all dropped out of college and concluded that dropping out of college is the best path to becoming a successful entrepreneur. This analysis, of course, ignores the thousands of students who dropped out of college and did not achieve entrepreneurial success.1

The survivorship bias is driven by asymmetric visibility: Survivors are typically available and often celebrated — i.e., highly visible — whereas non-survivors are often not available and just as often ignored. This problem is compounded by the fact that it isn't always possible to glean the secrets of success by studying survivors: Success is often not a function of anything survivors did or what happened to them but, rather, a function of what they didn't do or what didn't happen to them. For this reason, studying the attributes or circumstances surrounding winners alone can lead to highly intuitive, but profoundly incorrect, conclusions about the causal factors behind their success. In fact, many survivors themselves come to develop superstitious beliefs and fictional narratives about their success, when their success was largely due to luck or circumstances beyond their control.2

The survivorship bias is best overcome by considering survivors alongside non-survivors so that comparisons and contrasts between their attributes and circumstances are apparent. This is most effectively achieved through statistical graphs or similar representations, which allow people to clearly see complex patterns and relationships between variables.

Beware the survivorship bias when modifying or upgrading designs in response to data or when trying to identify strategies for success. Never look at the winners alone. Consider winners and losers together, plotting or graphing attribute and circumstance data so that differences are readily apparent. Favor statistical over anecdotal evaluation to determine causal relationships. Be skeptical of intuitive explanations and narratives based on survivors, even when they come from the survivors themselves.

See also Causal Reductionism; Comparison; Selection Bias; Streetlight Effect; Visibility
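The sketch below simulates the bias with invented data. Survival is pure chance here, so a trait observed among survivors (a dropout founder) appears at exactly its base rate; only by comparing survivors against the full population does it become clear that the trait predicts nothing.

```python
# Simulated startups: the trait and the outcome are independent by
# construction, so any story linking them is a survivorship artifact.
# All probabilities are invented for illustration.

import random

random.seed(42)

startups = [{"dropout_founder": random.random() < 0.3,
             "survived": random.random() < 0.1}   # survival is pure luck here
            for _ in range(10_000)]

def dropout_rate(group):
    return sum(s["dropout_founder"] for s in group) / len(group)

survivors = [s for s in startups if s["survived"]]
print(f"dropout rate among survivors: {dropout_rate(survivors):.1%}")
print(f"dropout rate among everyone:  {dropout_rate(startups):.1%}")
# Both rates come out near 30%: the trait is common among survivors
# only because it is common overall, not because it causes survival.
```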

1. The survivorship bias was recognized as far back as 2,000 years ago, but the seminal work is "A Method of Estimating Plane Vulnerability Based on Damage of Survivors" by Abraham Wald, 1943/1980 (reprinted), Center for Naval Analyses, CRC 432.

2. See, for example, Fooled by Randomness by Nassim Taleb, 2001, Random House.

The red dots indicate areas of combat damage received by surviving WWII bombers. Where would you add armor to increase survivability? The statistician Abraham Wald recommended reinforcing the areas without damage. Since these data came from surviving aircraft only, bombers hit in undotted areas were the ones that did not make it back.

Gentlemen, you need to put more armour-plate where the holes aren’t because that’s where the holes were on the airplanes that didn’t return. — Abraham Wald, 1942 (attributed)


Swiss Cheese Model A model describing how risks combine to cause accidents and other bad outcomes. The Swiss cheese model is a risk management framework based on the metaphor of Swiss cheese slices stacked next to one another. Each slice of cheese represents a layer of defense against an undesirable outcome, and the holes in the slices represent weaknesses or gaps in those defenses. Having multiple layers of defense reduces the risk of bad outcomes when the layers have different weaknesses — i.e., their holes don't line up. For example, in preventing the spread of an airborne virus, one layer of defense is vaccination, and another layer of defense is wearing a mask. Neither defense is perfect, but because they work to prevent infection in different ways and have different shortcomings — i.e., the holes don't line up — the two layers combined offer much stronger protection than either alone.1

The Swiss cheese model distinguishes between two kinds of failures, both represented as holes in the cheese slices:

1. Active failures — Unsafe acts committed by people.

2. Latent failures — Poorly designed elements in procedures, systems, buildings, etc. that lend themselves to errors.

Bad outcomes result when the latent-failure holes and active-failure holes line up across multiple slices, allowing undesirable things to get through. For example, latent failures could be poor safety training and installing water-based fire extinguishers near electrical panels; an active failure would be using a water-based fire extinguisher on an electrical fire.

One place the Swiss cheese metaphor runs into trouble is that the holes in the model are dynamic, not static. That is, they are continuously opening and closing at different rates. For example, poorly maintained equipment is a hole until it is serviced properly, at which point the hole closes. Also, holes encountered in early layers can affect holes in later layers. For example, poor training as an early latent failure increases the probability of unsafe acts in a later active layer.

Consider the Swiss cheese model to prevent accidents, increase safety, and mitigate bad outcomes. Since no single layer of defense is perfect, design systems with multiple, diverse layers of defense such that the holes don't line up. Create systems that can adapt to unanticipated situations by dynamically adding or modifying layers and closing holes once identified.

See also Brown M&M's; Causal Reductionism; Don't Eat the Daisies; Error, Design; Error, Human; Root Cause
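The arithmetic behind the metaphor is easy to sketch. Assuming independent layers (the idealized case where holes never line up systematically), the probability that a threat breaches every layer is the product of the per-layer failure probabilities. The layer names and numbers below are illustrative only; correlated weaknesses, i.e., holes that do line up, would raise the combined risk above this estimate.

```python
# Combined breach probability under the independence assumption:
# a threat gets through only if it slips past every layer.

from math import prod

def breach_probability(hole_probs) -> float:
    """Probability a threat passes all layers (layers assumed independent)."""
    return prod(hole_probs)

# Hypothetical per-layer failure rates against an airborne virus:
layers = {
    "vaccination": 0.40,       # fails to block infection 40% of the time
    "masking": 0.50,
    "air filtration": 0.60,
}

combined = breach_probability(layers.values())
print(f"best single layer: {min(layers.values()):.0%} breach risk")
print(f"all layers stacked: {combined:.0%} breach risk")   # 0.4*0.5*0.6 = 12%
```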

1. The seminal work is Human Error by James Reason, 1990, Cambridge University Press.

There is no way to eliminate risk, but there are ways to minimize it. The Swiss cheese model provides a metaphor to think about risk-reduction strategies as layers of cheese, with strengths represented as solid areas and weaknesses represented as holes. Bad outcomes happen when a threat makes it through all of the layers. Since all strategies have some weaknesses, the best way to reduce risk is to use multiple, diversified strategies such that their holes never line up. The illustrated layers of defense: social distancing, wearing masks, filtering indoor air, and getting vaccinated.


Symmetry A property of similar or exact correspondence between the configuration of elements. Symmetry is the most basic and enduring aspect of beauty. Its aesthetic appeal is likely innate and is therefore strongest in young children and in evolutionary-type contexts. For example, in evaluating facial features as a means of assessing health and genetic fitness, symmetrical faces are perceived to be more attractive than asymmetrical faces.1 In non-evolutionary-type contexts, however, the preference is sensitive to learning effects. For example, people with training in the arts and humanities tend to find asymmetrical things to be more, not less, beautiful. It seems that asymmetry is an acquired aesthetic taste that overrides the innate preference, which in this case is considered banal.2

There are three basic types of symmetry:

1. Reflection symmetry — Elements are mirrored around a central axis or mirror line. The bodies of most animals, including humans, exhibit reflection symmetry.

2. Rotation symmetry — Elements are rotated around a common center. Sunflowers, starfish, and snowflakes all exhibit rotation symmetry. A variation of rotation symmetry is spiral symmetry, which can be seen in hurricanes, galaxies, and nautilus shells.

3. Translation symmetry — Elements are repeated in different areas of space. Repeating wallpaper patterns, bee honeycombs, and schools of fish exhibit translation symmetry.

Aside from their aesthetic appeal, symmetric forms have other qualities that are potentially beneficial to designers: Symmetric forms tend to be seen as figure images rather than ground images, which means they receive more attention and are better recalled than other elements; symmetric forms are simpler than asymmetric forms, which also gives them an advantage with regard to recognition and recall; symmetric forms tend to be more compressible and economical in their complexity, which means they can be represented with less information; and symmetry helps discriminate designed things and living organisms from inanimate objects.3

Use symmetry to convey balance and stability. Favor symmetric forms with children and mass audiences. Use simple symmetries when recognition and recall are important and combinations of symmetries when aesthetics and interestingness are important. When addressing audiences trained in art or design, incorporate asymmetry in the design.

See also Alignment; Attractiveness Bias; MAFA Effect; Self-Similarity; Wabi-Sabi
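The three symmetry types can be demonstrated concretely with a few array operations. This small Python sketch uses NumPy; the 2×2 motif is arbitrary and purely illustrative.

```python
# Generate the three symmetry types from one small binary motif.
import numpy as np

motif = np.array([[1, 0],
                  [0, 1]])

reflection = np.hstack([motif, np.fliplr(motif)])  # mirrored around a vertical axis
rotation = np.rot90(motif)                         # rotated around a common center
translation = np.tile(motif, (1, 3))               # repeated across space

for name, pattern in [("reflection", reflection),
                      ("rotation", rotation),
                      ("translation", translation)]:
    print(name, pattern.tolist())
```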

1. A seminal work on symmetry in design is The Elements of Dynamic Symmetry by Jay Hambidge, 1926, Brentano's, Inc.

2. "Symmetry Is Not a Universal Law of Beauty" by Helmut Leder et al., 2019, Empirical Studies of the Arts, 37(1), 104–114.

3. See, for example, "The Status of Minimum Principle in the Theoretical Analysis of Visual Perception" by Gary Hatfield and William Epstein, 1985, Psychological Bulletin, 97, 155–186; and "Facial Resemblance Enhances Trust" by Lisa M. DeBruine, 2002, Proceedings of The Royal Society: Biological Sciences, 269(1498), 1307–1312.

Diagrams illustrate the three symmetry types: reflection around a mirror line, translation, and rotation.

The Notre Dame Cathedral uses multiple, complex symmetries to create a structure that is as beautiful as it is memorable.

The desire for symmetry, for balance, for rhythm in form as well as in sound, is one of the most inveterate of human instincts. — Ogden Codman Jr. and Edith Wharton The Decoration of Houses


Testing Pyramid A multilayered framework to systematize testing and reduce defects. The testing pyramid is a framework used to efficiently identify bugs and defects. The notion is that testing strategies vary in time, cost, and effectiveness and, therefore, an intentional mix of testing strategies should be used to balance cost with benefit.1 While there are many variations of the testing pyramid, all good variations include these three testing strategies:

1. Unit testing — Focuses on testing individual components or functionalities in isolation, independent of the greater system or context. Unit tests form the base of the pyramid because they comprise the bulk of testing efforts, estimated to be about 70% of the quality-assurance process.2

2. Integration testing — Focuses on testing clusters of components or functionalities together, often in connection with the greater system or context. As such, integration testing is more complex and takes longer than unit testing. Integration tests form the middle of the pyramid because they comprise the next largest set of testing efforts, estimated to be about 20% of the quality-assurance process.

3. End-to-end testing — Simulates real-world functionality as closely as possible. It is the most expensive and time-consuming to conduct, which is why it is often done poorly or skipped altogether. End-to-end tests form the top of the pyramid because they comprise the smallest set of testing efforts, estimated to be about 10% of the quality-assurance process.

Good testing involves both happy-path scenarios — i.e., situations featuring typical use and no error conditions — and unhappy-path scenarios — i.e., situations featuring atypical use and extreme error conditions. Happy-path scenarios confirm functionality when things go right. Unhappy-path scenarios confirm error-trapping, user forgiveness, and system recovery when things go wrong. Product teams tend to be happy-path testers, whereas real users tend to be unhappy-path testers.

Use the testing pyramid to guide quality-assurance efforts. Use a mix of unit, integration, and end-to-end testing strategies prior to release, calibrating the level of testing to the risks involved. Use a mix of happy-path and unhappy-path scenarios, ensuring that real users participate in the process. For life-critical or mission-critical systems, do not abridge or shortcut integration and end-to-end testing. There will always be time pressures — don't skip it.

See also Cost-Benefit; Error, Design; Error, Human; Forgiveness; Iteration; Prototyping; Survivorship Bias
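As a concrete illustration, here is a minimal Python sketch of the pyramid's two lower layers applied to a hypothetical checkout module (all names are invented): the unit tests exercise one function in isolation, covering a happy and an unhappy path, while the integration test exercises the cart and the discount logic together. End-to-end tests, which would drive a real interface or API, are omitted.

```python
# Toy module under test (hypothetical names).
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class Cart:
    def __init__(self):
        self.items = []
    def add(self, price: float):
        self.items.append(price)
    def total(self, discount_percent: float = 0) -> float:
        return apply_discount(sum(self.items), discount_percent)

# Unit tests: one function, in isolation (the bulk of the effort).
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_unhappy_path():
    try:
        apply_discount(100.0, 150)   # atypical input must be trapped
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"

# Integration test: components working together (fewer of these).
def test_cart_total_with_discount():
    cart = Cart()
    cart.add(60.0)
    cart.add(40.0)
    assert cart.total(discount_percent=10) == 90.0

if __name__ == "__main__":
    for test in (test_apply_discount_happy_path,
                 test_apply_discount_unhappy_path,
                 test_cart_total_with_discount):
        test()
    print("all tests passed")
```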

1. Variations of the testing/QC pyramid arguably date back to WWII. A modern version of the testing pyramid was introduced in Succeeding with Agile: Software Development Using Scrum by Mike Cohn, 2009, Addison-Wesley Professional.

2. Google has popularized a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. See, for example, "Just Say No to More End-to-End Tests" by Mike Wacker, Apr 22, 2015, Google Testing Blog.

In December 2019, the Boeing CST-100 Starliner test flight was cut short due to software bugs, which were identified during testing on the ground after Starliner's launch. One bug prevented the spacecraft from docking with the International Space Station, and the other could have resulted in catastrophic damage to the capsule during re-entry. The Boeing teams had performed extensive unit testing on systems, but they cut the more time-intensive integration and end-to-end testing short. These tests would have revealed the bugs prior to launch. Catastrophe was averted, and hopefully lessons were learned.


Threat Detection Threatening things are detected more efficiently than nonthreatening things. People are born with automatic visual detection mechanisms for certain threatening stimuli, such as snakes. These threatening stimuli are detected more quickly than nonthreatening stimuli, and this ability is thought to have evolutionary origins. Efficiently detecting threats no doubt provided a selective advantage for our human ancestors.1

When presented with images containing threatening elements, such as spiders, and nonthreatening elements, such as flowers, people locate the threatening elements more quickly than the nonthreatening elements. Similarly, people can locate an angry face in a group of happy or sad faces more quickly than a happy or sad face in a group of angry faces. The ability to detect evolutionarily threatening stimuli is a function of perceptual processes that automatically scan the visual field below the level of conscious awareness. Unlike conscious processing, which is slow and sequential, threat detection occurs quickly and in parallel with other visual and cognitive processes.2

Almost anything possessing the key threat features of snakes, spiders, and angry faces can trigger the threat-detection mechanism, such as the wavy line of a snake, the thin legs and large circular body of a spider, and the V-shaped eyebrows of an angry face. This sensitivity to threat features explains why twigs and garden hoses often frighten young children and why people have a general fear of insects that superficially resemble spiders (e.g., roaches). When people have conscious fears or phobias of the threatening stimuli, the threat-detection ability is more sensitive, and search times for threatening stimuli are further reduced. Once attention is captured, threatening stimuli are better at holding attention than nonthreatening stimuli.

Consider threatening stimuli to rapidly attract attention and imply threat or foreboding (e.g., markers designed to keep people away from an area). Abstracted representations of threat features can trigger threat-detection mechanisms without the accompanying negative emotional reaction. Therefore, consider such elements to attract attention in noisy environments, such as a dense retail shelf display. Achieving a balance between maximum detectability and minimal negative affect is more art than science and, therefore, should be explored with caution and verified with testing on the target audience.

See also Aposematism; Archetypes, Psychological; Color Effects; Contour Bias; Freeze-Flight-Fight-Forfeit; Savanna Preference

1. The seminal theoretical work on threat detection in humans is The Principles of Psychology by William James, 1890, Henry Holt and Company. While the evidence suggests innate detection mechanisms for snakes, spiders, and angry faces, it is probable that similar detection mechanisms exist for other forms of threatening stimuli.

2. See "Emotion Drives Attention: Detecting the Snake in the Grass" by Arne Öhman et al., Sep 2001, Journal of Experimental Psychology: General, 130(3), 466–478; and "Finding the Face in the Crowd: An Anger Superiority Effect" by Christine Hansen and Ranald Hansen, 1988, Journal of Personality and Social Psychology, 54, 917–924.

In visually complex environments, threatening stimuli are detected more quickly than nonthreatening stimuli.

I don’t like spiders and snakes And that ain’t what it takes to love me Like I wannna be loved by you — Jim Stafford “Spiders and Snakes”


Top-Down Lighting Bias A tendency to interpret objects as being lit from a single light source from above. Humans are biased to interpret objects as being lit from a single light source from above. This bias is found across all age ranges and cultures and likely results from humans evolving in an environment lit from above by the sun. Had humans evolved in a solar system with more than one sun, the bias would be different.1

As a result of the top-down lighting bias, dark or shaded areas are commonly interpreted as being farthest from the light source, and light areas are interpreted as being closest to the light source. Thus, objects that are light at the top and dark at the bottom are interpreted as convex, and objects that are dark at the top and light at the bottom are interpreted as concave. In each case, the apparent depth increases as the contrast between light and dark areas increases. When objects have ambiguous shading cues, the brain switches back and forth between concave and convex interpretations.2

The top-down lighting bias can also influence the perceived naturalness or unnaturalness of familiar objects. Objects depicted with top-down lighting look natural, whereas familiar objects depicted with bottom-up lighting look unnatural. Designers commonly exploit this effect to create scary or unnatural-looking images.

There is evidence that objects look most natural and are preferred when lit from the top left, rather than from directly above. This effect is stronger for right-handed people than left-handed people and is a common technique of artists and graphic designers. For example, in a survey of over 200 paintings taken from the Louvre, the Prado, and the Norton Simon museums, more than 75% were lit from the top left. Top-left lighting is also commonly used in the design of icons and controls in computer software interfaces.3

The top-down lighting bias plays a significant role in the interpretation of depth and naturalness and can be manipulated in a variety of ways by designers. Use a single top-left light source when depicting objects or environments in a natural way. Explore bottom-up light sources when depicting objects or environments in an unnatural or foreboding way. Explore different levels of contrast between light and dark areas to vary the appearance of depth.

See also Perspective Cues; Figure-Ground; Threat Detection; Uncanny Valley

1. This principle is also known as the top-lighting preference and the lit-from-above assumption.

2. See "Perception of Shape from Shading" by Vilayanur Ramachandran, 1988, Nature, 331, 163–166; and "Perceiving Shape from Shading" by Vilayanur Ramachandran, Aug 1988, Scientific American, 256, 76–83.

3. "Where Is the Sun?" by Jennifer Sun and Pietro Perona, 1998, Nature Neuroscience, 1(3), 183–184.

From the time of its dedication in 1922, the lighting of the Lincoln Memorial was of particular concern to its designer, Daniel Chester French. Electric lighting was installed in 1926, and French followed up with photos to the commission responsible for the memorial showing exactly how it should be lit — i.e., from the top left (left image), making Lincoln look poised and thoughtful — and how it should not be lit — i.e., from the bottom (right image), making Lincoln look angry and scary. Unfortunately, his instructions have not always been followed, as demonstrated at President Obama's address at his first pre-inaugural concert and at George Bush Sr.'s pre-inaugural ceremony.

While the present lighting of the statue is tolerably good at some times of the day, it at no time brings out the expression of the face as it ought to. The ideal lighting for most sculpture is from above at an angle of forty-five degrees, more or less. — Daniel Chester French in a letter to the Lincoln Memorial Commission, 1922


Uncanny Valley Abstract and realistic depictions of human faces are appealing, but faces in between are not. Anthropomorphic forms are generally appealing to humans. However, when a form is very close but not identical to a healthy human — as with a mannequin or some computer-generated renderings of people — the form tends to become distinctly unappealing. This sharp decline in appeal is called the uncanny valley, a reference to the large valley or dip in the now classic graph presented by Masahiro Mori in 1970. Some attribute the negative affective response to a simple lack of familiarity with artificial and rendered likenesses, but more recent empirical research suggests the uncanny valley is a real phenomenon. The cause likely involves innate, subconscious mechanisms evolved for pathogen avoidance — i.e., detecting and avoiding people who are sick or dead.1

The strength of the negative reaction seems to correspond to the fidelity of the likeness — a highly realistic likeness that is identifiable as artificial will evoke a stronger negative reaction than a less realistic likeness. Abnormally proportioned or positioned facial features, asymmetry of facial features, subtleties of eye movement, and unnatural skin complexions are all sufficient conditions to trigger uncanny valley effects.

The uncanny valley is often observed by animators and roboticists. For example, in the movie The Polar Express, the computer-generated characters are depicted with a high degree of realism. The resulting effect was both impressively realistic and eerie. The movie raised awareness of what is called dead eye syndrome, in which the lack of the eye movements called saccades made the characters look zombielike, taking The Polar Express straight through the uncanny valley.

Any field working with anthropomorphic forms should beware the uncanny valley. For example, there is a general perception among retailers that the effectiveness of mannequins is a function of their realism. However, barring a mannequin that is indistinguishable from a real person, the uncanny valley suggests that retailers would be better served by more abstract versus highly realistic mannequins.

Consider the uncanny valley when representing and animating humanlike forms. Opt for more abstract versus realistic humanlike forms to achieve maximum acceptance. Negative reactions are more sensitive to motion than appearance, so be particularly cognizant of jerky or unnatural movements when animating anthropomorphic bodies and faces.

See also Anthropomorphism; Attractiveness Bias; Magic Triangle; Threat Detection; Top-Down Lighting Bias

1. The seminal work on the uncanny valley is "Bukimi No Tani [The Uncanny Valley]" by Masahiro Mori, 1970, Energy, 7(4), 33–35; see also "Too Real for Comfort? Uncanny Responses to Computer Generated Faces" by Karl MacDorman et al., May 2009, Computers in Human Behavior, 25(3), 695–710; and "The Uncanny Valley: Effect of Realism on the Impression of Artificial Human Faces" by Jun'ichiro Seyama and Ruth Nagayama, Aug 2007, Presence, 16(4), 337–351.

Masahiro Mori's classic graph plots familiarity or appeal of an anthropomorphic form against its degree of realism, with separate curves for still and moving forms: appeal rises from industrial robots through stuffed animals and humanoid robots, plunges into the valley at prosthetic hands, corpses, and zombies, and recovers at bunraku puppets and healthy people. The uncanny valley resides to the right of the continuum, dipping sharply just before the likeness of a genuine healthy person. The mannequin images illustrate the benefits of abstraction and total realism in depicting human likenesses, as well as the perils of the uncanny valley.


Uncertainty Principle Measuring things can change them, often making the results invalid. The act of measuring sensitive variables in any system can alter them and confound the accuracy of the measurement. For example, a common method of measuring computer performance is event logging: Each event performed by the computer is recorded. Event logging increases the visibility of what the computer is doing and how it is performing, but it also consumes computing resources, which interferes with the performance being measured.1

The uncertainty introduced by a measure is a function of:

• Sensitivity of the variables — The ease with which a variable in a system is altered by the measure.

• Invasiveness of the measure — The amount of interference introduced by the measure.

Generally, the more sensitive the variable, the less invasive the measure should be. For example, asking people what they think about a product's features is a highly invasive measure that can yield inaccurate results. But inconspicuously observing the way people interact with the features is a minimally invasive measure and will yield more reliable results.

In cases where highly invasive measures are used over long periods of time, it is common for systems to become permanently altered in order to adapt to the disruption of the measure. For example, the goal of standardized testing is to measure student knowledge and predict achievement. However, the high stakes associated with these tests change the system being measured — high stress levels cause many students to perform poorly; schools focus on teaching to the test to give students an advantage; students seek training on how to become test-wise and answer questions correctly without really knowing the answers; and so on. The validity of the testing is thus compromised, and the invasiveness of the measure changes the focus of the system from learning to test preparation.

Use low-invasive measures whenever possible. Avoid high-invasive measures; they yield questionable results, reduce system efficiency, and can result in the system adapting to the measures. Consider using natural system indicators of performance when possible (e.g., number of widgets produced) rather than measures that will consume resources and introduce interference (e.g., an employee log of hours worked).2

See also Abbe Principle; Feedback Loop; Garbage In–Garbage Out; Streetlight Effect
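A minimal Python sketch of measurement invasiveness: the same CPU-bound task is timed plain and then with per-event logging switched on. The workload and numbers are contrived, not a benchmark, but the pattern generalizes: the more invasive the measure, the more it distorts the thing it reports on.

```python
# Contrast a minimally invasive measure (time the whole run once)
# with a highly invasive one (record every event). The event log
# consumes resources and so interferes with the performance measured.

import time

def work(n: int, log: bool = False) -> int:
    total = 0
    events = []
    for i in range(n):
        total += i * i
        if log:
            events.append(f"step {i}: total={total}")  # per-event logging
    return total

def timed(fn, *args, **kwargs) -> float:
    start = time.perf_counter()
    fn(*args, **kwargs)
    return time.perf_counter() - start

n = 200_000
plain = timed(work, n)                  # natural indicator: wall time only
instrumented = timed(work, n, log=True)  # the measure alters the system
print(f"plain run:      {plain:.3f}s")
print(f"with event log: {instrumented:.3f}s")
```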

1. This principle is based on Heisenberg's uncertainty principle in physics. Heisenberg's uncertainty principle states that both the position and momentum of an atomic particle cannot be known because the simple act of measuring either one of them affects the other.

2. Remember the maxim: Not everything that can be counted counts, and not everything that counts can be counted.

Pictured: a weekly food planner with breakfast, lunch, and dinner entries for each day. Many weight-loss programs promote keeping a food journal to lose weight — i.e., writing down everything you eat. Why does this work? Tracking what you eat changes how you eat.


Uniform Connectedness The brain automatically assumes elements connected by lines or boxes are related. Uniform connectedness, one of the Gestalt principles of perception, asserts that elements connected to one another by uniform visual properties are perceived as a single group or chunk and are interpreted as being more related than elements that are not connected. For example, a simple matrix composed of dots is perceived as columns when common regions or lines connect the dots vertically and as rows when common regions or lines connect the dots horizontally.1

There are two basic strategies for applying uniform connectedness:

1. Common regions — Formed when edges come together and bound a visual area, grouping the elements within the region. This technique is often used to group elements in software interfaces and buttons on remote controls.

2. Connecting lines — Formed when an explicit line joins elements, grouping the connected elements. This technique is often used to connect elements that are not otherwise obviously grouped (e.g., not located close together) or to imply a sequence.

Common interface structures that make use of the uniform connectedness principle include tabbed interfaces that use lines and colored regions to connect content with the appropriate title, and lines in a spreadsheet that create regions of related information (rows).

Uniform connectedness will generally overpower the other Gestalt principles. In a design where uniform connectedness is at odds with proximity or similarity, the elements that are uniformly connected will appear more related than either the proximal or similar elements. This makes uniform connectedness especially useful when correcting poorly designed configurations that would otherwise be difficult to modify. For example, the location of controls on a control panel is generally not easily modified, but a particular set of controls can be grouped by connecting them in a common region using paint or overlays. In this case, uniform connectedness will overwhelm and correct the poor control positions.

Consider the uniform connectedness principle when the goal is to visually connect or group elements in a design. Employ common regions to group text elements and clusters of control elements, and employ connecting lines to group individual elements and imply sequence. Consider this principle when correcting poorly designed control and display configurations.

See also Closure; Common Fate; Figure-Ground; Good Continuation; Proximity; Similarity
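A plain-text sketch makes the dot-matrix example concrete: the same grid of dots reads as rows or as columns depending only on which neighbors a line connects.

```python
# Render the same 3x3 grid of dots twice; only the connecting lines
# change, and with them the perceived grouping (rows vs. columns).

rows_connected = "\n".join(["o--o--o"] * 3)

cols_connected = "\n".join(["o  o  o",
                            "|  |  |",
                            "o  o  o",
                            "|  |  |",
                            "o  o  o"])

print("connected horizontally (read as rows):")
print(rows_connected)
print("\nconnected vertically (read as columns):")
print(cols_connected)
```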

1. The seminal work on uniform connectedness is "Rethinking Perceptual Organization: The Role of Uniform Connectedness" by Stephen Palmer and Irvin Rock, 1994, Psychonomic Bulletin & Review, 1, 29–55.

The sets of shapes (top) demonstrate the efficacy of uniform connectedness as a grouping strategy, which overrides both color and proximity. The remote control (right) uses a mix of grouping strategies to organize a large number of functions.


User-Centered vs. User-Driven Design A focus on understanding and meeting user needs versus simply implementing user requests. A popular credo of early-twentieth-century hospitality and retail was, "The customer is always right". The sentiment was fine enough, but the phrasing conflated being of service with being servile, which led to unhealthy and unintended consequences. As a result, this motto and service norm has largely fallen out of favor — except, somewhat surprisingly, in many design circles. With the rise of "user-centered design" and the backlash against engineering- and designer-centered design came a rise in designer servility: a mindset that design was about asking people what they want and giving it to them rather than understanding and solving their problems.1

Not only are customers/users not always right; they are very often wrong. This should not be surprising, as they are not professionally trained designers. User-driven design culture mistakes engaging users in the design process for placing them at the helm, and when they are at the helm, the results are invariably disastrous.2 Users can give basic feedback on the known, the familiar — e.g., reactions to existing features and functions. However, they cannot give feedback on the unknown, on what's possible, or on what represents the best set of tradeoffs. Nor should they be expected to. User-centered design charges designers with understanding user needs better than the users themselves, understanding the boundaries of what's possible, and understanding the best set of tradeoffs required to achieve a goal.3

The user-driven design mindset is abetted by the tendency of people to articulate problems as solutions. This inclines many designers to do as users ask to please them, but it is not a designer's responsibility to simply execute naïve proposals. Good designers should, instead, decode user-proposed solutions to more deeply understand the problem. In this sense, design is like detective work: Designers should listen to everyone but trust no one; record how people say they act, but then observe what they actually do.4

If all there was to design was implementing user requests, there would be no need for designers. Therefore, do not interpret user-proposed solutions as literal solutions but, rather, as clues to a problem to be solved. Integrate users early in design to help understand the problem, but do not let them direct or unduly influence the process. Consider all evidence and feedback skeptically, like a detective. Aspire to be of service but not servile.

See also Box's Law; Cost-Benefit; Design by Committee; Desire Line; Dunning-Kruger Effect; Performance vs. Preference

1. There are alternate terms and various flavors and subtypes: user-led, designer-led, participatory, cooperative, etc. Here, UCD broadly refers to designer-managed processes that actively engage users, whereas UDD refers to design processes directed by users. For background, see User-Centered System Design: New Perspectives on Human-Computer Interaction by Donald Norman and Stephen Draper (Eds.), 1986, CRC Press; and The Inmates Are Running the Asylum by Alan Cooper, 1999/2004, Sams Publishing.

2. See, for example, "Boaty McBoatface: What You Get When You Let the Internet Decide" by Katie Rogers, Mar 21, 2016, The New York Times.

3. See, for example, "User-Led Innovation Can't Create Breakthroughs: Just Ask Apple and Ikea" by Jens Martin Skibsted and Rasmus Bech Hansen, 2011, Fast Company; and "User-Led Innovation Can't Create Breakthroughs" by Steve Denning, Feb 15, 2011, Forbes.

4. See, for example, "First Rule of Usability? Don't Listen to Users" by Jakob Nielsen, Aug 4, 2001, Nielsen Norman Group, www.nngroup.com.


In a classic episode of The Simpsons, “Oh Brother, Where Art Thou?”, Homer discovers his long-lost brother, Herbert Powell, who owns his own car company. Herb is convinced that his designers are out of touch with what consumers really want, and so he puts Homer in charge and tells his designers and engineers to do whatever Homer says. In an example of user-driven design, Homer tells them what he wants — and they deliver “The Homer”, the “car designed for the average man”. At the unveiling, Herb learns that the car costs $82,000 and realizes that his car company is doomed. Users should be central to the design process, but they should not drive it.

You know that little ball you put on the aerial so you can find your car in a parking lot? That should be on every car…and some things are so snazzy they never go out of style. Like tail fins and bubble domes and shag carpeting…I want a horn here, here, and here. You can never find a horn when you’re mad, and they should all play “La Cucaracha”. — Homer Simpson


Veblen Effect A tendency to find a product more desirable because it has a high price. The law of demand tells us that a lower price will increase demand for a product or service and a higher price will decrease demand. The Veblen effect, proposed by the economist Thorstein Veblen, is an exception to the law of demand: In some cases, increasing price can, by itself, increase demand, and decreasing price can, by itself, decrease demand. The effect is typically associated with luxury items and services such as art, jewelry, clothes, cars, fine wines, hotels, and luxury cruises. According to Veblen, the effect is caused by the human desire for status, of which he asserts two types: pecuniary emulation (the desire to be perceived as belonging to upper classes) and invidious comparison (the desire to not be perceived as belonging to lower classes).1

The Veblen effect is potentially applicable when a good or service is:

• Plainly visible to others

• Strongly associated with status or wealth

• Distinguishable from competitors and knockoffs

• Priced high relative to competitors

An example of the Veblen effect can be seen in higher-education tuition. To increase enrollment and address complaints about rising costs, many colleges and universities reduce tuition. However, the tuition reduction decreases both the perceived quality of the education offered by the school and the prestige of the school itself, which decreases demand (i.e., student enrollment). Conversely, when universities increase tuition, student enrollment increases. However, the tuition increase creates a perception problem of price gouging and a practical problem of affordability. The solution adopted by many schools is to simultaneously increase the price of tuition and the availability of discounts and financial aid. This permits an institution to increase enrollment, increase its patina of quality and prestige, and appear benevolent by offering more student assistance. Be aware that overly abundant discounting or assistance programs will undermine the Veblen effect.2

Consider the Veblen effect in marketing and pricing. To leverage the effect, promote associations with high-status people (e.g., celebrities). Ensure that form factor and branding clearly and memorably distinguish products from competitors. Employ strategies to discourage knockoffs, including legal protection (e.g., trademarks and patents), watermarking, and aggressive counter-advertising. Set pricing high based on the intangible aspects of the offering versus its marginal cost.

See also Classical Conditioning; Cognitive Dissonance; Priming; Scarcity
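A stylized sketch of the contrast with the law of demand follows; the demand functions and every number are invented to show the shapes, not to model a real market.

```python
# Two toy demand curves: an ordinary good obeys the law of demand,
# while a Veblen good's demand rises with price over the range where
# price itself signals status, then falls once it becomes unaffordable.

def ordinary_demand(price: float) -> float:
    """Demand falls as price rises (law of demand)."""
    return max(0.0, 1000 - 8 * price)

def veblen_demand(price: float) -> float:
    """Demand rises with price while price signals status, then falls."""
    if price <= 120:
        return 200 + 5 * price          # status-signaling region
    return max(0.0, 800 - 2 * (price - 120))

for p in (50, 100, 150):
    print(f"price {p:>3}: ordinary {ordinary_demand(p):>5.0f}, "
          f"veblen {veblen_demand(p):>5.0f}")
```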

1. The seminal work on the Veblen effect is The Theory of the Leisure Class: An Economic Study of Institutions by Thorstein Veblen, 1899, Macmillan. See also "Bandwagon, Snob, and Veblen Effects in the Theory of Consumers' Demand" by Harvey Leibenstein, 1950, Quarterly Journal of Economics, 64, 183–207.

2. See, for example, "In Tuition Game, Popularity Rises With Price" by Jonathan Glater and Alan Finder, Dec 12, 2006, The New York Times.

The active ingredients of generic drugs are chemically identical to those of their name-brand counterparts, but even people who know this will frequently pay significantly more for the name-brand version. Why? The higher price and brand name signal higher quality, and many people are willing to pay more for the emotional reassurance that the drug is effective and safe.

Our customers do not want to pay less. If we halved the price of all our products, we would double our sales for six months and then we would sell nothing. — Marketing manager at a luxury goods firm The Economist, December 1992


Visibility Locate important controls, information, and items to be clearly visible. According to the principle of visibility, systems are more usable when they clearly indicate their status, the possible actions that can be performed, and the consequences of the actions once performed. For example, a red light could be used to indicate whether or not a device is receiving power; illuminated controls could be used to indicate controls that are currently available; and distinct auditory and tactile feedback could be used to acknowledge that actions have been completed. The principle of visibility is based on the fact that people are better at recognizing solutions when selecting from a set of options than recalling solutions from memory. When it comes to the design of complex systems, the principle of visibility is perhaps the most important and most violated principle of design.1

To incorporate visibility into a complex system, consider the number of conditions, number of options per condition, and number of outcomes — the combinations can be overwhelming. This leads many designers to apply a type of kitchen-sink visibility — i.e., they try to make everything visible all the time. This approach may seem desirable, but it actually makes the relevant information and controls more difficult to see due to information overload.2 Two good solutions for managing complexity while preserving visibility are:

1. Hierarchical organization — Putting controls and information into logical categories and then hiding them within a parent control, such as a software menu. The category names remain visible, but the controls and information remain concealed until the parent control is activated.

2. Context sensitivity — Revealing and concealing controls and information based on different system contexts. Relevant controls and information for a particular context are made highly visible, and irrelevant controls (e.g., unavailable functions) are minimized or hidden.

Visible controls and information serve as reminders for what is and is not possible. Design systems that clearly indicate the system status, the possible actions that can be performed, and the consequences of those actions. Immediately acknowledge user actions with clear feedback. Avoid kitchen-sink visibility. Make the degree of visibility of controls and information correspond to their relevance. Use hierarchical organization and context sensitivity to minimize complexity and maximize visibility.

See also Mapping; Modularity; Performance Load; Progressive Disclosure; Recognition over Recall; Signal-to-Noise Ratio
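Context sensitivity is straightforward to sketch in code. In this minimal Python example (a hypothetical media player; all state and control names are invented), each system state exposes only the controls that are currently relevant, instead of showing everything all the time.

```python
# Context-sensitive visibility: the visible control set depends on
# the current system state, and it doubles as feedback about status.

CONTROLS_BY_STATE = {
    "stopped": ["play", "open file"],
    "playing": ["pause", "stop", "volume"],
    "paused":  ["play", "stop", "volume"],
}

def visible_controls(state: str) -> list[str]:
    """Return only the controls relevant to the current context."""
    return CONTROLS_BY_STATE[state]

state = "playing"
print(f"state: {state} -> controls shown: {visible_controls(state)}")
# Seeing 'pause' and 'stop' (and not 'play') also tells the user the
# system is currently playing: visibility indicating status.
```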

1. The seminal work on visibility is The Design of Everyday Things by Donald Norman, 1990, Doubleday.

2. The enormous number of visibility conditions is why visibility is among the most violated of the design principles — it is, quite simply, difficult to accommodate all of the possibilities of complex systems.

Damage to the port side of the guided-missile destroyer USS John S. McCain following a collision with a merchant vessel near the Straits of Malacca and Singapore. What happened? A lack of visibility about who or what was steering the ship enabled the destroyer to veer into the lane of the merchant ship before the crew could figure out what was going on. In the end, the ship turned because its two propellers were spinning at mismatched speeds, an unganged throttle state indicated only by a diminutive unchecked "Gang" checkbox. In the minutes prior to the collision, the crew of the McCain transferred helm control four times trying to gain control of the steering, but each transfer made the problem more complex, resetting modes and only partially transferring control. The crew didn't detect that the unganged propellers were the cause of the problem until it was too late to avoid collision. Lacking the design expertise to understand and explain what happened, the Navy blamed touchscreen technology and declared their intention to revert to physical controls.


Visuospatial Resonance A phenomenon in which different images are visible at different distances. Interpreting images is a process that involves interplay between visual information received by the eyes and information stored in memory. The process begins with the eyes first locating the primary lines, edges, and boundaries in an image, pattern-matching these elements against elements in memory, and then proceeding in this back-and-forth manner between eyes and memory at increasing levels of detail until the interpretation is complete.

Visuospatial resonance occurs when an image achieves optimal clarity (i.e., maximum visibility of the lines, edges, and boundaries in an image) due to resonance between the spatial frequency of the image and the observer's distance from the image.1

• Images rendered at a high spatial frequency appear as sharp outlines with little between-edge detail. High-spatial-frequency images are easily interpreted up close but are not visible from a distance.

• Images rendered at a low spatial frequency appear as blurry images with little edge detail. Low-spatial-frequency images are not visible up close but are easily interpreted from a distance.

Images rendered at different spatial frequencies can be combined to stunning effect, creating what are referred to as hybrid images. An early example of the hybrid image effect is the elusive smile of Leonardo da Vinci's Mona Lisa. When observed up close and directly, the Mona Lisa does not appear to be smiling. However, when observed out of the corner of the eye or at a distance, her subtle smile emerges. This effect is the result of two different expressions rendered at different spatial scales — the unsmiling mouth is rendered at a high spatial frequency, and the smiling mouth is rendered at a low spatial frequency.2 This technique has been refined to create emotionally ambiguous facial expressions, called gaze-dependent facial expressions — i.e., as an observer gazes at an image, the image appears to morph.3

It is not clear whether visuospatial resonance is a bimodal (near and far) phenomenon that can only be achieved with two images or a multimodal phenomenon, potentially accommodating any number of images. Consider visuospatial resonance as a means of increasing the propositional density of static displays, increasing the interestingness of advertising posters and billboards, masking sensitive text or image information, and creating emotionally ambiguous facial expressions in artistic or photographic renderings of faces.

See also Attractiveness Bias; Face Detection; Figure-Ground

1. The seminal work on visuospatial resonance is "Masking in Visual Recognition: Effects of Two-Dimensional Filtered Noise" by Leon Harmon and Bela Julesz, Jun 15, 1973, Science, 180(4091), 1194–1197; and "From Blobs to Boundary Edges: Evidence for Time- and Spatial-Scale-Dependent Scene Recognition" by Philippe Schyns and Aude Oliva, 1994, Psychological Science, 5, 195–200. See also "Hybrid Images" by Aude Oliva et al., 2006, ACM Transactions on Graphics, 25(3), 527–532.

2. "Is It Warm? Is It Real? Or Just Low Spatial Frequency?" by Margaret Livingstone, Nov 17, 2000, Science, 290(5495), 1299.

3. "Is That a Smile? Gaze Dependent Facial Expressions" by Vidya Setlur and Bruce Gooch, in NPAR '04: Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering, 2004, ACM Press, 79–151.
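A minimal sketch of how a hybrid image can be assembled, assuming two same-sized grayscale images as NumPy arrays: the "far" image is low-pass filtered and dominates at a distance, while the "near" image is high-pass filtered and dominates up close. The sigma cutoffs are illustrative, and the random arrays in the usage merely stand in for real photographs.

```python
# Hybrid image construction: low-pass one image, high-pass the other,
# and sum. Uses SciPy's Gaussian filter as the frequency splitter.

import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(far_img: np.ndarray, near_img: np.ndarray,
           low_sigma: float = 8.0, high_sigma: float = 3.0) -> np.ndarray:
    far = far_img.astype(float)
    near = near_img.astype(float)
    low = gaussian_filter(far, low_sigma)          # keep low frequencies
    high = near - gaussian_filter(near, high_sigma)  # keep edges only
    return np.clip(low + high, 0, 255).astype(np.uint8)

# Usage with two stand-in 256x256 grayscale arrays:
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (256, 256))   # "far" image (seen at a distance)
b = rng.integers(0, 256, (256, 256))   # "near" image (seen up close)
combined = hybrid(a, b)
print(combined.shape, combined.dtype)
```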

This hybrid image depicts two familiar figures rendered at different spatial frequencies. Up close, you see only Albert Einstein, who is rendered at a high spatial frequency. At a distance, you see only Marilyn Monroe, who is rendered at a low spatial frequency. The distances at which the clarity of each image is optimal are the points of visuospatial resonance.

The Mona Lisa, to me, is the greatest emotional painting ever done. The way the smile flickers makes it a work of both art and science, because Leonardo understood optics, and the muscles of the lips, and how light strikes the eye — all of it goes into making the Mona Lisa’s smile so mysterious and elusive. — Walter Isaacson, interview with Maclean’s October 17, 2017


von Restorff Effect Uncommon things are easier to recall than common things. The von Restorff effect, proposed by psychiatrist Hedwig von Restorff, states that there is increased likelihood of remembering unique or distinctive events or objects compared to those that are common. The von Restorff effect is primarily the result of the increased attention given to the distinctive items in a set, where a set may be a list of words, a number of objects, a sequence of events, or the names and faces of people. It occurs when there is a difference in context (i.e., something is different from things around it) or a difference in experience (i.e., something is different from experiences in memory).1 Differences in context occur when something is noticeably different from other things in the same set. For example, in trying to recall a list of characters such as EZQL4PMBI, people will have heightened recall for the 4 because it is the only number in the sequence. If the 4 was instead a T (i.e., EZQLTPMBI), the T would be much more difficult to recall, since the string of text is all letters. Differences in context of this type explain why unique brands, distinctive packaging, and unusual advertising campaigns are used to promote brand recognition and product sales. Difference attracts attention and is better remembered. Differences in experience occur when something is noticeably different from past experience. For example, people often remember major events in their life (e.g., their first day of college or their first day at a new job). Differences in experience also apply to things like atypical words and faces. Unique words and faces are better remembered than typical words and faces.2 Take advantage of the von Restorff effect by making key elements in a presentation or design look different from the surrounding elements (e.g., bold text). But, apply the technique sparingly — if everything is highlighted, then nothing is highlighted. Since recall for the middle items in a list or sequence is weaker than items at the beginning or end of a list, consider using the von Restorff effect to boost recall for the middle items. Unusual words, sentence constructions, and images are better remembered than their more typical counterparts and should be considered to improve interestingness and recall. See also Habituation; Highlighting; Inattentional Blindness;

Serial Position Effects; Threat Detection
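As a rough illustration of the “apply sparingly” guidance, the Python sketch below (a hypothetical helper, not from the text) emphasizes designated key items in a terminal list with ANSI bold, but refuses to highlight at all when too large a fraction of items is marked, since if everything is highlighted, nothing is highlighted.

```python
BOLD, RESET = "\033[1m", "\033[0m"  # ANSI escape codes for terminal emphasis

def render_list(items, key_items, max_fraction=0.15):
    """Print items, bolding key ones only if they remain visually distinctive."""
    distinctive = len(key_items) / len(items) <= max_fraction
    for item in items:
        if distinctive and item in key_items:
            print(f"{BOLD}{item}{RESET}")  # the von Restorff "isolate"
        else:
            print(item)

# The lone digit in EZQL4PMBI is the naturally distinctive item.
render_list(list("EZQL4PMBI"), key_items={"4"})
```

The threshold value is an assumption for illustration; the design point is only that emphasis must be rationed to remain emphasis.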

1. The seminal work on the von Restorff effect is “Analyse von Vorgängen im Spurenfeld. I. Über die Wirkung von Bereichsbildungen im Spurenfeld” [Analysis of Processes in the Memory Trace: On the Effect of Region Formation on the Memory Trace] by Hedwig von Restorff, 1933, Psychologische Forschung, 18, 299–342. This principle is also known as the isolation effect and the novelty effect.

2. Unusual words with unusual spellings are found in abundance in the Harry Potter books of J.K. Rowling and are among the frequently cited reasons for their popularity with children.

The highly distinctive form of the Wienermobile makes it — and the Oscar Mayer brand — unforgettable.

When the world zigs, zag. — Barbara Noakes, Bartle Bogle Hegarty, Levi’s advertisement

195 Wabi-Sabi

An aesthetic style that emphasizes naturalness, simplicity, and subtle imperfection.

Wabi-sabi is at once a worldview, a philosophy of life, a type of aesthetic, and, by extension, a principle of design. The term brings together two distinct Japanese concepts: wabi, which refers to a kind of transcendental beauty achieved through subtle imperfection, such as pottery that reflects its handmade craftsmanship; and sabi, which refers to beauty that comes with time, such as the patina found on aged copper.

In late-sixteenth-century Japan, the student Sen no Rikyu was tasked by his master to tend the garden. He cleared the garden of debris and scrupulously raked the grounds. Once the garden was perfectly groomed, he shook a cherry tree, causing a few flowers and leaves to fall randomly to the ground. This is wabi-sabi.1

Wabi-sabi should not be construed as an unkempt or disorganized aesthetic, a common mistaken association often referred to as wabi-slobby. A defining characteristic of wabi-sabi is that an object or environment appears respected and cared for. The aesthetic is not disordered but, rather, naturally ordered — composed of crooked lines and curves instead of straight lines and right angles.

In some ways, the primary aesthetic ideals of wabi-sabi — impermanence, imperfection, and incompleteness — run counter to traditional Western aesthetic values. For example, Western values typically revere the symmetry of manufactured forms and the durability of synthetic materials, whereas wabi-sabi favors the asymmetry of organic forms and the perishability of natural materials. With the rise of the sustainability movement, Western ideals have begun to evolve toward wabi-sabi, albeit for different reasons. A home interior designed to be wabi-sabi would be clean and minimalist, employ unfinished natural materials, and use a palette of muted natural colors.

Consider wabi-sabi when designing for Eastern audiences or for Western audiences with sophisticated artistic and design sensibilities. Incorporate elements that embody impermanence, imperfection, and incompleteness in the design. Employ these elements with subtlety, however, as their extremes can undermine the integrity of the aesthetic (e.g., a dwelling that looks as though it may collapse at any moment is too impermanent and too incomplete). Favor colors drawn from nature, natural materials and finishes, and organic forms and motifs. See also Alignment; Desire Line; MAYA; Ockham’s Razor; von Restorff Effect; Zeigarnik Effect

1. See Wabi-Sabi: For Artists, Designers, Poets & Philosophers by Leonard Koren, 1994, Stone Bridge Press; and Wabi Sabi Style by James Crowley and Sandra Crowley, 2001, Gibbs Smith.

Deborah Butterfield uses found pieces of metal and wood in her horse sculptures. Equine wabi-sabi.

Pare down to the essence, but don’t remove the poetry. — Leonard Koren, Wabi-Sabi: For Artists, Designers, Poets & Philosophers

196 Waist-to-Hip Ratio

A preference for a particular ratio of waist size to hip size in men and women.

The waist-to-hip ratio is a primary factor in determining attractiveness for men and women. It is calculated by dividing the circumference of the waist (narrowest portion of the midsection) by the circumference of the hips (area of greatest protrusion around the buttocks). Men prefer women with a waist-to-hip ratio between 0.67 and 0.80. Women prefer men with a waist-to-hip ratio between 0.85 and 0.95.1

The waist-to-hip ratio is primarily a function of testosterone and estrogen levels and their effect on fat distribution in the body. High estrogen levels result in low waist-to-hip ratios, and high testosterone levels result in high waist-to-hip ratios. Human mate-selection preferences likely evolved to favor visible indicators of these hormone levels (i.e., waist-to-hip ratios), as they are reasonably indicative of health and reproductive potential.2

For men, attraction is primarily a function of physical appearance. Women who are underweight or overweight are generally perceived as less attractive, but in all cases, women with waist-to-hip ratios approximating 0.70 are perceived as the most attractive for their respective weight group. For women, attraction is a function of both physical appearance and financial status. Financial status is biologically important because it assures security and status for a woman and her children. However, as women become increasingly independent with resources of their own, the strength of financial status as a factor in attraction diminishes. Similarly, women of modest resources may be attracted to men of low financial status when their physical characteristics indicate strong male features like dominance and masculinity (e.g., tall stature), but men with both high waist-to-hip ratios and high financial status are perceived as the most desirable.

The waist-to-hip ratio has design implications for the depiction of the human form. When the presentation of attractive women is a key element of a design, use renderings or images of women with waist-to-hip ratios of approximately 0.70. When the presentation of attractive men is a key element of a design, use renderings or images of men with waist-to-hip ratios of approximately 0.90, strong male features, and visible indicators of wealth or status (e.g., expensive clothing). See also Anthropomorphism; Attractiveness Bias; Baby-Face Bias; Supernormal Stimulus
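The ratio itself is simple arithmetic. A minimal Python sketch follows; the preference bands are the ones quoted in this entry, while the function names and example measurements are illustrative assumptions.

```python
def waist_to_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    """Waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

# Preference bands cited in this entry.
PREFERRED = {"female": (0.67, 0.80), "male": (0.85, 0.95)}

def in_preferred_band(whr: float, sex: str) -> bool:
    low, high = PREFERRED[sex]
    return low <= whr <= high

whr = waist_to_hip_ratio(waist_cm=70.0, hip_cm=100.0)
print(f"WHR = {whr:.2f}, preferred: {in_preferred_band(whr, 'female')}")
# WHR = 0.70, preferred: True
```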

1. The seminal works on the waist-to-hip ratio are “Adaptive Significance of Female Physical Attractiveness: Role of Waist-to-Hip Ratio” by Devendra Singh, 1993, Journal of Personality and Social Psychology, 65, 293–307; and “Female Judgment of Male Attractiveness and Desirability for Relationships: Role of Waist-to-Hip Ratio and Financial Status” by Devendra Singh, 1995, Journal of Personality and Social Psychology, 69, 1089–1101.

2. While preferences for particular features like body weight or breast size have changed over time, the preferred waist-to-hip ratios have remained stable. For example, in analyzing the measurements of Playboy centerfolds since the 1950s and Miss America winners since the 1920s, researchers discovered that the waist-to-hip ratios remained between 0.68 and 0.72 despite a downward trend in body weight.

When asked to pick the most attractive bodies, people favored female A and male C, corresponding to WHRs of 0.70 and 0.90. Mannequins have changed with the times, but their WHRs have been 0.70 for women and 0.90 for men for over five decades.

197 Wayfinding

The process of using spatial and environmental information to navigate to a destination.

Whether navigating a college campus, the wilds of a forest, or a website, people use spatial and environmental information to aid navigation. This is known as wayfinding.1

1. The seminal work on wayfinding is The Image of the City by Kevin Lynch, 1960, MIT Press. See also “Cognitive Maps and Spatial Behavior” by Roger M. Downs and David Stea, in Image and Environment, 1973, Aldine Publishing Company, 8–26.

2. See, for example, “Wayfinding by Newcomers in a Complex Building” by Darrell L. Butler et al., 1993, Human Factors, 35(1), 159–173.

The basic process of wayfinding involves four stages:

1. Orientation — Determining one’s location relative to nearby objects and the destination. To improve orientation, divide a space into distinct small parts, using landmarks and signage to create unique subspaces. Landmarks provide strong orientation cues and give locations memorable identities. Support orientation with clearly identifiable “you are here” landmarks and signs.

2. Route decision — Choosing a route to the destination. To improve route decision-making, minimize the number of navigational choices and provide signs or prompts at decision points. Indicate the shortest route to a destination (a minimal route-selection sketch follows this entry). Simple routes can be followed most efficiently with clear narrative directions or signs. Maps provide more robust representations of the space and are superior when the space is very large, complex, or poorly designed.2

3. Route monitoring — Monitoring the chosen route to confirm that it is leading to the destination. To improve route monitoring, connect locations with paths that have clear beginnings, middles, and ends. Paths should enable people to easily gauge their progress, using clear lines of sight to the next location or signage indicating relative location. Where paths are particularly lengthy, consider augmenting the sight lines with visual lures, such as pictures, to help pull people through, and provide regular confirmations that the path is leading to the destination.

4. Destination recognition — Recognizing the destination. To improve destination recognition, enclose destinations such that they form dead ends, or use barriers to disrupt the flow of movement through the space. Give destinations clear and consistent identities.

Consider the stages of wayfinding when designing maps and other navigational aids. Prototype wayfinding systems in real contexts when possible. Travel routes to test and tune the placement and visibility of markers. Observe people and talk with those who get disoriented or can’t find their destination. Learn where things broke down. Those who are lost can help designers find their way. See also Error, Design; Error, Human; Mental Model; Progressive Disclosure; Rosetta Stone
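To ground the route-decision guidance, here is a minimal Python sketch of indicating the shortest route through a mapped space, modeled as a graph of locations and walkable connections. The zoo-like location names are hypothetical; breadth-first search suffices when every path segment counts equally.

```python
from collections import deque

# Locations and walkable connections, as on a visitor map (hypothetical names).
MAP = {
    "Entrance": ["Aquarium", "Savanna"],
    "Aquarium": ["Entrance", "Penguins"],
    "Savanna": ["Entrance", "Penguins", "Primates"],
    "Penguins": ["Aquarium", "Savanna"],
    "Primates": ["Savanna"],
}

def shortest_route(start, goal):
    """Breadth-first search: fewest segments between two locations."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        route = frontier.popleft()
        if route[-1] == goal:
            return route
        for neighbor in MAP[route[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(route + [neighbor])
    return None  # goal unreachable from start

print(" -> ".join(shortest_route("Entrance", "Primates")))
# Entrance -> Savanna -> Primates
```

The same structure underlies the flow lines on a printed map: compute the route once, then mark it at each decision point.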

The wayfinding design of the Pittsburgh Zoo and PPG Aquarium is divided into unique subspaces based on the type of animal and environment. Navigational choices are minimal, and destinations are clearly marked by signage and dead ends. The visitor map further aids wayfinding by featuring visible and recognizable landmarks, clear and consistent labeling of important locations, and flow lines to assist in route decision-making.

Only wait, Gretel, ’til the moon rises, then we shall see the breadcrumbs I scattered along the path; they will show us the way back to the house. — Hansel and Gretel, Brothers Grimm

198 Weakest Link

An element designed to fail in order to protect more important elements from harm.

It is said that a chain is only as strong as its weakest link. This suggests that the weakest link in a chain is also the least valuable and most expendable link — a liability to the system that should be reinforced, replaced, or removed. This is not always the case. The weakest element in a system can be used to protect other, more important elements by failing first — essentially making the weakest link one of the most important elements in the system.1

Weakest links in a system work in one of two ways. They can:

1. Fail and passively minimize damage — For example, electrical circuits are protected by fuses, which are designed to fail so that a power surge does not damage the circuit. The fuse is the weakest link in the system but, in failing, passively protects the system.

2. Fail and activate additional systems that minimize damage — For example, automatic sprinkler systems in a building are typically activated by components that fail (e.g., liquid in a glass cell that expands to break the glass when heated), which then activate the release of water to extinguish a fire.

Applying the weakest link principle involves several steps:

1. Identify a failure condition.
2. Identify or define the weakest link in the system for that failure condition.
3. Further weaken the weakest link and strengthen the other links as necessary to address the failure condition.
4. Ensure that the weakest link will fail only under the appropriate, predefined failure conditions.

The weakest link principle is limited in application to systems in which a particular failure condition affects multiple elements in the system. Systems with decentralized and disconnected elements cannot benefit from the principle, since the links in the chain are not connected. The weakest link in a system exists by design or by default — either way, it is always present.

Therefore, consider the weakest link principle when designing systems in which failures affect multiple elements. Use the weakest link to shut down the system or activate other protective systems. Perform adequate testing to ensure that only specified failure conditions cause the weakest link to fail. Further weaken the weakest element and harden other elements as needed to ensure the proper failure response. See also Factor of Safety; Forgiveness; Modularity; Redundancy; Structural Forms
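In software, the same idea appears as a “fuse” or circuit breaker: a deliberately weak element that trips first and blocks further calls before a failing dependency can damage the rest of the system. Below is a minimal Python sketch; the class name, threshold, and operation are illustrative assumptions, not from the text.

```python
class Fuse:
    """A software 'weakest link': trips after repeated failures,
    blocking further calls so the rest of the system is protected."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.blown = False

    def call(self, operation):
        if self.blown:
            raise RuntimeError("fuse blown: call blocked to protect the system")
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.blown = True  # fail here, deliberately, before damage spreads
            raise
        self.failures = 0  # a success resets the count
        return result

fuse = Fuse(threshold=3)
# fuse.call(lambda: flaky_network_request())  # hypothetical fragile operation
```

As with a physical fuse, the design work lies in steps 3 and 4 above: tuning the threshold so the fuse trips under the predefined failure condition and only under that condition.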

1. An early example of the weakest link principle is the electrical fuse, patented by Thomas Edison in 1890. The crumple zone was patented by Mercedes-Benz engineer Béla Barényi in 1937. See, for example, “Reducing the Risk of Failure by Deliberate Weaknesses” by Michael Todinov, 2020, International Journal of Risk and Contingency Management, 9(2), 33–53.

Crumple zones are one of the most significant automobile safety innovations of the twentieth century. The front and rear sections of a vehicle are weakened to easily crumple in a collision, reducing the impact energy transferred to the passenger shell. The passenger shell is reinforced to better protect occupants. The total system is designed to sacrifice less important elements for the most important element in the system — the people in the vehicle.

199 WYSIWYG

What a person sees in a design context should be what they get in a delivery context.

WYSIWYG is an acronym that stands for What You See Is What You Get. The design principle was coined and developed at Xerox PARC in the mid-1970s, guiding their efforts to print documents that matched the look of their on-screen counterparts. The principle applies to any environment in which a design is translated from one context or medium to another — e.g., a design in CAD software is 3D-printed. In such cases, what the person sees in the design or development environment is what they should get in the final translated design — i.e., all of the salient features should be preserved in the translation across contexts or media.1

When WYSIWYG is applied well, the effect is nothing short of magical: A thing designed in one environment is materialized in another. There are two strategies for realizing this magic:

1. Real-time reflection — The artifact being designed looks like the final translated design throughout the process. For example, the text and layout of a digital document look the same on-screen and printed.

2. Point-in-time previews — A designer can invoke a preview to simulate what the final translated design will look like. For example, previewing how programming code will work before compiling.

There are design contexts in which the final translated design can take many forms. For example, a website will render differently depending on the device, operating system, browser, etc. In this example, the number of output combinations could easily exceed 100, which means that no one view can do the job. In such cases, preview options for the most common combinations and use cases should be provided.

We are at the dawn of a second Golden Age of WYSIWYG: With the rise of 3D printing, augmented reality, and virtual reality, it has never been more important to match the designed artifact with the delivered artifact. Therefore, always provide the means for designers to see and confirm intent before fabrication, manufacturing, or production. Favor real-time reflection when the artifacts being created are simple and visual. Favor point-in-time previews when the artifacts being created are complex or need translation to resemble the final format. When delivery options can take many forms, provide preview options for the most common combinations and use cases. See also Confirmation; Feedback; Forgiveness; Garbage In–Garbage Out; Visibility
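The two strategies share one structural requirement: the editing view and the delivered output should be produced by the same rendering path, so nothing can drift between them. The Python sketch below is a minimal illustration with a toy one-function renderer; the markup rule and document contents are hypothetical.

```python
def render(source: str) -> str:
    """One renderer drives both the editing view and the final output,
    so what you see is, by construction, what you get."""
    return "\n".join(
        line.upper() if line.startswith("# ") else line  # toy rule: headings shout
        for line in source.splitlines()
    )

document = "# Title\nbody text"

# Real-time reflection: re-render on every edit so the working view
# always matches the delivered form.
for edit in ["# Title\nbody text", "# Title\nrevised body"]:
    view = render(edit)

# Point-in-time preview: render on demand, just before delivery.
print(render(document))
```

Splitting the renderer into separate "editor" and "output" code paths is precisely what breaks WYSIWYG, because the two paths inevitably diverge.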

1. The seminal work on WYSIWYG occurred with the development of a text editor named Bravo, created for the Xerox Alto by Charles Simonyi and Butler Lampson. See, for example, “The Real History of WYSIWYG” by John Markoff, October 18, 2007, The New York Times.

WYSIWYG was instrumental in the development of early user interfaces, but today it is largely taken for granted, if not forgotten. However, the principle is finding new relevance in the design of augmented reality, virtual reality, and 3D printing experiences. Here the company Perch has created a WYSIWYG “magic mirror” retail display that makes it easy for customers to experiment with different cosmetic products. Customers select different makeup options, and their digital reflection instantly shows what they look like with the makeup applied. What you see in the display is what you get when you buy the product.

200 Zeigarnik Effect

Incomplete or interrupted tasks are more likely to hold attention and be remembered.

The Zeigarnik effect, proposed by the psychiatrist and psychologist Bluma Zeigarnik, states that interrupted or incomplete tasks are better remembered than completed tasks. This occurs because the unconscious mind seeks closure and completion. To get what it wants, the unconscious mind drives the conscious mind to focus on and complete the task at hand. When tasks are incomplete or interrupted, the unconscious mind keeps working the problem, and this helps keep activity-related information in memory.1

Zeigarnik proposed the effect after researching a phenomenon first observed by her professor, who noticed that waiters could remember the details of unpaid orders better than the details of paid orders. Once customer orders were completed, waiters forgot their details; the Zeigarnik effect applies only while tasks are incomplete.

In the 1980s, the Rubik’s Cube’s popularity quickly spread around the world, delighting and frustrating people of all ages. When puzzles like the Rubik’s Cube are left unsolved, people often experience a strong sense of dissatisfaction or unrest. They long for a sense of completion, a sense of closure. This need for closure is so overwhelming in some people that they can’t focus on other things — many even lose sleep as their minds continuously replay possible strategies for success. One of the main reasons for the strong desire to complete these kinds of puzzles is the Zeigarnik effect.

Because incomplete tasks are better recalled, the Zeigarnik effect can be used to increase memorability. For example, students who take breaks while studying, during which they do unrelated activities such as watching television or playing games, will remember material better than students who complete study sessions without a break. The Zeigarnik effect is strongest when people are highly motivated to complete a task.

If you want your audience to return after a meeting break, to watch the next broadcast, to listen to next week’s podcast, or to buy the next novel, don’t give away the ending right away. The “To be continued…” cliffhanger device works because of the Zeigarnik effect — it keeps the story front of mind and keeps people coming back for more. They need to hear the ending, just like they need to complete the Rubik’s Cube.

Consider the Zeigarnik effect to engage and maintain attention. Most importantly, never use the Zeigarnik effect to… See also Closure; Flow; Gamification; IKEA Effect; Storytelling; von Restorff Effect

1. The seminal work is “On Finished and Unfinished Tasks” by Bluma Zeigarnik, in A Sourcebook of Gestalt Psychology by Willis Ellis (Ed.), 1938, Kegan Paul, Trench, Trubner & Co. See also “Zeigarnik and von Restorff: The memory effects and the stories behind them” by Colin MacLeod, 2020, Memory & Cognition, 48, 1073–1088.

A never-ending series of puzzles leaves Tetris players shifting blocks in their dreams, a condition dubbed the Tetris Effect.

And when the power goes off for good then I
Will play it in my head until I die.
— Neil Gaiman, “Virus,” in Smoke and Mirrors

Credits

Front and Back Cover

016 Back of the Dresser

002 Accessibility

017 Biophilia Effect

Photographs of Dendrobates leucomelas (yellow-banded poison dart frog) and Argiope bruennichi (wasp spider) by iStock.com/GlobalP. Photograph by Gerry Manacsa.

004 Aesthetic-Usability Effect

Image from lilwater/TurboSquid.

005 Affordance

Photograph of green door from iStock.com/aloha_17. Photograph of yellow door by Adewale Oshineye; used with permission.

007 Anchoring

Photograph of tablet from iStock.com/Longmongdoosi. Image of blue wave from iStock.com/A-Y-N. Image of dog face from iStock.com/Steppeua.

008 Anthropomorphism

Photograph of Coca-Cola bottle from Stock Republique/Shutterstock.

009 Aposematism

Photograph of frog from iStock.com/GlobalP. Photograph of spider from iStock.com/AmericanWildlife. Photograph of snake from iStock.com/texcroc.

011 Appeal to Nature

Photograph courtesy of the Boston Public Library, Leslie Jones Collection.

012 Archetypes, Psychological

Images courtesy of Sue Weidemann Brill. Design concepts by Michael Brill, illustrations by Safdar Abidi.

014 Attractiveness Bias

Photographs of Kennedy and Nixon © Bettman/Corbis.

The Power Mac G5 is a registered trademark of Apple Computer, Inc. Photograph of PC interior from iStock.com/GodfriedEdelman. Photograph by Urban Land Institute via Flickr through Creative Commons license.

019 Brooks’ Law

Photograph from iStock.com/EvgeniyShkolenko.

020 Brown M&M’s

Photograph of Van Halen from Kevin Estrada/Shutterstock.

021 Bus Factor

Images from iStock.com/Nubenamo and iStock.com/Muzyka Daria.

023 Causal Reductionism

Photograph of Edsels by Alden Jewell via Flickr through Creative Commons Attribution 2.0 Generic license.

024 Chesterton’s Fence

Photograph by Charles Kelly/AP/Shutterstock.

025 Clarke’s Laws

Photograph by Matthew Yohe via Wikimedia through Creative Commons license.

027 Closure

The USA Network logo is a trademark of NBCUniversal Television and Streaming. The IBM logo is a registered trademark of IBM. The WWF logo is a registered trademark of World Wildlife Fund. All rights reserved.

028 Cognitive Dissonance

Painting of Ben Franklin by David Martin.

029 Color Effects

Photograph of tarantula from iStock.com/Noriel Barria. Photograph of black shoes from iStock/yasinguneysu. Photograph of black purse from iStock/LarisaBozhikova. Photograph of dove from iStock.com/luxiangjian4711. Photograph of yacht from iStock.com/Matveev_Aleksandr. Photograph of diamond ring from iStock.com/ikonacolor. Photograph of car from iStock.com/gremlin. Image of sports jersey from iStock.com/Steve Zmina. Image of red shoes from iStock.com/macroworld. Image of stop sign from iStock.com/coolvectormaker. Photograph of fruits and vegetables from iStock.com/firina. Image of warning sign from iStock.com/Sudowoodo. Image of school bus from iStock.com/Bet_Noire. Photograph of forest from iStock.com/quickshooting. Image of exit sign from iStock.com/hanohiki. Image of recycle logo from iStock.com/Inna Kharlamova. Image of green soap from iStock/PhotoMelon. Photograph of green walk signal from iStock.com/LFO62. Photograph of beach from iStock.com/benedek. Photograph of blue cleaners from iStock.com/fcafotodigital. Photograph of police cap from iStock.com/koya79.

030 Color Theory

Photograph of blue agate from iStock/Minakryn Ruslan. Photograph of holly from iStock/gojak. Photograph of pansy from iStock/narcisa. Photograph of tree frog from iStock/alptraum.

031 Common Fate

Photographs from iStock.com/Tero Vesalainen and iStock.com/gorodenkoff.

040 Conway’s Law

Photograph courtesy of NASA.

041 Cost-Benefit

Photograph from Flystock/Shutterstock.

042 Creator Blindness

Photograph from Keith Homan/Shutterstock.

048 Desire Line

Photograph by Carl Huber.

051 Don’t Eat the Daisies

Photograph from iStock.com/mce128.

055 Error, Design

Photograph courtesy of NASA.

056 Error, Human

Photograph of flower from iStock.com/Floortje. Photograph of cat from iStock.com/knape.

057 Expectation Effects

Photograph of wine bottle from iStock.com/KoKimk. Photograph of wine from iStock.com/huePhotography. Image of wine box from iStock.com/usha negi.

059 Face Detection

Top photograph from iStock.com/tomeng.

060 Face-ism Ratio

Photograph of male from iStock.com/shapecharge.

063 Feature Creep

Top photograph by Alexey M via Wikimedia through Creative Commons license. Bottom left photograph from iStock.com/MicheleBoiero. Bottom right photograph from iStock.com/rusm.

064 Feedback

Photograph by AP/Shutterstock.

067 Figure-Ground

The Paramount and Paramount Television Studios logos are trademarks of Paramount Pictures Corporation. The adidas logo is a registered trademark of adidas America, Inc. The Toblerone logo is a registered trademark of Mondelez International. The Patagonia logo is a registered trademark of Patagonia, Inc. All rights reserved.

069 Fitts’ Law

Photograph by Joe Shlabotnik via Flickr through Creative Commons license.

071 Five Tenets of Queuing

Photograph of Hidden Mickey by Loren Javier via Flickr through Creative Commons Attribution-NoDerivs 2.0 Generic license. Photograph of passengers on a plane by iStock/Demkat. Photograph of elevator buttons by iStock.com/boonsom. Photograph of ticket dispenser by iStock.com/sidewaysdesign.

074 Forgiveness

Photograph courtesy of Ballistic Recovery Systems, Inc.

075 Form Follows Function

Photographs by Judith Keller through Creative Commons ShareAlike 4.0 International license.

076 Framing

Nobody Trashes Tennessee is a registered service mark of The Tennessee Department of Transportation. Don’t Trash California © California Department of Transportation (CALTRANS). Don’t mess with Texas is a registered service mark and trademark of the Texas Department of Transportation. All rights reserved.

079 Gamification

Photograph by Flickr user KJ Vogelius via Flickr through Creative Commons license.

081 Gates’ Rule of Automation

Photograph by Steve Jurvetson through Creative Commons Attribution 2.0 Generic license.

082 Gloss Bias

Image of phone from iStock.com/scanrail. Photograph of sports car from iStock.com/Rawpixel. Photograph of rocks from iStock.com/Amawasri.

085 Groupthink

AP Photo/David Duprey

087 Habituation

Image of phone from iStock.com/yalcinsonat1.

089 Hick’s Law

Image of menu from iStock.com/icomaker. Photograph of exam from iStock.com/travenian. Photograph of braking driver from iStock.com/ppengcreative. Photograph of Lake Kariba, Zimbabwe, from iStock.com/kitz-travellers. Photograph of camera from iStock.com/Chelnok. Image of red light from iStock.com/tioloco. Image of buttons from iStock.com/Gearstd. Photograph of martial artist from iStock.com/FilippoBacci.

090 Hierarchy of Needs

Photograph of drone by Björn via Flickr through Creative Commons Attribution 2.0 Generic license. Photograph of dog by Richard Masoner/Cyclelicious via Flickr through Attribution-ShareAlike 2.0 Generic license. Photograph of children at the beach from iStock.com/Imgorthand.

093 Icarus Matrix

Photograph of SpaceX SN8 Flight by Ron Frazier. Both photographs of Starship SN8 wreckage by Steve Jurvetson. All images shared via Flickr through Creative Commons Attribution 2.0 Generic license.

095 Identifiable Victim Effect

Photograph of group of dogs from iStock.com/vladimirst. Photograph of single dog from iStock.com/DanBrandenburg.

096 IKEA Effect

Photograph from iStock.com/monkeybusinessimages.

097 Inattentional Blindness

Image of phone from iStock.com/ET-ARTWORKS.


100 Iron Triangle

Photograph from iStock.com/hatman12.

101 Iteration

Photograph courtesy of NASA.

102 Kano Model

Photograph by Jeff Johnson for Rivian. Used with permission from Rivian.

103 KISS

Photograph courtesy of NASA.

104 Knowing-Doing Gap

AP Photo/Paul Sakuma

105 Learnability

Pac-Man is a registered trademark of Bandai Namco Entertainment Inc. Photograph of Pac-Man cabinet from Album / Alamy Stock Photo.

106 Left-Digit Effect

Photograph from iStock.com/pablohart.

108 Levels of Invention

Photograph of charcoal stick from iStock.com/ajma_pl. Photograph of pencil from iStock.com/pidjoe. Photograph of pen from iStock.com/deepblue4you. Photograph of laser printer from iStock.com/jaroon. Image of tablet and stylus from iStock.com/pictafolio.

109 Leverage Point

Photograph from gabriel12/Shutterstock.

112 Maintainability

Photograph by Alessio Sbarbaro via Wikimedia through Creative Commons Attribution-ShareAlike 4.0 International license.

114 Maslow’s Hammer

Photographs from Juicero PR company handout.

116 Mental Model

Photograph by Everyonephoto Studio/Shutterstock.

117 Miller’s Law

Photograph of cat by iStock.com/Nils Jacobi. Photograph of iPhone by iStock.com/guvendemir. Photograph of Social Security card by iStock.com/undefined undefined. Image of credit card by iStock.com/youngID.

119 Minimum-Viable Product

Image of zappos.com from web.archive.org.

120 Mnemonic Device

The AFLAC logo is a registered trademark of American Family Life Assurance Company of Columbus. The Hulu logo is a trademark of Hulu. The 3M logo is a registered trademark of 3M. The GEICO logo is a registered trademark of Government Employees Insurance Company. The StubHub logo is a trademark of StubHub, Inc. All rights reserved.

121 Modularity

Photograph courtesy of NewDealDesign, LLC. Photography by Mark Serr Photography.

122 Nirvana Fallacy

Photograph by Becker1999 via Wikimedia through Creative Commons Attribution 2.0 Generic license.

126 Nudge

Image of human from iStock.com/DenBoma.

127 Number-Space Associations

Photograph courtesy of NASA.

132 Paradox of Unanimity

Talmud Readers by Adolf Behrman.

133 Pareto Principle

Image from K2_UX via Wikimedia through Creative Commons Attribution-ShareAlike 2.0 Generic license.

137 Perspective Cues

Image of human from iStock.com/jesadaphorn.

139 Phonetic Symbolism

Photograph of ice cream from iStock.com/Magone.

140 Picture Superiority Effect

All NIKE logos are registered trademarks of NIKE, Inc. All rights reserved.

141 Play Preferences

Reprinted from Evolution and Human Behavior, Vol. 23(6), Gerianne M. Alexander and Melissa Hines, “Sex differences in response to children’s toys in non-human primates (Cercopithecus aethiops sabaeus),” pp. 467–479, Copyright 2002, with permission from Elsevier.

143 Premature Optimization

Photograph by Kyle James via Flickr through Creative Commons Attribution-ShareAlike 2.0 Generic license.

144 Priming

Photographs of eyes and flowers © 2008 Jupiterimages Corporation. Photograph of sink from iStock.com/pashapixel.

145 Process Eats Goal

Photograph by frantic00/Shutterstock.

148 Progressive Subtraction

The Apple logos are registered trademarks of Apple Inc. The Starbucks logos are registered trademarks of Starbucks U.S. Brands, LLC. The American Airlines logos are registered trademarks of American Airlines, Inc. All rights reserved.

151 Prototyping

Photograph of parking signs by Alex Millauer/Shutterstock. Photographs of redesigned parking signs used with permission from Nicole Sylianteng.

154 Reciprocity

Photograph by rblfmr/Shutterstock.

155 Recognition over Recall

Photograph of rotary phone from iStock/Spauln. Photograph of smartphone from iStock/Ridofranz. Photographs of contacts from iStock/Nynke van Holten, iStock/janetleerhodes, iStock/Delmaine Donson, and iStock/JohnnyGreig.

158 Root Cause

Image of RMS Titanic by HefePine23 via Wikimedia through Creative Commons Attribution-ShareAlike 2.0 Generic license.

159 Rosetta Stone

Image of Pioneer plaque courtesy of NASA. Image of Rosetta Disk courtesy of The Long Now Foundation.

160 Rule of Thirds

Photographs of Ali vs. Liston and Ali vs. Frazier © Bettman/Corbis.

162 Satisficing

Photographs courtesy of NASA.

163 Savanna Preference

Photograph of Teletubbies © 1996 – 2003 Ragdoll Ltd. Used with permission. All rights reserved.

165 Scarcity

Photographs from Running of the Brides® courtesy of Filene’s Basement. Photography by Robb Cohen and Brett Clark.

167 Self-Similarity

Mona Lisa photomosaic by Robert Silvers/Runaway Technology Inc. Photograph of acacia tree courtesy of U.S. Fish and Wildlife Service. Photograph of aqueduct by Prioryman via Wikipedia through Creative Commons license.

168 Serial Position Effects

Photograph by Andrey_Popov/Shutterstock.

172 Social Proof

Photograph by Marco Verch via Flickr through Creative Commons Attribution 2.0 Generic license.

173 Social Trap

Photograph of traffic by Comstock/Stockbyte/Thinkstock. Photograph of toll road by algre/iStock/Thinkstock.

176 Storytelling

Photographs of Civil Rights Memorial courtesy of Southern Poverty Law Center. Photographs by John O’Hagan. Designed by Maya Lin.

178 Structural Forms

Photographs of RDFW courtesy of Geocell Systems. Pod photograph courtesy of Sanford Ponder, Icosa Village, Inc.

179 Sunk Cost Effect

Profile drawing by Emoscopes via Wikipedia through GNU Free Documentation License. Schematics by Julien.scavini via Wikipedia through Creative Commons Attribution-ShareAlike 3.0 Unported license.

180 Supernormal Stimulus

Image of superhero by Gazometr/iStock/Thinkstock. Photograph of cheesecake by Lesyy/iStock/Thinkstock. Image of Po © 1996–2003 Ragdoll Ltd. Used with permission. All rights reserved.

183 Symmetry

Photograph of Notre Dame Cathedral from iStock.com/TomasSereda.

184 Testing Pyramid

Photograph courtesy of NASA.

185 Threat Detection

Photograph of spider by GlobalP/iStock/Thinkstock.

187 Uncanny Valley

Photograph of mannequin one © 2008 Jupiterimages Corporation. Photograph of mannequin two by Flickr user Dierk Schaefer via Flickr through Creative Commons license. Photograph of mannequin three by Flickr user Jesse Swallow via Flickr through Creative Commons license.

189 Uniform Connectedness

Photograph of remote by iStock.com/Mycolor.

192 Visibility

Photographs courtesy of U.S. Navy.

193 Visuospatial Resonance

Hybrid image of Einstein and Monroe courtesy of Aude Oliva, MIT. Source images for Einstein © Bettman/Corbis and Monroe, Getty Images/Hulton Archive. Special thanks to Aude Oliva.

194 von Restorff Effect

Top photograph by Eli Christman. Bottom photograph by frankieleon. All images shared via Flickr through Creative Commons Attribution 2.0 Generic license.

195 Wabi-Sabi

Top photograph by Adamsofen. Bottom left photograph by Wonderlane. Bottom middle photograph by Celeste Lindell. Bottom right photograph by Paul VanDerWerf. All images shared via Flickr through Creative Commons Attribution 2.0 Generic license.

196 Waist-to-Hip Ratio

Photograph of mannequins courtesy of Adel Rootstein, Inc. Drawn images reproduced with permission from Devendra Singh.

197 Wayfinding

Map courtesy of Pittsburgh Zoo and PPG Aquarium. Illustration by David Klug.

199 WYSIWYG

Photograph used with permission from Perch Interactive.

Acknowledgments The authors would like to thank the many contributors whose works are featured and ask that readers review the Credits section to learn more about these very talented individuals and companies. Also, thanks to readers for over 20 years of feedback and suggestions since the first edition. This third edition reflects many changes based on this input. Special thanks to M. Elen Deming, Kristin Ellison, Timothy Griffin, Karsten Loepelmann, Tsai Lu Liu, Stan Love, Carl Myhill, Scott O’Connor, David Umla, Doug Wheelock, and the amazing Quarto team for their partnership and support over two decades.

About the Authors

William Lidwell
Design gadfly
Chief R&D Officer at Avenues The World School

Kritina Holden
NASA human factors guru
Technical Fellow in Human Factors at Leidos

Jill Butler
Choreographer of bits and atoms
Founder and President at Stuff Creators Design

Index

3D Realms, 143 3M, 003 80/20 rule. See Pareto Principle 9091 kettle, 037 9093 kettle, 037 A Abbe, Ernst, 001 Abbe error, 001 Abbe Principle, 001 Accessibility, 002, 112 Ackoff’s Law, 003 active failures, 182 active redundancy, 156 actual traits, 124 “addiction” archetype, 013 adidas logo, 067 Adiri Natural Nurser baby bottle, 008 Adjaye, David, 075 The Aeneid (Virgil), 066 aesthetic consistency, 035 Aesthetic-Usability Effect, 004 affect heuristic, 088 Affordance, 002, 005, 055, 056, 074, 080 AK-47 assault rifle, 103 Alessi kettles, 037 aligned incentives, 126 Alignment, 006, 051, 084, 092 Ali, Muhammad, 160 alphabet organization, 070 Amazon, 119 ambiguous figure-ground compositions, 067 American Kennel Club (AKC), 070 analogous colors, 030 Anchoring, 007, 174 Angle-of-Attack (AoA) sensor, 123 Anthropomorphism, 008 Apollo 13 spacecraft, 162 Aposematism, 009 Apparent Motion, 010 apparent unanimity, 085 Appeal to Nature, 011

Apple Computer, Inc., 016, 054, 074, 082, 083, 088, 128, 149, 172 apples to apples comparison, 032 approximation conditioning. See Shaping arbitrary icons, 094 Arch Deluxe burger, 166 archetypal forms, 012 archetypal social roles, 012 archetypal stories, 012 Archetypes, Psychological, 012 Archetypes, System, 013 Ariely, Dan, 125 arm/fire operation, 033 asymmetric visibility, 181 asymmetry, 006, 011, 110, 156, 176, 183, 187, 195 atmospheric perspective cues, 137 Attractiveness Bias, 014 attribution bias, 088 automation, 081 automotive UX, 041 axes of constraint, 036 axis of orientation, 086 B Baby-Face Bias, 008, 015, 111 Back of the Dresser, 016 Bacon, Francis, 034 Baker-Miller Pink, 030 ballistic movement, 069 Ban, Shigeru, 075 Barbie dolls, 180 barrier-free design. See Accessibility barriers, 036 Battle of Crécy, 044 beast trails. See Desire Line beautiful failures, 093 beautiful successes, 093 Beck, Harry, 130 Beethoven, Ludwig van, 066 Beetle automobile, 015, 120, 180

behavioral mimicry, 118 bell curve. See Normal Distribution benchmark variables, 032 Bernheimer, Lily, 150 best is the enemy of the good. See Satisficing Big Bend National Park, 152 Biophilia Effect, 017 Bischoff, Klaus, 015 Blendtec, 175 blinking highlights, 091 Boeing CST-100 Starliner spacecraft, 184 bold highlights, 091 boomerang effects, 138 Box, George, 018 Box’s Law, 018 braking, 089 brand perception, 139 Bravo text editor, 199 brightness, 030 British Air Ministry, 068 Brooks, Frederick P., Jr., 019 Brooks’ Law, 019 Brothers Grimm, 197 Brown M&M’s, 020 Buckley, George, 003 Buran space shuttle, 078 Bus Factor, 021 Bush, George, Sr., 186 business viability, 119 Butterfield, Deborah, 195 butterfly ballot, 006 C Cabbage Patch Kids, 176, 180 calipers, 001 Cameron, William Bruce, 177 carpenter ant principle. See Brown M&M’s Carson, David, 092 cascade failure, 156 category organization, 070

Cathedral Effect, 022 Causal Reductionism, 023 cause-event sequences, 158 Center for Science in the Public Interest, 175 Challenger space shuttle, 061 Chartres Cathedral, 083 chasm, 050 cheating incentive, 138 chemins de l’âne (donkey paths). See Desire Line Chesterton, G.K., 024 Chesterton’s Fence, 024 choice architecture. See Nudge Chou, Yu-Kai, 079 chunking, 117 Churchill, Winston, 114 Cialdini, Robert B., 172 Cinderella Castle, 137 Clarke, Arthur C., 025 Clarke’s Laws, 025 Classical Conditioning, 026 clear feedback, 126 cliffhangers, 200 Clinton, Bill, 175 Closure, 027 Coca-Cola Classic, 024 Coca-Cola Company, 008, 024 Coca-Cola “contour” bottle, 008 Codman, Ogden, Jr., 183 cognitive bias, 011 Cognitive Dissonance, 028, 042 cognitive load, 135 collateral damage, 138 color blindness. See color vision deficiency (CVD) Color Effects, 029, 030 color highlights, 091 color similarity, 171 Color Theory, 030 color vision deficiency (CVD), 009 Common Fate, 031 communication structure, 040 Comparison, 032, 181

complementary colors, 030 comprehensibility, 112 concept prototyping, 151 Concorde airliner, 078, 179 conditioning by successive approximations. See Shaping Confirmation, 002, 033, 056, 064, 074, 080, 097, 135, 197 Confirmation Bias, 034, 085 confirmation strategies, 074 il Conico kettle, 037 consider-the-opposite strategy, 034 Consistency, 016, 035, 105, 132 Constraint, 002, 036, 038, 055, 056, 068, 069, 080, 100 context sensitivity, 192 continuum organization, 070 Contour Bias, 037 Control, 002, 031, 035, 036, 038, 041, 055, 064, 069, 073, 077, 089, 094, 103, 105, 112, 113, 114, 117, 127, 147, 169, 171, 186, 189, 192 control-display relationship. See Mapping control-effect relationship, 113 conventions, 036 Convergence, 039 Conway, Melvin E., 040 Conway’s Law, 040 Cost-Benefit, 041, 122, 138, 143, 179 cost constraints, 100 cow paths. See Desire Line Coxcomb graphs, 032 craftsmanship, 016 creativity, need for, 090 Creator Blindness, 042 Croshaw, Ben “Yahtzee,” 143 Crowd Intelligence, 040, 043 Crumb, Robert, 092 crumple zones, 198 Crystal Pepsi, 042 Csikszentmihalyi, Mihaly, 073

CST-100 Starliner spacecraft, 184 cube law. See Scaling Fallacy culture of compliance, 085 D Daniels, Gilbert S., 124 DARPA, 157 dazzle camouflage, 084 dead eye syndrome, 187 Death Spiral, 044 decline stage, 146 deep propositions, 149 Defensible Space, 045 delighter features, 102 demand characteristics, 057 Depth of Processing, 046 Design by Committee, 047 design by dictator, 047 design-caused errors, 055 design-enabled errors, 055 design-induced errors, 055 design iteration, 101 design stage, 049 Desire Line, 048, 157, 175 desire paths. See Desire Line destination recognition, 197 Development Cycle, 049, 078, 101, 143, 146 development iteration, 101 development stage, 049 device settings, 089 devil’s advocate, 042 diagonal axes, 006 dialog confirmation, 033 Dichter, Ernest, 096 Dickson, Tom, 175 Diffusion of Innovations, 050 Dillon, Andrew, 074 discoverability, 105 Disney, 012, 071, 137, 180 distraction interference, 098 divine proportion. See Golden Ratio donkey paths. See Desire Line


Don’t Eat the Daisies, 051 “Don’t Mess with Texas” campaign, 076 Double Stuf Oreos, 180 downvotes/upvotes, 043 Drake, Frank, 159 drunkard’s search principle. See Streetlight Effect Dubuffet, Jean, 092 Duke Nukem Forever video game, 143 Dunbar, Robin, 052 Dunbar’s Number, 052 Dunning, David, 053 Dunning-Kruger Effect, 053 Durant, Thomas C., 138 Dvorak keyboard, 136 Dymaxion car, 062 Dyson vacuum cleaners, 175 E Eames LCW chair, 083 early adopters subgroup, 050 early majority subgroup, 050 Edison, Thomas, 157 Edsel automobile, 023 Edward Fry’s Readability Graph, 153 Eiffel, Gustave, 058, 178 Eiffel Tower, 178 Einstein, Albert, 128 Einstein, Ben, 114 Einstellung effect, 114 elaborative rehearsal, 046 elephant trails. See Desire Line elevation cues, 137 Emergency Alert System, 097 emotional interference, 098 end-to-end project responsibilities, 040 end-to-end testing, 184 ensemble modeling, 018 Entry Point, 054 “eroding goals” archetype, 013 Error, Design, 055

Error, Human, 055, 056 “escalation” archetype, 013 evolutionary prototyping, 151 example icons, 094 exclusive information, 165 Expectation Effects, 057 experimental context, 177 Exposure Effect, 058, 115 external consistency, 035 eyebars, 156 F F-35 Joint Strike Fighter, 072 Fabricant, Robert, 064 Face Detection, 059 Face-ism Ratio, 060 Factor of Ignorance. See Factor of Safety Factor of Safety, 061 Faith Follows Function, 062 Fallingwater House, 017 Farnsworth House, 017 Feature Creep, 063 Feedback, 002, 024, 042, 048, 049, 053, 055, 056, 064, 073, 079, 093, 101, 105, 119, 126, 129, 151, 154, 190, 192 Feedback Loop, 013, 065, 173 Fiat Chrysler, 116 Fibonacci Sequence, 066 figure elements, 067 Figure-Ground, 067 Filene’s Basement, 165 First Principles, 068 Fitts’ Law, 069 Fitts, Paul, 069 Five Hat Racks, 070 Five Tenets of Queuing, 071 five whys, 158 fixed-ratio relationships, 129 “fixes that fail” archetype, 013 Flexibility Tradeoffs, 072 Flow, 073, 074, 081, 109, 176 forced perspective, 137

forcing function. See Confirmation Ford, Henry, 131 Ford Motor Company, 131, 177 Ford Quadricycle, 131 Forgiveness, 002, 004, 056, 074, 077, 184 Form Follows Function, 075 fossil fuels, 013, 044 frame structures, 178 Framing, 051, 076, 081, 174 Franklin, Benjamin, 028 Frazier, Joe, 160 Freedom Tower, 047 Freeze-Flight-Fight-Forfeit, 077 French, Daniel Chester, 186 frequency-validity effect. See Exposure Effect Fry, Edward, 153 Fry’s Readability Graph, 153 Fuller, Buckminster, 062 functional blindness, 097 functional consistency, 035 functional fixedness, 114 functionality, need for, 090 functional mimicry, 118 functional viability, 119 G Gaiman, Neil, 200 Gall, John, 078 Gall’s Law, 078 Galton, Francis, 043 Gamification, 079 Garbage In – Garbage Out, 080 Gates, Bill, 081, 181 Gates’ Rule of Automation, 081 Gaussian distribution. See Normal Distribution gaze dependent facial expressions, 193 Gehry, Frank, 075 General Motors Corp., 104 generic structures See Archetypes, System

Geocell Rapid Deployment Flood Wall, 178 Gestalt principles of perception Closure, 027 Common Fate, 031 Figure-Ground, 067 Good Continuation, 084 Proximity, 023, 152, 189 Similarity, 031, 035, 070, 087, 113, 120, 121, 152, 171, 172, 189 Uniform Connectedness, 189 GIGO. See Garbage In – Garbage Out Gilbreth, Frank, 135 Gilbreth, Lillian, 135 GitHub, 021 Gladwell, Malcolm, 175 global warming, 044, 122, 173 Gloire cruiser, 084 Gloss Bias, 082 Goethe, Johann Wolfgang von, 088 Golden Hammer. See Maslow’s Hammer Golden Ratio, 062, 066, 083, 160 good affordances, 074 Good Continuation, 084 Google, 121, 184 GoPro cameras, 090 Göring, Hermann, 068 Gossamer Condor aircraft, 101 graphical user interface, 155 Great Pyramid of Giza, 061, 083 Grey, Aubrey de, 179 Gropius, Walter, 075 ground elements, 067 Groupthink, 042, 085, 093 growth stage, 146 Gutenberg Diagram, 086 H Habituation, 015, 087, 129 Haidt, Jonathan, 154 halo effect, 057 Hanlon’s Razor, 088

Hansel and Gretel (Brothers Grimm), 197 happy-path scenarios, 184 Havilland, Geoffrey de, 068 Hawthorne effect, 057 heat maps, 133 help appeals, 095 help strategies, 074 Henson Associates, 111 herd behavior. See Social Proof Hick’s Law, 089 Hick, W.E., 089 hierarchical organization, 192 Hierarchy of Needs, 090 Highlighting, 007, 030, 054, 091, 170 homing movements, 069 homogeneous redundancy, 156 horizon line, 067 horizontal axes, 006 Horror Vacui, 092 Hurricane Dorian, 018 Hurricane Katrina, 075, 088

interaction assumptions, 164 interaction models, 116 interchangeability, 112 Interference Effects, 098 internal-audience problem, 063 internal consistency, 035 International Space Station (ISS), 040, 055, 184 interposition cues, 137 introduction stage, 146 inverse highlights, 091 Inverted Pyramid, 099 invidious comparison, 191 iPhone, 025, 105, 172 iPod MP3 player, 083 Iron Triangle, 100 Isaacson, Walter, 193 isolation effect. See von Restorff Effect italic highlights, 091 Iteration, 047, 049, 068, 078, 093, 101, 103, 108, 128, 143, 148, 151

I iBot, 002 Icarus Matrix, 093 Iconic Representation, 094 Icosa Shelters, 178 Identifiable Victim Effect, 095 IDEO, 151 IKEA Effect, 042, 096 il Conico kettle, 037 illusion of invulnerability, 085 illusion of morality, 085 iMac computers, 128 Inattentional Blindness, 097 independent reviews, 042 informational social influence. See Social Proof innovators subgroup, 050 instrumental conditioning. See Operant Conditioning integration testing, 184

J James Webb Space Telescope, 123 Jeep, 059 Jobs, Steve, 016, 025, 181 Johnson, Kelly, 103 Johnson, Lyndon B., 014 Juicero’s Press, 114 Juicy Salif, 004, 062 Juran’s principle. See Pareto Principle K Kalashnikov, Mikhail, 103 Kamen, Dean, 002 Kano Model, 102 Kano, Noriaki, 102 kemonomichi (beast trails). See Desire Line Kennedy, John F., 014, 175 Kennedy-Nixon debate, 014


Kerr, Jane, 051 key elements, 159 kinematic load, 135 KISS (Keep it Simple, Stupid), 103 Knowing-Doing Gap, 104 Knuth, Donald, 143 Kodak, 044, 085 Koren, Leonard, 195 Kozak, Graham, 062 Kremer Prize, 101 Kruger, Justin, 053 Kuang, Cliff, 064 L Lady Gaga, 133 laggards subgroup, 050 landing gear, 157 lapses, 056 late majority subgroup, 050 latent failures, 182 latent human error. See Error, Design law of economy. See Ockham’s Razor law of parsimony. See Ockham’s Razor law of sizes. See Scaling Fallacy Law of the Hammer. See Maslow’s Hammer Law of the Instrument. See Maslow’s Hammer leading, 107 Learnability, 035, 105 Le Corbusier, 066, 075 Left-Digit Effect, 106 Legibility, 091, 107 Leonardo da Vinci, 083, 193 Levels of Invention, 108 Leverage Point, 109 Libeskind, Daniel, 047 Lidwell, William, 047 limited access, 165 limited number, 165 limited time, 165

“limits to growth” archetype, 013 Lincoln, Abraham, 099, 186 Lincoln Memorial, 186 Lindstrom, Martin, 026 linear perspective cues, 137 line orientations, 130 List, John A., 166 Liston, Sonny, 160 lit-from-above assumption. See Top-Down Lighting Bias load assumptions, 164 location organization, 070 Lockheed Skunk Works, 103 Lodge, Henry Cabot, 014 Loewy, Raymond, 115 London Tube map, 130 Long Now Foundation, 159 loonshots, 108 Louvre Museum, 186 M MacCready, Paul, 101 Maeda, John, 103 “Mae West” bottle, 008 MAFA Effect, 110 Magic Triangle, 111 Maintainability, 016, 103, 112, 121 maintenance rehearsal, 046 Make It Right Foundation, 075 Mami kettle, 037 Maneuvering Characteristics Augmentation System (MCAS), 123 mannequins, 092, 187, 196 Mapping, 036, 055, 113 Mars Climate Orbiter, 080 martial arts, 089 Maslow’s Hammer, 114 Maslow’s Hierarchy of Needs, 090 mass structures, 178 maturity stage, 146 MAYA (Most Advanced Yet

Acceptable), 115 McDonald’s, 166, 180 McNamara, Robert, 177 McNerney, James, 003 Mental Model, 023, 056, 116 menus, 089 mere-exposure effect. See Exposure Effect Method Dish Soap bottle, 008 micrometers, 001 Mies van der Rohe, 017, 075 Miller, George, 117 Miller’s Law, 117 Mimicry, 011, 118 mindguards, 085 minimal barriers, 054 Minimum-Viable Product, 119, 162 mirroring hypothesis. See Conway’s Law mistakes, 056, 080 mixed redundancy, 156 Mnemonic Device, 120 Modularity, 112, 121, 167 Modulor system, 066 Mona Lisa (Leonardo da Vinci), 193 Mondrian, Piet, 083 moonshots, 108 Moore, Geoffrey A., 050 Morandi Bridge, 112 Morandi, Riccardo, 112 Mori, Masahiro, 187 Morita, Akio, 174 Moscow Theater, 076 Mosquito aircraft, 068 Mozart, Wolfgang Amadeus, 066 multi-touch technology, 025 multivariate graphs, 032 multivariate thinking, 046 Muppets, 111 Murphey, Charlene, 087 Musk, Elon, 068 Muybridge, Eadweard, 010

N Nagappan, Raj, 100 NASA, 047, 103, 109, 162 nautilus shell, 083 negative feedback loops, 065 negative frames, 076 negative reinforcement, 129 neutral stimulus, 026 New Coke formula, 024 Newman, Oscar, 045 New United Motor Manufacturing Inc. (NUMMI), 104 Nightingale, Florence, 032 NIKE, 140 ninja-proof seats, 150 Nirvana Fallacy, 122 Nixon, Richard, 014 Noakes, Barbara, 194 Noguchi, Isamu, 062 nonvisual anthropomorphism, 008 Normal Distribution, 124 Norman, Donald, 005, 035 Northrop, John, 157 Norton, Boyd, 163 Norton Simon Museum, 186 No Single Point of Failure, 123 Not Invented Here, 040, 125 Notre Dame Cathedral, 083, 183 Novak, David, 042 Nova Tactica strategy map, 038 novelty effect. See von Restorff Effect Nudge, 126 Number-Space Associations, 127 O Obama, Barack, 149, 186 oblique effect, 130 Ockham’s Razor, 088, 128 olifantenpad (elephant trails). See Desire Line Oliver, Vaughan, 092 operability, 002 Operant Conditioning, 129

opioid epidemic, 109 order effects, 168 orientation, 064, 197 Orientation Sensitivity, 130 outgroup bias, 085 OxyContin, 109 P Pac-Man video game, 105 pants, for landing gear, 157 paper-cutting machines, 033 Paradox of Great Ideas, 131 Paradox of Unanimity, 132 Paramount logo, 067 Pareto Principle, 133, 143 Pareto, Vilfredo, 133 Parthenon, 083 Patagonia logo, 067 path-of-least-resistance principle. See Performance Load paths, 036 Payload Deployment and Retrieval System, 127 Peak-End Rule, 134 pecuniary emulation, 191 Pentagon, 177 PepsiCo, 042 perceived relatedness, 031 perceptibility, 002 perceptual blindness. See Inattentional Blindness Perch, 199 Perdue, Harold Scott, 072 perfect solution fallacy. See Nirvana Fallacy performance features, 102 Performance Load, 135, 170 Performance vs. Preference, 136 Perspective Cues, 137 Perverse Incentive, 138 Pfeffer, Jeffrey, 104 Philips Electronics, 063 Phonetic Symbolism, 139 physical constraints, 036

Picasso, Pablo, 058 pictorial superiority effect. See Picture Superiority Effect Picture Superiority Effect, 140 Pioneer space probes, 159 pirate paths. See Desire Line Pitt, Brad, 075 Pittsburgh Zoo, 197 placebo effect, 057 Play Preferences, 141 point-in-time previews, 199 point-of-sale systems, 007 points of prospect, 054 Poka-Yoke, 142 pop-out effect, 130 positive feedback loops, 065 positive frames, 076 positive reinforcement, 129 PPG Aquarium, 197 Prado Museum, 186 predator-prey relationship, 013 predatory behavior, 089 prediction markets, 043 Premature Optimization, 143 primacy effect, 168 primary optical area, 086 Priming, 022, 144 principle of least effort. See Performance Load principle of simplicity. See Ockham’s Razor proactive interference, 098 Process Eats Goal, 145 Product Life Cycle, 146, 148 proficiency, need for, 090 Progressive Disclosure, 147 progressive lures, 054 Progressive Subtraction, 103, 148 Project Ara, 121 project management triangle. See Iron Triangle Project Pigeon, 169 Propositional Density, 149, 193 Prospect-Refuge, 150


Prototyping, 049, 068, 101, 103, 128, 151, 162
proximal cause, 158
Proximity, 023, 152, 189
psychological constraints, 036
punishment, 129
Pygmalion effect, 057

Q
quadratic colors, 030
quality problems, 080
QWERTY keyboard, 136

R
Rackham, Horace, 131
Rajapaksa, Gotabaya, 011
Raskin, Aza, 101
Readability, 130, 153
real-time reflection, 199
recall memory, 155
recency effect, 168
Reciprocity, 154
Recognition over Recall, 155
recursion, 167
Redundancy, 021, 123, 156, 167
reflection symmetry, 183
reliability, need for, 090
repetition effect. See Exposure Effect
repetition-validity effect. See Exposure Effect
requirement stage, 049
responsiveness, 105
Restorff, Hedwig von, 194
retroactive interference, 098
Reverse Salient, 157
reversibility of actions, 074
reversible figure-ground relationships, 067
Rivian electric truck, 102
road signs, 089
Root Cause, 023, 158, 173
Rosenthal effect, 057
Rose, Todd, 124
Rosetta Disk, 159
Rosetta Stone, 159
rotation symmetry, 183
Roth, David Lee, 020
route decision, 197
route monitoring, 197
route planning apps, 043
Rubik’s Cube, 200
Rule of Thirds, 160
“Running of the Brides” event, 165

S
saccades, 187
Safety Factor. See Factor of Safety
safety nets, 074
Sagan, Carl, 159
Sahlin, Don, 111
Saint-Exupéry, Antoine de, 148
Saint-Venant, Adhémar Barré de, 161
Saint-Venant’s Principle, 161
Salyut space station, 040
Salzman, Linda, 159
SAPS (Status, Access, Power, and Stuff) model, 079
Sasson, Steven, 085
Satisficing, 162
saturated colors, 030
savanna hypothesis. See Savanna Preference
Savanna Preference, 017, 163
Scaling Fallacy, 164
Scarcity, 165, 180
Schiphol airport, 126
Schopenhauer, Arthur, 131
scope constraints, 100
sectio aurea. See Golden Ratio
Segway Human Transporter, 065
Selection Bias, 166, 181
self-censorship, 085
Self-Similarity, 167
Sen no Rikyu, 195
Serial Position Effects, 134, 168
shading cues, 137
shape similarity, 171
Shaping, 169
shell structures, 178
Signal-to-Noise Ratio, 170
Silver Bridge, 156
similar icons, 094
Similarity, 031, 035, 070, 087, 113, 120, 121, 152, 171, 172, 189
simple tasks, 089
simplicity, 002, 105
The Simpsons television show, 190
Sinclair ZX81 computer, 125
single context comparison, 032
Siri, 088
Six Sigma management practice, 003
size cues, 137
Skylab space station, 040
slips, 033, 056, 080
small-scale user testing, 042
small multiples, 032
small-team organizational structures, 040
smart defaults, 126
SNARC (spatial-numerical association of response codes) effect, 127
Snow, John, 046
Social Proof, 172
social trails. See Desire Line
Social Trap, 173
Soho cholera map, 046
Sony, 174
SpaceX, 068
spaghetti plots, 018
spatial intuitions, 127
spotlight effect, 088
Stafford, Jim, 185
Stalin, Joseph, 058, 095
standard normal distribution. See Normal Distribution
standby redundancy, 156
Starck, Philippe, 004, 062
Starliner spacecraft, 184
statistical traits, 124
Statue of Liberty, 178
Status Quo Bias, 174
Stickiness, 175
Stillion, John, 072
stimulus generalization, 087
stimulus-response compatibility. See Mapping
Stonehenge, 083
Storytelling, 012, 099, 176
Stradivari, 083
Stradivarius violin, 083
strategic critiques, 042
Streetlight Effect, 177
strong fallow area, 086
Stroop interference, 098
Structural Forms, 178
structured choices, 126
subject recruitment, 177
SUCCESs (Simple, Unexpected, Concrete, Credible, Emotional, Story), 175
“success to the successful” archetype, 013
sudden scarcity, 165
Sullivan, Louis, 075
Sunk Cost Effect, 093, 100, 125, 179
Sunstein, Cass R., 126
Supernormal Stimulus, 110, 180
surface mimicry, 118
surface propositions, 149
Surowiecki, James, 043, 085
surveillance, 045
Survivorship Bias, 181
Swigert, John L., Jr., 162
Swinmurn, Nick, 119
Swiss Cheese Model, 182
Sydney Opera House, 100
Sylianteng, Nikki, 151
symbolic barriers, 045
symbolic icons, 094
symbols, 036
Symmetry, 110, 183, 195
system models, 116

T
Tacoma Narrows Bridge, 065
TAGRI (They Ain’t Gonna Read It) principle, 020
Talmud, 132
targeting, 064
Teletubbies television show, 163, 180
terminal optical area, 086
territoriality, 045
Tesla Model 3 automobile, 081
Testing Pyramid, 184
testing stage, 049
test options, 089
Tetris Effect, 200
Tetris video game, 200
text blocks, 107
texture gradient cues, 137
Thaler, Richard H., 126
Theory of Inventive Problem Solving (TRIZ), 108
Threat Detection, 027, 029, 030, 185
Three Mile Island, 064
threshold features, 102
throwaway prototyping, 151
time constraints, 100
time organization, 070
Timex Sinclair 1000 computer, 125
Titanic, 158
Toblerone logo, 067
toothbrush theory. See Not Invented Here
Top-Down Lighting Bias, 186
top-lighting preference. See Top-Down Lighting Bias
touch screens, 041
Toyota Motor Corp., 104, 158
Toyota Production System, 104, 142
“tragedy of the commons” archetype, 013
transcontinental railroad, 138
translation symmetry, 183
triadic colors, 030
trigger stimulus, 026
triple-constraint. See Iron Triangle
trivial many rule. See Pareto Principle
Truck Factor. See Bus Factor
truth effect. See Exposure Effect
Tschichold, Jan, 092
Tu-144 airliner, 078
Twain, Mark, 138
two-step confirmation, 033
typeface, 002, 091, 107
type problems, 080
type size, 107

U
ugly failures, 093
ugly successes, 093
Uncanny Valley, 008, 187
Uncertainty Principle, 188
underline highlights, 091
unhappy-path scenarios, 184
Uniform Connectedness, 189
Union Pacific Railroad Company, 138
unit testing, 184
unstable figure-ground relationships, 067
upvotes/downvotes, 043
usability, need for, 090
U.S. Air Force (USAF), 124
User-Centered vs. User-Driven Design, 190
USS John S. McCain, 192

V
value perception, 092
Van Halen rock band, 020
variable-ratio relationships, 129
Vasa warship, 063
Vaught, RaDonda, 087
Veblen Effect, 191
Veblen, Thorstein, 191
verification principle. See Confirmation
vertical axes, 006
Virgil, 066
Visibility, 023, 036, 043, 077, 112, 135, 165, 173, 181, 188, 192, 193, 197
visible goals, 126
visual anthropomorphism, 008
Visuospatial Resonance, 193
vital few rule. See Pareto Principle
Vitruvian Man, 083
Volkswagen, 015, 120, 132, 180
Volkswagen Beetle automobile, 015, 120, 180
von Restorff Effect, 115, 194
VSS Enterprise, 142

W
Wabi-Sabi, 195
Waist-to-Hip Ratio, 014, 196
Wald, Abraham, 181
Walkman music player, 174
Walt Disney World, 137
warning strategies, 074
Watson-Watt, Sir Robert, 122
Wayfinding, 043, 070, 089, 197
Weakest Link, 198
weak fallow area, 086
Wharton, Edith, 183
Wheelock, Doug, 103
Wienermobile, 194
William of Ockham, 128
Wilson, Edward O., 017
Wilson, S. Clay, 092
Wireless Emergency Alert System, 097
Wölfli, Adolf, 092
Wright, Frank Lloyd, 017, 058, 075, 150
WYSIWYG (What You See Is What You Get), 199

X
Xerox, 147, 155, 199
XPrize, 157

Y
Yankelovich, Daniel, 177

Z
Zappos, 119
Zeigarnik, Bluma, 200
Zeigarnik Effect, 200
Z pattern of processing. See Gutenberg Diagram
Zuckerberg, Mark, 181