Key MBA Models: The 60+ Models Every Manager and Business Student Needs to Know




Praise for Key MBA Models

This book is not just an enjoyable read. It also offers practical models to accelerate your learning of key management concepts and boost your capacity to engage in meaningful, executive conversations. It’s a must-read for anyone who wants to perform more effectively each and every day. CYRIL BOUQUET, PROFESSOR OF STRATEGY, INSTITUTE OF MANAGEMENT DEVELOPMENT (IMD)

I would love to have had this elegant and crisp reference guide before, during and after every business school class, including Julian Birkinshaw’s! Some of the frameworks feel like reunions with old friends, some completely new introductions. All are explained with beautiful simplicity. I expect this to become the model Model handbook. RICHARD HYTNER, DEPUTY CHAIRMAN, SAATCHI & SAATCHI WORLDWIDE

The greatest value of this book is that it tells us the topmost models that managers should not only know about but should also know when and how to use. It’s an excellent book to use for the regular reinforcement of business and management concepts. RAVI ARORA, VICE PRESIDENT, TATA QUALITY MANAGEMENT SERVICE

Key MBA Models

JULIAN BIRKINSHAW KEN MARK

Key MBA Models
The 60+ models every manager and business student needs to know

Pearson Education Limited
Edinburgh Gate
Harlow CM20 2JE
United Kingdom
Tel: +44 (0)1279 623623
Web: www.pearson.com/uk

First published 2015 (print and electronic)
© Pearson Education Limited 2015 (print and electronic)

The rights of Julian Birkinshaw and Ken Mark to be identified as authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Pearson Education is not responsible for the content of third-party internet sites.

ISBN:

978-1-292-01685-6 (print)
978-1-292-01687-0 (PDF)
978-1-292-01686-3 (ePub)
978-1-292-01688-7 (eText)

British Library Cataloguing-in-Publication Data
A catalogue record for the print edition is available from the British Library

Library of Congress Cataloging-in-Publication Data
Birkinshaw, Julian M.
Key MBA models : the 60+ models every manager and business student needs to know / Julian Birkinshaw, Ken Mark.
pages cm
Includes index.
ISBN 978-1-292-01685-6
1. Management. I. Mark, Ken. II. Title.
HD31.B49747 2015
658--dc23
2015002445

The print publication is protected by copyright. Prior to any prohibited reproduction, storage in a retrieval system, distribution or transmission in any form or by any means, electronic, mechanical, recording or otherwise, permission should be obtained from the publisher or, where applicable, a licence permitting restricted copying in the United Kingdom should be obtained from the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS.

The ePublication is protected by copyright and must not be copied, reproduced, transferred, distributed, leased, licensed or publicly performed or used in any way except as specifically permitted in writing by the publishers, as allowed under the terms and conditions under which it was purchased, or as strictly permitted by applicable copyright law. Any unauthorised distribution or use of this text may be a direct infringement of the authors’ and the publisher’s rights and those responsible may be liable in law accordingly.

All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the authors or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

10 9 8 7 6 5 4 3 2 1
19 18 17 16 15

Print edition typeset in 9.25 Helvetica Neue Pro by 3
Printed by Ashford Colour Press Ltd, Gosport

NOTE THAT ANY PAGE CROSS REFERENCES REFER TO THE PRINT EDITION

Contents

About the authors x
Acknowledgements xi
Introduction xiii

PART ONE Organisational behaviour 1
1 Change management: Kotter’s eight-step model 3
2 Cognitive biases in decision making 7
3 Emotional intelligence 11
4 Managing work groups: Belbin team roles 14
5 Matrix management 18
6 Mintzberg’s managerial roles 22
7 Motivation: Theory X and Theory Y 27
8 Negotiating techniques: BATNA 31
9 Schein’s model of organisational culture 34
10 360-degree assessment 37

PART TWO Marketing 41
11 Customer lifetime value 43
12 Ethnographic market research 47
13 Market orientation 51
14 Multichannel marketing 55
15 Net promoter score 58
16 The 4Ps of marketing 62
17 Pricing strategies: dynamic pricing 66
18 Product life cycle 70
19 Segmentation and personalised marketing 74

PART THREE Strategy and organisation 79
20 The ambidextrous organisation 81
21 The BCG growth-share matrix 85
22 Blue ocean strategy 90
23 Core competence and the resource-based view 94
24 Corporate social responsibility: the triple bottom line 98
25 Corporate strategy: parenting advantage 102
26 Five forces analysis 107
27 Game theory: the prisoner’s dilemma 111
28 Generic strategies 115
29 The McKinsey 7S framework 119

PART FOUR Innovation and entrepreneurship 123
30 Brainstorming 125
31 Design thinking 129
32 Disruptive innovation 133
33 Greiner’s growth model 138
34 Open innovation 143
35 The seven domains assessment model for entrepreneurs 147
36 Stage/gate model for new product development 152
37 Scenario planning 156

PART FIVE Accounting 161
38 The accrual method in accounting 163
39 Activity-based costing 166
40 The balanced scorecard 170
41 The DuPont identity 174
42 Economic value added 178
43 Ratio analysis 183

PART SIX Finance 187
44 Black-Scholes options pricing model 189
45 Bond valuation 193
46 Capital asset pricing model 199
47 Capital budgeting 202
48 Modern portfolio theory 207
49 Modigliani-Miller theorem 212
50 Time value of money 216
51 Valuing the firm 221
52 Weighted average cost of capital 231

PART SEVEN Operations 235
53 Agile development 237
54 The bullwhip effect 241
55 Decision trees 245
56 Just-in-time production 249
57 Sensitivity analysis 253
58 The service-profit chain 256
59 Six Sigma 259
60 Theory of constraints 263
61 Total quality management 267

Index 271

About the authors

Julian Birkinshaw is Professor of Strategy and Entrepreneurship at the London Business School and Director of the Deloitte Centre of Innovation and Entrepreneurship. He is the author of 12 books, including Reinventing Management and Becoming a Better Boss. Ken Mark is CEO of The Martello Group and GM of Steribottle Inc. He was Finance Director at Diversified Resources International and worked for Procter & Gamble and Harvard Business School. He has written case studies for Harvard, Ivey Business School and the London Business School.


Acknowledgements

We would like to thank our colleagues at London Business School, Ivey School of Business and other schools for their advice on selecting the management models. In particular, we thank Dr Colette Southam of Bond University, Australia, for reviewing several sections of the book. We also thank our students, in particular those in the EMBA and MBA programmes at London Business School, for providing input into our selection process.

Publisher’s acknowledgements

We are grateful to the following for permission to reproduce copyright material:

Figures

Figure on page 86 adapted from The BCG Portfolio Matrix from the Product Portfolio Matrix, © 1970, The Boston Consulting Group (BCG); Figure on page 104 from Strategic Management: Issues and cases, 2nd edn., John Wiley & Sons (Dobson, P.W., Starkey, K. and Richards, J. 2009), p. 105; Figure on page 109 from ‘How competitive forces shape strategy’, Harvard Business Review, March/April, 21–38 (Porter, M.E. 1979), Copyright © 1979 by the Harvard Business School Publishing Corporation, all rights reserved. Reprinted by permission of Harvard Business Review; Figure on page 120 from ‘Structure is not an organization’, Business Horizons 23(3), 14–26 (Waterman, R.H., Peters, T.J. and Phillips, J.R. 1980), reprinted with permission from Elsevier; Figure on page 135 adapted from The Innovator’s Dilemma: When new technologies cause great firms to fail, Harvard Business Review Press (Christensen, C.M. 1997), p. xvi, Copyright © 1997 by the Harvard Business School Publishing Corporation, all rights reserved. Reprinted by permission of Harvard Business Review; Figure on page 140 from ‘Evolution and revolution as organizations grow’, Harvard Business Review 76(3), 55–60 (Greiner, L. 1998), Copyright © 1998 by the Harvard Business School Publishing Corporation, all rights reserved. Reprinted by permission of Harvard Business Review; Figure on page 150 from The New Business Road Test: What entrepreneurs and executives should do before launching a lean start-up, 4th edn., FT Publishing (Mullins, J.W. 2013), used with permission from John Mullins.

Tables

Table on page 15 from ‘The Nine Belbin Team Roles’, http://www.belbin.com/rte.asp?id=3, © 2012–2014 BELBIN Associates, reproduced with permission of Belbin, www.belbin.com; Table on page 23 from The Nature of Managerial Work, Pearson Education, Inc., Upper Saddle River, New Jersey (Mintzberg, H. 1983), p. 92, reprinted and electronically reproduced by permission.


Text

Extract on page 238 from The Agile Manifesto, http://agilemanifesto.org/, used courtesy of The Agile Manifesto.

In some instances we have been unable to trace the owners of copyright material, and we would appreciate any information that would enable us to do so.


Introduction

There are many models and frameworks in use in the business world today, and it is hard to keep track of them all. We wrote this book to help you make sense of the most important of these models – to understand where they came from, when you might use them, how to use them and what their biggest benefits and weaknesses are.

The title Key MBA Models reflects the fact that these models are all taught to students at business schools seeking to get an MBA (Masters in Business Administration). The MBA is a generalist degree – in other words, it is designed to provide students with a broad grounding in all the key aspects of business. This book reflects the breadth of the MBA. It has seven sections, each one corresponding to a typical core course in the first year of an MBA programme. Each section has between six and ten key models. Think of it as an ‘MBA in a book’ – a summary of the key models and frameworks that an MBA student learns in his or her core classes.

Who should read this book?

If you are doing an MBA, this is an easy-to-access summary of the key models you are being taught, with useful pointers about how they should be applied, and follow-up readings if you want to know more. If you are an executive or a manager who didn’t go to business school, the book is a valuable reference guide. If your subordinates or colleagues start throwing out unfamiliar terms they picked up at a business school, you want to know what they are talking about. Most of the concepts in the world of business are actually pretty straightforward – this book provides enough detail on 60 of the most important ones to get you up to speed.

Finally, the book should also be of interest to prospective MBA students, who are studying in advance of entering an MBA programme, or who are fascinated by the prospect of doing an MBA. If the models and concepts described here look valuable and interesting, then you should take the plunge and sign up for a programme. While we have written this as an ‘MBA in a book’, it goes without saying that you learn vastly more in the course of an MBA programme than could ever be picked up in a single text.

What is included?

In researching this book, we reviewed the course materials at the business schools we work with, or where we have good friends (such as London Business School, Richard Ivey School of Business, INSEAD, Wharton and Harvard), and we sought to identify the most important models, frameworks and concepts that students were taught during their ‘core’ courses. (In most programmes, the core courses are followed by a range of ‘elective’ courses that allow students to specialise.) We also market-tested our initial selection with a group of students and graduates, by asking them how important they thought each model was. This allowed us to fine-tune our choices.

While our selection process was careful, the final list of models is still highly subjective. It is a bit like choosing the most influential people in history, or the best movies of the last 20 years: there is some data you can use to support your choices, but ultimately there is a lot of judgment involved, and we wouldn’t expect anyone else to agree 100 per cent with the list we finally settled on. One important criterion we used, for example, was to deliberately include a mix of ‘classic’ and ‘contemporary’ models in each section, so you can develop some perspective on the evolution of the subject matter.

The book is organised into seven parts, corresponding to how most business schools structure their core courses, which in turn reflects the academic disciplines that faculty are organised into. Each part includes between six and ten models, arranged in alphabetical order, and at the beginning of each part we have written a brief overview to explain how the models we chose fit together. Of course, there are many important topics in each of these areas that we don’t have space to cover. The further reading lists provided at the end of each chapter offer useful pointers for where to get additional information.

We will be the first to acknowledge that our chosen structure is a very ‘traditional’ way of looking at the business world. Some business schools have sought to develop cross-disciplinary or integrative approaches to their teaching, for example by focusing around real-world business challenges. But they are in the minority – the vast majority of business schools still organise their courses as we have done here.

To keep the book to a reasonable length, we have had to make some tough choices. We have not included any models that describe the ‘macro’ business environment, whether in terms of economic theory, government policy, law or trade regulations. We have steered clear of basic statistical models and tools, and we have spent relatively little time on individual-level psychological issues, or on the challenges of starting a business from scratch. As a general rule, we have focused on issues that are the concern of the firm or business as a whole. Ultimately, these are things that a ‘general manager’ in a firm needs to know.

What is a model?

We have used the term ‘model’ very loosely in this book to include frameworks, concepts, models and tools. We decided that it was more important to cover the key ideas that MBA students are exposed to in their core courses than to stick narrowly to a dictionary definition. For example, ‘open innovation’ is an important concept in the world of innovation and strategy today, so we have a chapter on it, even though it isn’t a model as such.

Technically speaking, a model is a simplified version of something more complex – it helps you understand a specific phenomenon by identifying its key elements. A framework is a way of structuring your understanding of a multi-faceted phenomenon, often by pulling together a number of diverse elements. A concept is a high-level idea, a way of looking at the world that provides new insight. A tool is a practical way of applying a body of thinking to address a particular task. These distinctions are of academic interest only: what matters is that this book includes what we believe to be the most important models, frameworks, concepts and tools in each area.

How you should read the book

For most readers, this is primarily a reference book – something to dip into, to remind you what a particular model is for, or to help you understand a concept you hadn’t heard before. For others, it might be a way to get up to speed on an entire subject. If you are moving into a marketing role, for example, it would be very useful to read up on the nine marketing models included here to make sure you understand the lie of the land. There may also be readers who are entering the business world for the first time, in which case reading the whole book from start to finish would make good sense.

Julian Birkinshaw
Ken Mark

PART ONE
Organisational behaviour

Organisational behaviour refers to the ways people interact with each other in the business world. While many elements of the MBA degree are based on financial and statistical analysis, the reality is that this quantitative approach only takes you so far. To get things done inside a company, you need to understand what makes people tick and how to get them to work effectively together. That is what the field of organisational behaviour is all about.

A useful starting point is to look at the work of managers: what do they actually do on a day-to-day basis? The classic reference here, dating from the 1970s, is Henry Mintzberg’s Managerial Roles – a framework that shows how multifaceted and fragmented the life of a busy executive is. While Mintzberg’s ideas continue to be important, there was a noticeable shift in business schools during the 1980s and 1990s away from ‘management’, which many people thought was too narrow and control-orientated, and towards ‘leadership’, which is about mobilising people to work together to achieve a vision. Many views on leadership have been put forward. In this section we discuss ‘emotional intelligence’ as one key characteristic of effective leaders. We also feature ‘360-degree assessment’ as a very important way to help individuals become more effective leaders.

Another important strand of thinking in the field of organisational behaviour is how leaders and managers influence others. Most decision-making processes, for example, are not as rational as we might expect. The chapter on ‘cognitive biases in decision making’ discusses why people often make snap judgments that are flawed, and how effective leaders can overcome these types of bias to make better decisions. There is also a chapter on ‘negotiating techniques’, with a specific focus on BATNA, which stands for the ‘best alternative to a negotiated agreement’. This chapter describes the tactics that you should use to influence others in the specific context of a negotiation.

A lot of work in organisations gets done through work groups or teams, and there is a great deal written about the dynamics of effective and ineffective teams. The chapter called ‘managing work groups: Belbin team roles’ describes one well-known model for defining the typical roles that individuals play in teams.

At the level of the organisation as a whole, leaders use a variety of formal and informal methods to shape the behaviour of their employees. On the formal side of the spectrum are the official roles and lines of reporting that determine who is accountable for what. We describe one such structure in detail, ‘matrix management’, as a way of highlighting the value and the limitations of this approach. On the informal side of the spectrum, leaders get things done by understanding what makes people tick. One chapter discusses ‘motivation’ and, in particular, the classic distinction between ‘Theory X and Theory Y’, which continues to be a very useful way of characterising the underlying drivers of human effort. We also consider how behaviour in an organisation is shaped more broadly, using ‘Schein’s model of organisational culture’, and we discuss the tactics leaders use to get people to act differently in the chapter on ‘change management: Kotter’s eight-step model’.


1 Change management: Kotter’s eight-step model

Many executives struggle to implement change in their organisations, and the larger the firm, the bigger the challenge. There are many recipe books for how to implement a change programme, and John Kotter’s eight-step model is probably the most highly regarded.

When to use it

● To implement a change in your organisation – for example, a new formal structure, a new IT system or a different way of serving your customers.

● To diagnose why an earlier attempt at making changes has failed, and to come up with a corrective course of action.

Origins

The challenge of managing change has existed for as long as organisations have existed. However, it was only when researchers started to understand the behavioural aspects of organisations, and the notion that employees might resist a change they didn’t buy into, that our modern view of change management started to emerge. For example, Kurt Lewin, an academic at MIT in the 1940s, showed how important it was to get employees out of their existing way of looking at the world before attempting a major change.

Systematic approaches to change management started to emerge in the postwar years, often led by consulting companies such as McKinsey and The Boston Consulting Group (BCG). During the 1980s and 1990s there were several attempts to formalise and codify the process. The eight-step model proposed by Harvard professor John Kotter is probably the best known. Others include Claes Janssen’s ‘four rooms of change’ model and Rosabeth Moss Kanter’s ‘change wheel’.

What it is

Change management is difficult partly because it appears easy. However, inertia is a very powerful force and we are all innately suspicious of attempts to disrupt the status quo in our organisations. Executives responsible for setting the strategy of the organisation will often see threats and opportunities much more clearly than those in more narrow roles, so a large part of their job in any change is about communicating why change is necessary.

John Kotter’s eight-step model is therefore all about ‘people’ – it is about how you get employees to adopt a planned change, making the required changes in their work patterns and attitudes. Kotter’s model has eight steps, which should be undertaken in the prescribed order:

1 Create urgency
2 Form a powerful coalition
3 Create a vision for change
4 Communicate the vision
5 Remove obstacles
6 Create short-term wins
7 Build on the change
8 Anchor the changes in corporate culture

How to use it

Kotter provides a great deal of detail about how each step of the model should be implemented. Obviously, much depends on the specific circumstances, and a leader should always be prepared to adapt the change plan, depending on the reaction he or she receives. Below is a brief description of how to do each step.

Step 1: create urgency

This is about convincing employees in the organisation that there are problems or opportunities that need addressing. For example, Stephen Elop, then CEO of Nokia, tried in 2011 to create urgency for change by talking about the ‘burning platform’ that Nokia was on – and how they needed to be prepared to consider dramatic changes to their business model.

Urgency can also be created by starting an honest dialogue with employees about what’s happening in the market-place. Often, customer-facing staff can be your strongest allies in this regard as they have daily direct feedback about the market. If many people start talking about the need for change, the urgency can build and feed on itself.


Step 2: form a powerful coalition

While you, as the leader, need to take charge of a major change effort, you cannot do it on your own. So it is important to get key opinion leaders from within the organisation to work with you. You can find effective opinion leaders throughout the organisation – they don’t necessarily follow the traditional company hierarchy. These people need to commit visibly to the change, and then to champion the change within their own part of the organisation.

Step 3: create a vision for change

Often people have very different views of what the future might look like. As leader, you need to create and articulate a clear vision so people can see what it means to them – how it taps into their own interests, and how they might be able to contribute. You don’t have to do this alone – involving key staff at this stage speeds up implementation later on as they feel more ownership for the change and have a bigger stake in its success.

Step 4: communicate the vision

In large organisations it is very difficult to get your message across to everyone, as there are often many layers between you (as a leader) and those operating on the front line. Effective leaders spend a lot of time giving talks, addressing people through multiple media and using their own direct subordinates to help spread the word.

Step 5: remove obstacles

Even a well-articulated and communicated vision doesn’t get everyone on board. There will always be some people resisting, or some structures that get in the way. So you need to work actively to remove obstacles and empower the people you need to execute the vision.

Step 6: create short-term wins

People have short attention spans, so you need to provide some tangible evidence that things are moving in the right direction early on – typically within a few months. Of course, there is often some ‘game playing’ here, in that the quick wins were often underway before the change programme started. But that rarely detracts from the value they provide in creating momentum.

Step 7: build on the change

Kotter argues that many change projects fail because victory is declared too early. Quick wins are important, but you need to keep on looking for improvements, so that the organisation doesn’t slide back into its old ways of working.


Step 8: anchor the changes in corporate culture

Finally, to make any change stick it needs to become part of the everyday way of working. This means embedding it in your corporate culture – telling stories about the change process and what made it successful, recognising key members of the original change coalition and including the change ideals and values when hiring and training new staff.

Top practical tip

Change management is all about people, and about making relatively small shifts in the way they behave. According to Kotter, your role as a leader is therefore about engaging with employees at an emotional level. They have to be able to ‘see’ the change (for example, through eye-catching situations where problems are resolved) and to ‘feel’ it (such as by gaining some sort of emotional response that motivates them to act). This helps to reinforce the desired behaviours.

Top pitfall

The Kotter model is very good for top-down change, where the top executives are motivated to change and have a well-informed view of where the organisation needs to go. There are some organisations, unfortunately, where these assumptions do not hold, in which case Kotter’s model does not work. Such organisations either need a change in leadership, or they need a bottom-up process of change.

Further reading

For information on the ‘four rooms of change’ go to www.claesjanssen.com

Kanter, R.M. (1992) The Challenge of Organizational Change: How companies experience it and leaders guide it. New York: Free Press.

Kotter, J. (1996) Leading Change. Boston, MA: Harvard Business School Press.


2 Cognitive biases in decision making

A cognitive bias is a way of interpreting and acting on information that is not strictly rational. For example, you might hire a candidate for a job because they went to the same school as you did. There are many types of cognitive bias – with both positive and negative consequences – so it is important to understand how they work.

When to use it

● To understand how you make decisions, so that you can avoid making bad ones.

● To understand how others reach their point of view in discussions.

● To influence the decision-making processes in your organisation.

Origins

While there is a long history of research on cognitive biases, most people agree that the ‘fathers’ of the field are the psychologists Amos Tversky and Daniel Kahneman. During the 1960s they conducted research seeking to understand why people often made flawed decisions. At that point in time, most people believed in ‘rational choice theory’, which suggested that humans would make logical and rational deductions based on the evidence available. However, Tversky and Kahneman showed conclusively that this is not so. For example, when faced with the prospect of losing, say, £1,000, individuals become risk-takers, preferring a gamble over a certain loss; whereas when offered the chance of winning the same amount, they become risk-averse, preferring a certain gain over a gamble. This insight, among many others, helped them to develop an entirely new way of looking at decision making. Humans don’t use algorithms, in the way a computer might. Rather, they use heuristics, or rules of thumb, that are simple to compute but introduce systematic errors.

Kahneman and Tversky’s experiments spawned an entire stream of research that spread beyond psychology into other disciplines, including medicine and political science. More recently, the field of economics embraced their ideas, resulting in the creation of behavioural economics and the awarding of the Nobel Prize in Economics to Kahneman in 2002.
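To make the gain/loss asymmetry concrete, here is a minimal sketch in Python of the prospect-theory value function, using the curvature and loss-aversion parameters Tversky and Kahneman estimated in their 1992 follow-up work (alpha ≈ 0.88, lambda ≈ 2.25). The £ amounts and function names are our own illustration, not from this book.

```python
# A minimal sketch of the prospect-theory value function (Tversky &
# Kahneman, 1992). ALPHA and LAMBDA are their published estimates;
# the scenario below is illustrative.

ALPHA = 0.88    # diminishing sensitivity to both gains and losses
LAMBDA = 2.25   # losses loom roughly twice as large as gains

def value(x: float) -> float:
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Facing losses: a sure loss of £500 versus a 50/50 chance of losing £1,000.
sure_loss = value(-500)                          # about -533
gamble = 0.5 * value(-1000) + 0.5 * value(0)     # about -491
print(gamble > sure_loss)   # True: the gamble feels better, so people take the risk

# Facing gains: a sure £500 versus a 50/50 chance of winning £1,000.
sure_gain = value(500)                           # about +237
gamble = 0.5 * value(1000) + 0.5 * value(0)      # about +218
print(sure_gain > gamble)   # True: the sure thing feels better, so people play safe
```

Running both comparisons shows the reversal described above: risk-seeking when facing losses, risk-averse when facing gains.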

What it is

Cognitive bias is a general term used to describe the workings of the human mind that may lead to perceptual distortion, inaccurate judgment or illogical interpretation. Cognitive biases come in many different forms. Some affect decision making – for example, the well-known tendency for groups to default into consensus (‘group think’) or to fail to see the truth in assembled data (‘representativeness’). Some affect individual judgment – for example, making something appear more likely because of what it is associated with (‘illusory correlation’) – while others affect the workings of our memory, for example by making past attitudes similar to present ones (‘consistency bias’). There are also biases that affect individual motivation, such as the desire for a positive self-image (‘egocentric bias’). The table below lists some of the most well-known examples of cognitive biases.

How to use it

The main way you use cognitive biases in the workplace is by being aware of their existence, and then taking steps to avoid their damaging side effects. For example, imagine you are in a business meeting and you are being asked to decide whether to go ahead with a proposal to launch a new product. Because of your knowledge of cognitive biases, you ask yourself a number of questions:

● Is there a reason to think that the people making the recommendation are suffering from bias, for example confirmation bias in their assessment of the potential market size, or are they trying to manipulate the group into a decision based on how they have framed the problem?

● Was there a high-quality discussion around the table? Did people have an opportunity to voice their concerns? Was the relevant information brought to bear on the discussion? Or did minority voices get drowned out?

On the basis of this analysis, it is your job to counter the biases that you think may be creeping in. For example, if you think someone is being selective with the data they are presenting, you can ask an independent expert to provide their own set of data. If you think a meeting has reached agreement too soon, you can call on someone to put forward a counter-argument. One of the most important jobs of the chairman in a meeting, in fact, is to be conscious of these potential biases and to use his or her experience to avoid egregious errors.


Framing – The relative appeal or value of an option or an item fluctuates depending on how it is presented. For example, we expect to pay more for a Coke in a 5* hotel than at a railway station vending machine. Context is key.

Confirmation bias – The tendency to search for information in a way that confirms our preconceptions and discredits information that does not support our view.

Fundamental attribution error – The tendency for us to over-emphasise personality-based explanations for behaviours observed in others. If a driver in front of us swerves unexpectedly, our automatic reaction is to label him a ‘bad driver’, whereas in fact he might have swerved to avoid something lying on the road.

Availability – The more easily we can recall an event or a group of people, the more common we perceive these events or people to be.

Representativeness – When we are asked to judge the probability that an object belongs to a particular category, our estimate is based on how representative that object is of the category, and it ignores the underlying distribution.

Anchoring – Establishing the perceived value of something at an arbitrarily high or low level. This is commonly observed in negotiations, for example with a salesperson setting a high price and then discounting it, making us feel as though we have got a better deal.

Much the same logic applies to other aspects of work in organisations. When discussing the performance of a subordinate, or when talking to a potential customer, you need always to be alert to the likely cognitive biases they have, and how these might get in the way of a good outcome. There are so many cognitive biases out there that it takes many years of experience to master this process.

Top practical tip

Here is a specific tip for managing a meeting, put forward by Daniel Kahneman. Before a difficult decision has to be made, ask everyone around the table to write down their views on a piece of paper. Then, when it is their turn to speak, they have to say what they wrote on their paper. This avoids people being swayed in their opinions by what the person before them said.


Top pitfalls

It is possible to abuse your knowledge of cognitive biases by over-analysing things. In many business contexts, speed of decision making is important, so all the techniques described above are helpful but they can also slow things down a lot. The trick, as always, is about balance – the right blend of careful analytical thinking and intuition-based judgment.

The other big pitfall is that it is much easier to recognise cognitive biases in others than in yourself, so don’t make the mistake of thinking you are immune from bias. Ask others to guide you on this – ask them to challenge your thinking, and to tell you if you are falling into one of the traps we have discussed above.

Further reading

Kahneman, D. (2012) Thinking, Fast and Slow. London: Penguin Books.

Rosenzweig, P. (2007) The Halo Effect. New York: Free Press.

Thaler, R.H. and Sunstein, C.R. (2008) Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.


3 Emotional intelligence

Emotional intelligence is the ability to monitor your own and other people’s emotions. This helps you to discriminate between different emotions and label them appropriately, which in turn helps you to guide your thinking and behaviour and increase your personal effectiveness.

When to use it

● To help you do your job as a manager or a leader of others.

● To decide who to hire or promote.

● To assess and improve the quality of leadership across an organisation.

Origins

The concept of emotional intelligence has been around for many decades. Its origins lie in research done by Edward Thorndike in the 1920s, who came up with the notion of ‘social intelligence’, or the ability to get along with others. In the early 1980s, Howard Gardner, the educational psychologist, showed that there were multiple forms of intelligence, which helped to legitimise the idea that less academically trained forms of intelligence were important. The term ‘emotional intelligence’ was first used by researcher Wayne Payne in 1985, in his doctoral dissertation.

Since then, three different approaches to emotional intelligence have emerged. The ability model of Peter Salovey and John Mayer focuses on the individual’s ability to process emotional information and use it to navigate the social environment. The trait model of K.V. Petrides focuses on an individual’s self-perceived attributes – it sees emotional intelligence as a set of human traits. The third and most popular approach, put forward by author Daniel Goleman, is a blend of the other two, in that it combines abilities and traits.

11

What it is

There are many different types of human intelligence – some people are good at mathematics, some are good with words, others have musical skills or good hand-eye coordination. Emotional intelligence is one such type of intelligence. It is very hard to measure, but it turns out to be vitally important in the workplace, and especially for leaders of organisations. One of the hallmarks of great leaders, it is argued, is that they are good at sensing how others are feeling, and adapting their message and their style of interaction accordingly. Great leaders are also very aware of their own strengths and weaknesses, which is another important facet of emotional intelligence.

The popular model of emotional intelligence put forward by Daniel Goleman concentrates on five components:

1 Self-awareness: the ability to recognise and understand personal moods, emotions and drives, and their effect on others.

2 Self-regulation: the ability to control disruptive impulses and moods, suspend judgment and think before acting.

3 Internal motivation: a passion to work for internal reasons that go beyond money and status.

4 Empathy: the ability to understand the emotional make-up of other people.

5 Social skills: proficiency in managing relationships and building networks, finding common ground and building rapport.

How to use it

You can use the concept of emotional intelligence in an informal or a formal way. The informal way is to reflect on Goleman’s five components as desirable attributes that you or others should have. Do you think you are self-aware? Do you have strong empathy and social skills? This sort of casual analysis can lead to useful insights about things you might do differently, or the type of training course you might want to take.

The formal way is to use the official diagnostic surveys created by academics. John Mayer, one of the originators of the concept, says ‘In regard to measuring emotional intelligence – I am a great believer that [ability testing] is the only adequate method to employ’. There are several different surveys available. The ‘EQ-i’ is a self-report test designed by Reuven Bar-On to measure competencies including awareness, stress tolerance, problem solving and happiness. The ‘Multifactor Emotional Intelligence Scale’ gets test-takers to perform tasks based on their ability to perceive, identify and understand emotions. The ‘Emotional Competence Inventory’ is based on ratings from colleagues on a range of emotion-based competencies.


Top practical tip

Emotional intelligence is, by nature, very hard to measure. We would all like to think we are emotionally intelligent, but we aren’t! So the top practical tip is to get multiple points of view. Sometimes you do this through anonymous feedback and sometimes through group-based coaching sessions, where people give each other candid feedback.

Top pitfalls

The problem with a concept such as emotional intelligence is that it sounds so alluring that everyone wants it. However, it takes a lot of time to change your own way of working and how you relate to others. So the biggest pitfall is to think that being assessed and understanding how emotionally intelligent you are is the endpoint. In fact, this is really just the starting point, because it is then that the hard work of making changes begins. Typically, you cannot do this on your own – you need a helpful boss or colleague, or personal coach.

Another pitfall is to misuse emotional intelligence as a way of manipulating others. For example, if you have a really good understanding of how your personal style affects others, you might be able to lure them into doing something they didn’t really intend to do. There is a fine line between being skilful and being manipulative, and it is important not to overstep that line.

Further reading

Goleman, D. (2006) Emotional Intelligence: Why it can matter more than IQ. New York: Random House.

Grant, A. (2014) ‘The dark side of emotional intelligence’, The Atlantic, 2 January.

Petrides, K.V. and Furnham, A. (2001) ‘Trait emotional intelligence: Psychometric investigation with reference to established trait taxonomies’, European Journal of Personality, 15(6): 425–448.

Salovey, P., Mayer, J. and Caruso, D. (2004) ‘Emotional intelligence: Theory, findings, and implications’, Psychological Inquiry, 15(3): 197–215.


4 Managing work groups: Belbin team roles

These days, many people work in teams. Teams can be very productive, because they allow people with different skills to work together in creative ways, but they can also be highly dysfunctional. Figuring out what makes teams effective is therefore important, and a key part of this is to understand the different roles individuals play in teams. The ‘Belbin Team Inventory’ is a well-known personality test that measures your preference towards nine different roles you can play in a team.

When to use it

● To build productive working relationships in teams.

● To improve your own self-awareness, in terms of the roles you typically play in a team context.

● To build and develop high-performing teams.

Origins

Researchers have studied the dynamics of teams for many years. One of the most famous studies was conducted in the early 1970s at Henley Business School (outside London). Under the leadership of Dr Meredith Belbin, researchers studied the dynamics of competing teams during an executive training programme. They recorded the types of contributions made by participants during the team meetings, and they also examined how effective the teams were in their activities. The researchers had expected to see the high-intellect teams performing better (based on the team members’ performance in various IQ tests). However, it turned out that the highest-performing teams were actually those where team members performed a balanced mix of roles. Some teams, for example, had too many people who wanted to take the lead, while others had too few. This analysis led Belbin and his team to identify nine different ‘team roles’ that individuals typically play in teams, and they argued that the most successful teams are those in which all nine roles are (to varying degrees) filled.

Many subsequent studies have built on Belbin’s analysis and indeed challenged some aspects of his findings. But the basic notion, that people need to fulfil different roles in order for the teams to function effectively, has been strongly endorsed. Many firms have subsequently used Belbin’s analysis as a way of understanding both individual and team behaviour.

What it is

In a team environment, people tend to take on certain roles that are a function of their own personal strengths and also their relationship with the others in that team. A ‘team role’ is defined as a tendency to behave, contribute and interrelate with others in a particular way. Belbin identified nine such roles, categorised into three groups: ‘action orientated’, ‘people orientated’ and ‘thought orientated’. They are summarised in the table below.

Plant
Contribution: Creative, imaginative, freethinking. Generates ideas and solves difficult problems.
Allowable weaknesses: Ignores incidentals. Too preoccupied to communicate effectively.

Resource investigator
Contribution: Outgoing, enthusiastic, communicative. Explores opportunities and develops contacts.
Allowable weaknesses: Over-optimistic. Loses interest once initial enthusiasm has passed.

Coordinator
Contribution: Mature, confident, identifies talent. Clarifies goals. Delegates effectively.
Allowable weaknesses: Can be seen as manipulative. Offloads own share of the work.

Shaper
Contribution: Challenging, dynamic, thrives on pressure. Has the drive and courage to overcome obstacles.
Allowable weaknesses: Prone to provocation. Offends people’s feelings.

Monitor evaluator
Contribution: Sober, strategic and discerning. Sees all options and judges accurately.
Allowable weaknesses: Lacks drive and ability to inspire others. Can be overly critical.

Teamworker
Contribution: Cooperative, perceptive and diplomatic. Listens and averts friction.
Allowable weaknesses: Indecisive in crunch situations. Avoids confrontation.

Implementer
Contribution: Practical, reliable, efficient. Turns ideas into actions and organises work that needs to be done.
Allowable weaknesses: Somewhat inflexible. Slow to respond to new possibilities.

Completer finisher
Contribution: Painstaking, conscientious, anxious. Searches out errors. Polishes and perfects.
Allowable weaknesses: Inclined to worry unduly. Reluctant to delegate.

Specialist
Contribution: Single-minded, self-starting, dedicated. Provides knowledge and skills in rare supply.
Allowable weaknesses: Contributes only on a narrow front. Dwells on technicalities.

Source: The Nine Belbin Team Roles, www.belbin.com/rte.asp?id=3. Reproduced with permission.

For each role there is a typical set of contributions that an individual makes to a team, and also a characteristic weakness that goes with that role. Belbin calls these weaknesses ‘allowable’ on the basis that they don’t necessarily get in the way of the team achieving its objectives, as long as people understand and compensate for them.

How to use it

The Belbin team roles model can be used in several ways. At an individual level, you can use it as a way of understanding your own preferred way of working, and perhaps to develop your strengths in areas where you are relatively weak. This sort of self-awareness can be very valuable in a team situation, as it reduces the risk of getting annoyed with others who see their role in the team differently to how you see yours.

At a team level, the Belbin inventory can be a useful way of assessing the balance between team members before a project starts (or perhaps just after it has got underway). This can alleviate some of the difficult dynamics between team members. For example, a team with too many Shapers and Coordinators will often see sparks fly, because these individuals all want to be in charge. A team with too many Completer Finishers and Specialists, on the other hand, might struggle to get moving initially, or might lack creativity.

At an organisational level, you can also use the Belbin inventory as a way of selecting which people to put into teams. Of course, there is no perfect combination of roles, as many people are sufficiently skilled to consciously play different roles depending on the circumstances, but at a minimum it is a good idea to make sure there are some people with expertise in each of the action, interpersonal and thought categories.
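To see how a role-balance check might work in practice, here is a toy sketch in Python. The grouping of the nine roles into the three categories follows the standard Belbin grouping; the mapping is our illustration, not part of the official Belbin inventory, and the team itself is hypothetical.

```python
from collections import Counter

# Belbin's nine roles grouped into his three categories. This mapping
# follows the standard Belbin grouping; it is a toy illustration only.
CATEGORY = {
    "Shaper": "action", "Implementer": "action", "Completer finisher": "action",
    "Coordinator": "people", "Teamworker": "people", "Resource investigator": "people",
    "Plant": "thought", "Monitor evaluator": "thought", "Specialist": "thought",
}

def team_balance(roles):
    """Count how a team's preferred roles spread across the three categories."""
    counts = Counter(CATEGORY[role] for role in roles)
    return {cat: counts.get(cat, 0) for cat in ("action", "people", "thought")}

# A hypothetical team of consultants, heavy on action-orientated roles:
team = ["Shaper", "Shaper", "Completer finisher", "Implementer", "Plant"]
print(team_balance(team))   # {'action': 4, 'people': 0, 'thought': 1}
# No people-orientated roles at all - a signal to consider rebalancing.
```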


Top practical tip

People gravitate to certain team roles on the basis of their areas of expertise and also their day-to-day work. So consultants will often move, by default, into the role of a Shaper or Completer Finisher, whereas people working in an R&D lab may take on the role of a Specialist or Plant. This is important to keep in mind, because teams can quickly become unbalanced if they have too many people playing a particular role.

If you find yourself overseeing a team of consultants, for example, you should be careful to avoid the ‘too many chiefs, not enough Indians’ scenario. Be prepared to make some deliberate changes to the make-up of the team by drafting in people who fill any notable gaps in the team profile, or by reassigning certain people to other teams.

Top pitfalls

Belbin’s team roles are based on observed behaviour and interpersonal styles, but there is no suggestion that people have predefined ways of working. In fact, many people are highly flexible, and your own interpersonal style is likely to depend to a large degree on the situation in which you find yourself. In one team you might find yourself as a Coordinator, in the next you could take on the role of Specialist or Plant. The biggest pitfall, in other words, is to assume that your preferred way of working, as predicted by the Belbin team role inventory, is your only way of working. It is useful to know your preference, but it is much better to be adaptable.

It is also useful to know that there are many other guides to team behaviour, and indeed some of them have stronger academic underpinnings than Belbin’s model. The advantage of Belbin’s work is that it is widely known and makes intuitive sense, but you should always be open to alternative ways of managing team dynamics.

Further reading

Belbin, R.M. (1996) Team Roles at Work. Oxford, UK: Butterworth-Heinemann.

Katzenbach, J.R. and Smith, D.K. (2005) The Wisdom of Teams: Creating the high-performance organization. New York: McGraw-Hill Professional.


5 Matrix management

A matrix is an organisational structure through which people report to two or more bosses. For example, the manager at Unilever responsible for ice cream sales in France might report to the head of the ice cream business in London and also to the country manager responsible for all of Unilever’s business in France. Most large firms use some sort of matrix, often with more than two dimensions. Matrix management creates many challenges for those operating in it.

When to use it

● To structure your organisation so that you can operate effectively in a complex world.

● To help managers understand their sometimes-conflicting responsibilities.

● To adapt your organisation to new challenges.

Origins

The first known matrix structure was put in place by aircraft manufacturer McDonnell in the early 1950s, as a way of reconciling the demands for efficient delivery of specific projects (such as an aircraft order from the US government) while also achieving high levels of functional specialisation for each different activity. In this structure, teams of employees essentially worked for two bosses – one project boss and one functional boss.

Other forms of matrix structure soon followed. International firms, such as IBM, Dow Chemical and Digital Equipment, created global matrix structures during the 1970s with reporting lines to strategic business unit heads on one side and country heads on the other. Professional services firms, such as Citibank and McKinsey, developed matrix structures with their service lines on one side and their city- or country-based resource pools on the other.

But matrix structures are not easy to implement because they tend to slow decision making down, and during the 1980s a lot of firms moved away from them and back towards simpler structures with a single primary line of reporting. Interestingly, though, the matrix came back into favour again in the 1990s, partly because of the necessities of managing multiple objectives, and partly thanks to high-profile firms (such as the Swiss/Swedish engineering firm ABB) advocating this structure. While many observers continue to criticise the matrix structure, it is a ‘necessary evil’ for large firms operating in complex business environments.

What it is

A matrix structure indicates formal relationships – the boxes and lines on an organisation chart. The concept of matrix management refers to the entire way of working that goes around the matrix structure, which includes the various systems for monitoring and evaluating performance, for sharing information and for allocating resources, as well as the informal way that people behave within such a structure.

Almost all large firms operate in a complex business setting where they face demands that pull them in more than one direction. For example, consider an international firm operating in the food industry. One set of demands pulls it towards ‘global integration’, which means organising its factories and development activities around its major product areas. Another set of demands pulls it towards ‘local responsiveness’, which means thinking about the needs of customers on a country-by-country basis. Faced with these conflicting demands, the firm can choose to focus primarily on one of them, which allows it to create a simple hierarchical structure where everyone has a single boss. Alternatively, the firm can choose to focus on both at the same time, which is achieved by creating a matrix structure that helps the firm to address its conflicting demands in equal measure. However, it can also slow decision making down, because if a manager’s two bosses have contrary views then they need to resolve their differences before the manager can take action.

Some firms use a ‘balanced’ matrix structure, where both the manager’s bosses have equal power and responsibility. However, this approach tends to result in large numbers of meetings to resolve disagreements. Most firms have therefore moved to an ‘unbalanced’ form of matrix, where a manager has a ‘solid-line’ reporting relationship to one boss and a ‘dotted-line’ relationship with one or more other bosses. The dotted-line bosses have no formal authority over the manager in question; instead, they typically provide guidance and information, and they help to coordinate activities. Some firms use this unbalanced form of matrix to coordinate activities on four or five different dimensions. For example, a manager at IBM might have a solid-line reporting relationship with a business unit head, such as consulting, then dotted-line relationships with a country head (e.g. USA), a functional head (e.g. sales) and also someone responsible for a ‘vertical’ client sector (e.g. pharmaceuticals).
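For readers who think in code, the solid-line/dotted-line distinction can be captured in a small data structure. This is purely our illustration of the unbalanced matrix described above; the job titles are hypothetical, loosely echoing the IBM example.

```python
from dataclasses import dataclass, field

# An 'unbalanced' matrix as a data structure: one solid-line boss with
# formal authority, plus any number of dotted-line bosses who advise
# and coordinate. Illustrative sketch only; the names are hypothetical.

@dataclass
class Manager:
    name: str
    solid_line: str                        # the one boss who can overrule
    dotted_lines: list = field(default_factory=list)

    def decision_maker(self) -> str:
        # When the bosses disagree, the solid line settles it - this is
        # what keeps an unbalanced matrix faster than a balanced one,
        # where two equal bosses must negotiate every conflict.
        return self.solid_line

manager = Manager(
    name="Consulting manager",
    solid_line="Business unit head (consulting)",
    dotted_lines=[
        "Country head (USA)",
        "Functional head (sales)",
        "Client sector lead (pharmaceuticals)",
    ],
)
print(manager.decision_maker())   # Business unit head (consulting)
```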


How to use it

Matrix management has a number of well-established advantages and disadvantages. The advantages include:

●● delivering work across the firm in an effective way;
●● overcoming organisational silos;
●● responding flexibly to external demands;
●● developing the skills of managers.

The disadvantages include:

●● having multiple bosses, which can be confusing and lead to frustration;
●● managers becoming over-burdened;
●● senior managers spending enormous amounts of time in internal meetings;
●● decision making becoming very slow.

To overcome these disadvantages, firms have to give a lot of attention to the formal and informal structures that support the matrix. As noted above, one useful approach is to make the matrix deliberately unbalanced, so that it is clear where the primary line of reporting lies. This means that in cases where the two bosses disagree, one of them has the right to overrule the other, and this helps to speed up decision making.

Another part of making matrix management work is to ensure that the supporting systems are helpful. For example, a well-designed IT system that provides frequent updates to everyone on, say, production volumes or sales numbers ensures that decisions are made on the basis of high-quality information. Equally, incentive systems are often created to reward senior managers on the performance of the whole business, rather than one part. In both cases, this reduces the risk of managers making decisions to support their parochial, or 'pet', interests.

The third part of making matrix management work is to give attention to the informal culture of the firm. It is often said that you should seek to create a 'matrix in the mind of the manager' so that, regardless of the formal structure, every manager tries to act in the overall interests of the firm, rather than from the narrow perspective of their business unit or function. There are many ways to work on improving this informal culture, for example by creating training programmes and events where managers in different areas get to know each other better, or by transferring managers from one job to another, so that they get a taste of how the whole firm works.


Top practical tip

If you work in a matrix structure, the first thing to do is to make sure you understand how it works. Is it a balanced matrix with two equal bosses? Or is it an unbalanced matrix in which one line of reporting dominates? Armed with this information, you are then in a better position to make the right judgments about what work to prioritise, or from whom to seek permission when pursuing a new project. If you are in doubt, follow the money: the more budget one dimension of the matrix has, the more powerful it is.

Top pitfall

Every organisational structure has its weaknesses, and over time those weaknesses tend to compound. A matrix structure that is balanced will, over time, become extremely slow-moving and bureaucratic. A matrix that is unbalanced will tend to give too much power to certain people at the expense of others, which will not be good for the firm as a whole. So the biggest pitfall is allowing the existing structure to endure for a very long period of time.

Further reading

Bartlett, C.A. and Ghoshal, S. (1990) 'Matrix management: Not a structure, a frame of mind', Harvard Business Review, July–August: 138–45.
Davis, S.M. and Lawrence, P.R. (1977) Matrix. Boston, MA: Addison-Wesley.
Galbraith, J.R. (2013) Designing Organizations: Strategy, structure, and process at the business unit and enterprise levels. Hoboken, NJ: John Wiley & Sons.


6

Mintzberg’s managerial roles

What do managers actually do in the workplace? Henry Mintzberg showed that managers do three types of things: informational work that involves sharing and disseminating information to those around them; interpersonal work that involves working with individuals to get the most out of them; and decisional work that involves doing deals and making tough choices. By understanding the nature of these different roles, we can all become more effective as managers.

When to use it

●● To understand and critique your own style of managing.
●● To evaluate the overall effectiveness of your organisation.
●● To design a training or development programme for managers.

Origins

Henry Mintzberg wrote his PhD thesis at the MIT Sloan School of Management, with a study focusing on the day-to-day work habits of chief executive officers (CEOs). Up to this point, people had tended to focus on the functions of management, such as planning, organising, controlling and budgeting. By following executives around, and monitoring how they actually spent their time, Mintzberg was able to put forward a completely different perspective on the nature of managerial work. His ideas gained popularity very quickly, and spawned a whole field of further research on the topic.

Over the years, many people have shifted their focus towards leadership – that is, the way an executive inspires others to follow him or her. But understanding management, the art of getting work done through others, is still vitally important, and Mintzberg's original research on managerial roles is where it all started.


What it is

In writing up his research on CEOs, Mintzberg made some important observations about how they spend their time in the workplace:

●● Managers work at an unrelenting pace, and they are strongly orientated to action.
●● They strongly favour verbal media – telephone calls and meetings – over documents.
●● They spend most time with their subordinates and external parties, and relatively little with their superiors.
●● Their activities are relatively short in duration (an average of 9 minutes), and are highly varied and fragmented.

Based on these observations, Mintzberg identified ten distinct roles that executives perform during their day-to-day working lives, as shown in the table below.

Informational roles

●● Monitor – seek and acquire work-related information. Examples: scan/read trade press, periodicals and reports; attend seminars and training; maintain personal contacts.
●● Disseminator – communicate/disseminate information to others within the organisation. Examples: send memos and reports; inform staffers and subordinates of decisions.
●● Spokesperson – communicate/transmit information to outsiders. Examples: pass on memos, reports and informational materials; participate in conferences/meetings and report progress.

Interpersonal roles

●● Figurehead – perform social and legal duties, act as symbolic leader. Examples: greet visitors, sign legal documents, attend ribbon-cutting ceremonies, host receptions.
●● Leader – direct and motivate subordinates, select and train employees. Examples: almost all interactions with subordinates.
●● Liaison – establish and maintain contacts within and outside the organisation. Examples: business correspondence, participation in meetings with representatives of other divisions or organisations.

Decisional roles

●● Entrepreneur – identify new ideas and initiate improvement projects. Examples: implement innovations; plan for the future.
●● Disturbance handler – deal with disputes or problems and take corrective action. Examples: settle conflicts between subordinates; choose strategic alternatives; overcome crisis situations.
●● Resource allocator – decide where to apply resources. Examples: draft and approve plans, schedules and budgets; set priorities.
●● Negotiator – defend business interests. Examples: participate in and direct negotiations within team, department and organisation.

Source: Mintzberg, H. (1983) The Nature of Managerial Work. Upper Saddle River, NJ: Pearson Education, Inc. Reproduced with permission.

In summary, Mintzberg's research exposed many of the myths of the time about what managers really do. For example, he challenged the idea that CEOs think through problems in advance, or with deep foresight – many of them are, in fact, highly reactive and opportunistic in the way they work, and they are comfortable being constantly interrupted.

Mintzberg's work was also influential in what became known as the 'situational' view of leadership. In other words, rather than arguing that some executives have stronger leadership and management capabilities than others, he argued that capabilities are context-specific. The most effective leaders are those who can tailor their behaviour to what is required in the circumstances.

How to use it

If you are a manager, Mintzberg's roles provide a useful way of helping you to think through your responsibilities. Take each of the ten roles in turn, and ask yourself two questions: 'How much time do I actually spend doing this?' and 'How much time should I spend doing this in an ideal world?' This simple analysis helps you to figure out what sorts of things to prioritise, and what sorts of activities you should be downplaying or getting out of altogether.

If you are trying to make an entire organisation work more effectively, you can use Mintzberg's roles to design some sort of management development programme. Where are the biggest weaknesses in the skill sets of the company's managers? What sort of training do they need? For example, many companies find that their managers aren't sufficiently skilled in leading others, or that they lack skills in negotiating, and this is very helpful in defining the appropriate training programme for them.

Remember that these different roles are complementary – they build on each other, and together they make up a rounded whole. According to Mintzberg: 'The manager who only communicates or only conceives never gets anything done, while the manager who only "does" ends up doing it all alone.'
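To make the two-question audit concrete, here is a minimal sketch in Python. All the hours are invented for illustration – Mintzberg prescribes no ideal allocation – and the script simply flags the gap between actual and ideal time for each of the ten roles.

# Hypothetical weekly hours per role; the role names follow Mintzberg's
# ten roles, but the figures are illustrative assumptions only.
actual_hours = {
    "monitor": 6, "disseminator": 4, "spokesperson": 2, "figurehead": 1,
    "leader": 5, "liaison": 3, "entrepreneur": 2, "disturbance handler": 6,
    "resource allocator": 4, "negotiator": 2,
}
ideal_hours = {
    "monitor": 4, "disseminator": 3, "spokesperson": 2, "figurehead": 1,
    "leader": 9, "liaison": 4, "entrepreneur": 5, "disturbance handler": 3,
    "resource allocator": 3, "negotiator": 1,
}

for role, actual in actual_hours.items():
    gap = ideal_hours[role] - actual
    verdict = "invest more" if gap > 0 else "scale back" if gap < 0 else "keep as is"
    print(f"{role:20s} actual {actual:2d}h, ideal {ideal_hours[role]:2d}h -> {verdict}")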

Top practical tip

At a practical level, most managers spend far too much time on task-orientated roles, such as working on emails or attending meetings, and not enough on interpersonal roles such as coaching and developing their subordinates. So if you want to become more effective as a manager, keep a timesheet and note down how much time you spend on each activity over the course of a week. By showing how you are actually spending your time, you create the impetus to change your behaviour, and to invest more time in the work that really matters.

Top pitfall

Mintzberg's original analysis was descriptive, which means he simply documented what he observed, rather than claiming that there was some sort of 'ideal' mix of roles that managers should undertake. So the biggest pitfall here is to think that every manager needs to do all these things. Instead, the ten roles should be used as a diagnostic tool, and you should understand that, depending on the job, some roles will be far more important than others.


Further reading

Mintzberg, H. (1983) The Nature of Managerial Work. Upper Saddle River, NJ: Pearson Education, Inc.
Mintzberg, H. (2012) Managing. Oakland, CA: Berrett-Koehler.
Tengblad, S. (2006) 'Is there a "new managerial work"? A comparison with Henry Mintzberg's classic study 30 years later', Journal of Management Studies, 43(7): 1437–1461.


7

Motivation: Theory X and Theory Y

Theory X and Theory Y represent differing perspectives on human motivation. Theory X says that people are basically lazy – they don’t want to work. To ensure that they do a good job, you have to pay them, and you have to link their pay directly to what they produce. Theory Y says that people like to work – they want to do a good job and develop themselves. Paying them well is less important than providing them with a stimulating working environment.

When to use it

●● To understand what makes people tick and how to manage them better.
●● To design jobs that get the best out of people.
●● To develop an effective compensation or incentive system.

Origins

Douglas McGregor was a social psychologist working at MIT in the 1950s, alongside other leading management theorists such as Ed Schein and Kurt Lewin. He put forward his ideas in his classic book, The Human Side of Enterprise. He was dismayed at the extent to which many companies treated their employees as cogs in a machine, and he thought that an alternative view, based around tapping into people's innate desire to do a good job, needed to be developed.

Theory X was, in essence, the 'bad old way of working' that had been labelled as 'scientific management' in the 1920s. Theory Y was the alternative, more humanistic approach that McGregor put forward. His work built on the earlier ideas of Abraham Maslow, who had put forward the idea that people have a hierarchy of needs – from basic physiological needs and safety through to love, esteem and self-actualisation.


What it is

There are two basic views of human nature in the workplace. Theory X assumes that employees are naturally unmotivated and dislike working, which means managers need to provide a lot of oversight to ensure that employees get their work done. This means, for example, careful supervision and also a heavy reliance on incentives – financial rewards for completing work, penalties for those who shirk or make errors.

In contrast, Theory Y assumes that employees are intrinsically motivated to work, and will naturally do a good job if given the right opportunities. According to this world-view, managers have a very limited role – their job is to define what has to be done, to provide support where necessary, and then just to get out of the way. Responsibility, in a Theory-Y world, lies primarily with those doing the work, whereas in a Theory-X world, the responsibility stays with the managers overseeing the work.

Which theory is right? Well, they are both partially correct. The reality is that people vary enormously in terms of what motivates them, ranging from mostly money (Theory X) at one end of the spectrum to the joy of work (Theory Y) at the other. By understanding where employees sit on this spectrum, companies can develop management styles and incentive systems to match those different drivers. For example, there are many factories, call centres and fast-food operations that operate on a Theory-X model, while many professional services companies, R&D operations and start-ups use a Theory-Y model.

Theory X/Theory Y is the most famous model in the field of human motivation, but thinking in this area has continued to evolve. A recent extension is 'self-determination theory' (see Further reading).

How to use it

Theory X/Theory Y is a way of looking at the world, rather than an immediately practical tool. So there are many different ways of applying it.

At an individual level, you can use it to diagnose what makes a person tick – what sort of factors lead them to put in long hours. Some people subscribe to Theory X, and are motivated primarily by making money; some subscribe to Theory Y and are motivated by being given interesting work to do; others lie somewhere in between, and are motivated by the opportunity to work with interesting colleagues, or to gain the respect of those around them. If you are able to identify the relative importance of these different drivers for people in your team, you will know which of their buttons to press, and you will end up doing a much better job of managing them. Note, however, that what makes a person tick isn't completely innate: it is also partly driven by the context in which they are working.

At an organisational level, Theory X/Theory Y underlies a lot of the basic choices a company makes about how work gets done. For example, do you use a strict command-and-control style of working (Theory X), or do you empower your people and encourage them to participate in decision making (Theory Y)? Does your incentive system reward individuals for their specific outputs (Theory X), or is it based on the entire team or organisation doing well (Theory Y)? A key point here is that the theory you choose has self-reinforcing qualities. For example, if you design the organisation using Theory-X principles, then you will force people to work in a narrow, target-driven way. There is no clear right or wrong here. Rather, you need to start with a point of view on what the right theory is for your organisation, and then you have to build the structure and incentive system in a way that is consistent with that theory.

Top practical tip

While it is tempting to see Theory X and Theory Y as opposing views, the reality is that they are both partially true. We are all motivated to some degree by money, just as we are all motivated to some degree by doing a good job for its own sake. Given that Theory Y is an inherently more productive and inspiring world-view, it therefore makes sense to try to make it work. For example, if you manage a team of employees, try to give them space to make their own choices and to take responsibility for their actions. Look for ways of supporting and encouraging them, rather than just telling them what to do. Try to reward them for effort, or on a collective basis, rather than just on narrow measures of outputs. It is challenging to do these things, but it can also be very rewarding. And remember, you can always fall back on Theory X if this approach doesn't work.

Top pitfall

Whichever theory you subscribe to in your company, you run the risk of creating undesirable side-effects. If you work in an organisation that subscribes to a Theory-X style of management, you run the risk of creating a hostile and distrustful atmosphere. Many unionised operations have fallen into this trap, and it makes for unhappy feelings among workers and managers alike. If you work in an organisation that subscribes to a Theory-Y style of management, the atmosphere is likely to be much more positive, but with the risk of a lack of control and coherence. Some employees may take advantage of you as a Theory-Y manager, for example by spending excessive amounts of money on travel, or taking long lunch-breaks.


Further reading

Deci, E. and Ryan, R. (2002) Handbook of Self-determination Research. Rochester, NY: University of Rochester Press.
McGregor, D. (1960) The Human Side of Enterprise. New York: McGraw-Hill.
Maslow, A. (1954) Motivation and Personality. New York: Harper.
Pink, D.H. (2011) Drive: The surprising truth about what motivates us. New York: Riverhead Books.


8

Negotiating techniques: BATNA

BATNA stands for 'best alternative to a negotiated agreement'. Whenever you negotiate with another party, there is a chance that the negotiation may break down and that you will have to fall back on some alternative course of action. Your BATNA is this alternative course of action. Being clear on what your BATNA is – and on what the other party's BATNA is – is crucial to effective negotiation.

When to use it

●● To get a better deal for yourself whenever you negotiate anything, for example a pay rise or buying a house.
●● To help your firm handle complex negotiations, for example acquiring another firm or resolving a dispute with a labour union.

Origins

People have been negotiating with one another since civilisation began, and over the years a lot of academic research has been done to establish the factors that contribute to a successful negotiated outcome in a business setting. The term BATNA was coined by researchers Roger Fisher and William Ury in their 1981 book Getting to Yes: Negotiating Agreement Without Giving In. While the notion of understanding your fall-back position in a negotiation was in existence before, the BATNA label proved very powerful as a way of focusing attention on the key elements of a negotiation. It is now a very widely used term.

What it is

To negotiate effectively, you need to understand your BATNA. This is your 'walk-away' option – the course of action you will take if the negotiation breaks down and you cannot agree with the other party. Sometimes it is really clear what the BATNA is; however, it is not necessarily obvious. Let's say you have developed a new consumer product, but you are struggling to agree a reasonable price with a major supermarket chain. Is your best alternative here to sell it through a smaller chain of supermarkets? To sell it online? Or to drop the product launch altogether? There are many other factors at play here, and these include non-quantifiable things such as your personal reputation.

How to use it

According to Fisher and Ury, there is a simple process for determining your BATNA that can be applied to any negotiation (a simple illustrative scoring sketch follows the list):

●● Develop a list of actions you might conceivably take if no agreement is reached.
●● Improve some of the more promising ideas and convert them into practical options.
●● Select, provisionally, the one option that seems best.
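To make the selection step concrete, the Python sketch below scores some hypothetical fallback options (echoing the product-launch example above) on two invented attributes. The options, attributes and weights are illustrative assumptions only, not part of Fisher and Ury's method.

# Hypothetical fallback options, each scored 1-10 on two made-up attributes.
options = {
    "sell via a smaller supermarket chain": {"attractiveness": 6, "feasibility": 8},
    "sell online, direct to consumers": {"attractiveness": 7, "feasibility": 6},
    "drop the product launch": {"attractiveness": 2, "feasibility": 10},
}

def score(option):
    # Weight attractiveness more heavily than ease of execution (arbitrary choice).
    return 0.7 * option["attractiveness"] + 0.3 * option["feasibility"]

best = max(options, key=lambda name: score(options[name]))
print(f"Provisional BATNA: {best}")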

Note that a BATNA is not the same thing as a ‘bottom line’ that negotiators will sometimes have in mind as a way of guarding themselves against reaching agreements where they give too much. A bottom line is meant to act as a final barrier, but it also runs the risk of pushing you towards a particular course of action. By using the concept of a BATNA, in contrast, you think less about the objectives of a specific negotiation and instead you focus on the broader options you have to achieve your desired objectives. This provides you with greater flexibility, and allows far more room for innovation than a predetermined bottom line.

Negotiation dynamics

If you are clear what your own BATNA is, you know when to walk away from a negotiation. It is also important to think about the BATNA of the other party. For example, if you are trying to negotiate terms for a new job with an employer, and you know there are other good people short-listed for that job, then the employer's BATNA is simply to hire the next person on the list. The catch here is that you often do not know what the other party's BATNA really is – and indeed it is often in their interests to keep this as hidden as possible.

One useful variant of BATNA is the notion of an EATNA – an 'estimated alternative to a negotiated agreement' – which applies in the (frequent) cases where one or both parties are not entirely clear what their best alternative looks like. For example, in any courtroom battle both sides are likely to believe they can prevail, otherwise they wouldn't be there.

Should you reveal your BATNA to the other party? If it is strong, there are benefits to disclosing it, as it forces the other party to confront the reality of the negotiation. If your BATNA is weak, it is generally best not to disclose it – sometimes it is possible to 'bluff' your way to a better outcome than you would otherwise achieve.


Top practical tip

The key idea of thinking through your BATNA is that it doesn't lock you into one course of action. People often enter a negotiation with predetermined positions (such as a particular price) underpinned by certain interests (fears, hopes, expectations), and this often means that the negotiation breaks down, with both sides losing out. Your challenge as a negotiator is to look beyond the positions initially communicated and to uncover and explore the interests that gave rise to these positions. This often involves some creativity, and some openness on both sides to broaden the dimensions on which they believe they are negotiating.

Top pitfalls

While it is useful to think through the dynamics of a negotiation, and to try to anticipate the other party's BATNA, the biggest pitfall is not knowing your own BATNA. For example, in a corporate situation you will often find yourself exposed to internal pressure from colleagues to make an agreement happen, which may mean committing to terms that are unfavourable. By being explicit about your BATNA in advance, and communicating this to colleagues, you can avoid finding yourself in an awkward position.

A related pitfall is to be overconfident that you know the BATNA of the other party. If an employee comes to you asking for a raise because he has an offer from a competing firm, do you really know how serious he is about taking that offer? Obviously you will have some intuitive sense of his preferences, and this will guide your negotiation, but if you tell yourself he is not serious about leaving, you run the risk of low-balling your offer to him and losing a good person.

Further reading

Burgess, H. and Burgess, G. (1997) The Encyclopaedia of Conflict Resolution. Santa Barbara, CA: ABC-CLIO.
Fisher, R. and Ury, W. (1981) Getting to Yes: Negotiating agreement without giving in. London: Random House.


9

Schein’s model of organisational culture

Every organisation has a culture – an unwritten set of norms and expectations about how people should behave. Cultures are hard to define and very hard to change, but they matter a lot because they shape behaviour and they influence who chooses to join and leave. Many attempts have been made to make sense of culture over the years. Ed Schein, a Professor at MIT, has put forward one of the most influential models.

When to use it

●● To diagnose the culture where you work.
●● To understand what sorts of behaviour are acceptable, and why.
●● To help you make changes.

Origins

There is a long history of thinking about organisational culture, sometimes using the related concept of 'organisation climate'. Ed Schein was a Professor at MIT during the 1970s who worked alongside other famous behavioural researchers such as Kurt Lewin and Douglas McGregor. His studies focused on how individuals in organisations made sense of their own roles and motivations, and how organisations enabled or frustrated individuals in pursuit of their objectives. His 1985 book, Organizational Culture and Leadership, provided a defining statement of what culture is and how it can be influenced by the leaders of the organisation.

Many subsequent studies have built on his work, often with the intention of defining particular types of culture. Important examples are Daniel Denison's Corporate Culture and Organizational Effectiveness and Rob Goffee and Gareth Jones' The Character of the Corporation.

What it is

According to Schein, culture is a subjective interpretation – a set of assumptions – that people make about the group or organisation in which they work. It operates at three levels:

●● artefacts – what you experience with your senses;
●● espoused beliefs and values;
●● basic underlying assumptions.

Artefacts are all tangible, overt or verbally identifiable elements in an organisation. For example, if you go to Google's campus, you see people wearing casual clothing and the buildings are quite funky, with unusual furniture, leisure equipment, games rooms and so on. You don't even have to speak to anyone to pick up on some aspects of Google's culture.

Espoused beliefs and values are what people in the organisation say their norms of behaviour are. It is how employees represent the organisation both to themselves and to others. Sometimes this is done through written vision and values statements; sometimes it is through the more informal ways that people talk.

Shared basic assumptions are the taken-for-granted aspects of how individuals behave in an organisational context. These are usually subconscious, and so well-integrated into the patterns of behaviour in a workplace that they aren't even acknowledged. At Google, for example, it is assumed that if you hire the cleverest people and give them lots of freedom, then they will come up with ground-breaking innovations.

These three layers can be understood through the iceberg metaphor – artefacts and espoused beliefs are visible, but the shared basic assumptions are under the surface and are actually the most important part.

How to use it

Schein's model is primarily descriptive – in other words, it is designed to help you diagnose the culture of the organisation you work for, or are offering advice to. So rather than just focus on the visible artefacts, Schein's work encourages you to get at the underlying beliefs and assumptions, so that you understand what is really going on. In his 1985 book, Schein recommends the following steps in deciphering and assessing cultures:

●● visit and observe;
●● identify artefacts and processes that puzzle you;
●● ask insiders why things are done that way;
●● identify espoused values and ask how they are implemented;
●● look for inconsistencies and ask about them;
●● figure out from the above the deeper assumptions that determine the observed behaviour.

One caveat to this approach is that culture has a subjective quality. In other words, any conclusions you reach about the deeper assumptions that make the organisation tick may not be things others agree with.

Top practical tip

Schein's model is valuable because it forces people to get to grips with the unwritten assumptions about behaviour in their organisations. Rather than focus on the visible artefacts, you should therefore always ask why the organisation works the way it does, and seek to get at some of these underlying assumptions.

Top pitfall

Make sure not to confuse organisational beliefs and values with individual beliefs and values. Psychologists will tell you that our beliefs and values as individuals are set at a very young age, certainly before we are adults, and that they are almost impossible to change. So when we talk about organisations having certain 'values', we are not saying that all the individuals in the organisation fully conform to those values. Hopefully, individuals will, for the most part, choose to work for organisations whose values mirror their own, but this will not always be the case.

One consequence of this point is that when we talk about culture change in organisations, we are typically seeking to change the way people behave in the first instance, and to encourage some of their underlying values to become more salient. We are not asking them to change their underlying values – that is more or less impossible.

Further reading

Denison, D. (1990) Corporate Culture and Organizational Effectiveness. Hoboken, NJ: John Wiley & Sons.
Goffee, R. and Jones, G. (2003) The Character of the Corporation. London: Profile Books.
Schein, E.H. (1985) Organizational Culture and Leadership. San Francisco, CA: Jossey-Bass.


10

360-degree assessment

360-degree feedback is a management tool that gives employees the opportunity to receive feedback from multiple sources. It is also known as a 360-degree review. It is called 360-degree feedback because the feedback comes from all around (subordinates, peers, supervisors, customers, etc.).

When to use it

●● To give you feedback on your performance and your management style from those around you, to help you create an effective personal development plan.
●● To help the firm assess your performance, and to make pay and promotion decisions.
●● To monitor the standard of leadership or the culture of the organisation as a whole.

Origins

The idea of getting feedback from different sources to appraise performance is as old as civilisation itself. For example, an imperial rating system was used during the Wei Dynasty, in third-century China, to evaluate the performance of people at the imperial court. More recently, the German military used multiple-source feedback during World War II, with soldiers evaluated by peers, supervisors and subordinates to provide insight and recommendations on how to improve performance.

During the 1950s, behavioural theorists gave a lot of attention to employee motivation and job enrichment, with a view to making work more intrinsically appealing. It was in this context that 360-degree assessment, as we know it today, was invented.

The individual often credited with its invention is organisational psychologist Clark Wilson, through his work with the World Bank. The original tool was called the 'Survey of Management Practices' (SMP), and was used by Wilson in his teaching at the University of Bridgeport in Connecticut, USA. The first company to adopt Wilson's SMP was the DuPont Company in 1973; it was then picked up by others, including Dow Chemical and Pitney Bowes. By the 1990s, 360-degree feedback was in widespread use, with literally dozens of survey instruments in existence. Human resources consultants began to pick up on the concept as well, which further contributed to its dissemination.

What it is

Under the traditional annual appraisal system, an employee's review was conducted once a year by their immediate boss. But if that boss didn't have a good understanding of their work, or if the boss lacked emotional intelligence, the review was often a complete waste of time. 360-degree feedback (also known as multi-rater feedback or multi-source feedback) is the antidote to these perfunctory and biased reviews.

360-degree feedback is based on the views of an employee's immediate work circle. Typically, it includes direct feedback from an employee's subordinates, peers and supervisors, as well as a self-evaluation. In some cases it includes feedback from external sources, such as customers and suppliers or other interested stakeholders.

How to use it

360-degree feedback allows individuals to understand how others view their effectiveness as an employee, co-worker or staff member. There are four typical components:

●● self-appraisal;
●● superior's appraisal;
●● subordinates' appraisal;
●● peers' appraisal.

Self-appraisal is where you evaluate your own achievements, and your strengths and weaknesses. The superior's appraisal is the traditional part of the process, where he or she offers a verdict on how well you have delivered on your objectives over the last year or so. Appraisal by subordinates is the key part of the 360-degree feedback process, in that it allows the people working for you to indicate how well you have managed them – for example, how clearly you have communicated with them, how well you have delegated and how much coaching support you have provided. Finally, appraisal by peers (also known as internal customers) can help you figure out how good you are at working collaboratively across the firm, for example by being responsive to their requests and helping out on projects that aren't your direct responsibility.

In terms of the methodology for implementing a 360-degree feedback tool, the process has the following steps:

●● The individual who is being reviewed identifies all the key individuals (superior, subordinates, peers) whose inputs should be solicited.
●● A survey is sent to all these individuals, clarifying that the data they provide will be anonymised. The survey typically includes a series of closed-end questions (such as, 'How effective is this individual at communicating with his/her team? 1 = very poor, 3 = average, 5 = very good'), and also some open-ended questions (such as, 'Please explain why you gave this rating').
●● The results of the surveys are pulled together, and a report is prepared for the individual, giving the average ratings and the anonymised written answers (a minimal sketch of this aggregation step follows the list).
●● The individual is given the results and discusses them with a 'coach', who has expertise in interpreting these sorts of surveys and who can suggest ways of developing any weaker areas.
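Here is a minimal Python sketch of that aggregation step, assuming closed-end answers on the 1–5 scale described above. The questions and scores are invented for illustration; a real instrument would have many more items and raters.

from statistics import mean

# Anonymised ratings on the 1-5 scale, keyed by survey question.
ratings = {
    "Communicates clearly with the team": [4, 3, 5, 4, 2],
    "Delegates effectively": [3, 3, 4, 2, 3],
    "Provides coaching support": [2, 3, 2, 3, 2],
}

# Average per question is what goes into the reviewee's report.
for question, scores in ratings.items():
    print(f"{question}: average {mean(scores):.1f} from {len(scores)} raters")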

Most large firms today use some sort of 360-degree assessment system. It is viewed as a powerful developmental tool because, when conducted at regular intervals (say yearly), it tracks how an individual's skills and management capabilities are improving.

One issue that is frequently debated is whether 360-degree assessment should be used solely for personal development, or whether it should also be used as an input into pay and promotion decisions. While the information it provides is important, the risk of using it in pay and promotion decisions is that people start to 'game' the system – for example, by asking their employees to give them high ratings. This may not work, of course, but whatever the outcome, the result is likely to be a tainted set of results. This is why most people argue that 360-degree assessment should be used primarily as a developmental tool – that is, purely to help people become more effective in their work.

Top practical tip

While 360-degree feedback is certainly a better way of providing feedback than the traditional top-down approach, it requires thoughtful implementation. So if your firm has never used it, you should get help from a human resource consultancy in putting the methodology in place. In particular, care is needed in soliciting feedback from subordinates and peers, making sure it is all anonymous, and pulling the data together in a meaningful way.


Top pitfall

As a manager, you may be shocked when you first receive 360-degree feedback, because the ratings you get from your subordinates will often show you aren't as good at managing as you thought you were. The biggest mistake you can make is to go on a 'witch hunt' to find out who gave you the bad ratings – not only is this against the rules, it also destroys trust. The second biggest mistake is to ignore the results and assume they are wrong. The results are telling you the perceptions of your employees, and even if you don't agree with their views, their perceptions are their reality, and consequently have a significant impact on how you interact together – for better or for worse.

So, if you receive 360-degree feedback, be sure to take it seriously, and get advice from a colleague or a coach on how to adapt your way of working to improve your ratings next time.

Further reading

Edwards, M. and Ewen, A.J. (1996) 360-Degree Feedback. New York: AMACOM.
Handy, L., Devine, M. and Heath, L. (1996) 360-Degree Feedback: Unguided missile or powerful weapon? London: Ashridge Management Research Group.
Lepsinger, R. and Lucia, A.D. (1997) The Art and Science of 360-Degree Feedback. San Francisco, CA: Jossey-Bass.


[ PART TWO ] Marketing


A useful definition of marketing is 'seeing the world through the eyes of the customer'. Many companies suffer from an internal orientation, focusing for example on the characteristics of the products they sell, rather than the actual needs of their customers. Marketers try to avoid this internal focus by taking the perspective of the customer in everything they do. The name given to this approach is 'market orientation'.

The field of marketing has been in existence for around 100 years, and several of the original ideas have stood the test of time. One is the so-called '4Ps of marketing', which define the key elements – product, place, price, promotion – that need to be taken into account when developing a marketing strategy. Another is the 'product life cycle' – the notion that every product goes through a cycle from launch to growth and maturity and then to decline. By understanding this life cycle, you can make better decisions about how to position and price your products in a competitive market.

Marketing has also moved forward, and there have been many recent advances in thinking, often driven by the emergence of more sophisticated ways of capturing and analysing information about customer behaviour. Notions of 'segmentation and personalised marketing' have developed a lot over the last 20 years. The emphasis has shifted from 'mass-market' advertising, through targeting of specific segments of the market, to a focus on personalised marketing, where each individual can be targeted as a function of his or her internet usage. Pricing strategies likewise have shifted over the years, from a standard price for all, through differentiation according to customer segments, and now towards 'dynamic pricing', where prices fluctuate according to changes in supply and demand, often in real time. The channels through which products are sold have also evolved, and today the emphasis, especially for digital products, has shifted to 'multichannel marketing', which is a way of creating a coherent and simultaneous offering across multiple media.

Understanding of customers has also become more sophisticated. Marketers no longer assume they know what customers want. Instead, they use 'ethnographic market research' as a way of getting inside the minds and behaviours of customers, so their needs can be better served. Relatedly, and thanks in large part to better-quality information, a lot more is known about the importance of customer loyalty. The concept of 'customer lifetime value' is a very useful way of quantifying the value of a long-term customer, and it has direct implications for the development of loyalty schemes. The 'net promoter score' is a specific measure of how much customers really value your product or service. It has become highly popular as a way of tracking customer satisfaction.


11

Customer lifetime value

One useful way of thinking about your customers is as an investment: you invest in them early on, to attract their custom and to generate loyalty, then over time you recoup that investment with a steady flow of sales. Customer lifetime value is a way of modelling this concept in a quantitative way. It has important implications for how you allocate resources at different stages in your relationship with customers.

When to use it

●● To estimate the potential future value of your customers.
●● To decide how much to invest in customer loyalty programmes and customer relationship management systems.
●● To make corrections if you are losing customers.

Origins

The notion of customer lifetime value was developed in the late 1980s by combining insights into the drivers of customer loyalty with net present value calculations from the world of finance. It was first discussed in the book Database Marketing by Shaw and Stone (1988). The concept was then developed in a number of academic papers, and also by consultancies such as Edge Consulting and BrandScience.

What it is

Customer lifetime value has a very specific technical meaning, and there is a methodology for calculating it. It can also be used in a more qualitative way to provide insights into how firms allocate their marketing investments.


Considering the technical meaning first, customer lifetime value (CLV) is the net present value of the future profits to be received from a given number of newly acquired or existing customers during a given period of years. As shown below, it is possible to come up with a specific estimate for CLV, and this becomes very useful when you are thinking, for example, about creating a loyalty programme or investing in a customer relationship management database.

More broadly speaking, CLV is an important concept because it encourages firms to shift their focus from quarterly profits to the long-term health of their customer relationships. For example, it is usually the case that retaining a customer is more cost-effective than gaining a new customer. This insight has led many firms to invest in loyalty programmes, and to offer deals to their established customers.

How to use it

CLV is calculated by making some assumptions about the customer's expected retention and spending rate, plus some other factors that are easy to determine. Consider the following example.

The Early Years Company

Per customer                          Year 1   Year 2   Year 3   Year 4   Year 5
Sales                                 $100     $100     $100     $100     $100
Acquisition cost                      $55
Contribution margin                   $50      $50      $50      $50      $50
Profit                                $(5)     $50      $50      $50      $50
Discount rate, at 10%                 1.00     0.91     0.83     0.75     0.68
NPV, at a 10% discount rate           $(5)     $45      $41      $38      $34
Retention rate                        100%     90%      80%      70%      60%
NPV, accounting for retention rate    $(5)     $41      $33      $26      $20

Customer lifetime value: $116

Here we estimate the CLV of an average customer by following her purchases of childcare products over a period of 5 years, after which we assume her child has grown out of the life stage for which the products are suited. She buys $100-worth of goods a year and the products are highly profitable, delivering the firm a contribution margin of $50. Contribution margin is what's left after variable expenses such as cost of goods sold and advertising are deducted. Given an acquisition cost for this customer of $55, we see that the company loses money at first, but gains incremental contribution margin of $50 for each of the remaining four years. To get to CLV, we use a discount rate of 10 per cent to get to the net present value, or NPV, of each year's profit or loss. Next, given that there is a chance she will switch to another brand or stop buying the products, we assume a retention rate for each year. Thus, in the fifth year there is a 60 per cent chance that she will continue to buy the products. Adding up the NPV figures after the retention rate is accounted for gives us the CLV for the average customer.

This initial calculation can be extended by considering different customer segments. For example, you might identify three different segments based on age or spending habits, and you would then adjust the numbers in the table according to your expectations about how much they spend, how loyal they are, and so on. You might add another layer of analysis by considering whether there are any marketing programmes that can be put in place to increase the retention rate per year. Different customer segments will have varying CLVs, and conducting this analysis will uncover high-value and low-value customers. On that basis, you might then decide to tailor your marketing efforts accordingly. For example, devising plans to retain a small group of high-value customers may be the best use of limited funds.
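For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the table above. The margin, acquisition cost, discount rate and retention rates are the figures assumed in the example; everything else about the script is illustrative.

margin = 50.0            # annual contribution margin per customer ($)
acquisition_cost = 55.0  # one-off cost of acquiring the customer ($)
discount_rate = 0.10     # converts future profits into present value
retention = [1.00, 0.90, 0.80, 0.70, 0.60]  # chance the customer is still buying

clv = 0.0
for year, survival in enumerate(retention, start=1):
    profit = margin - (acquisition_cost if year == 1 else 0.0)  # $(5) in year 1
    discount_factor = 1 / (1 + discount_rate) ** (year - 1)
    expected_npv = profit * discount_factor * survival
    print(f"Year {year}: retention-adjusted NPV = ${expected_npv:,.0f}")
    clv += expected_npv

print(f"Customer lifetime value: ${clv:,.0f}")  # matches the $116 in the table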

Top practical tip

Because CLV involves detailed financial analysis, it is tempting to treat the numbers with a great deal of respect. However, it is important to remember that there are some very big assumptions built into this calculation, so before doing anything with your analysis you need to revisit these assumptions and make sure they make sense.

Having done that, you can then use the methodology to make some judgments about marketing expenditure. For example, does it make sense to put millions of dollars into a customer loyalty card? What impact do you expect this to have on the retention rate and the spending rate? This analysis will show that some proposed investments actually have benefits that are lower than their costs.

Top pitfalls

One common pitfall in the calculation is to compute total revenue or gross margin without taking into account the full array of costs involved. More broadly, some people make the mistake of using CLV in a highly technical way, and forget the real purpose and meaning behind it.


Further reading

Berger, P.D. and Nasr, N.I. (1998) 'Customer lifetime value: Marketing models and applications', Journal of Interactive Marketing, 12(1): 17–30.
Shaw, R. and Stone, M. (1988) Database Marketing. Hoboken, NJ: John Wiley & Sons.
Venkatesan, R. and Kumar, V. (2004) 'A customer lifetime value framework for customer selection and resource allocation strategy', Journal of Marketing, 68(4): 106–125.


12

Ethnographic market research

Ethnography is the study of human behaviour in its most natural and typical context. Ethnographic methods have long been used in anthropology by researchers who want to understand other cultures. Marketers are now adapting their techniques to help them develop better products and services for consumers.

When to use it

●● To develop a deep understanding of how consumers use your products or services, so you can improve them.
●● To identify unmet consumer needs, so that you can innovate more effectively.

Origins

The concept of ethnography can be traced back to Gerhard Friedrich Müller, a professor of history and geography, who delineated it as a methodology while participating in an expedition to Kamchatka (1733–43). In the field of management, ethnography had been recognised as a way of conducting research on organisations for many years, but it was only in the early 1990s that those techniques started to be applied to the study of consumers.

Marketing is all about developing an understanding of why consumers make the decisions they do. Marketers had traditionally relied mostly on surveys and focus groups to gather this information, on the basis that, when probed and asked in the right way, people will give you useful information. Unfortunately, the reality is more complex: sometimes people actually cannot explain why they behave the way they do, nor do they always know what they want. So, to overcome this problem, marketers realised that they needed to get much closer to their target customers and to engage in what is called participant observation.

These new techniques started to reveal profound new insights into consumer behaviour. During the 1990s they were picked up by academics and marketing consultancies, and they are now a standard part of the market research toolkit.

What it is

Ethnographers are trained to observe and make sense of what goes on around them, the goal being to provide a detailed, in-depth description of everyday life and practice. By deliberately immersing themselves in the setting they are studying, they develop an 'insider's point of view', which is often starkly different to the standard view that an outsider might develop.

From the perspective of market research, ethnography therefore means observing customers in their own environment. For example, if you work for a breakfast cereal company, having breakfast with a family, watching how the children choose and mix their cereals and observing their parents' reactions gives you important clues about how you might position that product in the future. Ethnographic market research is useful for exploring how existing products and services are used, and also for spotting opportunities for breakthrough innovations.

How to use it

For ethnographic techniques to work properly, the market researcher (that is, the ethnographer) should have the same role as the consumer. One way to achieve this is to recruit consumers to become ethnographers – you recruit them to take part in the study, and you train them to record their observations about a purchase decision, their thoughts and feelings at the time, and so on. This is not easy to do for an untrained person, but the benefit is that the researcher is an actual consumer, so his or her experience is exactly what you want to study.

The other option is for trained market researchers to put themselves in the shoes of the consumer. For example, by visiting customers in their homes, researchers can gain powerful insights into real product requirements. One frequently quoted example is Panasonic, which used ethnographic market research to design an electric shaver for women. By recording how women used such products in their home environments, Panasonic developed an ergonomic design with the right colours and style to fit the US market. The product, named the Lady Shaver, was a great success.

A comprehensive ethnographic study can take many months, because the researchers have to spend time acculturating themselves into the setting they are studying before winning the trust of those around them. However, many companies use a simpler version of the technique, which has many of the same benefits and can be conducted in a matter of weeks. The key elements of this are as follows:

●● Define the consumer groups you are interested in, and the aspect of their life as consumers that you want to understand better. For example, it might be consumers choosing which products to buy at a supermarket, or a family waiting at an airport, or a couple trying to decide what sort of financial savings product to buy.
●● Identify the appropriate times and locations when you can observe them. For the supermarket and airport examples, this is obvious. But gaining access to a couple's conversations about financial savings products is very tricky. You may want to take part, for example, in their dinnertime conversations.
●● Collect multiple types of qualitative data – take notes, record answers to contextual interviews, use videos and diagrams where appropriate.
●● Conduct systematic analysis, looking for patterns of behaviour and contradictions (where a customer says one thing and does another), and identifying the cultural characteristics of the customer segment being investigated.

Top practical tip

Ethnographic market research is difficult to do well, and it can also be very frustrating because quite a lot of the time spent on it is 'wasted'. The really useful insights can be gained in a matter of instants, but you don't know in advance when those insights will arise.

If you are conducting an ethnographic study, the first rule is therefore patience: make sure to spend the time getting to know the participants in the study, rather than rushing to conclusions. Secondly, be respectful of their interests. Sometimes they will be happy to have you around; sometimes they see you as an encumbrance. So prepare well, make sure they are comfortable with you, and find ways to get them talking and acting naturally. That is when the useful insights start to emerge.

Top pitfall

The most difficult thing about ethnographic market research is to suspend your judgments and preconceived ideas. With any existing product or service, we have an existing notion of how it is used or why people buy it. This may be mostly correct, but the whole point of this exercise is to look for evidence that does not support those existing beliefs. You need to become adept at non-directive interviewing, so you don't accidentally 'guide' participants to the results you were expecting, and you need to deliberately seek out evidence or insights that contradict your expectations.


Further reading

An overview of ethnographic market research can be found at lexicon.ft.com
Arnould, E.J. and Wallendorf, M. (1994) 'Market-oriented ethnography: Interpretation building and marketing strategy formulation', Journal of Marketing Research, 31(4): 484–504.
Zaltman, G. (1997) 'Rethinking market research: Putting people back in', Journal of Marketing Research, 34(4): 424–437.


13

Market orientation

Market orientation is an approach to business that starts from the perspective of the customer and works back from there. Some studies have emphasised the specific capabilities involved in being market orientated (such as capturing and using market information), while others have focused on the positive mind-set of the firm's employees towards their customers. There is strong evidence that market-orientated firms are more successful in the long term than those that are not.

When to use it

●● To understand your firm's capabilities for addressing or shaping market needs.
●● To assess the internal culture of your firm.
●● To identify opportunities for improvement in your responsiveness to market needs.

Origins

The underlying notion of market orientation is very old. One of the traditional definitions of marketing is 'seeing the world through the eyes of the customer'. This may seem obvious to say, but the reality is that many firms fall into the trap of being product-centric: they focus narrowly on the product itself and neglect the actual needs or concerns of their prospective customers. The concept of market orientation is about helping companies avoid the product-centric trap. It provides a set of tools, techniques and frameworks to help companies understand the wants and needs of their prospective customers, and to figure out how to address those needs more effectively.

There was a resurgence of interest in the concept of market orientation in the early 1990s, through two separate academic studies. Kohli and Jaworski (1990) defined market orientation as 'the organization-wide generation of market intelligence, dissemination of the intelligence across departments and organization-wide responsiveness to it'. Narver and Slater (1990), on the other hand, defined it as 'the organization culture that most effectively and efficiently creates the necessary behaviours for the creation of superior value for buyers and, thus, continuous superior performance for the business'. In other words, Narver and Slater focused on the underlying culture of the firm, while Kohli and Jaworski emphasised specific capabilities.

From an academic perspective, this difference of opinion spurred a great deal of debate and a reinvigoration of the concept. From a practical point of view, the distinction is not so important, because both elements are evidently important: a market-orientated culture will develop market-orientated capabilities, which in turn will support its culture.

What it is
Market orientation is a way of looking at the world by which a firm focuses on the needs and desires of customers, rather than on the products and services it sells. Rather than focusing on establishing selling points for existing products, market orientation is about tailoring offerings to meet the needs of customers. There are two complementary views on how to develop a market orientation. One view focuses on the specific capabilities you need to put in place. These are:
● generation of market intelligence (for example through market research and customer surveys);
● dissemination of the intelligence across departments;
● organisation-wide responsiveness to it.

The other view focuses on the culture that a firm needs to develop to become market orientated. Such a culture has three components:
● customer orientation;
● competitor orientation;
● inter-functional coordination.

Both these perspectives emphasise the point that marketing isn’t just about what the marketers do; it is about the entire firm being aligned around the needs of the market.

How to use it
You can use the concept of market orientation in two ways. First, it can be used as a diagnostic tool for understanding how market-orientated the various parts of your firm are. The following statements (which should be rated on a scale from 1 to 10) are taken from a study by Deshpande and Farley (1998) and provide a succinct way of measuring market orientation:
● Our business objectives are driven primarily by customer satisfaction.
● We constantly monitor our level of commitment and orientation to serving customer needs.
● We freely communicate information about our successful and unsuccessful customer experiences across all business functions.
● Our strategy for competitive advantage is based on our understanding of customers' needs.
● We measure customer satisfaction systematically and frequently.
● We have routine or regular measures of customer service.
● We are more customer-focused than our competitors.
● I believe this business exists primarily to serve customers.
● We poll end-users at least once a year to assess the quality of our products and services.
● Data on customer satisfaction are disseminated at all levels in this business unit on a regular basis.

The other way of using the concept of market orientation is to guide you in the development of specific techniques and capabilities. For example, if your firm scores badly in this diagnosis, you might conclude that you need better information on customer intelligence, or you might decide that the problem lies in internal coordination. This then helps you to focus on the weak link that needs fixing.
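To make the diagnostic concrete, here is a minimal Python sketch of how the ratings might be scored. The ten statements and the 1–10 scale come from Deshpande and Farley (1998) as quoted above; averaging the ratings into a single score per unit, and the sample data, are assumed conventions for illustration only.

```python
# A minimal sketch of scoring the ten-statement diagnostic described above.
# Averaging the 1-10 ratings into a single score per unit is an assumed
# convention for illustration; the original study does not prescribe it.

def market_orientation_score(ratings: list[int]) -> float:
    """Average one respondent's ratings across the ten statements."""
    if len(ratings) != 10:
        raise ValueError("Expected a rating for each of the ten statements")
    if not all(1 <= r <= 10 for r in ratings):
        raise ValueError("Ratings must be on the 1-10 scale")
    return sum(ratings) / len(ratings)

# Hypothetical responses from two parts of the firm.
sales_team = [9, 8, 7, 9, 8, 8, 7, 9, 6, 7]
back_office = [5, 4, 3, 6, 4, 5, 4, 6, 3, 4]
print(market_orientation_score(sales_team))   # 7.8
print(market_orientation_score(back_office))  # 4.4
```

Comparing average scores across units in this way is one simple means of spotting the weak link mentioned above, such as a back-office function that is far less market-orientated than the sales team.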

Top practical tip
Market orientation is an attractive concept, and it is easy to get people to agree that their firm should be more market orientated. However, it is much more difficult to turn that basic idea into something practical. The real value of the market-orientation concept lies in the specific techniques and capabilities you put in place to get people across the firm taking inputs from the market-place seriously. This is particularly important the further away employees are from customers, for example in a manufacturing operation or in a back-office support function.

Top pitfall
An important issue you often face when talking about market orientation is whether the firm should always listen to the customer. Of course, it is mostly a good idea to listen and respond to the customer. But you run two risks in doing this: one is that you focus on what customers say they want, rather than what they actually want; the other is that you focus too much on your existing customers and you ignore those who are not buying your product. This is why many people prefer the term 'market orientated' to 'customer orientated' – because it ensures you don't get stuck in a narrow view of what your existing customers say they want. But you cannot be market orientated without listening.

Further reading
Deshpande, R. and Farley, J.U. (1998) 'Measuring market orientation: Generalization and synthesis', Journal of Market-Focused Management, 2(3): 213–232.
Kohli, A.K. and Jaworski, B.J. (1990) 'Market orientation: The construct, research propositions, and managerial implications', The Journal of Marketing, 54(2): 1–18.
Narver, J.C. and Slater, S.F. (1990) 'The effect of a market orientation on business profitability', Journal of Marketing, 54(4): 20–34.


Chapter 14: Multichannel marketing

In business, channels are the different routes through which a product reaches an end-consumer. For example, you can buy a computer from a specialist retailer, a generalist retailer (like a supermarket), online, or over the phone. Multichannel marketing refers to how a company uses multiple channels at the same time, to reach the widest possible customer base or to be as profitable as possible.

When to use it
● To evaluate different routes to market, and to decide how to combine them as effectively as possible.
● To help reinforce your chosen segmentation strategy.
● To avoid 'channel conflict', where two or more channels are offering the same product at different prices.

Origins
The notion that firms sell their products through different channels has been around for more than a hundred years. For example, the US bricks-and-mortar retailer, Sears, launched its first catalogue for direct selling to homes in 1894. As business supply chains became more professionally managed, the issue of how to manage multiple channels to market became increasingly important. For consumer products, the big challenge was how to sell directly to customers (for example, by phone or catalogue) without upsetting retailers or brokers. For industrial products, a range of different middlemen was often used (such as brokers, importers, licensees and aggregators) and there was often a risk of conflict between them. The rise of the internet in the 1990s made channel management more complex than before, and led to the concept of multichannel marketing as a way of making the most of all the different channels to market, especially for consumer products.

What it is
Multichannel marketing is the means by which a company interacts with customers through different channels – websites, retail stores, mail-order catalogues, direct mail, email, mobile, etc. Implicit in this definition is the notion that these are two-way channels, with customers both receiving and providing information through them.

Taking a step back, channels are the various routes through which a product reaches the end-consumer. If the product is a physical one, it can be sold directly (such as Dell selling you a computer), or indirectly (such as HP selling you a computer via a retailer). If the product is a digital one, the number of channels is much larger – think, for example, about how many devices you can access the BBC News channel on.

Historically, different segments of consumers tended to use different channels, so most firms would worry particularly about how to sell their products through two or more channels without upsetting one or the other. For example, HP would have liked to sell its personal computers 'direct' to consumers using the Dell model in the 1990s, but it did not do so for fear of alienating its retailers. Increasingly (and especially for digital products) consumers are using multiple channels. This is where 'multichannel' marketing comes in: the challenge firms face is how to give consumers a choice about which channels to use and when. For example, if you buy a movie from the iTunes Store, you want to be able to watch it on any number of different devices, not just the one you bought it on.

How to use it
There are some important guidelines to bear in mind when developing a multichannel marketing strategy:
● Consistency of message across channels: Customers increasingly interact with firms through a variety of channels prior to and after purchasing a product or service. When developing a marketing campaign, you have to consider all the different ways that customers can come into contact with your firm, and ensure your messages are consistent across these channels. Many firms have developed internal processes and technologies to support this approach.
● Consistency of experience across channels: You do not want a great sales experience to be destroyed by poor after-sales care. So, having established the type of customer experience you are seeking to provide, you need to ensure this is rolled out across the different channels. This includes small things such as how you address customers (for example, first name versus surname with title), as well as bigger things such as how much discretion your service representatives have to resolve problems.
● Pool customer knowledge in one place: Maintaining a single view of customer behaviour allows you to respond to customers in the most appropriate way. This might be done using a shared database of customer contact information and updating it in real time (a minimal sketch follows this list). Alternatively, it can be done through close collaboration across departments or with key account managers.
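The sketch below illustrates the 'single view of the customer' idea from the last guideline: one record that pools interactions from every channel. All field names, channel names and sample data are hypothetical, chosen only to show the shape such a data structure might take.

```python
# A minimal sketch of a single pooled customer view. All field and
# channel names here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Interaction:
    channel: str        # e.g. "web", "store", "email", "phone"
    timestamp: datetime
    note: str

@dataclass
class CustomerRecord:
    customer_id: str
    interactions: list[Interaction] = field(default_factory=list)

    def log(self, channel: str, note: str) -> None:
        """Record a touchpoint from any channel against the same customer."""
        self.interactions.append(Interaction(channel, datetime.now(), note))

    def history(self) -> list[str]:
        """One chronological view across every channel."""
        return [f"{i.timestamp:%Y-%m-%d} [{i.channel}] {i.note}"
                for i in sorted(self.interactions, key=lambda i: i.timestamp)]

customer = CustomerRecord("C-1001")
customer.log("web", "Browsed laptops")
customer.log("store", "Asked about warranty")
customer.log("email", "Opened promotion")
print("\n".join(customer.history()))
```

The design point is simply that every channel writes to the same record, keyed by customer, so that anyone responding to the customer sees the whole relationship rather than one channel's slice of it.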

Top practical tip
If you are selling through channels, rather than direct to your customer, you will need to develop multichannel marketing strategies for two audiences: the end-customer and the channel agents. Both are equally important and they need to be consistent, as your channel agents will be exposed to both.

Top pitfall
Being everywhere (such as bricks-and-mortar, online, in newspapers, direct email, etc.) is not the same as having multichannel marketing. Use your knowledge of your customers to maximise your use of the channels they prefer.

Further reading
Bowersox, D.J. and Bixby Cooper, M. (1992) Strategic Marketing Channel Management. New York: McGraw-Hill.
Rangaswamy, A. and Van Bruggen, G.H. (2005) 'Opportunities and challenges in multichannel marketing: Introduction to the special issue', Journal of Interactive Marketing, 19(2): 5–11.
Stern, L.W. and El-Ansary, A. (1992) Marketing Channels, 4th edition. Englewood Cliffs, NJ: Prentice-Hall.


Chapter 15: Net promoter score

The net promoter score is a way of measuring how likely customers are to recommend a company’s products or services to others. Because it is so easy to use, it has become very popular as a proxy measure of customer satisfaction and customer loyalty.

When to use it
● To measure customer satisfaction in a clear and reliable way.
● To track changes in customer satisfaction over time.
● To evaluate and reward employees for delivering high levels of customer satisfaction.

Origins
It has been known for a long time that customer satisfaction is a useful measure of success, because satisfied customers tend to become repeat purchasers of your products, and they also help to increase the value of your brand by telling their friends and colleagues about their positive experiences. Fred Reichheld, a partner at the consultancy Bain & Company, took these important ideas about customer satisfaction and developed a formal methodology for measuring them accurately and consistently over time. Because of its focus on a single metric, the net promoter score, the methodology was easy to apply, and it quickly became very popular. Reichheld described the results of his research in a series of books, including The Loyalty Effect and The Ultimate Question. Today, the net promoter score is the most widely used single measure of customer satisfaction and customer loyalty.


What it is
The net promoter score (NPS) is based on the idea that a company's customers can be divided into three categories: promoters, passives and detractors. Promoters like your product so much that they talk it up with their friends. Detractors are the opposite – they say negative things about it. Passives are not too bothered either way. The net promoter score asks a single question: 'How likely is it that you would recommend company X to a friend or colleague?'. Customers respond on a 0–10 point rating scale and are categorised as follows:
● Those answering 9 or 10 out of 10 are promoters – they are loyal and enthusiastic, and they help to spread the word to others about the company and its products.
● Those answering 7 or 8 out of 10 are passives – they are satisfied and will keep buying the product or service in question, but they lack enthusiasm and may be vulnerable to competitive offerings.
● Those answering 6 or below are detractors – they are sufficiently unhappy that they may share their negative views about the company or its products with others.

The net promoter score is calculated by taking the percentage of respondents who are promoters, subtracting the percentage who are detractors and ignoring the data on passives. For example, a score of +10 per cent would tell you that you have ten per cent more promoters than detractors. This is a useful indicator of the overall satisfaction of your current body of customers. It is intuitively obvious why this calculation makes sense. Detractors can spread negative sentiment, while promoters can spread positive sentiment. In today’s world, where social media help to disseminate individuals’ views widely, the people who are most (or least) excited about your products have a disproportionate influence over others.

How to use it
Many firms use the net promoter score in a very rigorous way, often getting a consultancy company in to sample customers in a non-biased way, ask them the question and then track the results over time. This methodology can be applied at the level of the firm as a whole, or it can be done at the level of an individual operating unit. For example, a hotel chain might sample its customers and then break down the results according to which particular hotel they stayed at most recently. By tracking the number over time, you also get a good sense of whether a firm is trending in the right direction or not. Studies by Fred Reichheld have shown that the NPS is a good indicator of overall customer loyalty, and indeed of how successful (over the long term) the company is.

You can also use the net promoter score in a more informal way. Because the question is so simple, it can easily be added to any existing survey that you might use with your customers. You can then perform a simple calculation to come up with a net score. For example, if there were 300 customers in the survey and, based on the answer to the question, there were 185 promoters, 105 passives and 10 detractors, the NPS would be (185/300) minus (10/300) = 58.3 per cent. An NPS that is positive is thought to be good and an NPS above 50 is generally viewed as excellent. For example, in 2013, Apple's iPad had an NPS of 69.

Firms use the net promoter score in various ways. First, it is a conceptually simple and easy-to-use metric that can be shared with front-line employees. It can also be used as a motivational tool, to encourage employees to improve and to provide the best customer experience possible.
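The calculation is simple enough to sketch in a few lines of Python. The categorisation thresholds follow the definitions above; the response list simply reproduces the worked example.

```python
# Computing the NPS from raw 0-10 responses, using the categorisation
# described above (9-10 promoter, 7-8 passive, 0-6 detractor).

def net_promoter_score(responses: list[int]) -> float:
    """Percentage of promoters minus percentage of detractors."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# The worked example from the text: 185 promoters, 105 passives, 10 detractors.
responses = [9] * 185 + [7] * 105 + [5] * 10
print(round(net_promoter_score(responses), 1))  # 58.3
```

Note that the passives affect the denominator but are otherwise ignored, which is exactly the behaviour described in the calculation above.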

Top practical tip
Net promoter programmes are not traditional customer satisfaction programmes, and simply measuring your NPS does not lead to success. To take the NPS methodology seriously you need to make sure it is embedded in your firm's systems, so that leaders talk about it frequently and you have the necessary follow-up systems in place to ensure that you know how to improve a negative score and maintain a positive one.

Top pitfalls
While it is tempting to use the net promoter score because it seems so straightforward, experience has shown that many firms have struggled to implement it effectively. There are several pitfalls. One is to use it in an ad hoc way. A single measure, let's say it is +5 per cent, doesn't tell you anything very useful. What matters is how this compares to your competitors, or how it compares to the previous quarter. So a systematic approach is key. A related pitfall is to end up with a biased sample. It is much easier to find happy customers than unhappy ones, so if your methodology for sampling customers is a bit flimsy, the results could be very biased, which does no one any favours.


A third pitfall is to turn the NPS from a monitoring tool into a performance-management tool. If you start evaluating your salespeople on their NPS metric, there is a risk that they will focus on that number to the detriment of actually doing the right thing. There have been cases noted where salespeople have specifically asked their customers to give them a 9 or 10 rating (out of 10), because their bonus depended on it. Like any performance measure, the NPS can be manipulated if it becomes too important.

Further reading
Reichheld, F. (1995) The Loyalty Effect: The hidden forces behind growth, profit and lasting value. Boston, MA: Harvard Business Press.
Reichheld, F. (2003) 'The one number you need to grow', Harvard Business Review, December: 46–54.
Reichheld, F. (2006) The Ultimate Question: Driving good profits and true growth. Boston, MA: Harvard Business Press.


Chapter 16: The 4Ps of marketing

When launching a new product or service, you have to think carefully about a range of factors that will determine its attractiveness to consumers. The most widely used way of defining this ‘marketing mix’ is to think in terms of the 4Ps: product, price, place and promotion. This is a useful checklist to ensure that you have thought through the key elements of your value proposition.

When to use it
● To help you decide how to take a new product or service offering to market.
● To evaluate your existing marketing strategy and identify any weaknesses.
● To compare your offerings to those of your competitors.

Origins
While its origins are much older, marketing really took shape as a professional discipline in the post-war years. An influential article by Neil Borden in 1964 put forward the 'concept of the marketing mix', which was about making sure all the different aspects of the product or service were targeted around the needs of a particular type of consumer. Subsequently, Jerome McCarthy divided Borden's concept into four categories – product, price, place and promotion – and the 4Ps were born. Marketing professor Philip Kotler is also associated with the 4Ps, as he was influential in popularising them during the 1970s and 1980s.

What it is
In a competitive market, consumers have lots of choice about what to spend money on, so you have to be very thoughtful about how to make your offering attractive to them. The 4Ps is simply a framework to help you think through the key elements of this marketing mix.
● Product/service: What features will consumers find attractive?
● Price: How much will they be prepared to pay?
● Place: Through what outlets should we sell it?
● Promotion: What forms of advertising should we use?

At its heart, the 4Ps is about market segmentation: it involves identifying the needs of a particular group of consumers, and then putting together an offering (defined in terms of the 4Ps) that targets those needs. The 4Ps are used primarily by consumer products companies, who are seeking to target particular segments of consumers. In industrial marketing (where one business is selling to another business), the 4Ps are less applicable, because there is typically a much greater emphasis on the direct relationship between the seller and buyer. Product and price are still important in industrial marketing, but place and promotion less so. An alternative to the 4Ps is Lauterborn’s 4Cs, which presents the elements of the marketing mix from the buyer’s perspective. The four elements are the customer needs and wants (which is the equivalent of product), cost (price), convenience (place) and communication (promotion).

How to use it
Start by identifying the product or service that you want to analyse. Then go through the four elements of the marketing mix, using the following questions to guide you.

Product/service
● What needs does the product or service satisfy? What features does it have to help it meet these needs?
● How does it look to customers? How will they experience it? What sort of brand image are you trying to create?
● How is it differentiated from the offerings of your competitors?

Price
● What is the value of the product/service to the consumer?
● How price-sensitive is the consumer?
● How are competitor offerings priced? Will you price at a premium or discount to competitors?
● What discounts or special deals should be offered to trade customers?

Place
● Where do buyers usually look for your product/service? Through what media or channels will you make it available?
● Do you need to control your own distribution, or even your own retail experience, for this product/service?
● How are your competitors' offerings distributed?

Promotion
● Through what media, and with what sort of message, will you seek to reach your target market?
● When is the best time to promote your product/service? Are there certain times of the day or week that are better? Is there seasonality in the market?
● Can you use free PR (public relations) to reach your target market?

It is useful to review your marketing mix regularly, as some elements will need to change as the product or service evolves, and as competitive offerings become available.

Top practical tip
First of all, it is important to make sure your answers to the questions above are based on sound knowledge and facts. Many marketing decisions are based on untested assumptions about what consumers need, and often new product launches are successful because they challenge those assumptions.

Second, a successful product launch is one where there is a high degree of consistency between the various different elements. Increasingly, and this is especially true in the online world, it is possible to target a very specific set of consumers, so choices about place and promotion are now far more critical than they might have been in the past.

Top pitfall
Taken too literally, the 4Ps can narrow your focus unduly. For example, if you are working on developing an online version of a magazine or newspaper, you may seek to replicate the 'product' and 'price' that worked in a paper-based world. However, that would be a mistake because consumers use digital content very differently, and the approach firms take when charging for online services is often very different to what worked with traditional products. Similarly, a focus on 'promotion' runs the risk of getting you into a campaign-based mentality of increasing web-page hits, when the business may benefit from more valuable content such as blogs or infographics.

The 4Ps are a useful way of structuring your thinking about the elements of the marketing mix, but you should always be prepared to depart from this structure if it helps you do something a bit more creative.

Further reading
Borden, N.H. (1964) 'The concept of the marketing mix', Journal of Advertising Research, 24(4): 7–12.
Kotler, P. (2012) Marketing Management. Harlow, UK: Pearson Education.
Lauterborn, B. (1990) 'New marketing litany: Four Ps passé: C-words take over', Advertising Age, 61(41): 26.
McCarthy, J.E. (1964) Basic Marketing: A managerial approach. Homewood, IL: Irwin.


Chapter 17: Pricing strategies: dynamic pricing

Your pricing strategy is the choice you make about how much to charge customers for your product or service. It is a key element of the marketing mix, and it needs to be consistent with all the other elements of the mix (product, place, promotion) to ensure that the product/service has the best chance of success in a competitive marketplace. Dynamic pricing is a specific pricing strategy that allows you to change your prices rapidly in response to variations in demand.

When to use it
● To decide how much to charge for a new product or service.
● To understand the pricing choices made by your competitors.
● To identify opportunities to make additional profits for your firm.
● To adapt your prices in response to changes in demand.

Origins
The original studies of pricing were conducted in microeconomics, and were based on the simple notion that firms should choose the optimum price/output to maximise their profit. Gradually, these theories were adapted to the realities of the business world. For example, by creating slightly different products at different price points, firms could take advantage of the different levels of 'willingness to pay' from their prospective customers. Prices were also seen as changing over time – for example, as the cost of production came down. And the pricing strategies of competitors were also brought into the mix, often using game theory to help predict how they might respond, for example, when you raise your prices.


The notion of dynamic pricing, while it had existed earlier, really took off at the dawn of the internet era. Internet technology gave firms more detailed information about customer buying behaviour than before, and at the same time the internet created enormous transparency on pricing of products and services. These trends have allowed firms in many industries to adjust their prices in real time, increasing them when demand is high and reducing them when it is low.

What it is
Your pricing strategy depends on three broad sets of factors. The first is the profit targets that your product/service is expected to achieve; most companies have clear expectations about what is an acceptable level of profitability. The second is customer demand, and customers' overall willingness to pay. The third is competition: in an established market, your pricing strategy is highly constrained by current prices; in a new market, where you don't have immediate competitors, you clearly have a far greater degree of freedom in what you can charge. Taking these sets of factors into account, your pricing strategy is then a strategic choice that typically seeks to maximise your profitability over the long term. A number of different models are used, often varying significantly by industry.
● Target-return pricing: Set the price to achieve a target return-on-investment. This is very common in established categories, such as most supermarket products.
● Cost-plus pricing: Set the price at the production cost, plus a certain profit margin. This is becoming less common but is still seen in some sectors – for example, government procurement.
● Value-based pricing: Base the price on the effective value to the customer relative to alternative products. This is common in emerging product areas, such as games and written content online, or a new line of smartphones.
● Psychological pricing: Base the price on factors such as signals of product quality or prestige, or what the consumer perceives to be fair. Many luxury goods are priced in this way.

Over the last 15 years, dynamic pricing has emerged as a fifth model. It is particularly common in markets where the product is ‘perishable’ and the available capacity is fixed – for example, airline seats, holiday bookings and hotel rooms. And it has been made possible by the internet, which gives both customers and suppliers much greater information than they had before. In these markets, you want to charge as much as possible to fill up all the available capacity. This is why skiing holidays cost twice as much during half-term holidays as in the regular season, and why airline prices vary almost on a daily basis.

17: Pr ici ng str ategies: dynamic pr icing

67

How to use it
Here is an example of dynamic pricing. If you want to book a hotel room online, you will notice that the prices vary from day to day. From the hotel's point of view, the right rate to charge for a room per night is what the customer is prepared to pay. If the rate is too low, they are leaving money on the table; if the rate is too high, they may price themselves out of the market. So the changes in prices are all about the hotel trying to match supply and demand. As demand increases, prices rise; if demand stalls, prices go down again.

As the date of your hotel stay approaches, the situation becomes even more complex, because the hotel realises it would prefer to sell an available room at a very low price rather than leave it empty. If you end up booking at the last minute, you sometimes get a great deal (because there is a lot of unsold capacity) and you sometimes pay a fortune (because there are only a few rooms left).

The techniques that firms use for dynamic pricing are complex. They involve lots of information about prior demand levels, expectations about future demand, competitor products and prices, and the volume of products you have available to sell over what period. Pricing changes are typically made automatically using software agents called pricing 'bots'.
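In the spirit of the hotel example, here is a toy Python sketch of an occupancy-based pricing rule. The base rate, floor, ceiling and sensitivity are invented parameters; as noted above, real pricing 'bots' also draw on demand forecasts and competitor prices, so this is a sketch of the basic idea, not a production algorithm.

```python
# A toy occupancy-based pricing rule in the spirit of the hotel example.
# The base rate, floor/ceiling and sensitivity are invented parameters;
# production pricing 'bots' use demand forecasts and competitor data too.

def room_rate(base_rate: float, rooms_sold: int, capacity: int,
              floor: float, ceiling: float) -> float:
    """Raise the price as occupancy rises; discount when demand is weak."""
    occupancy = rooms_sold / capacity
    # Linear adjustment: 20% below base at zero occupancy, rising to
    # 40% above base when the hotel is nearly full.
    rate = base_rate * (0.8 + 0.6 * occupancy)
    return max(floor, min(ceiling, rate))

for sold in (10, 50, 90):
    print(sold, "rooms sold ->", round(room_rate(100.0, sold, 100, 60.0, 150.0), 2))
# 10 rooms sold -> 86.0, 50 -> 110.0, 90 -> 134.0
```

The floor and ceiling capture the judgement, discussed in the pitfalls below, that prices should not swing so far that customers learn to game the system or perceive the variation as unfair.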

Top practical tip
The most important thing in defining your pricing strategy is to understand your customer's willingness to pay. It is easy to figure out how much your product costs and to use that information to anchor your price. But it is typically better to start out by asking how much value the customer gets from your product, and to work back from there. Sometimes, it is even possible to increase the perceived value of the product by charging more for it (this works for luxury goods, for example).

The advent of the internet has made it much easier to experiment with different pricing strategies, and to adapt pricing quickly in response to demand. Amazon.com was a pioneer in the dynamic pricing of books, and the low-cost airlines such as Easyjet.com and Southwest Airlines were early movers in dynamic pricing in their industry.


Top pitfalls
There are a couple of obvious pitfalls associated with dynamic pricing. One is that you don't want to become too well known for dropping your prices to very low levels when demand is low. Customers will figure out you are doing this, and they will withhold their purchase until the last minute. Many firms get around this problem by selling their lowest-price products in a disguised way through a middleman: for example, if you want to get a deal on a hotel room through lastminute.com, you often don't find out the name of the hotel until you have actually booked it.

Another pitfall is that too much variation in pricing can upset customers – they might perceive the differences as unfair and they can become confused, which leads them to take their custom elsewhere. So most firms that use dynamic pricing are careful not to change their prices too much or too often.

Further reading
Raju, J. and Zhang, Z.J. (2010) Smart Pricing: How Google, Priceline and leading businesses use pricing innovation for profitability. Upper Saddle River, NJ: FT Press.
Vaidyanathan, J. and Baker, T. (2003) 'The internet as an enabler for dynamic pricing of goods', IEEE Transactions on Engineering Management, 50(4): 470–477.


Chapter 18: Product life cycle

Every product goes through a ‘life cycle’, from introduction to growth to maturity and then decline. By understanding this life cycle, and where a particular product lies on it, you can make better decisions about how to market it.

When to use it
● To decide how to position a specific product, and how much money to invest in it.
● To manage a portfolio of products.
● To decide how to launch a new product.

Origins
Like so many management concepts, the product life cycle had been recognised informally before it was discussed in an explicit way. One of the first articles written on the subject was 'Exploit the product life cycle' by marketing professor Ted Levitt in 1965. The purpose of this article was to argue that your marketing strategy should vary depending on the stage in the life cycle of your product. Many subsequent studies picked up and extended Levitt's ideas.

There have also been many variants on the product life cycle theme. For example, in the sphere of international business, Raymond Vernon argued that multinational firms would often create a new product in a developed region, such as the USA or Europe, and then as it matured in that region it would gradually be rolled out in less-developed countries. Researchers have also studied the industry life cycle (that is, the pattern of growth and decline for the entire set of providers of a product category such as personal computers), and they have studied the life cycle of diffusion (focusing on the speed of uptake of a population when faced with a new technology).

What it is
Every product has a life cycle, meaning that it goes through predictable phases of growth, maturity and decline. Older products eventually become less popular and are replaced by newer, more modern products. There are many factors at work in this process – some are related to the features of the product itself, some are more to do with changing social expectations and values. Some products have very long life cycles (such as refrigerators), others have very short life cycles (for example, mobile phones).

The product life cycle model describes the four specific life cycle stages of introduction, growth, maturity and decline, and it suggests that a different marketing mix is suitable for products at each stage. For example, in the early stages of introduction and growth, it is often helpful to put in a lot of investment, as it helps to secure revenue later on.
● Introduction: This stage is typically expensive and uncertain. The size of the market is likely to be small, and the costs of developing and launching a product are often very high.
● Growth: This stage involves a big ramp-up in production and sales, and often it is possible to generate significant economies of scale. The budget for marketing and promotion can be very high at this stage, as you are trying to build market share ahead of your competitors.
● Maturity: Here, the product is established, but in all likelihood there will also be a lot of competitors. The aim for the firm is to maintain its market share, and to look for ways of improving the product's features, while also seeking to reduce costs through process improvements. Margins are typically highest at this stage of the life cycle.
● Decline: At some point, the market for a product will start to shrink. This is typically because an entirely new product category has emerged that is taking the place of this product (for example, smartphones are supplanting laptop computers), but it can also be because a market is saturated (that is, all the customers who will buy the product have already purchased it). During this stage it is still possible to make very good profits, for example by switching to lower-cost production methods, or by shifting the focus to less-developed overseas markets.

How to use it
There are many tactics that marketers can employ at each stage of the product life cycle. Some typical strategies at each stage are described below (a small lookup-table sketch follows the lists):

Introduction
● Invest in high promotional spending to create awareness and inform people.
● Adopt low initial pricing to stimulate demand.
● Focus efforts to capitalise on demand, initially from 'early adopters', and use them to promote your product/service where possible.

Growth
● Advertise to promote brand awareness.
● Go for market penetration by increasing the number of outlets for the product.
● Improve the product – new features, improved styling, more options.

Maturity
● Differentiate through product enhancements and advertising.
● Rationalise manufacturing, outsource production to a low-cost country.
● Merge with another firm to take out competition.

Decline
● Advertise – try to gain a new audience or remind the current audience.
● Reduce prices to make the product more attractive to customers.
● Add new features to the current product.
● Diversify into new markets, for example less-developed countries.
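The stage-by-stage checklists above can be encoded as a simple lookup table, which is one way a marketing team might keep them to hand. This is purely an illustrative restatement of the lists, not a prescribed tool.

```python
# The stage-by-stage tactics above, encoded as a simple lookup table.
# Illustrative only; the stage names follow the model's four phases.
TACTICS = {
    "introduction": ["high promotional spend to build awareness",
                     "low initial pricing to stimulate demand",
                     "recruit early adopters as advocates"],
    "growth": ["advertise for brand awareness",
               "add outlets to penetrate the market",
               "improve features, styling and options"],
    "maturity": ["differentiate via enhancements and advertising",
                 "rationalise or outsource manufacturing",
                 "consider mergers to reduce competition"],
    "decline": ["advertise to new or lapsed audiences",
                "reduce prices", "add new features",
                "diversify into new markets"],
}

def tactics_for(stage: str) -> list[str]:
    """Return the typical tactics for a given life cycle stage."""
    return TACTICS[stage.lower()]

print(tactics_for("Maturity"))
```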

Top practical tip
The product life cycle model does a good job of describing the stages a product goes through, but it is not definitive. There are many products out there (such as milk) that have been mature for decades, and there are also other products (such as laptop computers) that moved quickly from growth to decline without spending much time in the mature stage.

So to use the product life cycle in a practical way, it is useful to think through the different trajectories a product might take. For example, is it possible to 'reinvent' a mature product in a way that gives it additional growth? In the mid-1990s coffee was clearly a mature product, but Howard Schultz created Starbucks as a way of revitalising coffee and turning it into a growth product.


Another way of using the product life cycle is to think in terms of the portfolio of products your firm is selling. As a general rule, products in the introduction and growth phases are cash flow negative, while those in the maturity and decline phases are cash flow positive. So, having products at multiple stages provides some useful balance.

Top pitfalls
One of the pitfalls of the product life cycle is that it can be self-fulfilling. If you are a marketer and you see a product approaching its decline phase, you might decide to stop actively marketing it, and this inevitably will lead to the decline of that product. Alternatively, you might believe the product should receive additional investment, but then struggle to persuade your boss, who is in charge of the entire portfolio of products.

Good marketers therefore draw on a variety of data to help them decide which stage a product is in, and whether that phase might be prolonged – perhaps through a fresh marketing campaign or by enhancements to the product.

Further reading
Day, G. (1981) 'The product life cycle: Analysis and applications issues', Journal of Marketing, 45(4): 60–67.
Levitt, T. (1965) 'Exploit the product life cycle', Harvard Business Review, November–December: 81–94.
Vernon, R. (1966) 'International investment and international trade in the product cycle', The Quarterly Journal of Economics, 80(2): 190–207.


Chapter 19: Segmentation and personalised marketing

Segmentation is the process of slicing up the ‘mass market’ for a particular product or service into a number of different segments, each one consisting of consumers with slightly different needs. Personalised marketing is an extreme version of segmentation that seeks to create a unique product-offering for each customer.

When to use it
● To match your various product-offerings to the needs of different segments of consumers.
● To identify parts of the market whose needs are not currently being adequately served.
● To capture a higher price for a product or service, on the basis that it is better-suited to the needs of a particular segment or individual.

Origins
The original thinking about market segmentation occurred in the 1930s through economists such as Edward Chamberlin, who developed ideas about aligning products with the needs and wants of consumers. Around the same time, the first high-profile experiments in segmentation were taking place at General Motors. Up to that point, Ford Motor Company had been the dominant auto manufacturer with its one-size-fits-all Model T Ford. Under CEO Alfred P. Sloan, General Motors came up with a radical alternative model, namely to offer 'a car for every person and purpose'. By the 1930s, GM had established five separate brands, with Cadillac at the top end, followed by Buick, Oldsmobile, Oakland (later Pontiac) and then Chevrolet at the bottom end. This segmentation model was extremely successful, helping GM to become the biggest auto company in the world for much of the post-war period.

The theoretical ideas about market segmentation were developed by Wendell Smith. In 1956, he stated that 'Market segmentation involves viewing a heterogeneous market as a number of smaller homogeneous markets in response to differing preferences, attributable to the desires of consumers for more precise satisfaction of their varying wants'. A later study by Wind and Cardozo in 1974 defined a segment as 'a group of present and potential customers with some common characteristic which is relevant in explaining their response to a supplier's marketing stimuli'.

The concept of personalised marketing emerged in the 1990s, thanks in large part to the enormous volumes of information that companies could get access to about their customers. For example, internet software allows companies to identify where customers are signing in from, keep records of customers' transactions with them, and to use 'cookies' (small software modules stored on a PC or laptop) to learn about consumers' other shopping interests. This data has enormous benefits, as it allows companies to personalise their offerings to each customer. Similar concepts have also been proposed, including one-to-one marketing and mass-customisation.

What it is
The goal of segmentation analysis is to identify the most attractive segments of a company's potential customer base by comparing the segments' size, growth and profitability. Once meaningful segments have been identified, firms can then choose which segments to address, and thus focus their advertising and promotional efforts more accurately and more profitably. Market segmentation works when the following conditions are in place:
● It is possible to clearly identify a segment.
● You can measure its size (and whether it is large enough to be worth targeting).
● The segment is accessible through your promotional efforts.
● The segment fits with your firm's priorities and capabilities.

There are many ways of identifying market segments. Most firms use such dimensions as geography (where the customers live), demography (their age, gender or ethnicity), income and education levels, voting habits and so on. These are ‘proxy’ measures that help to sort people into like-minded groups, on the assumption that such people then behave in similar ways. In the days before the internet, such proxy measures were the best bet. However, since the advent of the internet and the ‘big data’ era, it is now possible to collect very detailed information about how individuals actually behave online and in their purchasing choices. This has made it possible to do a far more accurate form of segmentation, even down to the level of tailoring to individuals. For example, Amazon sends you personalised recommendations on the basis of your previous purchases, Yahoo! allows you to specify the various elements of your home page, and Dell lets you configure the components of your computer before it is assembled.

19: Segmentati on and personalised m arket ing

75

How to use it
The basic methodology for market segmentation is well established:
● Define your market – for example, retail (individual) banking in the UK.
● Gather whatever data you can get your hands on to identify the key dimensions of this market. This includes obvious information about age, gender, family size and geography, and then important (but sometimes harder to gather) information about education and income levels, home ownership, voting patterns, and so on. Sometimes this is data you have collected from your existing customers, but be careful in this situation because you also want information about non-customers who could become customers.
● Analyse your data using some sort of 'clustering' methodology, to identify subsets of the overall market that have similar attributes (a minimal sketch follows this list). For example, you can almost always segment your market by income level, and identify high-, medium- or low-end customers in terms of their ability to pay. However, this may not be the most important dimension. If you are selling a digital product, for example, customer age and education level may be more important.
● Based on this analysis, identify and name the segments you have found, and then develop a strategy for addressing each segment. You may choose to focus exclusively on one segment; you may decide to develop offerings for each segment.
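To illustrate the clustering step, here is a minimal sketch using k-means. It assumes the scikit-learn library is available; k-means is just one of many clustering methods that could stand in here, and the customer features and values are invented for illustration. In practice you would scale features with different units before clustering.

```python
# One way to run the 'clustering' step: k-means on two invented
# customer features (age, income). Assumes scikit-learn is installed;
# any clustering method could stand in here.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a customer: [age, annual income in GBP thousands].
customers = np.array([
    [22, 18], [25, 21], [27, 24],   # younger, lower income
    [41, 55], [45, 60], [48, 52],   # mid-life, mid income
    [60, 95], [63, 88], [58, 102],  # older, higher income
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
for customer, segment in zip(customers, labels):
    print(customer, "-> segment", segment)
```

The output groups like-minded customers together; the marketer's job is then the last step in the list above, naming each cluster and deciding whether and how to serve it.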

For individualised marketing the same sort of logic applies, but the analytical work is so onerous that it is all done by computer. For example, UK supermarket Tesco was the first to offer a 'club card' that tracked every single purchase made by an individual. Tesco built a computer system (through its affiliate, Dunnhumby) that analysed all this data, and provided special offers to customers based on their prior purchasing patterns. If you had previously bought a lot of breakfast cereal, for example, you might get a half-price deal on a new product-offering from Kellogg's.

Top practical tip
Market segmentation is such a well-established technique that it is almost self-defeating. In other words, if all firms use the same approach to segment their customers, they will all end up competing head to head in the same way. The car industry, for example, has very well-defined segments, based around the size of the car, how sporty it is, and so on.


So the most important practical tip is to be creative in how you define segments, in the hope that you can come up with a slightly different way of dividing up your customers. In the car industry, for example, the 'sports utility vehicle' segment did not exist until 20 years ago, and the company that first developed a car for this segment did very well.

Top pitfall
Segmentation has its limitations, and it needs to be implemented carefully. Some segments are too small to be worth serving; other segments are so crowded with existing products that they should be avoided. It is also quite easy to over-segment a market, by creating more categories of offerings than the market will bear. In such cases, consumers become confused and may not purchase any of your offerings.

Finally, segmentation is challenging in entirely new markets, because you don't know how consumers will behave. Sometimes their actual buyer behaviour bears no resemblance to what market research suggested they would do. As a general rule, segmentation is a more useful technique in established markets than in new ones.

Further reading
Peppers, D. and Rogers, M. (1993) The One to One Future: Building relationships one customer at a time. New York: Doubleday Business.
Sloan, A.P. (1964) My Years with General Motors. New York: Doubleday Business.
Smith, W.R. (1956) 'Product differentiation and market segmentation as alternative marketing strategies', Journal of Marketing, 21(1): 3–8.
Wind, Y. and Cardozo, R.N. (1974) 'Industrial market segmentation', Industrial Marketing Management, 3(3): 153–166.


Part 3: Strategy and organisation


A firm's strategy explains where it is going and how it intends to get there – it involves figuring out where to play (which products to sell to which customers) and how to play (how it positions itself against its competitors). There are many, many views about how to define strategy – the famous management thinker, Henry Mintzberg, once identified ten different perspectives. This section focuses on the most well-known models and frameworks.

The most well-known view is the one proposed by Michael Porter at Harvard Business School in 1979: a firm should define its strategy by first of all understanding the structure of the industry it is competing in (using the 'five forces analysis'), and then choosing a position within that industry that is most defensible for long-term competitive advantage ('generic strategies'). Linked to this view is the notion that competition between firms has a dynamic quality, and can be modelled as a 'game', where one firm's behaviour interacts with another's ('game theory: the prisoner's dilemma').

This competitive perspective is very valuable, but it is almost entirely externally focused. An alternative perspective, which became increasingly important through the 1990s, was to look inside the firm and to understand how its internal resources and capabilities could become a source of advantage ('core competence and the resource-based view'). More recently, attention has shifted away from the search for sustainable, long-term advantage in established markets, and towards opening up new markets where traditional competitive dynamics do not apply ('blue ocean strategy').

All of these models are concerned with the strategy of an individual business. But many firms are actually operating in multiple businesses at the same time, so there are also some important models that help make sense of this type of complexity. One classic model was the 'BCG growth-share matrix', which was a useful way of making sense of the variety of businesses in a firm's portfolio. This model has given way in recent years to more sophisticated models, the most useful of which is the parenting advantage framework developed at the Ashridge Strategic Management Centre ('corporate strategy: parenting advantage'). Another whole dimension of corporate strategy is the question of sustainability over the long term, and a lot of attention today is focused on such issues ('corporate social responsibility: the triple bottom line').

Finally, this section addresses strategy and organisation because the ability of a firm to execute its strategy depends to a large degree on how it is organised internally. There are many models and frameworks we could have included here, but because of the space constraints we focused on just two. The classic 'McKinsey 7S framework' provides an overarching perspective on the various dimensions of the firm that are important – we need to understand formal structures and systems, but we also need to keep in mind the 'soft' aspects of shared goals, management style and individual skills. More recently, a lot of research has been done on the 'ambidextrous organisation', which provides a way of thinking through the different ways of organising that allow a firm to exploit its current sources of advantage while also exploring for new opportunities in the future.


Chapter 20: The ambidextrous organisation

For a firm to be successful over the long term, it has to be profitable today while also investing in its future. An ambidextrous organisation is one that gets this balance right: it is able to exploit its existing strengths and make money in the short term, while at the same time it is able to explore opportunities for future profitability and growth. Just as being ambidextrous means being able to use both the left and right hand equally, an ambidextrous organisation is one that is equally adept at exploration and exploitation.

When to use it
● To diagnose your firm's long-term strategic position.
● To identify ways of getting the right balance between short-term and long-term thinking.
● To make changes to your structure or your culture to help your short-term/long-term balance.

Origins
It has been recognised for many years that firms need to balance long- and short-term thinking. The concept of organisational ambidexterity, as such, was first put forward by Robert Duncan in 1976. He recommended that firms create a separate unit responsible for R&D and business development, protected from the short-term demands of the market-place. This idea was then picked up by Mike Tushman and Charles O'Reilly in the late 1990s, in an article and a book about how firms need to get a balance between evolutionary and revolutionary change. Since then, research on the notion of ambidexterity has grown dramatically and many different arguments have been put forward about how the right balance might be achieved.


The theoretical concept underlying ambidexterity research was developed by James March in an influential article in 1991. He argued that firms needed to balance exploration (developing new products and services) and exploitation (selling existing products and services) over time. However, it is easy for firms to get into ‘learning traps’ whereby they become very good at one of these activities and drive out their investment in the other one. Some firms, for example, become so successful at selling their existing products that they ignore the need to invest in new products.

What it is
An ambidextrous organisation is one that balances exploration and exploitation. Exploration includes things such as search, variation, risk-taking, experimentation, discovery or innovation, while exploitation includes such things as refinement, production, efficiency, implementation and execution. Both are necessary for long-term success. Companies that focus only on exploration face the risk of wasting resources on ideas that may not prove useful or may never be developed. Those that focus only on exploitation will do well for a time, but will fail to renew their product-offerings and will gradually decline.

There are three ways of building an ambidextrous organisation. One is to focus on structural ambidexterity, which means creating separate units for exploitation and exploration. For example, a standard approach in large firms is to create separate units for 'exploration' activities, such as R&D or business development. This is a way of protecting them from the day-to-day demands of a manufacturing or selling operation, where thinking is much more short-term focused.

The second approach is contextual ambidexterity, which uses behavioural and social means to get the necessary balance between exploration and exploitation. This approach works on the principle that individual employees are best positioned to decide how to spend their time on a day-to-day basis, and that there are important synergies between exploration and exploitation. The role of top management, in the contextual approach, is to create a supportive culture (or context) that encourages their employees to make the right trade-offs between exploration and exploitation.

The third approach is temporal ambidexterity, which means essentially switching back and forth over time between exploration and exploitation. This approach recognises that it is very difficult to manage exploration and exploitation at the same time in the same operation, but rather than separate them out in space, the separation occurs on a temporal basis. For example, a firm might prioritise developing new technologies for a couple of years, then focus its attention on commercialising them for a couple of years, before switching back to development.


How to use it
While these three approaches are often presented as alternative models, the reality in most large firms is that all three are used to varying degrees at the same time. Therefore the first practical question, in applying this model, is to be clear on the level of analysis you are working at. To keep things simple, we will consider two levels of analysis here:

1 The firm as a whole: If you are concerned about the long-term success of your firm, the biggest concern you typically have to worry about is disruptive threats to your existing business model (for example, the internet making the business model of traditional newspapers obsolete). In such a case, structural ambidexterity is often the most appropriate way forward. This means, for example, creating a separate unit that works on your digital strategy, or it means creating a corporate venturing unit or skunkworks that experiments with lots of new business ideas at the same time. By giving these exploration-orientated investments the space and time they need, you are much more likely to tap into the emerging opportunities in your market. You can also use temporal ambidexterity in this situation – for example, by making long-term investment opportunities a priority for the whole firm, at least for a while.

2 An operating unit: If you are the manager of a specific operating unit (perhaps a factory or a sales team or a call centre) your job is to make that unit as successful as possible. This means delivering on its short-term performance goals, but also looking continuously for ways to improve its operations, perhaps by finding better ways of working, or by identifying new sales opportunities. This is also an ambidexterity challenge, in that you are trying to balance short-term exploitation (hitting your targets) and long-term exploration (finding ways to develop and improve). In such a case, the more appropriate model is typically contextual ambidexterity. This means creating a supportive internal culture or context, so that employees take responsibility for achieving the necessary balance between exploration and exploitation. A supportive culture pushes people to deliver to a very high level (focusing on discipline and stretch), while also taking care of the needs and concerns of its people (focusing on support and trust).



Top practical tip

To make an organisation ambidextrous, you have to work continuously on fine-tuning the structure and the context. This is because all organisations gravitate towards one way of working – usually towards exploitation, because of the short-termism of the financial markets. In such cases, you have to work hard to push the other way, to ensure that a focus on exploration is not lost entirely. At the firm level, this means creating special-purpose units to invest in new technologies, or it might mean acquiring firms that can do the things you are missing. At the operating-unit level, it means pushing people to experiment more, and creating a tolerance for mistakes.

Top pitfall

It is easy to get bogged down in the theoretical arguments about what ambidexterity is and the different models for making it work. A much more practical way of using the concept is simply to think in terms of the need for an overall balance between making money today and investing in what will make money tomorrow. Your job as a manager, at whatever level you operate, is to find formal and informal ways of keeping that balance.

Further reading

Duncan, R. (1976) 'The ambidextrous organisation: Designing dual structures for innovation', in Kilmann, R.H., Pondy, L.R. and Slevin, D. (eds.) The Management of Organization Design (pp. 167–188). New York: North Holland.

March, J.G. (1991) 'Exploration and exploitation in organizational learning', Organization Science, 2(1): 71–87.

Tushman, M.L. and O'Reilly, C.A. (1997) Winning Through Innovation: A practical guide to managing organizational change and renewal. Cambridge, MA: Harvard Business Press.



21 The BCG growth-share matrix

Most firms operate in more than one line of business. In such multi-business firms, it can be challenging to figure out how they all fit together and where the priorities for future investment might be. The BCG ‘growth-share matrix’ is a simple model to help with this analysis.

When to use it

● To describe the different lines of business within a multi-business firm.

● To help you prioritise which businesses to invest in and which ones to sell.

Origins

In the post-war era, firms in the USA and Europe had grown dramatically in size. Conglomerates such as ITT, GE and Hanson had started to emerge – typically with large numbers of unrelated businesses, all controlled from the centre using financial measures. The BCG growth-share matrix was invented by The Boston Consulting Group in response to this diversification trend. It offered an intuitive way of mapping all the different businesses controlled by a firm onto a 2×2 matrix, and some simple guidelines for how each of those businesses should be managed from the corporate headquarters. It became very popular among large firms because it helped them get their hands around what was often a very diverse set of businesses.

The simplicity of the BCG matrix was also one of its limitations, and over the years a number of variants were put forward, for example by the consultancy McKinsey and by General Electric (GE). Versions of the matrix were used throughout the 1970s and 1980s, but the trend moving into the 1990s was towards much less diversification, as people realised there were few synergies available among unrelated businesses. Big conglomerates were broken up, sometimes by private-equity-based 'corporate raiders' and sometimes by their own leaders as a way of creating focus. The BCG matrix gradually fell out of favour, though it is still used today – often in a fairly informal way.

What it is

The BCG matrix has two dimensions. The vertical axis indicates 'market growth' – a measure of how quickly a specific market is growing. For example, the market for milk may be growing at 1 per cent per year, while the market for smartphones may be growing at 10 per cent per year. The horizontal axis indicates 'relative market share' (that is, market share relative to the market-share leader) – a measure of how strong your business is within that market. For example, you might have a 20-per-cent share in the slow-growing milk market and a 4-per-cent share in the fast-growing smartphone market.

[Figure: The BCG growth-share matrix. Vertical axis: market growth rate, high to low (a proxy for cash usage); horizontal axis: relative market share, high to low (a proxy for cash generation). The four cells are stars (high growth, high share), question marks (high growth, low share), cash cows (low growth, high share) and dogs (low growth, low share).]

Source: Adapted from The BCG Portfolio Matrix from the Product Portfolio Matrix, © 1970, The Boston Consulting Group (BCG). Reproduced with permission.

Each business line is plotted on the matrix, and the size of the circle used is typically indicative of the amount of sales coming from that business (in top-line revenues). Each quadrant on the matrix is then given a name:


● High growth/high share: These are 'star' businesses – the most attractive part of your portfolio.

● High growth/low share: These are known as 'question mark' businesses because they are relatively small in market share, but they are in growing markets. They are seen as offering potential.

● Low growth/high share: These are your 'cash cow' businesses – very successful, but in low-growth, mature markets. Typically they provide strong, positive cash flows.

● Low growth/low share: These are 'dog' businesses and are considered the weakest in your portfolio. They need to be turned around rapidly or exited.

The vertical ‘growth’ dimension is a proxy measure for the overall attractiveness of the market in which you are competing, and the horizontal ‘share’ dimension is a proxy for the overall strength of your business in terms of its underlying capabilities.

How to use it

By positioning all your businesses within a single matrix, you immediately get a 'picture' of your corporate portfolio. This in itself was a useful feature of the BCG matrix, because some of the conglomerates of the 1960s and 1970s, when the matrix was popular, had 50 or more separate lines of business.

The matrix also provides some useful insights about how well the businesses are doing and what your next moves should be. Cash cow businesses are generally in mature and gradually declining markets, so they have positive cash flows. Question mark businesses are the reverse – they are in uncertain growth markets, and they require investment. One logical consequence of this analysis, therefore, is to take money out of the cash cow businesses and invest it in the question mark businesses. These then become more successful, their market share grows and they become stars. As those stars gradually fade, they become cash cows, and their spare cash is used to finance the next generation of question mark businesses. Dog businesses, as noted above, are generally exited as soon as possible, though sometimes they can be rapidly turned around to become question marks or cash cows.

While this logic makes sense, it gives the corporate headquarters a very limited job to do. The reason we have capital markets in developed countries such as the UK and the USA is to provide firms with access to capital that they can invest. If you think a firm is in an attractive market, you are likely to invest more of your money in it; if you think it is in a bad market, you might sell your shares. It therefore makes very little sense for the corporate headquarters to restrict itself to moving money around between businesses – the capital markets can typically do that more efficiently.

The biggest limitation of the BCG matrix, in other words, is that it underplays the potentially important role that the corporate HQ can play in creating value across its portfolio. Nowadays, diversified firms have a much more sophisticated understanding of the ways they add and destroy value – for example, by sharing technologies and customer relationships among businesses, and by transferring knowledge between lines of business. Opportunities for synergies of this sort are completely ignored by the BCG matrix.



Top practical tip

As a first step in making sense of your business portfolio, the BCG matrix can be very useful. It gives you an indication of where the most- and least-promising opportunities lie. However, you should be very cautious about drawing strong conclusions from the analysis.

Remember, the market-growth and market-share dimensions are proxies for the underlying attractiveness of the market and the underlying strength of the business respectively. It is often useful to try out other ways of measuring these dimensions. For example, you can do a 'five forces analysis' of a market to understand its overall attractiveness in detail, rather than assuming that growth is the key variable. You should also think very carefully about how to define market share. For example, does BMW have a very low share (10 per cent) in the luxury sedan car market? Depending on how you define the boundaries around the market, the position of the business changes dramatically. Again, it is important to think carefully about what the analysis tells you and how much the result is a function of the specific numbers you used as inputs.

Top pitfall

One particularly dangerous aspect of the BCG matrix is that it can create self-fulfilling prophecies. Imagine you are running a business that has been designated as a cash cow. Your corporate headquarters tells you that your spare cash will be taken away and invested in a question mark business. This means you cannot make any new investments, and as a result your market share drops further. It is a self-fulfilling prophecy.

The obvious way to guard against this risk is to make a case for reinvesting in the business. Even though the business is mature, it may still have opportunities to regenerate and grow, given the right level of investment. Hopefully the corporate executives at headquarters are sufficiently enlightened to see this potential.



Further reading

Campbell, A., Goold, M., Alexander, M. and Whitehead, J. (2014) Strategy for the Corporate Level: Where to invest, what to cut back and how to grow organizations with multiple divisions. San Francisco, CA: Jossey-Bass.

Kiechel, W. (2010) Lords of Strategy: The secret intellectual history of the new corporate world. Boston, MA: Harvard Business School Press.



22 Blue ocean strategy

Most firms compete in established ‘red ocean’ markets, with well-entrenched competitors and a clearly-defined set of customer expectations. Occasionally, a firm will create a ‘blue ocean’ market for a product or service that had not previously been recognised – a strategy that typically yields far greater profitability.

When to use it

● To understand what is distinctive (if anything) about your current strategy.

● To make your existing strategy more distinctive.

● To identify opportunities for entirely new offerings.

Origins

Researchers have understood for many years that successful firms are often the ones that 'break the rules' in their industry. For example, in the 1970s, Swedish furniture manufacturer Ikea rose to prominence thanks to its new way of making and selling furniture. The emergence of the internet in the mid-1990s made it easier for new firms to break the rules in established industries, and it was during this era that the term 'business model' became widely used. Amazon.com, for example, competed against traditional book retailers with a distinctive business model – its formula for making money was very different to that of Barnes & Noble or Waterstones.

Many researchers studied the process of business model innovation during this period and into the 2000s, with a view to providing advice to firms about how they could develop new business models themselves, or protect themselves against entrants with new business models. Important publications were Leading the Revolution (Hamel) and All the Right Moves (Markides). But the most influential work on business model innovation was probably Blue Ocean Strategy by INSEAD professors Chan Kim and Renee Mauborgne. While the underlying ideas across these publications are similar, Blue Ocean Strategy provides the most comprehensive guide for how to define and develop new market opportunities.

What it is

Kim and Mauborgne divide the world of business opportunities into red and blue oceans. Red oceans are the established industries that exist today, typically with well-defined boundaries. Firms compete for market share using the well-understood rules of the game, but as the market gets crowded, the prospects for profits and growth are reduced. Red-ocean industries include automobiles, consumer products and airlines.

Blue oceans are industries that do not exist today – an unknown market space, untainted by competition. In a blue ocean, demand is created rather than fought over, and there are bountiful opportunities for growth. Competition in blue oceans is irrelevant because the rules of the game are waiting to be set. Apple is famous for identifying blue oceans – for example, it created the market for legal online music selling (iTunes) and the market for tablet computers (the iPad).

How to use it

Blue ocean strategy is a set of tools to help firms identify and colonise these blue-ocean opportunities. The starting point is to understand customer values – the underlying wants or needs that customers have – and to seek out novel ways of addressing them. This is best done using a 'strategy canvas', which lists the range of customer values on the horizontal axis and the extent to which each value is being met on the vertical axis (see the figure below). By plotting your own firm's profile on the strategy canvas, and then comparing it to the profiles of close and distant competitors, you can see visually how distinctive your current strategy is. In the figure below, firms A and B have very similar strategies, while firm C has a distinctive strategy.

The strategy canvas captures the current state of play in the known market space, which allows you to see what factors the industry competes on and where the competition currently invests. It also opens up a conversation about what might be changed – it helps you explore areas where firms are satisfying existing customer needs poorly, and it allows you to brainstorm possible new sources of value that no firm has yet addressed. Four questions are used to guide such a conversation:

● What factors can be eliminated that the industry has taken for granted?

● What factors can be reduced well below the industry's standard?

● What factors can be raised well above the industry's standard?

● What factors can be created that the industry has never offered?



[Figure: A strategy canvas. Horizontal axis: customer values (low cost, quality, service, speed, convenience); vertical axis: the level to which each value is provided by the firm, from low to high. Firms A and B trace near-identical profiles, while firm C's profile is distinctive.]

Source: Based on data from Kim, C. and Mauborgne, R. (2005) Blue Ocean Strategy. Boston, MA: Harvard Business School Press.

The objective of this analysis is ‘value innovation’, which is about pursuing differentiation and low cost at the same time, and creating additional value for your firm and for your customers at the same time. Note that this is very different to the original notions of competitive strategy developed by Michael Porter, in which firms were advised to choose differentiation or low cost. As a general rule, you can be differentiated and low cost at the same time while you are in a blue ocean, but gradually blue oceans are colonised by other firms, the industry rules become set and the ocean turns to red. Once the ocean is red, Porter’s original arguments about having to choose between differentiation and low cost are once again valid.
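If you want a rough quantitative check on how distinctive your profile is, you can score each firm on each customer value and compare the profiles. The Python sketch below is a minimal illustration; the value labels follow the figure above, but all of the scores are invented.

# A minimal sketch: scoring strategy-canvas profiles and measuring
# how far each firm's profile sits from its rivals'. Scores invented.
import math

values = ["low cost", "quality", "service", "speed", "convenience"]

profiles = {            # 0 (low) to 10 (high) on each customer value, in order
    "Firm A": [3, 8, 7, 4, 5],
    "Firm B": [4, 8, 6, 4, 5],   # close to Firm A: a crowded 'red ocean'
    "Firm C": [9, 4, 2, 9, 8],   # a distinctive profile
}

def distance(p, q):
    """Euclidean distance between two canvas profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

for name, profile in profiles.items():
    gaps = [distance(profile, p) for n, p in profiles.items() if n != name]
    print(f"{name}: average distance to rivals = {sum(gaps) / len(gaps):.1f}")

A large average distance is not a strategy in itself, of course – it simply flags that your profile differs from the pack, which is the starting point for the four questions above.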

Top practical tip

Blue ocean strategy provides you with a comprehensive set of tools for analysing your customers, their met and unmet needs, your competitors' offerings, and so forth. However, the heart of blue-ocean thinking is coming up with a creative insight about a product or service that doesn't currently exist. This is really hard to do, because we are all, to some degree, prisoners of our prior experience. Breaking out from this experience involves looking for inspiration in unusual places. For example, you can look at solutions provided in alternative industries or by other firms. One useful guideline is to ask yourself, 'What would Steve Jobs do?' or 'What would Richard Branson do?' in this industry, as both are famous for challenging the normal rules of the game in whatever market they have entered.



Another approach is to make sure that the team of people involved in the discussion are very heterogeneous, and include people who are fairly new to your firm and therefore more open to unusual ideas.

Top pitfall

While the concept of blue ocean strategy is attractive, one important limitation is that the number of genuinely blue-ocean opportunities out there is small. Many potential blue oceans end up being mirages, or actually fairly small markets. The process of developing a blue ocean strategy for your firm can therefore be a bit frustrating, and often yields less exciting benefits than you had hoped for.

Further reading

Hamel, G. (2000) Leading the Revolution. Boston, MA: Harvard Business School Press.

Kim, C. and Mauborgne, R. (2005) Blue Ocean Strategy. Boston, MA: Harvard Business School Press.

Markides, C. (1999) All the Right Moves. Boston, MA: Harvard Business School Press.



23 Core competence and the resource-based view

Firms don’t just become profitable because of the generic strategy they have chosen. Sometimes, their internal resources and capabilities are sufficiently unique that other firms cannot match them. These models help a firm to understand and to develop their internal capabilities so that they become a source of competitive advantage.

When to use it

● To understand why one firm is more profitable than another, even when they have the same position in the market.

● To improve profitability and growth in your own firm.

Origins

Thanks largely to Michael Porter's influential work, thinking about strategy in the 1980s was all about how firms positioned themselves within their chosen industry – it was externally focused. But to be successful, a firm also has to give attention to its internal resources and competencies – it has to have the skills to translate its intentions into action. Around 1990 there was a shift in thinking towards these internal perspectives. In that year, a landmark paper by Gary Hamel and C.K. Prahalad, called 'The core competence of the corporation', suggested that the secret of long-term success was to understand and build on the underlying competencies that make your firm distinctive. Around the same time, Jay Barney wrote a highly-influential academic paper arguing that success is built on having valuable, rare and hard-to-imitate resources. Barney's work built on some earlier academic studies, but it was his contribution that really opened up the resource-based view to academic researchers around the world.

What it is

According to Hamel and Prahalad, a 'core competency' is a harmonised combination of multiple resources and skills that distinguishes a firm in the market-place. Such a competency fulfils three criteria:

● it provides potential access to a wide variety of markets;

● it makes a significant contribution to the perceived customer benefits of the end-product;

● it is difficult for competitors to imitate.

Examples used by Hamel and Prahalad include Canon's core competencies in precision mechanics, fine optics and micro-electronics, and Disney's core competency in storytelling. Core competencies are not just valuable in existing markets; they can also be used to build many products and services in different markets. For example, Amazon used its state-of-the-art IT infrastructure to develop an entirely new business, Amazon Web Services. Core competencies emerge through continuous improvement over time, and indeed this is one of the reasons they are hard to copy.

The 'resource-based view' is a theory of competitive advantage based on how a firm applies its bundle of tangible and intangible resources to market opportunities. Resources have the potential to create competitive advantage if they meet four criteria:

● valuable;

● rare (not freely available for everyone to buy);

● inimitable (not quickly copied);

● non-substitutable.

For example, a firm owning a diamond mine has the potential for competitive advantage, because its diamonds meet these criteria. A more interesting example is McKinsey, the consultancy, which over the years has built a set of valuable relationships with its key clients that its competitors cannot match. Many observers have argued that it is useful to separate 'resources', which are assets that can be bought and sold, from 'capabilities', which are bundles of resources used in combination to achieve desired ends.

There are obvious parallels between the 'core competence' and 'resource-based' views, but they are not identical. Core-competence thinking has been used on a more applied basis, with many firms talking colloquially about what their core competencies are, whereas the resource-based view is the preferred way of thinking about these issues in academic research.



How to use it

It is useful to follow a structured framework for analysing your firm's core competencies. Here is one standard approach (a minimal screening sketch follows the list):

● Start by brainstorming what matters most to your customers or clients: what do they need, what is valuable to them? What problems do they have that you can potentially address?

● Then think about the competencies that lie behind these needs. If customers value small products (such as mobile phones), the relevant competence might be miniaturisation and precision engineering. If they are looking for advice on high-level matters, the relevant competence could be relationship management.

● Brainstorm your existing competencies – the things people think the firm is good at, and the things you are better at than your competitors. Screen each of these against the tests of relevance, difficulty of imitation and breadth of application.

● Now put the two lists together, and ask yourself where there is overlap between the really challenging or important things your customers need and what your firm is really good at. The points of overlap are, in essence, your core competencies.

● In many cases, the overlap between the two lists is far from perfect, which opens up a number of supplementary questions. If you have no core competencies, look at the ones you could develop, and work to build them. Alternatively, if you have no core competencies and it doesn't look as if you can build any that customers would value, consider other ways of creating uniqueness in the market, perhaps through clever positioning.
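As a minimal illustration of the screening step, the Python sketch below filters a list of candidate competencies against the three tests. The candidates and the pass/fail judgments are invented; in practice these booleans come out of the discussion itself, not a computation.

# A minimal sketch of the screening step: keep only candidate
# competencies that pass all three tests. The entries below are
# illustrative judgments, not computed facts.
candidates = {
    # name: (relevant to customer needs, hard to imitate, broad application)
    "precision engineering": (True, True, True),
    "low-cost distribution": (True, False, True),
    "annual golf day":       (False, False, False),
}

core_competencies = [name for name, tests in candidates.items() if all(tests)]
print("Core competencies:", core_competencies or "none - consider building one")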

Top practical tip

The definition of a core competence is very exclusive. In other words, if you apply the criteria very strictly, most firms do not end up with any core competencies at all. So the framework provided here should typically be applied fairly loosely – it is useful to consider the various criteria around value, rarity and inimitability, but only as a way of thinking through how a competitor might attempt to beat you, or how you might sharpen up your own competitive position, rather than as an end in itself.



Top pitfall

The biggest risk with core-competence analysis is that the exercise becomes highly internally focused. It is often quite interesting to debate what you are good at, because everyone has a view. But it often devolves into a very negative conversation about what goes wrong, with finger-pointing between departments. This is why a core-competence discussion should always go back and forth between what your firm is good at and what customer needs you are attempting to satisfy.

Further reading

Barney, J.B. (1991) 'Firm resources and sustained competitive advantage', Journal of Management, 17(1): 99–120.

Barney, J.B. and Hesterley, W. (2005) Strategic Management and Competitive Advantage. Upper Saddle River, NJ: Prentice Hall.

Prahalad, C.K. and Hamel, G. (1990) 'The core competence of the corporation', Harvard Business Review, 68(3): 79–91.



24 Corporate social responsibility: the triple bottom line

Corporate social responsibility (CSR) is a form of self-regulation that many firms have taken on, as a way of monitoring and ensuring their active compliance with the spirit of the law, ethical standards and international norms. One well-known approach to making CSR practical is to think about your ‘triple bottom line’ – making money for your shareholders, while also measuring the contribution you make to the communities you operate in, and to the sustainability of the planet’s scarce resources.

When to use it

● To address the broad responsibilities your firm has to all its stakeholders.

● To think through the costs you incur in using natural resources, and to account for these costs in an effective way.

● To identify opportunities for creating 'shared value' between your firm and certain stakeholders.

Origins

The relationship between business and society has concerned economic philosophers for centuries. Karl Marx, for example, believed that profit making by businesses was inherently unfair, because it involved shareholders making money at the expense of workers. Thomas Malthus, in a very different way, was concerned about industrialisation and growth because he believed the planet's finite resources would not be able to keep up with the demand for them.

Discussions about the role of business in society gathered momentum in the post-war years. In a famous contribution to the debate, Nobel laureate Milton Friedman argued that society would be best served if businesses sought to make as much money as possible while staying within the law. Many others countered that business should be more proactive, following the spirit of the law and seeking to help all stakeholders in its activities. The term 'corporate social responsibility' was first used in the 1960s, and during the 1980s and 1990s it became an important part of the strategic debate for most large firms. A number of related terms also emerged over the years, including 'corporate citizenship', 'sustainability', 'stakeholder management', 'responsible business' and 'shared value'. Specific tools have also been introduced, for example the 'triple bottom line' proposed by John Elkington in 1994.

Today, every large firm gives considerable thought to the CSR agenda, partly on its own initiative and partly because of the pressure it is under from local community groups and from non-governmental organisations such as Greenpeace or Friends of the Earth.

What it is

Corporate social responsibility (CSR) is a very broad term that covers a range of different ways of acting. At its heart it is a form of self-regulation: it involves complying with a set of behaviours and standards that go beyond what is currently required by law. As a general rule, larger firms tend to be more active in CSR initiatives, because of the amount of resources they consume, the number of people whose lives they touch and the extent to which they are in the public spotlight. There are also big differences from country to country: for example, some commentators have identified a difference between the Continental European and Anglo-Saxon approaches to CSR.

A number of different approaches have been identified. One is CSR as a form of 'corporate philanthropy', which includes providing donations and aid for non-profit organisations and communities. While such donations are appreciated by the communities in question, this approach has little operational or strategic impact, and it often means that the firm in question does not actually engage with the real challenges of sustainability.

A second approach is to see CSR as 'risk management', which involves investing in local communities or partner organisations in such a way that their needs are directly taken into account. For example, many resource companies (oil and gas, mining) put large amounts of investment into training and supporting the communities where they operate. This has a significant operational and strategic impact, but critics argue that such firms invest only up to the point where they gain their 'licence to operate'.

A third approach is CSR as 'creating shared value', which is based on the notion that corporate success and social welfare are interdependent. Business needs a healthy, educated workforce, sustainable resources and adept government to compete effectively, so over the long term, it is argued, there is no trade-off between what is good for society as a whole and what is good for business.

While conceptually distinct, these three approaches cannot always be separated in reality. Many firms argue that they are pursuing the creation of shared value, for example, while their critics argue that they are adopting a risk-management approach, doing just enough to get their licence to operate.


How to use it

One way of turning these discussions about corporate social responsibility into reality is to develop effective ways of measuring the investments made by businesses and their consequences for different stakeholders. The term 'social accounting' covers the multitude of ways that accounting systems can be adapted to embrace CSR principles. One specific example is the triple bottom line, an accounting framework with three dimensions: social, environmental (or ecological) and financial. (The three dimensions are also commonly called the three Ps: people, planet and profit.) Interest in triple-bottom-line accounting has grown over the years, and many organisations have adopted some version of this framework for evaluating their performance.

In practical terms, the triple bottom line adds social and environmental bottom lines alongside the traditional financial bottom line. Standard approaches for accounting for social and environmental costs are starting to emerge, so that it is now possible to quantify all three bottom lines at the same time. A further education college, for example, might keep track of the number of disadvantaged citizens it finds jobs for, and its carbon footprint, alongside its more traditional financial metrics.
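As a minimal illustration of what a three-Ps report might track, here is a Python sketch built around the further education college example. The metric names and figures are invented; real social and environmental accounting standards define their own measures.

# A minimal sketch of a triple-bottom-line record. Metric names and
# figures are illustrative assumptions, not a reporting standard.
from dataclasses import dataclass

@dataclass
class TripleBottomLine:
    profit_gbp: float         # financial: the traditional bottom line
    people_jobs_found: int    # social: disadvantaged citizens placed in jobs
    planet_tonnes_co2: float  # environmental: carbon footprint

    def report(self):
        return (f"Profit: £{self.profit_gbp:,.0f} | "
                f"Jobs found: {self.people_jobs_found} | "
                f"CO2: {self.planet_tonnes_co2:.0f} t")

print(TripleBottomLine(250_000, 120, 340.0).report())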

Top practical tip

If you work for a firm that is concerned about enhancing its social responsibility, you need to spend some time getting up to speed with all the latest thinking, because this is a rapidly-evolving area. The triple bottom line is a useful framework, but there are many alternatives, and you need to ensure that the approach you take is one that the stakeholders in your business recognise and buy into.

You also need to take stock of the existing initiatives that are underway, because today every firm (even small ones) has some activities geared towards, say, reducing its carbon footprint, recycling rubbish, or supporting local community interests. Once you know what your firm is currently doing, you can then start thinking about your actual CSR strategy. This involves identifying a handful of initiatives where you have the biggest opportunity to make a difference – ideally in terms of creating shared value, as this is the approach that has the greatest long-term potential.



Top pitfall

There is a lot of talk about 'greenwashing' in the business press. This refers to large firms creating initiatives that give the impression of caring about CSR, while their underlying activities don't actually change very much. If your firm takes sustainability issues seriously, it is therefore important to provide plenty of evidence for how much money you are putting into these initiatives and in what ways, so that your efforts are taken seriously by outside observers.

Further reading

Bhattacharya, C.B., Sen, S. and Korschun, D. (2011) Leveraging Corporate Social Responsibility: The stakeholder route to business and social value. Cambridge, UK: Cambridge University Press.

Elkington, J. (1997) Cannibals with Forks: The triple bottom line of twenty-first century business. Oxford, UK: Capstone.

Friedman, M. (1970) 'The social responsibility of business is to increase its profits', The New York Times Magazine, 13 September.

Porter, M.E. and Kramer, M.R. (2006) 'Strategy and society: The link between competitive advantage and corporate social responsibility', Harvard Business Review, 84(12): 42–56.



25 Corporate strategy: parenting advantage

For firms with multiple business units, the ‘corporate’ strategy at the centre involves deciding which businesses to be in, and how best to add value to those businesses. Parenting advantage is a way of making sense of the portfolio of businesses within such a firm, and identifying which businesses to focus on.

When to use it

● To understand how the portfolio of businesses within a single corporation fits together.

● To identify opportunities for creating greater synergies between businesses.

● To identify opportunities for breaking up the corporation or spinning off certain businesses.

Origins

Academics and business people have made a conceptual split between business-unit strategy and corporate-level strategy for many years. For example, back in the 1930s, General Motors was one of the first companies to create separate business units under a single corporate structure. Most strategy research in the 1980s focused on business-level strategy, thanks largely to Michael Porter's influential ideas. Corporate strategy never attracted quite so much interest, but beginning in the late 1980s there were several important studies, one from Michael Porter himself, looking at the different ways the parent company can add value to its businesses. A stream of books and articles by Andrew Campbell, Michael Goold and their colleagues at Ashridge Strategic Management Centre were highly influential, focusing on the specific ways that the centre (the 'corporate parent') can best add value to the businesses within a firm.


Research by David Collis and Cynthia Montgomery at Harvard Business School was also important in shaping thinking about how the firm creates value that is greater than the sum of its parts.

What it is

Corporate strategy is the sum of the choices made by the decision makers at the centre of a multi-business firm. The executives running each individual business are responsible for defining their competitive strategy: what markets should they compete in, and what position should they choose in those markets? The executives at the corporate centre are responsible for defining the corporate strategy: what portfolio of businesses should the firm be in, and how can the centre add value to those separate businesses?

Most large firms are in more than one business, so corporate strategy is an important issue. However, there are many types of multi-business companies. Some are 'related diversified' businesses, such as Unilever, where several different businesses operate in similar markets with similar underlying skills. Some are 'unrelated diversified' businesses, such as General Electric, where the businesses are completely different – from healthcare to financial services to aircraft engines. There are also 'holding companies', such as Warren Buffett's Berkshire Hathaway, where the businesses are completely different and no attempt is made to create synergies between them. Most of the thinking on corporate strategy applies to related diversified and unrelated diversified companies, and the most sophisticated way of thinking about it, developed by Andrew Campbell, Michael Goold and Marcus Alexander, is the notion of parenting advantage.

The concept of parenting advantage focuses on the distinctive value that a parent company can provide to the business units in its portfolio. Every corporate parent adds some value to its businesses – for example, lower costs of borrowing or access to a corporate brand. However, every corporate parent also destroys some value, perhaps because it meddles too much in the details of the businesses, or simply through the cost of employing people in the corporate HQ. A company has a parenting advantage when the value it adds to a particular business, minus the value it destroys, is higher than what would be achieved by a different corporate parent. For example, when Microsoft decided to buy Skype a few years ago, it was making a bet that it could be more helpful to Skype as a corporate parent than, say, Google or Facebook. To the extent that Microsoft has been able to help Skype become more successful and to integrate Skype's offerings with its existing communication offerings, it has a parenting advantage.



How to use it

The concept of parenting advantage can be made operational by considering two important dimensions of the relationship between the businesses in the firm and the corporate parent. The first dimension is the 'value creation' opportunity the parent has with each of its businesses. For example, the top executives at General Electric believe they can create value by transferring knowledge across businesses, especially through the development and movement of highly-skilled general managers. In contrast, 3M Corporation shares resources very effectively – it has a portfolio of technologies held by the firm as a whole, which get combined and leveraged to create many different products. The second dimension is the potential for value destruction, typically because the parent doesn't fully understand the business in question, or because the critical success factors needed to make it work are fundamentally at odds with the parenting characteristics.

[Figure: The parenting matrix. Horizontal axis: fit of the business improvement opportunities with the parent's value-creation insights (low to high); vertical axis: fit of the business's critical success factors with the parenting characteristics (low to high). High on both dimensions is heartland, bordered by edge of heartland; high success-factor fit with low improvement opportunities is ballast; high improvement opportunities with low success-factor fit is a value trap; low on both is alien territory.]

Source: Dobson, P.W., Starkey, K. and Richards, J. (2009) Strategic Management: Issues and cases, 2nd edn. New York: John Wiley & Sons. Reproduced with permission.

With these dimensions in mind, you can draw a matrix that positions each business in terms of its improvement opportunities (horizontal dimension) and the potential for the parent to destroy value through poorly fitting critical success factors (vertical dimension). The heartland businesses are the core of the firm – they are in areas that the parent company executives understand well, and they typically benefit significantly from being part of the corporation as a whole. Businesses close to this category are called edge-of-heartland businesses.

PA RT THREE : S tr ategy and org an isation

At the other extreme, alien territory businesses are poorly understood by the parent and run the risk of being mismanaged or neglected. They should typically be sold off as soon as possible.

Ballast businesses are those the parent understands well, but where there are few opportunities to add value, given the rest of the portfolio. For example, a traditional core business (such as IBM's old mainframe business as it moved into software and services) is often ballast: it is large and profitable, but the opportunities for additional value added are not clear. A bit of ballast can be a good thing, as it provides stability and weight to the corporation, but it also slows things down. Sooner or later, ballast businesses are typically sold off – IBM eventually sold its PC business, for example.

Value traps are the opposite of ballast businesses. In such cases, the parent has an insight about how they can be improved, but there are also elements of misfit that are likely to lead to value destruction. For example, when GE bought the investment bank Kidder Peabody there was a logic to the deal, because it gave GE access to banking capabilities while the bank benefited from GE's low cost of capital. But Kidder Peabody's highly egotistical and bonus-driven culture was at odds with the GE culture, and a rogue trader ended up losing the company many hundreds of millions of dollars. GE wisely sold the company. More generally, unless a parent can find ways to align its own characteristics more closely with those of its value-trap businesses, it is usually best to divest them.
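To make the quadrant logic concrete, here is a minimal Python sketch. The fit scores are managerial judgments expressed on a 0–1 scale, and the 0.5 thresholds (plus the narrow edge-of-heartland band) are illustrative assumptions, not part of the published framework.

# A minimal sketch classifying businesses on the parenting matrix.
# Fit scores, thresholds and the edge-of-heartland band are
# illustrative assumptions.
def parenting_quadrant(opportunity_fit, success_factor_fit):
    """opportunity_fit: fit of improvement opportunities with the parent's
    value-creation insights; success_factor_fit: fit of the business's
    critical success factors with the parenting characteristics."""
    if success_factor_fit >= 0.5 and opportunity_fit >= 0.5:
        if min(success_factor_fit, opportunity_fit) < 0.6:
            return "Edge of heartland"   # near the boundary
        return "Heartland"
    if success_factor_fit >= 0.5:
        return "Ballast"                 # well understood, little to add
    if opportunity_fit >= 0.5:
        return "Value trap"              # insight, but fundamental misfit
    return "Alien territory"

# Hypothetical scores loosely echoing the examples in the text
for name, opp, fit in [("Mainframes", 0.2, 0.8), ("Kidder Peabody", 0.7, 0.2)]:
    print(f"{name}: {parenting_quadrant(opp, fit)}")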

Top practical tip

The parenting-advantage framework is a very useful way of mapping out the different businesses in a multi-business firm, because it allows you to think carefully about how the parent both creates and destroys value in its businesses. It is significantly more useful than the better-known BCG growth-share matrix, which simply looks at the attractiveness and performance of the various businesses, without any concern for how the parent might add value to them.

Top pitfall

The parenting-advantage framework allows you to develop a point of view about how the corporate centre can become a better 'parent' by sharing resources and transferring knowledge across businesses. However, it is important to realise that many synergy programmes are actually mirages – they look promising from a distance, but when you get into the details you realise that pursuing the promised benefits is often more trouble than it is worth. A good parent company is therefore very selective about when and how it seeks to add value to the businesses in its portfolio.



Further reading

Campbell, A., Goold, M. and Alexander, M. (1995) 'Corporate strategy: The quest for parenting advantage', Harvard Business Review, 73(2): 120–132.

Collis, D.J. and Montgomery, C.A. (1997) Corporate Strategy: Resources and the scope of the firm. Burr Ridge, IL: Irwin.

Porter, M.E. (1987) 'From competitive advantage to corporate strategy', Harvard Business Review, 65(3): 43–59.



26 Five forces analysis

This is a way of analysing the attractiveness of an industry. In some industries, such as pharmaceuticals, profit margins are typically very high, while in other industries, such as retailing, profit margins are consistently lower. Five forces analysis explains why this is the case – and what a firm can do to make its industry more profitable.

When to use it

● To understand the average profitability level of an existing industry.

● To identify opportunities for your firm to become more profitable.

Origins

Michael Porter was a junior faculty member at Harvard Business School in the mid-1970s, with a background in the micro-economic theories of 'industrial organisation'. That body of literature was mostly concerned with how to prevent firms from making too much money (for example, by preventing them from gaining monopoly power). Porter realised that he could reframe those ideas as a way of understanding why firms in some industries were so consistently profitable. He wrote a number of academic papers showing how industrial organisation and strategic thinking could be reconciled. He also popularised his ideas through a classic article in the Harvard Business Review in 1979, called 'How competitive forces shape strategy'. His book on the same subject, Competitive Strategy, was published the following year.

The five forces was the first rigorous framework for analysing the immediate industry in which a firm competes. Before that, most managers had used 'SWOT' analysis (strengths, weaknesses, opportunities, threats), which is a useful but unstructured way of listing the issues facing a firm.


What it is

Five forces analysis is a framework for analysing the level of competition within and around an industry, and thus its overall 'attractiveness' to the firms competing in it. Collectively, the five forces make up the micro-environment of forces close to the firm, in contrast to the macro-environment (such as geopolitical trends) that typically affects an industry indirectly or more slowly. The five forces are defined as follows:

1 Threat of new entrants: Profitable markets attract new entrants, which in turn decreases the level of profitability. However, it is not always possible for firms to enter profitable markets, perhaps because of the financial or technological hurdles that have to be crossed, or for regulatory reasons. It is therefore useful to think in terms of the barriers to entry that make it hard for new competitors to enter an industry. Some industries (such as pharmaceuticals and semiconductors) have very high barriers to entry; others (including retailing and food products) have relatively low barriers to entry.

2 Threat of substitute products or services: The existence of products outside the immediate market but serving similar needs makes it easier for customers to switch to alternatives, and keeps a lid on the profitability of the market. For example, bottled water might be considered a substitute for Coke, whereas Pepsi is a competitor's similar product. The more successful bottled water is, the tougher it is for Coke and Pepsi to make money.

3 Bargaining power of customers (buyers): The price customers pay for a product is heavily influenced by the strength of their negotiating position. For example, if Wal-Mart or Tesco decides to put your new line of spaghetti sauces in their supermarkets, you have to accept whatever price they will pay (as long as you aren't actually losing money on the deal). There are many potential sources of customer bargaining power, such as how large customers are, how important they are and how easily they can switch between suppliers.

4 Bargaining power of suppliers: This is the mirror image of the previous force. Some suppliers provide components or services that are so important to you that they can charge extremely high prices. If you are making biscuits and there is only one person who sells flour, you have no alternative but to buy it from them. Intel is famous for its bargaining power in the PC/laptop industry – by persuading customers that its microprocessor ('Intel Inside') was the best in the market, it dramatically increased its bargaining power with Dell, Lenovo and HP.

5 Intensity of competitive rivalry: This refers to how competitors in the industry interact with one another. Of course, most firms believe they face very 'intense' competitive rivalry, but the truth is that the nature of rivalry varies dramatically from industry to industry. For example, the 'big four' accounting firms and many retail banks compete, for the most part, in a very gentlemanly way, by emphasising brand and service and by avoiding price competition. On the other hand, the airline industry is well known for its cut-throat pricing and for its flamboyant personalities who make personal attacks on each other.

Taken together, these forces provide a comprehensive view of the attractiveness of an industry, measured in terms of the average level of profitability of the firms within it.

[Figure: The five forces. Rivalry among existing competitors sits at the centre, acted on by the threat of new entrants, the bargaining power of customers, the bargaining power of suppliers and the threat of substitutes.]

Source: Porter, M.E. (1979) 'How competitive forces shape strategy', Harvard Business Review, March/April: 21–38. Copyright © 1979 by the Harvard Business School Publishing Corporation, all rights reserved. Reprinted by permission of Harvard Business Review.

How to use it

The five forces framework can be used in two ways: first, as a way of describing the current situation in your industry; second, as a way of stimulating ideas about how to improve your firm's competitive position.

As a descriptive tool, it is useful to go through each of the forces in turn and assess how powerful it is in your current situation. Michael Porter's book, Competitive Strategy, provides a detailed checklist of items to think about when doing this analysis. The assessment can be summarised, for example, by using a '+' sign for a force moderately in your favour, or a '–' for a force strongly against you. This analysis should help you understand why your industry is relatively attractive or relatively unattractive. It also pinpoints which forces represent the biggest threat. For example, doing this analysis for the pharmaceutical industry in Europe, one would likely conclude that the bargaining power of buyers, and specifically governments, is the biggest single threat.
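A spreadsheet or a few lines of code can hold this scoring. The Python sketch below uses a -2 to +2 scale and invented scores for a hypothetical European pharmaceutical firm; both the scale and the scores are illustrative assumptions.

# A minimal sketch of the descriptive step: score each force from
# -2 (strongly against you) to +2 (strongly in your favour).
# The scores below are invented for a hypothetical pharma firm.
forces = {
    "Threat of new entrants":        +2,   # high entry barriers
    "Threat of substitutes":         +1,
    "Bargaining power of customers": -2,   # government buyers
    "Bargaining power of suppliers":  0,
    "Competitive rivalry":           -1,
}

for force, score in sorted(forces.items(), key=lambda kv: kv[1]):
    print(f"{score:+d}  {force}")

threats = [f for f, s in forces.items() if s < 0]
print("Focus strategy discussion on:", ", ".join(threats))

Sorting the forces puts the biggest threats at the top, which is where the second, prescriptive part of the analysis should concentrate.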



The second part of using the five forces is to use the analysis to brainstorm ways of improving your competitive position – so that you can ‘push back’ on the strongest forces affecting you. For example, in pharmaceuticals, if the bargaining power of government buyers is very high, one way of pushing back is to increase your own bargaining power – either by coming up with a really exciting drug that consumers start clamouring for, or by becoming bigger and more powerful yourself (witness, for example, Pfizer’s attempts in 2014 to buy AstraZeneca).

Top practical tip

First, make sure your analysis of the forces affecting your industry is accurate. One way to do this is to work out the average profitability of the firms in your industry and compare it to the economy-wide average (for example, the average return on invested capital in US industries in the period 2000–2008 was 12.4 per cent). If your industry is more profitable than the average, one would expect to see a relatively favourable set of forces.

Second, it makes sense to focus your effort on the one or two forces (out of five) that are the most critical to your future profitability. Your analysis might lead you to conclude, say, that the bargaining power of customers and internal competitive rivalry are the biggest threats. Your subsequent strategy discussions should then focus on those topics.

Top pitfalls

The biggest pitfall analysts fall into is thinking that they have done their job by describing the five forces. In reality, this analysis is just the starting point – it provides an understanding of the current situation, but it doesn't tell you anything about what you should do differently.

The other pitfall is defining the boundaries of the industry incorrectly. You need to give a lot of thought to which competitors are your immediate competitors. For example, this often means focusing on a specific country. If you work in banking, the relevant industry might be 'retail banking in France' or 'commercial banking in Canada', rather than just 'banking' as a whole.

Further reading

Porter, M.E. (1979) 'How competitive forces shape strategy', Harvard Business Review, March–April: 21–38.

Porter, M.E. (1980) Competitive Strategy. New York: Free Press.


27 Game theory: the prisoner's dilemma

Firms don’t always think directly about what is right for their customers; sometimes they also think strategically about how their competitors will behave, and adapt their own behaviour accordingly. Game theory provides a set of models, such as the ‘prisoner’s dilemma’, which help a firm plan its strategy, taking all such factors into account.

When to use it

● To select a course of action in a highly competitive situation.

● To understand the likely moves of your competitors.

● To help you in negotiations.

Origins

Game theory has deep academic roots. It emerged from the field of mathematics, and its invention is typically attributed to John von Neumann, who wrote an initial paper on the subject in 1928 and then a book, Theory of Games and Economic Behaviour, in 1944 (co-authored with Oskar Morgenstern). The term 'game' here refers to any competitive scenario where two or more players are trying to make decisions that interact with each other.

While there are many subfields within game theory, the focus here is on one well-known type of game, the 'prisoner's dilemma'. The first mathematical discussion of this game was developed in 1950 by Merrill Flood and Melvin Dresher at the RAND Corporation in the USA. Because it involved two players trying to predict how each other might behave, its application to Cold War politics, and the use of nuclear weapons, was obvious. Research in this area proliferated through the 1950s, with many different types of games being analysed. A key concept developed during this period was the notion of a 'Nash equilibrium' – the outcome of a game where neither party has an incentive to change their decision.

Gradually, the concepts of game theory were brought into the business world and applied in a variety of areas. For example, game theory is very useful for understanding how firms develop strategies in oligopolies – that is, where a small number of competitors watch each other carefully. It is also very useful for understanding negotiations between individuals and between businesses.

What it is

Game theory is a way of understanding decision making in situations where multiple participants have competing or conflicting objectives. For example, if your firm is competing head to head with another firm, the price you charge isn't affected only by what customers will pay; it is also affected by your competitor's price. In such a situation, you need to be 'strategic' in your thinking, which means getting inside the head of your competitor and acting in a way that takes its likely decision into account.

The most famous model in game theory is the 'prisoner's dilemma'. Consider the situation where two suspected felons are caught by the police and interrogated in separate rooms. They are each told: (a) if you both confess, you will each go to jail for ten years; (b) if only one of you confesses, he gets one year in jail and the other gets 25 years; and (c) if neither of you confesses, you each get three years in jail. This game is deliberately set up to expose how tricky 'games' of this type can be. The optimal outcome is for neither prisoner to confess, so that each gets three years. But because they cannot communicate with each other, each prisoner's best individual decision is to confess, and the net result (assuming they both do this) is that they each get ten years. In other words, maximising individual outcomes does not necessarily aggregate up to the optimal welfare for the group.

All sorts of extensions and modifications to this model are possible. For example, many real-life 'games' are actually repeated interactions between parties, so that what you do in one game influences what you do in the next. Some games involve sequential moves rather than simultaneous moves. Some games are 'zero-sum', meaning there is a fixed amount of value to be divided up, while others are 'non-zero-sum', meaning that by acting cooperatively the players can increase the amount of value.

How to use it

The mathematics involved in game theory is challenging, and most people in the business world don't have the time or the inclination to understand the details. However, it is possible to draw some simple rules of thumb from the prisoner's dilemma analysis.

First, try to figure out all the choices you can make and all the choices the other party can make. In the prisoner's dilemma, each party has two choices (confess or not confess) and the consequences of each choice are very clear. In the real world, you have to make some calculated guesses. For example, you could choose a 'high' or a 'low' price for your new product, your competitor can do likewise, and you can estimate the likely market share and profits that follow from each scenario. This is called a 'pay-off matrix'.

Next, look at the analysis to see if there is a 'dominant strategy' for your firm – defined as one with pay-offs such that, regardless of the choices of other parties, no other strategy would result in a higher pay-off. Clearly, if you have such a dominant strategy, you should use it. You can also see if there are any 'dominated strategies', which are clearly worse than others regardless of what other parties do. These can be eliminated.

These steps often clarify the choices you face. For example, if you eliminate one dominated strategy, it sometimes becomes clear that you have a dominant strategy that should be the right way forward. You keep iterating in this way until a dominant strategy emerges, or the game cannot be simplified any further. In the latter case, you then have to make a judgment based on whatever other factors you think might be important.
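Here is a minimal Python sketch of these two steps for the prisoner's dilemma itself: a pay-off matrix, and a check for a dominant strategy. Pay-offs are years in jail, so lower is better, and 'silent' stands for 'not confess'; the encoding is one possible representation, not a standard library.

# A minimal sketch: the prisoner's dilemma pay-off matrix and a
# check for each player's dominant strategy (pay-offs are years
# in jail, so lower is better).
# payoffs[(my_choice, their_choice)] = (my_years, their_years)
payoffs = {
    ("confess", "confess"): (10, 10),
    ("confess", "silent"):  (1, 25),
    ("silent",  "confess"): (25, 1),
    ("silent",  "silent"):  (3, 3),
}
choices = ["confess", "silent"]

def dominant_strategy(payoffs, choices):
    """Return my choice that is at least as good whatever the other
    party does, or None if no such choice exists."""
    for mine in choices:
        others = [c for c in choices if c != mine]
        if all(
            payoffs[(mine, theirs)][0] <= payoffs[(other, theirs)][0]
            for theirs in choices for other in others
        ):
            return mine
    return None  # no dominant strategy: judgment needed

print("Dominant strategy:", dominant_strategy(payoffs, choices))
# Prints 'confess', even though mutual silence (3, 3) is jointly better.

Swapping in a high-price/low-price pay-off matrix (with profits instead of jail years, so the comparison flips to 'higher is better') gives the back-of-the-envelope version described in the practical tip below.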

Top practical tip

Game theory has become very popular because the principles are simple and the applications are far-reaching. For example, you may find yourself competing with certain colleagues for a promotion opportunity, which involves both cooperating with them and competing at the same time. You may have to decide whether to be the first-mover in launching a new product (and taking a risk on it flopping) or being the fast-follower. You may find yourself bidding for a government contract against a couple of cut-throat competitors, and having to make tricky choices about what price or what service to offer.

In all cases like these, the principles of game theory are useful as a way of thinking through how your choice and that of your competitors interact with each other. It is rarely necessary to do the formal mathematical analysis – a simple back-of-the-envelope pay-off matrix, and a search for dominant and dominated strategies, is typically all you need.

Top pitfall

One important point to keep in mind is how sophisticated the other party is likely to be in its analysis. If you do your calculations assuming the other party is highly strategic, but they end up being very simple-minded, you can lose out. And the reverse is also true – don’t underestimate how smart your competitor is. A famous example of the latter scenario was when the UK government auctioned its third-generation mobile telephony licences in 2000. The auction was designed very carefully, with only four licences available and five likely bidders. It raised £22.5 billion, vastly more than expected, and the result for all four ‘winners’ was that they overpaid.

Further reading

Dixit, A.K. and Nalebuff, B. (1991) Thinking Strategically: The competitive edge in business, politics, and everyday life. New York: W.W. Norton & Company.

Von Neumann, J. and Morgenstern, O. (2007) Theory of Games and Economic Behavior (60th Anniversary Commemorative Edition). Princeton, NJ: Princeton University Press.



28: Generic strategies

Within a typical industry, some firms have a competitive advantage – they consistently make more money than others. Generic strategies are the basic choices firms make – low cost, differentiated, focused – that enable them to develop competitive advantage.

When to use it

● To choose how to position your firm or product in a market-place.

● To analyse your competitors’ strategies.

● To build an organisation and a set of capabilities to support your chosen position.

Origins

Michael Porter is the originator of the concept of generic strategies. There had been research on different strategies before (for example, Ray Miles and Charles Snow talked about ‘analysers’, ‘defenders’ and ‘prospectors’ in the late 1970s), but Porter defined the way we talk about strategies today through his 1980 book, Competitive Strategy. Building on his theories about industry structure, he argued that for a firm to defend itself against the ‘five forces’ that affect industry profitability, it should adopt one of three basic positions: differentiation, low cost or focus. Failure to choose one of these generic positions, he suggested, would leave a firm stuck in the middle and doomed to low levels of profitability.

Many researchers built on Porter’s ideas and showed that they were, in general, supported by the empirical evidence. However, Porter was also criticised – observers noted that some firms are both differentiated and low cost at the same time; Toyota cars in the 1970s and 1980s, for example, were both cheaper and higher quality than their American or European counterparts. Today it is generally accepted that Porter’s arguments are correct for mature industries in ‘steady state’, but that for industries in transition, where the rules of the game are unclear, there is scope for a firm to be differentiated and low cost at the same time.

Porter subsequently developed a whole set of ideas about how a firm should deliver on its generic strategy. In 1985 he introduced the notion of a ‘value chain’ of linked activities, all of which should be aligned around the chosen strategic position. In 1996 he introduced the notion of an activity system (the set of interlinked elements that the firm controls). The more tailored this system is to a particular strategy, the harder it is for any other firm to copy it in its entirety, and the greater the competitive advantage of the firm.

What it is

Generic strategies are the basic choices a firm makes about how it competes in a particular market, so that it can generate competitive advantage. There are three generic strategies: low cost, differentiation and focus. Porter argues that a firm must choose only one of the three, or run the risk of wasting precious resources and confusing its customers and employees.

In any market there are multiple potential segments that you can target, but to keep things simple Porter distinguishes between a ‘focus’ strategy based on one segment and a ‘broad-based’ strategy covering multiple segments. There is also the basic choice about how you position yourself against competitors. Put these two dimensions together, and you end up with three generic strategies:

● Offering the lowest price to customers in multiple segments is a ‘cost-leadership strategy’.

● Targeting customers in multiple segments based on attributes other than price (e.g. higher quality or service) is a ‘differentiation strategy’.

● Concentrating on one or a few segments is a ‘focus strategy’. You can further separate this out into a cost-focus or a differentiation-focus strategy.

According to Porter, you need to choose one of these strategies and stick with it. Those companies that fail to choose, and end up attempting to be all things to all people, end up stuck in the middle. There are plenty of cases of firms getting stuck in the middle – for example General Motors and Volkswagen were much less profitable than Toyota (low cost) or Mercedes (differentiated) during the 1980s and 1990s, and more recently Nokia in phones and Tesco in the supermarket industry have fallen into this trap. While these basic choices seem obvious today, it is worth noting that they represented quite a departure from the prevailing view at the time they were invented. At that point, most firms were focusing on size and scale (that is, market share) as a way of pushing costs lower through the experience curve. Porter’s generic strategies helped to remind firms that being better and more focused than competitors was often just as effective as being lower cost.



How to use it

Generic strategies are designed to help you make the fundamental choices about where to compete and how to compete in a particular market. These choices involve, first, defining the attributes of the product or service you are offering and, second, building an internal set of activities and capabilities to deliver on that chosen offering. In terms of the attributes of the product or service you are offering, here are some basic tips:

● Cost leadership means reducing the overall costs of the product or service, which means looking into all the different elements of your cost structure and finding ways to reduce or eliminate them. For many years, Dell had clear cost leadership in the PC/laptop industry, because it eliminated many of the traditional costs (such as retailer mark-up) by selling direct to customers. Firms with cost leadership can either charge the same as others and make more profits as a result, or they can reduce their prices in order to gain market share and drive competitors out.

● Differentiation is about making your products or services more attractive than those of your competitors. If you are successful in this endeavour, your customers will pay a premium. There are many routes to a differentiation strategy: some firms emphasise R&D so they can innovate more effectively, others deliver high-quality service, and others sell their product on the basis of exclusivity and branding. In all these cases, additional money is invested in the hope that it will be more than recouped by the possibility of charging a premium.

● Focus is about concentrating on a particular niche; by understanding the particular needs of the customers in that niche better than anyone else, you are able to breed loyalty and charge more. Often, focus strategies are built on personal relationships or on providing a particular bundle of services that a more generalist competitor cannot afford to provide. The challenge with adopting a focus strategy is that niches are often small, which makes it very difficult to grow.

Top practical tip

The challenge many firms face is that they are seduced by opportunities for growth, whether or not those options are aligned with their chosen position. So once you have chosen a generic strategy, you need to find ways to reinforce it, otherwise you will find yourself on the route to being stuck in the middle. For example, Southwest Airlines is a famous low-cost airline in the USA. When faced with suggestions for new features on the flight (say, a free meal) or a new advertising campaign, executives will always push back and ask: ‘How does this proposal help us to get our passengers to their destination safely and on time at the lowest possible cost?’ Sometimes the extra expense can be justified, but sometimes it actually detracts from the chosen position.

Top pitfall

Generic strategies are a really useful analytical tool for understanding the basic choices competitor firms are making in a particular industry, and for giving you guidance about your own strategic choices. But they should still be used with care. As the notion of blue ocean strategy makes clear, generic strategies are choices within the existing ‘rules of the game’, but the best opportunities for making money occur when the rules of the game are in flux. So don’t make the mistake of assuming that there is an inevitable trade-off between, for example, cost leadership and differentiation. Always look for opportunities to challenge such assumptions.

Further reading

Miles, R.E. and Snow, C.C. (1978) Organizational Strategy, Structure and Process. Redwood City, CA: Stanford University Press.

Porter, M.E. (1980) Competitive Strategy. New York: Free Press.

Porter, M.E. (1996) ‘What is strategy?’, Harvard Business Review, November–December: 61–78.



29: The McKinsey 7S framework

The McKinsey 7S framework is a simple but powerful way of describing the key elements of a business organisation. There are seven key elements (strategy, structure, systems, shared values, staff, skills and style), and for the firm to be functioning effectively all seven have to work in a coherent and aligned way.

When to use it

● To improve the overall performance of a firm.

● To implement a chosen strategy.

● To diagnose the problems inside a firm that is struggling with change.

● To align people and activities following a major change.

Origins

The origins of the McKinsey 7S framework are well known. It was developed by Tom Peters, Robert Waterman, Anthony Athos and Richard Pascale through a series of meetings in the late 1970s. At that time, Peters and Waterman were consultants at McKinsey, while Athos and Pascale had academic positions.

In the late 1970s most of the thinking about organising was focused on the formal aspects of structure. The concept of matrix management, for example, had emerged in the mid-1970s as a way of helping a firm to address two very different sets of objectives. Tom Peters had been strongly influenced by behavioural scientists such as James March and Karl Weick while studying for his MBA at Stanford. This helped to shift the conversations with his colleagues towards some of the ‘softer’ aspects of how people get things done in organisations, and ultimately to the creation of the 7S framework.

Thanks partly to the power of alliteration, the framework was quickly picked up by McKinsey consultants around the world. It was also featured in a 1980 article by Waterman, Peters and Phillips, and in Peters and Waterman’s bestselling book, In Search of Excellence, in 1982.

The 7S framework was ground-breaking because it brought together all the different aspects of what makes a firm tick in a single diagram. Up to that point, most people had focused on some parts but not others; the 7S framework forced them to take a more holistic view. It continues to be used to this day.

What it is

The McKinsey 7S framework is a tool for understanding the internal situation of an organisation. There are three ‘hard’ elements (strategy, structure and systems) and four ‘soft’ elements (shared values, skills, staff and style). The framework is based on the principle that all seven elements are important, and that for the firm to perform well they need to be aligned and mutually reinforcing.

If a change is proposed – for example, a new strategy, a merger or a change of leadership – the framework can be used as a diagnostic tool (to figure out where the problems will lie) or as a way of implementing the proposed change (to focus the change effort around one specific element). For the change to work, all seven elements will have to be aligned to support it.

[Figure: the 7S framework – strategy, structure, systems, skills, style and staff arranged around ‘superordinate goals’ (shared values), with every element connected to every other.]

Source: Waterman, R.H., Peters, T.J. and Phillips, J.R. (1980) ‘Structure is not organization’, Business Horizons, 23(3): 14–26. Reproduced with permission.



How to use it

The 7S framework is a high-level orientating device – it helps you do an initial diagnosis of how a firm is working, and to identify key areas where there may be problems. This analysis then allows you to drill down into greater detail, depending on where the issues emerge. The seven elements can be defined as follows:

1 Strategy: The plan of action, devised by those at the top of the firm, to build and maintain competitive advantage.

2 Structure: The formal allocation of people into their respective areas of responsibility. The boxes and arrows on an organisation chart provide a graphical representation of structure.

3 Systems: The procedures and rules (sometimes fairly informal) that guide people in how they work on a day-to-day basis – for example, budgeting, performance management and hiring. Information technology plays a big part in making these systems work nowadays.

4 Shared values: The underlying beliefs people have about what constitutes appropriate behaviour in the firm. These were called ‘superordinate goals’ in the original version of the 7S framework. Often the word ‘culture’ is used here.

5 Style: How the leaders of the firm behave, and in particular how they relate to others in the firm. Style is closely related to shared values, but it refers especially to leadership behaviour.

6 Staff: The people employed in the firm, including such things as their attitudes and motivations.

7 Skills: The practical capabilities employees bring to their work.

For example, if you are a senior executive in a hotel chain and you are seeking to implement a new strategy geared around very high levels of service, you would use the 7S framework to understand how the other six Ss should be altered to fit with this new strategy. Do you need to hire new people with more service-orientated attitudes (staff)? Do you need to send your existing people on a training course (skills)? Perhaps you need to bring in some senior people from the Four Seasons or the Ritz-Carlton to set the right tone (style)? Or do you need a more effective IT system to give you better information about who your most important customers are (systems)?

It is worth pointing out that shared values sits in the middle of the framework because it is the hardest element to change and is linked to everything else. While you can think of any of the other six as ‘levers’ to pull, shared values change only gradually over time, as a function of how all the other elements fit together.
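To make the diagnostic use of the framework concrete, here is a minimal sketch in Python of the 7S elements used as a scoring checklist. The 1–5 scale, the threshold and the function names are our own illustration, not part of the framework itself:

```python
# Treat the seven elements as a checklist: score (1-5) how well each one
# currently supports a proposed change, then flag the weakest elements
# as the places to focus the change effort.
SEVEN_S = ["strategy", "structure", "systems",
           "shared values", "style", "staff", "skills"]

def misaligned(scores, threshold=3):
    """Return the elements whose support score falls below the threshold."""
    missing = set(SEVEN_S) - set(scores)
    if missing:
        raise ValueError(f"every element needs a score; missing: {missing}")
    return [s for s in SEVEN_S if scores[s] < threshold]

# Example: the hotel chain above, moving to a high-service strategy.
scores = {"strategy": 5, "structure": 4, "systems": 2,
          "shared values": 3, "style": 2, "staff": 3, "skills": 2}
print(misaligned(scores))  # -> ['systems', 'style', 'skills']
```

The output simply points at the elements to investigate first; as the practical tip below notes, the framework tells you where to look, not what to conclude.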



Top practical tip

The 7S is most usefully thought of as a framework or checklist, rather than as a model. In other words, it doesn’t allow you to draw any strong conclusions about what a firm should do differently, but it is very useful as a way of making sure you are looking in the right places.

It is also very useful as you are concluding a project. For example, if you have just put together an action plan for the priorities the firm needs to work on over the next six months, you can refer back to the 7S framework to make sure you are not missing anything important.

Top pitfall

When an organisation is going through some sort of change process, there is real value in keeping things focused and straightforward, to make sure employees understand what you are doing. So at any one time you are likely to be focusing on only one or two of the elements in the 7S framework: you might have a ‘values project’ or a ‘skills programme’ or an IT systems implementation. The pitfall would therefore be to try to do too much at the same time. Even if you need to make changes across the board, you need to phase those changes in gradually, rather than all at once.

Further reading

Pascale, R. and Athos, A. (1981) The Art of Japanese Management. New York: Simon & Schuster.

Peters, T. and Waterman, R. (1982) In Search of Excellence. New York: Harper & Row.

Waterman, R.H., Peters, T.J. and Phillips, J.R. (1980) ‘Structure is not organization’, Business Horizons, 23(3): 14–26.



[PART FOUR] Innovation and entrepreneurship


Many MBA programmes put a lot of emphasis on innovation and entrepreneurship. Innovation is about exploiting new ideas – coming up with business ideas, then developing them further so they become commercially viable. Entrepreneurship is a related concept, and it is defined as the pursuit of opportunities without regard to the resources you control – it typically applies to independent (start-up) businesses, but it can also be applied in the corporate setting. In this section we talk about eight of the most important models for helping innovators or entrepreneurs in their efforts.

Coming up with new business ideas is the first step in the process. A classic way of doing this is ‘brainstorming’, which is a group-based process for generating a lot of ideas around a common theme. More recently, it has become popular to apply ‘design thinking’ to the innovation challenge. This is an approach that blends intuitive and rational views of the world, and it puts a lot of emphasis on experimentation and prototyping.

There are also many sophisticated models for structuring the innovation process in large companies. The ‘stage/gate model for new product development’ is used as a way of sorting through the hundreds of new business ideas put forward in a company to decide which ones are worth investing in. ‘Scenario planning’ is a way of looking out to the future to identify new opportunity areas, and then using these insights to decide what new technologies, products or services the company should invest in.

For independent entrepreneurs, the ‘seven domains assessment model’ is a useful way of thinking through the nature of an opportunity, and also the skills and expectations of the founding team, before moving forward with a business plan. ‘Greiner’s growth model’ provides a way of thinking through the internal challenges faced by a business as it grows. Many small businesses fail because of internal disagreements about who is responsible for what, and the Greiner model is helpful for anticipating and resolving such disagreements.

Finally, we describe two very important conceptual models that apply equally to established and start-up companies. ‘Disruptive innovation’ is a way of understanding why some new technologies make it possible for start-ups to overthrow established leaders, as happens a lot in the world of digital media, while other new technologies help the existing leaders to stay ahead. ‘Open innovation’ is a way of understanding the increasingly networked approach to innovation we see today. Established companies, for example, no longer do all their research and development work themselves – they will often partner with start-ups, and they will often use ‘crowdsourcing’ techniques to tap into new ideas beyond their traditional boundaries.



30: Brainstorming

Where do new ideas come from? Sometimes they come out of nowhere – the ‘eureka’ moment – but usually they emerge through discussions between people. Brainstorming is the generic name given to any sort of structured interactive process for coming up with new ideas.

When to use it

● To help you or your team to come up with new ideas.

● To solve a particular business problem.

● To energise people in your firm so they contribute more.

Origins

The concept of brainstorming was developed by advertising executive Alex Osborn back in 1939. Frustrated by the inability of his employees to come up with creative solutions for ad campaigns, he began hosting group-thinking sessions to improve the quality of ideas. His books Your Creative Power (1948) and Applied Imagination (1953) summarised his methodology.

Since Osborn, the concept of brainstorming has been adapted in many different ways. For example, the design firm IDEO popularised the notion of a ‘deep dive’ in the 1990s, a brainstorming technique for developing new design ideas. More recently, IBM pioneered the notion of an ‘innovation jam’, an online methodology for brainstorming with many thousands of participants. There are also specific tools available to help with brainstorming. For example, creativity guru Edward de Bono came up with the ‘six thinking hats’ methodology, in which people take different roles in the conversation depending on which ‘hat’ they are wearing.


While brainstorming has been criticised in a number of ways, it is still accepted as the most effective way of generating lots of ideas from members of a team or organisation.

What it is

In its original form, brainstorming was devised as a lightly structured process to improve the quality and quantity of ideas coming from a group meeting. Many people in a work environment are somewhat inhibited – they may not like speaking up in a public setting, and they may be worried about what others think of their ideas. The biggest challenge, Alex Osborn observed, is that once an idea has been criticised it dies, and often the individual who proposed it then clams up and offers no more ideas. Brainstorming therefore works on four basic principles:

1 Focus on quantity: The purpose, at least in the early parts of the brainstorm session, is to come up with as many diverse ideas as possible.

2 Withhold criticism: Challenging other people’s ideas should be put ‘on hold’ during the brainstorm, and reserved for a later stage of the process.

3 Welcome unusual ideas: These can be generated by looking from new perspectives, or by providing props and stimuli that cause people to take a fresh point of view. Even if some ideas are completely implausible, they sometimes provide inspiration for other, more practical, ideas.

4 Combine and improve ideas: The best ideas often emerge through combination, so during a brainstorm people are encouraged to build on each other’s ideas and suggest new ways of combining them.

How to use it

While there are many sophisticated approaches to brainstorming available, the basic formula is straightforward and involves the following steps:

1 Define the purpose: The most effective brainstorm session addresses a specific challenge – for example, how to increase sales to a specific customer, or how to solve a technical challenge with a particular product. If the remit is too broad, people tend to bring all sorts of pet interests and irrelevant concerns to the discussion.

2 Prepare the group: Osborn originally envisioned about 12 participants, a mix of experts and novices. You can conduct a brainstorm with as few as 5–6 people. You can also use a much larger group, but in that case you need to split them into subgroups of 5–10 people. Obviously the choice of participants depends on the specific purpose of the brainstorm. However, it is clear that a room full of like-minded people won’t generate as many creative ideas as a diverse group.



In terms of location, an offsite event is usually better because it prevents people from dashing out to meetings. In terms of time, brainstorm sessions vary from two hours to two days. The brainstorm needs lots of space, whiteboards, flipcharts, sticky notes and pens. Sometimes it is also useful to have physical props available so people can make prototypes of their new product ideas.

A group facilitator is also needed – someone who is in charge of structuring the process, recording contributions, and so on. It shouldn’t necessarily be the team manager – you just want someone who is good at working with teams.

3 Guide the discussion: At the outset, the ‘ground rules’ are established so that everyone knows, for example, that they shouldn’t criticise each other’s ideas. The purpose of the brainstorm is explained, and the steps in the process are described. The details of how the process works will vary from case to case, but typically people will be asked to put some individual thought into the problem or challenge first, perhaps by writing their ideas on sticky notes, and then they will be asked to work in groups (or one large group), presenting their ideas and building on the ideas of others.

If you are the facilitator, your role includes encouraging everyone to contribute, and stopping any one individual from dominating. You can offer your own ideas as well, but do so sparingly. Push people to stick to one conversation at a time, and be conscious of dips in energy levels. Take plenty of breaks (one every 60–90 minutes) to help people retain their concentration.

Your most important task as the facilitator is to decide when to switch from a ‘divergent’ mode of discussion, where every idea gets recorded and built on, to a ‘convergent’ mode, where you start summarising, combining and evaluating ideas. Ultimately, the brainstorm needs to result in a small number of ideas that are taken forward for further development, so the final stage is always a convergent one. However, it is often useful to go through a couple of waves of convergent and divergent thinking during the brainstorm as a whole.

Top practical tip

Many people use the term ‘brainstorming’ in a highly informal way. You might hear someone say ‘let’s brainstorm about this’ as part of a formal meeting. There is nothing wrong with this as such, but the risk is that brainstorming gets a bad name if this approach fails to yield any interesting outcomes. So always keep the following rules in mind as the ‘minimum’ components of a brainstorm discussion:

● generate multiple ideas – don’t just follow one;

● don’t let anyone criticise ideas prematurely – otherwise the brainstorm grinds to a halt;

● be explicit about moving from a divergent discussion to a convergent one.



Top pitfalls

There are two classic mistakes with brainstorming. One is to define the problem in such general terms that you get lots of irrelevant ideas. For example, if you ask people for their views on ‘making a more effective workplace’ you will get suggestions about the food in the canteen and parking spaces. Make sure to narrow the question down sufficiently to get sensible answers.

The other mistake is to think that the problem is solved when the brainstorm is concluded. Unfortunately, this is actually when the hard work begins, as ideas need developing and implementing. So you should always preface a brainstorm by letting people know that there will be work to do afterwards.

Further reading

De Bono, E. (1999) Six Thinking Hats. London: Penguin.

Kelley, T. and Littman, J. (2002) The Art of Innovation: Lessons in creativity from IDEO. London: Profile Books.

Osborn, A.F. (1948) Your Creative Power. New York: Scribner.

Osborn, A.F. (1953) Applied Imagination: Principles and procedures of creative problem-solving. New York: Charles Scribner’s Sons.



31: Design thinking

Design thinking is an approach to innovation that blends traditional rational analysis with intuitive originality. Rather than focusing on developing clever new technologies, or on hoping that someone has a ‘eureka’ moment, design thinking is an approach that involves iterating between these two modes of thinking. It is characterised by experimentation and rapid prototyping, rather than careful strategic planning.

When to use it

● To understand how innovations emerge in a business setting.

● To develop new products and services.

● To create a more experimental and innovative culture in your firm.

Origins

The notion of design thinking has become extremely popular in the business world over the last decade. It has roots in two different bodies of work. One is the pioneering work done by Nobel laureate Herbert Simon on ‘artificial intelligence’. In his 1969 book, The Sciences of the Artificial, he wrote that ‘engineering, medicine, business, architecture and painting are concerned not with the necessary but with the contingent – not with how things are but how they might be – in short, with design’. The other is the world of industrial design and design engineering, in which designers sought to create buildings, town plans and products that blended form and function.

Design thinking was brought into the business world in the 1990s. IDEO, a California-based industrial design firm led by David Kelley, was one of the first proponents of the methodology, and Kelley went on to lead the ‘d.school’ (design school) at Stanford University. More recently, the idea has been formalised and popularised further through books by Tim Brown, current CEO of IDEO, and Roger Martin, former Dean of the Rotman School of Management.

Design thinking builds on many established management tools, such as brainstorming, user-focused innovation and rapid prototyping. It offers a methodology for bringing these various tools together.

What it is

Design thinking is an approach to innovation that matches people’s needs with what is technologically feasible and what is viable as a business strategy. It can be viewed as a solution-focused approach to innovation, in that it seeks to address an overall goal rather than solve a specific problem.

Design thinking differs from established ways of thinking in some important respects. The analytical scientific method, for example, begins by defining all the parameters of a problem in order to create a solution, whereas design thinking starts with a point of view on the possible solution. Critical thinking involves ‘breaking down’ ideas, while design thinking is about ‘building up’ ideas. Moreover, rather than using traditional inductive or deductive reasoning, design thinking is often associated with abductive reasoning – a way of hypothesising about what could be, rather than focusing on what is.

Design thinking employs a different methodology to traditional innovation approaches (as described below). It also requires a different type of individual. Design thinkers need to be:

● empathic – able to see the world through the eyes of others;

● optimistic – assuming that a better solution always exists;

● experimental – keen to try out new ideas, and prepared to see many of them fail;

● collaborative – happy to work with others without taking personal credit for the results.

How to use it

You can apply design thinking through a four-step process:

1 Define the problem: This sounds simple, but it usually requires quite a lot of work to get to a clear statement of the problem that needs addressing. For example, if you work for a university and you are getting feedback that the lectures are poor, you might conclude that the problem is (a) poor-quality lecturers, who need training, or (b) badly designed lecture rooms that need a refit. A design-led approach to this problem, however, would look at the bigger picture and ask what the purpose of the lectures is in the first place. This reorientates the analysis towards providing students with a high-quality education, which may involve fewer traditional lectures and, for example, more online learning or small-group tutorials.

To define the problem, you often have to suspend your views about what is needed, and instead pursue an ethnographic approach – for example, observing users of your products or services, and identifying the problems or issues they face. Another approach is to use relentless questioning, like a small child, asking ‘why?’ multiple times until the simple answers are behind you and the true issues are revealed.

2 Create and consider many options: Even talented teams fall into ingrained patterns of thinking, which often means jumping to solutions quite quickly. Design thinking forces you to avoid such shortcuts. No matter how obvious the solution may seem, many options need to be created for consideration. This might mean working in small competing teams, or deliberately building a highly diverse team.

3 Prototype, test and refine: Out of this process, you typically end up with a handful of promising options. These ideas should all be pushed forward as quickly as possible, often using crude prototyping methods so that people can see how the idea might work in practice. There are usually several iterations in this step, as you go back and forth between what is possible and what your users need. Sometimes this process reveals flaws in the original specification of the problem, in which case you have to go all the way back to the beginning.

4 Pick the winner and execute: At this point, you should be sufficiently confident that the idea works to commit the significant resources needed to execute it. You should also have established, at this stage, that the idea is commercially viable and technologically feasible.

Top practical tip

Design thinking is a way of looking at the world that is subtly different to the traditional approach. The methodology described above does not sound radically different to what people are used to, so you have to work very hard to remind participants in a design-led project what the points of difference really are. This means, first, spending a lot of time getting the problem definition correct and, second, being prepared to go through multiple iterations in coming up with a solution.



Top pitfall

Sometimes a design-led approach to innovation leads to elegant ‘designs’ that are well received by users and technologically feasible, but that fail the test of commercial viability. These are the most difficult cases to deal with. Sometimes it is possible to redesign them sufficiently that they become commercially viable; if not, you must drop them.

Further reading

Brown, T. (2014) Change by Design. New York: HarperCollins.

Martin, R.L. (2009) The Design of Business: Why design thinking is the next competitive advantage. Boston, MA: Harvard Business Press.

McKim, R.H. (1973) Experiences in Visual Thinking. Pacific Grove, CA: Brooks/Cole Publishing.

Simon, H.A. (1969) The Sciences of the Artificial. Cambridge, MA: MIT Press.



32: Disruptive innovation

Innovation is the engine of change in most industries. But there are some industries where innovation hurts the existing leaders (for example, in the case of digital imaging and Kodak), and there are others where innovation helps the existing leaders (for example, video on demand and Netflix). To help make sense of this puzzle, Clay Christensen developed his theory of innovation. He showed that some innovations have features that make them disruptive, while others have sustaining qualities. It is very useful to understand which is which.

When to use it

● To make sense of who the winners and losers are when an industry is going through change.

● To understand whether an innovation is a threat or an opportunity.

● To decide how your firm should respond.

Origins

Academic research has given a lot of attention to innovation over the years. Most people start with Joseph Schumpeter’s notion of ‘creative destruction’, which suggests that the process of innovation leads to new products and technologies, but at the expense of what came before. For example, firms producing typewriters were all ‘destroyed’ when the personal computer took off.

But not all innovation leads to creative destruction – sometimes it helps to support those firms who are already in a strong position. Research by Kim Clark and Rebecca Henderson in 1990 addressed this point by showing that the most dangerous innovations (from the point of view of established firms) were architectural innovations, meaning those that changed the way the entire business system functioned.

Clay Christensen, supervised in his doctoral dissertation by Kim Clark, took this idea one step further by introducing the idea that some new technologies are disruptive: they have a profound effect on the industry, but because of the way they emerge the established firms are very slow to respond to them. Christensen’s ideas were first published in a 1995 article with Joe Bower, and then developed in two books, The Innovator’s Dilemma in 1997 and The Innovator’s Solution (with Michael Raynor) in 2003. Christensen’s ideas about disruptive innovation have become extremely popular, both because they are highly insightful and also because the emergence of the internet in 1995 meant that a lot of industries experienced high levels of disruption in the ensuing decade.

What it is

A disruptive innovation is an innovation that helps to create a new market. For example, the arrival of digital imaging technology opened up a new market for creating, sharing and manipulating pictures, and replaced the traditional market based on film, cameras and prints. Kodak was wiped out, and new firms with new offerings, such as Instagram, appeared in its place. A sustaining innovation, in contrast, does not create new markets, but helps to evolve existing ones with better value, allowing the firms within to compete against each other’s sustaining improvements. The arrival of electronic transactions in banking, for example, might have been expected to disrupt the industry but actually helped to sustain the existing leaders.

Christensen’s theory helps to explain why firms such as Kodak failed to respond effectively to digital imaging. One argument might be that the existing leaders failed to spot these new technologies as they emerged, but this is rarely true. Kodak, for example, was well aware of the threat of digitisation, and even invented the world’s first digital camera back in 1975. In reality, established firms are usually aware of these disruptive innovations, but when those innovations are at an early stage of development they are not actually a threat – they typically do a very poor job of addressing the existing needs of the market. The earliest digital cameras, for example, had very poor resolution. For an established firm such as Kodak, the priority is to listen and respond to the needs of its best customers, which means adapting its existing products and services in more sophisticated ways.

Disruptive innovations may start out offering low-end quality, but they become better over time and eventually they become ‘good enough’ to compete head to head with some of the existing offerings in the market. In the world of photography, this transition occurred in the early 2000s with the arrival of digital cameras, and then cameras built in to the early smartphones. Throughout this transition, the established firms often continue to invest in the new technologies, but they don’t do so very seriously – because they are still making lots of money using their traditional technologies.



[Figure: performance plotted against time. Demand trajectories for the most demanding, high-quality, medium-quality and low-quality uses rise gradually; the disruptive technology starts below even the low-quality trajectory but improves steeply, eventually intersecting each one in turn.]

Source: Adapted from Christensen, C.M. (1997) The Innovator’s Dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business Review Press. Copyright © 1997 by the Harvard Business School Publishing Corporation, all rights reserved. Reprinted by permission of Harvard Business Review.

In contrast, new firms enter the market and throw all their weight behind these disruptive innovations. They often identify new services (such as sharing photos over the internet) and gradually they take market share away from the established firms. By the time the established firm has fully recognised the threat from the disruptive innovation, it is often too late to respond. Kodak spent most of the 2000s attempting to reposition itself as an imaging company, but it lacked the capabilities to make the transition and it was handicapped throughout by the difficulty of transitioning out of its old way of doing business. In summary, disruptive innovations tend to ‘come from below’ – they are often quite simple technologies or new ways of doing things, and they are ignored by established firms because they only address the needs of low-end customers, or even non-customers. But their improvement is then so fast and so substantial that they end up disrupting the existing market.
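The dynamic in the figure can be made concrete with a toy calculation. The sketch below, in Python, is our own illustration rather than Christensen’s model, and all the numbers are invented: each market tier’s performance demands grow slowly while the disruptive technology improves quickly, and the code reports the year the disruptor becomes ‘good enough’ for each tier.

```python
# Toy trajectories: tiers demand ~3% more performance per year, while the
# disruptor starts far below even the low end but improves ~25% per year.
TIERS = {"low-quality use": 20.0, "medium-quality use": 40.0,
         "high-quality use": 60.0, "most demanding use": 80.0}

DEMAND_GROWTH = 1.03
DISRUPTOR_START = 10.0
DISRUPTOR_GROWTH = 1.25

def crossover_year(required, years=30):
    """First year in which the disruptor's performance meets a tier's demand,
    or None if it never catches up within the horizon."""
    performance = DISRUPTOR_START
    for year in range(1, years + 1):
        performance *= DISRUPTOR_GROWTH
        required *= DEMAND_GROWTH
        if performance >= required:
            return year
    return None

for tier, needed in TIERS.items():
    print(f"{tier}: 'good enough' after year {crossover_year(needed)}")
```

The arithmetic makes the chapter’s point: a technology that starts well below even the low end can overtake every tier within a decade or so, provided its rate of improvement is higher than the rate at which demands rise.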

How to use it

It is obvious that start-up firms like disruptive innovations, and indeed many venture capitalists actively seek out opportunities to invest in them. The more interesting question is how established firms use this understanding of disruptive innovations to protect themselves. The basic advice is as follows:

● Keep track of emerging technologies: In most industries there are lots of new technologies bubbling up all the time, and as an established firm you need to keep track of them. Most of these technologies end up having no commercial uses, or they end up helping you to enhance your existing products or services (that is, they are sustaining innovations, in Christensen’s terms). But a few of them have the potential to be disruptive innovations, and these are the ones you should watch extremely closely. It is often a good idea to buy stakes in small firms using these technologies, and to put some R&D investment into them.

● Monitor the growth trajectory of the innovation: When you see an innovation that is creating a new market, or is selling to low-end customers, you need to monitor how successful it is becoming. Some low-end innovations remain stuck at the low end of the market. Some evolve (for example, through faster computer processing speeds) and move up to address the needs of higher-end customers. These latter ones are the potentially disruptive innovations.

● Create a separate business to commercialise the disruptive innovation: If the innovation is looking threatening, the best way of responding is to create a separate business unit with responsibility for commercialising that opportunity. This business unit should be given a licence to cannibalise the sales of other business units, and to ignore the usual corporate procedures and rules so that it can act quickly. Given this autonomy, the new business unit can behave in the same way as a start-up company. If it is successful, you can later think about how best to link its activities up to those of the rest of the firm.

Top practical tip

The main reason established firms struggle with disruptive innovation is behavioural; it is rarely the case that they lack the necessary technological skills. Usually, the problem is that they fail to respond quickly because of the internal dynamics of the organisation.

So if you are worried about the threat of disruptive innovation, your firm needs to develop such qualities as paranoia and humility. Being ‘paranoid’ means having an awareness of all the possible technologies that might hurt your business. And being ‘humble’ means thinking about the needs of low-end customers as well as those at the top of the market.



Top pitfall

The concept of disruptive innovation is important and scary. However, the reality is that many low-end technologies never actually become much more than that. While you have to be alert to the possibility of disruption, you shouldn’t assume that all low-end innovations will develop in such a way that they end up hurting your business.

Further reading

Christensen, C.M. (1997) The Innovator’s Dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business Review Press.

Christensen, C.M. and Bower, J.L. (1996) ‘Customer power, strategic investment, and the failure of leading firms’, Strategic Management Journal, 17(3): 197–218.

Christensen, C.M. and Raynor, M.E. (2003) The Innovator’s Solution. Boston, MA: Harvard Business Press.

Henderson, R.M. and Clark, K.B. (1990) ‘Architectural innovation: The reconfiguration of existing product technologies and the failure of established firms’, Administrative Science Quarterly, 35(1): 9–30.

Lepore, J. (2014) ‘The disruption machine: What the gospel of innovation gets wrong’, The New Yorker, 23 June.



33: Greiner’s growth model

Start-up firms often go through painful transitions as they grow. These transitions cause anguish and uncertainty for the people in the firm, and they can even cause the firm to fail. Greiner’s growth model provides a roadmap for the typical stages of growth of a start-up firm, and the transition points between stages. This analysis helps a growing firm to anticipate its transitions, and thereby to manage them more effectively.

When to use it

● To help you manage growth when you are a start-up or growing firm.

● To understand where growth problems are likely to arise, and to plan for them.

● To better understand the challenges small or growing firms face when you are doing business with them.

Origins

The growth model was published by Larry Greiner in 1972. There had been some earlier research on the subject, but it is largely forgotten today. Greiner’s model was based on his insight and experience of firm growth, rather than on rigorous academic research. His analysis was immediately popular, and many subsequent studies have examined the stages of growth and the transitions that small or growing firms go through. Greiner himself wrote a follow-up article in 1998, confirming most of his earlier ideas and adding some additional thoughts, including the notion of growing through alliances.


What it is

There are five phases in the growth of a firm. Each phase is characterised by a period of evolution at the start, and ends with a revolutionary period of turmoil and change. The resolution of the revolutionary period determines whether a firm will move forward to the next phase of evolutionary growth, or whether it will get into deep difficulties and, perhaps, end up being sold or closed down. The five phases can be summarised as follows:

1 Growth through creativity – resulting in a leadership crisis.

2 Growth through direction – resulting in an autonomy crisis.

3 Growth through delegation – resulting in a control crisis.

4 Growth through coordination – resulting in a red-tape crisis.

5 Growth through collaboration.

How to use it

The Greiner growth model is a useful diagnostic tool, and the first step in using it is to understand which phase your firm is currently in. The descriptions below provide some additional detail to help you diagnose where you sit:

1 Growth through creativity: In this phase, the founders are busy creating products and opening up markets. There are typically only a handful of people, so communication works informally, and people are happy to work long hours. However, as the firm starts to grow, more employees join and the founders find themselves spending more time managing their staff, and less time actually running the business. Often the founders are technically-orientated people who aren’t good at the people side of things. This phase ends with a ‘leadership crisis’, and with the need for professional management. Occasionally, the founders are able to change direction and become professional managers themselves, but usually external leaders are brought in at this stage.

2 Growth through direction: Growth continues through the imposition of more formal systems and procedures – for example, budgets and the creation of separate functional activities such as marketing and production. However, the top-down style of working that was necessary to get the firm organised starts to become a burden, and people in front-line positions start to resent the ‘micro-managing’ that comes from above. This phase ends with an ‘autonomy crisis’, often manifested in disagreements between layers of managers and in high staff turnover. The solution is typically some sort of delegation of responsibility closer to where the action is.

3 Growth through delegation: With lower-level managers freed up to react fast to new opportunities, and with top managers dealing with the bigger strategic issues, the firm continues to grow. Those at the top get used to the idea that they cannot do everything. However, this new arrangement creates its own problems, because delegation assumes that those on the front lines have the skills and capabilities required to take responsibility. Sometimes this isn’t the case, with many promoted into management positions without the training they need. The result is often a ‘control crisis’, where mistakes have been made and top management feel obliged to step in.

4 Growth through coordination: Growth continues with a greater level of coordination among the previously isolated business units – for example, through the creation of product groups or service practices. There are often related changes, such as capital being allocated on a more central basis or incentives being based more on firm-wide performance. But, increasingly, these levels of coordination create complexity, and growth may end up being stifled again through what Greiner calls a ‘red-tape crisis’.

5 Growth through collaboration: To resolve the crisis of red tape, firms need to adopt even more sophisticated ways of working, whereby coordination is achieved not through formal structures and systems but through flexible working around a common sense of purpose. Often this approach includes team-based financial rewards and sophisticated information systems.

[Figure: Greiner’s growth curve. Size of organisation is plotted against age of organisation; periods of evolution (growth through creativity, direction, delegation, coordination and collaboration) alternate with revolutionary crises (of leadership, autonomy, control, red tape and an unnamed final crisis).]

Source: Greiner, L.E. (1998) ‘Evolution and revolution as organizations grow’, Harvard Business Review, 76(3): 55–60. Copyright © 1998 by the Harvard Business School Publishing Corporation, all rights reserved. Reprinted by permission of Harvard Business Review.



The original version of Greiner’s growth model ends at this point. In 1998 he wrote a follow-up article, suggesting that collaboration ends with an internal growth crisis that can only be resolved through the use of external alliances. However, not all observers have agreed with this argument, as the challenges of managing alliances are, in most cases, just as complex as the challenges of managing internal activities.

Having established where your firm sits on this growth curve, you can draw up a plan of action. The key thing to think about is whether you are at, or approaching, some sort of transition. Typical signs that you are at a transition point include: (a) lots of disagreements about where the firm is heading or how it is being managed; (b) people leaving the firm because they feel it is going in the wrong direction; and (c) senior people spending most of their time dealing with problems or fixing errors.

If you are one of those senior people, a large part of this analysis is about understanding your own role – is it possible that you are too controlling of the people below you, or perhaps that you have delegated too much power to them before they are ready? Obviously, the actual changes you end up making will be specific to the situation you find yourself in, but the Greiner model provides a very useful first step to suggest which direction you need to go in.

Top practical tip

The crises identified by Greiner all occur because people in the firm have somewhat different perspectives about how things should be done. So to use this model properly, you should ensure that you get many points of view before taking action. For example, if the people at the top think they have delegated too much and are facing a control crisis, while those on the front line think there is a red-tape crisis, then you face a really tricky problem. The key challenge in this situation is simply to build some sort of consensus about where the firm lies on the curve. This can be done through surveys and interviews of significant numbers of people, and then through a meeting where the different points of view are aired and reconciled.

Top pitfall

The key thing to remember about the growth curve is that not every firm goes through every stage in a predictable way. Some firms get stuck at a particular phase of growth, and some end up resolving one crisis and then regressing back to an earlier set of problems. You can even identify very large firms that are still struggling with early-stage problems. Rupert Murdoch is in his eighties, yet he continues to manage News Corporation with a very strong hand. Arguably, his firm has never moved beyond the second stage of Greiner’s model.

So don’t use the growth curve in a deterministic way. Instead, use it as a way of structuring a discussion about the causes of a firm’s organisational problems, and the typical solutions that overcome those problems.

Further reading

Greiner, L.E. (1998) ‘Evolution and revolution as organizations grow’, Harvard Business Review, 76(3): 55–60 (a reprint of the original 1972 article with an additional commentary from the author).

Kazanjian, R.K. and Drazin, R. (1989) ‘An empirical test of a stage of growth progression model’, Management Science, 35(12): 1489–1503.



34: Open innovation

Historically, most firms treated innovation as a highly proprietary activity – they kept their development projects secret and they filed lots of patents to protect their intellectual property. Today, the buzzword is ‘open innovation’: using ideas and people from outside your firm’s boundaries to help you develop new products and technologies, as well as sharing your own technologies with external parties on a selective basis.

When to use it

● To help you develop new products and services more quickly.

● To tap into opportunities and ideas outside your firm’s boundaries.

● To commercialise ideas and intellectual property you have no use for.

Origins

As with many hot ideas, the concept of open innovation seems very modern but has a long history. One useful starting point is the famous Longitude Prize offered by the British government in 1714 to the inventor who could come up with a way of measuring the longitude of a ship at sea. The prize was ultimately won by John Harrison, a little-known clockmaker, who invented the first reliable maritime chronometer. Rather than just hire the smartest engineers and ask them to solve the problem, the British government opened the problem up to the masses, and the outcome was successful.

Large firms have used formal R&D labs for about 100 years, and these have always had some degree of openness to external sources of ideas. However, the approach changed significantly during the 1980s and 1990s, partly because of the exponential growth in the amount of scientific knowledge produced during this era and partly because of the emergence of the internet, which made sharing over large distances much easier. Through this period, firms experimented with a variety of new approaches to innovation, including corporate venturing, strategic alliances with competitors, in-licensing of technology, innovation competitions and innovation jams.

Berkeley professor Hank Chesbrough provided a useful way of pulling these various models together through his book, Open Innovation, published in 2003. Since then, studies of open innovation have proliferated, and many different angles have been explored, both practical and theoretical. New approaches to open innovation are also emerging all the time. For example, a recent idea is ‘crowdfunding’, where an individual might seek financing for an entrepreneurial venture from a ‘crowd’ of backers through an online platform.

What it is
In a world where knowledge is distributed widely, companies need to find ways of tapping into that knowledge if they are to out-innovate their competitors. This can be done through any number of different mechanisms, including acquisitions, joint ventures, alliances and in-licensing, as well as more recent innovations such as crowdsourcing and crowdfunding. Companies also need to use external partners to help them commercialise their own ideas – for example, via out-licensing or by creating spin-out ventures.

The basis of an open innovation strategy, in other words, is a network of relationships with external partners who work collaboratively to develop innovative new products and services. However, it should also be clear that this approach requires a significant shift in mind-set and management approach, because companies rarely have exclusive intellectual property rights over innovations developed in partnership with others. In the traditional ‘closed innovation’ world, companies generated competitive advantage by protecting their intellectual property; in an ‘open innovation’ world, competitive advantage is likely to accrue to those companies who collaborate best, or who are fastest to move into new opportunities.

Most large companies, especially those working in high-tech sectors such as information technology and life sciences, have now embraced the principle of open innovation. This trend has been driven by a number of factors, including the exponential growth in the amount of scientific knowledge in the world, the availability of external partners and venture capital funding, and the ease of sharing ideas through internet-mediated platforms.

How to use it
Open innovation is a high-level concept, so it is used through a number of different tools and methodologies. Here are some of the more popular ones:
●● Customer immersion: This involves working intensively with customers and prospective customers – for example, to get their input into proposed new products or to get them to help design new products themselves.
●● Crowdfunding: This uses a platform (such as Kickstarter) so that an individual can suggest an idea they want to work on, and other individuals and firms can then invest some seed money to get it started. In this case, the ‘open’ part of the process is about gaining access to money, rather than gaining access to people or technologies.
●● Idea competitions: This involves inviting large numbers of people (both inside the firm and outside) to take part in a competition to come up with new ideas. Sometimes these are managed through online forums, such as IBM’s celebrated ‘innovation jams’; sometimes they are in-person ‘trade shows’, where people showcase their ideas to their colleagues. These and related models provide the company with inexpensive access to a large quantity of innovative ideas.
●● Innovation networks: Many companies seek to get sustained access to pools of expertise outside their boundaries by creating innovation networks. For example, in the IT industry, software developers may be invited to join developer networks to help identify and fix problems, often with financial rewards available. Lego has a parallel model with its communities of lead users, who get involved in designing and improving new Lego products before they are released.
●● Product platforms: This involves the company introducing a partially-completed product, or platform, on which contributors can then build additional applications or features. Because they bring many different ideas and skills to the development process, these contributors are typically able to extend the platform’s functionality and appeal in ways the company would not have thought of.

This list of approaches is not comprehensive. For example, it excludes many of the longer-established approaches to open innovation, such as in-licensing, corporate venturing and strategic alliances. Moreover, new approaches to open innovation are emerging all the time.

Top practical tip
A key shift in mind-set is required to make open innovation work, because the firm no longer owns or controls its ideas in the way that it did before. Of course, there are some industries, such as pharmaceuticals, where patents are still highly important. But in increasing numbers of industries, the underlying technology is either shared between firms or is made available for everyone to use through a public licence (such as the contents of Wikipedia or the Linux software platform). In such cases, firms create commercial value either through the speed of bringing a technology to market, or by combining freely-available technologies in new ways, or by selling proprietary services on top of open technologies.

Another part of the shift in mind-set is that you cannot expect to tap into ideas from external sources without also being open to sharing your own ideas. Working in an open innovation environment requires trust and reciprocity between individuals, and a highly-secretive attitude will quickly be picked up by the people with whom you are dealing.

Top pitfalls
Many firms have experimented with the concept of open innovation by creating some sort of idea scheme, where they ask people inside the firm, or sometimes people outside as well, to come up with suggestions for improvements. There are two big mistakes you can make with such a process. One is to ask a really open-ended question, such as ‘How can we make our firm a better place to work?’, because it will yield all sorts of random ideas, such as more salads in the canteen or a pet care facility. You need to ensure that the questions you ask are sufficiently targeted that you get relevant and practical answers. The second pitfall is to create such a scheme without the resources you need to read, filter and act on the ideas that are proposed. Without such resources, the scheme often gets overloaded and the ideas get ignored, resulting in disappointment and cynicism among those who got involved.

Further reading
Chesbrough, H.W. (2003) Open Innovation: The new imperative for creating and profiting from technology. Boston, MA: Harvard Business Press.
Chesbrough, H.W., Vanhaverbeke, W. and West, J. (eds.) (2006) Open Innovation: Researching a new paradigm. Oxford, UK: Oxford University Press.
West, J. and Bogers, M. (2013) ‘Leveraging external sources of innovation: A review of research on open innovation’, Journal of Product Innovation Management, 31(4): 814–831.


35 The seven domains assessment model for entrepreneurs

Many entrepreneurs launch their business ideas without a great deal of careful analysis. Sometimes this is effective, but it can result in very basic mistakes being made. The seven domains model is a simple framework for entrepreneurs to use to make sure they have asked the key questions about their business idea before launching it.

When to use it
●● To assess your business idea before launching it.
●● To identify the biggest weaknesses in your business plan, so you can address them.
●● To understand why some new business ideas succeed and others fail.

Origins
The seven domains assessment model was developed by John Mullins at London Business School, and described in his 2003 book, The New Business Road Test. It was developed as an integrative framework rather than as a new theory as such, and it therefore builds directly on a number of other bodies of thinking. It has become widely used by entrepreneurs when they start to develop their ideas, and also in the teaching of entrepreneurship at business schools.

The seven domains model is not a business plan. It can be thought of as a feasibility study, to establish whether a new business idea is worth pursuing, and where the biggest weaknesses are. If the analysis looks promising, then the entrepreneur will typically develop a fully-fledged business plan in order to get access to funding from a venture capitalist or a bank. Alternatively, the entrepreneur may prefer to employ a ‘lean start-up’ model, which means starting small and looking for ways to get revenues quickly so that the venture can become self-funding.

What it is
The model suggests that there are seven sets of factors that the potential entrepreneur needs to take into account before moving ahead with his or her venture:
1 Macro-market attractiveness: Is the overall market for your product or service growing?
2 Micro-market attractiveness: Do you have a niche of prospective customers who are interested in buying your product or service right now?
3 Industry attractiveness: Is the structure of the industry you plan to operate in sufficiently attractive that you can make money out of your product or service?
4 Sustainable advantage: Is there anything about your offering that will allow you to defend yourself from competitors?
5 Mission, aspirations and propensity for risk: Is there a good fit between the proposed business idea and your management team’s interests?
6 Ability to execute on the critical success factors: Do you and your team have the necessary capabilities to succeed?
7 Connectedness up and down the value chain: Do you know people in this industry who can open doors for you and get you started?

How to use it
The seven domains model is designed to be used as a checklist, where you go through each of the domains in turn and you evaluate how well-positioned you are on each one. A shortened form of this checklist is included here; for the full-length version, read The New Business Road Test by John Mullins (see Further reading).

Market domain/macro level – market attractiveness
This is about understanding the potential overall size of the market for your product or service. Its size today, in terms of such things as the number of customers or value of sales, is important. But more important still is the prospect of future growth. Is the overall market growing significantly, or is it static or even declining in size?

Market domain/micro level – sector market benefits and attractiveness
This domain concerns the immediate target market for your product or service. Successful entrepreneurs typically start by finding a small niche of customers who love their product, and then they branch out from there. Questions to ask include: ‘Who are the likely initial customers?’, ‘What is the problem or need you are addressing?’ and ‘What alternative solutions to these problems/needs do customers have at the moment?’.


Industry domain/macro level – industry attractiveness
This is essentially a five forces analysis of the industry you are planning to operate in. The point here is that even if you can find a market for your product or service, the basic structure of the industry may be so unattractive that you cannot actually make much money. So you need to consider the threat of new entrants and substitute products, and also the bargaining power of potential customers and suppliers, as well as the internal rivalry that exists in the industry.

Industry domain/micro level – sustainable advantage
Here the focus switches to your ability to defend your business against imitators. One way of doing this analysis is to use a core competency/resource-based perspective, which says a firm’s resources should be rare, valuable and hard to imitate. You may also want to think about intellectual property protection.

Team domain – mission, aspirations, propensity for risk
Here, you are analysing your commitment and the interests of your team. You need to ask yourself why you want to start this business, what makes you passionate about doing it and how it fits with your medium- or even long-term career plans. The same analysis should also be done for your team or your business partners.

Team domain – ability to execute on critical success factors
The focus here is on your and your team’s capabilities. It seems obvious that you need capabilities that are aligned with the ‘critical success factors’ for your venture, but there are still many entrepreneurs who launch businesses without the necessary skills. So think carefully about the type of decisions or activities that will harm the business significantly if you get them wrong, and ask yourself if you and your team have the experience, or the distinctive skills, to do these things right. If you see a gap in skills or capabilities, who can you bring on board to fill this gap?

Team domain – connectedness up, down, across value chain
This final domain is about who you know, rather than what you know. Starting a business is a lot about personal connections, so you need to think carefully about your contacts in key areas: do you know people who can give you access to distribution, or help you source key components? Do you know any angel investors who might provide some initial funding? If you are entering a foreign market, do you know people who can get you started there?
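Because the model is used as a checklist, a simple scoring sheet is often all you need. Here is a minimal sketch in Python; the 1–5 scale, threshold and example scores are illustrative assumptions, not part of Mullins’ framework.

# Score each domain from 1 (weak) to 5 (strong) and flag the weakest ones
DOMAINS = [
    "Macro-market attractiveness",
    "Micro-market attractiveness",
    "Industry attractiveness",
    "Sustainable advantage",
    "Mission, aspirations, propensity for risk",
    "Ability to execute on critical success factors",
    "Connectedness up and down the value chain",
]

def weakest_domains(scores, threshold=2):
    """Return the domains scoring at or below the threshold."""
    return [d for d in DOMAINS if scores[d] <= threshold]

scores = {  # hypothetical assessment of a venture idea
    "Macro-market attractiveness": 4,
    "Micro-market attractiveness": 2,
    "Industry attractiveness": 3,
    "Sustainable advantage": 2,
    "Mission, aspirations, propensity for risk": 5,
    "Ability to execute on critical success factors": 4,
    "Connectedness up and down the value chain": 3,
}

print(weakest_domains(scores))
# ['Micro-market attractiveness', 'Sustainable advantage']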


[Figure: The seven domains model.
Market domain – macro level: market attractiveness; micro level: target segment benefits and attractiveness.
Industry domain – macro level: industry attractiveness; micro level: competitive and economic sustainability.
Team domains: mission, aspirations, propensity for risk; ability to execute on CSFs; connectedness up and down value chain.]

Source: Mullins, J.W. (2013) The New Business Road Test: What entrepreneurs and executives should do before launching a lean start-up, 4th edn. Harlow, UK: Pearson Education. Reproduced with permission.

Top practical tip
Whenever you use the seven domains model, it is worth remembering that some of the domains are more important than others in the early stages of a venture. For example, the ‘micro-market’ attractiveness is really critical early on – unless you get those initial sales, then all of the other analysis about the size of the market as a whole, and the sustainability of your advantage, is irrelevant. It is also really important to think at an early stage about the members of your team, and the extent to which they have the right contacts up and down the value chain. Again, without these initial contacts, the business is likely to fall at the first hurdle.

Top pitfall
It is always tempting to seek out the positive aspects of your answers to the questions in each of the seven domains. However, for this analysis to be effective, you need to be highly self-critical. Often it is useful to get someone else to evaluate your business idea on these dimensions, because they may see the gaps and shortfalls to which you are blind.


Further reading
Mullins, J.W. (2013) The New Business Road Test: What entrepreneurs and executives should do before launching a lean start-up, 4th edn. Harlow, UK: Pearson Education.
Ries, E. (2011) The Lean Start-Up: How today’s entrepreneurs use continuous innovation to create radically successful businesses. London: Random House.


36 Stage/gate model for new product development

The stage/gate model is a structured process for managing the development of new products in a corporate setting. It consists of a number of stages where development work is done, with a review meeting or ‘gate’ at the end of each one. The process is designed to help promising ideas get the resources they need to develop, while also killing off those ideas that don’t have potential.

When to use it
●● To make your new product development process more structured.
●● To help you invest in high-potential ideas, and to kill off ideas that don’t look promising.
●● To give you a better understanding of your overall product pipeline.

Origins
The idea that investment in development projects should be phased over time has a very long history. In the world of chemical and industrial engineering, for example, methodologies for sequencing investment over multiple phases were developed in the 1950s, and in the 1960s NASA had a phased review process. In a completely different setting, the venture capital industry developed its own models for sequential investing during the post-war years, and these stages are nowadays formalised as a series of ‘funding rounds’. The pharmaceutical industry has also developed a highly rigorous process for drug approval, with pre-clinical development and then four phases of clinical development.

The stage/gate methodology for new product development was introduced by Robert Cooper in a 1986 article, ‘Winning at new products’, and then a book of the same name. Cooper formalised many of the intuitive ideas about sequencing investments in new products into a structured methodology, which became widely adopted by firms around the world.

The stage/gate methodology has sometimes been criticised for being overly constraining. Alternatives, such as a design thinking approach to innovation, have been suggested as offering greater flexibility.

What it is
The stage/gate methodology is a structured process for making product development more effective. Before it was introduced, many firms had ad hoc processes for product development, which meant that projects got funded according to the political power of whoever was supporting them, and there was little coherence to the overall portfolio of products in development.

The stage/gate methodology defines a series of stages from idea to launch. Each stage consists of a predefined set of activities that must be completed successfully before proceeding to the next stage. The entrance to each stage is a gate, which is typically a meeting at which progress is reviewed. These meetings provide control to the process, because they allow the senior executives running them to monitor progress and to decide which projects should go forward and which should be killed.

How to use it
The methodology consists of five stages and five gates. This description is based primarily on Robert Cooper’s methodology, though it should be noted that many other variants have been proposed over the years (a simple sketch of the gated flow follows the list):
●● Discovery: This is also called the ‘fuzzy front end’ of the process, and it involves various informal ways of coming up with possible new product or service ideas. It ends with the first gate, the ‘idea screen’, which is usually a very light-touch assessment of whether the idea looks promising.
●● Scoping: This is a quick, preliminary investigation of the project, typically no more than a couple of weeks of work. For example, the project team might see if other firms have launched something similar, they might look into the technical feasibility, or they might talk to some prospective customers.
●● Build the business case: This is a much more detailed investigation of the potential of the idea, typically involving both marketing people (who examine customer interest) and technical people (who dig into technical feasibility).
●● Development: This involves a detailed design and development of the new product, including some simple product tests (to show that there is a viable market opportunity). A production plan and market launch plan are also put in place here.
●● Testing and validation: This stage involves extensive product tests in the marketplace, the R&D lab and the manufacturing plant.
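A minimal sketch of the gated flow in Python. The stage names follow Cooper’s model, but the review function and project names are hypothetical – in practice each gate is a management meeting, not a line of code.

# Illustrative stage/gate pipeline: advance a project stage by stage,
# stopping if any gate review returns a 'kill' decision.
STAGES = ["Discovery", "Scoping", "Build the business case",
          "Development", "Testing and validation"]

def run_pipeline(project, gate_review):
    """gate_review(project, stage) -> True to proceed, False to kill."""
    for stage in STAGES:
        if not gate_review(project, stage):
            return f"{project} killed at the '{stage}' gate"
        # ...development work for this stage would happen here...
    return f"{project} cleared all gates: ready for launch"

# Hypothetical review rule: Project B fails at the business-case gate
def review(project, stage):
    return not (project == "Project B" and stage == "Build the business case")

print(run_pipeline("Project A", review))  # cleared all gates: ready for launch
print(run_pipeline("Project B", review))  # killed at the 'Build the business case' gate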

The launch marks the beginning of full production, marketing and selling. There is also a post-launch review conducted, to ensure that the product meets its required quality standards.

The stage/gate process has enormous benefits because it provides structure and discipline to what might otherwise be a rather chaotic process. It can be useful as a way to accelerate product development, because it helps prioritise resources towards the most promising products. It also helps those at the top of the firm to retain control: they can see how many products are at each stage in the process, and if there are any obvious gaps (in terms of what the market needs) then they can be filled either by acquiring what is missing or by accelerating the development of certain products.

There are also benefits in terms of coordination. Many traditional product development processes were done in silos, with research handing their work over to development and then on to marketing. A well-managed stage/gate process ensures cross-functional coordination – for example, by bringing in marketing people at an early stage of development.

The biggest limitation of the stage/gate process is that it can become excessively formalised, and this can result in both a slowing down of the development process as a whole, and also a very narrow view of what types of products get through. Often, the most creative projects actually get killed in such a process because they don’t fit with the expectations of those doing the reviews.

Top practical tip
The key point to remember when using the stage/gate process is that it can slow you down as well as speed you up. Over the years, most formal processes become a little bit bureaucratic, and the people overseeing each review start to demand more proof before the product idea is allowed to proceed. This can result in a very slow and painful development process, which can also be rather demotivating for the project team.

The solution to this problem is to change things around every few years. This might mean merging two of the stages together, or putting different people in charge of each process. It might also mean allowing highly-promising projects to bypass certain stages.


Top pitfall
The biggest single problem with stage/gate processes is that they kill off the most interesting and unusual new product ideas. Indeed, many ‘disruptive innovations’ actually start out as highly unpromising ideas. To counter this threat, a number of large firms have created separate funding vehicles that are deliberately set up to invest in things that fail to meet the usual criteria as set out in a stage/gate process. For example, Shell has a ‘Gamechanger’ unit, and Reuters used to have a ‘Greenhouse fund’. These types of venturing units create their own managerial challenges, but they are a good way of providing some initial level of funding to high-risk projects.

Further reading
Cooper, R.G. (2001) Winning at New Products: Accelerating the process from idea to launch. New York: Basic Books.
Cooper, R.G. and Kleinschmidt, E.J. (1986) ‘An investigation into the new product process: Steps, deficiencies, and impact’, Journal of Product Innovation Management, 3(2): 71–85.
Furr, N. and Dyer, J. (2014) The Innovator’s Method: Bringing the lean start-up into your organization. Boston, MA: Harvard Business Press.


37 Scenario planning

Scenario planning is a methodology for understanding how long-term changes in the business environment (such as political shifts or new technologies) might affect your firm’s competitive position, so that you can prepare accordingly.

When to use it
●● To help you understand how the business world is changing.
●● To identify specific threats and opportunities.
●● To adjust your strategy so that you are prepared for whatever might happen in the future.

Origins
The oil company Royal Dutch Shell pioneered scenario planning in business, but the idea originated in the military world. In the aftermath of the Second World War, a group led by Herman Kahn at the Rand Corporation started developing ‘scenarios’ about the possible future conflicts that might take place. His ideas were then picked up by a team at Shell in the late 1960s, led by Ted Newland and Pierre Wack. By 1972, the scenario planning team had put together six scenarios, focusing on the price of oil and also the likely future behaviour of oil producers, consumers and national governments. When Shell’s top management saw these scenarios, they realised how different the world might look if, for example, oil prices were to shoot up. So they committed to using scenario planning as a formal part of their overall strategic planning process.

The first oil crisis hit in 1973, when Arab oil producers imposed an embargo and the oil price rose dramatically. None of Shell’s competitors was prepared for this situation, whereas Shell had had some forewarning. This event underlined the power of scenario planning, and the methodology was quickly adopted by many large companies.

What it is
Making sense of the future is always challenging. One approach is to look at major trends (such as rising population, decreasing oil reserves) and to extrapolate from them. However, this approach fails to recognise that major discontinuities will sometimes occur (for example, a new technology for oil drilling or a political revolution in China), or that there are complex interactions between trends.

Scenario planning overcomes these uncertainties by explicitly acknowledging that there are many possible futures. A smart approach to planning does not assume that the world will work in a certain way ten years from now. Instead, it identifies two or three likely scenarios, and examines the assumptions underlying each one. This helps the firm to make the right investments. For example, a company such as Shell has to keep in mind the possibility that oil reserves will run dry at some point, which might mean making investments into alternative sources of energy such as wind or biofuels.

An effective scenario planning process doesn’t just paint a picture of how the world might look in the future; it also shapes the strategic decisions made by the firm and helps it decide what sort of innovation projects to prioritise.

How to use it
Some firms, including Shell, have highly sophisticated scenario planning teams, and the process for developing scenarios can take many months. However, you can also use scenario planning in a far more modest way. A set of scenarios can be developed in as little as a couple of days. Here are the typical steps involved.

Collect information about how the world is changing
There are many ‘futurists’ out there who write books and give lectures about the major trends that are shaping the world. These trends can be usefully categorised as follows:
●● Political factors – wars, changes in government, rising nationalism.
●● Economic factors – free trade zones, currency fluctuations, recessions.
●● Social and demographic factors – ageing population, attitudes to privacy, consumerism.
●● Technological factors – 3D computing, mobile technology, driverless cars.

The first task in scenario planning is to gather as much information about these sorts of trends as possible, and then to think about how these are relevant for your industry. It is often useful to gather a group of colleagues together to brainstorm about how these trends might play out, so that you can understand their second-order consequences.

Divide the trends into two categories
As you analyse these trends, and you think about how they might interact with each other, you will realise that it is impossible to foresee everything. For instance, an increased trade deficit may trigger an economic recession, which in turn creates unemployment and reduces domestic production. It is therefore useful to divide what you discuss into two categories:
●● Predetermined factors – things that we know will happen. For example, it is predetermined that there will be an ageing population in the developed countries of the world.
●● Uncertainties – things that may happen. We don’t know for sure whether China will remain stable or whether driverless cars will become accepted.

Identify and describe the scenarios
The predetermined factors can be set aside now – they should of course be factored into your strategic plan, but they aren’t important to the next step of the scenario development process. So focus on the uncertainties you have developed, and from that list identify what seem to be the most critical ones in terms of the future development of your industry. For example, if you work in the IT sector, the extent of adoption of new technologies by the population is one key uncertainty, and the extent to which power continues to be centralised in your countries of operation might be another (see the figure below).

By placing these two most-critical uncertainties onto a 2×2 matrix, you can identify four possible scenarios. You should then give each of these scenarios a name (see the hypothetical example below), and you should describe briefly what each one means for your industry and for your firm in particular.


                                               New technologies           New technologies
                                               adopted patchily           adopted widely

Continued centralisation of power in society   “Back to the future”       “Enlightened authority”

Decentralisation of power in society           “Pockets of opportunity”   “People power”
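For a sense of the mechanics, here is a minimal Python sketch that assembles the 2×2 matrix above from two critical uncertainties. The poles and scenario names mirror the hypothetical figure; in a real exercise they would come out of your own workshop.

from itertools import product

power = ["Continued centralisation of power", "Decentralisation of power"]
adoption = ["new technologies adopted patchily", "new technologies adopted widely"]

# Scenario names for each combination of uncertainty outcomes
names = {
    ("Continued centralisation of power", "new technologies adopted patchily"): "Back to the future",
    ("Continued centralisation of power", "new technologies adopted widely"):   "Enlightened authority",
    ("Decentralisation of power",         "new technologies adopted patchily"): "Pockets of opportunity",
    ("Decentralisation of power",         "new technologies adopted widely"):   "People power",
}

for p, a in product(power, adoption):
    print(f"{names[(p, a)]}: {p} + {a}")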


Apply the scenarios in your strategic planning process
The scenarios are useful for many things. Firstly, they are a good way of discussing the future with the top executives in the firm and other stakeholders as well. Typically, most people will have a ‘default’ future in mind that sits in one box of the matrix, and by exposing them to the alternative scenarios they become aware of their own assumptions.

Secondly, the scenarios should be used in a more formal way to ensure that you are making the right decisions about the future. For example, one scenario for Shell might be that all future oil reserves are owned by the governments of the countries in which they are found. This would mean that Shell cannot expect to own oil reserves itself, and instead it has to become a provider of technical expertise to countries such as Venezuela or Nigeria. Shell would need to alter its strategy considerably, and it would require investment in a somewhat different set of capabilities to those it has today.

Top practical tip
The most difficult part of the scenario planning process is identifying the key uncertainties in your business environment. It is relatively easy to draw up a long list of trends, but the really important step is figuring out which of these trends are both critical to the future success of your industry and highly uncertain. So if you are using the process described above, make sure to allow enough time to try out various alternatives here, so that the scenarios you come up with are the most revealing.

Top pitfall
For scenario planning to be useful, it has to be properly integrated into the decision-making process at the top of your organisation. There are many firms that have conducted careful scenario planning exercises, only for the results to be ignored by those at the top.


Further reading
Schwartz, P. (1996) The Art of the Long View: Paths to strategic insight for yourself and your company. London: Random House.
Wack, P. (1985) ‘Scenarios: Shooting the rapids – how medium-term analysis illuminated the power of scenarios for Shell management’, Harvard Business Review, 63(6): 139–150.
Schoemaker, P.J.H. (1995) ‘Scenario planning: A tool for strategic thinking’, Sloan Management Review, 36(2): 25–40.


[PART FIVE]
Accounting


If you want to understand how a business works, the starting point is to look at its accounts (the profit and loss statement and the balance sheet). Most MBA programmes have a foundational course on accounting to make sure students understand how accounts are put together, and how they should be interpreted.

In this section of the book, we describe three basic models that have been in use for a hundred years or so. The ‘accrual method in accounting’, as opposed to cash-based accounting, is a way of ensuring that business activities are accounted for when they happen, rather than when money is paid out or received. ‘Ratio analysis’ is a methodology for analysing and making sense of a company’s financial statements. The ‘DuPont identity’ is essentially a sophisticated approach to ratio analysis that helps you to understand the drivers of a company’s financial performance.

We also describe three relatively new models in the area of management accounting. ‘Activity-based costing’ was devised in the 1980s as a way to accurately allocate costs to activities. The ‘balanced scorecard’ was developed in the 1990s to include non-financial measures of performance alongside financial measures, so that a more complete picture of performance can be gained. ‘Economic value added (EVA)’ is a way of calculating the underlying profitability of a business after taking account of how much capital it uses. EVA is not particularly new as a concept, but its popularity in the early 2000s makes it an important model to understand.


38 The accrual method in accounting

There are two basic ways of keeping the accounts for a business. The cash method accounts for revenues and expenses on the basis of cash flows – that is, when money is received or paid out. The accrual method accounts for revenue and expenses on the basis of when activities occur – for example, when a sale is agreed, regardless of when the cash associated with that sale is actually transferred. While the cash method is used a lot in small businesses and for personal finances, the accrual method is now the preferred method of accounting in medium and large companies.

When to use it
●● To provide information about how well your business is doing.
●● To clarify that cash flow and profitability are not the same thing.

Origins
The precise origins of the accrual method in accounting are unclear. Some people trace it back to an Italian friar, Luca Pacioli, whose book on double-entry bookkeeping was published in 1494. Others have argued that this method was first used in the late sixteenth century by Dutch traders, who made risky and lengthy voyages to the other side of the world.

In 1602, the Dutch government sponsored the creation of the Dutch East India Company. For this firm, accounting no longer centred on each voyage as was previously the case. Rather, the firm reported its accounts periodically, which resulted in the development of accrual accounting. The benefits of the accrual method became clear, and it quickly spread to other businesses. Today, almost all large businesses use the accrual method – the cash method is used mostly for personal finance and for running small businesses.

What it is
The accrual method recognises transactions when they occur, without needing cash to be received or disbursed. Cash-basis accounting, the traditional approach, tracks financial information only when cash is received or paid out. It does not matter if the raw materials are meant to be used in next year’s production; the purchase is simply recorded when cash is paid for them. Accrual-basis accounting allows for revenues and expenses to be matched to the correct time periods, and simplifies the task of analysing financial reports. Unlike cash-basis accounting, accrual accounting allows managers to know if their firm is generating profits from its operations, whether cash is being collected on time and whether the firm is taking full advantage of suppliers’ credit terms.

Here is a quick comparison of the two methods:

Revenues – Cash basis: recorded when cash is received. Accrual basis: recorded when earned, regardless of when cash is collected, even if customers take a long time to pay.

Expenses – Cash basis: recorded when cash is paid for them. Accrual basis: recorded when products or services are being produced, regardless of when cash was paid for them.

Financial statements – Cash basis: the profit and loss statement and balance sheet reflect when there are cash inflows and outflows, and may not be a good guide to how profitable the firm is; the advantage is that one can track the firm’s ability to manage its cash from period to period. Accrual basis: financial statements accurately show whether a firm is profitable or not, because revenues and expenses are recorded when incurred.

Typical users – Cash basis: smaller companies and not-for-profit organisations, where the timing of cash inflows and outflows is a primary concern. Accrual basis: medium to large firms that have more complex operations, need debt (loans, lines of credit) and report frequently to shareholders.

How to use it
The accrual method allows firms to monitor the entire cycle from product or service creation to receiving cash from customers. When products are sold, revenues and expenses are matched in that time period. If credit was extended, the accrual method recognises an increase in accounts receivable, allowing managers to track the collection of cash from customers in future time periods. At the end of the year, the analysis of the financial statements can reveal if the firm earned a profit on the products or services it sold. By matching the cost of producing finished goods to the period in which the goods were sold, companies can see if their production costs are competitive. As credit terms are often extended by suppliers, the payment of invoices can be optimised to take advantage of payment windows and discounts.
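To make the contrast concrete, here is a minimal sketch in Python of a single credit sale – goods delivered in January, cash collected in March. The period labels and amounts are illustrative, and the ‘ledger’ is deliberately simplified to one event per period.

# One credit sale, viewed under the two methods
events = [
    {"type": "sale_earned",   "period": "January", "amount": 10_000},
    {"type": "cash_received", "period": "March",   "amount": 10_000},
]

def cash_basis(events):
    """Recognise revenue only in the period cash is received."""
    return {e["period"]: e["amount"] for e in events if e["type"] == "cash_received"}

def accrual_basis(events):
    """Recognise revenue in the period the sale is earned (delivery)."""
    return {e["period"]: e["amount"] for e in events if e["type"] == "sale_earned"}

print(cash_basis(events))     # {'March': 10000}   - revenue appears only when paid
print(accrual_basis(events))  # {'January': 10000} - revenue matched to the sale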

Top practical tip
If you are reviewing financial statements for a small firm, you need to be clear which accounting method they are using, because there will be significant differences between accounts created under the cash or accrual methods.

Top pitfall
The accrual method may require a judgment call to be made about when revenues and expenses are recognised, and it is very easy to create misleading financial statements if you don’t know what you are doing. This is why it takes many years for accountants to gain their qualifications.

Further reading
Atrill, P. and McLaney, E. (2014) Accounting and Finance: An introduction. Harlow, UK: Pearson.
Elliott, B. and Elliott, J. (2013) Financial Accounting and Reporting, 16th edn. Harlow, UK: Pearson.


39 Activity-based costing

Activity-based costing (ABC) is a methodology for assigning costs to products or services based on the resources they actually consume. Before ABC was invented, overhead costs were typically assigned to product lines as a proportion of direct costs, such as labour hours. This was a simple way of doing things, but it often led to inaccuracies in assessing how much items actually cost to manufacture. ABC resolves those inaccuracies.

When to use it
●● To get an accurate picture of the costs of manufacturing individual products.
●● It is particularly valuable in a complex production environment where many products use the same inputs.

Origins
The concept of ABC originates in work by George Staubus on activity costing and input-output accounting during the 1970s. The chain of events that led to ABC being created in its current form can be traced back to Keith Williams, a cost accounting manager at John Deere, the manufacturing company. In 1984, Williams trialled his ideas about new ways of allocating overhead costs, and within a couple of years the benefits (in terms of more accurate cost information) were clear, and the methodology spread to other companies, including GM and Weyerhaeuser.

These innovative methods came to the attention of Robert Kaplan, a professor at Harvard, and Robin Cooper, who were then instrumental in choosing the name ‘activity-based costing’. Through Kaplan and Cooper’s writing and teaching, ABC was subsequently adopted by many companies, and it is now a standard way of allocating costs in a factory environment.


What it is
ABC provides a more accurate estimate of cost per unit produced because it does a better job of linking the drivers of costs to what is produced. Traditionally, overhead costs, including utilities, building maintenance and head office costs such as marketing and customer support, were assigned to product lines based on their direct costs. This method – called absorption costing – seemed reasonable in the past because the bulk of manufacturing costs tended to consist of materials and labour, and indirect costs were proportionately smaller. But with the advent of automation, an increase in the complexity of products and the need for customisation, two activities often required vastly different levels of overhead.

For example, a standard machine produced on an automated assembly line and a customised machine might use up similar amounts of direct costs (such as materials and labour). The latter, however, would require more indirect costs including design, testing and quality control. In such a case, assigning overhead based on direct costs alone would make little sense – the mass-produced machine would seem more costly to produce than it actually is, and the customised machine would have an artificially low cost price. This misallocation of costs could end up having significant consequences for the investment decisions and profitability of the firm.

How to use it
ABC begins with an analysis of the activities in an operation, assigns costs to these activities (cost pools), identifies cost drivers that have an impact on the costs, determines a cost per unit, then estimates the true product cost per unit. We’ll work one overhead activity – ‘quality control’ – through the ABC methodology to show how these indirect costs should be allocated:
●● Identify activities and cost pools: There are many activities that account for the overhead resources in a manufacturing operation. Some of these are building repairs and maintenance or sales and marketing costs. One of these activities is quality control, a service that tries to ensure that products meet the quality standards demanded by customers. We can determine the cost pool for this activity because we can add up the cost to hire a team of quality control engineers and quality control staff, and any tools and materials purchased and used in the process. Let’s say the total cost pool for quality control is $300,000 per year.
●● Find the cost driver(s) for each activity: Next, we need to find out what processes drive the quality control costs up. In a typical manufacturing operation, we might assume that quality control costs can be divided between the lines of businesses based on volume. But we discover that it is the number of quality control inspections that is the cost driver for this activity.
●● Determine the cost driver’s rate per activity: Using our standard and custom machine-manufacturing example from above, we find the custom machines business takes up 75 per cent of the quality control activity cost pool, even though it only accounts for 10 per cent of the total units produced in our factory. If we know that we produced 1,000 custom machines, then we can determine that the cost of each quality control session for the custom machines line – known as the rate per activity – costs ($300,000 × 75 per cent)/1,000 = $225.
●● Assign the cost driver’s costs to that activity: If we simply allocated the quality control costs based on volume, we would have assigned $300,000/10,000 = $30 as the rate per activity for each custom machine. But this figure really underestimates the true costs for quality control visits in the custom machines business.
●● Calculate the true production cost per unit: If production costs, under absorption costing, were $1,000 per unit, the more accurate production cost under ABC is $1,000 + ($225 – $30) = $1,195 per unit. With this information in mind, management might find that they are underpricing their customised machines and, at the same time, overpricing their standard machines. (The short sketch after this list reproduces this arithmetic.)
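To make the numbers easy to check, here is a minimal sketch of the calculation above in Python; the figures are the chapter’s illustrative ones, not data from a real operation.

# Quality-control cost pool allocated to the custom machines line
cost_pool = 300_000      # annual quality control cost pool ($)
custom_share = 0.75      # share of QC inspections consumed by custom machines
custom_units = 1_000     # custom machines produced per year
total_units = 10_000     # all machines produced per year

abc_rate = cost_pool * custom_share / custom_units   # ABC rate: $225 per custom unit
volume_rate = cost_pool / total_units                # volume-based rate: $30 per unit

base_unit_cost = 1_000                               # cost under absorption costing
true_unit_cost = base_unit_cost + (abc_rate - volume_rate)

print(abc_rate, volume_rate, true_unit_cost)         # 225.0 30.0 1195.0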

When a complete ABC analysis is conducted, a clear picture emerges of how overhead costs are consumed. To reduce costs, managers can identify situations where cost drivers can be better managed. For example, there may be ways to reduce the number of quality control checks for customised machines, without affecting the final quality of the product. If the number of quality checks can be reduced, the quality control engineers can be utilised elsewhere.

Top practical tip
Introducing activity-based costing is not a simple task. You need to break your business activities down into their component parts, and to work through the series of steps described above in painstaking detail. And as with any changes to how a firm works, there are likely to be people who don’t buy into the model. You should therefore try a pilot scheme before implementing the system throughout your organisation. It can be used in conjunction with the existing system during this time. A lot of companies also use consultants to help them get up to speed in the methodology.


Top pitfall
Activity-based costing became popular in the 1980s but faced a bit of a backlash in the 1990s, as some companies struggled with how to apply it effectively. Many companies were reluctant to give up the systems they were used to; others did not want to learn new techniques. As with many innovative approaches to management, the biggest pitfall is thinking that ABC is a ‘quick fix’. In reality, it requires a lot of effort and adjustment so that people develop the skills to use it effectively.

Further reading
Kaplan, R.S. and Cooper, R. (1988) ‘Measure costs right: Make the right decisions’, Harvard Business Review, September–October: 96–103.
Kaplan, R.S. and Cooper, R. (1997) Cost and Effect: Using integrated cost systems to drive profitability and performance. Cambridge, MA: Harvard Business School Press.
Mol, M. and Birkinshaw, J.M. (2008) Giant Steps in Management. Harlow, UK: Pearson.
Ness, J.A. and Cucuzza, T.G. (1995) ‘Tapping the full potential of ABC’, Harvard Business Review, July–August: 130–138.


40 The balanced scorecard

Traditional management accounts, with their emphasis on financial performance, tell us how a firm has performed in the past, but offer little information about how it might perform in the future. To provide a more rounded view of how a firm is doing, Robert Kaplan and David Norton developed the ‘balanced scorecard’, a performance measurement system that considers not only financial measures, but also customer, business process and learning measures.

When to use it
●● To develop a rounded view of how a firm, or the various business lines within a firm, are performing.
●● To encourage managers to prioritise good customer service and develop their people, rather than just delivering on their financial goals.

Origins
The balanced scorecard was invented by Art Schneiderman, a senior manager at Analog Devices in the USA, in 1987. Schneiderman was responsible for putting together the performance measures that were reviewed by the board, and while his CEO, Ray Stata, wanted the non-financial information about innovation activities, the COO, Jerry Fishman, was only interested in the hard performance data. Schneiderman struggled with how to reconcile the views of his two bosses, until one day: ‘The light bulb lit: combine the financial and non-financial metrics as a single agenda item. I added a small number of key financials at the top of the scorecard, and the problem was solved to everyone’s satisfaction.’


This was the genesis of the Analog Devices ‘corporate scorecard’. It might have stayed hidden in that company, except that Schneiderman got to know Robert Kaplan, a professor, who liked the idea so much that he featured it in an article in 1993. Kaplan and his colleague, David Norton, renamed it the ‘balanced scorecard’ and it quickly became popular with a large number of firms.

What it is
The balanced scorecard considers four perspectives on a firm’s performance:
1 Financial perspective: Understanding the figures on the financial statements is, of course, important, but it provides a very narrow and backward-looking point of view on how a firm is performing.
2 Customer perspective: Customer satisfaction can be seen as the proverbial canary in the coal mine: if it is suffering, tough times are ahead. Unsatisfied customers will take their business elsewhere and this will have an impact on financial performance in the future.
3 Process perspective: Tracking key metrics such as factory utilisation, waste rates and quality provides managers with an indication of how well their operations are running.
4 Innovation perspective (also called the ‘learning and growth’ perspective): This perspective covers training programmes for employees, attitudes to individual and corporate self-improvement and the development of new ideas.

By keeping all four perspectives in mind, managers are more likely to make decisions that balance the short- and long-term needs of the business. There is also a sequential logic to the four perspectives. Investing in learning and growth (for people) helps to make business processes work more effectively, which in turn enhances the quality of products and services provided to customers. To the extent that customers are satisfied, it is likely that the firm’s financial performance will also improve.

How to use it
Kaplan and Norton suggest a multi-step process for a firm that is thinking of implementing a balanced scorecard (a small illustrative sketch follows the list):
●● Define vision: Be clear on what the overall purpose of your organisation is, and what its strategic priorities are.
●● Identify perspectives and critical success factors: How do you interpret the four key elements of the scorecard, and what do you have to do well to succeed in each?
●● Identify measures: What are the specific operational measures you will use to show you are delivering on your critical success factors? It is often very hard to develop valid and reliable measures of what matters. For example, we can all agree that highly-skilled people are important to the learning dimension of the scorecard, but it is tricky to get a good measure of people’s skills – we often end up falling back on a simple measure, such as how many have advanced qualifications.
●● Evaluate: This is about following through on your chosen measures, and keeping track of them consistently over time.
●● Identify strategies and create action plans: For the scorecard to have value, you have to be prepared to take actions as a result of the information it provides.
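As a minimal sketch of what a scorecard might look like as a simple data structure, the following Python fragment tracks a few metrics per perspective and flags any that miss their target. The metrics, values and targets are hypothetical (they loosely echo the hotel example in the tip below), not a prescribed set.

# Hypothetical balanced scorecard: a couple of metrics per perspective
scorecard = {
    "Financial":  {"Revenue growth (%)": 8.0, "Operating margin (%)": 12.5},
    "Customer":   {"Customer satisfaction (1-10)": 8.2, "Repeat bookings (%)": 61.0},
    "Process":    {"Room turnaround time (min)": 34.0, "Complaint rate (%)": 2.1},
    "Innovation": {"Training days per employee": 4.5, "Ideas implemented": 17.0},
}
targets = {
    "Revenue growth (%)": 10.0, "Operating margin (%)": 12.0,
    "Customer satisfaction (1-10)": 8.5, "Repeat bookings (%)": 60.0,
    "Room turnaround time (min)": 30.0, "Complaint rate (%)": 2.0,
    "Training days per employee": 5.0, "Ideas implemented": 15.0,
}
LOWER_IS_BETTER = {"Room turnaround time (min)", "Complaint rate (%)"}

# Report every metric that misses its target, grouped by perspective
for perspective, metrics in scorecard.items():
    for name, value in metrics.items():
        met = (value <= targets[name]) if name in LOWER_IS_BETTER else (value >= targets[name])
        if not met:
            print(f"[{perspective}] {name}: {value} vs target {targets[name]}")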

Top practical tip
To be effective as a management tool, a balanced scorecard doesn’t sit in isolation. Rather, it should exist to reinforce the firm’s strategy. In practice, this means that the measures used in the scorecard are chosen because they are key ‘levers’ for driving the firm’s strategy.

For example, if you work in a hotel chain, you know that customer satisfaction is an important measure that will directly impact loyalty, word-of-mouth sales and, ultimately, profitability. And you also can predict that customer satisfaction is linked to the quality of service from staff, the cleanliness of the rooms, and so on. These are the sorts of measures that should find their way into the balanced scorecard. The scorecard should then be reinforced by top executives talking about these dimensions of performance as important, and by a culture in which customer satisfaction and service quality are recognised as keys to success.

Top pitfalls
As with any sort of measurement tool, choosing the wrong measures can have all sorts of negative consequences. So you need to take great care to include valid and meaningful measures in your balanced scorecard, as they will have an effect on how people behave.

The other pitfall with the balanced scorecard is to include too many metrics. This is an easy mistake to make, because in the interests of being balanced you keep on adding additional metrics whenever they are suggested. But of course the more measures you include, the harder it is to keep track of all of them. A good balanced scorecard typically includes only two or three metrics for each major category, giving 10–12 in total.


Further reading
Kaplan, R.S. and Norton, D.P. (1992) ‘The balanced scorecard – measures that drive performance’, Harvard Business Review, January–February: 71–79.
Kaplan, R.S. and Norton, D.P. (1993) ‘Putting the balanced scorecard to work’, Harvard Business Review, 71(5): 134–147.
Kaplan, R.S. and Norton, D.P. (1996) The Balanced Scorecard: Translating strategy into action. Boston, MA: Harvard Business School Press.
Mol, M. and Birkinshaw, J.M. (2008) Giant Steps in Management. Harlow, UK: Pearson.


41 The DuPont identity

The determinants of a firm’s return on equity (ROE) can be analysed using a tool called the ‘DuPont identity’, named for the company that popularised its use. The DuPont identity expresses the ROE in terms of the firm’s profitability, asset efficiency and leverage.

When to use it
●● To analyse a firm’s financial performance.
●● To drill down on the underlying drivers of profitability.
●● To compare the performance of competing firms.

Origins
The methodology for financial analysis that we now call the DuPont identity was invented in 1914 by F. Donaldson Brown, an electrical engineer employed by E.I. DuPont de Nemours and Company (DuPont). After DuPont bought a stake in General Motors Corp. (GM), Brown was put in charge of imposing financial discipline on the automaker. Alfred P. Sloan, the legendary former chairman of GM, credited the automaker’s success from the 1920s to the 1950s to the systems Brown put in place. This success gave the DuPont identity a high level of visibility among US corporations. It remains a popular approach to financial analysis to this day.

What it is
Return on equity (ROE) is an important number for investors, because it is a clear measure of how well a firm creates value for its shareholders. But ROE is a ratio that can be manipulated by managers looking to boost the value of their firm. For example, debt can be taken on to buy back shares – reducing the equity denominator but adding risk to the firm. Brown’s insight was to separate out the drivers of ROE so managers could see the source of improvements in that ratio. These components are:
●● Operating efficiency (how tightly managed a firm’s operations are) – as measured by its net profit margin.
●● Asset use efficiency (how effective a firm is at squeezing value out of its assets) – as measured by asset turnover.
●● Financial leverage (how much borrowed money the firm is using) – as measured by the equity multiplier.

ROE = (Net income / Sales) × (Sales / Total assets) × (Total assets / Total equity)
    = Net profit margin × Asset turnover × Equity multiplier

These three components create an ‘identity’, which means that when multiplied together, the result, by definition, is return on equity.

How to use it
By breaking out the three components of return on equity, you get a far more detailed picture of how a firm is performing, and this helps you to make sense of its drivers of profitability, and how it is doing compared to its competitors. For example, let’s consider the following hypothetical selected information for Kortright Storage:

Kortright Storage

Balance sheet at the end of 2013
  Total assets            25,000
  Shareholders’ equity     5,000

Income statement for 2013
  Revenue                 10,000
  Net income               2,000

Using the formula above, we can calculate that Kortright Storage’s ROE is:

ROE = ($2,000/$10,000) × ($10,000/$25,000) × ($25,000/$5,000) = 0.20 × 0.40 × 5 = 0.40, or 40 per cent
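The same calculation as a minimal Python sketch (the figures come from the Kortright Storage table above):

def dupont_roe(net_income, sales, total_assets, total_equity):
    """Return (net profit margin, asset turnover, equity multiplier, ROE)."""
    margin = net_income / sales
    turnover = sales / total_assets
    multiplier = total_assets / total_equity
    return margin, turnover, multiplier, margin * turnover * multiplier

print(dupont_roe(2_000, 10_000, 25_000, 5_000))  # (0.2, 0.4, 5.0, ~0.4) - i.e. ROE = 40 per cent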


This analysis gives you the three ratios that make up ROE: net profit margin is ($2,000/$10,000) or 20 per cent; total asset turnover is ($10,000/$25,000) or 40 per cent; and the equity multiplier is ($25,000/$5,000) or 5. Each of these numbers can then be monitored over time, and compared with those of competitors.

An increase in the ROE due to a higher net income figure is a welcome sign. But if the increase in ROE is due to a share buyback that reduces shareholders’ equity – but which results in more debt – then the firm could be over-leveraged. Similarly, an over-leveraged firm that takes on more equity to reduce its risk will see its ROE drop, but this should be seen as a good move by investors in the firm.

Since there are four figures that can have an impact on ROE – sales, net profit, total assets and shareholders’ equity – a firm’s condition can improve or deteriorate in the short term without an impact on ROE. For example, declining net income can be masked by a combination of a higher asset turnover figure and a higher equity multiplier. Scrutinising a firm’s results by using the DuPont identity would uncover this deteriorating situation.

There is also an ‘extended’ DuPont model that takes into account the impact of taxes and interest. Formally, the extended DuPont formula is:

ROE = (Net income / EBT) × (EBT / EBIT) × (EBIT / Sales) × (Sales / Total assets) × (Total assets / Total equity)

where the five terms are, in order, the tax burden, the interest burden, the operating profit margin, asset turnover and the equity multiplier (financial leverage).

The latter two terms in this formula are exactly the same as before. The difference is that the first term (profit margin ratio) is decomposed into three elements:

● Net income/earnings before taxes = tax burden ratio.

● Earnings before taxes/earnings before interest and taxes = interest burden ratio.

● Earnings before interest and taxes/sales = operating profit margin.

The primary advantage of this revised formula is that it allows managers to look at a firm’s operating profits, before the impact of interest or taxes. In addition, managers can examine a firm’s tax burden and interest burden. If taxes are high, the tax burden ratio falls and depresses ROE. Similarly, if interest expenses are large due to high levels of leverage, the interest burden ratio is pushed downwards, thus lowering ROE.
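To make the arithmetic concrete, here is a minimal sketch in Python of both decompositions, using the Kortright Storage figures from above; the extended version takes whatever EBT and EBIT figures you have to hand, since the chapter’s example does not supply them.

```python
def dupont_roe(net_income, sales, total_assets, total_equity):
    """Three-factor DuPont: margin x turnover x equity multiplier."""
    margin = net_income / sales
    turnover = sales / total_assets
    multiplier = total_assets / total_equity
    return margin, turnover, multiplier, margin * turnover * multiplier

margin, turnover, multiplier, roe = dupont_roe(2_000, 10_000, 25_000, 5_000)
print(f"ROE = {margin:.2f} x {turnover:.2f} x {multiplier:.0f} = {roe:.0%}")
# -> ROE = 0.20 x 0.40 x 5 = 40%

def extended_dupont_roe(net_income, ebt, ebit, sales, total_assets, total_equity):
    """Five-factor DuPont: tax burden x interest burden x operating margin
    x asset turnover x equity multiplier."""
    return (net_income / ebt) * (ebt / ebit) * (ebit / sales) \
        * (sales / total_assets) * (total_assets / total_equity)
```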


Top practical tip

The DuPont identity provides a way of drilling down into the underlying drivers of a firm’s profitability. It is a simple and practical approach. However, looking at a single firm’s ratios at a single point in time does not tell you very much. When using the DuPont model it is best to look at how one firm’s ratios change over time, or to compare two similar companies.

For example, if one company has a lower ROE than a competitor, there could be several explanations. One possibility might be that creditors see the firm as a risky bet so they charge higher interest. Another might be that it simply faces higher tax rates that could be remedied by restructuring its operations. Asking questions like these leads to better knowledge of the firm and how it should be valued.

Top pitfall

Like any analysis based on financial statements, the DuPont analysis is based on past accounting figures, which may not reflect current conditions in the firm. So you need to be careful to understand how these ratios are changing over time. It is also important to note that the DuPont analysis does not consider the firm’s cost of capital – that is a separate question that is dealt with in another part of the book.

Further reading

Soliman, M.T. (2008) ‘The use of DuPont analysis by market participants’, The Accounting Review, 83(3): 823–853.

White, G.I., Sondhi, A.C. and Fried, D. (1994) The Analysis and Use of Financial Statements. Hoboken, NJ: John Wiley & Sons.


42: Economic value added

Economic value added (EVA) takes into account a firm’s total invested capital when evaluating its financial performance. The figure is arrived at by subtracting a capital charge – on invested capital – from the firm’s net operating profit after taxes. The result can be thought of as the real return investors are getting on their money.

When to use it

● To get an accurate measure of the return a firm has generated, in a given time period, on the capital invested in the firm.

● To uncover opportunities to improve the way a firm uses its invested capital.

Origins

The notion that a firm’s true ‘economic profit’ requires an adjustment for its cost of capital dates back at least to the 1960s. The idea waned in popularity during the 1980s, when a boom in restructurings and leveraged buyouts encouraged investors to focus on simpler measures of corporate performance, such as return on equity or earnings per share. When the boom ended in the 1990s and investors were looking for a higher level of scrutiny on their investments, Stern Stewart & Co., a consulting firm, ‘reinvented’ the concept as ‘economic value added’, trademarking the term.

What it is

EVA is a tougher standard than other measures for evaluating how capital is used in a firm. Return on equity, for example, can be boosted simply by taking on more debt at the expense of financial strength; earnings per share can be dressed up by using cash to buy back shares, even though a firm’s profitability has not changed.

The EVA calculation aims to gather together all of the economic capital – provided by shareholders and lenders – on which a firm relies to run its business, assigns a cost to that capital, then deducts that ‘capital charge’ from the firm’s net operating profit after taxes, or NOPAT. Here is the formula for EVA:

   Net sales
 – Operating expenses
 = Operating profit (EBIT)
 – Taxes
 = Net operating profit after tax (NOPAT)
 – Capital charges (Invested capital × Cost of capital)
 = Economic value added

While this calculation seems conceptually straightforward, there are some financial adjustments needed, as described below. As long as EVA is positive, it can be said that value is being created in the firm – the firm is earning more than the amount it incurs on its capital. However, it is also true that a firm can be profitable (in terms of what is reported in the profit and loss account) while actually destroying value for shareholders because their capital is being under-utilised.

How to use it

While conceptually simple, EVA is actually quite tricky to calculate accurately, as it requires some adjustments to the standard figures that are reported in financial statements. Many of the figures needed for the EVA calculation can be taken straight from the financial statements. It’s the last part – the adjustments that convert accounting accruals to cash and recognise off-balance-sheet sources of funds – that takes time.

EVA is best used in manufacturing-intensive firms, which rely on fixed assets such as property and equipment as part of their business model. When an EVA calculation is conducted in such firms, the true test is whether the return from investing in these firms is higher than what the investor can earn from other ventures, or even from investing in the stock or bond market. Owners preparing a firm for sale will also benefit from an EVA calculation to identify pockets of inefficiency that can be remedied before the firm is put up for sale, with the objective of attracting a higher bid for the firm.

Here is an example of how EVA can be calculated and how the EVA figure differs from what can be learned from other financial measures. Let’s say Cambrian Manufacturing makes oilfield equipment and has the following financial results for 2013:


Balance sheet

Assets
  Current assets                  $20,000
  Long-term assets                $60,000
  Total assets                    $80,000

Liabilities
  Current liabilities
    Accounts payable               $2,000
    Taxes payable                    $500
    Other current liabilities      $6,000
    Short-term debt                $2,000
  Total current liabilities       $10,500
  Long-term liabilities           $20,000
  Shareholders’ equity            $49,500
  Total liabilities and equity    $80,000

Income statement – selected details
  Net sales                               $155,000
  Operating expenses                      $140,000
  Operating profit (EBIT)                  $15,000
  Taxes                                     $4,000
  Net operating profit after tax (NOPAT)   $11,000

The NIBCLs (non-interest-bearing current liabilities) are accounts payable, taxes payable and other current liabilities. To keep the example as simple as we can, the NOPAT is shown. Here are the other assumptions we’ve made:

● Cambrian has a few operating leases and the present value of these leases is $15,000;

● Cambrian’s interest rate on both its short-term and long-term debt is 9 per cent, which means that interest payments are ($2,000 + $20,000) × 9 per cent = $1,980;

● other income and expenses, which we will use to compute a net income figure for Cambrian, add up to $200, net.

Given these assumptions, the following tables calculate invested capital, the adjustments to be made, the capital charge and, finally, the EVA figure:


Total invested capital
  Total assets                            $80,000
  Less: NIBCLs
    Accounts payable                       $2,000
    Taxes payable                            $500
    Other current liabilities              $6,000
  Invested capital                        $71,500

Adjustments
  Invested capital                        $71,500
  Add: Present value of operating leases  $15,000
  Invested capital, adjusted              $86,500

Capital charge
  Invested capital, adjusted              $86,500
  Cost of capital                             15%
  Capital charge                          $12,975

Economic value added
  Net operating profit after tax (NOPAT)  $11,000
  Capital charge                          $12,975
  Economic value added                    $(1,975)

The EVA figure for Cambrian is negative; one possible reason is that Cambrian had too much invested in long-term assets. To improve performance, it might consider selling under-utilised assets. In contrast, if we were to calculate a return on equity figure for Cambrian, it would show what many would consider a healthy figure for a manufacturing operation:

● Net profit = NOPAT, $11,000 – interest expenses, $1,980 + other income and expenses, $200 = $9,220.

● Return on equity = $9,220/$49,500 = 18.6 per cent.
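The whole calculation is easy to script. Here is a minimal sketch in Python using the Cambrian figures and the 15 per cent cost of capital assumed above:

```python
total_assets = 80_000
nibcls = 2_000 + 500 + 6_000      # accounts payable, taxes payable, other current liabilities
pv_operating_leases = 15_000      # off-balance-sheet adjustment
nopat = 11_000
cost_of_capital = 0.15

invested_capital = total_assets - nibcls + pv_operating_leases  # 86,500
capital_charge = invested_capital * cost_of_capital             # 12,975
eva = nopat - capital_charge

print(f"EVA = ${eva:,.0f}")  # -> EVA = $-1,975: value is being destroyed
```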


Top practical tip

A detailed EVA calculation requires a lot of work to capture all of the adjustments, which is why Stern Stewart made a lot of money helping companies to use this methodology. However, rather than bringing consultants in to help, you can usually get a rough estimate of a firm’s cost of capital quite quickly and use that to analyse how profitable it is, according to EVA principles.

This is worthwhile, because it helps to focus the minds of those in the firm. Top managers typically feel more responsible for a measure that they have more control over (such as EVA) rather than one that they feel they cannot control as well (such as market price per share).

Top pitfall

An EVA analysis can give you a distorted picture when used on a year-to-year basis. For example, in high-growth firms, the bulk of the value can be attributed to future growth, which won’t materialise in the short term. You are generally better off using EVA as a current-period measure of how much economic value a firm creates or destroys, then using the information to suggest improvements to the firm’s operations.

Further reading

Stern, J.M. (1998) ‘The origins of EVA’, The Chicago Booth Business School Magazine, Summer edition, www.chicagobooth.edu/magazine/summer98/Stern.html

Stern, J.M., Shiely, J.S. and Ross, I. (2003) The EVA Challenge: Implementing value-added change in an organization. Hoboken, NJ: John Wiley & Sons.

Stewart, B. (2013) Best-Practice EVA: The definitive guide to measuring and maximizing shareholder value. Hoboken, NJ: John Wiley & Sons.


43: Ratio analysis

How do you know if a firm is doing well, is an industry leader, or if it can meet its debt obligations? Ratio analysis is a form of financial statement analysis that is used to obtain a quick indication of a firm’s financial performance in several key areas.

When to use it

● To compare a firm’s financial performance with industry averages.

● To see how a firm’s performance in certain areas is changing over time.

● To assess a firm’s financial viability – whether it can cover its debts.

Origins

The first cases of financial statement analysis can be traced back to the industrialisation of the USA in the second half of the nineteenth century. In this era, banks became increasingly aware of the risks of lending money to businesses that might not repay their loans, so they started to develop techniques for analysing the financial statements of potential borrowers. These techniques allowed banks to develop simple rules of thumb about whether or not to lend money. For example, during the 1890s, the notion of comparing the current assets of an enterprise to its current liabilities (known as the ‘current ratio’) was developed. Gradually, these methods became more sophisticated, and now there are dozens of different ratios that analysts keep track of.

What it is

There are four main categories of financial ratios. Consider the following financial results for three global firms in the smartphone industry:

                              Firm A     Firm B     Firm C
Net sales                    217,462    170,910     17,497
Cost of sales                130,934    106,606     10,138
Gross margin                  86,528     64,304      7,359
Net income (loss)             28,978     37,037     (1,017)
Total shareholders’ equity   142,649    123,549      9,169
Accounts receivable, net      23,761     13,102      3,994
Accounts payable               1,002     22,367      2,536
Inventories                   18,195      1,764      1,107
Land and buildings            71,789      3,309        779
Cash equivalents              57,751    146,761      5,061
Total assets                 203,562    207,000     34,681

Looking at the raw figures, we can see that Firm A has the highest sales, Firm B has the highest net income (among other market-leading figures) and Firm C has the lowest level of inventory relative to its sales. In order to compare one firm’s results against another’s, the creation of common, standardised ratios is needed. The four major categories of ratios are as follows:

1 Profit sustainability: How well is your firm performing over a specific period? Will it have the financial resources to continue serving its customers tomorrow as well as today? Useful ratios here are: sales growth (sales for current period/sales for previous period), return on assets (net profit/total assets) and return on equity (net profit/shareholders’ equity).

2 Operational efficiency: How efficiently are you utilising your assets and managing your liabilities? These ratios are used to compare performance over multiple periods. Examples include: inventory turns (cost of goods sold/inventory), days receivable (accounts receivable/[sales/365]) and days payable (accounts payable/[cost of goods sold/365]).

3 Liquidity: Does your firm have enough cash on an ongoing basis to meet its operational obligations? This is an important indication of financial health. Key ratios here are: current ratio (current assets/current liabilities) and quick ratio ([cash + marketable securities + accounts receivable]/current liabilities).

4 Leverage (also known as gearing): To what degree does your firm utilise borrowed money and what is its level of risk? Lenders often use this information to determine a firm’s ability to repay debt. Examples are: debt-to-equity ratio (debt/equity) and interest coverage (EBIT/interest expense).


These are some of the standard ratios used in business practice and are provided as guidelines. Not all these ratios will provide the information you need to support your particular decisions and strategies; you can also develop your own ratios and indicators based on what you consider important and meaningful to your firm.

How to use it

By comparing the ratio of a figure – or a combination of figures – to another figure, trends can be observed. Using just the data from the table above, the following ratios can be created:

                                           Firm A    Firm B    Firm C
Inventory turns                               7.2      60.4       9.2
Days inventory                               50.7       6.0      39.9
Days receivable                              39.9      28.0      83.3
Days payable                                  2.8      76.6      91.3
ROA                                         14.2%     17.9%     –2.9%
ROA, excluding cash and cash equivalents    19.9%     61.5%     –3.4%
ROE                                         20.3%     30.0%    –11.1%

You can see that while Firm A is generating the most revenue, Firm B is likely to be an industry leader because it is very liquid, efficient and profitable. Having this standardised set of ratios allows managers to make better decisions – for example, which firms to invest in and which firms to monitor. Shareholders also use ratios to understand how their firms are performing versus their peers.
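As a quick illustration of how these ratios fall out of the raw figures, here is a minimal sketch in Python computing a handful of them for Firm A (inventory turns here uses cost of sales, consistent with the table above):

```python
firm_a = {
    "net_sales": 217_462, "cost_of_sales": 130_934, "net_income": 28_978,
    "equity": 142_649, "receivables": 23_761, "payables": 1_002,
    "inventories": 18_195, "total_assets": 203_562,
}

inventory_turns = firm_a["cost_of_sales"] / firm_a["inventories"]      # ~7.2
days_receivable = firm_a["receivables"] / (firm_a["net_sales"] / 365)  # ~39.9
days_payable = firm_a["payables"] / (firm_a["cost_of_sales"] / 365)    # ~2.8
roa = firm_a["net_income"] / firm_a["total_assets"]                    # ~14.2%
roe = firm_a["net_income"] / firm_a["equity"]                          # ~20.3%

print(f"Inventory turns: {inventory_turns:.1f}, ROE: {roe:.1%}")
```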

Top practical tip

Make sure to calculate a number of different ratios to ensure you get an accurate picture. Each ratio gives some insight, but the more you have, the more rounded your view becomes. And as with all forms of financial analysis, ratios don’t provide the ‘answer’ as to why a firm is performing well or badly – they are simply a way of homing in on the key questions that need to be answered in a more considered way.


Top pitfall

For ratios to be meaningful they need to be based on accurate financial information, otherwise you run the risk of falling into the ‘garbage in–garbage out’ trap. One key mistake is comparing fiscal year results for a number of comparable firms but failing to recognise that firms have different fiscal year-ends (some are at the end of January, others have a year-end in June). Ratios are also only really meaningful when used in a comparative way – for example, looking at how a set of ratios changes over time in the same firm, or how a number of competing firms have very different ratios.

Further reading

McKenzie, W. (2013) FT Guide to Using and Interpreting Company Accounts, 4th edition. Upper Saddle River, NJ: FT Press/Prentice Hall.

Ormiston, A.M. and Fraser, L.M. (2012) Understanding Financial Statements, 10th edition. Harlow, UK: Pearson Education.


PART SIX: Finance

Finance provides a common language and set of tools that helps managers and the owners of capital understand each other. The former look for money to help their businesses grow. The latter want a return on the money and a clear understanding of the risks involved. Of all the subjects in this book, finance has the potential to be the most confusing due to the many technical terms being used. Once you start reading through this section, you will find that much of finance, at least at this level, relies on basic arithmetic. Moreover, learning to interpret the results is, arguably, at least as important as knowing which figures to add or multiply together.

The foundational concept in finance is that money has earning capacity and, thus, is worth more today than at some time in the future. This is the basis for the ‘time value of money’ section, where the concept of compounding is discussed.

At the firm level, managers are interested in knowing what price they have to pay for money, in the form of potential future returns, and how best to allocate it. Computing the firm’s ‘weighted average cost of capital’ will give an indication of how much a firm pays to its shareholders and debt-holders. Included in the calculation will be the expectation of returns from shareholders (the cost of equity), and from debt-holders (the cost of debt). The ‘capital asset pricing model’ helps to estimate an expected return for a firm’s equity holders, and ‘bond valuation’ techniques assist the issuers and buyers of tradeable debt to determine, given various conditions, what their bonds might be worth in the market.

As an incentive to managers, firms often issue stock options – giving managers an opportunity to purchase shares at a certain price at some time in the future. The ‘Black-Scholes options pricing model’ is a starting point for those interested in determining the theoretical price of the options.

Many observers have suggested that there is an optimal range for how much debt a company should take on: be conservative and you will rely too much on expensive equity capital; take on too much and you risk bankrupting the firm. The ‘Modigliani-Miller theorem’ suggests that the mix of debt and equity is irrelevant and that the cost of capital should remain constant because, for example, an overzealous reliance on cheap debt will result in a higher price demanded by equity holders, to compensate for the additional risks.

Knowing where to deploy investors’ money is the focus of the ‘capital budgeting’ section, which outlines the various models for evaluating investment options. If the investment option is buying another firm, potential buyers and managers of firms usually want to know what the target might be worth, and this is the focus of the ‘valuing the firm’ section. Finally, investors who have portfolios of equity investments typically look for the best way to combine returns and risk. ‘Modern portfolio theory’ suggests that there is an optimal set of portfolios for every given level of risk the investor wishes to undertake.


44: Black-Scholes options pricing model

In the world of financial markets, when you buy an ‘option’ on a corporate stock you are paying for the right, but not the obligation, to buy or sell an amount of that stock at a pre-specified price. How much should you pay for such an option? The Black-Scholes model provides you with an answer to that question.

When to use it

● To calculate prices for buying or selling options in the financial markets.

● To develop ‘hedging’ strategies if you work for a bank or hedge fund.

● To work out what your stock options as an employee are worth.

Origins

In 1973, Fischer Black and Myron Scholes published a paper entitled ‘The pricing of options and corporate liabilities’ in the Journal of Political Economy. In that paper, they described a novel approach to pricing options. The equation became known as the ‘Black-Scholes options pricing model’ after Robert C. Merton published a paper based on and extending Black’s and Scholes’ work. Merton and Scholes received the 1997 Nobel Prize in Economics for their work (Black died in 1995).

What it is

The Black-Scholes model is used to calculate the theoretical price of ‘call options’ (the right but not obligation to buy a stock) and ‘put options’ (the right but not obligation to sell a stock). This information is very valuable to investors, many of whom buy and sell options as an integral part of their investment strategy.


The basic intuition behind the Black-Scholes model is that the value of the call option is a combination of two factors. One is the asset’s price volatility – defined by how closely the underlying asset’s stock price has fluctuated in relation to the market’s returns. The other is the minimum value, or present value, of the option, given the risk-free rate.

The formula calculates the minimum value by growing the stock price at the risk-free rate, with returns compounded over the life of the option contract. For instance, if the current stock price is $50, the call option expires after 10 years and the risk-free rate is 2.75 per cent, then the stock price is $65.58 after 10 years. If the exercise price is $50, we subtract this exercise price from $65.58 to get the call option’s nominal value of $15.58. Discounting this nominal value to today, we get a present value – the minimum value – of the option of $11.88.

Higher-beta stocks (that is, those with more volatile prices) will deliver – in the Black-Scholes model – higher returns than low-beta stocks. This is because volatile prices can result in a much higher stock price and, thus, a significantly more valuable call option. But the opposite scenario – much lower stock prices – will not have a correspondingly negative impact because there will be no losses beyond the price of the option. Long-dated options combined with a highly volatile stock price result in options with the highest expected values. Conversely (and intuitively), options that have little or no volatility and expiration dates that are fast approaching will have the lowest expected values.

How to use it

Treat the Black-Scholes model as a formula into which you can enter the variables and calculate the price of the option. There are free Black-Scholes calculators available online and the calculation can be performed in a spreadsheet program. As with any other model, the assumptions made will have a large impact on the result. The Black-Scholes formula considers the following variables:

● the current underlying price;

● the strike price of the option;

● the length of time until the option expires, expressed as a per cent of a year;

● the implied volatility;

● the risk-free rate.

Here is the Black-Scholes formula:

C = S N(d1) – N(d2) K e^(–rt)

d1 = [ln(S/K) + (r + σ²/2)t] / (σ√t)

d2 = d1 – σ√t

Where:

C = call premium
S = current stock price
N = cumulative standard normal distribution
e = exponential term
K = the strike price of the option

σ = standard deviation of stock returns
ln = natural logarithm
r = risk-free rate
t = time until option expiration

There are two parts to the model. The first part, S N(d1), calculates the expected return from purchasing the underlying securities. The second part, N(d2) K e^(–rt), calculates the present value of the option given the risk-free rate.

The Black-Scholes model makes a series of simplifying assumptions:

● These options can only be exercised at their expiration date.

● There are no dividends paid from now until the option’s expiration date.

● Markets are efficient.

● There are no commissions.

● The risk-free rate is known and the rate is constant throughout the life of the option.

● The volatility of the underlying stock is known and the rate is constant.

● Returns follow a lognormal distribution.
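For readers who prefer a script to an online calculator, here is a minimal sketch in Python. It uses the chapter’s $50 stock, $50 strike, 10-year, 2.75 per cent example, with an assumed 20 per cent volatility (the text does not specify one):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Cumulative standard normal distribution, N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, t):
    """Theoretical price of a European call option."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    # Expected benefit of receiving the stock, minus the discounted strike
    return S * norm_cdf(d1) - norm_cdf(d2) * K * exp(-r * t)

print(black_scholes_call(S=50, K=50, r=0.0275, sigma=0.20, t=10))
```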

Top practical tip

Using Black-Scholes does not require one to understand the mathematics behind the model. Simply use an online calculator or a spreadsheet program to perform the calculation. Understand, however, that options traders today rely on even more sophisticated valuation techniques to help them identify small imperfections in the market. The mathematical models used today far outstrip those devised by Black, Scholes and Merton in the 1970s.

Top pitfall

The assumptions the Black-Scholes model makes may not be true when tested in the market. Like any other model, Black-Scholes is only as good as its assumptions. An important one here is that it is based on past volatility, which may not be an accurate reflection of future volatility. Also, unusual circumstances aren’t reflected in the calculation, such as a fundamental change in the business, a restructuring, a merger, or another major change in the company’s competitive position or business conditions.

Further reading

Black, F. and Scholes, M. (1973) ‘The pricing of options and corporate liabilities’, Journal of Political Economy, 81(3): 637–654.

Hull, J.C. (1997) Options, Futures, and Other Derivatives. Upper Saddle River, NJ: Prentice Hall.

Merton, R.C. (1973) ‘Theory of rational option pricing’, Bell Journal of Economics and Management Science, 4(1): 141–183.


45: Bond valuation

A bond is a debt-based investment where an investor loans money to a company or government, and receives a fixed interest rate for a fixed period of time in return. Bonds can then be traded, and bond valuation is a method for determining their price.

When to use it

● To determine the price you should buy or sell a bond for, as an investor.

● To evaluate the merits of raising money through a bond issue.

Origins

The idea of raising money from investors by issuing bonds dates back at least to the twelfth century and the republic of Venice. Early trading companies (such as the Dutch East India Company) were active issuers of bonds. National governments also began to raise money through bond issues from the sixteenth century onwards. Over the years, the market for bonds has become enormous, with some national governments carrying borrowing in excess of the size of their economies. The pricing and trading of bonds is therefore an important part of finance.

What it is

The basic idea is that the value of a bond is the present value of expected future cash flows – a net present value calculation, as described in the ‘time value of money’ chapter. A typical bond has a principal amount, a rate of interest it will pay and a maturity date. Valuing a bond involves three steps:


1 Estimate the expected cash flows: The calculation is simple as each bond usually has a coupon (i.e. an interest rate). But it can get complex, depending on the issuer: think of the difference between US government and Argentine government bonds, for example. The former will pay its coupon reliably (it has only defaulted once, during the war of 1812) while the latter may not (Argentina has defaulted on its debt 12 times since it became an independent country).

2 Determine the appropriate interest rate or discount rate that should be used to discount the cash flows: If a government bond is paying 2 per cent but interest rates suddenly rise to 3 per cent, the value of the bond will fall. Remember: when interest rates rise, bond values will fall, and the reverse is true. If there are extraordinary circumstances that suggest the interest rate is not sufficient, then an appropriate discount rate should be determined. For example, if a company reveals that it is short of cash, then the risk of non-payment rises.

3 Calculate the present value of the expected cash flows: Add up the discounted values to arrive at the estimated present value of the bond.

The original notion behind bonds was that they would be held to maturity, with investors simply receiving interest payments based on the coupon rate. Today, bonds are actively traded – in fact, the market for tradable bonds is larger than the market for equities. Bond valuation is therefore highly dynamic, and prices are affected by a variety of factors, some within the control of the issuing firm, some beyond their control.

How to use it

Here is a simple example. Redwood Lane Ships, a publicly-traded firm, intends to issue $30 million worth of bonds to finance the build-out of a shipbuilding facility in Maine. These bonds have the following features:

● Face value (or a par value) of $1,000 – this is the nominal value on which Redwood Lane Ships will pay interest. Do not confuse this with the price of the bond, as you will see in the calculations to follow.

● Coupon of 5 per cent – this is the annual interest rate Redwood Lane Ships will pay on the face value of the bond. In this case, the firm will pay $50 at the end of each year.

● Maturity date – the bonds will be issued on January 1st, 2015, and will mature on January 1st, 2025. This is a medium-term, 10-year bond issue.

Let’s imagine that investors are expecting a yield to maturity (YTM) of 5 per cent for bonds from similarly-rated issuers. The YTM is the annual rate of return and also, then, the discount rate. Given that the coupon is exactly the same as the YTM, the present value – or the price of the bond – is $1,000.


Face value: 1,000   Coupon: 5%   Expected YTM or discount rate: 5%

Bond valuation
Year    Period    Cash flow    Discount factor    Present value
2014       1          50.00         0.95                47.62
2015       2          50.00         0.91                45.35
2016       3          50.00         0.86                43.19
2017       4          50.00         0.82                41.14
2018       5          50.00         0.78                39.18
2019       6          50.00         0.75                37.31
2020       7          50.00         0.71                35.53
2021       8          50.00         0.68                33.84
2022       9          50.00         0.64                32.23
2023      10       1,050.00         0.61               644.61
Price (PV) of Redwood Lane Ships’ bond              1,000.00

What happens if the coupon is higher than investors are expecting? If investors are only expecting a YTM of 3 per cent, then the price of Redwood Lane Ships’ bond rises to account for the fact that it pays a higher coupon than the YTM. Following the same logic, if investors expect a YTM of 7 per cent – which is higher than the coupon rate – the price of the bond will fall.

Face value: 1,000   Coupon: 5%   Expected YTM or discount rate: 3%

Bond valuation
Year    Period    Cash flow    Discount factor    Present value
2014       1          50.00         0.97                48.54
2015       2          50.00         0.94                47.13
2016       3          50.00         0.92                45.76
2017       4          50.00         0.89                44.42
2018       5          50.00         0.86                43.13
2019       6          50.00         0.84                41.87
2020       7          50.00         0.81                40.65
2021       8          50.00         0.79                39.47
2022       9          50.00         0.77                38.32
2023      10       1,050.00         0.74               781.30
Price (PV) of Redwood Lane Ships’ bond              1,170.60

Face value: 1,000   Coupon: 5%   Expected YTM or discount rate: 7%

Bond valuation
Year    Period    Cash flow    Discount factor    Present value
2014       1          50.00         0.93                46.73
2015       2          50.00         0.87                43.67
2016       3          50.00         0.82                40.81
2017       4          50.00         0.76                38.14
2018       5          50.00         0.71                35.65
2019       6          50.00         0.67                33.32
2020       7          50.00         0.62                31.14
2021       8          50.00         0.58                29.10
2022       9          50.00         0.54                27.20
2023      10       1,050.00         0.51               533.77
Price (PV) of Redwood Lane Ships’ bond                859.53
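All three prices can be reproduced with a few lines of code. Here is a minimal sketch in Python of the present-value calculation behind the tables:

```python
def bond_price(face, coupon_rate, ytm, years):
    """Present value of a bond's coupons plus its repaid face value."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

for ytm in (0.05, 0.03, 0.07):
    print(f"YTM {ytm:.0%}: price = {bond_price(1_000, 0.05, ytm, 10):,.2f}")
# -> 1,000.00, 1,170.60 and 859.53, matching the tables above
```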

Interest rates can shift as a result of economic cycles, and investors can raise or lower the risk profile of a firm given its recent or expected future performance. At any time, the market price of the bond will fluctuate to account for these influences. There is one certain fact, however: the closer the bond gets to its maturity date, the more its price will approach its par value. If we return to the example of Redwood Lane Ships above, the bond with the higher price of $1,170.60 (with the expected YTM of 3 per cent) will see its price move towards the $1,000 mark over time. Here is the same bond, three years before it matures:

Face value: 1,000   Coupon: 5%   Expected YTM or discount rate: 3%

Bond valuation
Year    Period    Cash flow    Discount factor    Present value
2021       1          50.00         0.97                48.54
2022       2          50.00         0.94                47.13
2023       3       1,050.00         0.92               960.90
Price (PV) of Redwood Lane Ships’ bond              1,056.57

Likewise, if a bond is priced below its par value, its price will approach par value as it gets closer to its maturity date.

It is worth noting that bonds come in a large variety of forms: treasury inflation-protected securities; zero-coupon bonds; callable bonds; convertible bonds, and so on. They differ on such dimensions as how frequently the bond pays interest (zero-coupon bonds, for example, pay no interest), whether the issuers have the option of ‘calling’ the bonds back, or if the bonds can be converted to shares at the investor’s discretion. Over the last 20 years, banks have been highly innovative in coming up with new bond variants, though as we saw in the financial crisis of 2008, such innovations can also create problems when they are not properly understood.

Top practical tip

Because bonds are usually issued with a stated coupon rate and maturity date, a bond’s market price hinges on the yield currently expected by investors. If a bond has a higher coupon rate than the current expected yield, investors will pay more for the bond.

In practice, investors may be able to identify and profit from mispriced bonds. The market for bonds is larger than the market for equities, and the transaction sizes for bond trades are larger. Bonds also trade less frequently than equities, so there is scope for astute investors to find pricing anomalies in the bond market and profit from them.


Top pitfall

Bonds are not always ‘safer’ than equities, as the collapse in the mortgage-backed securities markets showed in 2008–2009. While bonds may pay a regular coupon, their price can fluctuate dramatically as a result of industry conditions.

Further reading

Berk, J. and DeMarzo, P. (2013) Corporate Finance: The core. Harlow, UK: Pearson.

Brealey, R.A. and Myers, S.C. (2013) Principles of Corporate Finance, 11th edition. New York: McGraw-Hill.


46: Capital asset pricing model

The capital asset pricing model (or CAPM, as it is universally known) estimates the expected return for a firm’s stock. The calculation uses the prevailing risk-free rate, the stock’s trading history and the return that investors are expecting from owning shares.

When to use it

● To estimate the price you should pay for a security, such as a share in a company.

● To understand the trade-off between risk and return for an investor.

Origins

CAPM was developed by William Sharpe. In 1960, Sharpe introduced himself to Harry Markowitz, inventor of ‘modern portfolio theory’ (see Chapter 48), in search of a doctoral dissertation topic. Sharpe decided to investigate portfolio theory, and this led him to a novel way of thinking about the riskiness of individual securities, and ultimately to a way of estimating the value of these assets. The CAPM model, as it became known, had a dramatic impact on the entire financial community – both investment professionals and corporate financial officers. In 1990, Sharpe won the Nobel Prize in Economics alongside Markowitz and Merton Miller.

What it is

Investors want to earn returns based on the time value of money and the risk they are taking. The CAPM model accounts for both of these in its formula. First, the risk-free rate, or ‘Rf’, represents the time value of money. This is the return earned simply by buying the risk-free asset – the current yield on a 10-year US government bond, for example. Second, the asset’s risk profile is estimated based on how much its historical return has deviated from the market’s return. Given that the market has a beta of 1.0 (denoted by βa), an asset whose returns match the market (for example, the shares in a large diversified company) would have a beta close to 1.0. In contrast, an asset whose returns fluctuate with greater amplitude (for example, a high-technology stock) would have a beta higher than 1.0. A defensive stock (for example, a dividend-paying utility) might have a beta lower than 1.0, suggesting its shares are less risky than the market as a whole.

Sharpe developed a simple formula linking these ideas together:

Ra = Rf + βa(Rm – Rf)

Where:

Ra = the required return for the asset
Rf = the risk-free rate
βa = the beta of the asset
Rm = the expected market return

The CAPM indicates that the expected return for a stock is the sum of the risk-free rate, Rf, and the risk premium, or βa(Rm – Rf). The risk premium is the product of the security’s beta and the market’s excess return.

Consider a simple example. Assume the risk-free rate of return is 3 per cent (this would typically be the current yield on a US 10-year government bond). If the beta of the stock is 2.0 (it’s a technology stock) and the expected market return over the period is 6 per cent, the stock would be expected to return 9.0 per cent. The calculation is as follows: 3 per cent + 2.0 × (6.0 per cent – 3.0 per cent).

As should be clear from this example, the key part of the story is ‘beta’, which is an indicator of how risky a particular stock is. For every stock being analysed, the risk-free rate and the market return do not change.
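Here is a minimal sketch in Python of the formula, using the chapter’s example numbers:

```python
def capm_expected_return(rf, beta, rm):
    """Ra = Rf + beta * (Rm - Rf): the risk-free rate plus the risk premium."""
    return rf + beta * (rm - rf)

# 3% risk-free rate, beta of 2.0, 6% expected market return:
print(capm_expected_return(rf=0.03, beta=2.0, rm=0.06))  # -> 0.09, i.e. 9 per cent
```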

How to use it

Estimating the risk-free rate is easy because the current yield for a 10-year US government bond is readily available. Estimating the market return is more challenging, because it rises and falls in unpredictable ways. Historically, the market return has averaged somewhere in the region of 5–7 per cent, but it is sometimes much higher or much lower. If the current 10-year US government bond yield is, say, 2.6 per cent, then the market’s excess return is between 2.4 per cent and 4.4 per cent.

A stock’s beta can be found on major financial sites, such as Bloomberg. It is possible to calculate beta on your own by downloading, to a spreadsheet program, a stock’s two- or five-year weekly or monthly history, and the corresponding data for the ‘market’, which is usually the S&P 500.


Top practical tip

CAPM has come to dominate modern financial theory, and a large number of investors use it as a way of making their investment choices. It is a simple model that delivers a simple result – which is attractive, but can lead to a false sense of security. If you are an investor, the most important practical tip is, first, to understand CAPM so that you can make sense of how securities are often priced, and then to be clear on the limitations of the model.

Remember, the beta of a stock is defined by its historical volatility, so if you are able to develop a point of view on the future volatility of that stock – whether it becomes more or less volatile than in the past – you can potentially price that stock more accurately. Think of GE’s shares in the 1990s, when its highly predictable earnings growth allowed it to outpace the market, versus the early 2000s when its earnings became more volatile. Had you relied on GE’s beta in the 1990s as an indicator of how well the stock might perform in the future, you would have lost a lot of money after 2000.

Top pitfall

Does CAPM really work? Like many theories in the world of business, it is approximately right, but with a very large unexplained component in terms of how much individual stocks are worth. Academic studies have come up with mixed results. For example, Eugene Fama and Kenneth French reviewed share returns in the USA between 1963 and 1990, and they found that there were at least two factors other than beta that accounted for stock returns: whether a firm was small or large, and whether the firm had a high or low book-to-market ratio. The relationship between beta and stock prices, over a short period of time, may not hold.

Further reading

Black, F., Jensen, M.C. and Scholes, M. (1972) ‘The capital asset pricing model: Some empirical tests’, in Jensen, M. (ed.), Studies in the Theory of Capital Markets (pp. 79–121). New York: Praeger Publishers.

Fama, E.F. and French, K.R. (2004) ‘The capital asset pricing model: Theory and evidence’, Journal of Economic Perspectives, 18(3): 25–46.

Sharpe, W.F. (1964) ‘Capital asset prices: A theory of market equilibrium under conditions of risk’, Journal of Finance, 19(3): 425–442.


47: Capital budgeting

To select the best long-term investments, firms rely on a process called ‘capital budgeting’. There is typically a lot of uncertainty around major investments, and the techniques of capital budgeting are a useful way of reducing that uncertainty and clarifying the likely returns on the investment. There are several different techniques, each with their own pros and cons.

When to use it

● To decide whether a firm should make a capital investment.

● To evaluate the relative attractiveness of several potential projects.

Origins

Capital budgeting as a tool has been around at least since humans began farming. Historian Fritz Heichelheim believed that capital budgeting was employed in food production by about 5000 BC. He noted that: ‘Dates, olives, figs, nuts, or seeds of grain were probably lent out ... to serfs, poorer farmers, and dependents, to be sown and planted, and naturally an increased portion of the harvest had to be returned in kind (and) animals could be borrowed too for a fixed time limit, the loan being repaid according to a fixed percentage from the young animals born subsequently’.

The first documented interest rates in history – likely used as discount rates – are from Bronze-Age Mesopotamia, where rates of one shekel per month for each mina owed (or 1/60th) were levied – a rate of 20 per cent per annum. Capital budgeting techniques have obviously become more sophisticated over the years, but they are still based on simple ‘time-value of money’ principles.


What it is

Firms commit to significant capital expenditures, such as buying or refurbishing equipment, building a new factory, or buying real estate with the goal of expanding the number of stores under a banner. The large amounts spent for these types of projects are known as capital expenditures (to distinguish them from day-to-day costs, which are called operating expenditures). The underlying logic of capital budgeting is very straightforward: it involves estimating all the future cash flows (in and out) for the specific project under consideration, and then discounting all these cash flows back to the present to figure out how profitable the project is. There are three main capital budgeting techniques employed by firms:

● Payback period: The length of time it will take for the project to pay for itself.

● Net present value (NPV): The net value of all future cash flows associated with the project, discounted to the present day.

● Internal rate of return (IRR): The rate of return, as a percentage, that gives a project a net present value of zero.

It should be clear that these are all variations on the same theme. We explain below how each of them is used. From a theoretical perspective, NPV is the best approach. However, many firms use IRR and payback period because they are intuitively attractive and easy to understand.

How to use it

The three capital budgeting decision rules have slightly different qualities, and the best way to understand their pros and cons is to work through an example. Let’s say a manager needs to decide whether to refurbish his factory’s machines or buy new ones. Refurbishing (for $100,000) costs less than buying new machines (for $200,000), but buying new delivers a higher stream of cash flows.

Payback period

Here is the comparison over a five-year period:

Period        0          1        2        3       4       5
Refurbish  (100,000)   50,000   50,000  30,000  20,000  10,000
Buy new    (200,000)   30,000  100,000  70,000  70,000  70,000


If you were to use payback period as the decision rule, you can see that the manager should choose to refurbish the machines because the investment will have a payback period of two years. For the new machines, the payback is three years. While payback is not the most refined technique for evaluating capital investments, it can be effective because it is simple and quick to calculate.

Net present value (NPV)

Let’s assume that the business has a discount rate of 10 per cent. Using the NPV method, you can calculate a discount factor for each period and discount the cash flows by the corresponding factor:

Discount rate: 10%

Period             0         1       2       3       4       5     NPV
Refurbish      (100,000)  45,455  41,322  22,539  13,660   6,209  29,186
Buy new        (200,000)  27,273  82,645  52,592  47,811  43,464  53,785
Discount factor   1.000    0.909   0.826   0.751   0.683   0.621

The net present value is the sum of the investment and all future cash flows, discounted at 10 per cent. This NPV analysis shows that the manager should buy new machines because the investment delivers, over a five-year period, a higher net present value. This also shows how a payback analysis can fall short: it does not take into account future cash flows and it does not consider a discount rate.

Internal rate of return (IRR)

The internal rate of return finds the discount rate for the streams of cash flows, assuming the net present value is set to zero. The notion of IRR is attractive because it is easy to calculate and delivers one single number. Here is the IRR calculation for the example we’ve been using:

Period        0          1        2        3       4       5     IRR
Refurbish  (100,000)   50,000   50,000  30,000  20,000  10,000   24%
Buy new    (200,000)   30,000  100,000  70,000  70,000  70,000   19%

In contrast with the NPV method, the use of IRR suggests that the manager should refurbish the machines. Can two methods more sophisticated than payback deliver different results?


Yes, because of the timing of cash flows. Notice that the bulk of the cash inflows happen in year three and beyond, and these delayed cash flows are penalised at a higher rate. In the ‘refurbish’ example, the biggest cash inflows occur in years 1 and 2. Here is what happens if you reverse the stream of cash flows – year 1 with year 5 and year 2 with year 4 – without altering the total amounts:

Period        0          1       2       3        4        5     IRR
Refurbish  (100,000)   10,000  20,000  30,000   50,000   50,000   14%
Buy new    (200,000)   70,000  70,000  70,000  100,000   30,000   22%

You can see that the IRR analysis now suggests that ‘buy new’ is the way to go. For perspective, the NPV analysis – using the reversed stream of cash flows – continues to suggest ‘buy new’ as the best option: Discount rate

10%

Period

0

1

2

3

4

5

NPV

Refurbish

(100,000)

 9,091

16,529

22,539

34,151

31,046

13,356

Buy new

(200,000)

63,636

57,851

52,592

68,301

18,628

61,009

 0.909

 0.826

 0.751

 0.683

 0.621

Discount Factor

1.000
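All three decision rules can be checked with a short script. Here is a minimal sketch in Python applied to the original refurbish-versus-buy-new cash flows; IRR is found by simple bisection, which is one of several ways to solve for it:

```python
def npv(rate, cash_flows):
    """Cash flows start at period 0 (the investment, as a negative number)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisect for the rate at which NPV is zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Whole periods until the cumulative cash flow turns non-negative."""
    cumulative = 0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

refurbish = [-100_000, 50_000, 50_000, 30_000, 20_000, 10_000]
buy_new = [-200_000, 30_000, 100_000, 70_000, 70_000, 70_000]

for name, cfs in (("Refurbish", refurbish), ("Buy new", buy_new)):
    print(name, payback_period(cfs), round(npv(0.10, cfs)), f"{irr(cfs):.0%}")
# -> Refurbish: payback 2 years, NPV 29,186, IRR ~24%
# -> Buy new:   payback 3 years, NPV 53,785, IRR ~19%
```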

Top practical tip

Of the three techniques, payback period is the least accurate, and should not be used unless there are other reasons why getting your money back quickly is important. Of the other two, NPV is technically more accurate, while IRR is attractive because it provides a single, intuitively-meaningful number for a given project. But, as you can see from the examples above, it is helpful to look at various techniques in order to understand how sensitive their assumptions can be.

Top pitfall

Keep in mind that cash flows are not usually reinvested at the same rate. Both NPV and IRR assume that any cash flows received are reinvested at the same rate. So the first of the two IRR analyses presumes that the cash flows are reinvested and compounded at a rate of 24 per cent for each of the remaining years. In practice, cash flows received may not be reinvested at the same rate as they are used to pay off loans and other expenses, or they are invested in other projects (which may not promise the same returns).

Further reading

Berk, J. and DeMarzo, P. (2013) Corporate Finance: The core, 3rd edition. Harlow, UK: Pearson.

Heichelheim, F.M. and Stevens, J. (1958) An Ancient Economic History: From the Palaeolithic age to the migrations of the Germanic, Slavic and Arabic nations, Vol. 1. Browse online at www.questia.com.


48: Modern portfolio theory

Modern portfolio theory is a way of understanding the interaction between risk and reward for investors. It has shaped how fund managers and other investors choose which shares to invest in. It has also influenced how companies approach risk management.

When to use it

● To decide what shares (or other assets) to put in your investment portfolio.

● To understand the different types of risks you take when investing money.

Origins

After observing that the prevailing security selection methods were lacking, Harry Markowitz published ‘Portfolio selection’ in the Journal of Finance in 1952, introducing a new concept called ‘modern portfolio theory’. Before Markowitz’s theory, the prevailing ‘best practice’ in the securities industry was to pick stocks that had the best risk-return profile, and put them into a portfolio. But Markowitz observed that this advice could actually increase an investor’s risk. At any point in an economic cycle, different groups of stocks will seem to have the best risk-return characteristics. Look closer at these screened stocks and one will find they are usually concentrated in the same industries. As investors rush to ‘defensive’ stocks in a downturn, utility stocks may seem to be a good bet. But a portfolio consisting mostly of utility – or similar – stocks will have concentration risk because of correlated assets.

Markowitz argued that security returns can be modelled as random variables. One can analyse these single-period returns and compute expected values (the ‘return’), the standard deviation of returns by security (the ‘risk’) and whether the returns are correlated (for Markowitz, a portfolio of securities whose returns are not correlated will have an average return with less volatility – at the portfolio level – than its individual constituents). When multiple portfolios are constructed and analysed, a few will have the best balance of risk and reward: these portfolios are found on what Markowitz called the ‘efficient frontier’.

What it is

Modern portfolio theory (MPT) puts into practice the concept of diversification, with the aim of selecting securities that have, collectively, a lower risk profile than any individual security on its own. A security’s beta is the appropriate measure of risk to use when assessing a security for inclusion in a portfolio. See ‘capital asset pricing model’ in Chapter 46 for more information on beta.

For example, stock market values and the price of gold may not be correlated, and the combination of these two asset classes should, in theory, result in a lower risk for the portfolio compared to stocks on their own, or gold on its own. Within the universe of stocks, even if stock prices are positively correlated – meaning that they collectively tend to rise and fall in tandem – having a diverse portfolio of stocks will still lower the portfolio’s overall risk.

According to MPT, the concept of ‘risk’ as it relates to stock returns comprises two components: systematic risk and unsystematic risk. Systematic risk has an impact on the entire market. Think of wars and recessions – these risks cannot be diversified away with portfolio selection. Unsystematic risk relates to individual stock returns. Think of a firm issuing a profit warning, or getting approval for a new drug – these individual stock movements are not correlated with changes in the overall market.

The crucial point in portfolio selection, using MPT, is that each individual stock’s risk (defined by how much its returns deviate from the portfolio’s average return) is not what adds to or lowers portfolio risk. Instead, it is the degree to which the returns of the individual stocks within the portfolio look similar. If the returns rise and fall together, they have a high ‘covariance’ with each other. A well-diversified portfolio will have stocks that do not have a high covariance with each other. The returns do not need to be negatively correlated.
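The effect of covariance on portfolio risk can be seen even in a two-asset case. Here is a minimal sketch in Python; the volatilities and correlations used are illustrative assumptions:

```python
from math import sqrt

def portfolio_risk(w1, sigma1, sigma2, correlation):
    """Standard deviation of a two-asset portfolio with weights w1 and 1 - w1."""
    w2 = 1 - w1
    variance = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
        + 2 * w1 * w2 * sigma1 * sigma2 * correlation
    return sqrt(variance)

# Two stocks, each with 20 per cent volatility, held 50/50:
print(portfolio_risk(0.5, 0.20, 0.20, correlation=1.0))  # 0.20 - no diversification benefit
print(portfolio_risk(0.5, 0.20, 0.20, correlation=0.3))  # ~0.16 - portfolio risk falls
```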

How to use it

The logic of modern portfolio theory has some very important consequences, as it allows you to identify the optimum level of diversification in a portfolio of stocks. This does not mean that there is an ‘ideal’ portfolio for every investor. Investors will seek different levels of return. Consider, for instance, the difference between the investment objectives of a retired manager with a shorter time horizon and a lower risk appetite, and those of an entry-level worker who has a longer time horizon and a higher risk appetite. For each target return, there are portfolios along what Markowitz calls the ‘efficient frontier’. Along this frontier, there is a portfolio with the lowest level of risk and another with the highest indicated return:


[Figure: the efficient frontier – expected return plotted against standard deviation, with the efficient frontier bounding the set of other portfolios. Source: Based on data from Markowitz, H.M. (1952) ‘Portfolio selection’, The Journal of Finance, 7(1): 77–91.]

A rational investor will look to hold one of the portfolios on the efficient frontier. Note that as you move from left to right on the efficient frontier, the expected return rises along with the ‘risk’ – the standard deviation of returns.

There is an additional layer to MPT where returns can be increased by borrowing to buy a risk-free asset, allowing the rest of the stocks in the portfolio to have a higher risk profile and, potentially, a higher target return. In the graph below, the ‘capital market line’ shows portfolios with various combinations of the risk-free security – perhaps a US treasury security – risky assets and debt. The market portfolio – on both the capital market line and the efficient frontier – contains only risky assets. Except for the market portfolio, all of the portfolios on the capital market line have better risk-return profiles than portfolios on the efficient frontier. The portfolios around ‘L’ (for ‘lower risk’) contain various amounts of risk-free assets, and the portfolio at the bottom left holds only the risk-free asset. The portfolios around ‘H’ (for ‘higher risk’) show a few portfolios of risky assets and the use of leverage, or debt, with higher levels of debt delivering, potentially, higher returns. The graph suggests that taking on debt can increase the expected return of the portfolio, but the investor has to accept a higher level of risk.


[Figure: the capital market line – expected return plotted against standard deviation. The line runs from the return on the risk-free asset, through lower-risk portfolios (‘L’), to the market portfolio on the efficient frontier, and on to leveraged, higher-risk portfolios (‘H’). Source: Based on data from Markowitz, H.M. (1952) ‘Portfolio selection’, The Journal of Finance, 7(1): 77–91.]

Top practical tip

If you like to invest in your own stocks and shares, perhaps for your retirement, modern portfolio theory says you reach a near-optimal level of diversity when you have 20 stocks in your portfolio. A very small portfolio of two or three stocks is very risky, because those individual companies could get into trouble. A very large portfolio of several hundred stocks is obviously going to just rise and fall with the overall market conditions. It turns out you can get close to mirroring the performance of the entire market with just 20 stocks.


Top pitfall

Keep in mind that the data used for the analyses in MPT are based on how securities have performed in the past. These historical results may not hold true when one is looking for an indication of future performance. Many investors were holding what they believed to be well-diversified portfolios when they entered the recession of 2008–2013, but this did not prevent them from losing a significant amount of money.

Further reading

Elton, E.J., Gruber, M.J., Brown, S.J. and Goetzmann, W.N. (2009) Modern Portfolio Theory and Investment Analysis. Hoboken, NJ: John Wiley & Sons.

Markowitz, H.M. (1952) ‘Portfolio selection’, The Journal of Finance, 7(1): 77–91.

Markowitz, H.M. (1959) Portfolio Selection: Efficient diversification of investments. New York: John Wiley & Sons.


49

Modigliani-Miller theorem

The Modigliani-Miller (M-M) theorem is an academic proposition that lies at the heart of many of the mainstream models used in finance today. It says that the way a firm finances itself is irrelevant to its market value, so firms shouldn't spend large amounts of time worrying about the mix of debt and equity on their balance sheet. The M-M theorem is built on explicit assumptions that do not always hold true in the real world. It is therefore a good way of framing choices of capital structure in firms, but it does not offer you a simple answer.

When to use it

● To introduce a discussion on optimal capital structure, especially to managers without a background in finance.
● To support or reject plans to add more leverage to a firm's balance sheet.

Origins

Merton Miller and Franco Modigliani developed their ideas on capital structure in the late 1950s while working at what is now Carnegie Mellon University. They were scheduled to deliver a corporate finance course without any practical experience in the field. While preparing for this course they noted inconsistencies in the literature, and their ideas for resolving these issues resulted in a paper and, in the ensuing decades, what is now known as the 'M-M theorem'.

Before this breakthrough in thinking, treasurers and finance managers took it as a given that there was an ideal capital structure for their firms, and that their job was to find it and stick to it. Modigliani and Miller suggested that, in fact, the mix of debt and equity capital in a firm was not important, because the market will compensate for any adjustments either way. This provoked an angry reaction from many quarters. The authors ultimately won Nobel prizes in Economics, at least in part for this work.

What it is

The M-M theorem is known as the 'capital structure irrelevance principle'. Put simply, it suggests the relative proportions of debt and equity in a firm do not matter. There are two propositions to the M-M theorem, called M-M 1 and M-M 2.

The 'capital structure irrelevance proposition' (M-M 1) assumes that the following conditions exist: there are no taxes, transaction costs or borrowing costs; companies and investors can borrow at the same rate; every participant in the market has all the information (there is no information asymmetry); and debt is not tax deductible, so it has no effect on earnings. Given these conditions, changes in the level of debt have no impact on a company's stock price and, thus, capital structure is irrelevant.

Let's look at an example of how this might play out. Assume ABC Company's shareholders want a 20 per cent rate of return on their investment. If there is no debt involved, the cost of capital is 20 per cent. As the firm grows, it can take on debt at a 12.5 per cent interest rate, so the firm's cost of capital should apparently decrease as more of this cheaper debt is employed. But as the level of debt increases (depicted by a rising debt-to-equity ratio), the firm's financial position weakens. This change in risk increases the cost of equity, as investors demand a higher return for their money. The net effect is that the weighted average cost of capital (WACC) stays the same (see the graph below).

[Graph: M-M 1 – weighted average cost of capital plotted against the debt-to-equity ratio (0 to 1.25). The cost of equity rises with leverage, the cost of debt stays constant, and the WACC remains flat.]

Regarding the 'trade-off theory of leverage' (M-M 2): since debt is tax deductible and is usually cheaper than equity, many firms take on debt to boost their earnings. In their 1963 paper, Modigliani and Miller introduced taxes into their theory. Here is the same example, adapted to account for debt's tax deductibility. With the cost of debt at 12.5 per cent and a tax rate of 30 per cent, the after-tax cost of debt is 12.5 per cent × (1 – 30 per cent) = 8.75 per cent. You can see from the graph below that the cost of debt is now a lot lower, and this pulls the weighted average cost of capital down as more debt is used.

[Graph: M-M 2 – weighted average cost of capital plotted against the debt-to-equity ratio (0 to 1.25). With tax-deductible interest, the after-tax cost of debt is lower and the WACC falls as leverage rises.]
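To make the mechanics behind the two graphs concrete, here is a minimal Python sketch of our own, using the ABC Company figures above. The chapter does not spell out how the cost of equity responds to leverage, so the sketch uses the standard M-M cost-of-equity relationship, Re = R0 + (R0 – Rd) × D/E, scaled by (1 – Tc) when interest is tax deductible.

```python
# Assumed inputs from the ABC Company example: 20% all-equity return,
# 12.5% cost of debt, 30% tax rate.
r0, rd, tc = 0.20, 0.125, 0.30

print(f"{'D/E':>5}  {'WACC, no taxes':>14}  {'WACC, with taxes':>16}")
for de in (0.00, 0.25, 0.50, 0.75, 1.00, 1.25):
    d = de / (1 + de)          # debt as a share of total capital
    e = 1 - d                  # equity share
    # M-M 1: with no taxes, leverage raises the cost of equity just enough
    # to keep the weighted average flat at the 20% unlevered return.
    re1 = r0 + (r0 - rd) * de
    wacc1 = e * re1 + d * rd
    # M-M 2: with tax-deductible interest, the cost of equity rises more
    # slowly and the after-tax cost of debt pulls the WACC down.
    re2 = r0 + (r0 - rd) * (1 - tc) * de
    wacc2 = e * re2 + d * rd * (1 - tc)
    print(f"{de:>5.2f}  {wacc1:>14.2%}  {wacc2:>16.2%}")
```

Running the sketch shows the first WACC column fixed at 20.00 per cent at every debt level, while the second falls steadily as leverage rises, which is exactly the contrast between the two graphs.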

How to use it

Because the assumptions behind M-M 1 are not met in the real world, the theorem suggests (slightly paradoxically) that the capital structure decision often is important, precisely because one or more of the assumptions does not hold. From a practical standpoint, the M-M 1 proposition is often used as the starting point for a discussion of capital structure in firms. M-M 2 deals with the effect of increased debt levels on the weighted average cost of capital, so one might see M-M 2 used as part of an argument to continue adding leverage to balance sheets. But, in the real world, there is a cost to piling on debt: the global financial crisis of 2008, which saw a number of highly leveraged investment banks fail, has been attributed in part to excessive leverage ratios.

Top practical tip

The M-M theorems can be used as a starting point for understanding how much debt a company should take on, but you need to be very careful to work through all the assumptions behind the theorems, and to decide how applicable they are in practice to your real-life setting.


Top pitfall

It may not be possible, in practice, to determine in advance how much more equity holders will demand if debt levels rise. There is a threshold for debt above which investors will not want to own equity at any price. Think back to well-known cases of corporate bankruptcies, such as Lehman Brothers or Blockbuster Video: in such cases the very high leverage of these firms was a key factor in leading equity investors to flee. The point is that investor behaviour in the capital markets is not entirely rational, especially when risk levels rise. The Modigliani-Miller theorem is typically most valuable as a point of discussion. The truth is, after more than half a century of debate, capital structure still matters.

Further reading

Modigliani, F. and Miller, M. (1958) 'The cost of capital, corporation finance and the theory of investment', American Economic Review, 48(3): 261–297.
Modigliani, F. and Miller, M. (1963) 'Corporate income taxes and the cost of capital: A correction', American Economic Review, 53(3): 433–443.


50

Time value of money

Would you prefer to be given a dollar today, or in one year’s time? Obviously you would prefer to be given it today, because you could then invest it, with interest, and a year from now it would be worth more. This is the simple intuition behind the time value of money, and it is a foundational concept in the world of finance.

When to use it

● To make investment decisions as an individual investor.
● To evaluate the return on a project.
● To compare activities that take place at different points in time.

Origins

No one knows the true origins of the concept – it has been around as long as money has existed. R.H. Parker, in 1968, traced the earliest interest-rate tables back to 1340, but the informal understanding that money is more valuable the sooner it is received was in existence before that. Over the years, this intuition was formalised into a set of mathematical tools that allows you to make careful calculations around, for example, an investment or acquisition decision.

What it is

The time value of money is a simple idea that has many important consequences. Start with the notion of an interest rate – the rent paid on borrowed money. Simple interest is calculated by multiplying the starting amount – the principal – by the interest rate. For example, if the principal is $100 and the annual interest rate is 10 per cent, one would receive $10 after the first year. Let's assume that the interest is paid only on the initial principal and is not reinvested. Here is what we get:

Year 1: 10 per cent of $100 = $10 + $100 = $110
Year 2: 10 per cent of $100 = $10 + $110 = $120
Year 3: 10 per cent of $100 = $10 + $120 = $130
Year 4: 10 per cent of $100 = $10 + $130 = $140
Year 5: 10 per cent of $100 = $10 + $140 = $150

Now let's say that the interest earned is reinvested in each of the five years (and, to keep things simple, that there are no taxes). Given this scenario, our investment will earn compound interest, which means that interest is paid on the starting principal and on the interest that is reinvested each year. The gains from compounding are higher, as can be seen here:

Year 1: 10 per cent of $100.00 = $10.00 + $100.00 = $110.00
Year 2: 10 per cent of $110.00 = $11.00 + $110.00 = $121.00
Year 3: 10 per cent of $121.00 = $12.10 + $121.00 = $133.10
Year 4: 10 per cent of $133.10 = $13.31 + $133.10 = $146.41
Year 5: 10 per cent of $146.41 = $14.64 + $146.41 = $161.05

The graph below shows the difference between simple and compound interest over a period of 20 years. The investment earning simple interest grows in a linear fashion, while the one earning compound interest grows geometrically. As a result of compounding, the longer the period, the more the two lines diverge.

[Graph: simple vs compound interest over 20 years. The simple-interest balance grows in a straight line (to $300 after 20 years), while the compound-interest balance grows geometrically (to about $673).]

Here is the formula for calculating compound interest:

P_n = P_0 × (1 + I)^n

Where:
P_n = the value at the end of n time periods
P_0 = the beginning value
I = the interest rate
n = the number of years
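As a quick check on the figures above, here is a minimal Python sketch of both calculations:

```python
# Simple vs compound interest on $100 at 10 per cent, reproducing the
# five-year figures above.
principal, rate, years = 100.0, 0.10, 5

simple, compound = principal, principal
for year in range(1, years + 1):
    simple += principal * rate     # interest is earned on the original principal only
    compound *= 1 + rate           # interest is earned on principal plus reinvested interest
    print(f"Year {year}: simple = ${simple:.2f}, compound = ${compound:.2f}")

# The closed-form formula agrees with the year-by-year loop.
assert abs(compound - principal * (1 + rate) ** years) < 1e-9
```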

How to use it

These basic ideas around the time value of money allow you to do some very useful calculations.

Net present value (NPV)

To evaluate major capital investments, many firms calculate a 'net present value' (NPV). Let's say you are looking at an investment that requires $100 in capital today, your cost of capital is 10 per cent, and the investment delivers cash flows of $30 a year over 10 years. In nominal terms, investing $100 and getting $300 back sounds, initially, like a good idea. But how do you account for the fact that the cash will be given to you over a 10-year period? The answer is that you discount the future cash flows at a discount rate of 10 per cent. Remember that the discount factor shrinks the further out in the future the cash flows are. For example, while the $30 coming back in Year 2 has a factor of 0.83 (leaving you with a present value of $24.79), you have to apply a discount factor of 0.39 to the last $30 coming back in Year 10, which leaves you with a present value for that cash flow of just $11.57. Here is the example in tabular form:

Net present value of cash flows

Period   Cash flow   Discount factor   Present value
0        (100.00)    1.00              (100.00)
1          30.00     0.91                27.27
2          30.00     0.83                24.79
3          30.00     0.75                22.54
4          30.00     0.68                20.49
5          30.00     0.62                18.63
6          30.00     0.56                16.93
7          30.00     0.51                15.39
8          30.00     0.47                14.00
9          30.00     0.42                12.72
10         30.00     0.39                11.57
Total:                                   84.34

This analysis shows that this is still a good investment to make: the net present value of the future cash flows, less the initial investment of $100, is positive. Note, though, that the NPV of $84.34 is well below the $200 nominal gain (the $300 of total cash flows less the initial investment of $100). The difference arises because the future cash flows are discounted back to present value.
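The whole table collapses into a few lines of Python (a minimal sketch, reproducing the figures above):

```python
def npv(rate, cash_flows):
    """Net present value of cash flows indexed from period 0 (period 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-100.0] + [30.0] * 10     # $100 out today, $30 back each year for 10 years
print(f"NPV at 10%: ${npv(0.10, flows):.2f}")   # prints $84.34
```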

Present value of an annuity

Annuities are streams of cash paid or received at regular intervals. Rent and pension payments are examples of annuities. You can use an estimate of the annual inflation rate to discount the annuities in order to calculate their present value.

Future value

Starting with the current value of an asset, one can find the future value by growing the asset value at its growth rate. For example, if you have an asset worth $100 and its value grows at 10 per cent a year, compounded, the future value after five years will be $161.05.

Top practical tip

Time value of money analyses have five variables: the number of periods in the project (n); the interest rate (I); the present value, or investment (PV); regular cash inflow or outflow amounts, if any (PMT); and the future value (FV). You can solve for any unknown variable if you know the values of the remaining four.


Top pitfalls

The higher the number of periods, the more sensitive the calculation is to the discount rate. Selecting an appropriate discount rate is at least as important as calculating, with precision, the investment cost and cash flows. A second note of caution is that the NPV analysis assumes a single discount rate throughout the life of the project, but a firm's discount rate may change as its cost of capital falls or rises over time.

Further reading

Brealey, R.A. and Myers, S.C. (2013) Principles of Corporate Finance, 11th edition. New York: McGraw-Hill.
Parker, R.H. (1968) Discounted Cash Flow in Historical Perspective. Chicago, IL: Institute of Professional Accounting.


51

Valuing the firm

What is a firm worth? Answering this question will allow you to determine if it is undervalued or overvalued compared to its peers. Unfortunately, there is no single model for analysing what a firm is worth. Here, we describe four different models and we explain the pros and cons of each one.

When to use it

● To decide the right price when making an acquisition.
● To defend your own company against an acquisition.
● As an investor, to decide when to buy or sell shares in a firm.

Origins

Managers and investors have needed to value firms for as long as there have been firms to buy or stocks to invest in. Early attempts at valuation focused on simple analysis of cash flows and profitability. As the notion of 'time value of money' became understood, valuation methods started to incorporate discount rates and they also considered how a company was financed. In a review of the use of discounted cash flow in history, R.H. Parker (1968) notes that the earliest interest-rate tables date back to 1340 and were prepared by Francesco Balducci Pegolotti, a Florentine merchant and politician. The development of insurance and actuarial sciences in the next few centuries provided an impetus for a more thorough study of present value. Simon Stevin, a Flemish mathematician, wrote one of the first textbooks on financial mathematics in 1582, in which he laid out the basis for the present value rule.


What it is

Firm valuation methods all involve analysing a firm's financial statements and coming up with an estimate of what the firm is ultimately worth. There are absolute methods, which home in on the firm's ability to generate cash and its cost of capital, and there are relative methods, which compare a firm's performance with that of its peer group. The key point to take away is that the estimate of a firm's value will change depending on the technique being used and the assumptions the analyst has selected.

How to use it

The basic question that investors ask is this: 'Is the firm undervalued or overvalued relative to its stock price?'. Here, we illustrate four firm valuation techniques:

1 Asset-based valuation – what the firm is worth based solely on its balance sheet.
2 Comparable transaction valuation – what the firm is worth when compared to its peer group.
3 Discounted cash flow – what the firm is worth given its stream of future cash flows and its cost of capital.
4 Dividend discount model – what the firm is worth given the dividend stream it intends to return to investors.

Let's work through an example to see how the four valuation methods are put into practice. Here we have a firm, Luxury Desserts, which produces high-end desserts for the New York City market. It has historical results from 2011 to 2013 and projected results for the next five years (all figures in thousands) (see the income statement table below). One can see that this firm intends to grow its revenues rapidly. While net income is expected to grow, the net margin is anticipated to remain about the same.

Looking at Luxury Desserts' balance sheets for the past few years (see the balance sheet table below), we can see that the firm's retained earnings have increased, and it is carrying about half a million dollars in cash on its books.

We have also compiled a set of dessert companies that are similar to Luxury Desserts. We will use this set of comparators in our analysis (see the comparable firms table below). Note that one of the firms, Sunrise Treats, has significantly higher sales than the rest of the comparators.

Asset-based valuation

This technique values the firm at the fair-market value of its equity, which is calculated by deducting total liabilities from total assets. According to basic accounting principles, a firm's income statement provides a measure of its true earnings potential, while the balance sheet gives a reliable estimate of the value of the assets and equity in the firm.


Luxury Desserts: recent and projected income statements

Income statement        2011    2012    2013  2014 F  2015 F  2016 F  2017 F  2018 F
Revenue               $4,407  $5,244  $5,768  $7,822  $9,878 $12,442 $14,654 $17,161
Labour                $1,763  $2,045  $2,192       –       –       –       –       –
Materials             $1,542  $1,730  $1,904       –       –       –       –       –
Gross margin          $1,102  $1,468  $1,673  $2,212  $2,693  $3,283  $3,856  $4,518
Total other expenses    $686    $815    $960  $1,173  $1,482  $1,866  $2,198  $2,574
EBITDA                  $415    $654    $713  $1,038  $1,212  $1,416  $1,658  $1,944
Amortisation            $103    $107    $112    $115    $169    $172    $175    $177
Interest expense         $18     $18     $18    $160    $160    $160    $160    $160
EBT                     $295    $528    $583    $763    $883  $1,084  $1,323  $1,607
Tax (38%)               $112    $201    $222    $290    $335    $412    $503    $611
Net income              $183    $328    $361    $473    $547    $672    $820    $996
Net margin              4.2%    6.2%    6.3%    6.0%    5.5%    5.4%    5.6%    5.8%

(Labour and materials are not broken out separately in the forecast years.)

Luxury Desserts: balance sheets

Balance sheet                    2011    2012    2013
Assets
Cash                              $76    $249    $546
Accounts receivable              $749    $813    $808
Prepaid expenses                 $110    $131    $144
Inventory                        $220    $247    $293
Fixed assets                   $1,073  $1,115  $1,154
Total assets                   $2,228  $2,555  $2,944
Liabilities and equity
Operating line                    $–      $–      $–
Accounts payable                 $278    $277    $305
Long-term debt                   $200    $200    $200
Total liabilities                $478    $477    $505
Contributed equity               $250    $250    $250
Retained earnings              $1,501  $1,828  $2,190
Total liabilities and equity   $2,228  $2,555  $2,944

Luxury Desserts: comparable firms

Target company            2013 Sales   EBITDA   Target price
Artisan Cakes                $12,000   $1,300         $7,800
Italian Bakery                $2,200     $350         $1,225
Wedding Cake Suppliers        $3,000     $600         $2,700
Sunrise Treats               $35,000   $3,000        $36,000
Meadow Breads                 $6,000     $750         $3,000

In Luxury Desserts’ case, total assets are $2,944,000 and total liabilities are $505,000, leaving net asset value, or equity, of $2,440,000. If we assume the market value of equity is the same as the net asset value, then Luxury Desserts is worth just under $2.5 million. This is one way of valuing a firm, but – as we will see – it is typically an underestimate of value because there are some assets (such as loyal customers or the power of a brand) that are not put on the balance sheet.

Comparable transaction valuation

The second valuation method looks for comparable firms to see how much they are trading for on the stock market. When you want to sell your home, you estimate its value by looking at how much a similar house down the street sold for; this is the same principle.

The challenge here is identifying the right comparison firms. Ideally, a comparable firm is one that is quite similar – a direct competitor in the same industry – but of course it is very difficult to find firms that are so well matched. In practice, most analysts start by comparing the firm being valued to four or five of the closest competitors in its industry sector. If there are more than a handful of candidates, the analyst can focus on firms of similar size and growth potential. For example, in the smartphone industry, an analyst might compare Apple Inc. to Samsung, as both are market leaders, but exclude a smaller player such as Sony (even assuming financials could be broken out for its smartphone unit).

We have five comparators for this valuation exercise. Dividing the target price by EBITDA, we arrive at the multiple of EBITDA each firm is worth (see the table below).

Target company                     2013 Sales   EBITDA   Target price   EBITDA multiple
Artisan Cakes                         $12,000   $1,300         $7,800              6.0×
Italian Bakery                         $2,200     $350         $1,225              3.5×
Wedding Cake Suppliers                 $3,000     $600         $2,700              4.5×
Sunrise Treats                        $35,000   $3,000        $36,000             12.0×
Meadow Breads                          $6,000     $750         $3,000              4.0×
Average                                                                            6.0×
Average excluding Sunrise Treats                                                   4.5×

Notice that the average is computed both with and without Sunrise Treats. Because Sunrise Treats is significantly larger than the rest of the firms, one can argue for excluding it from the analysis. Given that Luxury Desserts had 2013 EBITDA of $713,000, we can estimate the value of Luxury Desserts based on the comparable transactions analysis (see the table below).

                                   EBITDA multiple   2013 EBITDA   Implied value
Average                                       6.0×          $713          $4,275
Average excluding Sunrise Treats              4.5×          $713          $3,206

Depending on the comparator set used, Luxury Desserts is estimated to be worth $3,206,000 or $4,275,000. We have used a multiple of EBITDA in our comparable transactions analysis. Other multiples may be used in combination or in place of a multiple of EBITDA. These include earnings multiples and sales multiples, to cite two examples.
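The comparable-transactions arithmetic is easy to reproduce in code. The sketch below uses the figures from the tables above (in thousands); the implied values differ from the text by a few thousand dollars because the text works with rounded multiples.

```python
# (EBITDA, target price) for each comparator, in thousands.
comps = {
    "Artisan Cakes":          (1_300, 7_800),
    "Italian Bakery":           (350, 1_225),
    "Wedding Cake Suppliers":   (600, 2_700),
    "Sunrise Treats":         (3_000, 36_000),
    "Meadow Breads":            (750, 3_000),
}
multiples = {name: price / ebitda for name, (ebitda, price) in comps.items()}

luxury_ebitda = 713
for label, names in [
    ("All comparators", list(comps)),
    ("Excluding Sunrise Treats", [n for n in comps if n != "Sunrise Treats"]),
]:
    avg = sum(multiples[n] for n in names) / len(names)
    print(f"{label}: {avg:.1f}x EBITDA, implied value ${avg * luxury_ebitda:,.0f}")
```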

Discounted cash flow (DCF)

The DCF analysis is based on the observation that the value of the firm is linked to how much free cash flow it can generate. These cash flows are discounted back to present value using the firm's discount rate. The firm's discount rate is its cost of capital, which reflects what its shareholders expect to earn on their equity and what its debt-holders are owed on its debt. The typical DCF calculation discounts a firm's 'unlevered free cash flow' (UFCF) by a discount rate to get to the present value of the projected results. UFCF simply means the firm's cash flows before it makes interest payments – hence the use of the term 'unlevered' in the metric. Let's look at Luxury Desserts' example:

DCF – unlevered free cash flow (UFCF)

                       2014 F     2015 F     2016 F     2017 F     2018 F
Revenue              $7,821.8   $9,877.8  $12,442.1  $14,654.1  $17,160.9
EBITDA               $1,038.3   $1,211.7   $1,416.3   $1,658.1   $1,944.1
 less: Amortisation   –$115.4    –$168.8    –$172.0    –$174.8    –$177.3
EBIT                   $922.9   $1,042.8   $1,244.4   $1,483.4   $1,766.9
 less: Tax (38%)      –$350.7    –$396.3    –$472.9    –$563.7    –$671.4
EBIAT                  $572.2     $646.6     $771.5     $919.7   $1,095.4

DCF (continued)

                                    2014 F  2015 F  2016 F  2017 F  2018 F
EBIAT                                $572    $647    $772    $920  $1,095
 add: Amortisation                   $115    $169    $172    $175    $177
 less: Capex                        –$650   –$200   –$200   –$200   –$200
 less: Working capital investment   –$150   –$150   –$150   –$150   –$150
UFCF (unlevered free cash flow)     –$112    $465    $593    $744    $923

Note that the calculation starts with EBITDA, removes amortisation (the ‘DA’), then calculates the tax on EBIT (remember that this is a firm’s ‘unlevered’ cash flow calculation). The result is EBIAT, or earnings before interest but after tax. Then amortisation, which is a non-cash expense, is added back, and cash expenses for capital expenditures and any investments in working capital are deducted. The result is UFCF. We will use 17.58 per cent as the weighted average cost of capital (WACC) for our discount rate. Using the assumed WACC, we find that the present value of Luxury Desserts’ UFCF is $1,406,000 (see table below).

Net present value of UFCF

                  2014 F   2015 F   2016 F   2017 F   2018 F
Period                 1        2        3        4        5
Discount factor     0.85     0.72     0.62     0.52     0.44
PV of UFCF        –$95.6   $336.6   $365.0   $389.4   $410.5

Total PV of UFCF: $1,406.0

Don't miss out on an important step in this exercise: determining the terminal value of the firm. The tricky thing is that Luxury Desserts is expected to continue operating after the fifth year (2018). To capture the value of the UFCFs beyond 2018, we can calculate a terminal value, assuming a perpetual growth rate (g) of 4.0 per cent and the same WACC of 17.58 per cent. The formula for terminal value is:

Terminal value = UFCF_n × (1 + g) / (WACC – g)

Thus, the terminal value for Luxury Desserts at the end of 2018 is*:

$923 × (1 + 4.0 per cent) / (17.58 per cent – 4.0 per cent) = $7,064,500

We find the present value of this terminal value by multiplying $7,064,500 by 0.44 to get $3,143,000 (0.44 being the discount factor for 2018, the end of the fifth year, as shown in the table above). Note that the present value of the terminal value is much higher than the present value of the five years of cash flows, which is why it is so important to remember to estimate a terminal value in the first place. We add the present value of the UFCFs ($1,406,000) to the present value of the terminal value ($3,143,000) to get Luxury Desserts' DCF value of $4,548,900.
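Here is the full DCF calculation as a minimal Python sketch (figures in thousands; the last digits differ slightly from the text because the text rounds at each step):

```python
wacc, g = 0.1758, 0.04
ufcf = [-112, 465, 593, 744, 923]        # 2014F-2018F unlevered free cash flows

# Discount each forecast year's UFCF back to the present.
pv_forecast = sum(cf / (1 + wacc) ** t for t, cf in enumerate(ufcf, start=1))

# Terminal value: all cash flows beyond 2018 as a growing perpetuity,
# then discounted back five years.
terminal = ufcf[-1] * (1 + g) / (wacc - g)
pv_terminal = terminal / (1 + wacc) ** len(ufcf)

print(f"PV of forecast UFCFs:  ${pv_forecast:,.0f}")                 # ~ $1,406
print(f"PV of terminal value:  ${pv_terminal:,.0f}")                 # ~ $3,145
print(f"DCF value of the firm: ${pv_forecast + pv_terminal:,.0f}")   # ~ $4,551
```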

Dividend discount model

This method simply looks at the cash to be received by shareholders. Investors look either for capital gains (think of high-tech companies such as Workday, which has never turned a profit but has a stock price that has soared since 2012) or for dividends (for example, in telecommunications and utilities). This model works best when dividends are known, steady and expected to grow at a predictable rate.

The simplest version of the dividend discount model, the 'Gordon growth model', can be used to value a firm that is in 'steady state', with dividends growing at a rate that can be sustained forever. The Gordon growth model relates the value of a stock to its expected dividends in the next time period, the cost of equity and the expected growth rate in dividends:

Value of stock = DPS_1 / (k_e – g)

Where:
DPS_1 = the expected dividend one period from now
k_e = the required rate of return for equity holders
g = the perpetual annual growth rate for the dividend

Using Luxury Desserts' example, let's assume investors can expect dividends of $400,000 per year, growing at a rate of 15 per cent a year in perpetuity (see the table below).

Expected dividend one period from now            $400
Required rate of return for equity holders        26%
Perpetual annual growth rate for the dividend     15%
Value of stock                                 $3,636

*A reminder: rounded values are used here only for presentation purposes.
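As code, the Gordon growth calculation is a one-line function (a minimal sketch using the figures above, in thousands):

```python
def gordon_value(dps_next, ke, g):
    """Value of a dividend stream growing at rate g forever (requires g < ke)."""
    return dps_next / (ke - g)

print(f"${gordon_value(400, 0.26, 0.15):,.0f}")   # prints $3,636
```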


In this case, Luxury Desserts is worth $3,636,000.

In summary, the various valuation methods yield different results due to the different inputs used in the calculations:

● Asset-based valuation: $2,440,000
● Comparable transaction valuation: $3,206,000 to $4,275,000
● Discounted cash flow: $4,548,900
● Dividend discount model: $3,636,000

Clearly there is no single right answer to the question, ‘What is Luxury Desserts worth?’. This analysis provides a range of estimates (from $2.4m to $4.5m), and as an analyst or potential investor you would now be expected to consider various subjective factors to come to a view on what number is most realistic. Some of these factors are about the intangible strengths and weaknesses of the firm – for example, how loyal its customers are, or how skilled you consider the firm’s managers to be. Equally important are external factors, such as how volatile the market is or whether new competitors are emerging. Finally, you also have to consider how strongly the current owners of the firm want to sell, and whether other buyers are out there. Such factors often result in a much higher price being paid than would be expected using these valuation methods.

Top practical tip

For valuation analysis to be useful and meaningful, make sure to use several techniques as part of your due diligence process. They will always yield slightly different results, and these differences help you to build a more complete picture of the firm you are valuing. It is also important to understand the mechanics behind your calculations, to get a sense of which inputs are the most heavily weighted.

Top pitfall

The biggest mistake in valuation analysis is to assume that the calculations are 'right'. Remember, all these techniques are based on assumptions. So learn to be critical about the inputs you are given, as a small change (such as in the assumed growth rate) can have a large impact on the final valuation.


Further reading

Berk, J. and DeMarzo, P. (2013) Corporate Finance: The core. Harlow, UK: Pearson.
Gordon, M.J. and Shapiro, E. (1956) 'Capital equipment analysis: The required rate of profit', Management Science, 3(1): 102–110.
Parker, R.H. (1968) Discounted Cash Flow in Historical Perspective. Chicago, IL: Institute of Professional Accounting.


52

Weighted average cost of capital

A firm’s ‘weighted average cost of capital’ (or WACC) is a financial metric used to measure the cost of capital to a firm. It is a weighted average of the firm’s cost of debt and its cost of equity.

When to use it

● To decide what discount rate to use in capital budgeting decisions.
● To evaluate potential investments.

Origins

The term 'cost of capital' was first used in an academic study by Ferry Allen in 1954: he discussed how different proportions of equity and debt would result in higher or lower costs of capital, so it was important for managers to find the right balance. However, this informal analysis was eclipsed by Modigliani and Miller's seminal paper, 'The cost of capital, corporation finance and the theory of investment', published in 1958. This paper provided a strong theoretical foundation to discussions about the right capital structure in firms. Building on this theory, the WACC formula is a straightforward way of calculating the approximate overall cost of capital for a specific firm.

What it is

Firms are generally financed through two methods: debt, which means borrowing money from lenders; and equity, which means selling a stake in the firm to investors. There is a cost to both of these financing methods. Holders of debt expect to receive interest on their loan. Holders of equity expect their share in the firm to go up in value, and may receive an annual dividend payment. The weighted average cost of capital for a firm is calculated by working out the cost of its debt (which is simply a function of the interest rate it pays) and the cost of its equity (a more complicated formula, as described below), and then coming up with a weighted average of the two depending on its proportions of debt and equity.

In more technical terms, the WACC equation multiplies the cost of each capital component by its proportional weight and then sums the results:

WACC = (E/V) × Re + (D/V) × Rd × (1 – Tc)

Where:
Re = the cost of equity
Rd = the cost of debt
E = the market value of the firm's equity
D = the market value of the firm's debt
V = E + D = the total market value of all sources of financing (both equity and debt) in the firm
E/V = the percentage of total financing that is equity
D/V = the percentage of total financing that is debt
Tc = the corporate tax rate

In summary, WACC is an aggregate measure of the cost the firm incurs in using the funds of creditors and shareholders. This measure has implications both for those running the firm and for potential investors. For those running the firm, creating value requires investing in capital projects that provide a return greater than the WACC, so knowing the number helps the firm's managers decide which projects to invest in. For potential investors, the firm's WACC tells you how much of a return the current investors are getting – and whether the firm might ultimately be worth more or less than that.

How to use it

Here is an example of a WACC calculation. Gryphon Conglomerate is a mid-sized firm assessing a few initiatives. To determine if any of these initiatives will deliver value to the firm, Gryphon is looking to calculate its WACC. Its shareholders, principally its founder and his family, started the company and continue to provide 80 per cent of its capital as equity. They are looking for a 25 per cent rate of return on their money. The other 20 per cent consists of long-term debt, with an interest rate of 6 per cent. Let's assume the marginal tax rate is 30 per cent. If it were 100 per cent equity financed, Gryphon's weighted average cost of capital would be 25 per cent. With debt, its WACC (using the formula shown above) is as follows:


WACC = (E/V) × Re + (D/V) × Rd × (1 – Tc)
     = (0.80/1.00) × 0.25 + (0.20/1.00) × 0.06 × (1 – 30 per cent)
     = 0.20 + 0.0084
     = 0.2084, or 20.84 per cent

With this WACC in mind, each initiative has to have the potential to achieve an average annual return greater than 20.84 per cent for Gryphon to consider it. This is, in reality, a pretty high number. Even initiatives with a target return of 15–18 per cent per year would not be viable for Gryphon, even though they would be for many other firms. Since other firms in its industry use a greater proportion of debt in their capital structure (50 per cent on average), Gryphon decides to take on more debt, until it reaches the industry average. Its new WACC is shown here:



WACC = (0.50/1.00) × 0.25 + (0.50/1.00) × 0.06 × (1 – 30 per cent)
     = 0.125 + 0.021
     = 0.146, or 14.6 per cent

With its new capital structure, Gryphon can take on initiatives that have the potential to earn 15–18 per cent per annum because the potential returns are higher than its WACC. What if Gryphon takes it one step further and attempts to finance itself with 80 per cent debt? Here is its revised, hypothetical, WACC:



WACC = (0.20/1.00) × 0.25 + (0.80/1.00) × 0.06 × (1 – 30 per cent)
     = 0.05 + 0.0336
     = 0.0836, or 8.36 per cent
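All three scenarios can be reproduced with a small function (a minimal sketch of the formula above):

```python
def wacc(equity_share, re, rd, tax_rate):
    """Weighted average cost of capital for a given equity/debt split (E/V and D/V)."""
    debt_share = 1 - equity_share
    return equity_share * re + debt_share * rd * (1 - tax_rate)

# Gryphon's three capital structures: 80%, 50% and 20% equity.
for e in (0.80, 0.50, 0.20):
    print(f"{e:.0%} equity: WACC = {wacc(e, re=0.25, rd=0.06, tax_rate=0.30):.2%}")
# Prints 20.84%, 14.60% and 8.36%.
```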

One can see that as the proportion of debt, which carries a lower cost, increases, the WACC goes down. But this new WACC is hypothetical, because as Gryphon takes on more debt, it takes on more risk for its equity holders, who will be worried about the company's ability to service the debt. The equity holders may, as a result of the increased risk of bankruptcy, demand a higher return on their investment. The impact of a higher cost of equity will then increase the WACC.

In calculating the WACC for a firm, investors generally want to know what the firm's target capital structure will look like, and this target is usually based on what is typical in the industry. For example, one might expect to see a higher proportion of debt used in real estate investment trusts compared to firms in the high-tech sector.

How do you calculate the cost of debt and the cost of equity? The cost of debt is easy to estimate – you can simply look at the average interest the firm pays on its existing long-term debt (or you can use the numbers from comparable firms). To calculate the cost of equity, one option is to use the 'capital asset pricing model' (CAPM), which provides an estimate based on the risk-free rate, the market return and how volatile the share price has been in the recent past. The second option is to use the dividend discount model, which provides an estimate based on dividend pay-outs. The third option is to estimate a risk premium on top of the current risk-free bond yield. For example, if the current 10-year US government bond is yielding 4 per cent and the market risk premium is 5 per cent, then the cost of equity for the firm would be 9 per cent.

Top practical tip

Estimating the WACC for a firm is a relatively straightforward calculation, because it is based on assumptions that make intuitive sense, and it does not involve difficult calculations. It is worth bearing in mind that, for many uses, a 'back-of-the-envelope' calculation is actually all that is required. For example, if you are trying to decide whether a significant capital investment is worth pursuing, you only need to know roughly what the firm's WACC is (i.e. within 1–2 per cent), because the margin of error around the investment's returns is likely to be far greater than 1–2 per cent.

Top pitfall

Calculating the cost of equity requires estimates to be made. For example, if CAPM is used, one has to determine whether the current risk-free rate is an anomaly, whether the stock's past performance (in comparison to the market's performance) is likely to continue, and what the current market risk premium should be. Using different estimates will yield different results for the cost of equity.

Further reading

Allen, F.B. (1954) 'Does going into debt lower the "cost of capital"?', The Analysts Journal, 10(4): 57–61.
Miles, J.A. and Ezzell, J.R. (1980) 'The weighted average cost of capital, perfect capital markets, and project life: A clarification', Journal of Financial and Quantitative Analysis, 15(3): 719–730.
Modigliani, F. and Miller, M. (1958) 'The cost of capital, corporation finance and the theory of investment', American Economic Review, 48(3): 261–297.


PART SEVEN: Operations


Operations management refers to the models and techniques companies use for manufacturing products and services. These models are mostly concerned with efficiency and quality, and they have historically been dominated by engineers and statisticians who use quantitative methods to design well-functioning systems.

In many business schools, classes in statistical analysis and decision making are taught alongside operations classes, and we include a few of these models here. Specifically, we discuss 'decision trees' as a useful way of thinking through options under complex conditions, and we describe 'sensitivity analysis' as a way of working through the uncertainties around a recommended course of action.

In terms of describing how a production system works, one important model is the 'theory of constraints': the notion that any system of linked parts is only as strong as its weakest link. By understanding and remedying the constraints on a system, it can be made to work far more efficiently. Looking more broadly, it is also useful to consider how a production system that extends across a number of independent companies can be improved. The 'bullwhip effect' is a useful model in this regard, in that it helps you understand why small changes in demand in the market often result in large swings in production the further back into the supply chain you go.

Over the last 50 years, our understanding of manufacturing operations has been transformed through new ways of working developed by Japanese companies such as Toyota, and through the ideas of 'quality' gurus such as W. Edwards Deming. We describe three elements of this quality revolution: 'just-in-time production' is the specific technique that was created to simplify supply chains, so that they pull components through as needed; 'total quality management' is a broader management philosophy (often linked to the concept of lean manufacturing) that seeks to infuse the entire manufacturing system with a do-it-right-first-time mind-set; and 'Six Sigma' can be seen as a refinement of total quality management that became very popular during the 1990s, thanks to the massive efficiency and quality gains it achieved.

Finally, we include two slightly unusual models, which don't apply to making physical products but are important in a business world increasingly dominated by service and technology companies. The 'service profit chain' is a model for optimising the activities of service companies such as retailers, travel companies and call centres. 'Agile development' is an alternative to the traditional waterfall-based methodology in software development. It has dramatically changed how software is created, and its core ideas are increasingly being applied beyond the software world.


53

Agile development

‘Agile development’ is a software development methodology that emphasises close attention to user needs, fast development cycles and small development teams. It is an alternative to the traditional ‘waterfall’ methodology, where all the parts are carefully designed around a master specification. While there are pros and cons to both methodologies, most observers view agile development as the better model. The principles of agile development are increasingly being used to think creatively about many aspects of management in large firms.

When to use it

● To provide rapid responsiveness to user needs.
● To enable a software system to evolve as its users' requirements change.
● To make the process of developing software more engaging to the individuals involved.

Origins

Until the 1990s, software development was mostly done using the waterfall, or 'cascade', methodology, whereby a high-level design was agreed with the customer, then broken down into modules that were specified, then coded, then tested, before being assembled and delivered to the customer. This methodology was careful and precise, but it was slow and expensive, and it did not allow users to change their minds about what they wanted the system to do.

During the 1990s a number of different groups of software developers were experimenting with alternative techniques that involved greater responsiveness to changing user needs. These people came together at a conference in Utah in 2001 and launched the 'Agile Manifesto' as a way of defining the commonalities across their alternative approaches. This manifesto helped to legitimise the 'agile' way of working. A number of specific methodologies under the agile umbrella, for example Scrum development, have since become popular.

What it is

The traditional waterfall methodology for software development had five stages: requirements, design, implementation, verification and maintenance. By structuring software development in this systematic way, projects became more rigorous and more effective. Many consultancy firms, such as Accenture and Infosys, were built on the basis of their careful application of these methodologies. The so-called 'capability maturity model' was developed by Carnegie Mellon University as a way of verifying how rigorous a particular software project was.

Agile emerged as an alternative methodology, largely because many developers had become frustrated at the rigidity of the waterfall approach. Most users of computer systems don't know exactly what their needs are, and their needs often change anyway. At the 2001 conference in Utah, the principles of the agile movement were defined as follows (www.agilemanifesto.org):

'We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
● Individuals and interactions over processes and tools.
● Working software over comprehensive documentation.
● Customer collaboration over contract negotiation.
● Responding to change over following a plan.
That is, while there is value in the items on the right, we value the items on the left more.'

How to use it

Agile is a high-level concept, almost a philosophy, of how software development should be done. To turn this into practice, a number of related methodologies have been developed; the most widely used are called XP and Scrum.

'XP', or extreme programming, has the development of working software as its key focus. It ignores all other supposedly superfluous issues, such as managerial politics and quality checks. There are three main work phases in XP: release planning, iteration and acceptance testing. Information from users is gathered in the first pass through the three steps, and this data is used to work out the bugs in the next iteration. There may be several iterations in which the software is tweaked until it has enough features to meet users' needs. Once this minimum viable product is developed, it is released to users. XP treats users as part of the development team, encouraging them to write 'user stories' about how they use the software and what gaps it fills. Integration of changes is done at least once a day, the project is tracked closely to measure the amount of work being done, and programmers work in pairs. Working in pairs was found to be more efficient and more enjoyable than working in isolation or in a large group.

'Scrum' takes its name from the huddled mass of forward players in a rugby team. The key principle of the scrum methodology in software development is for the team to work rapidly and collaboratively to get the job done – to put out a software release. Unlike XP, scrum is concerned with both the technical and the managerial sides of the development process. Scrum begins by working through a project's scope and software design. There are defined roles in a scrum team: the 'product owner' speaks on behalf of the stakeholders and relays customer feedback to the team; the 'development team' codes and puts the software together; and the 'scrum master' manages the scrum process. Working with a backlog of development and coordination tasks, team members divide the work among themselves. The development work is then managed through 'sprints', which are two- to four-week work cycles. Within each sprint there is the 'daily sprint cycle', which includes daily update meetings to discuss what was done the day before and what the objectives are for today. During this scrum meeting, members remain standing, which provides an incentive to keep the meeting as short as possible. Other important concepts in scrum are the 'burndown chart', which shows the work remaining within the sprint, and the 'product backlog', which is the complete list of requirements not currently in the product release.

Top practical tip

Agile is all about giving software development teams greater responsibility for the work they do, which in turn means giving them freedom to act. So if you are responsible for overseeing an agile development project, you need to be careful not to micro-manage the team in their efforts.

Top pitfall

Agile development methodologies rely on a close relationship between the developer and the user. For example, in XP the software is developed around user stories, but by design these stories are short and somewhat vague. So, as soon as a piece of development work is finished, the user needs to be brought in to review it and to offer feedback on how it can be improved. Without this close loop, there is a risk of the development project going off in the wrong direction.


Further reading

The 'Agile Manifesto' website: www.agilemanifesto.org
Beck, K. with Andres, C. (2004) Extreme Programming Explained: Embrace change, 2nd edition. Harlow, UK: Addison-Wesley Professional.
Schwaber, K. and Beedle, M. (2001) Agile Software Development with Scrum. Upper Saddle River, NJ: Prentice Hall.


54

The bullwhip effect

The ‘bullwhip effect’ illustrates the impact of coordination problems in traditional supply chains. It refers to the idea that small oscillations in orders from customers are amplified as you move back in the supply chain towards the production end. By understanding this effect, you can put in place mechanisms to manage it.

When to use it

● To understand why customer deliveries get delayed, or why you end up with large amounts of inventory.
● To make sense of the dynamics of a complex supply chain.
● To improve the way the supply chain functions.

Origins

Around 1990, managers at consumer goods giant Procter & Gamble coined the term 'bullwhip effect' to label a series of erratic order patterns in its baby-nappy business supply chain. Also known as the 'whiplash' or 'whipsaw' effect, the phenomenon had been known in the field of systems dynamics since the late 1950s. Systems dynamics is a way of modelling the complex interactions between the parts of a system, such as a business firm, and it helps to explain some of the surprising ways that a system behaves.

To explain the basic ideas of systems dynamics to students, Jay Forrester and colleagues at MIT developed a simulation called the 'beer game' in the 1960s. This is widely used, even today, by MBA students, and it allows the students to experience for themselves how small changes in customer orders have a dramatic effect on the amount of product that is actually manufactured.


What it is

In a supply chain, individual companies make decisions about how much to produce based primarily on information from their customers. But because of delays and errors in the information received, and because each company acts in its narrow self-interest, the system as a whole often ends up functioning very inefficiently. The 'bullwhip effect' is the name given to the commonly observed situation in which small changes in consumer demand result in large variations in orders placed upstream. In fact, even when demand from the consumer is stable, there are often supply shortages and pile-ups of inventory as you go further back in the supply chain.

There are many factors contributing to the bullwhip effect. Most involve downstream companies (such as retailers or distributors) acting in a way that suits their narrow interests, with the result that their suppliers end up becoming confused about the real state of demand for their products. For example, actively running down inventory levels, lumping together orders and buying more components than usual to hedge against potential future shortages are all tactics likely to create problems for suppliers. More broadly, a shortage of information about the overall state of demand in the supply chain also contributes to the bullwhip effect.

For nearly half a century, MBA students have played the well-known 'beer game', which splits the class into four roles for a beer brand. As customers, retailers, wholesalers and suppliers, they have to ensure that there are no stockouts. No communication is allowed, and each participant can only act on the information received from the adjacent downstream player. The result is predictably similar: when small changes in customer sales are fed through the supply chain, the supplier at the end of the chain always ends up making either too much or too little product.
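The dynamic is easy to see in a toy simulation. The sketch below is our own illustration, not a model from the literature: four tiers each forecast demand from the orders they receive and order up to a simple safety target, and all the parameters are invented for the example.

```python
# Toy bullwhip simulation: a one-off jump in consumer demand produces
# progressively larger order swings upstream.
tiers = ["Retailer", "Wholesaler", "Distributor", "Factory"]
alpha = 0.5          # exponential-smoothing weight for each tier's demand forecast
safety_weeks = 2     # each tier targets this many weeks of forecast demand in stock

forecast = {t: 10.0 for t in tiers}
inventory = {t: 20.0 for t in tiers}
orders = {t: [] for t in tiers}

for week in range(30):
    incoming = 10.0 if week < 10 else 14.0   # consumers buy a little more from week 10
    for tier in tiers:
        # Update this tier's forecast from the orders it has just received.
        forecast[tier] = alpha * incoming + (1 - alpha) * forecast[tier]
        # Order enough to cover forecast demand and restore the safety stock.
        order = max(0.0, forecast[tier] * (1 + safety_weeks) - inventory[tier])
        inventory[tier] += order - incoming  # simplification: orders arrive immediately
        orders[tier].append(order)
        incoming = order                     # this tier's order is the next tier's demand

for tier in tiers:
    print(f"{tier:12s} orders range from {min(orders[tier]):5.1f} to {max(orders[tier]):5.1f}")
```

A 40 per cent rise in consumer demand ends up producing order swings at the factory several times that size, the same pattern the beer game produces in the classroom.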

How to use it

There are several ways to counteract the bullwhip effect, including sharing information more widely, coordinating the activities of the channel more effectively and making the operation more efficient (for example, by reducing lead times). Some specific strategies to counteract the bullwhip effect are as follows:


● Provide actual data about customer demand to 'upstream' sites, so that everyone can plan from the same raw data.
● Move from large, batched orders to more frequent and smaller shipments. This will reduce the chance of an upstream supplier overreacting to an unusually large batched order when demand spikes.
● To discourage retailers from engaging in forward-buying, manufacturers should consider a one-price policy. Equally, a move to 'everyday low prices' by retailers, rather than periodic discounting, sends a clear signal about pricing that helps suppliers to those retailers to plan ahead.
● For firms facing a product shortage, allocating products in proportion to a customer's past ordering record is helpful. This avoids 'gaming' during periods of shortages.
● Return policies should not be too generous. By imposing reasonable penalties for returning products, customers are more likely to buy what they actually plan to use.

Top practical tip

While the bullwhip effect is observed most clearly in a multi-step manufacturing setting, the basic idea that variability gets amplified as you move upstream is relevant in many other industries as well. So if you are involved in overseeing any sort of supply chain, the key point is simply to be aware of the bullwhip effect, so that when it occurs you can explain why.

In addition to the specific strategies noted above, there are also some broader things you can do. One is to make the right choice about whether to use a pull or push strategy for inventory management. If you have stable demand, a push strategy can be employed, but where demand is uncertain, a pull strategy is typically better because it avoids large amounts of inventory. However, a pull strategy also opens you up to the risk of the bullwhip effect.

Another general point is the importance of information sharing. No matter where you sit in the supply chain, costs rise when there is a lack of visibility of demand along the supply chain. So encouraging information sharing among your trading partners is a good idea – some firms are reluctant to share what they see as proprietary information but, as the bullwhip effect shows, the benefits of sharing typically outweigh the costs.

Finally, it is important to consider the complexity of your product portfolio, because the bullwhip effect becomes more acute the more complex the system. Often, there are significant benefits to be gained from slimming down the portfolio of products you offer.

Top pitfall

Many of the proposed solutions to the bullwhip effect are technology-based, typically around using computer systems to provide real-time information about inventory and supply levels. However, you need to keep in mind that technology is only part of the solution. There is also a very significant behavioural component – individual managers will often deliberately act in ways that meet their narrow interests (such as having enough stock to meet their immediate customers' needs), even if they go against the broader needs of the system. An effective strategy for countering the bullwhip effect must also address these behavioural issues.

Further reading

Forrester, J.W. (1961) Industrial Dynamics, Vol. 2. Cambridge, MA: MIT Press.
Lee, H.L., Padmanabhan, V. and Whang, S. (1997) 'The bullwhip effect in supply chains', Sloan Management Review, 38(3): 93–102.
Senge, P.M. and Suzuki, J. (1994) The Fifth Discipline: The art and practice of the learning organization. New York: Currency Doubleday.


55

Decision trees

Decision trees assist in the choice between two or more courses of action. By laying out options and outcomes for each choice, assigning values and probabilities along the way, you can form a balanced picture of the risks and rewards.

When to use it

● When a problem is complex enough that laying out each option explicitly will assist in forming a decision.
● To enable decision makers explicitly to compare the costs and benefits of each outcome.

Origins

While decision trees have existed in some form for centuries (consider that 'Bayes' theorem' – which can be used to compute probability inputs for decision trees – was first stated in the eighteenth century), they became more widespread with the rise in popularity of computer programming in the 1950s.

What it is

Decision trees are graphical representations of complex decisions. They allow decision makers to depict, in chart form, the possible choices available, and to assign associated costs, benefits and probabilities to each outcome. As there are controllable and uncontrollable factors in most decisions, pictograms are a useful way to indicate whether, for example, a choice has to be made or whether uncertainty is resolved:

● Decision nodes, shown as squares, represent the controllable factors, and are used when one has to select from two or more mutually exclusive options.
● Decision branches are lines that extend from decision nodes; the group of alternatives should be collectively exhaustive.
● Event nodes, shown as circles, depict situations where uncertainty exists: what happens next is out of the hands of the decision maker.
● Event branches extend from event nodes and show possible events that may result.
● Terminal nodes, shown as triangles, are the final result of the series of decisions and events.

How to use it

Here is an example of how a decision tree can be used for decision making. The research team at Campbell Ship Motor (CSM), a designer of small, ocean-going ship motors, has developed what they believe to be a more efficient ship motor. CSM's designs are usually purchased by a large South Africa-based ship-motor manufacturer. The improved design uses new, lower-cost and modular parts and may allow the motor to be assembled on an automated line – a large saving in costs from CSM's current manual assembly line.

CSM is wondering if it should pour resources into research and development to commercialise this product, and the team draws a 'decision node' as the first part of the decision tree. The team estimates that it will cost a total of $30,000 to conduct the research and development to produce a working prototype. Due to the new materials and design, there is a high likelihood that the prototype will not pass the stress tests or the emissions tests that it will need to undergo. Should the prototype pass the tests, the team can develop a process for it to be hand-assembled. As there is uncertainty over whether the prototype will be usable, the team draws an 'event node' at this point.

Planning for the new motor to be hand-assembled would be the least complicated way to proceed, as the manufacturer currently makes all of its motors by hand. As the new motor uses fewer components, there is a chance that the ship-motor manufacturer can choose to outsource the assembly of the motor to a third party. If CSM designs the new motor to allow third parties to assemble it, the ship-motor manufacturer may pay more for the design – up to $100,000 in this case – because it will not have to reconfigure its assembly operations. There is also a chance that CSM can design the new motor so it can be assembled on an automated assembly line in North America. If it is successful in doing this, the ship-motor manufacturer will pay top dollar – perhaps $250,000 – for the design, because it can manufacture the motor close to where its customers are without having to incur high labour costs.

The research team tries to quantify the outcome of each decision and event listed above by assigning a value at each of the 'terminal nodes'. After laying out the decisions and outcomes, the team arrives at its decision tree, shown below:

Investigate?
├── Do not invest in R&D ─────────────────────────────── $0
└── Invest in R&D ($30,000 cost)
    ├── 70% Prototype is unusable ────────────────────── –$30,000
    └── 30% Prototype is usable
        ├── Automated assembly line
        │    ├── 50% Success ─────────────────────────── $250,000
        │    └── 50% Failure ─────────────────────────── $30,000
        ├── Outsourced assembly
        │    ├── 70% Success ─────────────────────────── $100,000
        │    └── 30% Failure ─────────────────────────── $50,000
        └── Manual assembly line ─────────────────────── $75,000

Now that the team has sketched out the decision tree, it can use something called the ‘rollback method’ to find the optimal strategy to pursue. Put simply, the team can determine whether it is worth its while – given available information – to begin the project and, if it does begin, which path to take. The rollback method starts by working backwards from the terminal nodes. Using the probabilities assigned to success and failure for each of the options, the team can calculate the expected value at the event nodes (the circles) for the choice of assembly line. It does this by multiplying the chance of success and failure by the respective values given:

● Automated assembly line event node: 50 per cent × $250,000 + 50 per cent × $30,000 = $140,000.

● Outsourced assembly event node: 70 per cent × $100,000 + 30 per cent × $50,000 = $85,000.

● Manual assembly line: $75,000. (Note there is no event node for the manual assembly line.)

Given that the automated assembly line event node, at $140,000, has the highest expected value of the three choices, this becomes the rollback value for the next step of the analysis. The expected value contribution from ‘prototype is usable’ is therefore 30 per cent × $140,000 = $42,000, while the contribution from ‘prototype is unusable’ is 70 per cent × –$30,000 = –$21,000. Finally, the expected value of ‘invest in R&D’ is $42,000 + (–$21,000) = $21,000. As this is higher than the ‘do not invest in R&D’ value of zero, the value of proceeding with the project, at the very start, is $21,000.
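
The rollback arithmetic is mechanical enough to automate. Below is a minimal Python sketch – the dictionary-based tree encoding and function names are illustrative assumptions, not from the text – that reproduces the $21,000 result:

```python
# A minimal sketch of the rollback method, using the CSM figures above.
# The tree encoding and node names are illustrative, not from the book.

def rollback(node):
    """Return the expected value of a node by working backwards."""
    kind = node['kind']
    if kind == 'terminal':                      # triangle: fixed payoff
        return node['value']
    values = [rollback(child) for child in node['children']]
    if kind == 'decision':                      # square: pick the best branch
        return max(values)
    if kind == 'event':                         # circle: probability-weighted average
        return sum(p * v for p, v in zip(node['probs'], values))
    raise ValueError(f'unknown node kind: {kind}')

T = lambda v: {'kind': 'terminal', 'value': v}

tree = {'kind': 'decision', 'children': [
    T(0),                                       # do not invest in R&D
    {'kind': 'event', 'probs': [0.7, 0.3], 'children': [
        T(-30_000),                             # prototype is unusable
        {'kind': 'decision', 'children': [      # choose an assembly route
            {'kind': 'event', 'probs': [0.5, 0.5],
             'children': [T(250_000), T(30_000)]},   # automated line
            {'kind': 'event', 'probs': [0.7, 0.3],
             'children': [T(100_000), T(50_000)]},   # outsourced assembly
            T(75_000),                               # manual line
        ]},
    ]},
]}

print(rollback(tree))   # 21000.0 – matches the hand calculation above
```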


Top practical tip

Decision trees can become very complex, so the real art in using them is to get the level of detail right. Typically, you don’t want to have more than three or four variables involved, so try to ensure that you have chosen the most critical variables in structuring your analysis.

Top pitfall

While working through the details may take up most of your time, it is also worth scrutinising the choices you have laid out to ensure that they capture all possible outcomes for the decision at hand. For example, returning to the analysis above, CSM may not have considered the option of selling its new design to a Korean customer.

Further reading

Quinlan, J.R. (1987) ‘Simplifying decision trees’, International Journal of Man-Machine Studies, 27(3): 221.

Yuan, Y. and Shaw, M.J. (1995) ‘Induction of fuzzy decision trees’, Fuzzy Sets and Systems, 69(2): 125–139.


56: Just-in-time production

Just-in-time (JIT) production strives to coordinate the flow of components and inputs in a supply chain to minimise costs. As its name implies, parts are scheduled to arrive as they are needed, so excessive levels of inventory are not held. JIT started out as a manufacturing concept but is becoming increasingly common in service industries.

When to use it

● To make your production process more efficient.

● To identify the bottlenecks and inefficiencies in your supply chain.

Origins

While ‘just-in-time’ is typically viewed as a Japanese invention, the original ideas underlying it can be traced back to the Ford Motor Company and the work of Ernest Kanzler, who was tasked with reducing manufacturing costs in Ford’s Highland Park plant in 1919. Kanzler introduced new ways of reducing labour and inventory costs, and he shifted inventory carrying costs to dealers, who were forced to borrow to pay for the excess spare parts they were sent.

In the post-war years, Japanese companies experimented with a host of new manufacturing techniques, inspired by the ‘quality’ principles of W. Edwards Deming, and also by some of the innovative methods pioneered by Ford much earlier. The ‘father of JIT’ was Taiichi Ohno at Toyota, who put in place the first version of a JIT inventory management system in the early 1950s, and then gradually improved it to include the ‘Kanban’ pull system by the early 1960s. This became an important part of the so-called ‘Toyota production system’, which took the automobile industry by storm. Using this new system, Toyota produced cars that were both higher in quality and lower in price than those of its competitors, and these superior production techniques were gradually adopted across the world, in many different industries.

JIT refers specifically to the inventory management process that lies at the heart of the Toyota production system. Lean manufacturing is the generic name that was given to the high-efficiency/high-quality way of working introduced by Toyota and other companies in the 1960s and 1970s. Total quality management (see Chapter 61) is a much broader philosophy of management that incorporates JIT principles and also related concepts such as ‘Kaizen’ (constant improvement) and quality circles for problem solving.

What it is

The traditional approach to manufacturing was push-based, with inputs and component parts bought in anticipation of the need to manufacture the finished product. Just-in-time reverses this logic: it is a pull-based system in which actual orders provide the signal for the acquisition of additional inventory, which is then made available on a just-in-time basis.

In a JIT system, stock levels of raw materials, components, work in progress and finished goods are kept to a minimum. For this to work, the system for scheduling the flow of resources has to be very carefully thought through. The original version of JIT used a ‘Kanban’ system (kanban means signboard in Japanese), in which physical cards were handed down the production chain when orders were confirmed, triggering the replenishment of the minimal inventory levels at each step in production. Nowadays, the card-based system has been replaced with sophisticated production scheduling software, but it is still built on a pull-based logic. JIT systems usually extend back into the manufacturer’s supply chain, often through several tiers of suppliers, using EDI (electronic data interchange) software. For example, a car manufacturing plant might receive exactly the right number and type of tyres for one day’s production, and the supplier would be expected to deliver them to the correct loading bay on the production line within a very narrow time slot.

Many of the advantages of a JIT system are related to lower costs. Stock-holding costs are reduced, which also means less wastage. Overall, JIT systems typically have lower working-capital costs; some actually have negative working capital, meaning that the firm is paid before it buys the components and builds the product. JIT systems are also more flexible, because products are not made in anticipation of demand. And the ‘lean’ nature of JIT systems means inefficiencies in the process are exposed and addressed immediately.

But there are also significant disadvantages to JIT manufacturing. First, there is little room for mistakes, because inventory levels are very low. When an error occurs, the entire process grinds to a halt (whereas under a push-based system, the inventory acts as a buffer). Second, JIT requires external suppliers to deliver on schedule, so if they cannot be relied on, the production process suffers. Moreover, external factors beyond anyone’s control can also affect the production process – a few years ago, for example, an Icelandic volcano led to all flights in Europe being cancelled for a week, and many JIT systems collapsed.
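
To make the pull logic concrete, here is a toy sketch in Python – a deliberately simplified model with invented station names and buffer sizes, not a description of any real Kanban implementation – in which a downstream withdrawal is the only signal that triggers upstream replenishment:

```python
# A toy pull-based (kanban) loop. Each station holds a small fixed buffer;
# a downstream withdrawal is the only signal to replenish from upstream.
# Station names, buffer sizes and the single-product flow are invented.

BUFFER = 3
stock = {'raw materials': BUFFER, 'sub-assembly': BUFFER, 'final assembly': BUFFER}
upstream = {'final assembly': 'sub-assembly',
            'sub-assembly': 'raw materials',
            'raw materials': 'supplier'}          # the supplier is an infinite source

def fulfil_order():
    station = 'final assembly'
    stock[station] -= 1                           # the customer's withdrawal
    while station in upstream:                    # pass the kanban signal upstream
        source = upstream[station]
        if source in stock:
            stock[source] -= 1                    # pull one unit from upstream
        stock[station] += 1                       # refill this station's buffer
        station = source

for _ in range(100):                              # demand pulls production through
    fulfil_order()
print(stock)   # every buffer is still at BUFFER – inventory never accumulates
```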


How to use it

JIT is an integral part of the production process in many industries today, because its advantages in terms of efficiency and quality clearly outweigh its limitations. However, there are also many small firms, or firms in less technologically sophisticated sectors, that use a more traditional push-based inventory management system. If you work for such a firm, the question of how or when to adopt JIT is an important one.

The first point to bear in mind is that JIT is very difficult to implement as a standalone process. It requires you to change the way you work with your customers, and probably your suppliers as well, and it typically requires investment in new software systems to make the linkages up and down the supply chain work effectively. You might also have to familiarise yourself with alternative shipping methods, such as less-than-truckload (LTL) carriers, who consolidate loads and routes to fill a trailer, and you might even be expected to invest in new facilities so that you are physically closer to the customers you are selling to.

Top practical tip

To implement a JIT system, management buy-in and support at all levels of the organisation are required, because it demands a different mind-set from traditional manufacturing. Considerable resources are needed, because JIT involves re-engineering the system, buying and implementing the necessary software, and training up employees. Building a close, trusting relationship with suppliers is also necessary, and this often takes a lot of time and effort.

Top pitfalls

If you produce only what you need when you need it, there is no room for error, and this has two implications. First, for JIT to work, many fundamental elements must be in place – steady production, flexible resources, extremely high quality, no machine breakdowns, reliable suppliers, quick machine set-ups and the discipline to maintain all the other elements. Second, the process of putting these pieces in place means that errors will undoubtedly be made, and delays to your production process will periodically occur. Building a JIT system, in other words, requires you to be tolerant of failure and able to learn from your mistakes.


Further reading

Monden, Y. (2011) Toyota Production System: An integrated approach to just-in-time. Boca Raton, FL: CRC Press.

Ōno, T. (1988) Toyota Production System: Beyond large-scale production. New York: Productivity Press.


57: Sensitivity analysis

‘Sensitivity analysis’ is a tool for mapping out the range of potential outcomes around a decision, and is especially useful when there is uncertainty around key variables. Since there may be several variables that could interact with one another, sensitivity analysis is typically aided by computer software.

When to use it

● To get a better understanding of how a proposed plan might turn out, given knowledge of certain model inputs.

● To identify errors in the proposed plan before it is carried out.

● To understand the factors that have the most impact on results.

Origins

Sensitivity analysis emerged during the computer revolution. Before computers were available, the process of making quantitative estimates based on multiple inputs was extremely laborious. As processing speeds increased, it became much easier to make small changes to the inputs of a calculation to see what the outcome would be.

The first computer-based spreadsheets revolutionised sensitivity analysis. VisiCalc, launched in 1979, was the original spreadsheet, superseded by Lotus 1-2-3 in the 1980s and Microsoft Excel in the 1990s. The beauty of spreadsheets, of course, is that by changing one cell you change the entire set of calculations. Many other forms of sensitivity analysis have also emerged over the years. For example, if you cannot use a spreadsheet because the inputs interact in highly complex ways, you can run ‘Monte Carlo simulations’ to identify a range of possible outcomes.


What it is

If you are faced with a complex decision and you want to know what impact a change in one input variable will have on the results, sensitivity analysis is a useful tool. A simple form of sensitivity analysis might isolate the impact of a change in sales levels on the net earnings of a firm. A more sophisticated approach might look at how changes in the prices offered by two competitors interact to affect your sales levels.

Sensitivity analysis can also be highly complex. For example, consider a life insurance firm that wants to know how to price its new term life insurance product, targeted at smokers aged 60 and above. An analytical team might look at the life tables that show the rate of deaths for this population over a given time interval, then develop a Monte Carlo simulation in Excel to test the product in question. A Monte Carlo simulation builds different models of results – perhaps thousands of runs using different sets of random values – to find the range of possible outcomes for a scenario. The objective, in this case, might be to determine how best to price the policies to earn a slight underwriting profit.
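
As a concrete illustration, here is a minimal Monte Carlo sketch in Python rather than Excel. All of the inputs – the death rate, premium and payout – are invented for illustration, not actuarial figures:

```python
# A toy Monte Carlo simulation of a one-year term life product.
# All inputs are illustrative assumptions, not real actuarial figures.
import random

POLICIES = 1_000
PREMIUM = 2_500        # annual premium per policy ($)
PAYOUT = 50_000        # death benefit per policy ($)
DEATH_RATE = 0.04      # assumed one-year mortality for the target group
RUNS = 10_000

def one_run():
    deaths = sum(random.random() < DEATH_RATE for _ in range(POLICIES))
    return POLICIES * PREMIUM - deaths * PAYOUT   # underwriting profit

results = sorted(one_run() for _ in range(RUNS))
mean = sum(results) / RUNS
print(f'mean profit: ${mean:,.0f}')
print(f'5th-95th percentile: ${results[int(0.05 * RUNS)]:,.0f} '
      f'to ${results[int(0.95 * RUNS)]:,.0f}')
print(f'chance of a loss: {sum(r < 0 for r in results) / RUNS:.1%}')
```

Each run re-draws the number of deaths at random, so the spread of the 10,000 results – not just the average – tells the insurer how likely an underwriting loss is at a given premium.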

How to use it

There are many different approaches to sensitivity analysis. Here is a standard set of steps you might take:

1 Select the key parameters (the input variables): Choose a realistic range for each one – perhaps maximum and minimum values, or an 80 per cent confidence interval around the likely mean. Consider other possible scenarios that may have an impact on results.

2 Conduct the sensitivity analysis by varying each parameter individually (see the sketch after this list): This typically allows you to identify parameters to which the key decision variables are relatively unresponsive. These can then be excluded from further analysis.

3 For the remaining parameters, consider how correlated they are with one another: Then, on the basis of that analysis, create a model that allows you to change different variables. Through a process of trial and error, select variables that are not highly correlated with each other (for example, net income and EBIT may show a higher degree of correlation than net income and sales).

4 Summarise the results: Select the variables that have the most impact on the results and rank them. Report them in tables or use a chart to depict the results visually.

5 Identify a ‘best-bet’ strategy: An analysis of the results should uncover a ‘best-bet’ strategy and several alternatives. A strategy might be selected because it yields the best results, because it serves another purpose (for example, the launch of a new product supports another product line), or because it is ready to be implemented immediately.
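
Step 2 can be automated in a few lines. In the sketch below – the profit model and all parameter ranges are invented assumptions – each parameter is varied across its range while the others stay at their base values, and the resulting swings are ranked (the data behind a ‘tornado’ chart):

```python
# One-at-a-time sensitivity sweep over a simple profit model.
# The model and all parameter ranges are illustrative assumptions.

base = {'units': 10_000, 'price': 12.0, 'unit_cost': 7.0, 'fixed_cost': 30_000}

def profit(p):
    return p['units'] * (p['price'] - p['unit_cost']) - p['fixed_cost']

ranges = {
    'units': (8_000, 12_000),
    'price': (11.0, 13.0),
    'unit_cost': (6.5, 7.5),
    'fixed_cost': (25_000, 35_000),
}

# Swing = spread in profit when one parameter moves across its range,
# all others held at base values. Ranking the swings shows which
# parameters matter and which can be dropped from further analysis.
swings = []
for name, (lo, hi) in ranges.items():
    outcomes = [profit({**base, name: v}) for v in (lo, hi)]
    swings.append((max(outcomes) - min(outcomes), name))

for swing, name in sorted(swings, reverse=True):
    print(f'{name:<10} swing = ${swing:,.0f}')
```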


Top practical tip

Good sensitivity analysis is painstaking work. While it is tempting to try to shortcut some of the steps, perhaps because you have an intuitive sense of what the answer will be, this should be avoided. Sometimes, the compound effects of certain variables on others can be surprising, and it takes careful analysis to bring these interactions out.

Top pitfall

The biggest pitfall in sensitivity analysis is to assume that input variables move relatively independently of one another when, in fact, they are highly correlated. Think back to the financial crisis of 2008. At its heart, the crisis occurred because the toxic assets called ‘collateralised debt obligations’ were dramatically more risky than the banks and credit rating agencies had believed. These risks were underplayed because of poor sensitivity analysis – everyone assumed that only a small fraction of home-owners would default on their mortgages, but in fact all the home-owners were susceptible to the same risk of rising interest rates. Many home-owners defaulted at about the same time, thus triggering the financial crisis.
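
The effect is easy to demonstrate. In the toy simulation below – all figures are invented – two loan books have the same average default rate, but in one of them a shared interest-rate shock moves every borrower at once, which fattens the tail of the loss distribution dramatically:

```python
# Same expected default rate, very different tail risk. All figures invented.
import random

LOANS, BASE_P, RUNS = 1_000, 0.05, 10_000

def defaults(correlated):
    if correlated:
        # A shared rate shock (20% chance) triples every borrower's risk;
        # calm years halve it, keeping the overall mean equal to BASE_P.
        p = BASE_P * (3.0 if random.random() < 0.2 else 0.5)
    else:
        p = BASE_P                        # borrowers are fully independent
    return sum(random.random() < p for _ in range(LOANS))

for mode in (False, True):
    sims = sorted(defaults(mode) for _ in range(RUNS))
    print(f'correlated={mode}: mean defaults = {sum(sims) / RUNS:.0f}, '
          f'99th percentile = {sims[int(0.99 * RUNS)]}')
```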

Further reading

Saltelli, A., Chan, K. and Scott, E.M. (eds) (2000) Sensitivity Analysis. New York: John Wiley & Sons.

Silver, N. (2012) The Signal and the Noise: Why so many predictions fail, but some don’t. London: Penguin.


58: The service-profit chain

The service-profit chain is used primarily in service-sector firms, such as retailers, restaurants, hotels and airlines. It describes the key causal links between employee service levels, satisfied customers and firm profitability, and it shows how important it is to have engaged employees working for you.

When to use it

● To identify the drivers of your profitability in a service environment.

● To diagnose problems with customer retention and loyalty.

● To diagnose problems with employee morale.

Origins

The service-profit chain was developed in the 1990s by a group of Harvard Business School faculty: James Heskett, Thomas Jones, Gary Loveman, Earl Sasser and Leonard Schlesinger. While the term ‘service-profit chain’ was only introduced in 1994, it was based on many years of research examining the link between the quality of service provided by a firm and its long-term performance.

What it is

Firms selling services have to think somewhat differently about the drivers of long-term success than those making and selling products. It is intuitively obvious that high-quality service – for example, a receptionist in a hotel going out of her way to help you – is a good thing, but we still need to think carefully about what leads the receptionist to act in that way, and what the consequences of her actions are.


The service-profit chain is an all-encompassing framework that explains the causal links in a service firm’s business model. It identifies the key parameters you need to manage, how they influence each other, and what measures you should focus on.

How to use it

By defining the elements in the service-profit chain and the links between them, you end up with a very practical framework that can help you to diagnose how you are being successful, or where the problems might lie. This then helps you to define the changes that might be needed to improve the strength of the chain as a whole. The key links in the chain, working backwards, are as follows:

● Customer loyalty drives profitability and growth: Research has suggested that a 5 per cent improvement in customer loyalty results in a 25–85 per cent improvement in profits. This is the key starting point for understanding the service-profit chain.

● Value drives customer loyalty: Based on your own experience, what makes you loyal to a service company such as a hotel, restaurant or airline? It is the perception that you are receiving value – for example, in terms of the attentiveness of the staff or the quality of the product. Value drives customer satisfaction, which in turn translates into customer loyalty.

● Employee productivity drives value: While there is a product component to the offerings of most service companies (such as the hotel room itself), it is usually the quality of the service that creates the perception of value. Value is therefore created when employees want to stay with the firm and are productive. Your policies and actions as a senior manager have a direct impact on employee retention.

● Employee satisfaction and loyalty drive productivity: Productivity is higher when employees feel good about the company they work for – they are more prepared to give their discretionary time to their work, and their positive mood rubs off on others.

● The internal environment drives employee satisfaction and loyalty: There are many factors affecting how employees feel about their work, including the quality of their colleagues, their working conditions, the incentives they receive and the specific roles they are asked to fill. These are all things that are ultimately under the control of the senior leaders running the organisation.

● Leadership underlies the chain’s success: Underpinning the service-profit chain are the policies and actions of the firm’s leaders. Leaders determine how the work is set up, what actions are rewarded or punished, whom to hire and how those hires are managed.


Top practical tip

You can’t just work on a single idea – providing better employee incentives, say, or improving customer service – and expect results throughout the service-profit chain. Improvements will come only if changes are made in the context of the system as a whole.

If you have to choose where to start, though, it makes sense to work first on the quality of the internal working environment. If you build a supportive and engaging workplace, and you provide training and promotion opportunities, you will have happy and productive employees, and this then works its way through to the rest of the links in the chain. The lesson is an important one for firms that are inclined to short-cut their way to improved performance by cost-cutting.

It is worth noting that the service-profit chain was developed for service firms, but increasingly firms selling products are also partly in the service business (think about buying a car, for example – the service you receive from the dealer and from the garage doing the servicing makes an important impression on you). So these ideas can be applied to firms in most sectors.

Top pitfall

The metaphor of a ‘chain’ is important here because the system works only as well as its weakest link. For example, there is no point in spending a lot of time measuring and improving customer loyalty if your employee turnover is really high or if you have disengaged employees. In that situation, you need to fix the problem with employee satisfaction, perhaps through better training or by putting in place more effective senior managers.

Further reading

Cronin, J.J. and Taylor, S.A. (1992) ‘Measuring service quality: A re-examination and extension’, Journal of Marketing, 56(3): 55–68.

Heskett, J.L., Jones, T.O., Loveman, G.W., Sasser, W.E. and Schlesinger, L.A. (1994) ‘Putting the service-profit chain to work’, Harvard Business Review, March–April: 164–174.

Reichheld, F. and Sasser, W.E. (1990) ‘Zero defections: Quality comes to services’, Harvard Business Review, September–October: 59–75.


59: Six Sigma

‘Six Sigma’ is a data-driven approach for eliminating defects in any process, such as product manufacturing or customer service. The term has its roots in statistics – it refers to an error rate of no more than 3.4 defects per million opportunities.

When to use it

● To resolve a quality problem in a manufacturing process.

● To get people thinking systematically about the causes of errors in a system.

● To make an entire system work more efficiently and with fewer problems.

Origins

The roots of ‘Six Sigma’ as a measurement standard can be traced back to the nineteenth-century statistician Carl Friedrich Gauss, who introduced the concept of the normal distribution curve. Sigma is one standard deviation; ‘Six Sigma’ refers to the area under the curve beyond six standard deviations from the mean. Out of one million observations, only 3.4 would lie in this part of the curve (the 3.4 figure follows the Six Sigma convention of allowing for a 1.5-sigma shift in the process mean over time).

As a management tool, Six Sigma emerged in the early 1980s. Many firms had tried to tackle quality issues through total quality management (TQM), which had proven useful in some respects but was found wanting in others. One common complaint was that TQM was internally driven, rather than focused on customers.

The Six Sigma methodology was invented by Bill Smith, an engineer at Motorola. He devised a set of tools for analysing the company’s quality problems, and these were rapidly picked up across the company because the CEO, Bob Galvin, had pledged a ten-fold improvement in product quality. Smith’s ideas were first implemented in a production plant in Illinois; productivity increased 12 per cent a year for the next ten years, and the savings on manufacturing costs accumulated to a reported $11 billion. In 1988 Motorola collected the Malcolm Baldrige National Quality Award, to a large extent because of Six Sigma.

Smith worked with consultant Mikel Harry to create the Motorola Six Sigma Institute, and throughout the 1980s the company performed well. But in the 1990s, after the untimely death of Smith, the Six Sigma initiative became less important to the firm. Mikel Harry left Motorola in 1994 to found the Six Sigma Academy, which allowed him to spread the concept. It was taken up first by Allied Signal, in 1994, and then by GE Capital, in 1995. When he saw its potential, GE’s Jack Welch made Six Sigma one of the company’s key initiatives, and this helped to introduce the concept to a much broader audience. Six Sigma became a highly popular quality improvement tool, implemented in organisations worldwide. Many books were published, and both specialist and generalist consulting firms offered Six Sigma implementation advice.

What it is

Achieving a ‘Six Sigma standard’ means making no more than 3.4 defects per million parts or opportunities. Whether you are making widgets or running a call centre, a Six Sigma process is one that is essentially error-free. Six Sigma (as a management concept) provides users with a methodology to analyse and improve the predictability of a process. There are five steps in a Six Sigma project, represented by the acronym DMAIC:

1 Define the problem, improvement activity, opportunity for improvement, project goals and customer (internal and external) requirements.

2 Measure process performance.

3 Analyse the process to determine the root causes of variation and poor performance (defects).

4 Improve process performance by addressing and eliminating the root causes.

5 Control the improved process and future process performance.

This is a fairly generic set of steps, and indeed they are very similar to the steps involved in implementing a TQM process. The difference is that Six Sigma is narrower and more technically grounded, whereas TQM is an overall management philosophy that is as concerned with people issues as with technical issues. Over the years TQM became broader and more abstract; in some ways, Six Sigma can be seen as taking its adherents ‘back to the roots’ of the original quality movement.
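
The arithmetic behind the 3.4 figure can be checked with the standard normal distribution. A quick sketch, using only the Python standard library and the conventional 1.5-sigma shift described above:

```python
# Convert a sigma level to defects per million opportunities (DPMO),
# using the conventional 1.5-sigma long-term shift of the process mean.
from math import erf, sqrt

def dpmo(sigma_level, shift=1.5):
    # One-sided tail probability beyond (sigma_level - shift) standard deviations.
    z = sigma_level - shift
    tail = 0.5 * (1 - erf(z / sqrt(2)))
    return tail * 1_000_000

for level in (3, 4, 5, 6):
    print(f'{level} sigma -> {dpmo(level):,.1f} DPMO')

# 6 sigma -> 3.4 DPMO, the figure quoted above.
```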

How to use it

The methodology for conducting a Six Sigma project is very clearly defined. DMAIC defines the high-level process, and within this there are many additional steps. One of the reasons for Six Sigma’s success, arguably, is that people are required to implement it in a highly consistent manner. To ensure that this happens, the Six Sigma Institute pays close attention to its training and accreditation process. Using terminology from the world of martial arts, people earn ‘belts’ that indicate their level of competence:

● Black Belt: Leads problem-solving projects; trains and coaches project teams.

● Green Belt: Assists with data collection and analysis for Black Belt projects; leads Green Belt projects or teams.

● Yellow Belt: Participates as a project team member; reviews process improvements that support the project.

● White Belt: Can work on local problem-solving teams that support overall projects, but may not be part of a Six Sigma project team; understands basic Six Sigma concepts from an awareness perspective.

There is also a ‘Master Black Belt’, who trains and coaches Black Belts and Green Belts, working primarily at the programme level.

Top practical tip

Like many other methodologies for improving organisational performance, Six Sigma requires commitment from the top of the firm. One of the reasons it was so successful for GE in the 1990s is that Jack Welch made it a firm-wide priority, and he then stuck with it for several years. This commitment typically involves the creation of a steering committee at the top of the firm, and senior executives undertaking a training session (often two days) to become Six Sigma champions. The Six Sigma performance measures then need to be integrated into the operating plan for the business.

Top pitfalls

One common problem firms run into is that they don’t allow enough time for a Six Sigma training programme to have its effect. This is not just a chronological issue – it is often about assigning the right number of people to a task. Often, a group of employees will be given ownership of a Six Sigma process alongside their existing responsibilities, and this makes it hard to do the level of painstaking work that is required.

Another pitfall is lack of focus. It is quite common for a firm to tackle a process that is too broad in scope. The real value of the Six Sigma methodology is that it is about ruling out every defect, and this is only really possible if the process is carefully circumscribed. In essence, Six Sigma works when it is used in a narrow, technical way and according to the original principles of its creators, Bill Smith and Mikel Harry. If it becomes a simplified metaphor for quality improvement, its utility is lost.

Further reading

Harry, M. and Schroeder, R. (2005) Six Sigma: The breakthrough management strategy revolutionizing the world’s top corporations. London: Random House.

Kwak, Y.H. and Anbari, F.T. (2006) ‘Benefits, obstacles, and future of Six Sigma approach’, Technovation, 26(5): 708–715.

Neuman, R.P. and Cavanagh, R. (2000) The Six Sigma Way: How GE, Motorola, and other top companies are honing their performance. New York: McGraw-Hill Professional.


60: Theory of constraints

The ‘theory of constraints’ is a set of tools that allows managers to identify and resolve the bottlenecks – or constraints – in a firm’s processes that hold it back from achieving higher productivity levels.

When to use it

● To diagnose the bottlenecks in a production process.

● To identify ways to make a production process work more efficiently.

Origins

The term ‘theory of constraints’ was introduced in 1984 by Eli Goldratt in his best-selling business novel The Goal. However, the underlying ideas on which the book is based are much older. In the world of business practice, they can be traced back to early innovations in the manufacturing process – for example, Henry Ford’s introduction of the assembly line in the early 1900s, and then the move towards just-in-time manufacturing spearheaded by Toyota and other Japanese firms in the 1960s. All these innovations were about making the process of manufacturing more efficient. Process engineers used to talk about ‘balancing the line’, which was an informal way of saying that bottlenecks should be taken out. Goldratt’s theory of constraints essentially formalised these ideas.

From a theoretical perspective, Goldratt’s ideas built on the notion of ‘systems dynamics’ that emerged in the late 1950s through the pioneering work of Jay Forrester. Systems dynamics is a way of understanding how the parts of a system interact with each other, and especially how positive and negative feedback loops create somewhat unpredictable outcomes.


What it is

Businesses turn resources into saleable services or products by relying on a group of closely related processes. According to the theory of constraints, every organisation has at least one constraint that prevents it from reaching its goals – greater profit and/or higher market share. There are manufacturing constraints, such as bottlenecks in a production system due to a lack of manpower or parts, and there are non-manufacturing constraints, such as a sales team’s effectiveness in closing deals or waning market demand.

The theory of constraints offers tools for managers to deal with the constraints they identify. According to Goldratt, despite what many managers may think, there are few true constraints in an organisation. The tools channel managers’ attention to one single constraint at a time, and this narrow focus generally results in improvements.

How to use it

Applying the theory of constraints involves a five-step process:

1 Identify the constraint: This is the part of the system that is its ‘weakest link’. It can be a physical constraint (such as transporting components to a factory on time) or a policy (for example, union rules that limit the number of hours worked per week).

2 Decide how to exploit the constraint: Attempt to get as much capability as possible from the constraining component, without undergoing expensive changes or upgrades. For example, you might reduce or eliminate the downtime of a bottleneck operation.

3 Subordinate everything else: Adjust the other (non-constraint) components of the system to allow the constrained component to operate at maximum effectiveness. Once this has been done, evaluate the overall system to determine whether the constraint has shifted to another component. If the constraint has been eliminated, jump to step five.

4 Elevate the constraint: Take further action to eliminate the constraint. This often involves major changes and significant investment, so this step is only considered if steps two and three have not been successful.

5 Return to step one: Having fixed the constraint, return to the beginning of the cycle to identify the next constraint. However, be aware of the cultural or social factors that often get in the way of change in an organisational setting; this frequently means adjusting your speed of action so that others can keep up.

When using the theory of constraints as a model for improving the efficiency of a production process, the following measures are most relevant:


● Throughput: The rate at which money comes into the organisation through the sales of a product or service.

● Inventory: A measure of the working capital and investment tied up in operations, including facilities, equipment and the traditional items in ‘inventory’ – raw materials, work in process and finished goods.

● Operating expense: What the firm spends to turn inventory into throughput, including expenses such as direct labour, heating and power costs, supplies, and depreciation on the machines used in production.

These three measures are interdependent, so a change in one automatically means a change to one or both of the others. The purpose of the theory of constraints, in a sentence, is to maximise throughput while minimising inventory and operating expense.
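
Step one of the cycle – finding the weakest link – is simple to express for a serial production line. In the sketch below, with invented stations and capacities, the line’s throughput is capped by its slowest station:

```python
# Identifying the constraint in a serial production line – a toy sketch.
# Station names and capacities (units per hour) are illustrative assumptions.

capacity = {
    'cutting':  120,
    'welding':   45,   # the bottleneck: the line can never exceed this rate
    'painting':  90,
    'assembly':  70,
}

bottleneck = min(capacity, key=capacity.get)
throughput = capacity[bottleneck]

print(f'line throughput: {throughput} units/hour, set by {bottleneck!r}')
for station, rate in capacity.items():
    idle = 1 - throughput / rate
    print(f'{station:<10} capacity {rate:>3}/hr, idle {idle:.0%}')

# 'Elevating' the constraint (e.g. adding a second welding cell) would
# shift the bottleneck elsewhere - which is why the five steps form a cycle.
```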

Top practical tip

Before implementing the theory of constraints in your firm, start with the end-goal in mind and work backwards. For example, to reduce bottlenecks in service workflows, you may find that the answer is as simple as reordering the schedule to ensure that Friday shifts are covered by more than one person at a time. In a manufacturing setting, giving line employees real-time information about defects allows alterations to be made before the problem escalates. In all such cases, the basic logic is the same: identify the biggest single constraint and then look for the simplest possible ways of overcoming it.

Top pitfalls

While the theory of constraints is very valuable as a way of diagnosing and correcting inefficiencies in a production process, it also has its limitations. One point to bear in mind is that the constraint may be outside your control – for example, customers may not want to buy any more of your products, or regulations might limit how much you are allowed to sell. In such cases, you can attempt to overcome the constraint, but your power to do so is itself constrained.

The other point to remember is that this is a model based on efficiency: it is about making an existing system work faster and at lower cost. If the problem your firm faces is a more strategic one – for example, trying to compete in a market facing disruptive changes – then you are much better off using other management models.


Further reading

Goldratt, E.M. (1990) Theory of Constraints. Great Barrington, MA: North River Press.

Goldratt, E.M., Cox, J. and Whitford, D. (1992) The Goal: A process of ongoing improvement. Great Barrington, MA: North River Press.


61: Total quality management

Total quality management (TQM) is an integrated set of principles and tools for helping organisations to become more efficient in everything they do. It is built on the idea that by investing in the problem-solving skills of front-line employees, you will get a return on that investment – specifically, higher-quality and lower-cost products. While originally devised in the world of manufacturing, TQM can be applied in many different aspects of business.

When to use it

● To enhance quality and productivity in your firm’s operations.

● To provide your employees with greater skills and greater responsibility.

Origins

In the 1920s, the Hawthorne Works of the Western Electric Company started applying statistical quality control to its operations. But real progress was not made until 1950, when the quality expert W. Edwards Deming was sent to Japan by the American government. He worked with the Japanese Union of Scientists and Engineers (JUSE), advising them on reconstruction and quality issues. Deming explained the causes of variations in quality in terms of both statistical control methods and the ‘soft’ aspects of getting workers involved in making improvements. His success resulted in other quality experts, including Joseph Juran and Philip Crosby, spending a lot of time in Japan in the 1950s.

Toyota was the most visible proponent of quality manufacturing techniques. By using these techniques alongside its ‘just-in-time’ production process, it became the undisputed leader in automobile production. In the early 1960s, Toyota implemented the principles of quality management in a comprehensive way. Two important elements of this model were the concept of ‘Kaizen’, in which employees were expected to make continuous incremental improvements in how the process of manufacturing was conducted, and ‘quality circles’ – regular meetings of employees in which quality problems were discussed and solutions suggested.

The success of Toyota and other Japanese firms such as Matsushita and Honda helped to spread the word about TQM, resulting in the launch of various quality awards (such as the Baldrige Award) and the introduction of International Organization for Standardization (ISO) certification in the 1990s. ISO certification, particularly ISO 9000, was used by organisations as a way of showing that they complied with important quality demands.

TQM is one of many acronyms used to label management systems that focus on quality. Others include CQI (continuous quality improvement), SQC (statistical quality control), QFD (quality function deployment), QIDW (quality in daily work) and TQC (total quality control).

What it is

Total quality management is a systematic approach to management that requires employees to be committed to continuous improvement. While it started out as a way of improving manufacturing processes, it is now applied in all types of business settings. It works both as a ‘philosophy’ of management and as a set of specific tools and techniques. Its key principles are as follows:

● Customer-focused: The customer determines the level of quality that he or she is prepared to pay for.

● Total employee involvement: Employees participate in working towards common goals. Their commitment can be obtained only after fear has been driven from the workplace – so that they feel secure enough to challenge and improve the system.

● Process-centred: All work is made up of processes – that is, series of steps that take inputs and transform them into outputs delivered to customers. The process is the ‘unit of analysis’ you focus on if you want to improve performance.

● Integrated system: Although a firm may consist of many different functional specialties, it is the horizontal processes interconnecting these functions that are the focus of TQM. Understanding what these processes are, and how they fit together, is key to the TQM methodology.

● Continuous improvement: Employees are expected to take responsibility for identifying inefficiencies and resolving them. This is an ongoing process, rather than something that happens only in response to problems or crises.

● Fact-based decision making: Corrections are made on the basis of hard data rather than gut feel. TQM therefore requires that a firm continually collects and analyses data in order to improve decision-making accuracy.

● Communications: Effective communication plays a large part in maintaining morale and in motivating employees at all levels.

How to use it

The principles above seem, to a large degree, like common sense. Indeed, the ‘quality’ revolution has been so successful that its principles are now part of the conversation in most large firms. The real challenge with TQM is implementation: while many firms talk about these principles, only a minority actually follow them in a disciplined way. For example, even though Toyota’s quality-based approach to management has been widely studied for almost 50 years, most Western automobile companies have only recently ‘caught up’ with Toyota in terms of quality and efficiency in production.

When implementing a TQM system there is no single model, as every firm is unique in terms of its culture, management practices and business processes. But the basic approach typically looks something like this. First, senior executives learn about TQM as a way of working, and commit to using it. They decide on the specific parts of the organisation that will be focused on (such as the manufacturing operation) and they define the core principles to be used. A ‘master plan’ is put in place on this basis.

Within the focal area, a project team is then put together to map the critical processes through which the firm meets its customers’ needs. Each process is monitored, and team members work with front-line employees to identify ways of improving those processes. This typically requires a lot of detailed work, and training is often needed to help the people working in these processes to understand the techniques being used. Progress is evaluated on an ongoing basis, and typically quite a lot of work needs to be done to ensure there is full buy-in to the change process – for example, using a reward/recognition system.

Top practical tip

Despite its widespread use, many firms have struggled to implement TQM effectively, and some have failed outright. The difficulties occur for two main reasons.

First, at the heart of TQM is a belief that front-line employees are sufficiently responsible and intelligent to be given responsibility for making improvements. This belief sits at odds with the traditional command-and-control philosophy of Western management, and it therefore requires a change in behaviour among senior managers that not all of them are ready for.

Second, TQM is a highly integrated and systematic approach to management. It requires careful collection of data, analysis and diagnosis, and it results in a plan for corrective action that needs to be followed up diligently. This structured approach does not work for everyone, and it is made even more challenging when the business world is changing rapidly and executive attention is fleeting.


Top pitfall

The most important thing to remember about TQM is that it is designed to improve efficiency, which means cutting costs and improving quality around an existing set of customer needs. TQM is much less useful as a way of improving effectiveness, which is about understanding and responding quickly to changing customer needs.

TQM is therefore most useful in relatively stable industries, such as the automobile industry of the 1980s and 1990s, where the winning firms were the ones that implemented their existing strategy as efficiently as possible. It is much less use in volatile industries such as telecoms or pharmaceuticals, where success is about innovation and agility. There have been some attempts at applying TQM principles to the innovation process, but this is a risky proposition, best avoided: TQM is all about taking ‘slack’ out of the system, whereas innovation requires some slack time for people to try out new ideas without anyone looking over their shoulder.

Further reading

Deming, W.E. (1986) Out of the Crisis. Cambridge, MA: MIT Press.

Lawler, E.E., Mohrman, S.A. and Ledford, G.E. (1992) Employee Involvement and Total Quality Management: Practices and results in Fortune 1000 companies. San Francisco, CA: Jossey-Bass.

Powell, T.C. (1995) ‘Total quality management as competitive advantage: A review and empirical study’, Strategic Management Journal, 16(1): 15–37.


