Performance measurement in the 21st century : From performance measurement to performance management 9781845441876, 9781845440237


English Pages 102 Year 2004


Volume 10 Number 5 2004

ISBN 1-84544-023-4

ISSN 1463-7154

Business Process Management Journal Developing re-engineering towards integrated process management Performance measurement in the 21st century: from performance measurement to performance management Guest Editor: Dr Yasar Jarrar

In association with The European Centre for Total Quality Management

www.emeraldinsight.com

Business Process Management Journal

ISSN 1463-7154 Volume 10 Number 5 2004

Performance measurement in the 21st century: from performance measurement to performance management Guest Editor Dr Yasar Jarrar

CONTENTS

Access this journal online ............................................................... 499
Advisory group and editorial board ................................................ 500
Abstracts and keywords .................................................................. 501
Guest editorial ................................................................................. 503
Viewpoint: Extracting value from data – the performance planning value chain
Andy Neely and Yasar Jarrar ........................................................... 506
From process measurement to performance improvement
Ian Robson ....................................................................................... 510
Structure, speed and salience: performance measurement in the supply chain
Chris Morgan ................................................................................... 522
Measuring employee assets – The Nordic Employee Index™
Jacob K. Eskildsen, Anders H. Westlund and Kai Kristensen ........ 537
Intellectual capital – defining key performance indicators for organizational knowledge assets
Bernard Marr, Gianni Schiuma and Andy Neely ............................ 551
Proposed analysis of performance measurement for a production system
H’ng Gaik Chin and Muhamad Zameri Mat Saman ........................ 570
Activity-based costing for logistics and marketing
Drew Stapleton, Sanghamitra Pati, Erik Beach and Poomipak Julmanichoti ... 584

Access this journal electronically
The current and past volumes of this journal are available at www.emeraldinsight.com/1463-7154.htm
You can also search over 100 additional Emerald journals in Emerald Fulltext at www.emeraldinsight.com/ft
See the page following the contents for full details of what your access includes.

www.emeraldinsight.com/bpmj.htm
As a subscriber to this journal, you can benefit from instant, electronic access to this title via Emerald Fulltext. Your access includes a variety of features that increase the value of your journal subscription.

How to access this journal electronically
To benefit from electronic access to this journal you first need to register via the Internet. Registration is simple and full instructions are available online at www.emeraldinsight.com/rpsv/librariantoolkit/emeraldadmin Once registration is completed, your institution will have instant access to all articles through the journal’s Table of Contents page at www.emeraldinsight.com/1463-7154.htm More information about the journal is also available at www.emeraldinsight.com/bpmj.htm
Our liberal institution-wide licence allows everyone within your institution to access your journal electronically, making your subscription more cost-effective. Our Web site has been designed to provide you with a comprehensive, simple system that needs only minimum administration. Access is available via IP authentication or username and password.

E-mail alert services
These services allow you to be kept up to date with the latest additions to the journal via e-mail, as soon as new material enters the database. Further information about the services available can be found at www.emeraldinsight.com/usertoolkit/emailalerts

Emerald WIRE (World Independent Reviews)
A fully searchable, subject-specific database, brought to you by Emerald Management Reviews, providing article reviews from the world’s top management journals.

Research register
A web-based research forum that provides insider information on research activity world-wide, located at www.emeraldinsight.com/researchregister You can also register your research activity here.

User services
Comprehensive librarian and user toolkits have been created to help you get the most from your journal subscription. For further information about what is available visit www.emeraldinsight.com/usagetoolkit

Key features of Emerald electronic journals
Automatic permission to make up to 25 copies of individual articles. This facility can be used for training purposes, course notes, seminars etc. This only applies to articles of which Emerald owns copyright. For further details visit www.emeraldinsight.com/copyright
Online publishing and archiving. As well as current volumes of the journal, you can also gain access to past volumes on the Internet via Emerald Fulltext. Archives go back to 1994 and abstracts back to 1989. You can browse or search the database for relevant articles.
Non-article content. Material in our journals such as product information, industry trends, company news, conferences, etc. is available online and can be accessed by users.
Key readings. This feature provides abstracts of related articles chosen by the journal editor, selected to provide readers with current awareness of interesting articles from other publications in the field.
Reference linking. Direct links from the journal article references to abstracts of the most influential articles cited. Where possible, this link is to the full text of the article.
E-mail an article. Allows users to e-mail links to relevant and interesting articles to another computer for later use, reference or printing purposes.

Additional complementary services available
Your access includes a variety of features that add to the functionality and value of your journal subscription:

Choice of access
Electronic access to this journal is available via a number of channels. Our Web site www.emeraldinsight.com is the recommended means of electronic access, as it provides fully searchable and value-added access to the complete content of the journal. However, you can also access and search the article content of this journal through the following journal delivery services:
EBSCOHost Electronic Journals Service: ejournals.ebsco.com
Huber E-Journals: e-journals.hanshuber.com/english/index.htm
Informatics J-Gate: www.j-gate.informindia.co.in
Ingenta: www.ingenta.com
Minerva Electronic Online Services: www.minerva.at
OCLC FirstSearch: www.oclc.org/firstsearch
SilverLinker: www.ovid.com
SwetsWise: www.swetswise.com
TDnet: www.tdnet.com

Emerald Customer Support
For customer support and technical help contact:
E-mail: [email protected]
Web: www.emeraldinsight.com/customercharter
Tel: +44 (0) 1274 785278
Fax: +44 (0) 1274 785204

BPMJ 10,5

500

ADVISORY GROUP Professor Thomas H. Davenport Babson College, USA Professor Varun Grover William S. Lee Distinguished Professor of Information Systems, Clemson University, USA Professor Gopal Kanji Dept of Applied Statistics & Operational Research, Sheffield Hallam University, Sheffield, UK Dr John Peters Emerald, 60/62 Toller Lane, Bradford, UK Professor N. Venkatraman David J. McGrath Jr, Professor of Management, Boston University, USA EDITORIAL BOARD Professor Hassan Abdalla De Montfort University, Leicester, UK Professor Zaitun Abu Bakar Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia Dr Asamir Abuznaid Hebron University, Palestinian Authority Professor Adel M. Aladwani Department of Quantitative Methods & Information Systems, College of Business Administration, Kuwait University, Kuwait

Professor A. Sharaf Eldin Faculty of Computers and Information, Helwan University, Cairo, Egypt Professor A. Gunasekaran University of Massachusetts, USA Dr Ray Hackney Director BIT Research, The Business School, Manchester Metropolitan University, Aytoun Street, Manchester, UK Dr H. James Harrington The Harrington Institute, USA Professor Suliman Hawamdeh School of Communication and Information, Nanyang Technological University, Singapore Professor Kai Jakobs Computer Science Department, Technical University of Aachen, Germany Chan Meng Khoong Director of Business Development, EDS International (S) Pte Ltd, Singapore Dr Peter Kueng Crédit Suisse, Zurich, Switzerland Professor Binshan Lin Department of Management & Marketing, Louisiana State University in Shreveport, USA

Professor Mustafa Alshawi Faculty of Business and Informatics, University of Salford, UK

Professor Pericles Loucopoulos Chair of Information Systems, Department of Computation, UMIST, Manchester, UK

Professor Atta Badii Department of Information Systems, University College Northampton, Northampton, UK

Professor Michael Rosemann Director of the Centre for IT Innovation, Queensland University of Technology, Brisbane, Australia

Professor Saad Haj Bakry College of Engineering, King Saud University, Saudi Arabia Professor David Bennett Aston Business School, Birmingham, UK Professor Sergio Beretta SDA Bocconi, Milan, Italy Professor Walter W.C. Chung Department of Industrial and Systems Engineering, Hong Kong Polytechnic University, Hung Hom, Hong Kong

Business Process Management Journal, Vol. 10 No. 5, 2004, p. 500. © Emerald Group Publishing Limited, ISSN 1463-7154

Professor Georgios I. Doukidis Department of Management Science and Technology, Athens University of Economics and Business, Athens, Greece

Professor Barrie Dale United Utilities Professor of Quality Management, Manchester School of Management, UMIST, Manchester, UK

Professor John Sharp Faculty of Business and Informatics, University of Salford, Salford, UK Professor Namchul Shin Pace University, School of CSIS, New York, USA Constantinos J. Stefanou Technological Educational Institution (TEI) of Thessaloniki, Greece Professor W.M.P. van der Aalst Department of Information and Technology, Eindhoven University of Technology, Eindhoven, The Netherlands Professor Anthony Wensley Rotman School of Business, Ontario, Canada

Abstracts and keywords

Extracting value from data – the performance planning value chain
Andy Neely and Yasar Jarrar
Keywords Value analysis, Performance measurement (quality)
Focuses on the performance planning value chain (PPVC). The PPVC is the latest thinking in the field of Performance Measurement developed by the Centre for Business Performance, Cranfield School of Management. It provides a systemic process for using data to enhance decision-making, bringing together a vast array of tools to extract value from data and focus efforts on what will add real value to the organisation.

From process measurement to performance improvement
Ian Robson
Keywords Process management, Performance management, Complexity theory, Supply chain management, Performance measurement (quality)
This paper uses basic principles from complexity theory, psychology and management theory to demonstrate that many traditional methods of identifying performance measures may not result in improvements in overall performance. To illustrate this, the paper first identifies the answers to six fundamental questions that are critical to the success of process measurement if it is to move away from merely measuring performance towards a fully integrated approach to improving process performance. The paper then addresses a final question on the type of measurement approach that is most likely to improve organizational performance. The answers to these seven questions make a compelling argument for a reassessment of many established approaches to measurement. However, rather than proposing yet another, different approach, the paper outlines the steps that integrate other approaches into a single, unified measurement approach to improving process performance.

Structure, speed and salience: performance measurement in the supply chain
Chris Morgan
Keywords Performance management, Organizational analysis, Supply chain management, Information management, Performance measurement (quality)
The supply chain presents many challenges for management and for the design of performance measurement systems. It is possibly one of the final structural areas of business in which significant savings are to be made, and it is becoming an increasingly important strategic tool as trade becomes global in perspective. To assess the problems faced in the supply chain, this paper begins with an overview of where we are in terms of the development of performance measurement theory. It then highlights two key issues that the supply chain raises in measuring performance in an intra-organisation scenario, before identifying no fewer than nine preconditions necessary for effective and dynamic performance measurement within supply chains.

Measuring employee assets – The Nordic Employee Index™
Jacob K. Eskildsen, Anders H. Westlund and Kai Kristensen
Keywords Employees, Measurement, Job satisfaction, Scandinavia
Based on a review of the literature within the fields of job satisfaction and organizational commitment, this paper proposes a generic model for measuring employee perceptions. The model consists of seven latent constructs operationalised through 29 manifest indicators. This model is subsequently tested on data from a questionnaire survey to which approximately 9,600 employees from the Nordic countries responded. The statistical technique applied for this analysis is partial least squares (PLS), which is well suited for structural equation modelling when the focus is on prediction. The results support the structure of the suggested generic model but reveal differences between the Nordic countries with regard to strength of the



relationships as well as the average case value of the seven latent constructs.

Intellectual capital – defining key performance indicators for organizational knowledge assets
Bernard Marr, Gianni Schiuma and Andy Neely
Keywords Intellectual capital, Assets, Performance measurement (quality), Knowledge management
Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin the capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge-based assets and then introduces the knowledge asset map, which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests visualising the value pathways of knowledge assets before designing strategic key performance indicators, which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy.

Proposed analysis of performance measurement for a production system
H’ng Gaik Chin and Muhamad Zameri Mat Saman
Keywords Quantitative methods, Production methods, Performance measurement (quality), Decision making, Manufacturing systems, Strategic planning
Hitherto, very few performance measures have been constructed for selecting the right production system. Before a new, advanced manufacturing system (e.g. Just In Time and Flexible Manufacturing System) is

implemented in a company, it is of paramount importance that extensive analysis is done to ensure that the ladder is lying against the right wall. Currently, the selection of a production system is mostly centred on perceptions or mere judgments from experience. Hence, a quantitative form of selection would be more convincing, as it will show and compare the degree of advantage between manufacturing systems in numbers and percentages. This paper discusses the authors’ attempt to formulate a performance measure that can quantitatively analyse and select the best production system for a company. A questionnaire survey has been carried out in a multinational company in Johore, Malaysia and the results are used to formulate the performance measure. Reliability tests on the instrument and correlation tests on the six identified manufacturing outputs were performed. Tests of significance were also done on the outputs used.

Activity-based costing for logistics and marketing
Drew Stapleton, Sanghamitra Pati, Erik Beach and Poomipak Julmanichoti
Keywords Activity-based costs, Logistics, Marketing, Information gathering
Activity-based costing (ABC) is gradually being utilized as more of a decision-making tool than an accounting tool. This paper investigates how, after almost a decade of slow growth, ABC is gaining acceptance as a tool to determine the true costs of marketing and logistics activities. The paper discusses how ABC provides managers with considerable insight into how various products, territories and customers play major roles in logistics and marketing activities and, consequently, drive total costs. The advantages of the ABC model in terms of providing the right information to marketing managers with regard to which products, customers or territories are more important, and which could be eliminated without affecting the overall objectives of the firm, are presented. The paper concludes by identifying ABC’s shortcomings and the promise it holds for the modern enterprise.
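The ABC mechanics the abstract describes can be sketched numerically: pool costs by activity, compute a rate per cost-driver unit, then trace costs to customers by their driver consumption. The activities, drivers and all figures below are invented purely for illustration; they are not taken from the paper.

```python
# Illustrative activity-based costing for logistics (all figures invented).
# Step 1: pool costs by activity.
activity_costs = {            # annual cost pools, in currency units
    "order_processing": 120_000,
    "warehousing": 300_000,
    "delivery": 180_000,
}
driver_volumes = {            # total cost-driver units per activity
    "order_processing": 8_000,    # orders handled
    "warehousing": 50_000,        # pallet-days stored
    "delivery": 12_000,           # deliveries made
}
# Driver consumption by customer (two hypothetical customers).
customers = {
    "A": {"order_processing": 5_000, "warehousing": 20_000, "delivery": 4_000},
    "B": {"order_processing": 3_000, "warehousing": 30_000, "delivery": 8_000},
}

# Step 2: rate per driver unit for each activity.
rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

def customer_cost(usage):
    """Total activity cost traced to one customer via its driver usage."""
    return sum(rates[a] * units for a, units in usage.items())

for name, usage in customers.items():
    print(name, customer_cost(usage))
```

In this invented example customer B consumes more warehousing and delivery capacity, so ABC traces more cost to B than a uniform allocation over order count would: the kind of product/customer insight the paper claims for marketing and logistics decisions.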

Guest editorial

About the Guest Editor
Yasar Jarrar is a Visiting Research Fellow at the Cranfield School of Management, UK, a Visiting Lecturer at Bradford School of Management, UK, and a Visiting Professor at the Universita Degli Studi Della Basilicata, Facolta Di Ingegneria, Potenza, Italy. Yasar is currently the Head of the Government Policy and Strategy unit in the Dubai Government Excellence Programme, working on overall improvement and development programmes for the Government of Dubai. He is also a Visiting Research Fellow at the Centre for Business Performance, and has worked on applied research for various sponsoring organizations including DHL, Bank of Scotland, Arla Foods, BTCellnet, Greggs of Yorkshire, DTI, and Accenture. His major research areas include performance measurement and management, process improvement and Six Sigma, on which he has authored numerous reports and papers. He is also currently an Honorary Visiting Fellow in Total Quality Management at the European Centre for Total Quality Management, University of Bradford, UK, and has been an invited speaker at numerous national and international events. He previously worked as a quality management consultant and industrial engineer in the Middle East.

Recent international speaking engagements include: 3rd Annual Balanced Scorecard Implementation Forum, Keynote Speaker, IIR, Dubai, UAE (February 2003); the International Quality Congress, Keynote Speaker, IIR, Dubai, UAE (January 2003); Creating Public Value: Challenges of Participation, Partnership, Performance, and Productivity in Public Service, Warwick University, UK, November 2002; “Key Performance Indicators for The Public Sector” conference, Asia Business Forum, Kuala Lumpur, Malaysia, August 2002; “Six Sigma: The Impact on the Bottom Line” conference, Cranfield School of Management, UK, February 2002.

Recent publications include: Closing the Gap 3 – Benchmarking UK Industries, Department of Trade and Industry, UK, June 2002; REACTE Report – Benchmarking the Member States of the European Union – A Report to EU Policy Makers, Department of Trade and Industry, UK, April 2002; Guest Editor, Business Process Management Journal, Emerald, UK, Special Issue on Performance Measurement in the 21st Century, due in the 4th quarter of 2003; Guest Editor, Benchmarking: An International Journal, Emerald, UK, Special Issue on Benchmarking in the Knowledge Era, due in the 1st quarter of 2004. For more information, visit the Web site: www.som.cranfield.ac.uk/som/cbp/

Performance measurement in the 21st century: from performance measurement to performance management

As the pace of change accelerates in the 21st century as a result of technological opportunities, liberalisation of world markets, demands for innovation, and continually decreasing life cycles, organisations are finding that they have to continuously re-adjust and re-align their operations to counter all these challenges. This pace of change has increasingly forced organisations to be more outward looking, market oriented, and knowledge driven. These pressures have clearly driven the subject of performance measurement high up the management agenda. To be successful, today’s companies need to look at their internal and external environment and generate a vision of where they want to be three, five and ten years from now. The next step towards success is to provide focus in the organisation by translating this vision into a clear strategy, and to make that strategy visible by identifying key performance areas. By defining metrics for core processes and linking them to the present key performance areas, organisations can align their resources and their


Business Process Management Journal, Vol. 10 No. 5, 2004, pp. 503-505. © Emerald Group Publishing Limited, ISSN 1463-7154


efforts to the overall strategy, in turn driving the organisation towards fulfilment of its future goals.

Performance measurement systems have always played an important role in the effective planning and efficient running of a business. By measuring performance in a constructive way, profound improvements can be made to the overall performance of an organisation. An effective measurement system enables the managers of an organisation to determine whether the activities occurring within a facility are, in fact, supporting the achievement of objectives, and whether these objectives move the organisation closer to the stated vision. A good performance measurement system helps an organisation to establish its current position, communicate direction to its people, stimulate action, facilitate learning and influence behaviour. On the downside, however, a badly constructed performance measurement system will destroy performance by encouraging emphasis on the wrong activities and a lack of harmony between organizational strategies, management and the overall organizational culture.

At the heart of this growth in interest in performance measurement is the belief that “the way we measured business performance so far is simply not good enough for today’s business environment”. Traditional performance measures, developed from costing and accounting systems, have been criticized for encouraging short-termism, lacking strategic focus, encouraging local optimisation, encouraging minimisation of variance rather than continuous improvement, not being externally focused, and even for destroying competitiveness. In an attempt to overcome these criticisms, performance measurement frameworks have been developed to encourage a more balanced view. Various approaches have been developed over the past decade, including Economic Value Added, the Performance Prism, and the Balanced Scorecard. It is in this ethos that the papers for this special issue have been selected.
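To make the criticism of accounting-derived measures concrete, the two measures named in this editorial can be written as formulas: return on capital employed divides operating profit by capital employed, while Economic Value Added charges profit for the capital it consumes. The sketch below uses the standard textbook definitions with invented figures; nothing in it is specific to this issue.

```python
# Two traditional, accounting-derived performance measures.
# All input figures below are invented for illustration.

def roce(operating_profit, capital_employed):
    """Return on capital employed, as a fraction (e.g. 0.15 = 15%)."""
    return operating_profit / capital_employed

def eva(nopat, wacc, capital_employed):
    """Economic Value Added: net operating profit after tax, less a
    charge for the capital employed at the weighted average cost of
    capital (WACC)."""
    return nopat - wacc * capital_employed

print(roce(150_000, 1_000_000))       # 0.15
print(eva(120_000, 0.10, 1_000_000))  # 20000.0: value created above the cost of capital
```

Both depend only on period accounting figures, which is exactly why, as the editorial argues, they reward short-term results and say little about intangibles or future performance.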
Performance measurement has developed in two dimensions over the decades, and it is these two dimensions that the special issue attempts to portray: (1) Performance measurement has moved from focusing merely on the financial aspects. Traditionally, management accountants have provided financial results, which aim to gauge the health of the business. Over the years, these results have become short-term in perspective, e.g. return on capital employed. Therefore, performance measures and management techniques/strategies that have more predictive power and are result oriented with a long-term perspective are needed. Companies must adopt a more enlightened approach to assessing and managing company performance. At the heart of this shift from focus on financials to wider views is the measurement of intellectual capital and intangible assets. Measuring these assets is the focus of two papers in this special issue exploring Key Performance Indicators for organizational knowledge assets and measuring employee assets. (2) Performance measurement has also developed and progressed from merely focusing on measuring things to actually thinking about the extended value chain of how to use these measures for performance management and improvement – organisations are now trying to focus on managing with

measures. This is explored in three papers in this issue, which shed light from three different angles. The first explores the move from process measurement to performance improvement, using basic principles from complexity theory, psychology and management theory to demonstrate that many traditional methods of identifying performance measures may not result in improvements in overall performance. Another paper explores the use of performance measures in production systems and proposes an analysis approach. Finally, at a macro level, the final paper reflects on issues facing performance measurement in the supply chain as a whole.

Yasar Jarrar
Guest Editor


The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister


The current issue and full text archive of this journal is available at www.emeraldinsight.com/1463-7154.htm

VIEWPOINT

Extracting value from data – the performance planning value chain
Andy Neely and Yasar Jarrar
Centre for Business Performance, Cranfield School of Management, Cranfield, Bedford, UK
Keywords Value analysis, Performance measurement (quality)
Abstract Focuses on the performance planning value chain (PPVC). The PPVC is the latest thinking in the field of Performance Measurement developed by the Centre for Business Performance, Cranfield School of Management. It provides a systemic process for using data to enhance decision-making, bringing together a vast array of tools to extract value from data and focus efforts on what will add real value to the organisation.

Business Process Management Journal, Vol. 10 No. 5, 2004, pp. 506-509. © Emerald Group Publishing Limited, ISSN 1463-7154. DOI 10.1108/14637150410559180

Organisations today have access to almost unlimited amounts of data – sales, demographics, economic trends, competitive data, consumer behaviour, efficiency measures, financial calculations, etc. However, many decision makers in organisations feel lost and perplexed. They have mountains of data and still find it difficult to make the appropriate decisions, or to understand where they really are. Managers today complain of “drowning in data while thirsting for information”. Organisations seem to be generating data at a much faster rate than any manager can master and, in parallel, the useful life of that data is collapsing. Moreover, performance measurement as a whole is becoming more and more costly.

This phenomenon of drowning in data is having a profound effect on the way boards manage their meetings. The volume and variety of transactions within today’s organisations mean that the board/executive team could easily get engrossed in incredibly detailed reviews of operational performance at their meetings. While interesting in themselves, such detailed reviews might overlook the strategic content. Executives world-wide are clearly recognising this issue and some are beginning to question whether the structure and focus of their performance reviews are appropriate for their business.

The problem has never been a lack of useful tools or proven techniques. Most tools for data analysis, interpretation, and visualisation have been around for many years. Neither was the problem any lack of capable IT or business systems to deploy these tools. In the year 2000, World Research Inc.[1] estimated that the “business intelligence and data warehousing” tools and services market was growing at an average of more than 50% and would reach $113 billion by 2002. The problem stems from how these tools are used, their sequence and interaction, and what organisations do with the output.
Today, more than ever, to succeed in multi-channel, high-speed environments, organisations need to leverage the data they have at their disposal. The performance planning value chain (PPVC) aims to facilitate that process. The PPVC is the latest thinking in the field of Performance Measurement developed by the Centre for Business Performance, Cranfield School of Management. It provides a systemic process for using data to enhance decision-making, bringing together a vast array of tools to extract value from data and focus efforts on what will add real value to the organisation. It aims to:
• provide a process for the transformation of data – often disorganised or dispersed in its original form – into high-quality, value-added information that enables users to make more effective decisions;
• provide a process that coherently brings together a combination of skills for analysing and interpreting complex information from a variety of sources, the ability to present complex technical information to non-specialists, and the ability to add insight.


In short, the PPVC is based on the simple premise that organisational data, and its analysis, are the lifeblood for value adding and thus, should be viewed as a critical input to organisational strategy delivery and improvement. The PPVC (Figure 1) covers the whole process of extracting value from data, from setting the hypothesis at hand to planning action based on informed decisions. In essence, it provides business analysts with a “best practice” pathway. The PPVC was designed to emphasise to the performance analysts, a key analogy that we frequently use, that of the detective. If a detective is investigating a crime or constructing a case, he/she does not rely on a single piece of data. Instead he/she gathers all of the available evidence and tries to piece together the story or sequence of events. So, it should be with performance analysts. When the analysts in an organisation are constructing a case, they should pull together all of the available data and use it to present a coherent answer to a specific question. Only then they will enable the board to have the right level of discussion. The Performance Planning Value Chain framework covers various steps for extracting value from data including: (1) Develop hypothesis. At this early stage, the first step is to think about, and decide upon the objective of the data analysis. Basically, this phase is all about deciding: what are the questions that need answering? This is a crucial step prior to starting the data collection and analysis. It requires the business analyst to identify the questions/issues that the analysis will attempt to unravel. It is about identifying the gaps in performance that need investigation, the links between enablers and results, and the preliminary

Figure 1. Performance planning value chain

BPMJ 10,5 (2)

areas of focus in terms of the potential problems and possible answers. The tools used here are mainly those that help describe the status quo in business operations, and include Success Maps, Process Maps and Gap Analysis.
(2) Gather data. This phase focuses on answering: what data do we need to collect? Do we collect it already? How can we gather it in an effective and efficient manner? While most organisations collect, literally, tons of data, few have trust in the data. There are always questions about the data sources, how the data were collected, data collection points and timing, and, in general, how much trust can be put in the data. This is usually due to non-credible and/or non-transparent data sources, a result of poorly designed measures, or a combination of both. It is not unusual to observe two people heatedly arguing over some dimension of performance, only to find later that the root cause of their disagreement was the imprecise definition of a measure. It is for this reason that this step is important, and the tools used here were selected to ensure organisations follow a systematic and structured data collection approach. Tools include sampling plans, measure design templates, questionnaire design and data collection plans.
(3) Data analysis. Once all the right data have been collected via a process that can be trusted, this stage attempts to answer the question: what are the data telling us? At this point, the analysts start transforming data into information by using tools for quantitative and qualitative data analysis. These tools help dissect the data and show them in different lights, to start understanding the message contained in the data. Tools here include the basic seven tools of quality management (histograms, Pareto analysis, etc.), among others. The outcome from this stage is information – raw data transformed into value-adding information (graphs, comparative tables, percentages, . . .).
(4) Interpretation. It is important to differentiate this step from data analysis. Once the charts and graphs have been completed in the previous step, the question becomes: what does this mean for the business? It is here that detective thinking becomes crucial, and it is this stage that most analysts seem to miss when working with data. In fast-moving environments, analysts reporting to the board usually undertake the data analysis and produce the charts and graphs, and mistakenly leave the actual interpretation to the board meeting. This stage is therefore crucial and attempts to deal with fundamental questions: what insights can we extract from the data? How does the message differ when we change the angle from which we look at the data? Tools include information visualisation, benchmarking and out-of-the-box thinking techniques.
(5) Inform/communicate insights. Now that the data have been analysed and some insights gained, the question becomes: how do we best deliver the message? Who is the audience and what do they want? What are the best channels for delivering the message? Great analysis and valuable insights can be lost if the message is not delivered in the right frame and light. It is, thus, prudent that once insights have been identified, they are packaged in a delivery channel suitable for the audience they are targeting. Tools here include presentation skills, attention management techniques and storytelling skills.
(6) Make informed decisions and plan/take action. It is at this stage that an organisation can take action based on the information presented and the knowledge gained. Armed with the insights from the previous phases, the questions at this stage include: how do we take action based on the data? How do we prioritise our actions? This is where all the work done so far can be translated into actions that deliver bottom-line value to the organisation. To succeed in this undertaking, the tools required include action planning tools, decision-making and prioritisation techniques, project management and feedback systems.

In an age where data are becoming abundant and available to everyone at increasingly affordable cost, and while the useful lifecycle of data is shortening, the ability to quickly and effectively extract value from data, to inform decision-making and facilitate quick action, will separate the winners from the losers. The PPVC is designed to move the organisation from working with data to effectively handling information, and turning it into value-adding knowledge and sustainable experience for competitive advantage.

Note
1. World Research Inc. (1999), Business Intelligence and Data Warehousing (BI/DW) Programme Competitive Analysis, California, USA.
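Phases (2)-(6) can be read as a simple pipeline from raw data to action. The sketch below is a hypothetical illustration, not from the paper: all function names and figures are invented, and each three-line stub stands in for the tools listed above.

```python
# Illustrative PPVC sketch (hypothetical data and names): phases 2-6 as a
# minimal pipeline for a delivery-performance example.

def gather_data():
    # Phase 2: collect raw measurements from a trusted, documented source.
    return [("week1", 940), ("week2", 910), ("week3", 960)]  # on-time deliveries per 1,000

def analyse(raw):
    # Phase 3: transform raw data into information (here, failures per 1,000).
    return {period: 1000 - on_time for period, on_time in raw}

def interpret(failures):
    # Phase 4: ask what the numbers mean for the business.
    worst = max(failures, key=failures.get)
    return f"Failures peaked in {worst} at {failures[worst]} per 1,000 deliveries"

def communicate(insight):
    # Phase 5: package the message for the audience.
    print(insight)

def decide(failures):
    # Phase 6: turn the insight into a prioritised action (threshold is invented).
    return "investigate" if max(failures.values()) > 50 else "monitor"

failures = analyse(gather_data())
communicate(interpret(failures))
print(decide(failures))
```

In practice each stage would draw on the tools described above (sampling plans, the seven quality tools, visualisation, and so on) rather than a stub, but the hand-off between phases is the same.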

Performance planning value chain 509

The Emerald Research Register for this journal is available at www.emeraldinsight.com/researchregister

BPMJ 10,5

The current issue and full text archive of this journal is available at www.emeraldinsight.com/1463-7154.htm

From process measurement to performance improvement Ian Robson


Perception Dynamics Ltd., Claygate, Surrey, UK
Keywords Process management, Performance management, Complexity theory, Supply chain management, Performance measurement (quality)
Abstract This paper uses basic principles from complexity theory, psychology and management theory to demonstrate that many traditional methods of identifying performance measures may not result in improvements in overall performance. In order to illustrate this, the paper first identifies the answers to six fundamental questions that are critical to the success of process measurement, if it is to move from just measuring performance to a fully integrated approach to improving process performance. The paper then addresses a final question on the type of measurement approach that is most likely to improve organizational performance. The answers to these seven questions make a compelling argument for a reassessment of many established approaches to measurement. However, rather than proposing yet another approach, the paper outlines the steps that integrate other approaches into a single, unified measurement approach to improving process performance.

Business Process Management Journal Vol. 10 No. 5, 2004 pp. 510-521 © Emerald Group Publishing Limited 1463-7154 DOI 10.1108/14637150410559199

Introduction
Measurement has become such an accepted approach within organisations that considerable effort is expended in trying to identify "what" can be measured and "how" to measure it. However, few people genuinely challenge "why" they should measure in the first place. Every measurement activity incurs costs to both implement and maintain, and every additional measure potentially reduces the efficiency of the process. Without knowing the exact circumstances under which a measurement system will or will not improve performance, it is difficult to justify the additional cost of implementing one. In order to identify a cost-effective way of using measurement systems to improve performance, this paper outlines answers to seven critical questions:
. How can the use of measurement assist in improving performance?
. How does measurement affect human behaviour and motivation?
. When are measurement systems likely to create a deterioration in performance?
. How can a minimum set of measures be identified for an individual process?
. How can the overall performance of a complete supply chain of processes be improved?
. How can individual process measures be aligned with organizational objectives?
. What overall approach should be taken to ensure that a process measurement system will genuinely improve the overall performance of an organisation?
The answers to these questions suggest that many organisations may need to reappraise the way they implement and use measurement systems.

How can the use of measurement assist in improving performance?
The usual justification for measurement tends to rely on clichés such as "What gets measured gets done" or "If you cannot measure it, you cannot manage it". The inference often drawn from such statements is that the more that is measured, the more that will be managed and the more that will be improved. However, as there is an almost infinite number of ways of measuring performance, many managers find that the sheer magnitude of organizational measures creates "paralysis by analysis" (Langley, 1995; Miller, 1990; Callaway, 1999). Before trying to identify all the possible factors that could be measured, we need to be clear that the main reason to implement measurement systems is to give the greatest opportunity of increasing the overall effectiveness of the business processes. Measurement systems that are not contributing to an overall improvement in performance need to be urgently reassessed. In order to reassess the measurement activity, we need to identify exactly how measuring performance can lead to an overall improvement in the effectiveness of a business process. Only then will we be able to identify the minimum set of measures of process performance that will yield the greatest return on the investment of implementing and maintaining the measurement systems. To achieve this, we require an understanding of both the mechanical and the motivational aspects of measurement. From the mechanical point of view, measurement can assist in managing performance when it is part of a control system (Fowler, 1999). A control system has four main elements: sensing, assessing, selecting and acting. A room thermostat is an example of a feedback control system (Nanni et al., 1990).
In effect, it senses the room temperature, assesses it by comparison with the pre-set temperature, selects and sends the appropriate signal to the heating/air conditioning system, which then acts to remedy any deficiency in the room temperature. In the same way, a measurement system can be used to “sense” a current level of performance. However, it is only possible to assess the meaning of this measurement by comparison with another value (Curtis, 1994). For example, taking a person’s temperature is a waste of time if you have no idea about what the normal temperature should be. Likewise, this assessment is only going to be of great value if you can select and act on a strategy that can take steps to remedy any problem indicated by an abnormal temperature. This means that Control Systems implicitly include “Rules” that link the assessing, selecting and acting components of control. For example, the room thermostat acts as if it was following two rules. The first might be “if the sensed temperature of the room is more than half a degree below the setting, turn the heating on”. The second rule would be “if the sensed temperature of the room is more than half a degree above the setting, turn the heating off”. The thermostat control system is not concerned with the almost infinite number of different events that might cause the room temperature to change. It simply focuses on the issue of keeping the room temperature within a small variance of the pre-set level. Consequently, in order for a measurement system to assist in controlling performance, it needs to be part of a complete control mechanism (Boland and Fowler, 2000). That mechanism should have, at the very least, a rule that defines where the comparison value can be identified, how to assess the information, when to act



and what type of action is required. If the process of measuring is not part of an effective control mechanism, the cost incurred in measuring is highly likely to be wasted and could thus potentially decrease the level of performance. It is important to emphasise the word "effective" here. In theory, virtually any performance measure might be used as part of a control. However, if a measure is never actually used effectively to implement actions that control or improve performance, then no matter how useful it might potentially be, it is still incurring an unnecessary cost. It cannot seriously be considered effective management to rely on an implicit rule of "Someone will start to complain if they are not happy with the measurement, and I will decide what to do when that happens".
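The thermostat description above can be made concrete. The sketch below is an illustrative assumption, not part of the original text: it implements the sense-assess-select-act loop with the two explicit rules already described, using a half-degree deadband and invented temperature dynamics.

```python
# Minimal sketch of a feedback control loop (hypothetical parameters):
# sense a value, assess it against a setpoint, select an action via two
# explicit rules, and act.

SETPOINT = 20.0   # degrees C
DEADBAND = 0.5    # half a degree either side of the setpoint

def thermostat_step(sensed_temp, heating_on):
    """Assess the sensed value against the setpoint and select an action."""
    if sensed_temp < SETPOINT - DEADBAND:
        return True    # rule 1: more than half a degree below -> heating on
    if sensed_temp > SETPOINT + DEADBAND:
        return False   # rule 2: more than half a degree above -> heating off
    return heating_on  # within the deadband: no corrective action

# Act: simulate a cooling room that the heater warms (rates are invented).
temp, heating = 18.0, False
for _ in range(10):
    heating = thermostat_step(temp, heating)
    temp += 0.8 if heating else -0.3
```

The point carries over directly: a performance measure only controls anything if, like `thermostat_step`, it is wired to a rule that triggers action whenever a deficiency is sensed.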

How does measurement affect human behaviour and motivation?
Apart from the pure mechanics of feedback control, we need to take into account the effect of such systems on human behaviour because, while in some situations they can have a beneficial effect on performance, in others they can actually have an adverse effect (Kohn, 1993). The reason for this is that the human body itself is a massive set of feedback control systems (Powers, 1998). Virtually every internal bodily function is controlled by natural control systems, as is our external behaviour. Measurement can interact with our natural behavioural control processes through the mechanism of the "perception of control" (Powers, 1973). When a person assesses the difference between two values of a measure, and the deficiency is perceived as important and in need of urgent action, the person will be motivated to act: if the measure displayed on your petrol gauge is close to the empty mark, you will urgently try to find a petrol station. What is important here is to understand that for a person to act as a complete feedback control system, the same person has to be sensing, assessing, selecting and acting. Every time these processes are separated, more barriers and inefficiencies are designed into the system. These barriers are not just about communications; they can have considerable emotional effects as well. The quality movement demonstrated this over four decades ago (Deming, 1986). Previously, the typical arrangement in manufacturing was to have operators operating the equipment and inspectors inspecting the products. In effect, the action of adjusting the equipment to keep the products within specification was separated from the assessment aspect of the control loop.
This typically led to considerable resentment and conflict between inspectors and operators, as well as creating high levels of faulty components, because there was little ownership of the quality problem by the people who could affect it most. When the operators were trained to carry out the inspection themselves, the loop was closed: quality, motivation and ownership all increased. This demonstrates why the phrase "What gets measured gets done" has some validity, but only when it is extended to "What gets measured gets done by the person doing the measuring". When there is this type of closed loop, it can create what is termed intrinsic motivation to take control and eliminate the perceived deficiency. Measurement control systems can also be used to create extrinsic motivation, by connecting a reward or punishment to a measurable level of performance. However, as will be demonstrated, it is important to understand the difference between intrinsic

and extrinsic motivations, as they can create very different types of behaviour (Herzberg, 1968). There is also another, potentially even more important, way in which control systems can affect the behaviour of people and the way they organise. This understanding has come from the developing science of complexity. For some time, complexity theory has been able to demonstrate that the incredibly complex, self-organising behaviours observed in nature are "emergent" behaviours (Waldrop, 1992). These complex behaviours can emerge from the interaction of a very small set of rule/control systems. For example, Reynolds (1987) showed that the behaviour of a flock of birds or a shoal of fish could be very realistically simulated on a computer screen by "Boids", each Boid representing an individual creature whose behaviour emerges from the application of just three such rule/control systems:
. steer towards the average heading of local flock mates;
. steer to avoid crowding local flock mates; and
. steer to move towards the average position of local flock mates.
In order to implement such rules there have to be three separate feedback control systems, each capable of determining the deficiency between desired steer and actual steer, and then taking the necessary corrective action. Although the behaviour of Boids may not seem relevant to human behaviour, the point here is that all complex living organisation, including human organisation, is apparently created by similar principles (Lewin and Regine, 1999). In fact, these three rules deal with three common issues:
. direction towards the moving goal;
. obstacle avoidance; and
. relationship with others.
In the Boid example, these rules were directed at the local environment of other flock mates. However, it is possible to create complex behaviour in the flock as a whole by adding just two additional global rules: directing towards an external goal whilst avoiding external obstacles.
The rule for the external goal is hierarchically higher (Simon, 1962), but applied with less weighting than the local rules. This understanding has profound implications for organisations in general (Eisenhardt and Sull, 2001) and process performance measurement in particular. It suggests that, in principle, the most effective forms of organisation, capable of dealing with unforeseen events, are most likely to occur when the group of people involved in a process are monitoring a small number of measures that are critical to the success of the process (Parnaby, 1994; Longenecker et al., 1994).
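Reynolds' three rules are easy to state in code. The sketch below is a simplified, hypothetical one-dimensional version (the weightings and crowding distance are invented, and real Boids move in two or three dimensions): each Boid's motion emerges from three small feedback corrections, one per rule.

```python
# Hedged sketch of Reynolds-style flocking in one dimension: each Boid
# applies three local rule/control systems (alignment, separation,
# cohesion). All weights are hypothetical.

def step(boids):
    """boids: list of (position, velocity) pairs; returns the next state."""
    new = []
    for i, (pos, vel) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        avg_vel = sum(v for _, v in others) / len(others)
        avg_pos = sum(p for p, _ in others) / len(others)
        steer = 0.0
        steer += 0.05 * (avg_vel - vel)          # rule 1: align with flock heading
        for p, _ in others:                      # rule 2: avoid crowding
            if abs(p - pos) < 1.0:
                steer -= 0.1 * (p - pos)
        steer += 0.01 * (avg_pos - pos)          # rule 3: move toward flock centre
        vel += steer
        new.append((pos + vel, vel))
    return new

flock = [(0.0, 1.0), (5.0, -1.0), (10.0, 0.5)]
for _ in range(50):
    flock = step(flock)
```

No Boid is told what the flock should do; the coordinated behaviour emerges from each one correcting three locally sensed deficiencies, which is the paper's point about small sets of rule/control systems.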

When are measurement systems likely to create a deterioration in performance?
Simply because a control system is usually an essential requirement for improving process performance, it should not be assumed that every control system will have an improving effect. Feedback control can easily produce chaotic systems (Wilding, 1998), and there are probably more reasons for such control systems to decrease



efficiency than to increase it (Neave, 1990). These reasons often fall into three main categories: inappropriate rules, imbalance and variance. The most common type of measurement-system problem, which can actually encourage a deterioration in relative performance, is caused by inappropriate rules. This usually occurs because of a complete misunderstanding of how measurement is used in feedback control. Although a room thermostat is a control system that keeps the room temperature relatively constant, the error-correcting action (turning the heating system on or off) is only triggered if a deficiency from the norm is detected. If an error is not sensed, no action is initiated. A measurement system is likely to impede the process of improving performance if it is set up to suggest that there is no problem in the first place (Luther, 1992). For example, consider a parcel delivery organisation measuring its performance on delivering parcels within 24 hours. A performance measure of 94 per cent on-time delivery may seem quite reasonable, and will not motivate anyone in the organisation to improve performance. Yet, if a million deliveries were made each year, this would mean 60,000 missed deliveries each year. This is potentially 60,000 unhappy customers, who may decide to try a different supplier next time. Measuring the 94 per cent success rate of on-time delivery performance is likely to maintain the status quo; measuring the deficiency, the 60,000 errors, has far more potential to create a feedback control system that will motivate the behaviour needed to improve the system. Imbalance in the control systems is equally likely to cause decreases in overall process performance (Fry and Cox, 1989; Estes, 1996). An example of this can be demonstrated using our previous example of improving manufacturing quality by ensuring that the operators were inspecting the quality of the products.
Clearly, this would only focus them on improving quality, and would have no particular effect on improving the throughput of the process. In this sense, it is an unbalanced control system: it is equivalent to the Boids having only one rule, and it would not create the overall behaviour required. However, inappropriate attempts to balance the system can also lead to further deterioration in overall performance. For example, imagine the likely outcome if the productivity rate was carefully monitored and, to extrinsically motivate the operators, managers set up a productivity bonus scheme based on the number of components produced. It really should not surprise anyone if quality levels were suddenly to decrease and independent inspectors had to be reinstated. The final, most overlooked and in many ways most difficult category of factors likely to decrease overall levels of performance is variance (Joiner, 1994). Even if the average level of a particular aspect of performance remains unchanged, individual values of the performance measurement will vary around that average. Thus, a particularly good, or a particularly poor, level of performance for a particular period may be no more than the outcome of the natural variance of the system. It does not necessarily give any indication of a change in the underlying capability of the system to perform at a certain level (Deming, 1986). The problem here is that feedback control systems may well inappropriately assess such a measurement as meaning that there is a requirement for remedial action. In such situations, that action, which incurs additional costs, is also likely to be inappropriate and could well cause additional problems in the future.

This situation was most graphically demonstrated by W. Edwards Deming (Aguayo, 1991), the American professor who was one of the leading influences in bringing about the Japanese quality revolution. He designed the red-bead experiment to demonstrate to an audience how easy it is to act inappropriately on "factual" information. The experiment consisted of volunteers from the audience acting as workers who assembled products from a number of components, and it recreated a not untypical set of rules used by managers to reduce error rates: workers with the lowest defect rates were praised and congratulated, whilst those with high defect rates were retrained, reprimanded or dismissed. The workers with the lowest error rates were then paid double-rate overtime to cover the shortfall in production. However, as the experiment clearly demonstrated, the variation in the level of defects (components that included a red bead) was in no way related to the competence of the volunteers. It was wholly caused by the random variation in the number of red (as opposed to white) beads supplied to each worker. The actions of the manager appeared to be necessary to keep the process under control when, in fact, they were ensuring the deterioration of process performance, because the real problem was further up the supply chain but was being masked by statistical variation. Variance is present in virtually every measure of performance (Wheeler, 1993). Yet, with the exception of the statistical process control quality approaches (Neave, 1990), including the Six Sigma approach (Pande et al., 2000), very few established measurement systems even consider the problems caused by variance.
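The red-bead effect is straightforward to reproduce. In the sketch below (hypothetical parameters: six workers, 50 draws each, a 20 per cent red-bead mix), every worker is identical by construction, yet defect counts still differ from day to day, so any ranking of "best" and "worst" workers is pure noise.

```python
# Sketch of Deming's red-bead experiment with invented parameters: each
# "worker" draws the same number of beads from the same mix, so any
# difference in defect counts is random variation, not competence.
import random

random.seed(1)  # fixed seed so the run is repeatable

def day_of_work(workers=6, draws=50, red_fraction=0.2):
    """Return the defect (red-bead) count for each equally competent worker."""
    return {f"worker {i}": sum(random.random() < red_fraction for _ in range(draws))
            for i in range(1, workers + 1)}

for day in range(1, 4):
    counts = day_of_work()
    best = min(counts, key=counts.get)
    worst = max(counts, key=counts.get)
    print(f"day {day}: best={best} ({counts[best]}), worst={worst} ({counts[worst]})")
```

A manager rewarding the day's "best" worker and reprimanding the "worst" is reacting to the random draw, exactly the trap the experiment was designed to expose.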

How can a minimum set of measures be identified for an individual process?
Too many, too few or inappropriate process performance measures can easily create a deterioration in overall performance. Simply identifying everything that can be measured gives no indication as to whether or not the complete set of critical system measures has been identified. The overall performance of a process needs, at the very least, to take into account both the capability of the process to provide the predicted level of service and the cost of providing that service. Thus, a more effective approach is to try to identify the minimum set of measures that would identify whether the overall performance of a process was unacceptable. The exact value of each measure that should trigger corrective action should then be defined. In other words, the aim is to identify when the process will NOT be delivering the desired level of service and efficiency. It may seem odd to focus on failure in this way; however, the whole point of control is often to achieve success by ensuring that failure is avoided. Such an approach can be used at both the micro and the macro level. For example, consider General Electric as a macro process. As early as 1986, Jack Welch had set up four parallel rule/control systems that were to be used to consistently improve the performance of GE. The global rules of performance (Tichy and Sherman, 2001) were:
. market leadership: be number one or number two in the market;
. profitability: well above average real returns on investments;
. competitive advantage: distinct and not easily matched; and
. leverage: must focus on GE strengths (large-scale, complex pursuits that require massive capital investment).

Few observers have realised that these are comparable to the set of Boid rule/control systems. Many consider them simply as strategies or ideal goals. In fact, they were part of the control systems implemented to deal with anything that failed to meet these measurable criteria: there was a contingency process for those parts of GE that failed to meet the minimum measures of performance, and the options within it were fix, sell or close. In this way, the four global rules provided a classic minimum set of performance control systems capable of driving the organizational changes that gave GE the desired continual improvement in performance. Identifying the critical-to-failure measures is a powerful way of identifying the minimum set of necessary control systems (Harry and Schroeder, 2000). However, the GE example also highlights another powerful approach to improving performance. Imagine every process as having three different aspects. The first is the process in its desired future state, operating at the highest levels of overall performance. The second is the process in its current state, with all its usual day-to-day operating activities and problems. The final aspect is the process of changing the current state into the desired future state. Now consider the three aspects as three separate processes: the future and current processes, joined by the change process. This trio of processes is implicit in GE's rules. The rules state the minimum requirements for the organisation in the future, while the current operational process is focused on supplying the customer today. However, by conceptually (but not necessarily functionally) separating the improvement process from the operational process, it also becomes possible to measure the performance of the improvement process itself.
For example, consider the delivery organisation that was achieving a 60,000 per million error-rate. This is a measure of the capability of the operational system, and in itself is of limited value. What would be of far more interest would be to know the capability of the improvement process. If that information were available, it would be possible to predict how much it would cost and how long it would take to halve the failure rate (Joiner, 1994). In a rapidly changing world, with ever-increasing competition, the effectiveness of the improvement processes is almost certainly more critical to long-term survival than the current level of performance of the operational processes. Ensuring that every critical operational process conceptually has a sister improvement/change process does not mean that different functions are responsible for the different processes. In fact, where it is appropriate for the same group to be involved in both processes, it can create a number of additional benefits. This is because team spirit and levels of motivation to solve problems are most likely to increase in different and challenging situations, such as a perceived crisis or outward-bound/team-building courses, which are completely different from the normal, daily, operational activities. In this type of situation, groups can progressively learn to become more effective at problem solving and work more as a team. Psychologically, it is often far easier to build up team spirit (Katzenbach and Smith, 1998) by making a clear distinction between the operational and change processes. The teams can then use performance information about the operational process

(Davenport and Beers, 1995), to improve their problem-solving capabilities within the context of a distinct improvement process, even though the same group is involved in both processes (Luther, 1992).

How can the overall performance of a complete supply chain of processes be improved?
The identification of an independent but related improvement process has another, potentially even more powerful, benefit, because it is critical to optimising the overall performance of a complete supply chain of processes. The supply chain of processes can be considered as defining the "horizontal" flow of goods and services from the external supplier processes, through the internal processes, to the external customer processes. One of the consistent problems with focusing managers and staff on improving the performance of their local processes is that the local measurement systems are often in conflict with improving overall performance (Fry and Cox, 1989). This becomes particularly obvious when measures have been chosen on the basis of whatever is easiest to measure and then used as part of a reward system. A classic example is the stores manager whose bonus was set on reducing the cost of stock holding: within no time at all, he had brought the organisation to a standstill by not ordering any new stock. A traditional approach to avoiding this is to set up service level agreements (SLAs) between internal business processes. Where these include performance levels, they are usually the outcome of a compromise between the level of service that the internal customer desires and the level of service to which the manager of the internal supply process is prepared to commit. Typically, this leaves a situation where neither party is happy.
Thus, if an internal delivery process was currently operating at 94 per cent, and the customer process was demanding 99 per cent, the managers might compromise and produce an SLA with a level of service of 96 per cent on-time delivery to the internal customer process. The alternative, however, is for the internal customer initially to accept that the service is currently operating at a level of 60,000 missed deliveries per million. The SLA can then be structured around the improvement process, identifying the strategy and timescales for halving the error rate. Using this approach, it is possible to work back from the head of the supply chain, identifying those few performance measures that are critical to failure at each interface, and to agree a strategy that will progressively improve the overall performance of the whole chain.

How can individual process measures be aligned with organizational objectives?
Strategically, it is not unusual for high-level measures for organizational objectives to be identified and then "dis-aggregated" and cascaded down through the management structure. However, this procedure often attempts to dis-aggregate measures, such as customer satisfaction, which have never been aggregated from the critical front-line supply chain of processes. This leads to the usual "what can be measured at this level" approach to performance. In other words, by default, the strategy of every separate part of the organisation is to achieve whatever is most easily measurable, rather than to supply the desired level of service in the most effective manner. This can cause massive misalignment, particularly if it is connected to a reward system. It also leads



to the processes producing the primary goods or services being heavily monitored, with little control over the internal services that are often considered "too difficult" to measure, but which may well be adversely affecting the performance of the main processes. In order to ensure that the process measurement systems are aligned with organizational objectives, this procedure has to be reversed. The control measures are initially identified by working back from the customer, through the supply chains (Harry and Schroeder, 2000). The performance of internal services that are critical to operational performance should be treated in exactly the same way as any other process. Only after the supply chain measures have been identified can they be progressively aggregated "upwards" through a hierarchy of processes. The desired supply-chain process benchmarks can then be identified once the direct upward chain of measures has been established. For example, consider a situation where the internal delivery process already discussed was a process within a company with organizational rules similar to General Electric's. The performance of the delivery process would need to be aligned with those aims. It might well be estimated that, to maintain market leadership, it would be necessary to halve the level of delivery failures within the next six months. It might also be identified that, in order to keep within the gross profit rules, the cost per consignment would need to be reduced by 15 per cent within the next 12 months. Various strategies for the improvement process could be developed to identify if and how this could be achieved (Kaplan and Norton, 2000). In this way, the local set of performance measures for the improvement process (and the subsequent emergent organisation) would be fully aligned with the organizational aims.
In effect, the supply chain of the improvement/change processes is the strategy for transforming the organisation from its current capability to the level of performance that is required to ensure future organizational success.

What overall approach should be taken to improve the overall performance of an organisation?
The various principles described above demonstrate that improved performance is not a natural outcome of initiating measurement systems. In order to be useful, measurement systems have to be an integral part of a set of effective control systems that have been carefully and specifically designed to improve the overall performance. Such an approach would include methods that:
. identify the main set of organizational rules and criteria that are critical to failure and fundamental to the competitive success of the organisation;
. define the horizontal supply-chain of operational processes, from the customers, through the internal processes, back to the external suppliers;
. identify the interfaces that need to be controlled for each process;
. work back from the customer processes and identify the minimum set of critical failure indicators at each interface. Typically, these will include at least the Quality of Service and Unit Cost (or profitability) measures, but may also include other critical factors such as on-time delivery, cycle time, etc. However, they will only be initially included if control was deemed critical to the supply performance;
. create the hierarchy of processes that will allow the supply chain performance measures to be progressively aggregated through the vertical hierarchy;
. identify critical performance mismatches both through the operational supply chain and with organizational goals;
. create improvement/change processes for the operational processes that need to achieve higher levels of performance, and identify the control rules necessary to improve the effectiveness of those change processes;
. ensure the improvement aims are not set as an arbitrary percentage improvement on current performance standards, but are identified by working back from external benchmarks and organizational objectives;
. account for the motivational aspects of measurement when designing the measurement systems. Identifying "who" is measuring and taking action is just as important as defining what is being measured; and
. validate every process measure to ensure that it is not adversely affecting the performance with problems associated with inappropriate rules, imbalance, or variance.
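The backward pass through the chain that these steps describe can be sketched as a simple data structure. This is an illustrative sketch only; the process names, candidate measures and field names are our assumptions, not the author's:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """One link in the horizontal supply chain, listed from the customer back."""
    name: str
    # Candidate measures at this process's interface, flagged True when
    # failure on that measure is deemed critical to supply performance.
    candidate_measures: dict = field(default_factory=dict)

def critical_measures(chain_from_customer: list) -> dict:
    """Work back from the customer, keeping only the minimum set of
    critical failure indicators at each interface."""
    return {p.name: [m for m, critical in p.candidate_measures.items() if critical]
            for p in chain_from_customer}

chain = [
    Process("customer delivery", {"on-time delivery": True, "unit cost": True,
                                  "forklift utilisation": False}),
    Process("assembly", {"quality of service": True, "cycle time": True}),
    Process("external supplier", {"on-time delivery": True, "unit cost": True}),
]
print(critical_measures(chain))
```

The point of the sketch is the direction of travel: measures that are merely easy to collect ("forklift utilisation") are filtered out because the selection starts from the customer interface, not from what each department can most easily count.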

When viewed individually, none of these steps would be considered either unique or revolutionary. Each one of them is utilised in one or more of the various, widely implemented approaches to performance improvement. These approaches include Six Sigma (Harry and Schroeder, 2000), Strategy Maps (Kaplan and Norton, 2000), Quality Function Deployment (Akao, 1990), Benchmarking and gap analysis (Balm, 1996), Moments of Truth (Carlzon, 1987), Service Blueprinting (Shostack, 1984), Process Redesign and Goal Setting (Longenecker et al., 1994), Balanced Scorecards (Kaplan and Norton, 1996) and Measuring Business Performance (Neely, 1998), to name but a few. However, the overall approach described has not been created by attempting to aggregate a variety of traditional approaches. It has been developed from a theoretical analysis of what is required to ensure that a measurement system can be used in the most effective way for improving performance. This theoretical basis acts as a foundation that automatically allows the integration of the most beneficial aspects of the whole range of different approaches to improving performance into a single, unified system. However, when working through the various stages, it is essential not to underestimate the importance of working sequentially backwards from the ideal future and customer requirements. The process of working back from the future is critical to the alignment process (Davis, 1987; Ackoff, 1981; Geneen, 1984). Implementing all the same processes, but in a haphazard or reverse order, will not necessarily align the performance measurement systems to assist in moving an organisation towards a clearly defined future state. To many, it may seem that applying such a method in order to align an organisation would be too costly and time consuming.
However, it only appears that way because organisations never even attempt to measure the costs that would be saved by eliminating the wasted effort caused by organizational misalignment.

Conclusions
Complexity theory has allowed us to create a unified approach to improving performance, which relates the psychology of behaviour to the practices that are often
observed in the most successful organisations. Given that the average life of a commercial organisation is currently less than 25 years and rapidly falling, it seems likely that failure to align a whole organisation to the improvement of its competitive performance could prove to be fatal.

References
Ackoff, R.L. (1981), Creating the Corporate Future: Plan or be Planned for, Wiley, New York, NY.
Aguayo, R. (1991), Dr Deming: The American Who Taught the Japanese about Quality, Fireside Press, New York, NY.
Akao, Y. (1990), Quality Function Deployment: Integrating Customer Requirements into Product Design, Productivity Press, Cambridge, MA.
Balm, G.J. (1996), "Benchmarking and gap analysis: what is the next milestone?", Benchmarking for Quality Management & Technology, Vol. 3 No. 4, pp. 28-33.
Boland, T. and Fowler, A. (2000), "A systems perspective of performance management in public sector organisations", The International Journal of Public Sector Management, Vol. 13 No. 5, pp. 417-46.
Callaway, R.L. (1999), The Realities of Management: A View from the Trenches, Quorum Books, Westport, CT.
Carlzon, J. (1987), Moments of Truth, Ballinger, Cambridge, MA.
Curtis, K. (1994), From Management Goal Setting to Organizational Results: Transforming Strategies into Action, Quorum Books, Westport, CT.
Davenport, T.H. and Beers, M.C. (1995), "Managing information about processes", Journal of Management Information Systems, Vol. 12 No. 1, pp. 57-80.
Davis, S.M. (1987), Future Perfect, 2nd revised ed., Addison-Wesley, Reading, MA.
Deming, W.E. (1986), Out of the Crisis, MIT Center for Advanced Engineering Study, Cambridge, MA.
Eisenhardt, K.M. and Sull, D.N. (2001), "Strategy as simple rules", Harvard Business Review, Vol. 79 No. 1, pp. 107-16.
Estes, R.W. (1996), Tyranny of the Bottom Line: Why Corporations Make Good People Do Bad Things, Berrett-Koehler, San Francisco, CA.
Fowler, A. (1999), "Feedback and feedforward as systemic frameworks for operations control", International Journal of Operations & Production Management, Vol. 19 No. 2, pp. 182-204.
Fry, T.D. and Cox, J.F. (1989), "Manufacturing performance: local versus global measures", Production and Inventory Management Journal, Vol. 30 No. 2, pp. 52-6.
Geneen, H.S. (1984), Managing, Doubleday, New York, NY.
Harry, M. and Schroeder, R. (2000), Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, Doubleday, New York, NY.
Herzberg, F. (1968), "One more time: how do you motivate employees?", Harvard Business Review.
Joiner, B. (1994), Fourth Generation Management, McGraw-Hill, New York, NY.
Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston, MA.
Kaplan, R.S. and Norton, D.P. (2000), "Having trouble with your strategy? Then map it", Harvard Business Review.
Katzenbach, J.R. and Smith, D.K. (1998), The Wisdom of Teams, Harvard Business School Press, Boston, MA.
Kohn, A. (1993), "Why incentive plans cannot work", Harvard Business Review.
Langley, A. (1995), "Between 'paralysis by analysis' and 'extinction by instinct'", Sloan Management Review, Vol. 36 No. 3, pp. 63-76.
Lewin, R. and Regine, B. (1999), The Soul at Work: Unleashing the Power of Complexity Science for Business Success, Texere Publishing, New York, NY.
Longenecker, C.O., Scazzero, J.A. and Stansfield, T.T. (1994), "Quality improvement through team goal setting, feedback and problem solving", International Journal of Quality & Reliability Management, Vol. 11 No. 4, pp. 45-52.
Luther, D.B. (1992), "Advanced TQM: measurements, missteps, and progress through key result indicators at Corning", National Productivity Review.
Miller, D. (1990), The Icarus Paradox, HarperBusiness, New York, NY.
Nanni, A.J., Dixon, J.R. and Vollmann, T.E. (1990), "Strategic control and performance measurement – balancing financial and non-financial measures of performance", Journal of Cost Management, Vol. 4 No. 2, pp. 33-42.
Neave, H. (1990), The Deming Dimension, SPC Press, Knoxville, TN.
Neely, A.D. (1998), Measuring Business Performance, Economist Books, London.
Pande, P.S., Neuman, R.P. and Cavanagh, R.R. (2000), The Six Sigma Way: How GE, Motorola and Other Top Companies are Honing their Performance, McGraw-Hill, New York, NY.
Parnaby, J. (1994), "Business process systems engineering", International Journal of Technology Management, Vol. 9 Nos 3/4, pp. 497-508.
Powers, W.T. (1973), Behavior: The Control of Perception, Aldine, Chicago, IL.
Powers, W.T. (1998), Making Sense of Behavior: The Meaning of Control, Benchmark Publications, New Canaan, CT.
Reynolds, C. (1987), "Flocks, herds, and schools: a distributed behavioral model", Computer Graphics, Vol. 21 No. 4, pp. 25-34.
Shostack, G.L. (1984), "Designing services that deliver", Harvard Business Review, Vol. 62 No. 1, pp. 133-9.
Simon, H.A. (1962), "The architecture of complexity", Proceedings of the American Philosophical Society, Vol. 106 No. 6.
Tichy, N.M. and Sherman, S. (2001), Control Your Destiny, or Someone Else Will, HarperBusiness, New York, NY.
Waldrop, M.M. (1992), Complexity: The Emerging Science at the Edge of Order and Chaos, Viking Press, New York, NY.
Wheeler, D.J. (1993), Understanding Variation: The Key to Managing Chaos, SPC Press, Knoxville, TN.
Wilding, R.D. (1998), "Chaos theory: implications for supply chain management", The International Journal of Logistics Management, Vol. 9 No. 1, pp. 43-56.
Structure, speed and salience: performance measurement in the supply chain


Chris Morgan
Cranfield School of Management, Cranfield Centre for Logistics and Transport, Cranfield University, Cranfield, UK

Keywords: Performance management, Organizational analysis, Supply chain management, Information management, Performance measurement (quality)

Abstract: The supply chain presents many challenges for management and for the design of performance measurement systems. It is possibly one of the final structural areas of business in which significant savings are to be made, and it is becoming an increasingly important strategic tool as trade becomes global in perspective. To assess the problems faced in the supply chain, this paper begins with an overview of where we are in terms of the development of performance measurement theory. It then highlights two key issues that the supply chain raises in measuring performance in an intra-organisation scenario before identifying no fewer than nine preconditions necessary for effective and dynamic performance measurement within supply chains.

Introduction
Performance measurement is coming of age. So much research has been conducted in recent years into the nature and methodology of measuring performance in organisations that it is worthwhile reflecting on where we are now, how we got here and what the future performance measurement challenges will be within the environment of the supply chain. The paper will also identify the strategic challenges facing the management of supply chains in the future and go on to consider the impact of these challenges on the design and implementation of future performance measurement systems (PMS).

What is performance measurement and what should it do?
We use the concept and actuality of performance measurement in organisations as one of the key managerial tasks integrated within the wider activities of planning, organising activities, motivating our workpeople and controlling events. In this context, performance measurement is related to strategic intent, and the broad set of metrics used by managers to monitor and guide an organisation within acceptable and desirable parameters. However, "performance" implies predetermined parameters and "measurement" implies an ability to monitor events and activities in a meaningful way. But how do we determine what is desired and how it is measured? As we shall see, these questions have found a variety of answers as our understanding of organisations has deepened.

Business Process Management Journal, Vol. 10 No. 5, 2004, pp. 522-536. © Emerald Group Publishing Limited, 1463-7154. DOI 10.1108/14637150410559207

A brief history of performance measurement
The roots of modern organizational performance measurement lie in Venice in the fifteenth century, when the foundations of accounting practice were laid with the invention of double-entry bookkeeping. The principle of identifying profit and controlling cash flow was to dominate organizational performance measurement until the early 1900s, when William Durant, founder of General Motors, realised that profit was not simply the result of an accounting exercise, but the outcome of a cost stream that spread throughout the supply chain. Drucker (1995) identified this as a turning point in performance measurement and the forerunner of the modern Japanese Keiretsu. These early developments stimulated a number of strands of thinking about PMS that emerged as our understanding of organisations began to develop. Rooted in management accounting, the traditional concepts of performance measurement have been challenged by accounting professionals and academics alike. Johnson and Kaplan (1987), for example, drew several contemporary arguments together to demonstrate that traditional organizational performance measures either had lost, or were rapidly losing, relevance to modern organisations. They argued that the traditional performance measures were rooted in the industrial revolution and reflected a control system for radically different markets and organisations to those found in modern society. To continue to use these methods of measuring performance was to shackle modern management with an unwieldy set of tools that decreases managerial productivity because of: the time needed to gather, process and present information (often on relatively unimportant factors); a failure to provide accurate product costs because of the averaging process used to allocate costs and overheads; and finally, the encouragement of a short-term view of the business that is frequently distorted due to the mix of long- and short-term spending requirements.
Other more recent examples would include Oakland (1995), who identified the problems of PMS in the total quality management (TQM) context, Harrison (1992) in the just-in-time (JIT) context, Luscombe (1993) in the manufacturing resource planning (MRP II) context and Hammer and Stanton (1995) in the business process re-engineering (BPR) context. Each new development of the operational context threw up more questions about traditional approaches to performance measurement. What seems clear is that while historical information forms the basis of a PMS, it is difficult for managers to avoid reactive behaviour. The underlying reason for forming a PMS on historical information is that this data is usually gathered during normal trading activities. For example, the data gathered in most financial systems, when processed, forms the basis for performance reporting required and used by the banking, investment and shareholding fraternities. The value of this retrospective data for proactive management is limited and its use has been likened to "driving a car by looking in the rear view mirror". It does, however, provide a yardstick by which to judge how closely management has been able to manage to their declared strategy. It is also a consistent standard across many different organisations. Other historical data may be gathered in non-financial areas of the organisation and Table I is a sample of the kind of performance measures that may be found in many manufacturing, service and logistic environments. Leaving the financial measures in Table I aside (because arguably they are so entrenched that they would never be replaced), it is useful to examine the other performance measures to see how good they really are, in terms of helping management to adopt a proactive posture. In many cases, it must be said that the item measured will trigger a management response that will be too late in terms of either the



Table I. A selection of traditional historic performance measures

Financial: Creditor days; Debtor days; Dividend cover; Stock turnover; P/E ratio; Net asset turnover; R.O.C.E.; R.O.E.; Current ratio; Gross profit; R.O.O.A.; Return on sales; Sales/sq. m.; Gearing
Operations: Ops. lead time; Inventory; Stock turn; Set-up time; Labour utility; Machine utility; Work in progress; Employee turnover; Direct productivity; Indirect productivity; Supplier perform.; Variances; Process time; No. of accidents
Marketing: Market share; Orders on hand; Order lead time; No. of complaints; New product intro.; Repeat orders; Delivery perf.; Time to market; Warranty claims; Returns; Service visits; First pick per cent; First drop per cent; Transport utility
Quality: Percentage of rework; Percentage of rejects; Percentage of conformance; Percentage of scrap; Qual. admin. costs; Recall costs; Liability costs; Perf. penalties; Percentage of errors; Prevention costs; Qual. tr'g costs; Product testing; Perf. testing; Laboratory costs

internal or external customer. This, of course, raises two important questions: (1) what is a good performance measure? and (2) what system of performance measurement is most likely to produce conditions conducive to proactive management? Neely et al. (1995) identified no fewer than 22 qualifications defining performance measurement "goodness". They include: a clear link between the performance measure and the organisation strategy; being simple to understand; being able to be controlled by either the person doing the measurement or their close associates; being defined by the supplier and the consumer; providing timely and accurate feedback about realistic targets; being clearly defined and visible; being a part of a feedback loop and being presented in a clear and consistent format; presenting data in terms of trends rather than absolutes and in terms of information rather than opinion or raw data; being based on an agreed understanding of what is being measured; and, if possible, using data that is automatically gathered as a part of the process. Clearly the issue of "goodness" as applied to performance measures is far from simple and this explains, at least in part, why many organisations feel the need to invent their own non-financial performance measures. The second of these questions has evoked a series of alternative PMS. These alternative systems have tried to break the stranglehold of the traditional PMS with varying degrees of success. Goldratt (Goldratt and Cox, 1984), originally a physicist used to logical, physical systems of measurement, was unhappy with the imprecision of many concepts and words used in conventional cost accounting systems. The word "cost" in particular, he found, had many meanings that changed and overlapped – investment cost, operating cost, absorption cost, opportunity cost, variable cost, fixed cost and product cost.
Claiming that this kind of nomenclature only confused most managers, he suggested that the most productive approach was to concentrate on the process itself and use measures that concentrate on physical rather than financial items. This approach led to the theory of constraints (TOC) and the concept of managing the process constraints (or bottlenecks) effectively. By concentrating on increasing

throughput (defined as the value taken only at sale of a product or service), decreasing operating expenses (fixed costs) and decreasing investment (reducing inventories), he argued that profitability increases. Certainly throughput accounting (TA) is conceptually simpler than conventional management accounting in that it treats direct labour and variable overhead (conventionally defined) as an operating expense. Cooper and Kaplan (1987), like Goldratt, were also uneasy with some of the traditional approaches of management accounting to costing. They argued that indirect costs ought not to be treated as global quantities but needed to be analysed accurately and apportioned to the cost profile of those activities in an organisation that use the overhead. This overall approach is called activity based costing (ABC) and has attracted much interest. Maskell (1989), in particular, suggests that one of the strengths of ABC is that it allows for flexibility, complexity and continual improvement – features that are important within the context of BPR and other organizational change philosophies. These two approaches to developing new PMS are not by any means unique. Time based PMS (Mather, 1988; Stalk and Hout, 1990; Barker, 1993) concentrates on reducing total throughput time and reducing or eliminating non value-adding activities. Life cycle costing (Maskell, 1991) advances the argument that accountants can only understand costs if they “live” with products throughout the product life cycle. There are even different cultural approaches to costing. The Japanese approach (Schonberger, 1986) to PMS is to emphasize cost reduction rather than cost control and to use the target market price as the driver for the complete production system. While Goldratt, and his subsequent interpreters, concentrated on the underlying metric problems of management accounting, they still operate within what is a fundamentally unitary concept of management (i.e. 
a view that management can always determine the outcomes that result from their decisions). Once the need arises to define performance in other than readily measurable terms, and this is often the case in service environments, the magnitude of difficulty increases. Kaplan and Norton (1992) and subsequently Kaplan (1993) suggested that a PMS needs additional vectors that reflect the wider environment in which organisations exist. These were: customer-related measures that seek to define what a customer values from an organisation; internal global process measures that seek to measure the effectiveness of the organisation in meeting its financial objectives; learning measures to examine if an organisation can develop in order to create future value; and measures that test whether an organisation is creating value for its shareholders. Kaplan calls this approach to PMS the "Balanced Scorecard" and, if nothing else, it does represent a move towards a pluralistic view of the organisation (i.e. the view that managerial decisions will be modified by the environment in which their decisions are enacted). It is interesting to note that Kaplan and Norton (2000) have recently moved their ground slightly in that their original Balanced Scorecard gave no special weighting to any of the four factors. However, this most recent work acknowledges that a hierarchy within the Balanced Scorecard PMS may have to be applied – a point identified some time ago by Eccles and Pyburn (1992). The issue of measuring performance in subjective areas has always been something of a nettle-bed and has grown in importance as customer relations management (CRM) strategies have become embedded in broader corporate strategies. The real challenge in this area of performance measurement has been to find an approach that is flexible


enough to take customer and managerial attitudes into account, while at the same time producing usable results. A great deal of CRM is about matching expectations with reality. Research undertaken by Zeithaml et al. (1990), which can trace its intellectual roots to research on expectation analysis at Durham University in the 1960s, developed the SERVQUAL model. In this model, the key service dimensions of tangibles, reliability, responsiveness, competence, courtesy, credibility, security, access, communication and understanding of the customer are measured via structured questionnaires. The expectations of customers are compared with the perception of managers of the service provided and gaps are identified. If the gaps are positive, there is over-provision of service; if negative, under-provision of service. A zero gap is the service target. The SERVQUAL analysis, and its subsequent derivatives, does provide a flexible PMS for the subjective vectors of the Balanced Scorecard, but it requires time and resources that can be in excess of the capabilities of small and medium-sized enterprises. Taken as a whole, it is clear that the measurement of the softer side of performance is far more complex than many assume. To some extent the debate about which PMS is used is irrelevant. Whatever system is used (traditional, new, or yet-to-be-invented), the important issues are whether or not the PMS supports an organisation in its current activities in a consistent and reliable manner; whether or not it will retain validity with the passage of time; whether or not it provides management with a balance of information that is relevant to the organisation's activities and strategies; and finally, whether or not management use the information it provides in a proactive as well as a reactive way.

What should a good PMS look like?
Even to a casual reader, the previous part of this paper must emphasize the complex issues that performance measurement raises.
Performance measurement is one of the core elements of managerial activity and the choice of PMS is central to achieving corporate strategic targets. Because of this, one would expect managers to agonise over the design of their PMS, and yet there is little research evidence to say that this is so. The reverse is often the case, and in reality many managers are using PMS that were designed 10, 15 or 20 years ago – if they were designed at all! Can this be healthy? Clearly not! And this raises the question "how should a PMS be designed and how can it be kept up to date?" The real challenge in answering this question is to get the "big picture" and to recognise that there will never be a "one size fits all" solution. Having said this, Figure 1 is a framework that shows the conceptual and pragmatic issues that must be resolved in the design of a PMS. In Figure 1, the fundamental relationships are between the strategy, the supply chain, the value adding process, the distribution chain and the customer. These relationships define every organisation. Strategy is generated within the constraints defined by: operating standards (comparative or generic) that establish the basic qualifiers to compete; economic and market activities that establish the nature of the competitive advantage needed to succeed; shareholder and stakeholder expectations of financial performance; the broad environment that defines the limits of acceptable operations; and, of course, the capabilities of the organisation in terms of knowledge, skills, resources and people. Whatever strategy is defined within this cradle of influences should drive the operational activities and establish the targets for the PMS.

Figure 1. Performance measurement in the corporate context

Although the PMS is in fact a single composite system, it is relevant to consider it as having five facets. These facets are balance, structure, design, focus and targets. The balance of the system is usually derived from the application of the Balanced Scorecard or a similar range of measures. The structure is driven through knowledge of those issues that give an organisation its competitive advantage. The structure should also be heavily
influenced by operational input in order to ensure that the PMS does not become out of touch with operational issues and capability. The design of the performance measures should be driven on one hand by what the strategy defines as good in terms of corporate direction, and on the other hand by managers and operators within the system who have to make the measures work in a proactive and practical sense. An additional element of input into the design activities is the reflective data processed from the performance measuring activities that gives feedback on the wider organizational effects of the PMS. The focus element of performance measurement is related to the importance of the performance measurements taken. Conventionally this focus is set by strategy, but it is important to recognise that input from the operational measurement is equally important. The "real world" often has several levels of performance measurement going on simultaneously. Some of these other levels may be focused on transient problems, specific management initiatives or special projects. To have a competitive advantage is to use capabilities more effectively than competitors. It is, therefore, critical that at least one aspect of the PMS is focused on this issue, and it is this issue that provides the major focus for the operational aspects of the business. In addition to the strategic input into this aspect of the PMS, it is also necessary to reflect the operational system's actual ability. By doing this, it is possible to refine the competitive targets in real time and this provides a critical driver for improving the organisation's capabilities. Through this complicated overlay of interconnected performance facets, the basic questions of what is to be measured, and how, can be addressed. In Figure 1, the performance measurement review process is linked with all of the elements of the PMS.
This is a vital feedback system that ensures that individual performance measures are not only identified, but also adjusted to reflect the changing environment. Failure to provide this mechanism will inevitably result in the PMS becoming less and less relevant to the organisation's general and operational strategies – an observation we find repeatedly being made in the body of literature previously reviewed. Some future challenges for performance measurement that may currently be identified are those associated with knowledge management, virtual organisations, organizational interactions and organizational agility (or adaptability). Also, within the context of organizational interactions, a great deal has yet to be understood about the role of performance measurement in establishing and maintaining the integrated supply chain. Performance measuring activities keep the strategic aspects of an organisation on competitive or market targets, keep the operational aspects of the organisation on strategic targets, and should provide key data for the vital aspects of knowledge management, information for planning, control and progress monitoring. They should also change as the market and operational environment changes and, by being relevant, enable proactive management.

Strategic change and challenges in the supply chain
The greatest contribution that the concept of the "supply chain" has made is to: encourage managers to think outside the organisation box; to recognise the interdependencies that exist between and within organisations; and to recognise the financial and logistic implications of inter-company and inter-national trading. However, in getting to our current position, change has been a constant theme.

We tend to think of the supply chain and its optimisation as a modern concept, but in 1926 Henry Ford had a fully integrated supply chain with stock visibility throughout, and a balanced production cycle:

Our production cycle is about eighty-one hours from the mine to the finished machine in the freight car, or three days and nine hours instead of the fourteen days which we used to think was record breaking. Counting the storage of iron ore in winter and various other storages of part or equipment made necessary from time to time for one reason or another, our average production cycle will not exceed five days (Ford and Crowther, 1926).

Of course in 1926, cars were simpler, choice was limited and it was much easier for Henry Ford to impose his vision on his supply chain because of the times in which he lived and the power he could wield. Nonetheless, Henry Ford achieved an integrated and controlled supply chain that would still be envied by many organisations, and his three-to-four-day car production cycle is only now becoming a serious proposition once again, largely thanks to modern technology and systems.

Between the 1920s and today, there have been four major phases in supply chain thinking. The period from 1930 to 1970 was dominated by the “Just-in-Case” philosophy. Inventory was ordered in bulk, and often held because of a sporadic take-off pattern. Safety stocks were necessary because of poorly developed planning and forecasting systems, unreliable technology and a truculent workforce. The customer had a low profile in management thinking.

The 1970s and 1980s saw the development of Manufacturing Resource Planning (MRP II), which, although still an inventory push system, had at its heart a more structured approach to planning and forecasting. Better planning meant that safety stocks could be reduced, and the issue of “lead time” began to assume a much higher significance. It would be fair to argue that this period saw the beginnings of many of the supply chain concepts, although ideas were fragmented and research limited in scope. As the supply chain concept developed, so did other philosophies, not least the quality revolution and the gradual emergence of the customer as a primary business focus.

The crossover between the 1980s and 1990s was dominated by the “Just-in-Time” (JiT) philosophy. JiT turned the concept of the supply chain on its head by advocating the “inventory pull system” rather than the traditional “inventory push system” (Suzaki, 1987).
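The difference between push and pull can be made concrete with a toy simulation (all figures here, such as the demand range, batch size and starting stock, are hypothetical and not from the text): a push rule makes a fixed batch every period regardless of consumption, while a pull rule replenishes only what was actually consumed, so pull inventory cannot drift upwards.

```python
import random

def simulate(pull, periods=50, batch=12, seed=1):
    """End-of-period stock under a pull (replace consumption) or
    push (fixed batch to forecast) replenishment rule."""
    random.seed(seed)
    stock, history = 20, []
    for _ in range(periods):
        demand = random.randint(6, 14)        # hypothetical demand, mean 10
        consumed = min(stock, demand)
        stock -= consumed
        stock += consumed if pull else batch  # pull replaces; push pushes a batch
        history.append(stock)
    return history

# Push overproduces whenever the batch exceeds demand; pull stays bounded.
print(max(simulate(pull=False)), max(simulate(pull=True)))
```

With a batch of 12 against a mean demand of 10, the push rule accumulates roughly two units of stock per period, while the pull rule never exceeds its starting inventory.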
Within the JiT principles are the concepts of: economical low batch sizes; flexible manufacturing technology; rapid changeover of product lines; simple control systems; short lead times; 100 per cent quality; and the supplier being an integrated part of the production system. Cooper (1995) suggested that JiT creates “lean” organisations that have the potential to respond rapidly to customers’ needs. However, lean organisations do not necessarily have to be JiT, and many organisations that were unable to create a JiT environment still managed to apply many of the lean supply/production techniques. The lean regime focused on the elimination of waste, cost transparency in the supply chain, labour flexibility, queue management and continuous improvement. Comparing JiT to the lean environment, it is possible to see many shared targets, and of course both systems were firmly focused on the customer.

Towards the end of the 1990s, as manufacturing systems had been optimised, attention refocused on the supply chain. This was partly in response to a growing realisation that new configurations of supply chain were going to be required for evolving market places (e.g. “e” businesses); partly in response to supply chain

Performance measurement in the supply chain 529

BPMJ 10,5


rationalisation, restructuring and globalisation; and partly in response to an increasing need to cope with rapidly changing demand patterns, often associated with high volume retail and manufacturing supply chains. This combination of factors led to the concept of the “agile” or “rapidly re-configurable” supply chain and a realisation that, in larger organisations, competition is often defined as “supply chain vs supply chain” (Christopher, 1998). Agility is likely to be a significant strategic driver in logistical systems development well into the first decade of the 2000s. The main characteristics of an agile supply chain are: quick customer response; manufacturing systems that can be customised; supply chain flexibility; scheduling synchronised with final demand; controlled supply processes; integration of capabilities with trading partners; use of “e” trading; concurrent product development; and “pipeline” cost improvements (Hughes et al., 1998). These “desirable” characteristics must be measured if they are to be implemented and controlled. Are the measurement systems currently used in supply chains up to the task?

Focusing PMS
At this point, and before discussing the future, it is worthwhile injecting a dose of reality. While the developments described above can be observed in many large organisations, it is equally true that many small and medium sized enterprises (SMEs) have not progressed very far down the road of developing their supply chain. Very small enterprises often do not have the time, resources or information to undertake the analyses required for optimisation activities. Medium sized enterprises may have the information as their management systems develop, but can lack the skills to interpret and apply it. These are real problems, and they are global problems.
In trying to develop future supply chains, these small and medium sized organisations will slow down progress and may introduce an additional task for larger organisations: that of small supplier development. A good example of this can be found when the large food retailers tried to introduce electronic data interchange (EDI) as a way of reducing supply costs. Many small suppliers did not have the skills or the equipment to make the systems work effectively, and the large food retailers found that they had to supply both, often free, in order to make their own change and supply chain strategies a reality.

In pointing out the shortfalls of SMEs, it is also worth stating that many larger organisations are a long way from delivering the “perfect order” (i.e. correct order entry; correctly formatted EDI and transaction codes; all items ordered being available; a ship date that enables on-time delivery; orders picked correctly; paperwork complete and accurate; timely delivery; shipment not damaged; invoice correct; accurate surcharges; no customer deductions; and no errors in payment processing). If one considers this to be one set of transactions in the supply chain, it is clear that the probability of creating a “perfect supply chain” of “perfect orders” with 100 per cent operational efficiency and customer satisfaction is still quite low. For example, if a supply chain were five organisations deep and each organisation performed at 95 per cent (an ambitious target for some), the overall supply chain performance would be 0.95^5, or 77 per cent efficiency. From a metrics point of view, the “perfect order” concept is the result of wider strategic aspirations and a reflection of some of the issues described in Figure 1.
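The compounding effect can be checked in a few lines of Python. The 95 per cent rate and five-tier depth are the paper’s own illustrative numbers; the assumption that tiers fail independently is mine.

```python
def chain_efficiency(per_org_rate, depth):
    """Probability that an order passes every tier of a supply chain
    without error, assuming each tier fails independently."""
    return per_org_rate ** depth

# Five organisations deep, each achieving a 95 per cent "perfect order" rate.
print(round(chain_efficiency(0.95, 5), 2))  # -> 0.77
```

Even modest per-organisation error rates compound quickly with chain depth, which is why "perfect order" performance in deep chains is so hard to achieve.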

Linking organisations in a supply chain effectively – one of the key targets in achieving agility – also raises other metrics challenges. Bowersox et al. (1999) identify these as: end-customer level sales; inventory dwell time; cash-to-cash cycle time; customer satisfaction; and benchmarking.

If end-customer level sales are visible throughout the supply chain, each element of the chain can synchronise the velocity of inventory. This should allow the complete logistical process to be compressed, allowing cost reduction and higher service levels. Inventory dwell time, defined as the ratio of idle days to active days in the supply chain, is an indicator of supply chain efficiency. Clearly it will not be possible to eliminate all idle time, because of the need for inspection or for buffering to cope with uncertainty in demand. However, if dwell time is reduced or eliminated, cost will be reduced and utilisation of existing facilities improved.

Cash-to-cash cycle time, or the time between buying inventory and realising revenue from sales, is important for cash flow dynamics within an organisation. Buying with 30 days’ credit and selling with 60 days’ credit is bad news; buying with 60 days’ credit and selling with 30 days’ credit is good news. The implications of this in extended or international supply chains can be very significant, both in terms of overall cash velocity within the supply chain and for issues such as tax and intra-border transfers. Coordinating cash-to-cash cycle times is likely to be a significant challenge in the creation of future integrated supply chains.

Customer satisfaction has already been discussed and the problems associated with measuring it identified. Benchmarking, the final element of Bowersox et al.’s view, does present some serious challenges in the context of supply chain comparators.
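The cash-to-cash figure can be made concrete with the conventional cash conversion formula (inventory days plus receivable days minus payable days); the formula is the standard definition rather than one given by Bowersox et al., and the 15-day inventory figure is a hypothetical illustration.

```python
def cash_to_cash(inventory_days, receivable_days, payable_days):
    """Days between paying suppliers for inventory and collecting
    cash from customers (conventional cash conversion cycle)."""
    return inventory_days + receivable_days - payable_days

# Buying on 30 days' credit but selling on 60 days' credit ("bad news"):
print(cash_to_cash(inventory_days=15, receivable_days=60, payable_days=30))  # -> 45
# Buying on 60 days' credit and selling on 30 days' credit ("good news"):
print(cash_to_cash(inventory_days=15, receivable_days=30, payable_days=60))  # -> -15
```

A negative result means the organisation collects from customers before it pays its suppliers, so growth actually generates cash rather than consuming it.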
The choice of benchmarking technique, whether done at a product, functional, process, organisational or strategic level, or with some combination of these, is a non-trivial undertaking. Even when benchmarking is done in relatively stable conditions, it is difficult. If we consider re-configurable agile supply chains as the environment for the exercise, it seems unlikely that historic approaches will be adequate, and yet-to-be-invented techniques will be needed.

At a market level, there appear to be two quite distinct logistics trends emerging. Large logistics organisations are getting larger through acquisition and diversification. These organisations are forming collaborative relationships with their large customers that may involve pooling resources, creating jointly owned entities or, more simply, outsourcing. For the most part these kinds of arrangements are predicated on high transaction volumes and an ability to create “Chinese walls” within the larger logistics environment. Economic value added (EVA) is primarily achieved through economies of scale and more effective planning. Smaller “market niche” organisations are emerging to offer specialist services at lower transaction volumes. Some of these market niches are vertical in nature (e.g. the distribution of drugs directly from warehouse to the patient’s bedside), and some horizontal (e.g. car distribution). EVA, in this case, is primarily achieved through higher levels of service and the ability to rapidly reconfigure a limited operational base. The remaining smaller and medium sized logistics organisations, which are neither linked with larger organisations nor tightly focused on a market niche, face what would appear to be an uncertain future. If the agile, re-configurable supply chain is indeed the future, what must be the focus for performance measurement, and what kind of performance measures will be used?


Constructing future supply chain performance measurement systems
In considering the future for supply chain PMS, two major requirements emerge from this analysis:
(1) performance measures must be linked with the strategy of an organisation, be part of an integrated control system, have internal validity and enable proactive management; and
(2) the performance measurement system must be dynamic, intra-connectable, focused and usable.

Let us now consider these requirements individually. The first is clearly an important requisite for internal stability, economy, efficiency and effectiveness. Figure 1 shows such an integrated system, albeit typified. More specifically, the performance measures that are commonly found in the logistics environment are shown in Table II. Many of these could easily be fitted into a PMS framework such as the Balanced Scorecard, but they are often only used in terms of functional activities. As a consequence, the type and volume of performance measures usually reflect managers’ immediate preoccupations (e.g. if cost is a current issue then there will be a proliferation of cost performance measures). These are problems of perspective and structure. Most middle managers behave in fairly territorial ways, and many PMS encourage this way of acting. Managerial flexibility is unlikely to emerge unless a broader visibility of information (rather than simply functional data) is given to these middle managers. To create a flexible internal system, there is a need for a change in the way performance measurement information is presented. This problem has been faced in many volatile environments, none more so than food retailing. Some time ago, Tesco

Table II. Typical logistic system performance measures

Financial performance: stock turn; ROI; ROA; ABC control; EVA; EPS.

Cost performance: cost per unit; cost per sale; inbound freight; outbound freight; order processing cost; direct/indirect labour cost; direct product profitability; inventory carrying cost; cost of returns; damage costs; backorder costs.

Customer service: on-time deliveries; no. of stock-outs; delivery cycle time; enquiry response time; no. of shipping errors; no. of customer complaints; delivery shortages; no. of packages damaged; reliability; SLA performance; paperwork delivery.

Quality: handling damage; information integrity; paperwork accuracy; no. of damage claims; picking and shipping accuracy; labour turnover.

Operational productivity: pick rate per employee; units shipped per employee; average used capacity; stock velocity; average pick time per order; transport capacity utility; equipment uptime; time per order processed.

plc were faced with the problem that store managers derived widely differing utility from the weekly and monthly performance data provided by Head Office. The performance document was many pages long and full of closely typed tables containing “useful” information. Many managers simply ignored this document, or at best gave it a cursory glance, and as a result it was difficult to enforce performance norms across the company. After a careful study of the problem, Tesco concluded that the problem lay in the way the data was structured: managers were finding that it took too long to locate relevant data about problems because it was lost among other data. Tesco’s answer was to create a “drill down” structure. Store activities were grouped under a Balanced Scorecard structure of people, finance, operations and customer, and these major groups were divided into sub-groups. For example, the “operations” category was divided into stock management, waste management, product availability, productivity and queue performance (the “one in front” initiative). At this level, the performance of the whole store could be summarised on one sheet. Managers could rapidly identify areas that were off target and then drill down into the more detailed data on subsequent sheets. To make matters simpler, the top sheet had a set of traffic lights against each category (red – investigate urgently, yellow – monitor carefully, green – within target). This initiative is not unique, and most organisations that have taken the time and trouble to make their performance data accessible have found a consequent improvement in organisational performance and flexibility.

The second requirement raises the operational problem of making a PMS dynamic from both an internal and a supply chain perspective.
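A minimal sketch of the traffic-light roll-up described above; the thresholds, the 5 per cent tolerance and the category scores are hypothetical, not Tesco’s actual rules.

```python
def traffic_light(actual, target, tolerance=0.05):
    """Map a measure's shortfall against its target to a status colour."""
    shortfall = (target - actual) / target
    if shortfall <= 0:
        return "green"    # within target
    if shortfall <= tolerance:
        return "yellow"   # monitor carefully
    return "red"          # investigate urgently

# Hypothetical "operations" sub-group scores as (actual, target) pairs.
operations = {"stock management": (98, 100),
              "waste management": (80, 100),
              "product availability": (101, 100)}
print({kpi: traffic_light(a, t) for kpi, (a, t) in operations.items()})
```

The point of the design is that a manager scans one sheet of colours and drills into detail only where the light is not green.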
“Dynamic” in these circumstances may be interpreted as meaning: real time control; internal operational and inventory visibility; external operational and inventory visibility; and internal and external systems connectability. For a system to be “real time”, it requires information to be gathered as part of the process in an accurate and timely manner. This information must be validated, processed and then used to drive operational activities; without it, the transition to “visibility” is impossible. To some extent, examples of this way of working already exist: automated warehouse systems, or “in-cab” vehicle location and control systems. In manufacturing, control systems based on shop floor data collection would be an analogous example. Some car manufacturing systems even allow the dealer to interrogate the factory system to determine where the customer’s car is in its build cycle. Several elements need to be in place for these systems to be a practical proposition:
(1) cheap and reliable identification of units in transition that will allow automatic interrogation of movements and locations, producing reliable data;
(2) standard protocols that are accepted and applied in all environments;
(3) communication systems capable of handling the volume of data generated by real time systems;
(4) hardware and software capable of processing the data quickly in order to provide managers with relevant control information;
(5) multi-layered control systems that allow fast management response to delinquent situations, and fast “drill down” data trees that enable problems to be pinpointed rapidly;
(6) system handshake protocols that will allow “high level” interrogations to take place between partners in the supply chain;


(7) routing and re-routing protocols that allow supply chain cost control, speed and flexibility of delivery response;
(8) high velocity electronic cash transfers instigated automatically; and
(9) robust systems with inbuilt automatic recovery abilities.

Today, in 2001, nearly all of these elements exist in one form or another, or are at least technically feasible. Why, then, is everyone not rushing to create this panacea? Strangely enough, the answers to this question are straightforward. The most obvious barrier is the economic one: the initial costs of such a system would be very high. Rather like the creation of the Internet, economic viability will only come with a massive buy-in from logistics operators around the world. The second barrier is that of organisational cultures and the general perception that organisations must compete rather than co-operate. Unless the secrecy barriers created by competitive attitudes, and possibly by management’s fear of scrutiny, are eliminated, it is unlikely that much real progress will be made. A third barrier is that of chauvinism. The problems that will be encountered are similar to those encountered within the European Community: suspicion of motives, pride and national beliefs all loom large, and mean that the way to progress will require a great deal of political skill. There are no doubt other barriers.

Conclusions
Forecasting real world events is always fraught with danger. It is likely that during the next ten years a few very large pan-world logistics organisations may amalgamate in the way described to create a “global control system”. It is even possible that in time (say by 2015) the larger national logistics organisations would join such an initiative. These new supply chain amalgamations will find their markets with the pan-world corporations, and together they will have enormous commercial influence.
These new corporations will need new PMS of the nature described in this paper, or face increasing instability and diseconomies of scale as they grow. Other supply chain organisations operating outside these large economic combines will undoubtedly have to interface with them in one way or another. These will almost certainly be constrained to national markets and, with the exception of the niche market operators, may be constrained to becoming the local interfaces of the large systems. These kinds of organisations will need to focus their PMS on internal activities. Better PMS will lead to better margins, and it is these margins that will be squeezed by the large operators. Survival may hinge on the difference between a good PMS and a mediocre one. In this respect, many supply chain companies still have a long way to go.

We are on the brink of some major changes in the supply chain industry, and there can hardly be a better time to start preparing PMS for the future. Whatever supply chain configurations emerge in the long run, PMS will need to be appropriately structured to provide managers with information that can be accessed quickly and understood easily. Information will need to be timely, so that proactive management can be practised, and relevant, so that broader strategic aims can be achieved.

References
Barker, R.C. (1993), “Value-adding performance measurement: a time-based approach”, International Journal of Operations & Production Management, Vol. 13 No. 5, pp. 33-40.

Bowersox, D.J., Closs, D.J. and Stank, T.P. (1999), 21st Century Logistics: Making Supply Chain Integration a Reality, Council of Logistics Management, Michigan State University, East Lansing, MI.
Christopher, M. (1998), Logistics and Supply Chain Management: Strategies for Reducing Cost and Improving Service, 2nd ed., Prentice Hall, Harlow.
Cooper, R. (1995), When Lean Enterprises Collide, Harvard Business School Press, Boston, MA.
Cooper, R. and Kaplan, R.S. (1987), “How cost accounting systematically distorts product costs”, in Burns, W.J. (Ed.), Accounting and Management: Field Study Perspectives, Harvard Business School Press, Boston, MA.
Drucker, P. (1995), “The information executives truly need”, Harvard Business Review, Vol. 73.
Eccles, R. and Pyburn, P. (1992), “Creating a comprehensive system to measure performance”, Management Accounting (UK), Vol. 70 No. 10, pp. 41-4.
Ford, H. and Crowther, S. (1926), Today and Tomorrow, William Heinemann, London.
Goldratt, E.M. and Cox, J. (1984), The Goal, North River Press, New York, NY.
Hammer, M. and Stanton, S.A. (1995), The Reengineering Revolution Handbook, HarperCollins, London.
Harrison, A. (1992), Just in Time in Perspective, Prentice-Hall, London.
Hughes, J., Ralf, M. and Michels, W. (1998), Transform Your Supply Chain: Releasing Value in Business, Thomson Business Press, London.
Johnson, H.T. and Kaplan, R.S. (1987), Relevance Lost: The Rise and Fall of Management Accounting, Harvard Business School Press, Boston, MA.
Kaplan, R.S. (1993), “Putting the balanced scorecard to work”, Harvard Business Review, Vol. 71, pp. 134-42.
Kaplan, R.S. and Norton, D.P. (1992), “The balanced scorecard – measures that drive performance”, Harvard Business Review, Vol. 70, pp. 71-9.
Kaplan, R.S. and Norton, D.P. (2000), “Having trouble with your strategy? Then map it”, Harvard Business Review, Vol. 78, p. 67, et seq.
Luscombe, M. (1993), Integrating the Business: MRP II, a Practical Guide for Managers, Butterworth-Heinemann, London.
Maskell, B. (1989), “Performance measurement for world class manufacturing (parts 1, 2 and 3)”, Management Accounting.
Maskell, B. (1991), Performance Measurement for World Class Manufacturing, Productivity Press, Cambridge, MA.
Mather, H. (1988), Competitive Manufacturing, Prentice-Hall, Englewood Cliffs, NJ.
Neely, A.D., Gregory, M.J. and Platts, K.W. (1995), “Performance measurement systems design”, International Journal of Operations & Production Management, Vol. 15 No. 4.
Oakland, J.S. (1995), Total Quality Management, Butterworth-Heinemann, London.
Schonberger, R.J. (1986), World Class Manufacturing: The Lessons of Simplicity Applied, Free Press, New York, NY.
Stalk, G. and Hout, T.M. (1990), “How time based management measures performance”, Planning Review, pp. 26-9.
Suzaki, K. (1987), The New Manufacturing Challenge, Free Press, New York, NY.
Zeithaml, V.A., Parasuraman, A. and Berry, L.L. (1990), Delivering Quality Service: Balancing Customer Perceptions and Expectations, Free Press, New York, NY.


Further reading
Goldratt, E.M. (1990), The Theory of Constraints, North River Press, New York, NY.
Lea, R. and Parker, B. (1989), “The JIT spiral of continuous improvement”, Industrial Management & Data Systems, Vol. 4, pp. 10-13.
Neely, A.D., Gregory, M.J., Platts, K.W. and Richards, H. (1994), “Realizing strategy through measurement”, International Journal of Operations & Production Management, Vol. 12 No. 3.
Oram, M. and Wells, R.S. (1995), Re-engineering’s Missing Ingredient, IPD, London.
Stephens, D.W. and Krebs, J.R. (1986), Foraging Theory, Princeton University Press, Princeton, NJ.


Measuring employee assets – The Nordic Employee Index™


Jacob K. Eskildsen The Aarhus School of Business, Aarhus, Denmark


Anders H. Westlund The Stockholm School of Economics, Stockholm, Sweden

Kai Kristensen
The Aarhus School of Business, Aarhus, Denmark

Keywords Employees, Measurement, Job satisfaction, Scandinavia

Abstract Based on a review of the literature within the fields of job satisfaction and organizational commitment, this paper proposes a generic model for measuring employee perceptions. The model consists of seven latent constructs operationalised through 29 manifest indicators. This model is subsequently tested on data from a questionnaire survey to which approximately 9,600 employees from the Nordic countries responded. The statistical technique applied for this analysis is partial least squares (PLS), which is well suited for structural equation modelling when the focus is on prediction. The results support the structure of the suggested generic model but reveal differences between the Nordic countries with regard to the strength of the relationships as well as the average case values of the seven latent constructs.

Introduction
The world market in which today’s businesses operate is becoming an increasingly complex place, putting new demands on corporate information systems (Kaplan and Norton, 1996). The information systems of the future will include measures of intangibles such as customer and employee/job satisfaction, since these intangibles account for more than half of the book value in most industries in both the US and Europe (Kaplan and Norton, 1996; Kristensen et al., 2003). These future information systems will require standardised methods for measuring intangibles (Kristensen and Westlund, 2001), and such methods are already well underway in the field of customer satisfaction through national indices such as the ACSI in the US and the EPSI rating in Europe. Less has happened in the field of job satisfaction with respect to developing standardised measuring methods (Kristensen and Westlund, 2001), and most of the studies done on job satisfaction are limited to specific countries or even specific organisations. These studies report different and sometimes contradictory findings with respect to the antecedents of motivation, satisfaction, faithfulness and commitment (Abraham, 1999; Eby et al., 1999; Eskildsen and Dahlgaard, 2000; Gaertner, 1999; Howard and Frink, 1996; de Jonge et al., 2001; Law and Wong, 1999; Pollock et al., 2000; Shockley-Zalabak et al., 2000; Susskind et al., 2000). The aim of this paper is, therefore, to study the literature and develop a generic model for measuring employee motivation, employee/job satisfaction, faithfulness and commitment as well as their antecedents.

Business Process Management Journal, Vol. 10 No. 5, 2004, pp. 537-550, © Emerald Group Publishing Limited, 1463-7154, DOI 10.1108/14637150410559216


Secondly, the aim of this paper is to test the generic model, called The Nordic Employee Index™, on data from a survey conducted in August 2001 in which approximately 9,600 respondents from Denmark, Norway, Sweden and Finland participated. Previous research, which shows that Danish employees are more satisfied than employees from the other Nordic countries (Sousa-Poza and Sousa-Poza, 2000), will also be tested here. The Nordic Employee Index™ is a joint project between the two independent market research companies MarkedsConsult A/S and CFI Group. The statistical method applied in this instance is partial least squares (PLS), a technique well suited for structural equation modelling when the focus is on prediction (Jöreskog and Wold, 1982). This analysis will show to what extent the generic model corresponds with the mental models of the respondents.

A generic model for employee perceptions
A great deal of research is being done on employees and the attitudes they have to different aspects of their job situation. In particular, the structural relationship between job characteristics, psychological responses and, finally, behavioural outcomes has received growing attention in recent years. The research within this field is mainly trying to explain or predict one of two things:
. the degree to which the employee intends to turnover; or
. the degree to which the employee is affectively committed.
Few attempts have been made to include both aspects, because studies tend to focus either on keeping the employee with the organisation or on how the employee performs while with the organisation (Eby et al., 1999; Gaertner, 1999; de Jonge et al., 2001; Susskind et al., 2000). Thus, the focus is either on reducing the costs associated with employee turnover or on increasing the financial gain from highly committed employees.
This is a paradox, because a high degree of faithfulness is not necessarily a good thing unless it is accompanied by a high degree of affective commitment, and vice versa. The two concepts do, however, have something in common, since both are action-oriented concepts that deal with the behaviour of employees (Duboff and Heaton, 1999; McCusker and Wolfman, 1998; Price, 1997). They are two aspects of the same construct, called employee loyalty by some (Duboff and Heaton, 1999; McCusker and Wolfman, 1998) and organizational commitment by others (Price, 1997). In the following, faithfulness and affective commitment will be treated as two aspects of the construct “loyalty”.

Many studies show that faithfulness and affective commitment are primarily driven by a positive psychological response to contextual stimuli and factors inherent in the individual (Abraham, 1999; Eby et al., 1999; Gaertner, 1999; Hochwarter et al., 1999; Koys, 2001; Law and Wong, 1999; Oshagbemi, 1999; Pollock et al., 2000). Some studies include job satisfaction as the psychological response of interest, some include intrinsic motivation, and others include both concepts (Eby et al., 1999; Hochwarter et al., 1999; de Jonge et al., 2001; Law and Wong, 1999; Susskind et al., 2000). A classical definition of job satisfaction states that it is “a pleasurable or positive emotional state resulting from the appraisal of one’s job or job experience” (Locke, 1976). Intrinsic motivation is often conceptualised as “the degree to which the employee

is self-motivated to perform effectively on the job – that is, the employee experiences positive internal feelings when working effectively on the job” (Hackman and Oldham, 1975). Based on these definitions, the two concepts are manifestations of the same construct: a psychological response to contextual stimuli and factors inherent in the individual. Including one concept as a predictor of behavioural outcomes without the other creates a paradox like the one mentioned before: a high degree of intrinsic motivation without a high degree of job satisfaction is not viable in the long run, and a high degree of job satisfaction without a high degree of intrinsic motivation is not a desirable situation in the eyes of the organisation. Intrinsic motivation and job satisfaction will therefore, in the following, be treated as two aspects of the construct “motivation & satisfaction”.

So far it has been argued that the construct “loyalty” is a behavioural outcome driven by “motivation & satisfaction”, a psychological response; obtaining this response is typically the focus of the human resource management (HRM) system of the organisation (Cornelius, 1999; DeCenzo and Robbins, 1999; Dale et al., 1997; Kaye and Dyason, 1998; Noe et al., 2000). Modelling the psychological response as driven by characteristics of the job and work environment is in line with Hackman and Oldham, as well as with later attempts to model these phenomena (Eby et al., 1999; Hackman and Oldham, 1975; de Jonge et al., 2001; Law and Wong, 1999).
In recent studies within the field of job satisfaction and intrinsic motivation, many different constructs have been included as predictors of these two outcomes. Overall, these constructs can be divided into four main groups of characteristics of the job and work environment (Anderson and Martin, 1995; Boswell and Boudreau, 2000; Clark, 2001; Ducharme and Martin, 2000; Gaertner, 1999; Howard and Frink, 1996; de Jonge et al., 2001; Law and Wong, 1999; Oshagbemi, 1999, 2000; Pollock et al., 2000; Sousa-Poza and Sousa-Poza, 2000; Susskind et al., 2000; Testa, 1999):
. organizational vision: this area focuses on the cultural/ethical aspects of the organisation, the ability of corporate management to make sound decisions and its ability to inform the employees about the state and direction of the organisation;
. superiors: this area focuses on the relationship that the employee has with the immediate manager, i.e. the perceived professional and leadership skills of the manager;
. co-workers: this area focuses on the social climate among the co-workers, the degree of professional cooperation as well as the sense of social belonging; and
. conditions of work: this area focuses on the job content, the physical work environment, job security and the pay and benefit package – in other words, all aspects of the job itself when perceived as isolated from the social and cultural context.
As an addition to these common areas we propose that corporate image should be included as a fifth area when attempting to predict the psychological response of the employees. Although the idea of including image as a predictor of the psychological response of employees might be quite novel, it is commonly included as a predictor in most studies of customer satisfaction (Oliver, 1997; Olve et al., 1999). The proposed model is shown in Figure 1 below.
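For readers who prefer code, the hypothesised structure can be written down explicitly. The sketch below encodes our reading of the conceptual model as a directed graph (construct names follow Figure 1; the edge list is an illustration, not code from the study) and checks the model's central assumption that every driver reaches “Loyalty” only through the intervening psychological response:

```python
# Hypothesised path structure (our reading of Figure 1): five exogenous
# constructs drive "Motivation & Satisfaction", which alone drives
# "Loyalty" -- no direct exogenous -> Loyalty paths.
EDGES = {
    "Image": ["Motivation & Satisfaction"],
    "Corporate Leadership": ["Motivation & Satisfaction"],
    "Immediate Manager": ["Motivation & Satisfaction"],
    "Cooperation": ["Motivation & Satisfaction"],
    "Conditions of Work": ["Motivation & Satisfaction"],
    "Motivation & Satisfaction": ["Loyalty"],
    "Loyalty": [],
}

# Exogenous constructs are those that are never the target of an edge.
EXOGENOUS = [v for v in EDGES if all(v not in ts for ts in EDGES.values())]

def paths(graph, start, end, seen=()):
    """All simple directed paths from start to end."""
    if start == end:
        return [(start,)]
    out = []
    for nxt in graph[start]:
        if nxt not in seen:
            out += [(start,) + p for p in paths(graph, nxt, end, seen + (start,))]
    return out

# Every exogenous construct reaches "Loyalty", and only via the
# intervening psychological response.
for v in EXOGENOUS:
    ps = paths(EDGES, v, "Loyalty")
    assert ps and all("Motivation & Satisfaction" in p for p in ps)
```

Making the structure explicit like this is a cheap guard against the questionnaire, the conceptual diagram and the estimated model drifting apart.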
Here organizational vision is included as “Corporate Leadership” and co-workers as “Cooperation”. The model

Measuring employee assets


BPMJ 10,5


Figure 1. The conceptual model of the Nordic Employee Index™

consists of five exogenous latent variables and two endogenous latent variables, with “Loyalty” as the final outcome. In the conceptual model shown in Figure 1, no direct impact from the five exogenous constructs on the final outcome “loyalty” has been suggested. This is also rather novel, since most of the structural studies done in this area include direct impacts from the exogenous variables to the final outcome (Eby et al., 1999; Gaertner, 1999; Howard and Frink, 1996; de Jonge et al., 2001). The reasoning behind this is that there will be no behavioural effect of any given job characteristic without an intervening psychological response. Any change within an organisation will not affect the behaviour of the employee without being processed mentally by the employee. It is, in other words, not possible to shortcut the brain of the employee in order to improve his/her loyalty to the organisation. Some of the structural studies within this field have been able to report empirically significant direct impacts (Eby et al., 1999; Gaertner, 1999; Howard and Frink, 1996; de Jonge et al., 2001), but if the theoretically justified indirect impact is present the direct impact will in many cases be present too. This phenomenon is more often a consequence of the statistical technique applied (structural equation modelling) than of theoretical insight unravelled from the data material. In order to make an estimation of the proposed model feasible, a questionnaire that covers different aspects of the seven constructs from the conceptual model has been developed, based on studies of the literature (Allen and Meyer, 1996; Oshagbemi, 1999; Hirschfeld, 2000; Pollock et al., 2000; Sousa-Poza and Sousa-Poza, 2000; Susskind et al., 2000) as well as focus group sessions in different organizational settings. The topics included under each of the seven constructs from the conceptual model are shown below:
(1) Image:
. overall image;
. proud to tell; and
. perception of others.

(2) Corporate leadership:
. overall evaluation;
. ability to make right decisions;
. ability to inform employees; and
. corporate ethics.
(3) Immediate manager:
. overall evaluation;
. professional skills; and
. leadership skills.
(4) Cooperation:
. professional cooperation;
. climate; and
. social belonging.
(5) Conditions of work:
. job content;
. work environment;
. pressure;
. pay and benefits;
. job security; and
. attention to professional and personal development.
(6) Motivation & satisfaction:
. overall satisfaction;
. comparison to ideal;
. feel motivated; and
. look forward to going to work.
(7) Loyalty:
. intend to stay;
. looking for another job;
. recommendation to friends;
. willingness to change;
. willingness to do extra effort; and
. colleagues think I do well.
The focus of the following section is on the data used for testing the conceptual model shown in Figure 1.

Sampling
The data for The Nordic Employee Index™ were collected in August 2001 through a postal survey. A total of 85,000 questionnaires were sent to employees in the Nordic


countries: 20,000 in Denmark, 25,000 in Sweden, 20,000 in Norway and 20,000 in Finland. The questionnaire contains generic questions on the topics shown previously that can be answered by any employee irrespective of industry, education etc. The survey also contains a number of demographic and other questions, such as age, education, size of company and whether or not the respondent holds a managerial position. The questionnaires were sent to randomly selected households in the Nordic countries. The selection in Denmark and Norway was based on the telephone companies’ databases of fixed-line customers. In Sweden and Finland the selection was based on the national register. In order to fill in the questionnaire the respondent had to meet the following criteria:
. be employed, but not self-employed;
. work at least 25 hours a week for the same employer; and
. be at least 18 years old.
If several members of the household fulfilled the criteria, the person whose birthday had passed most recently was asked to complete the questionnaire. About 9,600 responses were collected: about 2,650 in Denmark, 2,050 in Norway, 1,900 in Sweden and 3,000 in Finland. The nominal response rate is 11.3 per cent, but since a substantial proportion of Nordic households do not contain anyone who fulfils the criteria above, the effective response rate is considerably higher. It is our view that the effective response rate is well above 20 per cent, which is quite normal for postal surveys. All in all, the response rate is therefore perceived as satisfactory.

Methodology
There are quite a number of statistical techniques that can be applied for testing causal relationships, but the method used to estimate the model behind The Nordic Employee Index™ is partial least squares (PLS). The reason for this choice is that the focus of the study is on predicting employee loyalty, and PLS is a technique well suited for this purpose (Jöreskog and Wold, 1982).
Had the focus on the other hand been on explanation, LISREL would have been the natural choice (Jöreskog and Wold, 1982). Furthermore, LISREL is rather sensitive to skewed distributions, multicollinearity and misspecifications in the model (Bollen, 1989), which is not the case for PLS (Cassel et al., 1999). The PLS model consists of three parts: inner relations, outer relations and weight relations (Wold, 1980, 1985; Fornell and Cha, 1994). The inner relations depict the relations between the latent variables, as shown in equation (1):

η = Bη + Γξ + ζ   (1)

Here η is a vector of the latent endogenous variables and B the corresponding coefficient matrix (Fornell and Cha, 1994). ξ is a vector of the latent exogenous variables, Γ the corresponding coefficient matrix, and ζ is an error term. The second part of the model is the outer relations (Fornell and Cha, 1994). These define the relationships between the latent variables and the manifest variables and, in contrast to LISREL, they can be either reflective or formative by nature (Jöreskog and Wold, 1982). Since the analysis performed here is based on reflective outer relations

only this situation is mentioned in the following. The general formula for reflective outer relations is shown in equation (2):

y = Λy η + εy
x = Λx ξ + εx   (2)

Here y is a vector of the observed indicators of η and x is a vector of the observed indicators of ξ. Λy and Λx are matrices that contain the λi coefficients which link the latent and the manifest variables together, and εy and εx are the errors of measurement for y and x, respectively (Fornell and Cha, 1994). The weight relations are the final part of the PLS model. In PLS, each case value of the latent variables can be estimated through the weight relations shown in equation (3), as linear aggregates of their empirical indicators:

η̂ = ωη y
ξ̂ = ωξ x   (3)
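To make the three parts of the estimation concrete, the following is a deliberately minimal mode-A PLS iteration for a single exogenous/endogenous pair on synthetic data. It is an illustrative sketch of the general algorithm family, not the implementation used for the study (which involves seven constructs and respondent weighting, and whose software-level weighting scheme is not stated here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: one exogenous LV (xi) with two indicators, one
# endogenous LV (eta = 0.8*xi + noise) with two indicators.
xi = rng.standard_normal(n)
eta = 0.8 * xi + 0.6 * rng.standard_normal(n)
X = np.column_stack([xi + 0.4 * rng.standard_normal(n) for _ in range(2)])
Y = np.column_stack([eta + 0.4 * rng.standard_normal(n) for _ in range(2)])

def std(v):
    return (v - v.mean(0)) / v.std(0)

X, Y = std(X), std(Y)
w_x, w_y = np.ones(2), np.ones(2)

for _ in range(100):
    # Outer estimation: latent scores as weighted aggregates (cf. eq. 3).
    lx = std(X @ w_x)
    ly = std(Y @ w_y)
    # Inner estimation (centroid scheme): proxy = sign-weighted neighbour.
    s = np.sign(np.corrcoef(lx, ly)[0, 1])
    # Mode-A weight update: covariances of indicators with inner proxy.
    w_x_new = X.T @ (s * ly) / n
    w_y_new = Y.T @ (s * lx) / n
    if max(np.abs(w_x_new - w_x).max(), np.abs(w_y_new - w_y).max()) < 1e-10:
        w_x, w_y = w_x_new, w_y_new
        break
    w_x, w_y = w_x_new, w_y_new

# Inner relation (eq. 1 reduced to one path): regress the eta-score
# on the xi-score.
lx, ly = std(X @ w_x), std(Y @ w_y)
beta = (lx @ ly) / (lx @ lx)
print(round(beta, 2))  # close to the simulated structural effect
```

The loop alternates between outer estimation (scores as weighted aggregates of indicators), inner estimation (proxies built from neighbouring scores) and the weight update, which are the three ingredients described by equations (1)-(3).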

The ability to calculate case values for the latent variables is one other reason why PLS has been chosen instead of LISREL in this instance. In LISREL, case values cannot be calculated without factor indeterminacy, which means that they should be used with caution (Bollen, 1989). This is not a problem in PLS estimation.

Empirical results
In order to verify the conceptual model, the PLS procedure has been performed on the data collected for The Nordic Employee Index™, and the overall results for the Nordic countries are shown in Figure 2. In the estimation procedure the respondents have been weighted with respect to country, region, sex, age and education in order to make the results more representative.
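The weighting mentioned above is, in effect, a post-stratification adjustment: each respondent is weighted by the ratio of the population share to the achieved-sample share of his/her cell. A minimal single-variable sketch (the shares below are made up for illustration; the study weights on country, region, sex, age and education jointly):

```python
# Hypothetical population vs achieved-sample shares for one
# weighting variable (values invented for illustration).
population_share = {"Denmark": 0.19, "Sweden": 0.33, "Norway": 0.16, "Finland": 0.32}
sample_share = {"Denmark": 0.28, "Sweden": 0.20, "Norway": 0.21, "Finland": 0.31}

# Each respondent in cell c gets weight pop_share / sample_share, so
# over-represented cells are down-weighted and vice versa.
weights = {c: population_share[c] / sample_share[c] for c in population_share}

def weighted_mean(values, cells):
    """Weighted mean of respondent values, given each respondent's cell."""
    w = [weights[c] for c in cells]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)
```

With several weighting variables the cells become the cross-classification of all of them (or are balanced iteratively by raking); the principle is the same.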

Figure 2. Empirical model for the Nordic Employee Index™


In Figure 2 the unstandardised path coefficients resulting from the estimation are shown; they are all highly significant, with t-values in the range of 13-148. The coefficients express the effect that a change in the predicting variable will have on the dependent variable, ceteris paribus. If, for instance, the level of “Conditions of Work” increases by one, the level of “Motivation & Satisfaction” will increase by 0.6, ceteris paribus. The next step is therefore to evaluate the overall fit measures, which are shown in Table I below. There is a range of measures that can be used to assess the fit of a PLS model, but four of the most commonly applied are composite reliability, average variance extracted, R² and root mean squared error of approximation (Fornell and Cha, 1994; Chin et al., 1996; Hulland, 1999). These four fit measures have all been calculated for the empirical model. The first of these, included in Table I, is the composite reliability given by equation (4), where the λi are the factor loadings and the Θii the unique/error variances (Chin et al., 1996):

ρc = (Σ λi)² / [(Σ λi)² + Σ Θii]   (4)

This measure is used to assess the reliability of the latent variables and is preferable to Cronbach’s α, with its assumption of τ-equivalency among the measures, because it gives a better estimate of the true reliability (Chin et al., 1996). In this case the composite reliability of the latent variables is above 0.90, except for “Conditions of Work”, which is just below 0.85, and this is satisfactory. The second measure shown in Table I is the average variance extracted (AVE) given by equation (5). This measure calculates the amount of variance captured by the latent variables in relation to the amount due to measurement error (Fornell and Cha, 1994):

AVE = Σk=1..K λk² / Σk=1..K (λk² + θk)   (5)
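Equations (4) and (5) are simple to compute once a block's standardised loadings are known. A minimal sketch with hypothetical loadings (not the study's estimates; with standardised indicators the error variance of item i is 1 − λi²):

```python
def composite_reliability(loadings, error_vars):
    """Equation (4): (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    s = sum(loadings) ** 2
    return s / (s + sum(error_vars))

def average_variance_extracted(loadings, error_vars):
    """Equation (5): sum lambda^2 / sum (lambda^2 + theta)."""
    num = sum(l * l for l in loadings)
    return num / (num + sum(error_vars))

# Hypothetical standardised loadings for a three-item block; error
# variances follow from standardisation as 1 - lambda^2.
lam = [0.90, 0.80, 0.85]
theta = [1 - l * l for l in lam]
print(round(composite_reliability(lam, theta), 3),
      round(average_variance_extracted(lam, theta), 3))  # → 0.887 0.724
```

Both measures increase as the loadings grow relative to the measurement error, which is why a weak item (such as the job-security item discussed below) depresses the AVE of its whole block.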

The AVE’s are all above 0.5 except for “Conditions of Work”. An analysis of the individual items showed that the item dealing with job security is the main reason for this, which might be because of the specific structure of the labour markets in the

Table I. Fit-measures for the empirical model

                            Composite reliability     AVE       R²     RMSEA
Image                             0.9383             0.8353      –       –
Senior management                 0.9421             0.8028      –       –
Immediate superior                0.9475             0.8577      –       –
Cooperation                       0.9182             0.7895      –       –
Conditions of work                0.8484             0.4876      –       –
Motivation & satisfaction         0.9450             0.8113    0.7131    –
Loyalty                           0.8681             0.5299    0.6930    –
Total for model                     –                0.6900      –     0.059

Nordic countries. In these countries the labour market is highly regulated and employees are therefore guaranteed some degree of job security, so it might not be as important for them as for employees from other cultural settings. One solution to this problem would be simply to omit this item from the questionnaire, but since the purpose is to develop a generic model applicable across cultural differences the item has remained a part of the “Conditions of Work” construct. The third fit-measure in Table I is the R² of the latent endogenous variables. As in simple regression, this expresses the proportion of variance in the latent endogenous variables explained by the structural relationships (Fornell and Cha, 1994). The two R²s stemming from this analysis are both quite high, so the fit of the model is also satisfactory on this account. The final fit-measure in Table I is the root mean squared error of approximation (RMSEA). This measure is calculated as the root mean squared difference between the sample correlation matrix and the parameterised correlation matrix. A value of 0.05 or less indicates a close fit and a value of 0.1 or less indicates a reasonable error of approximation (Browne and Cudeck, 1993). In this instance the RMSEA is close to 0.05, so all in all the fit of the model must be said to be satisfactory. From the empirical model in Figure 2 it is evident that the most important driver of “Motivation & Satisfaction” at the overall level in the Nordic countries is “Conditions of Work”. It is in fact as important as the rest of the drivers put together, which means that the characteristics of the job are as important a driver as the social and image-related aspects, but this is not necessarily the same if one looks at individual countries. In order to test this, the estimation has been performed at the individual country level, and the relative unstandardised impacts of the five exogenous variables are shown in Figure 3 below.
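The RMSEA computation described above (a root mean squared difference between the sample correlation matrix and the model-implied one) can be sketched directly; the two matrices below are hypothetical stand-ins, not the study's:

```python
import numpy as np

def rms_residual(sample_corr, implied_corr):
    """Root mean squared difference over the off-diagonal
    (non-redundant) elements of two correlation matrices."""
    S, P = np.asarray(sample_corr), np.asarray(implied_corr)
    iu = np.triu_indices_from(S, k=1)  # upper triangle, diagonal excluded
    return float(np.sqrt(np.mean((S[iu] - P[iu]) ** 2)))

# Hypothetical 3x3 sample and model-implied correlation matrices.
S = np.array([[1.00, 0.62, 0.55],
              [0.62, 1.00, 0.48],
              [0.55, 0.48, 1.00]])
P = np.array([[1.00, 0.60, 0.57],
              [0.60, 1.00, 0.50],
              [0.57, 0.50, 1.00]])
print(round(rms_residual(S, P), 3))  # → 0.02
```

Note that this is a residual-based variant of the fit index; the RMSEA reported in covariance-structure software is derived from the fitting function and degrees of freedom rather than from raw correlation residuals.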
In each of the four estimations the respondents have been weighted with respect to region, sex, age and education in order to make the results more representative for the country in question. The results of these country-level estimations show that the structural relationships between “Motivation & Satisfaction” and its drivers differ across the four Nordic countries. The Danes put a higher emphasis on “Corporate Leadership” and “Immediate Superior” than employees from the other Nordic countries. The Norwegians on the other hand put a higher emphasis on “Image” and “Cooperation”, and Finnish employees put a higher emphasis on “Conditions of Work”. As mentioned previously, the application of PLS enables a calculation of case values for each of the latent variables using the weight relations. This has been done for each of the four countries, and the results on all seven latent constructs are shown in Figure 4 below. Here the case values have been indexed to a 0-100 scale in order to make the results easier to interpret. From Figure 4 it is obvious that Danish employees have a more positive view of their job situation than the employees in the rest of the Nordic countries. This confirms previous research, which showed that Denmark had the most satisfied employees compared to 20 other countries (Sousa-Poza and Sousa-Poza, 2000). The only surprising result compared to the previous research is that Swedish employees give such a low evaluation, since they are typically at the same level as the employees from Norway (Sousa-Poza and Sousa-Poza, 2000). If the trend from the cited study holds true, one would actually expect employees from a large number of countries to evaluate their job situation worse than the Swedish employees do if


Figure 3. Relative impacts in the Nordic countries

The Nordic Employee Index™ was expanded to other countries (Sousa-Poza and Sousa-Poza, 2000).

Concluding remarks
The purpose of this paper was to make a contribution within the field of measurement of intangible assets by developing a generic model for employee perceptions. Based on a literature review, a generic model for measuring employee motivation, employee/job satisfaction, faithfulness and commitment as well as their antecedents was proposed. This model consists of five exogenous variables and two endogenous variables related in the causal structure shown in Figure 1. In order to facilitate an empirical verification of the proposed generic model, called The Nordic Employee Index™, a questionnaire was developed consisting of 29 statements covering the model along with a number of demographic variables. This questionnaire was sent in August 2001 to 85,000 employees from Denmark,


Figure 4. Index by country

Norway, Sweden and Finland, of whom approximately 9,600 responded. The statistical method applied to verify the proposed model was PLS. The analysis showed that the model fits the data well and that the five exogenous variables are very good at explaining the variation in “Motivation & Satisfaction” and “Loyalty”. Further analysis of the data revealed that the relative impact of the five exogenous variables varies from country to country, as does the average case value for each of the latent variables. The Danes put a higher emphasis on “Corporate Leadership” and “Immediate Superior” than employees from the other Nordic countries. The Norwegians on the other hand put a higher emphasis on “Image” and “Cooperation”, and Finnish employees put a higher emphasis on “Conditions of Work”. Furthermore, Danish employees have a more positive view of their job situation than the employees in the rest of the Nordic countries, which confirms the previous research (Sousa-Poza and Sousa-Poza, 2000). Future research will focus on expanding the study to other countries in Europe in order to establish a standard for measuring employee motivation, employee/job satisfaction, faithfulness and commitment as well as their antecedents. The effect of demographic characteristics on these constructs will be of special interest in future research.

References
Abraham, R. (1999), “The impact of emotional dissonance on organizational commitment and intention to turnover”, The Journal of Psychology, Vol. 133 No. 4, pp. 441-55.


Allen, N.J. and Meyer, J.P. (1996), “Affective, continuance and normative commitment to the organization: an examination of construct validity”, Journal of Vocational Behavior, Vol. 49, pp. 252-76. Anderson, C.M. and Martin, M.M. (1995), “Why employees speak to coworkers and bosses: motives, gender and organizational satisfaction”, The Journal of Business Communication, Vol. 32 No. 3, pp. 249-65.


Bollen, K.A. (1989), Structural Equations with Latent Variables, Wiley, New York, NY.
Boswell, W.R. and Boudreau, J.W. (2000), “Employee satisfaction with performance appraisals and appraisers: the role of perceived appraisal use”, Human Resource Development Quarterly, Vol. 11 No. 3, pp. 283-99.
Browne, M.W. and Cudeck, R. (1993), “Alternative ways of assessing model fit”, in Long, J.S. (Ed.), Testing Structural Equation Models, Sage, London.
Cassel, C., Hackl, P. and Westlund, A.H. (1999), “Robustness of partial least-squares method for estimating latent variable quality structures”, Journal of Applied Statistics, Vol. 26 No. 4, pp. 435-46.
Chin, W.W., Marcolin, B.L. and Newsted, P.R. (1996), “A partial least squares latent variable modeling approach for measuring interaction effects: results from a Monte Carlo simulation study and voice mail emotion/adoption study”, Proceedings of the 17th International Conference on Information Systems, Cleveland, OH.
Clark, A.E. (2001), “What really matters in a job? Hedonic measurement using quit data”, Labour Economics, Vol. 8, pp. 223-42.
Cornelius, N. (1999), Human Resource Management – A Managerial Perspective, International Thomson Business Press, London.
Dale, B.G., Cooper, G.L. and Wilkinson, A. (1997), Managing Quality & Human Resources, Blackwell, Oxford.
DeCenzo, D.A. and Robbins, S.P. (1999), Human Resource Management, Wiley, New York, NY.
de Jonge, J., Dormann, C., Janssen, P.P.M., Dollard, M.F., Landeweerd, J.A. and Nijhuis, F.J.N. (2001), “Testing reciprocal relationships between job characteristics and psychological well-being: a cross-lagged structural equation model”, Journal of Occupational and Organizational Psychology, Vol. 74, pp. 29-46.
Duboff, R. and Heaton, C. (1999), “Employee loyalty: a link to valuable growth”, Strategy & Leadership, January/February.
Ducharme, L.J. and Martin, J.K. (2000), “Unrewarding work, coworker support, and job satisfaction”, Work and Occupations, Vol. 27 No. 2, pp. 223-43.
Eby, L.T., Freeman, D.M., Rush, M.C. and Lance, C.E. (1999), “Motivational bases of affective organizational commitment: a partial test of an integrative theoretical model”, Journal of Occupational and Organizational Psychology, Vol. 72, pp. 463-83. Eskildsen, J.K. and Dahlgaard, J.J. (2000), “A causal model for employee satisfaction”, Total Quality Management, Vol. 11 No. 8, pp. 1081-94. Fornell, C. and Cha, J. (1994), “Partial least squares”, in Bagozzi, R.P. (Ed.), Advanced Methods of Marketing Research, Blackwell, Cambridge. Gaertner, S. (1999), “Structural determinants of job satisfaction and organizational commitment in turnover models”, Human Resource Management Review, Vol. 9 No. 4, pp. 479-93. Hackman, J.R. and Oldham, G.R. (1975), “The development of the job diagnostic survey”, Journal of Applied Psychology, Vol. 60, pp. 159-70.

Hirschfeld, R.R. (2000), “Does revising the intrinsic and extrinsic subscales of the Minnesota satisfaction questionnaire short form make a difference?”, Educational and Psychological Measurement, Vol. 60 No. 2, pp. 255-70.
Hochwarter, W.A., Perrewé, P.L., Ferris, G.R. and Brymer, R.A. (1999), “Job satisfaction and performance: the moderating effects of value attainment and affective disposition”, Journal of Vocational Behavior, Vol. 54, pp. 296-313.
Howard, J.L. and Frink, D.D. (1996), “The effects of organizational restructure on employee satisfaction”, Group & Organization Management, Vol. 21 No. 3, pp. 278-303.
Hulland, J. (1999), “Use of partial least squares (PLS) in strategic management research: a review of four recent studies”, Strategic Management Journal, Vol. 20, pp. 195-204.
Jöreskog, K.G. and Wold, H. (1982), “The ML and PLS techniques for modelling with latent variables”, in Wold, H. (Ed.), Systems Under Indirect Observation, Vol. 1, North-Holland, New York, NY, pp. 263-70.
Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard, Harvard Business School Press, Boston, MA.
Kaye, M.M. and Dyason, M.D. (1998), “Harnessing human resources to achieve business excellence”, The TQM Magazine, Vol. 10 No. 5, pp. 387-96.
Koys, D.J. (2001), “The effects of employee satisfaction, organizational citizenship behavior, and turnover on organizational effectiveness: a unit-level, longitudinal study”, Personnel Psychology, Vol. 54, pp. 101-14.
Kristensen, K. and Westlund, A.H. (2001), “Management and external disclosure of intangible assets”, European Quality, Vol. 8 No. 4, pp. 60-4.
Kristensen, K., Juhl, H.J. and Eskildsen, J.K. (2003), “Models that matter”, International Journal on Business Performance Management, Vol. 5 No. 1, pp. 91-106.
Law, K.S. and Wong, C-S. (1999), “Multidimensional constructs in structural equation analysis: an illustration using the job perception and job satisfaction constructs”, Journal of Management, Vol. 25 No. 2, pp. 143-60.
Locke, E. (1976), “The nature and consequences of job satisfaction”, in Dunnette, M.D. (Ed.), Handbook of Industrial and Organizational Psychology, Rand McNally, Chicago, IL, pp. 1297-349.
McCusker, D. and Wolfman, I. (1998), “Loyalty in the eyes of employers and employees”, Supplement to the November 1998 Workforce, available at: www.workforceonline.com
Noe, R.A., Hollenbeck, J.R., Gerhart, B. and Wright, P.M. (2000), Human Resource Management: Gaining a Competitive Advantage, McGraw-Hill, Boston, MA.
Oliver, R.L. (1997), Satisfaction, McGraw-Hill, Boston, MA.
Olve, N.G., Roy, J. and Wetter, M. (1999), Performance Drivers, Wiley, Chichester.
Oshagbemi, T. (1999), “Overall job satisfaction: how good are single versus multiple-item measures?”, Journal of Managerial Psychology, Vol. 14 No. 5, pp. 388-403.
Oshagbemi, T. (2000), “Satisfaction with co-workers’ behaviour”, Employee Relations, Vol. 22 No. 1, pp. 88-106.
Pollock, T.G., Whitbred, R.C. and Contractor, N. (2000), “Social information processing and job characteristics”, Human Communication Research, Vol. 26 No. 2, pp. 292-330.
Price, J.L. (1997), “Handbook of organizational measurement”, International Journal of Manpower, Vol. 18, pp. 303-558.
Shockley-Zalabak, P., Ellis, K. and Winograd, G. (2000), “Organizational trust: what it means, why it matters”, Organization Development Journal, Vol. 18 No. 4, pp. 35-48.


Sousa-Poza, A. and Sousa-Poza, A.A. (2000), “Well-being at work: a cross-national analysis of the levels and determinants of job satisfaction”, Journal of Socio-Economics, Vol. 29, pp. 517-38.
Susskind, A.M., Borchgrevink, C.P., Kacmar, K.M. and Brymer, R.A. (2000), “Customer service employees’ behavioral intentions and attitudes: an examination of construct validity and a path model”, Hospitality Management, Vol. 19, pp. 53-77.
Testa, M.R. (1999), “Satisfaction with organizational vision, job satisfaction and service efforts: an empirical investigation”, Leadership & Organization Development Journal, Vol. 20 No. 3, pp. 154-61.
Wold, H. (1980), “Model construction and evaluation when theoretical knowledge is scarce: theory and application of partial least squares”, in Ramsey, J.B. (Ed.), Evaluation of Econometric Models, Academic Press, New York, NY, pp. 47-74.
Wold, H. (1985), “Partial least squares”, in Johnson, N. (Ed.), Encyclopedia of Statistical Sciences, Vol. 6, Wiley, New York, NY, pp. 581-91.


Intellectual capital – defining key performance indicators for organizational knowledge assets
Bernard Marr


Centre for Business Performance, Cranfield School of Management, Cranfield, Bedfordshire, UK

Gianni Schiuma
DAPIT – Facoltà di Ingegneria, Potenza, Italy

Andy Neely
Centre for Business Performance, Cranfield School of Management, Cranfield, Bedfordshire, UK

Keywords Intellectual capital, Assets, Performance measurement (quality), Knowledge management

Abstract Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin the capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge-based assets and then introduces the knowledge asset map, which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests visualising the value pathways of knowledge assets before designing strategic key performance indicators, which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy.

Introduction
In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. Knowledge is recognised as a durable and more sustainable strategic resource for acquiring and maintaining competitive advantage (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to

The authors would like to thank Göran Roos, Steven Pike and Oliver Gupta, as well as the two anonymous reviewers, for their valuable comments, which helped us to improve this paper.

Business Process Management Journal, Vol. 10 No. 5, 2004, pp. 551-569. © Emerald Group Publishing Limited, 1463-7154. DOI 10.1108/14637150410559225


become competencies, and these are core competencies when they represent a domain in which the organisation excels (Prahalad and Hamel, 1990). Competence is the result of both individual and organizational activities. In particular, at the individual level it includes personal knowledge and individual skills and talents, while at the organizational level competence includes infrastructure, networking relationships, technologies, routines, trade secrets, procedures and organizational culture. Knowledge is today’s driver of company life (Bontis et al., 1999), and the wealth-creating capacity of the company is based on the knowledge and capabilities of its people (Savage, 1990). Today, many companies see themselves as learning organisations pursuing the objective of continuous improvement in their knowledge assets (Senge, 1990). This means that knowledge assets are fundamental strategic levers in order to manage business performance and the continuous innovation of a company (Marr and Schiuma, 2001; Mouritsen et al., 2002; Quinn, 1992; Boisot, 1998). In order to execute a successful strategy, organisations need to know what their competitive advantage is and what capabilities they need to grow and maintain this advantage. Capabilities are underpinned by knowledge. Therefore, organisations that seek to improve their capabilities need to identify and manage their knowledge assets. The perspective that knowledge assets represent the foundation of organizational capabilities explains the growing interest in knowledge management as an evolving discipline and approach to improve business performance. Although the management literature provides plentiful insights into knowledge management practices, little has been documented about the assessment of organizational knowledge assets. Given that it is difficult to manage something that is not being measured, organisations require frameworks to measure their knowledge assets.
Managers need tools that help organisations define key performance indicators for those knowledge assets that underpin the strategic key capabilities of the organisation. This paper will discuss the concept of knowledge and will then review approaches suggested in the literature to measure intellectual capital and knowledge assets. Subsequently, the paper will present the knowledge asset dashboard as a framework that allows companies to define key performance indicators for measuring knowledge assets. Finally, it discusses the findings and suggests possible paths for taking this research further.

The concept of knowledge assets
The attempt to operationalise the employment of knowledge has led academics as well as practitioners to define new concepts to identify, classify and manage the knowledge resources of organisations. The management literature shows two main streams that discuss knowledge. One of them, taking an epistemological approach, interprets knowledge as an entity and discusses the differences between information and knowledge, and its implications for knowledge management, whereas the other stream discusses knowledge as an organizational asset that has to be managed in order to improve organizational performance. In the former stream, knowledge is interpreted as information with an applied interpretation process (Penrose, 1959; Davenport and Prusak, 1998; Liebowitz and Wright, 1999). The attention is focused on the different features of knowledge in order to provide managers with meaningful guidelines for implementing knowledge management processes (Albino et al., 2001; Spender, 1996; Winter, 1987). One of the most
meaningful results from this discussion is the distinction between tacit and explicit knowledge (Prusak, 1997; Nonaka, 1991; Nonaka and Takeuchi, 1995; Polanyi, 1958). The latter stream of research defines knowledge assets as a major part of an organisation’s value. This research seeks to help managers in managing and evaluating company performance (Teece, 2000; Roos et al., 1997; Stewart, 1997). A major contribution provided by this research stream is the concept of intellectual capital (IC), which helps managers to identify and classify the knowledge components of an organisation. The two streams are complementary and provide the cornerstones for the definition of a managerial framework to identify, assess, exploit and manage organizational knowledge. IC has contributed to a better understanding of knowledge assets and was a first step towards a less abstract and more operational conceptualisation of knowledge. The literature refers to IC in a number of different ways. The expression “intellectual capital statement” refers to “capital”, emphasising the accounting value (Bukh et al., 2001). Some authors use the concept of IC while referring to the knowledge of a social community, such as an organisation or professional practice groups (Nahapiet and Ghoshal, 1998). Other scholars interpret IC as a human resource (Boudreau and Ramstad, 1997; Liebowitz and Wright, 1999), while yet others associate it with information technology (Davenport and Prusak, 1998). Knowledge management practitioners often interpret IC as a portfolio of organised knowledge which can be leveraged into wealth-creating processes and activities (Chase, 1997). In recent years different definitions of IC have been provided in the management literature and some of the key definitions are summarised in Table I. The ownership of specific knowledge provides organisations with particular capabilities (Leonard-Barton, 1992; Prahalad and Hamel, 1990).
Therefore, the management of knowledge assets plays a key part in allowing an organisation to maintain and refresh its competencies over time. In order to manage knowledge assets, organisations need to measure them. The next section introduces the main approaches suggested for measuring knowledge assets.

Overview of measurement approaches
The aims of measuring knowledge assets are twofold. First, to evaluate an organisation in order to communicate its real value to the market (external perspective). Second, to identify the knowledge components of an organisation in order to manage them so they can be turned into continuous performance improvement (internal perspective). Much attention in the IC literature has been on the market evaluation of an organisation’s knowledge assets. Many people involved in the debate believe that the “true” value of a company can only be assessed by taking the intangible assets into account. They argue that assets such as brand equity, knowledgeable workers, corporate culture, stakeholder relations, access to markets, competitive position and a host of other off-balance sheet resources have to be considered (Harvey and Lusch, 1992). This external perspective of measuring IC appears to be particularly useful for accounting purposes since it allows organisations to place a value on their intangible assets. Internal measurement and reporting of IC is about knowledge-management activities. On the basis of the above remarks it can be said that in terms of IC measurement knowledge management and accounting do not “serve the same masters”

Intellectual capital

BPMJ 10,5

Table I. Definitions of intellectual capital

Hall (1992): May be classified as “assets” (e.g. brand, trademark, contracts, databases) or “skills” (e.g. know-how of employees, organizational culture)
Edvinsson and Sullivan (1996): Knowledge that can be converted into value
Brooking (1996): Consists of four main components: market assets, human-centred assets, intellectual property assets and infrastructure assets
Sveiby (1997): Consists of three categories of intangible assets: internal structure, external structure and human competence
Roos et al. (1997): It is composed of a thinking part, i.e. the human capital, and a non-thinking part, i.e. the structural capital
Stewart (1997): Intellectual material that has been formalised, captured, and leveraged to produce a higher-valued asset
Edvinsson and Malone (1997): It is the sum of human capital and structural capital. It involves applied experience, organizational technology, customer relationships and professional skills that provide an organisation with a competitive advantage
Bontis et al. (1999): It is a concept that classifies all intangible resources as well as their interconnections
Lev (2001): Sources of future benefits (value), which are generated by innovation, unique organizational designs, or human resource practices
Marr and Schiuma (2001): It is composed of all knowledge-based assets, distinguished between organizational actors (relationships, HR) and infrastructure (virtual and physical)

(Birkinshaw, 2002). The former is concerned with optimising the management of knowledge resources in the company in order to continuously improve performance. The latter is aimed at setting standards for organizational accounting in order to give stakeholders a more comprehensive picture of the traditional monetary value of the company. The assessment of knowledge in organisations is a difficult matter. However, since knowledge is of significant importance for a company’s competitiveness, its evaluation is a fundamental issue. Leif Edvinsson, ex-corporate director of intellectual capital at Skandia, highlights this with the words: “A company grows, because it has hidden values. To keep growing you must surface them, care for them, and transfer them through the business... if managers can measure it, they will value it” (Stewart, 1994). In recent years a number of models have been proposed to measure knowledge assets. Their number can be considered an indicator of the ineffectiveness of traditional measurement frameworks in capturing the knowledge dimensions within an organisation. A review of the literature on traditional performance measurement frameworks reveals that little attention is dedicated to assessing knowledge. The financially
biased management control of the 20th century has been heavily criticized (Kaplan, 1983; Meyer and Gupta, 1994; Johnson and Kaplan, 1987) and approaches such as the Performance Measurement Matrix by Keegan et al. (1989), Lynch and Cross’ SMART Pyramid (1991) and the Macro Process Model (Brown, 1996) were proposed to reflect the need for more comprehensive measurement systems. Although, for example, the Performance Measurement Matrix allows any measure of performance to be accommodated within it, there is no explicit dimension for knowledge assets. Besides explicit performance measurement frameworks, other initiatives such as the Malcolm Baldrige Award and its European equivalent, the EFQM Excellence Model, have had an impact on the corporate measurement agenda and encouraged organisations to examine some of the “soft” dimensions of performance such as leadership, employees and impact on society. The need to implement more balanced and integrated measurement frameworks led to the development of the Tableau de Bord (Epstein and Manzoni, 1997) and subsequently the Balanced Scorecard (Kaplan and Norton, 1992, 1996). Implicitly, the learning and growth perspective of the Balanced Scorecard incorporates measures about innovation capability and staff development, but it does not provide more detailed guidelines on which knowledge dimensions should be measured. A more recent performance measurement framework, the Performance Prism (Neely et al., 2002; Neely and Adams, 2001; Marr and Neely, 2001), explicitly takes some knowledge assets of organisations into account. The capabilities facet considers some of the knowledge assets, such as capabilities of the people, practices and routines, infrastructure as well as technological capabilities. Although the Performance Prism reflects the need to integrate knowledge assessment with other more traditional aspects of performance measurement, there are no explicit guidelines on which knowledge assets to choose.
A systematic review of the management literature identified the following key models that address the measurement of knowledge assets: the Skandia Navigator (Edvinsson and Malone, 1997), the IC-Index (Roos et al., 1997), the IC Audit model (Brooking, 1996) and the Intangible Asset Monitor (Sveiby, 1997). Each of these models will be discussed in the following sections.

Skandia Navigator
In order to evaluate its market value, Skandia proposed to split market value into financial capital and IC. The latter is considered to equate to the firm’s intangible assets. The components of IC were subdivided into human capital and other intangible assets embedded in the organisation itself, called structural capital. Structural capital has been further subdivided into customer capital (e.g. the value of customer relations) and organizational capital. The latter can be further broken down into process capital, related to the procedures and routines of the company’s internal processes, and innovation capital, which represents the enablers to innovate products and processes. The Skandia approach, therefore, splits IC into the following four categories: human capital, customer capital, process capital and innovation capital (Figure 1). On the basis of the above classification Skandia has developed an IC assessment tool called the Skandia Navigator. It is very similar to the Balanced Scorecard proposed by Kaplan and Norton but adds a human perspective in order to have the following five foci of measurement: the financial focus, the customer perspective (customer focus),

Figure 1. Skandia’s classification of the intellectual capital

the process perspective (process focus), the human perspective (human focus), and the renewal and development perspective (innovation focus) (Figure 2). Although Skandia made a significant contribution towards raising awareness of IC, the problem with the Skandia approach is that it was developed specifically for one company. The classification of assets is primarily externally focused; its aim is to visualise the value of Skandia and to educate the analyst community. The Skandia Navigator is very similar to the Balanced Scorecard and is intended to function as a management tool. The problem is that all measures are eventually expressed in monetary terms, and it is questionable whether one can express knowledge assets in monetary terms. In the Balanced Scorecard approach there is a clear vision of how the different perspectives are related: financial performance is achieved by meeting customers’ needs with a certain market proposition; to deliver this market proposition organisations have to execute their processes; and to do this they need the right training and development. It is not clear how the five perspectives in the Skandia Navigator relate to each other in a comparable way. Furthermore, the overarching equation, which sums IC and financial capital to give the market value of an organisation, is flawed: the variables are not separable in this way since they interact with each other, and they are not additive components of the same kind but rather two different sides of an equation.

IC-Index approach
The IC-Index approach represents an attempt to assess IC holistically. According to its authors, Johan and Göran Roos and colleagues, it can be interpreted as an approach which consolidates IC indicators into a single index in order to provide a more comprehensive visualisation of the company’s IC. The authors point out that one of the key advantages of this approach is that it enables organisations to correlate changes in IC with changes in the market (Roos et al., 1997).
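In essence, the IC-Index reduces many heterogeneous indicators to one number: rescale each indicator to a dimensionless value, weight it according to strategy, and sum. A minimal sketch of that consolidation; every indicator name, normalisation range and weight below is invented purely for illustration:

```python
def normalise(value, lo, hi):
    """Rescale a raw indicator onto a dimensionless 0..1 scale."""
    return (value - lo) / (hi - lo)

# Hypothetical indicators: raw value plus the range used to make it dimensionless.
indicators = {
    "training_days_per_employee": (12.0, 0.0, 20.0),
    "staff_retention_rate":       (0.85, 0.0, 1.0),
    "patents_filed_per_year":     (6.0,  0.0, 10.0),
}

# Strategy-driven weights agreed by the management team; they must sum to 1.
weights = {
    "training_days_per_employee": 0.5,
    "staff_retention_rate":       0.3,
    "patents_filed_per_year":     0.2,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

ic_index = sum(
    weights[name] * normalise(value, lo, hi)
    for name, (value, lo, hi) in indicators.items()
)
print(round(ic_index, 3))  # 0.675
```

Because the weights are set subjectively, the resulting number is only as meaningful as the management dialogue behind it.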
The IC-Index approach is based on an IC distinction tree which splits IC into human capital and structural capital, separating “thinking” and “non-thinking” knowledge assets. In other words, knowledge embodied in employees is separated from the

Figure 2. Skandia Navigator

structural knowledge assets of a company. Human capital is further split into competence (i.e. skills and education), attitude (i.e. the behavioural components of employees’ work), and intellectual agility (i.e. the innovation ability of employees). Structural capital, in turn, is seen as an aggregation of relationship capital (i.e. the relationships the company maintains with customers, suppliers, allies, shareholders and other stakeholders), organizational capital (i.e. all sources of organizational knowledge, e.g. databases, process manuals, culture and management styles), and renewal and development value (i.e. the intangible side of “anything” and “everything” that can generate value in the future, e.g. investments in training employees, reengineering and restructuring efforts, research and development). Figure 3 shows the complete IC distinction tree as proposed by Roos et al. (1997). Roos et al. (1997) propose consolidating all the different IC measures into a single index, or at least into a small number of indices. In this way it is possible to provide a comprehensive picture of a company’s IC which would allow both inter-company comparison and tracking of the relationship between the IC and the financial capital of an organisation. In order to define an IC-Index, an organisation should identify and list its most important IC measures. The list of indicators to measure organizational IC should be ranked and, for each category of IC, only a handful of meaningful indicators should be selected. Once the list of meaningful indicators has been defined, each indicator must be expressed as a dimensionless number. After the IC indicators are all expressed

Figure 3. The IC distinction tree

in dimensionless numbers the process of consolidation can start. The indicators chosen must be weighted and summarised into a single index. There are three main factors affecting the selection process the company has to go through: the strategy, the characteristics of the company and the characteristics of the business the company operates in. Figure 4 shows the role each of these factors plays in the selection of IC forms, weights and indicators.

Figure 4. The selection of capital forms, weights and indicators

Strategy is the driver in making sense of the different IC forms and in understanding which ones will support the company in achieving its strategic goals. Therefore, when selecting the capital forms, strategy is the key issue. When choosing indicators, the characteristics of the industry the company operates in are essential. Finally, for the choice of weights, managers need to consider the relative importance of each capital form in the creation of value in the particular business of the company. The most valuable contribution of the IC-Index approach is that it allows organisations to measure how changes in the market or changes in other performance indicators correlate with changes in the IC-Index. It can, therefore, be used as a tool for managers to test or challenge some of their assumptions about how the development of IC affects the business as a whole. On a critical note, using aggregates makes it difficult to identify the key business drivers. Furthermore, the weightings for the different measures are assigned subjectively, which can be dangerous if managers get them wrong, as the index would then not reflect the real IC of an organisation. On the other hand, if a good dialogue about the weighting of IC forms takes place among the management team, then this can be one of the most valuable contributions of this approach. It is important that the management team understands
and agrees which forms of IC are most important to the organisation and which forms of IC will drive sustainable performance. It is also important to note that the IC-Index does not allow organisations to benchmark their performance against each other, because every company’s IC-Index will be made up of different measures with different weightings. There is a danger that if organisations start to publish their IC-Index, analysts might start comparing them; however, without some standardisation this will always mean comparing apples with pears.

IC Audit model
The IC Audit model proposed by Annie Brooking (1996) attempts to calculate a dollar value for the non-tangible part of the organisation, called IC. Brooking interprets IC as containing the following components: market assets, human-centred assets, intellectual property assets and infrastructure assets. Market assets are market-related intangibles such as brands, contracts, customers, distribution channels, licensing agreements and franchise contracts. Human-centred assets are the knowledge of the people within the organisation and involve components such as expertise, problem-solving capability, creativity, and entrepreneurial and managerial skills. According to Brooking, intellectual property assets are corporate assets that can be expressed in financial terms. Examples of these assets are trade secrets, copyrights, patents, service marks and design rights. Finally, infrastructure assets are those technologies, methodologies and processes which enable the organisation to function. The implementation of the IC Audit starts with a questionnaire containing 20 questions. This set of questions is designed to assess whether the organisation needs to develop new or strengthen existing areas of IC. In order to conduct a full IC Audit of a company a number of specific audit questionnaires are used.
Once an organisation completes its IC audit, it can use three approaches to calculate a monetary value for its IC:
(1) the cost-based approach – determining the value of an asset by ascertaining its replacement costs;
(2) the market-based approach – determining the value of an asset by obtaining a consensus of what others in the market have valued the asset at; and
(3) the income-based approach – determining the value of an asset by looking at the income-producing capability of the asset.
The IC Audit is heavily externally focused, with the aim of placing a dollar value against IC assets using one of the three approaches described above. There are various problems with these approaches, some of which are already highlighted by Annie Brooking. The underlying assumption of the cost-based approach is that the price of a new asset equals the economic value of that asset during its lifetime. The major disadvantage of this approach is that it equates price with value. Baruch Lev explains that in most cases there is no effective market for intangible assets and that they are often not easily transferred; it is therefore very risky to equate costs with value in the area of IC (Lev, 2001). Even in cases where replacement costs might reasonably be expected to provide a value, the valuer has to quantify the necessary adjustment from the brand-new state of the replacement asset to the actual state of the asset under consideration (Brooking, 1996).
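The three valuation routes can be sketched as follows. This is a deliberately simplified, hedged illustration: the function names are inventions of this sketch, all figures are made up, and each body compresses a judgement-laden valuation exercise into one line:

```python
def cost_based(replacement_cost, condition_adjustment):
    """Value as the cost of rebuilding the asset today, adjusted
    from brand-new state down to the asset's actual state."""
    return replacement_cost * condition_adjustment

def market_based(comparable_prices):
    """Value as a consensus of what others paid for similar assets
    (which presumes a market that, for most IC, does not exist)."""
    return sum(comparable_prices) / len(comparable_prices)

def income_based(expected_earnings, discount_rate):
    """Value as the present value of the earnings stream the asset
    is expected to generate, discounted year by year."""
    return sum(e / (1 + discount_rate) ** t
               for t, e in enumerate(expected_earnings, start=1))

print(cost_based(500_000, condition_adjustment=0.5))            # 250000.0
print(market_based([410_000, 450_000, 430_000]))                # 430000.0
print(round(income_based([100_000, 100_000, 100_000], 0.10)))   # 248685
```

The income-based example shows why the discount factor matters: three equal annual earnings of 100,000 are worth noticeably less than 300,000 once discounted at 10 per cent.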
The problem of non-existent efficient markets is even greater for the market-based approach. In order to obtain a consensus among participants in the market, there has to be a market in existence. IC is often very specific to organisations. With the exception of a few intangible assets like patents and brand names, there is no active or public market with any record of exchange, which makes it impossible to determine a value (Teece, 2000). The problem with the income-based approach is that it is hard to determine the future economic benefits derived from the ownership of IC and equate these to value. For this it is important to use an appropriate discount factor in order to derive the present value of future earnings expectations. Besides this, it is very hard to determine how much the ownership of a specific piece of IC contributes to earnings, as there is usually only an indirect link. Overall, the IC Audit is an interesting approach to identifying IC within organisations and the work has contributed towards raising awareness about the importance of IC. However, the suggested valuation approaches need further refinement in order to be attractive for practical use.

Intangible asset monitor
The Intangible Asset Monitor was developed by Karl Erik Sveiby as a presentation format that displays indicators for internal management information purposes (Sveiby, 1997). It adopts the concept of intangible assets rather than IC. The following three categories of intangible assets are taken into account: (1) intangibles represented by the competence of employees; (2) intangibles related to the internal structure of the organisation; and (3) those related to the external structure, including brand names, image, relationships with suppliers and, most importantly, relationships with customers. Sveiby suggests that the first step is to determine who will be interested in the measurement of IC.
He also distinguishes between external presentation, where the company describes itself as accurately as possible to its stakeholders (such as customers, creditors, or shareholders), and internal measurement, where management tries to gather as much information as possible in order to monitor progress and be able to take corrective actions (Sveiby, 1997). The Intangible Asset Monitor is internally focused. For the purpose of classification, employees are grouped into professionals (people who plan, produce, process, or present products and solutions) and support staff (those who work in accounting, administration, reception, etc.). The competence grouping only includes professionals, who should be measured according to the type of activity, degree of responsibility, or area of competence. The internal structure grouping measures only support staff, whereas the external structure covers independent contractors as well as measures such as the time employees spend developing and building customer relationships. Furthermore, Sveiby suggests that customers should also be categorised based on the type of intangible assets they provide (e.g. learning, image) as well as on the level of profitability they provide to the organisation. Measurements for each of the three intangibles can then be divided into three measurement groups indicating the following:
(1) growth and renewal; (2) efficiency; and (3) stability. Organisations are advised to develop one or two indicators for each intangible under each of the measurement groups. Table II shows the basic structure of the Intangible Asset Monitor and includes some examples of measures companies might want to include for each of the intangibles by measurement group. The Intangible Assets Monitor is a valuable contribution to the debate on IC and especially emphasises the internal perspective. It is meant to act as a management and communication tool and not as a valuation approach, although the derived business game “Tango”, marketed by Celemi, does use Sveiby’s methodology and “values” companies’ intangible assets. It is important to provide managers with a meaningful way to communicate information on intangible assets, and the Intangible Assets Monitor is an appealing approach to measuring the performance of IC. The downside is that it is not clear how to integrate it into broader performance measurement frameworks in order to establish the link between intangible performance drivers and performance outcomes, which becomes increasingly important (Kaplan and Norton, 2000).
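The monitor is essentially a three-by-three matrix: three intangible asset classes crossed with three measurement groups, with one or two indicators per cell. A minimal data-structure rendering of that shape; the example indicators are drawn from Sveiby's published matrix, and any real monitor would pick its own:

```python
# Sveiby's matrix as a dict of dicts: asset class -> measurement group -> indicators.
monitor = {
    "Competence of employees": {
        "growth/renewal": ["Years in profession", "Training costs"],
        "efficiency":     ["Value-added per professional"],
        "stability":      ["Professional turnover rate"],
    },
    "Internal structure": {
        "growth/renewal": ["Investments in internal structure"],
        "efficiency":     ["Proportion of support staff"],
        "stability":      ["Support staff turnover rate"],
    },
    "External structure": {
        "growth/renewal": ["Profitability per customer"],
        "efficiency":     ["Satisfied customers index"],
        "stability":      ["Frequency of repeat orders"],
    },
}

# Sveiby's advice made explicit: one or two indicators per cell, no more.
for asset_class, groups in monitor.items():
    for group, inds in groups.items():
        assert 1 <= len(inds) <= 2, (asset_class, group)
```

The per-cell constraint is the interesting design choice: it forces a selection discussion rather than an inventory of everything measurable.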
Table II. Matrix of IC measures of the Intangible Asset Monitor

Human competence
- Indicators of growth/renewal: Years in profession; Education level; Training costs; Turnover
- Indicators of efficiency: Proportion of professionals in the company; Leverage effect; Value-added per professional
- Indicators of stability: Average age; Seniority; Relative pay position; Professional turnover rate

Internal structure
- Indicators of growth/renewal: Investments in internal structure; Customers contributing to systems/process building
- Indicators of efficiency: Proportion of support staff; Sales per support person; Corporate culture poll
- Indicators of stability: Age of organisation; Support staff turnover rate; Rookie ratio

External structure
- Indicators of growth/renewal: Profitability per customer; Organic growth
- Indicators of efficiency: Satisfied customers index; Win/loss index; Sales per customer
- Indicators of stability: Proportion of big customers; Age structure; Devoted customers ratio; Frequency of repeat orders

Knowledge Assets Map
The “Knowledge Assets Map” provides managers with a broader framework of organizational knowledge from both an external and internal perspective (Marr and Schiuma, 2001; Schiuma and Marr, 2001). It is based on a broader interpretation of knowledge assets in companies and integrates the ideas put forward by the authors discussed above. The Knowledge Assets Map provides a framework that allows organisations to identify the critical knowledge areas of their company. It is based on an interpretation of the company’s knowledge assets as the sum of two organizational resources: stakeholder resources and structural resources. Figure 5 shows the classification of knowledge assets.

Figure 5. Knowledge Assets Map

Stakeholder resources are divided into stakeholder relationships and human resources. The first category considers relationships with external parties while the second considers the internal actors of the organisation. Structural resources are split into
physical and virtual infrastructure, which refer to their tangible and intangible nature respectively. Finally, virtual infrastructure is further sub-divided into culture, practices and routines, and intellectual property. The six categories of knowledge assets classified by the Knowledge Assets Map are defined in further detail below. Stakeholder relationships include all forms of relationships a company has with its stakeholders. These relationships could be licensing agreements, partnering agreements, financial relations, contracts, and distribution arrangements. They also include customer relationships, such as customer loyalty and brand image, as a fundamental link between the company and one of its key stakeholders. Human resources contain knowledge assets provided by employees in the form of skills, competence, commitment, motivation and loyalty, as well as advice and tips. Some of the key components are know-how, technical expertise, problem-solving capability, creativity, education, attitude, and entrepreneurial spirit. Physical infrastructure comprises all infrastructure assets, such as structural layout and information and communication technology, including databases, servers, and physical networks such as intranets. In order to be a knowledge asset, physical infrastructure components have to be based on specific knowledge and are generally unique (often in their combination) to one organisation. Culture embraces categories such as corporate culture, organizational values, networking behaviour of employees and management philosophies. Culture is of fundamental importance for organizational effectiveness and efficiency since it provides people with a shared framework to interpret events; a framework that encourages individuals to operate both as autonomous entities and as a team in order to achieve the company’s objectives. Practices and routines include internal practices, virtual networks and routines, such as tacit rules and informal procedures.
Some key components are process manuals providing codified procedures and rules, databases, tacit rules of behaviour as well as management style. Practices and routines determine how processes are being handled and how work flows through the organisation. Intellectual property is the sum of knowledge assets such as patents, copyrights, trademarks, brands, registered design, trade secrets and processes whose ownership is
granted to the company by law. They represent the tools and enablers that allow a company to gain a protected competitive advantage. It is possible to provide a wide range of indicators for each category of a company’s knowledge assets. However, the management team has the task of identifying the most meaningful indicators that help to assess their company’s knowledge assets. Therefore, it is important to warn managers not merely to adopt the metrics proposed in the literature, since most of them are generic and do not necessarily address the types of knowledge that have a critical role in the specific organisation’s value-added processes (Grant, 1991b). Managers need to start from the recognition that knowledge assets are unique to each company and that they have to design metrics that really address and measure the key knowledge assets that underlie their competencies and strategy.

The dynamic nature of knowledge assets
The Knowledge Assets Map allows knowledge assets to be classified and helps managers to gain an understanding of their structure and hierarchy. It can be used as a tool to facilitate discussions about which are the important knowledge assets in an organisation. However, it can only provide a static view of the knowledge asset base and does not indicate how these assets lead to value creation. Knowledge assets interact with each other and are transformed along a value creation pathway in order to contribute to the overall business performance and value creation. This interaction takes place not only between actors but in particular between actors and infrastructure. In an attempt to clarify the relationship between actors and infrastructure in each organisation (the thinking and non-thinking parts of the organisation, as Roos et al. (1997) describe them), we created the Knowledge Assets Dashboard.
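The map's classification, together with the dashboard's actor/infrastructure split, can be captured as a small tree. The structure below follows the text; tagging each leaf with an axis is simply a reading of the two dashboard dimensions:

```python
# The Knowledge Assets Map as a tree; leaves carry the dashboard axis they sit on.
knowledge_assets = {
    "Stakeholder resources": {
        "Stakeholder relationships": "actor",
        "Human resources": "actor",
    },
    "Structural resources": {
        "Physical infrastructure": "infrastructure",
        "Virtual infrastructure": {
            "Culture": "infrastructure",
            "Practices and routines": "infrastructure",
            "Intellectual property": "infrastructure",
        },
    },
}

def leaves(tree):
    """Depth-first walk returning (category, axis) pairs for the leaf categories."""
    out = []
    for name, value in tree.items():
        if isinstance(value, dict):
            out.extend(leaves(value))
        else:
            out.append((name, value))
    return out

print(len(leaves(knowledge_assets)))  # 6 -- the six categories of the map
```

A management team would attach its chosen indicators at the leaves, while the actor/infrastructure tags make explicit where the flows discussed next run between.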
The Knowledge Assets Dashboard is a different visualisation of the Knowledge Assets Map intended to emphasise the difference between acting and infrastructure knowledge assets. The four perspectives taken from the Knowledge Assets Map are placed along two axes. The vertical axis includes the knowledge assets concerning the actors of an organisation – stakeholder relationships and human resources. The horizontal axis includes knowledge assets linked to the organizational infrastructure – physical and virtual infrastructure (Figure 6). This highlights the dynamic nature of knowledge assets and the flows between the actors and the infrastructure assets (Nonaka, 1994). Knowledge assets interact with each other to create capabilities and competencies, and it is often this interaction which delivers a competitive advantage, because it makes these assets difficult for competitors to imitate (Barney, 1991b; Teece et al., 1997; Roos and Roos, 1997). As with other management approaches, it is important to link them into corporate strategy (Kaplan and Norton, 1996). To identify the knowledge assets, the first task is to define the core capabilities and competencies that will allow an organisation to execute its strategy. Following the identification of core capabilities, a set of critical knowledge assets can be defined that is needed to maintain these capabilities in order to execute the strategy. The set of core capabilities can be derived using a classical top-down approach or a bottom-up approach based on the knowledge-based or resource-based view of the firm (Figure 7). Using the classical top-down approach, the strategy and core capabilities are derived from understanding the external market conditions by taking into account

Figure 6. Knowledge Assets Dashboard

Figure 7. Top-down vs bottom-up

factors such as the bargaining power of suppliers and customers, entry barriers, and potential substitute products and technologies (Porter, 1979). The role of capabilities is to enable companies to perform the necessary processes to execute their strategy, which in turn fulfils the needs of a clearly defined market. Using this approach organisations can understand whether they have the capabilities they need to deliver their strategy and see whether they have any development needs. The bottom-up approach, on the other hand, takes a knowledge-based view of the firm to guide strategy formulation (Grant, 1997; Sveiby, 2001). In this approach organisations match their internal capabilities more closely with the opportunities in the market (Andrews, 1971). The bottom-up concept of corporate strategy definition is more internally focused and allows organisations to first identify what they do

well and then look for ways to exploit it in the market (Collins and Montgomery, 1995). Using either of the two strategy definition approaches outlined above, the final strategy rests on the capabilities required to execute it. Having identified the capabilities, organisations can then use the Knowledge Asset Dashboard to find the knowledge assets that underlie them. The Knowledge Asset Dashboard draws attention to the fact that there is usually a combination of actor and infrastructure knowledge assets, which is a first step towards recognising their dynamic nature. A next step towards measuring knowledge assets is to visualise their interactions and flows. This can be done in a mapping process that indicates causal relationships and shows how exactly knowledge assets interact and create value. Such a map can take the form of strategy maps (Kaplan and Norton, 2000), success maps (Neely et al., 2002), or the Navigator model developed by Roos et al. (Chatzkel, 2002); for a simplified example, refer to Figure 8. Only when organisations have identified the core capabilities and the underlying knowledge assets, and have mapped the value-creating pathways, should they start designing measures to track performance. The identification of knowledge assets and the mapping technique provide organisations with a hypothesis about how they think the business works. Performance measures can then be used to test these causal relationships or assumed correlations. Measures can therefore be designed with a clear purpose in mind rather than picked at random. This naturally reduces the number of performance measures to a minimum and avoids organisations measuring everything without a clear understanding of why they are measuring (Neely, 2002).
Business performance measurement conducted in the structured way outlined above gives organisations real insights that allow managers to make better-informed decisions, reach their strategic objectives and improve performance. The insights can be used to shape the management of knowledge assets and drive strategic knowledge management practices (Marr and Schiuma, 2001), or can be reported and disclosed to interested stakeholders.
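A success map encodes a testable business hypothesis, for example "more employee training reduces defects". As a minimal illustration of testing one assumed causal link with historical measure data — the measure names and all figures below are hypothetical, not taken from the paper — a correlation check might look like this:

```python
# Hypothetical sketch: testing one assumed link from a success map
# ("training hours -> defect rate") with Pearson correlation.
# All data below is invented for illustration.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Quarterly measures for the two nodes of the hypothesised link.
training_hours = [10, 14, 18, 22, 26, 30]        # driver measure
defect_rate = [5.1, 4.8, 4.2, 3.9, 3.1, 2.6]     # outcome measure

r = pearson(training_hours, defect_rate)
# A strong negative correlation is consistent with (though does not
# prove) the success-map hypothesis that training reduces defects.
print(f"r = {r:.2f}")
```

A correlation of this kind can only corroborate or weaken the mapped hypothesis; establishing causality would need a more careful design, which is exactly why the paper insists on mapping before measuring.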


Figure 8. Success map


Conclusion
A long-term competitive advantage can only be gained from the management of the knowledge assets underlying organisational capabilities. Building on the existing literature, this paper introduces the Knowledge Asset Map and the Knowledge Asset Dashboard to provide organisations with a comprehensive tool that can help them identify their key knowledge assets. The Knowledge Asset Map is grounded in the existing literature on performance measurement of intellectual capital and knowledge assets. The Knowledge Asset Dashboard is a different visualisation of the Knowledge Asset Map; the aim of the two-axis layout is to clarify the interaction between actor and infrastructure knowledge assets, illuminating the fact that all knowledge assets are dynamic and constantly interacting and evolving (Nonaka, 1994). We further suggest that organisations first map the dynamic transformations, or causal interactions, of knowledge assets (Kaplan and Norton, 2000; Neely et al., 2002; Chatzkel, 2002) in order to visualise the value-generation pathways and formulate a business hypothesis that can then be tested in a more scientific way.

We would like to encourage more empirical research into business performance measurement in knowledge-based organisations. More case research would help to ground the suggested frameworks empirically. Furthermore, we call for more empirical work on the mapping approaches, and for investigations into the circumstances in which more rigid approaches, such as strategy maps, or more open approaches, such as success maps or navigator-type models, are most appropriate.

References
Albino, V., Garavelli, A.C. and Schiuma, G. (2001), "A metric for measuring knowledge codification in organisation learning", Technovation, Vol. 21 No. 7, pp. 413-22.
Andrews, K.R. (1971), The Concept of Corporate Strategy, Dow Jones-Irwin, Homewood, IL.
Barney, J.B. (1991a), "Firm resources and sustained competitive advantage", Journal of Management, Vol. 17 No. 1, pp. 99-120.
Barney, J.B. (1991b), "Types of competition and the theory of strategy: toward an integrative framework", Academy of Management Review, Vol. 11 No. 4, pp. 791-800.
Birkinshaw, J. (2002), "Managing internal R&D networks in global firms: what sort of knowledge is involved?", Long Range Planning.
Boisot, M.H. (1998), Knowledge Assets: Securing Competitive Advantage in the Information Economy, Oxford University Press, Oxford.
Bontis, N., Dragonetti, N.C., Jacobsen, K. and Roos, G. (1999), "The knowledge toolbox: a review of the tools available to measure and manage intangible resources", European Management Journal, Vol. 17 No. 4, pp. 391-402.
Boudreau, J.W. and Ramstad, P.M. (1997), "Measuring intellectual capital: learning from financial history", Human Resource Management, Vol. 36 No. 3, pp. 343-56.
Brooking, A. (1996), Intellectual Capital: Core Assets for the Third Millennium Enterprise, Thomson Business Press, London.
Brown, M.G. (1996), Keeping Score: Using the Right Metrics to Drive World-Class Performance, Quality Resources, New York, NY.
Bukh, P.N., Larsen, H.T. and Mouritsen, J. (2001), "Constructing intellectual capital statements", Scandinavian Journal of Management, Vol. 17, pp. 87-108.
Chase, R.L. (1997), "Knowledge management benchmarks", Journal of Knowledge Management, Vol. 1 No. 1, pp. 83-92.
Chatzkel, J. (2002), "A conversation with Göran Roos", Journal of Intellectual Capital, Vol. 3 No. 2.
Collins, D.J. and Montgomery, C.A. (1995), "Competing on resources: strategy in the 1990s", Harvard Business Review, pp. 118-28.
Davenport, T.H. and Prusak, L. (1998), Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston, MA.
Drucker, P.F. (1988), "The coming of the new organization", Harvard Business Review.
Edvinsson, L. and Malone, M.S. (1997), Intellectual Capital: The Proven Way to Establish Your Company's Real Value By Measuring Its Hidden Values, Piatkus, London.
Edvinsson, L. and Sullivan, P. (1996), "Developing a model of managing intellectual capital", European Management Journal, Vol. 4 No. 4, pp. 356-64.
Epstein, M.J. and Manzoni, J.F. (1997), "The balanced scorecard and tableau de bord: translating strategy into action", Management Accounting, Vol. 79 No. 2, pp. 28-36.
Grant, R.M. (1991a), Contemporary Strategy Analysis, Blackwell, Oxford.
Grant, R.M. (1991b), "The resource-based theory of competitive advantage: implications for strategy formulation", California Management Review, Vol. 33 No. 3, pp. 14-35.
Grant, R.M. (1997), "The knowledge-based view of the firm: implications for management practice", Long Range Planning, Vol. 30 No. 3, pp. 450-4.
Hall, R. (1992), "The strategic analysis of intangible resources", Strategic Management Journal, Vol. 13 No. 2.
Harvey, M.G. and Lusch, R.F. (1992), "Balancing the intellectual capital books: intangible liabilities", European Management Journal, Vol. 17 No. 1, pp. 85-92.
Johnson, T.H. and Kaplan, R.S. (1987), Relevance Lost: The Rise and Fall of Management Accounting, Harvard Business School Press, Boston, MA.
Kaplan, R.S. (1983), "Measuring manufacturing performance – a new challenge for managerial accounting research", The Accounting Review, Vol. 58 No. 4, pp. 696-705.
Kaplan, R.S. and Norton, D.P. (1992), "The balanced scorecard – measures that drive performance", Harvard Business Review, Vol. 70 No. 1, pp. 71-9.
Kaplan, R.S. and Norton, D.P. (1996), The Balanced Scorecard – Translating Strategy into Action, Harvard Business School Press, Boston, MA.
Kaplan, R.S. and Norton, D.P. (2000), "Having trouble with your strategy? Then map it", Harvard Business Review, pp. 167-76.
Keegan, D.P., Eiler, R.G. and Jones, C.R. (1989), "Are your performance measures obsolete?", Management Accounting, pp. 45-50.
Leonard-Barton, D. (1992), "Core capabilities and core rigidities: a paradox in managing new product development", Strategic Management Journal, Vol. 13.
Lev, B. (2001), Intangibles: Management, Measurement, and Reporting, The Brookings Institution, Washington, DC.
Liebowitz, J. and Wright, K. (1999), "Does measuring knowledge make 'cents'?", Expert Systems with Applications, Vol. 17 No. 5, pp. 99-103.
Lynch, R.L. and Cross, K.F. (1991), Measure Up! The Essential Guide to Measuring Business Performance, Mandarin, London.
Marr, B. and Neely, A. (2001), "Organizational performance measurement in the emerging digital age", International Journal of Business Performance Management, Vol. 3 Nos 2-4, pp. 191-215.


Marr, B. and Schiuma, G. (2001), "Measuring and managing intellectual capital and knowledge assets in new economy organisations", in Bourne, M. (Ed.), Handbook of Performance Measurement, Gee, London.
Meyer, M.W. and Gupta, V. (1994), "The performance paradox", Research in Organizational Behavior, Vol. 16, pp. 309-69.
Mouritsen, J., Bukh, P.N., Larsen, H.T. and Johnson, T.H. (2002), "Developing and managing knowledge through intellectual capital statements", Journal of Intellectual Capital, Vol. 3 No. 1, pp. 10-29.
Nahapiet, J. and Ghoshal, S. (1998), "Social capital, intellectual capital, and the organizational advantage", Academy of Management Review, Vol. 23 No. 2, pp. 242-66.
Neely, A. (2002), Business Performance Measurement: Theory and Practice, Cambridge University Press, Cambridge.
Neely, A. and Adams, C. (2001), "The performance prism perspective", Journal of Cost Management, Vol. 15 No. 1, pp. 7-15.
Neely, A., Adams, C. and Kennerley, M. (2002), The Performance Prism: The Scorecard for Measuring and Managing Business Success, Financial Times Prentice Hall, London.
Nonaka, I. (1991), "The knowledge-creating company", Harvard Business Review, Vol. 69 No. 6, pp. 96-104.
Nonaka, I. (1994), "A dynamic theory of organizational knowledge creation", Organization Science, Vol. 5 No. 1, pp. 14-37.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge-creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, Oxford.
Penrose, E.T. (1959), The Theory of the Growth of the Firm, Wiley, New York, NY.
Polanyi, M. (1958), Personal Knowledge: Towards a Post-Critical Philosophy, University of Chicago Press, Chicago, IL.
Porter, M.E. (1979), "How competitive forces shape strategy", Harvard Business Review, Vol. 57 No. 2, p. 137.
Porter, M.E. (2001), "Strategy and the Internet", Harvard Business Review, Vol. 79 No. 3, p. 63.
Prahalad, C.K. and Hamel, G. (1990), "The core competence of the corporation", Harvard Business Review, Vol. 68 No. 3, pp. 79-91.
Prusak, L. (1997), Knowledge in Organizations, Butterworth-Heinemann, Washington.
Quinn, J.B. (1992), Intelligent Enterprise: A Knowledge and Service Based Paradigm for Industry, Free Press, New York, NY.
Roos, G. and Roos, J. (1997), "Measuring your company's intellectual performance", Long Range Planning, Vol. 30 No. 3, p. 325.
Roos, J., Roos, G., Dragonetti, N.C. and Edvinsson, L. (1997), Intellectual Capital: Navigating the New Business Landscape, Macmillan, London.
Savage, C.M. (1990), Fifth Generation Management: Co-creating Through Virtual Enterprising, Dynamic Teaming, and Knowledge Networking, Butterworth-Heinemann, Newton, MA.
Schiuma, G. and Marr, B. (2001), "Managing knowledge in e-businesses: the knowledge audit cycle", in Profit with People, Deloitte & Touche, Russel Publishing, London, pp. 82-5.
Senge, P.M. (1990), The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday/Currency, New York, NY.
Spender, J.C. (1996), "Dynamic theory of the firm", Strategic Management Journal, Vol. 17, pp. 45-62.
Stewart, T.A. (1994), "Measuring company IQ", Fortune, p. 129.
Stewart, T.A. (1997), Intellectual Capital: The New Wealth of Organizations, Doubleday/Currency, New York, NY.
Sveiby, K.E. (1997), The New Organizational Wealth: Managing and Measuring Knowledge-based Assets, Berrett-Koehler, San Francisco, CA.
Sveiby, K.E. (2001), "A knowledge-based theory of the firm to guide in strategy formulation", Journal of Intellectual Capital, Vol. 2 No. 4.
Teece, D.J. (2000), Managing Intellectual Capital: Organizational, Strategic, and Policy Dimensions, Oxford University Press, Oxford.
Teece, D.J., Pisano, G. and Shuen, A. (1997), "Dynamic capabilities and strategic management", Strategic Management Journal, Vol. 18 No. 7, pp. 509-33.
Winter, S.G. (1987), "Knowledge and competence as a strategic asset", in Teece, D.J. (Ed.), The Competitive Challenge, Ballinger, Cambridge, MA.

Further reading
Huber, G.P. (1991), "Organizational learning: the contributing processes and the literatures", Organization Science, Vol. 2 No. 1, pp. 88-114.




Proposed analysis of performance measurement for a production system


H’ng Gaik Chin Shell Malaysia Trading, Butterworth, Penang, Malaysia

Muhamad Zameri Mat Saman Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia Keywords Quantitative methods, Production methods, Performance measurement (quality), Decision making, Manufacturing systems, Strategic planning Abstract Hitherto, very few performance measures have been constructed for selecting the right production system. Before a new, advanced manufacturing system (e.g. Just In Time and Flexible Manufacturing System) is implemented in a company, it is of paramount importance that extensive analysis should be done to ensure that the ladder is lying against the right wall. Currently, the selection of a production system is mostly centred on perceptions or mere judgments from experience. Hence, a quantitative form of selection would be more convincing, as it will show and compare the degree of advantage between each manufacturing system in numbers and percentages. This paper discusses the authors’ attempt to formulate a performance measure that could quantitatively analyse and select the best production system for a company. A questionnaire survey has been carried out in a multinational company in Johore, Malaysia and the results are used to formulate the performance measure. Reliability tests on the instrument and correlation tests on the six identified manufacturing outputs were performed. Tests of significance were also done on the outputs used.

Business Process Management Journal
Vol. 10 No. 5, 2004, pp. 570-583
© Emerald Group Publishing Limited, 1463-7154
DOI 10.1108/14637150410559234

Introduction
The performance measurement of a company can be defined as the sum of the strategies of its component functions, such as manufacturing, finance, marketing, service, research and development, etc. (Miltenburg, 1995). In a successful firm, these strategies interlock to provide the maximum competitive advantage of which the firm is capable. Customers' expectations and the level of manufacturing capability among competitors are rising rapidly in today's ever more competitive marketplace. The company that can deliver a high-quality, low-cost product to the marketplace at the right time will win the most market share. Hence, implementing the right production system strategy is of crucial importance in achieving competitive advantage. Considering the importance of performance measures in selecting the right production strategy, the question is: is there any methodology that can quantitatively measure the selection of a production system while comprehending all the manufacturing outputs necessary to boost a company's performance? This paper therefore focuses on the authors' attempt to develop a methodology that narrows down the scope of selection in a quantitative way before actual implementation, or extensive simulation, is carried out. Findings and validated results are presented to further strengthen the proposed methodology. For example, if a

company is undecided on whether to change its current conventional, operator-paced production system to a more advanced manufacturing system such as a flexible manufacturing system (FMS) or lean production, then this selection methodology is advisable beforehand. Although Monte Carlo simulation is most commonly used at present, it takes considerable time to construct and run the simulation models many times, not to mention the extensive analysis and validation needed before a final decision is made. Hence, this simple methodology, adapted from operations research decision-making analysis, serves as an initial selection step before the short-listed production systems are taken forward for further simulation analysis. Besides, it compares the advantages of each production system in numbers or percentages before a decision is reached. This paper begins with a description of the methodology employed to elicit the information needed to construct the coefficients of the empirical formula, whereby the performance measure of the production system is a function of the six manufacturing outputs, namely cost, quality, performance, delivery, flexibility and innovativeness. This is followed by a summary of the components of the six manufacturing outputs relevant to the researched company. The steps in deriving the formula, with its assumptions, correction factors and results, are then elaborated. The reliability and validity tests on the questionnaire, and the test of the significance of the difference in means between the degree of importance and the degree of satisfaction for the six manufacturing outputs, are also briefly explained. Conclusions and some proposed future research directions culminate the paper.

Literature review: the six manufacturing outputs
This section reviews the importance of manufacturing outputs and how the outputs have become more specific and detailed over time.
Six manufacturing outputs based on Miltenburg's (1995) work are identified and clearly defined in this section, as these outputs will be used as the main parameters to measure the performance of a production system. In addition, some past research on frameworks and reliability measurements of the manufacturing outputs is also reviewed. The related measurements are then chosen from the list and tabulated after some discussions with the respondents. In the early 1960s and 1970s, most people felt that manufacturing provided only two outputs to its customers – cost and quality; a notable exception was Wickham Skinner (1969). Owing to the development of advanced manufacturing systems such as just in time (JIT) production and FMS, the list of outputs grew with the inclusion of delivery and flexibility (Fine and Hax, 1985; Stalk, 1988; Hall and Nakane, 1990). Many organizations focus most of their attention on the cost, quality, delivery and flexibility outputs, while some subsume performance and innovativeness within the quality and flexibility outputs. It was only during the 1990s that innovativeness and performance emerged as important outputs in their own right (Miltenburg, 1995). Manufacturing thus provides six outputs, namely cost, quality, performance, delivery, flexibility and innovativeness (Miltenburg, 1995). In today's highly competitive, technologically advanced manufacturing world, it is imperative to look at deeper and wider aspects when selecting a good manufacturing strategy via the manufacturing outputs. And to prioritize the manufacturing outputs, it is vital that the details and measurements of the respective outputs be clearly defined and distinguished.

Analysis of performance measurement


Cost
It is undeniable that each product manufactured has a cost. A low cost allows a low price and provides a better opportunity for profit than does a high cost. Cost is of utmost importance to manufacturers of commodities such as structural steel shapes, electronic devices, paper, etc., which are manufactured to standard specifications and sold under standard delivery terms.

Quality and performance
Many non-manufacturing people are of the general opinion that quality and performance are the same output. In manufacturing strategy, quality is associated with conformance to specifications and critical customer expectations. Performance is associated with features of a product that affect its ability to do what other products cannot. For example, when a Formula One car is said to have outstanding quality, we do not know what that quality means. If the car is designed and manufactured to exacting specifications, this satisfies the definition of quality. However, if it means that the car has unique design features, such as aerodynamic front and rear wings, a durable engine and chassis, and a tiptronic transmission, then we have the performance output. Tools and technologies that provide high levels of quality (such as statistical quality control and standardization) are often different from those that provide high levels of performance (such as concurrent engineering and highly skilled workers). Hence, differentiating between quality and performance allows us to design one production system for a McDonald's restaurant, where quality is important, and a different production system for a fine cuisine restaurant, where product performance is important (Miltenburg, 1995).

Delivery
The delivery output comprises delivery time and delivery time reliability. Delivery time is the amount of time a manufacturer requires to supply a product to a customer. Often, especially in busy times, manufacturing cannot fill orders on time and products are delivered later than promised. When this happens, delivery time reliability drops (Miltenburg, 1995). Nowadays, customers expect on-time delivery in small lots. Hence, lean production systems, especially just in time manufacturing, have gained more and more popularity in recent years as a result of this output.
Flexibility and innovativeness
In manufacturing systems, flexibility means the extent to which the volumes of existing products can be increased or decreased to respond quickly to the needs of customers. Innovativeness simply means the ability to introduce new products or make design changes to existing models. Skinner (1969) developed the framework that is the basis of modern operations strategy. This was based on the premise that there are many ways to compete apart from cost, and that each manufacturing unit should focus on doing well those few things that are critical to the achievement of the corporate mission. Underlying his ideas is the concept of strategic trade-offs: the achievement of high levels of performance on

one factor can only be obtained at the expense of performance on one or more other factors. Many reliability measurements of the manufacturing outputs have been developed. For instance, the Cronbach alphas and correlation measurements of Roth and Miller (1990) suggest that 11 variables measuring manufacturing success can be grouped into five aggregate variables called dimensions, and their analysis showed this to be an effective grouping. A simpler list of measurements of the manufacturing outputs, compiled by Leong et al. (1990), is another way of measuring the six manufacturing outputs. The measurements related to the researched company were chosen from the list compiled by Leong et al. (1990) and tabulated (Table I) after some discussions with the respondents. The performance measures of the production systems, namely job shop, batch, operator-paced, equipment-paced, continuous flow, JIT and FMS, using the six manufacturing outputs, are developed and illustrated in a PV-LF diagram, as shown in Figure 1.


Table I. The measurement of the six manufacturing outputs related to the researched company

Manufacturing output  Measures
Cost                  Unit product cost, unit labour cost, unit material cost, total manufacturing overhead cost, inventory turnover (raw material, WIP, finished goods), machine capacity, direct and indirect labour productivity
Quality               Internal failure (scrap, rework) and external failure (field or systems failure) cost, warranty cost, quality of incoming materials
Performance           No. of standard and advanced features, product resale price, no. of engineering changes, mean time between failures
Delivery              Delivery time, master production schedule performance, inventory accuracy, average lateness, per cent of on-time delivery
Flexibility           No. of products in the line, no. of available options, minimum order size, average production lot size, average volume fluctuation over a time period divided by the capacity limit, no. of parts produced by a machine, possibility of producing parts on different machines
Innovativeness        No. of new products introduced per year, lead time to new design, level of R&D investment, consistency of investment over time

Figure 1. PV-LF diagram of standard performance weight for each output

Study methodology
In the midst of the economic crisis, one company from among the numerous companies contacted in Johore, Malaysia, indicated its willingness to participate in the study. The company is a multinational producer of electrical and electronic goods, and the production system currently adopted is operator-paced. There are a few major departments, which are grouped based on the electrical goods they produce. Each major department is subdivided into several sections, namely the Engineering Section, the Production Planning and Control Section, and the Quality Control Section; the study is scoped to a single selected major department.

The questionnaire developed in this study consisted of five sections: the respondent's background; the degree of importance of each manufacturing output; the degree of satisfaction with each manufacturing output; comments and suggestions; and, finally, explanations of the output components related to the company. Twenty-six employees comprising production supervisors, technicians, engineers and managers were surveyed. As the respondents come from different backgrounds, the first section was intended to determine the correction factor, or weight, of each respondent's opinion. It is generally believed that the longer a professional employee works in the company, the better he or she will understand the production system; in addition, the degree of satisfaction reported by those with extensive experience of the company will be more reliable. It might also be appropriate that the opinion of a higher-status employee carries more weight than that of a lower one. Hence, the opinion of each employee is discriminated by multiplying it with a correction factor, or weight, as shown below:

Weight = (No. of years + Position) / 2

All the weights assigned for position and number of years worked in the company, as shown in Table II, were determined after discussions with the managers and engineers there. They believed that position in the company and number of years of experience on the production line are equally important. As managers and engineers have greater exposure to customers and suppliers than supervisors or technicians, it was agreed that a weight of 0.5 be given to the opinions of those in the supervisor/technician category.
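The weighting scheme above is simple enough to sketch directly. The lookup values mirror Table II; note that the exact pairing of the intermediate position weights (e.g. engineer = 1.5) is our reading of the table, not stated explicitly in the text:

```python
# Sketch of the respondent correction factor:
#   Weight = (years-of-service weight + position weight) / 2
# The supervisor/technician weight of 0.5 is confirmed in the text;
# the engineer and manager values are our reading of Table II.

YEARS_WEIGHT = {"1 year or less": 0.5, "2-3 years": 1.0,
                "4-5 years": 1.5, "above 5 years": 2.0}
POSITION_WEIGHT = {"supervisor/technician": 0.5,
                   "engineer": 1.5, "manager": 2.0}

def respondent_weight(years_band, position):
    """Correction factor applied to one respondent's ratings."""
    return (YEARS_WEIGHT[years_band] + POSITION_WEIGHT[position]) / 2

# A manager with more than five years of service gets the full weight:
print(respondent_weight("above 5 years", "manager"))  # 2.0
```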

For the second section, the respondents were asked to rank the degree of importance of each manufacturing output on a five-point Thurstone scale of "1" "Least Important", "2" "Average", "3" "Important", "4" "Very Important" and "5" "Utmost Important". As, hitherto, no manufacturer or manufacturing system in the world has been able to provide all six manufacturing outputs at the highest level (Miltenburg, 1995), trade-offs are needed: a single output, or a few selected outputs, is provided at the highest level while the other outputs are maintained at a relatively high level. Owing to this, respondents were allowed to choose each scale point a maximum of twice in this section. For the third section, the respondents were asked to rate the degree of satisfaction with each manufacturing output on a five-point Likert scale from "1" "Not Satisfied" to "5" "Very Satisfied", with the middle point denoted "Average". However, in the calculation of the coefficients of the degree of satisfaction for each manufacturing output, the scale from this section of the questionnaire is reversed, from "1" "Very Satisfied" to "5" "Not Satisfied", with the middle "Average" unchanged. The last part of the questionnaire lists the components of each manufacturing output that are related to the researched company. The components were selected and discussed with some of the engineers and managers of the company to ensure their relevance. It is hoped that this section gave the respondents a clear view of each manufacturing output.
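The satisfaction-scale reversal described above is, on a five-point scale, simply `6 − rating`. A minimal sketch (the helper name is ours, not the authors'):

```python
# Sketch: reverse a five-point Likert rating before computing the DOS
# coefficients, so that dissatisfaction scores high. "Average" (3) is
# the fixed point of the reversal and stays unchanged.

def reverse_likert(rating, points=5):
    if not 1 <= rating <= points:
        raise ValueError("rating out of scale")
    return points + 1 - rating

print([reverse_likert(r) for r in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
```

Reversing the scale this way means a large DOS coefficient flags an output the respondents are dissatisfied with, which is how the actual-coefficient weighting later in the paper uses it.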


Survey results and model development
From the survey, the mean scale, percentage rating and coefficient of each manufacturing output for the degree of importance and the degree of satisfaction can be calculated. Tables III and IV show the calculated results for the degree of importance and the degree of satisfaction, respectively.

Table II. Correction factor for each manufacturing output

No. of years worked   Weight    Position                Weight
1 year or less        0.5       Supervisor/technician   0.5
2-3 years             1.0       –                       1.0
4-5 years             1.5       Engineer                1.5
Above 5 years         2.0       Manager                 2.0

Table III. The results of the degree of importance for each output

                        C         Q         P         D         F         I
Mean scale, m0 (1-5)  3.87387   4.71171   3.66667   4.15315   2.99099   2.93694
Per cent rating       77.4775   94.2342   73.3333   83.0631   59.8198   58.7387
Coefficient, ko       0.17346   0.21097   0.16418   0.18596   0.13392   0.1315

Table IV. The results of the degree of satisfaction for each output

                        C         Q         P         D         F         I
Mean scale, m0 (1-5)  2.58559   2.14414   2.24324   2.36937   2.77477   2.72072
Per cent rating       51.7117   42.8829   44.8649   47.3874   55.4955   54.4144
Coefficient, ko       0.17426   0.14451   0.15118   0.15968   0.18701   0.18336



As Table III shows, the company places the highest degree of importance on the quality output, followed by delivery, cost and performance. This comes as no surprise, as the company currently uses an operator-paced production strategy, where the ability to produce identical goods to tight specifications, reliability in delivering goods and low-cost production are of paramount importance. The company places low emphasis on the flexibility and innovativeness outputs, as their mean scale ratings lag well behind the other outputs. Table IV shows that the company is quite satisfied with all its manufacturing outputs, as the mean scales of all outputs lie between 2 and 3.

On the whole, the performance measure for the production system (PMPS) can be defined as a function of the sum of the six manufacturing outputs, with the assumption that no significant interactions exist between the outputs, or that any interactions that do exist are minimal and negligible:

PMPS = f{Cost, Quality, Performance, Delivery, Flexibility, Innovativeness}
     = kC·C + kQ·Q + kP·P + kD·D + kF·F + kI·I

where DOI is the coefficient for the degree of importance; DOS the coefficient for the degree of satisfaction; AC the actual coefficient; W the weight or correction factor; m0 the mean scale of a single output; ko the coefficient of a single output; ao the actual coefficient; PMPS the performance measurement of the production system; O a single output; C cost; Q quality; P performance; D delivery; F flexibility; and I innovativeness.

Mean scale, m0 = Σ(W × O) / ΣW

Percentage of rating = (m0 / 5) × 100 per cent

Coefficient, ko = m0 / Σm0

Hence, the coefficients of all outputs total to 1. It is apparent that the degree of importance (DOI) is a more dominant factor than the degree of satisfaction (DOS), because the respondents are the ones who know which output is important to them in selecting a suitable production system, and emphasising the most important manufacturing output(s) will certainly give good overall production results. DOS only indicates which outputs they are dissatisfied with and which need improvement; it does not show whether those dissatisfied outputs are important. Nevertheless, the emphasis on DOS should not be undermined either, because in manufacturing every output should be considered important and maintained at a relatively high level. For instance, if a very high degree of importance is placed on the cost output but the corresponding level of satisfaction is not high, the coefficient of cost for DOS will slightly increase the actual coefficient. However, if the corresponding level of satisfaction is high, the coefficient of cost for DOS will slightly decrease the actual coefficient to enable the actual coefficient

of the other not as important but lesser-satisfied outputs to narrow the gap with the cost output. Like the Operation Research Expected Mean Value Analysis, the sum of all coefficients of manufacturing outputs is equal to one. Hence, after some discussions with the engineers and managers of the researched company, it was resolved that both the coefficients of DOI and DOS will be integrated with the weight of 0.7 given to the coefficients of DOI and 0.3 given to coefficients of DOS.

577

ao ¼ 0:7kDOI þ 0:3kDOS

Actual Coefficient;

Analysis of performance measurement

Before the weighted average of DOI and DOS was determined, hypothesis tests for a difference in means between the DOI and DOS mean scales of the six manufacturing outputs were carried out, to ensure that at least one of the six factors shows a significant difference between the two mean scales. The following hypotheses were set up:

H0: μ1 − μ0 = 0, i.e. there is no significant difference between the DOI and DOS mean scales.

H1: μ1 − μ0 ≠ 0, i.e. there is at least one significant difference between the DOI and DOS mean scales.

From Table V it is clear that there is a significant difference in means between the DOI and DOS mean scales for all six manufacturing outputs. As more than one factor shows a significant difference in means, the weighted average was used for the calculation of the actual coefficient. The empirical formula for the performance measure of production systems for the researched company can then be derived using the actual coefficients shown in Table VI:

PMPS = 0.1737C + 0.1910Q + 0.1603P + 0.1781D + 0.1499F + 0.1471I

Factor           Mean scale DOI   Mean scale DOS   t_cal    t_table   Result
Cost             3.874            2.438            6.064    2.011     Significant
Quality          4.712            2.074            22.435   2.011     Significant
Performance      3.667            2.049            8.103    2.011     Significant
Delivery         4.153            2.080            10.059   2.011     Significant
Flexibility      2.991            2.512            2.098    2.011     Significant
Innovativeness   2.937            2.475            2.261    2.011     Significant

Note: t-value at the 0.05 level of significance with 50 degrees of freedom = 2.011
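The paired t statistics reported above follow the standard formula t = d̄ / (s_d / √n), where d is the per-respondent difference between DOI and DOS ratings. A minimal sketch, using invented illustrative differences rather than the study's actual raw survey data:

```python
import math

# Hypothetical per-respondent differences (DOI rating minus DOS rating for
# one output). These values are invented for illustration only.
d = [1, 2, 3, 2, 2]

n = len(d)
d_bar = sum(d) / n                                            # mean difference
s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))   # sample std dev
t_cal = d_bar / (s_d / math.sqrt(n))                          # paired t statistic

# Compare |t_cal| with the tabulated t_table value for n-1 degrees of freedom
# at the 0.05 level; if larger, the DOI-DOS difference is significant.
print(round(t_cal, 4))  # 6.3246
```

With 50 degrees of freedom, as in Table V, the critical value is 2.011, so any computed statistic above it is declared significant.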

Table V. Paired sample statistics for mean scale of DOI and DOS of each output

Table VI. The results of the actual coefficient (AC) for each output

       C         Q         P         D         F         I
DOI    0.17346   0.21097   0.16418   0.18596   0.13392   0.13150
DOS    0.17426   0.14451   0.15118   0.15968   0.18701   0.18336
AC     0.17370   0.19103   0.16028   0.17808   0.14985   0.14706
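The coefficient and actual-coefficient calculations can be cross-checked numerically. The sketch below takes the DOI mean scales from Table V and the published coefficients from Table VI (no invented data) and reproduces Table VI's figures to within rounding:

```python
# DOI mean scales from Table V, in the order C, Q, P, D, F, I.
doi_mean = [3.874, 4.712, 3.667, 4.153, 2.991, 2.937]

# k_o = m0 / sum(m0): each output's share of the total mean scale.
total = sum(doi_mean)
k_doi = [m / total for m in doi_mean]

# Published DOI and DOS coefficients from Table VI.
k_doi_pub = [0.17346, 0.21097, 0.16418, 0.18596, 0.13392, 0.13150]
k_dos_pub = [0.17426, 0.14451, 0.15118, 0.15968, 0.18701, 0.18336]

# Actual coefficient: a_o = 0.7 * k_DOI + 0.3 * k_DOS.
ac = [0.7 * i + 0.3 * s for i, s in zip(k_doi_pub, k_dos_pub)]
ac_pub = [0.17370, 0.19103, 0.16028, 0.17808, 0.14985, 0.14706]

for computed, published in zip(k_doi, k_doi_pub):
    assert abs(computed - published) < 1e-4   # matches Table VI to rounding
for computed, published in zip(ac, ac_pub):
    assert abs(computed - published) < 1e-4
assert abs(sum(ac) - 1.0) < 1e-3              # coefficients total 1
```

That the recomputed values land on the published ones confirms the weighting scheme a_o = 0.7·k_DOI + 0.3·k_DOS used above.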


where the input values of C, Q, P, D, F and I are obtained from the converted numerical values (Table VII) of the PV-LF diagram (Figure 1) of Professor John Miltenburg. The assumptions made for the validation of this formula are as follows:
(1) no interactions exist between the manufacturing outputs, or any interaction between them is insignificant and can be neglected;
(2) the information provided by each employee surveyed is correct and reliable; and
(3) the weights used in the formulation of the empirical formula are reliable.

Reliability and correlation test
Reliability tests on the questionnaire used in the survey were conducted following the approach adopted by Yusoff and Aspinwall (2000). Spearman's rho correlation analysis and the Cronbach alpha model, which measures internal consistency, were used in the reliability testing of the manufacturing outputs for DOI and DOS, respectively. The former method is preferred for DOI because the DOI scale involves ranking each output; the latter was used for DOS because DOS was measured on a Likert scale.

Spearman's rho correlation analysis
All the DOI data were divided into two groups; then, using the Statistical Package for the Social Sciences (SPSS) bivariate correlation procedure, the Spearman's rho correlation coefficient for each output was computed. Table VIII shows the values

Table VII. Performance weight for each manufacturing output (1-10)

                   C     Q     P     D     F     I
Batch              2     3     7.5   2.5   9     9
Operator-paced     3     7.5   9.5   4     6.5   3.5
FMS                8     8     7.5   4     7.5   3
JIT                9     9     7     7     7     7
Equipment-paced    9     9.5   3.5   7.5   1     1

Table VIII. Spearman's rho correlation coefficient results

Factor               Correlation coefficient   Sig. (2-tailed)
DOI cost             0.952                     0.000
DOI quality          0.838                     0.000
DOI performance      0.890                     0.000
DOI delivery         0.824                     0.000
DOI flexibility      0.894                     0.000
DOI innovativeness   0.938                     0.000

of the coefficient for each factor; all factors are significant at the 95 per cent confidence interval, which shows that the instrument used is reliable.

Internal consistency analysis
Using the SPSS reliability analysis procedure, an internal consistency analysis was performed on the DOS of each manufacturing output. Table IX shows the results of the analysis. From the alpha value of 0.9250, it can be concluded that the instrument has high DOS internal consistency and is therefore reliable.
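The Cronbach alpha reported from SPSS follows the formula α = k/(k−1) · (1 − Σσ_i² / σ_total²). A minimal sketch with invented respondent scores (the paper's 0.9250 came from the real survey data, which is not reproduced here):

```python
# Cronbach's alpha from first principles, on hypothetical Likert scores.
def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Rows = respondents, columns = the six DOS items (invented 1-5 ratings).
scores = [
    [4, 4, 5, 4, 4, 5],
    [2, 2, 2, 3, 2, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 5, 5],
]

k = len(scores[0])                                    # number of items
item_vars = [sample_var([row[i] for row in scores]) for i in range(k)]
total_var = sample_var([sum(row) for row in scores])  # variance of total score

# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For these strongly consistent hypothetical ratings, alpha comes out close to 1, mirroring the high internal consistency the paper reports.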


Correlation test
The correlation test ensures that the factors do not inter-correlate with each other, as stated in the assumptions. Spearman's rho (for DOI) and Pearson (for DOS) product moment correlation coefficient tests were carried out to find whether there are any strong correlations between the manufacturing outputs. At the 99 per cent confidence interval, there is only one significant pair of factors in DOI (delivery and flexibility), as shown in Table X, and one in DOS (flexibility and innovativeness), as shown in Table XI. However, as the correlation values of the DOI and DOS factors are generally not significant, and only one pair

Table IX. Internal consistency results

Factor               Scale mean        Scale variance    Corrected item-     Alpha if
                     if item deleted   if item deleted   total correlation   item deleted
DOS cost             11.3846           10.0062           0.6875              0.9237
DOS quality          11.6923           10.5415           0.7759              0.9162
DOS performance      11.8077           9.7615            0.8158              0.9078
DOS delivery         11.6538           9.7554            0.7702              0.9131
DOS flexibility      11.3077           8.4615            0.8752              0.8995
DOS innovativeness   11.3846           8.8862            0.8332              0.9050

Note: Reliability coefficient, alpha = 0.9250

Table X. Spearman's rho correlation results; entries are correlation coefficients with Sig. (2-tailed) in parentheses

                     DOIC             DOIQ             DOIP             DOID             DOIF             DOII
DOI cost             1.000 (-)        -0.148 (0.472)   0.021 (0.920)    -0.451 (0.021)   -0.473 (0.015)   0.147 (0.474)
DOI quality          -0.148 (0.472)   1.000 (-)        -0.188 (0.357)   -0.258 (0.203)   -0.161 (0.431)   0.146 (0.477)
DOI performance      0.021 (0.920)    -0.188 (0.357)   1.000 (-)        -0.201 (0.326)   -0.049 (0.813)   -0.105 (0.609)
DOI delivery         -0.451 (0.021)   -0.258 (0.203)   -0.201 (0.326)   1.000 (-)        0.593 (0.001)    0.159 (0.439)
DOI flexibility      -0.473 (0.015)   -0.161 (0.431)   -0.049 (0.813)   0.593 (0.001)    1.000 (-)        0.013 (0.949)
DOI innovativeness   0.147 (0.474)    0.146 (0.477)    -0.105 (0.609)   0.159 (0.439)    0.013 (0.949)    1.000 (-)

Note: The delivery/flexibility pair (0.593, Sig. 0.001) is correlated at the 99 per cent confidence interval
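The Spearman's rho statistic used in the reliability and correlation tests above can be computed with the textbook rank formula. A minimal sketch, with hypothetical rankings (the paper's values came from SPSS on the actual survey data):

```python
# Spearman's rho between two rankings: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)),
# valid for rankings without ties.
def spearman_rho(rank_a, rank_b):
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented DOI rankings of the six outputs by two halves of the sample.
group1 = [1, 2, 3, 4, 5, 6]
group2 = [1, 3, 2, 4, 5, 6]   # two outputs swapped in the second group

rho = spearman_rho(group1, group2)   # close to 1 => consistent rankings
```

A rho near 1, as in Table VIII, indicates the two halves rank the outputs almost identically, i.e. the instrument is reliable.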

Table XI. Pearson correlation results; entries are correlation coefficients with Sig. (2-tailed) in parentheses

                     DOSC             DOSQ             DOSP             DOSD             DOSF             DOSI
DOS cost             1.000 (-)        0.068 (0.743)    0.089 (0.667)    0.057 (0.784)    0.167 (0.414)    0.032 (0.876)
DOS quality          0.068 (0.743)    1.000 (-)        0.199 (0.329)    0.159 (0.438)    -0.146 (0.477)   -0.132 (0.519)
DOS performance      0.089 (0.667)    0.199 (0.329)    1.000 (-)        0.386 (0.051)    0.062 (0.762)    0.149 (0.468)
DOS delivery         0.057 (0.784)    0.159 (0.438)    0.386 (0.051)    1.000 (-)        0.222 (0.276)    0.404 (0.040)
DOS flexibility      0.167 (0.414)    -0.146 (0.477)   0.062 (0.762)    0.222 (0.276)    1.000 (-)        0.660 (0.000)
DOS innovativeness   0.032 (0.876)    -0.132 (0.519)   0.149 (0.468)    0.404 (0.040)    0.660 (0.000)    1.000 (-)

Note: The flexibility/innovativeness pair (0.660, Sig. 0.000) is correlated at the 99 per cent confidence interval

from each of DOI and DOS is significant, it can be assumed that the interactions between the outputs are minimal and can be neglected.

Results, findings and discussions
Table XII shows the performance measure of each production system on a scale of 1 to 10. The job shop and continuous flow production systems are omitted from this selection because their features are unsuitable given the researched company's current production system, operator-paced production. Referring again to Table XII, or the comparison graph in Figure 2, JIT production yields the highest performance measure percentage rating (77.25 per cent), followed by FMS (63.91 per cent) and the status quo, the operator-paced production system (56.67 per cent). Table XII shows that almost all the manufacturing outputs (except performance) of the JIT production system outperform the corresponding outputs of the status quo, the operator-paced production system. The performance output of the operator-paced system is rated about 4 per cent higher than JIT's. Nevertheless, since the performance output was ranked by the respondents only as the fourth most important output, after delivery, quality and cost, JIT is clearly the better choice. One interesting finding that should be considered is that the two most important manufacturing outputs (quality and delivery) were produced slightly better by the equipment-paced than by the JIT production system. However, the overall production performance rating of the equipment-paced production system was

Table XII. The value of expected performance measure of each production system

                   C        Q        P        D        F        I        PMPS     Per cent rating
Batch              0.3474   0.5730   1.2023   0.4453   1.3491   1.3239   5.2605   52.61
Operator-paced     0.5211   1.4325   1.5229   0.7124   0.9744   0.5149   5.6687   56.67
FMS                1.3896   1.5280   1.2023   0.7124   1.1243   0.4413   6.3909   63.91
JIT                1.5633   1.7190   1.1221   1.2467   1.0493   1.0297   7.7252   77.25
Equipment-paced    1.5633   1.8145   0.5611   1.3358   0.1499   0.1471   5.5446   55.45
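The Table XII ratings can be recomputed from the empirical formula and the Table VII performance weights; the sketch below does so and recovers the same ranking, with totals matching the published PMPS values to within rounding of the per-output products:

```python
# PMPS = 0.1737C + 0.1910Q + 0.1603P + 0.1781D + 0.1499F + 0.1471I,
# with input values taken from Table VII.
coeff = [0.1737, 0.1910, 0.1603, 0.1781, 0.1499, 0.1471]  # C, Q, P, D, F, I

weights = {  # Table VII performance weights per production system
    "Batch":           [2, 3.0, 7.5, 2.5, 9.0, 9.0],
    "Operator-paced":  [3, 7.5, 9.5, 4.0, 6.5, 3.5],
    "FMS":             [8, 8.0, 7.5, 4.0, 7.5, 3.0],
    "JIT":             [9, 9.0, 7.0, 7.0, 7.0, 7.0],
    "Equipment-paced": [9, 9.5, 3.5, 7.5, 1.0, 1.0],
}

pmps = {s: sum(k * w for k, w in zip(coeff, ws)) for s, ws in weights.items()}
percent_rating = {s: v * 10 for s, v in pmps.items()}  # scale of 10 -> per cent

# Same ordering as Table XII: JIT, FMS, operator-paced, equipment-paced, batch.
ranking = sorted(pmps, key=pmps.get, reverse=True)
```

This makes the selection logic transparent: changing the company's emphasis (the coefficients) directly reshuffles the ranking, which is the point made in the discussion of the equipment-paced alternative.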


Figure 2. Comparison results of performance measures for each production system

pulled down by its low performance in flexibility and innovativeness. If management reconsiders its strategy and places very low emphasis on flexibility and innovativeness, the equipment-paced production system should be taken into consideration too. As a whole, the results compare the manufacturing outputs of each production system in numbers and percentages before a final selection is reached. They also show that the production system with the highest PMPS (overall rating) is not necessarily the best selection, as the choice also depends on the company's policies and the emphasis placed on each output. This answers the research question raised earlier: to develop a methodology that can quantitatively measure the selection of a production system while comprehending all the necessary manufacturing outputs to boost a company's performance.

Research constraints and future work
A few research constraints are highlighted in this paper for improvement and future research:
(1) The weights for DOI and DOS used in calculating the actual coefficient should be derived in a quantitative way. The weights of 0.7 and 0.3 assigned to DOI and DOS in this paper came merely from discussions with the company's engineers and managers and may be too subjective.
(2) The input values for the empirical formula are based on the numerical values of Professor Miltenburg's PV-LF matrix. It is assumed that the weight given to each manufacturing output for the corresponding manufacturing system is reliable.
(3) The evaluation of the manufacturing outputs was done internally, by the employees of the researched company. The respondents may


say that their delivery is very reliable when, in reality, it may not be. Hence, it would be more holistic if the evaluation also involved ideas from outsiders, particularly the suppliers and customers of the researched company.
(4) Each manufacturing output was evaluated as a whole, with the assistance of Table VII, which consists of the elements of each output. It would be more thorough if respondents were asked to evaluate each element listed in the table separately, rather than evaluating the output as a whole.

Conclusion
The authors' methodology for quantitatively analysing and selecting a production system has been presented in this paper. A reliability analysis of the survey instrument was conducted and concluded that it was a fairly reliable and valid measure. Correlation analysis was also conducted to ensure that there were minimal interactions between the manufacturing outputs; the results showed only one pair slightly above the 99 per cent confidence interval significance level. Hypothesis t-testing was performed to investigate any difference in means between the DOI and DOS for the six manufacturing outputs; the results showed a significant difference between them, and hence different weights were assigned to the coefficients of DOI and DOS for each output. The analysis of the survey revealed that the JIT production system topped the list with an expected performance rating of 77.25 per cent, followed by FMS (63.91 per cent) and the operator-paced production system (56.67 per cent), which is also the company's status quo production system. However, it cannot be concluded that the JIT strategy should be finalised: the detailed performance of every manufacturing output should be taken into consideration too. The equipment-paced production system, although ranked fourth overall, should also be considered, as it is expected to produce better quality and delivery outputs than JIT, and both of these outputs are ranked among the most important. Further discussions among management should take place before the selection of the production system is finalised. Future research can concentrate on refining and further validating the instrument developed in this study, by decomposing each output into its specific elements and by involving the suppliers and customers of the researched company. Further analysis and validation of the numerical inputs of the PV-LF matrix are also recommended.

References
Fine, C. and Hax, A. (1985), "Manufacturing strategy: a methodology and an illustration", Interfaces, Vol. 15 No. 6, pp. 28-46.
Hall, R.W. and Nakane, J. (1990), Flexibility: Manufacturing Battlefield of the 90s, a report on attaining manufacturing flexibility in Japan and the United States, Association for Manufacturing Excellence, Wheeling, IL.
Leong, G.K., Snyder, D.L. and Ward, P.T. (1990), "Research in the process and content of manufacturing strategy", Omega: The International Journal of Management Science, Vol. 18 No. 2, pp. 109-22.
Miltenburg, J. (1995), Manufacturing Strategy, Productivity Press, Portland, OR.
Roth, A.V. and Miller, J.G. (1990), "Manufacturing strategy, manufacturing strength, managerial success, and economic outcomes", in Ettlie, J.E., Burstein, M.C. and Feigenbaum, A. (Eds), Manufacturing Strategy, Kluwer Publishers, Dordrecht, pp. 97-108.
Skinner, W. (1969), "Manufacturing - missing link in corporate strategy", Harvard Business Review, pp. 136-45.
Stalk, G. (1988), "Time - the next source of competitive advantage", Harvard Business Review, pp. 41-51.
Yusoff, S.M. and Aspinwall, E.M. (2000), "Critical success factors in small and medium enterprises: survey results", Total Quality Management, Vol. 11 Nos 4/6, pp. 448-62.

Further reading
Dumond, E.J. (1994), "Making best use of performance measures and information", International Journal of Operations & Production Management, Vol. 14 No. 9, pp. 16-31.
Halley, F. and Babbie, E. (1995), Data Analysis Using SPSS for Windows, Pine Forge Press, CA.
Jonsson, P. and Lesshammar, M. (1999), "Evaluation and improvement of manufacturing performance measurement systems - the role of OEE", International Journal of Operations & Production Management, Vol. 19 No. 1, pp. 55-78.
Mapes, J., New, C. and Szwejczewski, M. (1997), "Performance trade-offs in manufacturing plants", International Journal of Operations & Production Management, Vol. 17 No. 10, pp. 1020-33.
Neely, A., Gregory, M. and Platts, K. (1995), "Performance measurement system design: a literature review and research agenda", International Journal of Operations & Production Management, Vol. 15 No. 4, pp. 80-116.
Price, J.L. (1997), "Handbook of organizational measurement", International Journal of Manpower, Vol. 18 Nos 4/6, pp. 305-558.
SPSS 10.0.5 for Windows (1999), SPSS Inc., USA.
Sweeney, M.T. and Szwejczewski, M. (1996), "Manufacturing strategy and performance", International Journal of Operations & Production Management, Vol. 16 No. 5, pp. 25-40.
Upton, D. (1998), "Just in time and performance measurement systems", International Journal of Operations & Production Management, Vol. 18 No. 11, pp. 1101-10.





Activity-based costing for logistics and marketing

Drew Stapleton, University of Wisconsin La Crosse, La Crosse, USA
Sanghamitra Pati, Indiana University-Purdue University Indianapolis (IUPUI), USA
Erik Beach, Trane Co., La Crosse, Wisconsin, USA
Poomipak Julmanichoti, University of Miami, Coral Gables, Florida, USA

Keywords: Activity-based costs, Logistics, Marketing, Information gathering

Abstract: Activity-based costing (ABC) is gradually being utilized more as a decision-making tool than as an accounting tool. This paper investigates how, after almost a decade of slow growth, ABC is gaining acceptance as a tool to determine the true costs of marketing and logistics activities. It discusses how ABC provides managers with considerable insight into how various products, territories, and customers play major roles in logistics and marketing activities and, consequently, drive total costs. The advantages of the ABC model in providing the right information to marketing managers about which products, customers, or territories are more important, and which could be eliminated without affecting the overall objectives of the firm, are presented. The paper concludes by identifying ABC's shortcomings and the promise it holds for the modern enterprise.

Business Process Management Journal, Vol. 10 No. 5, 2004, pp. 584-597
© Emerald Group Publishing Limited 1463-7154
DOI 10.1108/14637150410559243

Background In today’s competitive environment, consumers are demanding lower priced and superior quality products while, firms are concentrating on ways to best identify their cost drivers and improve profit. Nowadays, repurchase decisions based solely on brand loyalty are becoming a thing of the past as customers do not hesitate to switch their allegiance to firms that sell excellent quality products at competitive prices. Shopping for new products on the Internet has made E-Commerce the new way to buy, trade or sell goods. In the changing technological environment, firms have come to realize that traditional cost accounting systems do not provide accurate cost information, thus, making decisions about pricing, product, and technology precisely “wrong” based on cost systems relying on “accrual” bases. The practice of activity-based costing (ABC) is gaining popularity because of these changing forces. ABC focuses on the activities associated with the costs and assigns costs by using multiple cost drivers. Our objective in this paper is to show how ABC can be used as a tool for determining true costs of marketing and logistics activities and help firms make better decisions based on more accurate costing information. ABC can assign activity costs to the product, service, or customers that consume resources in order to measure profitability and provide cost-effective and timely information better than traditional accounting systems. ABC enables the managers to understand profitability better. Making decisions related to profitability without isolating the factors accounting for profits is like playing poker without looking at one’s cards.

In the following pages, we first discuss the concept of ABC and how it differs from the traditional accounting system. In the next section, we shed light on how ABC provides information on how various customers or products affect the consumption of marketing activities related to logistics and drive total costs. The following section discusses how ABC is gaining popularity in non-manufacturing functions such as marketing. We then illustrate how ABC can be used to make major decisions, such as making a product in-house or buying it from vendors (i.e. make-or-buy). Finally, we discuss the advantages and disadvantages of ABC.

Activity-based costing
ABC is a costing model that identifies the cost pools, or activity centers, in a firm and assigns costs to products and services (cost objects) based on the number of events or transactions involved in the process of providing a product or service (McKenzie, 1999). Understanding ABC can lead to greater knowledge of a firm's business processes and underlying expenses. ABC is a budgeting and analysis process that evaluates overhead and operating expenses by linking costs to customers, services, products, and orders. It allows managers to distinguish between profitable and non-profitable products or services. ABC helps fill the gaps of traditional costing by identifying all the work activities, and their costs, that go into manufacturing a product, delivering a service, or performing a process. When the individual costs are added up, a clear picture of the total cost of a process emerges. ABC can even distinguish the cost of serving different segments of customers (Shank, 1996).

Assigning service department costs to activities
Kaplan and Atkinson (1998) explain that the basic principle behind the ABC system is that resource expenses must be assigned to the activities performed. Expenses identified in the financial system are collected and distributed to these activities.
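This distribution of resource expenses to activities can be sketched in a few lines. The department expense and time shares below are invented for illustration; they are not figures from Kaplan and Atkinson:

```python
# Hypothetical materials handling department: its annual resource expense is
# spread over its activities in proportion to the staff time each consumes.
department_expense = 180_000    # assumed annual expense of the department

time_share = {                  # assumed fraction of reported staff time
    "receiving purchased products": 0.40,
    "receiving raw materials":      0.35,
    "disbursing parts/materials":   0.25,
}

# Resource cost assigned to each activity in proportion to time worked.
activity_cost = {a: department_expense * s for a, s in time_share.items()}
```

Note that the shares sum to one, so the full department expense is distributed and nothing remains in an undifferentiated overhead pool.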
This helps the firm understand how much it is spending on the activities that support the production of certain products or services. Tracing firm expenses can begin by interviewing department managers to determine what principal activities are performed in each department and what factors determine how long an activity takes. Once this information is obtained, one can begin to map resource costs to activities. This can be illustrated with the case of a materials handling department in which the people in the department performed three basic tasks (Kaplan and Atkinson, 1998):
(1) receiving purchased products;
(2) receiving raw materials; and
(3) disbursing parts/materials.
It was assumed that each of the employees in the department was equally skilled and equally paid, so the firm decided to use time percentages for assigning resource costs to the three activities. The time of the department manager was spread across all three activities in proportion to the time worked by each of the employees under supervision. Thus, costs were assigned in relation to the work performed, without treating each category of work differently.

Some departments in a firm have individuals who come to work every day, no matter at what level of capacity their department is operating. This type of commitment


is often classified as fixed, because the number of people who work does not vary with demand. This view is not always correct, though, because most departments have both fixed and variable costs. Inside a department, employees may be assigned to perform certain tasks based on the level of demand. This type of task can be traced to the job being performed and can therefore be carried through to the final cost of the end product. Some resources might also be fixed for a department, such as the space in the facility that the department uses and the machines it needs to operate. With today's technology, costing information should be readily available to assess the variable costs of a department, as well as the committed costs, in determining the feasibility of a project (Goebel and Marshall, 1998).

Activity cost drivers
Cost drivers that link the performance of activities to the demands made by individual products are used in this assignment. To allocate a cost center to products, we use simple drivers such as labor hours, machine hours, material processed, or units produced. Even though there are only a few types of cost drivers, they can produce a large variety of information (Ginger, 1994). The quantity of a cost driver is generally proportional to the physical volume of product produced, but some support departments or indirect resources are not used in these proportions (Kaplan and Atkinson, 1998). Consequently, volume-based allocation creates significant errors in the costs assigned to individual products. To avoid such errors, ABC systems distinguish four types of activities rather than linking everything to the number of units produced:
(1) unit-level activities represent the work performed for every unit of product or service produced;
(2) batch-level activities represent the resources required each time a batch is processed; traditional cost systems view the expenses of these resources as fixed;
(3) product-sustaining activities represent the work performed to enable the production of individual products to occur; and
(4) customer-sustaining activities represent the work that enables the firm to sell to individual customers.
The linkage between these activities and cost objects is made by activity cost drivers: for each activity, there is an activity cost driver that links it to the cost objects. To calculate an activity cost driver rate, the activity cost is divided by the driver quantity. To avoid miscalculating this cost, the composition of the activity is considered and all its relevant parts are calculated before putting them together (Lewis, 1991).

Designing the optimal system
Learning the language of cost accounting will ultimately give you a better chance of surviving in today's competitive environment (Rotch, 1990). Designers must ensure that ABC systems are not merely a more complex and expensive way to do cost allocations. If ABC is performed correctly, it can actually trace costs back to the activities that underlie the cost assignments, making the ABC system quite different from traditional cost allocation methods. Though ABC systems can be more accurate than traditional cost allocation, they may rely on estimates. This is because actual costs cannot always be traced back, and the costs of

finding true costs may outweigh the benefits of doing so. Activity-based costing works best with a minimum amount of detail and estimated cost figures (Gering, 1999a, b). If more accurate costs are needed, the ABC designer should adopt more precise measurement tools. Even though estimates are used frequently, the ABC system is still more precise than other, more arbitrary cost methods, because it shows a cause-and-effect relationship between a cost and the activity that produces it. Critically, the overall goal of an ABC system is not to create the most precise cost system, but to create a system that allocates costs more accurately than traditional cost allocation methods (Pohlen and Bernard, 1994). Traditional cost methods are inexpensive to use but generally provide management with inaccurate figures, which can lead to bad and expensive business decisions. On the other hand, the cost of an extremely elaborate and expensive ABC system may outweigh the benefits it produces. Ultimately, the best cost allocation system is the one that balances the cost of errors with the cost of measurement. Smith and Dikolli (1995) remind management to analyze the firm within the scope of two important questions. First, would the information provided by ABC be significantly different from what a traditional accounting system would provide? Second, if the information provided by ABC is considered "better", would it change the decisions made by management? If a firm cannot answer yes to both of these questions, the cost of an ABC system is probably not justified, and no new design will make a significant impact.

Implementing ABC
In implementing ABC, when determining the cost drivers for each activity, it is important that managers do not get bogged down with too many details that cannot be explained. However, a system that is too general may not be accurate enough.
Customers, whether internal or external, need to be prepared for the changes to come, so the firm should educate them before starting the implementation process. In implementing ABC, the firm should set up a balanced team that gets input from all parties involved: finance staff, information technology staff, human resources, and so forth. The firm will get faster buy-in from upper management if managers can quickly point to cost savings. Activity-based cost implementation must have the support of all levels in a firm, as ABC requires a new way of thinking across all the firm's functions. For optimal success, the firm should establish a reasonable time frame. In most industries, six to twelve months is aggressive but reasonable; too much time will lose the momentum of the people involved. ABC can live in perpetuity, but the initial implementation must be well planned (Gering, 1999a, b; Pohlen and Bernard, 1994).

ABC and logistics
Logistics is increasingly recognized as a critical step in meeting the demands of the customer. Top management of firms is now more closely involved in logistics decisions, and the importance of the consequences of these decisions is growing (Stock and Lambert, 2001). Traditional cost accounting has included logistics as part of sales, general, and administrative expenses (Quillian, 1991). These costs were then allocated arbitrarily as direct labor hours consumed, cost


per case shipped, or as a simple percentage of sales. Instead of focusing on logistics, most attention was focused on the manufacturing of products. Because of this low level of management attention, many firms do not have sufficient insight into the characteristics of the distribution of their products and, therefore, lack sufficient insight into distribution costs. These physical distribution costs can range from 7 to 30 percent of sales (Davis, 1991). With this percentage so high, the management of logistics costs has become increasingly important due to its significant impact on product profitability, product pricing, customer profitability, and, ultimately, corporate profitability (Smith and Dikolli, 1995). This understanding of logistics' importance has led firms to seek a competitive advantage derived from logistics and supply chain activities. Managers require more accurate and focused costing of logistics functions to ensure profitability and to reflect their customers' demands for lower prices. This makes it necessary for firms to have more detailed financial information, to identify ways to reduce supply chain costs and to restructure their logistics processes (Sheth and Sisodia, 1995). The success of this venture will depend on the firm's ability to accurately trace costs to specific products, customers, supply chains, and other logistics activities. ABC can assist logistics managers by showing the connection between performing particular activities and the demands those activities make on a firm's resources. As mentioned earlier, ABC differs from the traditional method by tracing costs to products according to the activities performed. This means that firms that employ the ABC system will be able to find out in detail which lines are generating profits for them and which are not.
A customer profitability analysis performed by Cooper and Kaplan (1991) found that 20 percent of the customers generated 225 percent of the profits, 70 percent of the customers hovered around the break-even point, and the remaining 10 percent generated a 125 percent loss. This analytical insight can be very helpful in determining the profitability of a venture. A firm can use this information to find out which customers are creating a loss for the firm. From there, the firm could terminate the relationship with the customer, saving money and allowing it to focus on more profitable customers. Alternatively, it could go to the customer, find out why the costs are so high, and work with the customer to bring them down. This could foster stronger customer loyalty and also allow the firm to drop the price of its goods or services as long as the customer maintained its commitment to keeping costs down.

The first suggestion, dropping the customer altogether, might not be a viable option if market share in an industry is important. Some firms may believe it is important to retain a customer at a loss rather than give it up to a competitor. Using the ABC method to find out where the costs are coming from gives the firm an added benefit and aids in this determination. Along with pinpointing the customers who are generating a loss, ABC will also point out the customers who are generating a profit. When a firm finds that a customer is very profitable, it is of the greatest importance to keep that customer happy. Knowing the true costs of serving a customer, the firm can administer promotional discounts at times to keep the customer content while the manager remains aware that the firm is still making a profit.
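The 20/70/10 pattern described above is straightforward to reproduce once customer-level activity costs are available. The sketch below uses invented figures (not Cooper and Kaplan's data) to show how ABC data supports a customer profitability ranking:

```python
# Hypothetical customer-level ABC data: revenue and the cost of the
# activities each customer consumes (order handling, delivery, returns).
customers = {
    "A": {"revenue": 120_000, "activity_costs": {"orders": 8_000, "delivery": 12_000, "returns": 1_000}},
    "B": {"revenue": 80_000,  "activity_costs": {"orders": 15_000, "delivery": 30_000, "returns": 40_000}},
    "C": {"revenue": 50_000,  "activity_costs": {"orders": 20_000, "delivery": 18_000, "returns": 11_000}},
}

def customer_profit(data):
    # Profit is revenue less the activity costs this customer caused.
    return data["revenue"] - sum(data["activity_costs"].values())

# Rank customers from most to least profitable.
for name, data in sorted(customers.items(), key=lambda kv: customer_profit(kv[1]), reverse=True):
    profit = customer_profit(data)
    label = "profitable" if profit > 0 else "loss-making"
    print(f"Customer {name}: profit {profit:+,} ({label})")
```

A ranking like this is what lets management decide, customer by customer, whether to renegotiate, reprice, or walk away.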

In a highly competitive market, knowing these costs about a customer can make a difference in how to price a product or service while still maintaining market share and profitability. A good example of this situation is described by Shank (1996). In this case, the firm, Allied Stationary, warehouses specialty paper products and delivers them to its customers on demand. When an order comes in, a person is designated to go into the warehouse and fill the order; from there, it is delivered to the customer. Two customers, A and B, were analyzed using the ABC method instead of the old method, under which a general price was charged to all customers. Customer A used the services less frequently than customer B, but was charged the same. After analysis of the cost drivers and the demand for service from both customers, a startling picture developed: the lack of demand from customer A meant that Allied Stationary was overcharging that customer by $8,500, while customer B was being undercharged by over $14,000. This is a good deal for customer B, but if customer A found out it was being overcharged, it might start looking for a new firm to work with. Though startling, this is not uncommon, as most firms do not use ABC methods to analyze the impact of activity costs in their logistics environment.

From this discussion, it is evident that ABC appears well suited for costing and measuring the performance of logistics processing. Many logistics costs remain buried in overhead, and logistics managers do not have adequate control over their costs. Logistics confronts many of the same conditions that make manufacturing firms good ABC candidates, including diversity of resource consumption, and product and resource consumption not related to traditional volume-based allocation measures.
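The mechanism behind the Allied Stationary result — a flat charge versus a charge built from cost-driver rates — can be sketched briefly. The driver rates, usage figures, and flat charge below are invented for illustration and do not reproduce the case's actual $8,500/$14,000 figures:

```python
# Illustrative cost-driver rates in dollars per driver unit.
driver_rates = {"order_picked": 25.0, "delivery": 40.0}

def abc_charge(usage):
    """Charge a customer for the activities it actually consumed."""
    return sum(driver_rates[driver] * qty for driver, qty in usage.items())

flat_annual_charge = 10_000.0  # same price billed to every customer

customer_a = {"order_picked": 120, "delivery": 60}   # infrequent user
customer_b = {"order_picked": 600, "delivery": 300}  # heavy user

for name, usage in [("A", customer_a), ("B", customer_b)]:
    true_cost = abc_charge(usage)
    gap = flat_annual_charge - true_cost
    verdict = "overcharged" if gap > 0 else "undercharged"
    print(f"Customer {name}: ABC charge ${true_cost:,.0f} -> {verdict} by ${abs(gap):,.0f}")
```

As in the case, the infrequent user subsidizes the heavy user whenever both pay the same flat price.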
Logistics can benefit from this costing method in that it may identify opportunities where process reengineering could reduce operating costs or improve service performance. Logistics also provides an opportunity to extend ABC across the supply chain. ABC in a supply chain setting could identify opportunities for eliminating redundant activities within the supply chain, identify chain members with excessive resource consumption patterns, or support the analysis of alternative channel structures.

One conclusion an ABC analysis of a supply chain might support is that it is more beneficial to outsource the firm's logistics. Third-party logistics providers are reporting revenue increases of 15-40 percent annually (Bowman et al., 1997). There are two reasons for this expansion into outsourcing. First, managers may want to cut costs and headaches, and third parties offer an easy, market-friendly solution. Second, to remain competitive, firms require flexibility because of faster cycle times and shorter product lives, and flexibility becomes more difficult and costly when firms rely solely on in-house resources. Additionally, firms that become market specialists and take on the outsourcing jobs of other firms will most likely achieve economies that individual firms cannot, owing to their greater volume (e.g. ShipChem.com). The decision to outsource logistics would require considerable evaluation and study because of the potential hold-up costs incurred if the outsourcing firm were to strike or impede the firm's ability to get its products to its customers.

Logistics does pose several distinct challenges that may make ABC implementation more difficult than in the manufacturing environment (Rotch, 1990):


(1) output is harder to define;
(2) activity in response to service requests may be less predictable; and
(3) joint capacity represents a high portion of total cost and is difficult to link to output-related activities.

Since ABC has been a recent innovation within many firms, a survey was conducted to see what firms were doing with this application. The big driver for the switch to ABC in logistics is the major role played by customer service in a firm's competitive strategy (Quillian, 1991). The survey was intended to find out whether logistics firms had an ongoing ABC system, how it was implemented, and what the future of ABC in these corporations might be. The majority of the firms had implemented some form of ABC analysis, with most using it as a one-time snapshot of their logistics costs. These firms also mentioned that they would continue to use ABC on a periodic basis to update their costing data. Only a small number of firms planned to replace their existing system, and 14 percent said that they did not consider it a viable alternative. Many of the reasons given for not implementing it were higher priorities than ABC, excessive complexity, lack of cost justification, uncertainty about how to proceed, and more practical alternatives being available.

Many of the respondents cited two major advantages of ABC: accurate product pricing and performance measurement. This is very helpful in the budgeting and strategic planning processes of the firm, since the application can be used to predict changes in activity costs as well as to pinpoint areas for process reengineering and strategic positioning. Overall, most firms expressed satisfaction with the implementation of the ABC process, with many planning to extend it to areas of the firm besides logistics.
Many of the firms also mentioned using this application in the sales area, where a more variable pricing system can be designed to allow firms to be more diverse in pricing and to adjust it as the market changes. This also allows firms to reflect the accurate costs of handling and procurement of products on the logistics end. The overall impact of ABC in logistics firms is still undetermined, as the implementation of ABC systems is in its early stages. The need for more accurate costing of activities, along with performance measurement, will attract more firms to this application in the future. Of the firms that currently use ABC, most generate their own form of it; these range from a model with one cost driver to a model that costs every type of activity in the firm.

The lack of ABC in some firms might also reflect the level of knowledge in the firm. A large logistics firm may have the capability to incorporate the underlying fundamentals of an ABC system and see rather large results based on the volume of business it does. A smaller firm may not be able to incorporate these fundamentals because of a lack of knowledge, or possibly because its limited economies of scale will not justify the high start-up and implementation costs. Other reasons to implement ABC might include the diversity of products the firm offers, the number of services it provides, the number of customers it serves, and the number of supply channels it works with. Whatever the reason for implementing an ABC system, the core focus should be on efficiency and how a firm's logistics helps it attain efficiency and profit-maximization goals. With the advent of buying products online continuing to explode

and cost savings being a major reason why consumers buy online, it will be important to maintain efficiency throughout the logistics functions. If these costs cannot remain competitive in the industry, then the firm may choose to discontinue the product line. If ABC can be implemented with accurate results, a firm might hold a logistics-induced competitive advantage, knowing exactly what the costs are for a particular output and adjusting its prices accordingly.


ABC and marketing
Initially, ABC implementation focused primarily on manufacturing costs. In today's competitive environment, the ABC approach has spread to key non-manufacturing functions such as marketing (e.g. distribution, promotions, selling, and customer service). ABC gives management a clear picture of how the customer service, distribution, and promotions functions generate revenues and consume resources. As Cooper and Kaplan (1991) note, ABC first traces costs from resources to activities and then from activities to specific products, assigning overhead and marketing costs directly to each product. This enables management to understand clearly that manufacturing costs and traditional cost accounting systems are not the only causes of failure in today's global environment. The problem lies in the failure to segregate marketing costs from the mainstream cost allocations.

Marketing costs are a significant portion of the total costs of many manufactured products. Statistics show that marketing costs make up more than 50 percent of the total cost in many product lines and approximately 20 percent of US Gross National Product (Lewis, 1991). Marketing cost data provide relevant quantitative information that assists marketing managers in making decisions regarding profitability, pricing, and adding or dropping product lines or territories. The ABC model was developed by Professors Kaplan and Cooper to overcome some of the problems that traditional accounting could not, i.e. to provide a meaningful approach to allocating overheads. ABC can be used to trace non-manufacturing costs such as marketing costs to product lines and sales territories in order to measure profitability. In this competitive environment, not understanding profitability is like playing poker without looking at your cards. To implement ABC, a few steps need to be followed to trace marketing costs to products.
The first step in ABC is to establish the activity centers and activities. In this context, activities for marketing might include advertising, selling, order filling, shipping, and warehousing. Resources are the elements needed to perform activities (e.g. facilities, sales personnel, material, distribution and handling equipment, etc.). Resource or cost drivers measure the quantity of resources consumed by an activity. Activity drivers associate activities with cost objects; they reflect a measure of the frequency and intensity of use of an activity by a cost object. Cost objects are the reasons for performing an activity and include products, services, and customers.

Once the activities are established, the next step is to determine the cost drivers for each activity. Table I provides examples of possible cost drivers, resources, activity drivers, and cost objects for some commonly performed marketing activities. After establishing cost drivers, the following step is to separate variable costs from fixed costs. For example, setting up a telephone line is a category of fixed cost,

Table I. ABC cost drivers, resources, activity drivers and cost objects

Activity: Advertising
  Resource or cost drivers: gross sales; number of promotion channels used; number of new products introduced
  Resources: supplies, material, channels used for promotion
  Activity driver (example): advertisements needed to attract, for example, customers with $40,000+ income
  Cost objects: products and/or customers

Activity: Selling
  Resource or cost drivers: gross sales; number of orders received; number of sales calls
  Resources: telephone lines used, sales personnel employed
  Activity driver (example): telephone calls to sell, say, Brand A
  Cost objects: products and/or customers

Activity: Order filling
  Resource or cost drivers: number, size, or weight of units shipped
  Resources: sales liaisons employed, computer interface used
  Activity driver (example): sales liaison employed to expedite highly preferential customers
  Cost objects: product and/or service and/or customers

Activity: Warehousing
  Resource or cost drivers: number, size, or weight of units shipped
  Resources: shelf space used, material handlers employed
  Activity driver (example): space needed to store type A products
  Cost objects: products and/or customers

Activity: Shipping
  Resource or cost drivers: number, size, or weight of units shipped
  Resources: number of carriers used, trucks used
  Activity driver (example): freight dollars used to transport products to Asian markets
  Cost objects: products and/or customers

Activity: Customer service handling
  Resource or cost drivers: number of returns, complaints, etc.
  Resources: number of employees in grievance department
  Activity driver (example): returns per each dollar sold in Europe
  Cost objects: customers and/or service
however, the cost of each phone call is a variable cost that depends, at least in part, on geographic parameters, the length of the connection, and the time of day the call originates. At the same time, the activities should be categorized as value added or non-value added. The goal of ABC is to significantly increase the proportion of value-added activities as seen from the customer's perspective. A value-added activity warrants further analysis because the customer needs such activities, whereas non-value-added activities should be eliminated: by definition, they add cost to the process without adding value.

The next step is to determine the unit cost for each activity. Functionally, unit cost equals total activity cost divided by the volume of the selected cost driver. For example, if the total activity cost for shipping is $25,000 and the number of units shipped is 2,500, then the unit cost rate is $10/unit (25,000/2,500). In a similar way, we can find packaging and shipping unit costs, which are related to the units of product shipped. The final step is to trace the costs of each activity to cost objects, that is, to a product, service, or customer. This information reveals which products, territories, and customers are profitable and which low-value, high-cost activities can be eliminated.

Given this better understanding of cost, management can focus on what is profitable and important, given the goals of the firm. ABC is a powerful model for revealing which products, customers, or territories are more important and which could be eliminated without compromising the overall objectives of the firm. ABC also provides information that helps marketing managers make better decisions about employing a particular type of promotion over another (i.e. print media or Internet), or about the most efficient combination of a promotional mix. It shows logical ways to identify and negotiate win-win customer situations, and also what the firm should focus on to improve its performance in the eyes of its customers.

The ABC system is straightforward once one understands it. Employees involved in the core project team must spend a great deal of time looking at what really drives costs in their business by analyzing activities and interviewing employees. This process helps them establish the costs and benefits of pursuing each activity. Figure 1 shows how marketing costs could be traced to products, services, or customers (cost objects). ABC implementation generally takes 6-12 months. Until recently, accounting professionals did not recognize the importance of marketing costs, which were generally ignored. Since marketing costs are an important component of the total cost of a product, more focus should be given to this area to provide a far more accurate portrayal of costs.
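The rate computation and cost tracing just described can be sketched in a few lines. The shipping figures below mirror the $25,000/2,500 = $10/unit example from the text; the order-filling figures and product names are invented for illustration:

```python
# Total cost collected in each activity pool, and the total volume of
# each activity's cost driver (units shipped, orders filled).
activity_costs = {"shipping": 25_000.0, "order_filling": 18_000.0}
driver_volumes = {"shipping": 2_500, "order_filling": 600}

# Unit cost rate = total activity cost / total driver volume.
rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}
# rates["shipping"] is 10.0 -- the $10/unit figure from the text.

# Hypothetical driver consumption by two cost objects (products).
consumption = {
    "product_X": {"shipping": 1_500, "order_filling": 200},
    "product_Y": {"shipping": 1_000, "order_filling": 400},
}

# Trace activity costs to each cost object via its driver consumption.
traced = {p: sum(rates[a] * qty for a, qty in use.items())
          for p, use in consumption.items()}
print(traced)
```

The same tracing step works unchanged when the cost objects are customers or territories rather than products.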


Applying ABC to marketing mix decisions
To understand ABC systems, it is helpful to view the business firm as an entity engaged in performing a series of activities (e.g. research and development, product design, manufacturing, marketing, distribution, and customer service) for the purpose of providing products (goods or services) to customers. In conducting these activities,

Figure 1. Marketing costs


the firm incurs costs. To attribute these costs to products accurately, it is necessary to determine the consumption of activities by individual products. Accordingly, ABC involves identifying the significant activities within the firm, linking costs to these activities, and measuring the consumption of the activities by the various products. ABC may thus be defined as a system that focuses on activities as the fundamental cost objects and uses the costs of these activities as building blocks for compiling the costs of other cost objects (Foster, 1999).

Conclusion
Fundamentally, ABC is based on the notion that activities consume resources and products consume activities. Accordingly, product costing under ABC involves a two-step allocation process in which costs are first allocated to activities and then the activity costs are allocated to products based on each product's demand for, or consumption of, the activities. ABC systems recognize that many types of overhead costs vary proportionally with measures of activity other than volume of product output. Some costs, for example, may vary with the number of production runs, raw material purchase orders, component parts in products, operations performed, inspections, customer orders, or other measures of activity not related to volume of output (Lewis, 1991).

When making and evaluating pricing decisions, the focus should be on causation as the fundamental cost concept. The major advantages of using ABC are that it captures the underlying economic conditions of the modern firm and focuses attention on long-run cost causation rather than short-run cost behavior (Committee and Grinnell, 1992). Associating costs with activities enables management to determine which phase a product is in and whether the product is subsidizing or being subsidized by others. ABC does not distort the cost of current products: activity costs are allocated to the particular products that generate the demand.
ABC enables management to determine how activities consume resources, even beyond a single period, and then to study ways to reduce costs in the development process. Further development of ABC may be hampered by two theoretical drawbacks. First, ABC, as currently practiced, ignores resource constraints. ABC advocates correctly argue that the overhead and operating costs of firms have to be clearly understood and the non-value-added processes eliminated. However, under ABC, the distinction between value-added and non-value-added activities and products is made absolute, as if management faced no constraints. In practice, most firms struggle to manage their constrained resources so that they can increase production and sales turnover. This kind of operational focus may result in product-mix decisions that appear irrational from an ABC perspective (Committee and Grinnell, 1992): products with low profit margins may continue to be produced while highly profitable products, as identified by the firm's ABC system, may be dropped. This disconcerting paradox is the result of ABC's inability to account for resource constraints.

ABC can provide substantial assistance with the cost aspects of the decision-making process. In particular, it can enable management to:
(1) understand the true costs of a decision;
(2) include only those costs that have relevant drivers; and
(3) determine the full cost of subcontracting as an activity.
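The two-step allocation described above — resource costs to activities, then activity costs to products in proportion to driver consumption — can be sketched briefly. All resource shares and driver counts below are hypothetical:

```python
# Step 0: resource costs to be allocated.
resource_costs = {"salaries": 60_000.0, "equipment": 20_000.0}

# Step 1: share of each resource consumed by each activity.
resource_to_activity = {
    "setup":      {"salaries": 0.25, "equipment": 0.50},
    "inspection": {"salaries": 0.75, "equipment": 0.50},
}
activity_pools = {
    act: sum(resource_costs[r] * share for r, share in shares.items())
    for act, shares in resource_to_activity.items()
}

# Step 2: driver units each product consumes per activity
# (setup hours, inspections performed).
product_drivers = {
    "widget": {"setup": 40, "inspection": 100},
    "gadget": {"setup": 10, "inspection": 450},
}
driver_totals = {
    act: sum(p[act] for p in product_drivers.values()) for act in activity_pools
}
# Each product absorbs its proportional share of every activity pool.
product_cost = {
    prod: sum(activity_pools[a] * use[a] / driver_totals[a] for a in use)
    for prod, use in product_drivers.items()
}
print(product_cost)
```

Note that the allocation is conservative: the product costs sum back to the total resource cost, so nothing is created or lost along the way.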

ABC has numerous advantages over its traditional counterpart:
(1) ABC has helped firms across the world become more efficient and more effective;
(2) ABC provides a clear picture of where resources are being spent, where customer value is being created, and where money is being made or lost;
(3) ABC offers a better alternative to labor-cost-based product costing;
(4) ABC identifies value-added activities;
(5) ABC identifies many activity costs that are not related to production at all but are traditionally allocated to products as production costs, and it identifies many marketing, selling, and administrative costs that should be included to produce better product pricing estimates;
(6) ABC indicates the areas where changes in firm operations to reduce costs will allow the firm to satisfy customer demands better;
(7) ABC helps retailers with dual-channel operations, such as a combination of online and counter selling, identify how much they spend on marketing and other functions and where the costs should be allocated;
(8) ABC eliminates or reduces non-value-added activities; and
(9) ABC allows many firms to pursue competitive advantages through the identification of relevant cost drivers and activities.

By using multiple cost drivers, ABC traces costs with greater accuracy than traditional costing techniques. Traditional techniques typically rely on one to three volume-based cost drivers to trace overhead costs to products. ABC uses multiple cost drivers to reflect the relationships between activities and the resources they consume. Activity cost drivers fall into four categories: unit, batch, product-sustaining, and customer-sustaining activities. Volume-based drivers imply that costs vary and can be controlled at the unit level. The addition of batch, product, and facility activities enables ABC to establish a clear cause-and-effect relationship between costs and their drivers.
As a result, management obtains greater transparency and understanding of how overhead costs vary with changes in all four categories of activity. An ABC analysis allows managers to trace overhead resources to activities, products, services, or customers with the objective of reducing or eliminating resource consumption. The technique can focus on improving the efficiency of an activity by reducing the number of times the activity must be performed, eliminating unnecessary or redundant activities, selecting a less costly alternative, or using a single activity to accomplish multiple functions.

ABC implementation can provide greater visibility of how different products, customers, or supply channels affect profitability. The firm can more accurately trace costs and determine the areas generating the greatest profit or loss. Product and customer profitability analyses performed by firms using ABC may significantly alter management's perception of the status quo. Managers can target high-cost products or services for reduction efforts and, in conjunction with ABC, can use other techniques such as re-pricing, minimum buy quantities, or charging by service to improve profitability.
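The accuracy gain from adding a batch-level driver to a purely volume-based allocation can be illustrated with a small sketch; the overhead pools and product figures below are invented:

```python
# Two overhead pools: machining varies with units (unit level),
# setups vary with production batches (batch level).
overhead = {"machining": 50_000.0, "setups": 30_000.0}

products = {
    "high_volume": {"units": 9_000, "batches": 10},
    "low_volume":  {"units": 1_000, "batches": 40},
}

total_units = sum(p["units"] for p in products.values())
total_batches = sum(p["batches"] for p in products.values())

traditional, abc_cost = {}, {}
for name, p in products.items():
    # Traditional: all overhead spread on unit volume alone.
    traditional[name] = sum(overhead.values()) * p["units"] / total_units
    # ABC: machining follows units, setup cost follows batches.
    abc_cost[name] = (overhead["machining"] * p["units"] / total_units
                      + overhead["setups"] * p["batches"] / total_batches)
    print(f"{name}: traditional ${traditional[name]:,.0f} vs ABC ${abc_cost[name]:,.0f}")
```

Under the volume-only rate, the high-volume product absorbs setup costs it never caused, cross-subsidizing the setup-intensive low-volume product — exactly the distortion the batch-level driver removes.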


More accurate performance measures appear as a logical consequence of an ABC system. The activity description includes financial and non-financial information. The financial information describes the costs or resources necessary for performing the activity; the non-financial information describes the activity in terms of the time required, quality, number of transactions, or schedule attainment. The firm can use the non-financial information to develop performance measures for the activity. The performance measures describe the work done and the results achieved in completing an activity. Management can use the performance measures to track product returns, damage, claims, or data entry errors. The linkage between the performance measures and the activity provides a relatively straightforward means of computing the cost of poor or improved performance. Management can translate how the elimination or reduction of an activity, such as processing product returns, would impact resource consumption and reduce the firm's overall marketing cost.

Although ABC brings several advantages to the firm, it also has potential disadvantages. The drawbacks of ABC implementation include the following:
(1) It is a resource-consuming activity. It is costly for the firm to adopt ABC because of the cumbersome accounting changes involved.
(2) It is time-consuming due to the lengthy procedures it entails. It also takes time for adjustment.
(3) It is not appropriate for every firm; typically, firms with low overhead costs will not benefit from adopting this system.
(4) It is a labor-intensive operation.
(5) The benefits from implementing ABC are not always commensurate with the costs incurred in deploying the system.
(6) It may cause poor labor relations in the firm if people are not willing to buy into the concept or to break from the status quo.

In using ABC, firms are able to control and manage overhead costs efficiently and effectively.
ABC increases managerial insight into how products, customers, or supply channels consume work and resources. In both marketing and logistics, ABC allows firms to find the laggards and improve or drop them. The system also allows managers to make decisions with more accurate information and a lower chance of error. As we mentioned, many firms have to find new ways to stay competitive in the current global, web-based economy. By understanding costs in more depth, firms are able to strive for cost advantages while still maintaining a high-quality product or service. Although ABC has many advantages, it also comes with disadvantages. It is up to each individual firm to ensure that, if ABC is implemented, it is not just another expensive and time-consuming system that spits out the same information. If a firm can successfully implement this practice, the benefits may quickly follow.

References
Bowman, R.J., Lear-Olimpe, M., Salkin, S. and Thomas, J. (1997), "And now, your logistics forecast. . .", Distribution, pp. 36-7.
Committee, B.E. and Grinnell, J.D. (1992), "Predatory pricing, the best price-cost test", Journal of Cost Management, pp. 52-8.

Cooper, R. and Kaplan, R.S. (1991), "Profit priorities from activity-based costing", Harvard Business Review, Vol. 69, p. 130.
Davis, H.W. (1991), "Physical distribution costs", Annual Conference Proceedings of the Council of Logistics Management, pp. 357-64.
Foster, T.A. (1999), "Time to learn the ABCs of logistics", Logistics Management and Distribution Report, pp. 67-70.
Gering, M. (1999a), "Activity-based costing and the customer", Management Accounting, pp. 26-7.
Gering, M. (1999b), "Activity-based costing lessons learned implementing ABC", Management Accounting, pp. 26-7.
Ginger, G. (1994), "Activity-based costing for marketing and manufacturing", Academy of Marketing Science, p. 298.
Goebel, D.J. and Marshall, G.W. (1998), "Activity-based costing: accounting for a market orientation", Industrial Marketing Management, pp. 497-510.
Kaplan, R.S. and Atkinson, A.A. (1998), "Activity-based costing system", Advanced Management Accounting, 3rd ed., pp. 97-112.
Lewis, R.J. (1991), "Activity-based costing for marketing", Management Accounting, pp. 33-8.
McKenzie, J. (1999), "Activity-based costing for beginners", Management Accounting, pp. 56-7.
Pohlen, T.L. and Bernard, J.L.L. (1994), "Implementing activity-based costing (ABC) in logistics", Journal of Business Logistics, pp. 1-13.
Quillian, L.E. (1991), "Curing 'functional silo syndrome' with logistics TCM", CMA Magazine, p. 9.
Rotch, W. (1990), "Activity-based costing in service industries", Journal of Cost Management, p. 8.
Shank, J.K. (1996), "Allied stationary", Cases in Cost Management, South Western College Publishing, Boston, MA, pp. 9-17.
Sheth, J.N. and Sisodia, R.S. (1995), "Feeling the heat", Marketing Management, pp. 19-39.
Smith, M. and Dikolli, S. (1995), "Customer profitability analysis: an activity-based costing approach", Managerial Auditing Journal, pp. 3-8.
Stock, J.R. and Lambert, D.M. (2001), Strategic Logistics Management, McGraw-Hill, New York, NY.
Further reading
ABC HOME (n.d.), file:///Cj/EUDORA/Attach/ATT00004.htm
Donath, B. (1999), "Fire your big customers? maybe you should", Marketing News, p. 9.
Lobo, R.O. and Lima, P.C. (1998), "A new approach to product development costing", CMA Magazine, pp. 14-17.
