Qualitative Research in Organizations and Management, Volume 2, Issue 3: Case Studies (ISBN 9781846637216, 9781846637209)


ISSN 1746-5648

Volume 2 Number 3 2007

Qualitative Research in Organizations and Management An International Journal

Special issue on case studies Guest Editors: Bill Lee, Paul M. Collier and John Cullen

www.emeraldinsight.com

Qualitative Research in Organizations and Management: An International Journal
ISSN 1746-5648
Volume 2 Number 3 2007
Special issue on case studies
Guest Editors: Bill Lee, Paul M. Collier and John Cullen

CONTENTS

Access this journal online _________________________________________ 167

Editorial advisory board ___________________________________________ 168

GUEST EDITORIAL
Reflections on the use of case studies in the accounting, management and organizational disciplines
Bill Lee, Paul M. Collier and John Cullen __________________________ 169

A researcher's tale: dealing with epistemological divergence
Janet Bryant and Barbara Lasky _____________________________________ 179

The "singular view" in management case studies
Sue Llewellyn and Deryl Northcott __________________________________ 194

Introducing strong structuration theory for informing qualitative case studies in organization, management and accounting research
Lisa Jack and Ahmed Kholeif ________________________________________ 208

Case study research and network theory: birds of a feather
Evert Gummesson ____________________________________________________ 226

Obituary ___________________________________________________________ 249

Access this journal electronically
The current and past volumes of this journal are available at www.emeraldinsight.com/1746-5648.htm. You can also search more than 150 additional Emerald journals in Emerald Management Xtra (www.emeraldinsight.com).



EDITORIAL ADVISORY BOARD

Professor Howard S. Becker, USA
Professor Alan Bryman, University of Leicester, UK
Anna Buehring, Manchester Metropolitan University, UK
Dr Ardha Danieli, University of Warwick, UK
Professor Anne de Bruin, Massey University, New Zealand
Dr Joanne Duberley, University of Birmingham, UK
Dr David Fryer, University of Stirling, UK
Professor Yiannis Gabriel, University of London, London, UK
Professor Evert Gummesson, Stockholm University, Sweden
Professor Chris Humphrey, University of Manchester, UK
Professor Antonio Mutti, University of Pavia, Italy
Professor Michael Myers, University of Auckland, New Zealand
Professor Brendan O'Dwyer, Amsterdam Graduate Business School, The Netherlands
Dr Asta Pundziene, ISM University of Management and Economics, Lithuania
Professor John Rodwell, Macquarie University, Australia
Professor David Silverman, Goldsmiths College and Kings College, London, UK
Professor Chris Steyaert, University of St Gallen, Switzerland
Professor Richard Thorpe, University of Leeds, UK
Professor John van Maanen, MIT Sloan School of Management, USA
Dr Margaret Vickers, University of Western Sydney, Australia


GUEST EDITORIAL

Reflections on the use of case studies in the accounting, management and organizational disciplines


Bill Lee
University of Sheffield Management School, Sheffield, UK

Paul M. Collier
Monash University, Clayton, Australia, and

John Cullen
University of Sheffield Management School, Sheffield, UK

Abstract
Purpose – The purpose of this paper is to explain the background to the special issue and to provide an introduction to the articles on case studies included in the issue.
Design/methodology/approach – The paper uses a review of developments in both the qualitative tradition and case studies in management research to provide a backdrop for the articles that are included in the issue. The articles discuss: the merits of unique cases and singular forms of evidence within a single case; the comparability of case studies with tools in other areas; and methods of theorising from case studies.
Findings – The merits of case studies have often been understated. The articles in this issue highlight a broader variety of uses of case study research than is commonly recognized.
Originality/value – This guest editorial introduces the papers in this issue, which may be read either as individual contributions that have merits per se, or as part of a collection that this introductory paper helps to knit together.
Keywords Case studies, Qualitative research
Paper type General review

Introduction
Case studies have emerged as an increasingly important qualitative approach in many management disciplines (Gummesson, 2000, p. 88; Scapens, 2004), despite the continued dominance of survey methods and statistical rationales of research in different branches of management, including finance (Bessant et al., 2003, p. 55) and marketing (Gummesson, 2000, p. 90). The capacity of case studies to draw on different data sources and to allow simultaneous analysis, at several levels, of the dynamics in a single setting (Eisenhardt, 1989, p. 534) creates the potential for a richer understanding of organizational phenomena than can be conveyed by statistical analysis. The richness of the data also makes empirically researched case studies extremely important in teaching.


Mahoney (1997) suggested that although academic teaching and research are often seen as joined yet divergent interests and pursuits, the two are inextricably intertwined. This requirement to go beyond just publishing research in academic journals received support from both Humphrey (2001) and Roberts (2003). Humphrey argued that if we want to effect real reform of accounting practice, then we must disseminate our research further than academic journals alone. This point is further emphasised in Roberts' (2003) report on the review of research assessment, which stresses the need to disseminate research beyond the academic peer group. Cullen et al. (2004) provided an illustration of the use of an empirically researched and narratively constructed case study that facilitated a powerful interface between research, would-be practitioners and practice itself. The richness of the material as a representation of organisational reality is important here, and the strength of doing qualitative field studies (Ahrens and Chapman, 2006, p. 832) is that they "avoid thinning out the data beyond the point where it loses its specificity and becomes bland". Empirically researched case studies therefore have the potential to contribute to the development of both theory and practice.

The capacity of empirical case studies to complement a range of epistemological traditions, including interpretivist approaches, has been recognized by a number of authors. For example, Boland and Pondy (1983, p. 223) suggest that research through case studies involves an implicit rejection of some perspectives that rely on an "objectively knowable, empirically verifiable reality". Despite such understanding, there is a danger that the dominance of a particular epistemological tradition – namely positivism – in research more generally has skewed the development and usage of case studies. To some extent, this may be seen by looking at the pattern of development of the literature on case studies. For example, Lee (1999, p. 15) reports that, in a review of articles that employed qualitative research and were published in leading American journals, Yin's (1984, 1989, 2003) series of editions of his book on case studies featured prominently. Whatever the merits of Yin's books – and there are many – Yin, like other authors (Jankowicz, 2005, pp. 231-2) who adopt a similar stance, tends to define a narrow range of uses for case studies. Yin (2003, p. 3; see also Scapens, 1990, 2004) defines three uses for case studies: exploratory, descriptive and explanatory. Unfortunately, the majority of these uses are best understood as poor relations to positivistic, quantitative research. Exploratory case studies tend to be conducted as preliminary research in advance of wide-scale surveys to map out the themes for the subsequent research. Descriptive case studies are often used to expand on trends and themes already discovered by survey research. It is only the explanatory case study that seeks to derive a detailed understanding of a particular phenomenon and in which the case is not seen as ancillary to more quantitative methods. Yet, as Gummesson (2000, p. 85) states, this third use of case studies is "looked on with scepticism, or sometimes even horror, by mainstream business school professors". Such scepticism may create fear amongst new academics and discourage their selection of explanatory case studies (Humphrey and Lee, 2004a, p. xxv).

Fortunately, there are counter forces and counter trends to those generated by the scepticism of some management academics.
Manifestations of these forces and trends include the different strategies adopted by other management researchers to help advance qualitative research. Recognising the limitations of using criteria associated with positivistic research for evaluating qualitative studies (Cassell et al., 2006; Ahrens and Chapman, 2006), one long-standing strategy has been to formulate criteria

appropriate for assessing and – as a corollary – demonstrating the quality of qualitative research (Guba and Lincoln, 1989; Johnson et al., 2006; Seale, 1999). Another strategy has been manifest in accounting, where Humphrey and Lee (2004b) have extended the traditions established by Bell and Newby (1977) in sociology and Bryman (1988) in organizational studies by compiling a collection that demystifies all aspects of the qualitative research process (Humphrey and Lee, 2004c, p. 67). Elsewhere, an alternative strategy has been followed by a small group of researchers with origins in work psychology, and is evident in the work of Cassell and Symon and their collaborators. Cassell and Symon (1994; 2004a) and Symon and Cassell (1998) have documented and popularized the use of a variety of methods for collecting and analysing qualitative data. More recently, they and their colleagues (Cassell et al., 2005) have produced a range of materials to help educate management researchers in the use of qualitative methods.

To date, the pattern of advance of qualitative methods in general and case studies in particular has varied both within disciplines according to geographical location and across disciplines. Very few disciplines in the USA have developed strong qualitative traditions – see, for example, Lee (2004, p. 62). The strength of quantitative research traditions in America has affected developments in some other countries. For example, there are very few opportunities for Australian or New Zealand academics to collaborate or present conference papers using qualitative methods without travelling to Europe. In Europe, in some disciplines, there has been the development of very strong qualitative traditions. For example, UK academics cannot help but notice the difference in feedback that members of the business and management panel (Bessant et al., 2003) provided to different disciplines about the submissions to the last research quality audit that took place in the UK, namely the 2001 research-assessment exercise. For instance, while most finance researchers generally relied on "large sample econometric/statistical methods", accounting produced world-class research that was eclectic in methodological approach. This eclecticism includes qualitative research (Lee and Humphrey, 2006). In this regard, qualitative research in accounting has been supported by the establishment of a whole academic infrastructure of conferences and journals (Guthrie and Parker, 2004, p. 10) where qualitative approaches to case studies have been seen as a legitimate feature of empirical studies. For example, Scapens (2004, p. 257) reports that in the first ten years of the publication of Management Accounting Research, of which he is Editor-in-Chief, around a quarter of the published papers were based on case study research. The academic infrastructure supportive of qualitative research has also provided a forum in which the application and usefulness of case studies has been both explained (Otley and Berry, 1994) and the subject of intense debate (Humphrey and Scapens, 1996a, b; Llewellyn, 1996; Young and Preston, 1996). Accounting is not the only discipline to have now developed a strong qualitative tradition. Other management disciplines that have done so include organizational studies (Bessant et al., 2003, p. 58; Cassell and Symon, 2004b; Bryant and Lasky, 2007) and Scandinavian approaches to marketing (Bessant et al., 2003, p. 56; Gummesson, 2007).
Advances in qualitative traditions in a range of disciplines are now being facilitated by the publication of academic journals dedicated to qualitative research. Initially, there were journals such as Qualitative Inquiry and Qualitative Research that


accommodated all fields. More recently, there has been the establishment of journals such as Qualitative Research in Accounting and Management and this journal, Qualitative Research in Organizations and Management, that are dedicated to qualitative research in the management disciplines. One of the ambitions expressed in the editorial of the first issue of this journal was to enable the different disciplines of management to learn from each other in their development of qualitative methods (Cassell and Symon, 2006, p. 6). Previously, the sharing of ideas across disciplines may have been confined to face-to-face multidisciplinary gatherings of people sympathetic to the use of case studies. A good example was the workshop on case studies organized jointly by the Management Control Association and the British Academy of Management's Research Methodology Special Interest Group in February 2006, at which this special issue was proposed. Now, the development of qualitative research journals such as this one allows such ideals to be disseminated more widely.

The hope when this special issue was conceived was that papers that furthered our understanding of the benefits and advantages of case study research would be submitted from a range of disciplines. The four articles that we have decided to include are drawn from three different disciplines: accounting; marketing; and organizational studies. These articles broadly address three different types of issues: dealing with particular cases and particular respondents' viewpoints within cases that have merits per se; the use of theory in case studies; and the comparison of case studies with other research tools. This collection of articles provides a valuable set of ideas and experiences of case studies to be shared between all researchers, regardless of the management discipline from which they are drawn.

Recognising the benefits of the particular in case study research
The first two articles in this special issue address different dimensions of the question of what to do when either a particular respondent in a case study, or a particular case, provides markedly different data from that provided by the majority of evidence. Case study research has often been assessed against the criterion of whether it is possible to generalise its findings, and against the need for triangulation of evidence. The application of statistical logic and tests of probability has allowed survey methods to state the confidence levels at which generalizations might materialise in the population as a whole, regardless of the limited nature of knowledge that such a generalization may provide. In a context where such ideas are dominant, case studies – despite their richness of detail and their potential to reveal different types of interconnected phenomena – are often dismissed as anecdotal. With some rare exceptions (Patton, 2002, p. 230; Saunders et al., 2007, pp. 170-8), there has been little systematic attempt to provide comprehensive articulation of legitimate ways of selecting cases from which to theorise. There is a potential for generalizing from high-quality case studies. Lukka and Kasanen (1995, p. 83) suggest three types of generalization rhetoric: statistical generalization; contextualisation generalization that constitutes a "meaningful and convincing connection of the study with the real-world phenomena surrounding the case in question, such as history, institutions and markets"; and constructive generalization in which:

. . . the researcher relies on the diffusion of solution ideas and argues that the successful implementation of the solution . . . makes it credible that the solution will also work in similar organizations elsewhere.

Yet, as Gummesson (2000, p. 96) has noted, a merit of case studies is that – in contrast to the grail of generalization sought by survey techniques – they allow the realisation of particularization. There are, however, issues of how best to present a unique case and the tools available for drawing attention to its specific qualities. In contrast to the positivistic approach of Eisenhardt (1989, p. 545, 1991), who encourages constant iterations between the data and emergent ideas to create a "single well-defined construct", the first article in this issue by Bryant and Lasky (2007) suggests a different approach. Bryant and Lasky contend that it is possible to present an individual case in a markedly different way to the findings from the majority of cases. They propose using a paradigm that might initially appear incommensurate with the one used to analyse the majority of the material. Bryant and Lasky start from a position of using conventional grounded theory in a way consistent with positivism and conducting continuous iterations of the data to find patterns and regularities to express in the emergent ideas. They then highlight how they used a more constructionist narrative approach to analyse a separate case that offered markedly different data. Bryant and Lasky were therefore able to draw out the rich and different account provided by a "maverick" case, thus providing all of their respondents with a voice and a fuller picture of all of the data that they had collected.

A different type of problem arising from finding a minority opinion in fieldwork is apparent in the second article in this collection by Llewellyn and Northcott (2007). Llewellyn and Northcott report a research study where the singular view – namely a viewpoint held by one respondent alone – provided the greatest insights into the emergent situation. Distinguishing between the characteristics and the meaning of a phenomenon, Llewellyn and Northcott's paper highlights that when a state of flux exists on a contested terrain, there may be many meanings of the significance of a phenomenon. In such a situation, the most common or popular view may not be the one that offers the most valuable insights into the emergent situation. Unfortunately for the academic community, the quality checks involved in the double-blind review process of journals, where reviewers often interpret validity as that found in the greatest volume of evidence, may discourage presentation and publication of the most insightful evidence. Fortunately for Llewellyn and Northcott, and for our understanding of research methods, subsequent events were to give greater weight to the view of emergent trends expressed initially in the singular view.

Theorising from case studies
The third article in this collection by Jack and Kholeif (2007) addresses the question of how formal theory may be used in case study research. Hitherto, there have been a variety of approaches developed for combining extant formal theories and data from case study research. These range from the rejection of formal theories and instead theorising inductively from the data (Glaser and Strauss, 1967); through working with a toolbox of different theories against which to examine the data throughout the fieldwork (Marginson, 2004, p. 327); to the prior selection of a particular theoretical framework or paradigm and the analysis of research themes from within that framework or paradigm.


There are inevitably benefits and problems with each approach. If theory is simply built from the data that is generated by case studies, there is the potential for the generation of novel theory (Eisenhardt, 1989, p. 546). Yet, there is equally the danger that researchers will get lost in the data and end up developing a theory that is overly complex (Eisenhardt, 1989, p. 547). A toolbox of different theories may be used. This is the argument that researchers should "employ a number of differing perspectives, possibly in dialectic tension with one another" (Covaleski and Dirsmith, 1990, p. 566), or the so-called "middle range" approach of Laughlin (1995) and Broadbent and Laughlin (1997). However, in doing so, there is the likelihood of avoiding the selective plausibility (Scapens, 2004, p. 274) that arises as a consequence of only examining the data that fits the researcher's preferred theory. Equally, the testing of data against different theories could result in either a suitable theory being discounted prematurely because it does not fit an early case, or a myriad of completely different theories ending up being used to explain each separate case with no unity between them. The selection of a single formal theory could lead to the selective plausibility that Scapens counsels against, which, as Humphrey and Scapens (1996a, b, p. 88) have suggested, may lead to researchers using evidence to merely illustrate a favoured theory. However, this is also an approach that offers the benefit of allowing researchers to direct their research and discipline their senses to pick up on those dimensions of a phenomenon that a theory is best able to explain. These themes are apparent in the article by Jack and Kholeif (2007). Drawing on a recent interpretation of structuration theory (Stones, 2005), they propose case studies as a methodological tool for the dialectical development of theory. In other words, structuration theory can be used to generate more searching questions about agents and structures as part of interpreting research evidence in the accounting, management and organizational research fields. The data generated may then be fed back into further developing the theory. A powerful feature of this emergent theoretical development is the position of the researcher as being "analogous to that of an investigator, elucidating the case through evidence, theory, experience and intuition" (Jack and Kholeif, 2007). Such a process requires great discipline (Ahrens and Chapman, 2006), and such discipline enhances the contribution being made by qualitative field research.

Extending the applicability of case studies
The final article in this collection considers whether case studies may be combined with other methodological tools as part of a research strategy. Often, case studies are seen as a research strategy per se that incorporates a range of methods that each permit the study of one or more dimensions of the phenomenon under investigation (Hartley, 2004; Yin, 2003). As a consequence, there has been little attempt to consider how case studies may be combined with other methodological tools and what purpose such a strategy might achieve.
The recent expansion of work on mixed methods (Bryman, 2006a, b, c) highlights that many researchers believe that all research approaches have their own technical limitations, or that a research project may require supplementary questions to be answered, either of which may necessitate the use of additional research methods.

In this context, the final article in this collection by Gummesson (2007) raises an interesting comparison between case studies and network theory. Starting from a framework that seeks to promote innovation in high-quality case studies, Gummesson draws attention to a strength that both case studies and network theory share; namely, a capacity to represent complex processes as they unfold. Gummesson suggests that case studies and network theory may be combined either in ways that are supplementary in a study of the same phenomenon, or as alternative research tools for different research questions.

Concluding summary
This first special themed issue of Qualitative Research in Organizations and Management is, per se, a manifestation of the advancement of qualitative research methods that has taken place in the array of management-related disciplines. The issue is dedicated to articles on case studies in accounting, management and organizations. We hope that this collection on a single theme will create the motivation and potential for cross-germination of ideas based on experiences of researchers from the different academic areas of accounting, marketing and organizational studies. The following articles provide authority on ways of analysing and presenting cases that are different, maximising the benefit of singular viewpoints found in case study research, theorising from cases, and employing case studies alongside other methodological tools. As such, the articles promise to help open up new research agendas and strategies for investigators who use case studies.

References
Ahrens, T. and Chapman, C.S. (2006), "Doing qualitative field research in management accounting: positioning data to contribute to theory", Accounting, Organizations and Society, Vol. 31 No. 8, pp. 819-41.
Bell, C. and Newby, H. (Eds) (1977), Doing Sociological Research, George Allen & Unwin, London.
Bessant, J., Birley, S., Cooper, C., Dawson, S., Gennard, J., Gardiner, M., Gray, A., Jones, P., Mayer, C., McGee, J., Pidd, M., Rowley, G., Saunders, J. and Stark, A. (2003), "The state of the field in UK management research: reflections of the research assessment exercise (RAE) panel", British Journal of Management, Vol. 14 No. 1, pp. 51-68.
Boland, J.R.J. and Pondy, L.R. (1983), "Accounting in organizations: a union of natural and rational perspectives", Accounting, Organizations and Society, Vol. 8 Nos 2-3, pp. 223-34.
Broadbent, J. and Laughlin, R. (1997), "Developing empirical research: an example informed by a Habermasian approach", Accounting, Auditing & Accountability Journal, Vol. 10 No. 5, pp. 622-48.
Bryant, J. and Lasky, B. (2007), "A researcher's tale: dealing with epistemological divergence", Qualitative Research in Organizations and Management: An International Journal, Vol. 2 No. 3, pp. 179-93.
Bryman, A. (Ed.) (1988), Doing Research in Organisations, Routledge, London.
Bryman, A. (Ed.) (2006a), Mixed Methods, Vol. I, Sage, London.
Bryman, A. (Ed.) (2006b), Mixed Methods, Vol. II, Sage, London.
Bryman, A. (Ed.) (2006c), Mixed Methods, Vol. III, Sage, London.
Cassell, C. and Symon, G. (Eds) (1994), Qualitative Methods in Organizational Research: A Practical Guide, Sage, London.


Cassell, C. and Symon, G. (Eds) (2004a), Essential Guide to Qualitative Methods in Organizational Research, Sage, London.
Cassell, C. and Symon, G. (2004b), "Raising the profile of qualitative methods in organizational research", in Humphrey, C. and Lee, B. (Eds), The Real Life Guide to Accounting Research, Elsevier, Oxford, pp. 491-508.
Cassell, C. and Symon, G. (2006), "Taking qualitative methods in organization and management research seriously", Qualitative Research in Organizations and Management: An International Journal, Vol. 1 No. 1, pp. 4-12.
Cassell, C., Symon, G., Buehring, A. and Johnson, P. (2006), "The role and status of qualitative methods in management research: an empirical account", Management Decision, Vol. 44 No. 2, pp. 290-303.
Cassell, C., Symon, G., Johnson, P., Bishop, V. and Buehring, A. (2005), Benchmarking Good Practice in Qualitative Management Research: Training Workshops – The Facilitator's Guide, Economic and Social Research Council Publication, Swindon, available at: www.shef.ac.uk/bgpinqmr/
Covaleski, M.A. and Dirsmith, M.W. (1990), "Dialectic tension, double reflexivity and the everyday accounting researcher: on using qualitative methods", Accounting, Organizations and Society, Vol. 15 No. 6, pp. 543-73.
Cullen, J., Richardson, S. and O'Brien, R. (2004), "Exploring the teaching potential of empirically-based case studies", Accounting Education: An International Journal, Vol. 13 No. 2, pp. 251-66.
Eisenhardt, K.M. (1989), "Building theories from case study research", Academy of Management Review, Vol. 14 No. 4, pp. 532-50.
Eisenhardt, K.M. (1991), "Better stories and better constructs: the case for rigor and comparative logic", Academy of Management Review, Vol. 16 No. 3, pp. 620-7.
Glaser, B.G. and Strauss, A.L. (1967), The Discovery of Grounded Theory, Aldine de Gruyter, Hawthorne, NY.
Guba, E. and Lincoln, Y.S. (1989), Fourth Generation Evaluation, Sage, Newbury Park.
Gummesson, E. (2000), Qualitative Methods in Management Research, Sage, London.
Gummesson, E. (2007), "Case study research and network theory: birds of a feather", Qualitative Research in Organizations and Management: An International Journal, Vol. 2 No. 3, pp. 226-48.
Guthrie, J. and Parker, L. (2004), "Diversity and AAAJ: interdisciplinary perspectives on accounting, auditing and accountability", Accounting, Auditing & Accountability Journal, Vol. 17 No. 1, pp. 7-16.
Hartley, J. (2004), in Cassell, C. and Symon, G. (Eds), Essential Guide to Qualitative Methods in Organizational Research, Sage, London, pp. 323-33.
Humphrey, C. (2001), "Paper prophets and the continuing case for thinking differently about accounting research", British Accounting Review, Vol. 33 No. 1, pp. 91-103.
Humphrey, C. and Lee, B. (2004a), "Introduction", in Humphrey, C. and Lee, B. (Eds), The Real Life Guide to Accounting Research, Elsevier, Oxford, pp. xxiii-xxx.
Humphrey, C. and Lee, B. (Eds) (2004b), The Real Life Guide to Accounting Research, Elsevier, Oxford.
Humphrey, C. and Lee, B. (2004c), "The real life guide to accounting research: a strategy for inclusion", Qualitative Research in Accounting & Management, Vol. 1 No. 1, pp. 66-84.

Humphrey, C. and Scapens, R.W. (1996a), "Theories and case studies of organizational accounting practices: limitation or liberation", Accounting, Auditing & Accountability Journal, Vol. 9 No. 4, pp. 86-106.
Humphrey, C. and Scapens, R.W. (1996b), "Rhetoric and case study research: response to Joni Young and Alistair Preston and to Sue Llewellyn", Accounting, Auditing & Accountability Journal, Vol. 9 No. 4, pp. 119-22.
Jack, L. and Kholeif, A. (2007), "Introducing strong structuration theory for informing qualitative case studies in organization, management and accounting research", Qualitative Research in Organizations and Management: An International Journal, Vol. 2 No. 3, pp. 208-25.
Jankowicz, A.D. (2005), Business Research Projects, Thomson Learning, London.
Johnson, P., Buehring, A., Cassell, C. and Symon, G. (2006), "Evaluating qualitative management research: towards a contingent criteriology", International Journal of Management Reviews, Vol. 8 No. 3, pp. 131-56.
Laughlin, R. (1995), "Empirical research in accounting: alternative approaches and a case for 'middle-range' thinking", Accounting, Auditing & Accountability Journal, Vol. 8 No. 1, pp. 63-87.
Lee, B. and Humphrey, C.G. (2006), "More than a numbers game: qualitative research in accounting", Management Decision, Vol. 44 No. 2, pp. 180-97.
Lee, T.A. (2004), "Accounting and auditing research in the United States", in Humphrey, C. and Lee, B. (Eds), The Real Life Guide to Accounting Research, Elsevier, Oxford, pp. 57-71.
Lee, T.W. (1999), Using Qualitative Methods in Organizational Research, Sage, London.
Llewellyn, S. (1996), "Theories for theorists or theories for practice? Liberating academic accounting research?", Accounting, Auditing & Accountability Journal, Vol. 9 No. 4, pp. 112-8.
Llewellyn, S. and Northcott, D. (2007), "The 'singular view' in management case studies", Qualitative Research in Organizations and Management: An International Journal, Vol. 2 No. 3, pp. 194-207.
Lukka, K. and Kasanen, E. (1995), "The problem of generalizability: anecdotes and evidence in accounting research", Accounting, Auditing & Accountability Journal, Vol. 5 No. 5, pp. 71-90.
Mahoney, T.A. (1997), "Scholarship as a career of learning through research and teaching", in Andre, R. and Frost, P.J. (Eds), Researchers Hooked on Teaching: Noted Scholars Discuss the Synergies of Teaching and Research, Sage, London, pp. 112-24.
Marginson, D.E.W. (2004), "The case study, the interview and the issues: a personal reflection", in Humphrey, C. and Lee, B. (Eds), The Real Life Guide to Accounting Research, Elsevier, Oxford, pp. 325-37.
Otley, D.T. and Berry, A.J. (1994), "Case study research in management accounting and control", Management Accounting Research, Vol. 5 No. 1, pp. 45-65.
Patton, M.Q. (2002), Qualitative Research & Evaluation Methods, 3rd ed., Sage, Thousand Oaks, CA.
Roberts, G. (2003), Review of Research Assessment, UK Joint Funding Bodies.
Saunders, M., Lewis, P. and Thornhill, A. (2007), Research Methods for Business Students, 4th ed., Pearson Education, Harlow.
Scapens, R.W. (1990), "Researching management accounting practice: the role of case study methods", British Accounting Review, Vol. 22 No. 3, pp. 259-81.
Scapens, R.W. (2004), "Doing case study research", in Humphrey, C. and Lee, B. (Eds), The Real Life Guide to Accounting Research, Elsevier, Oxford, pp. 257-79.


Seale, C. (1999), "Quality in qualitative research", Qualitative Inquiry, Vol. 5 No. 4, pp. 465-78.
Stones, R. (2005), Structuration Theory, Palgrave, London.
Symon, G. and Cassell, C. (Eds) (1998), Qualitative Methods and Analysis in Organizational Research: A Practical Guide, Sage, London.
Yin, R. (1984), Case Study Research: Design and Methods, 1st ed., Sage, London.
Yin, R. (1989), Case Study Research: Design and Methods, 2nd ed., Sage, London.
Yin, R. (2003), Case Study Research: Design and Methods, 3rd ed., Sage, London.
Young, J. and Preston, A. (1996), "Are accounting researchers under the tyranny of single theory perspectives?", Accounting, Auditing & Accountability Journal, Vol. 9 No. 4, pp. 107-11.

About the authors
Bill Lee is a Senior Lecturer in Accounting and Financial Management at the University of Sheffield's Management School, where his teaching responsibilities include contributing to the teaching of qualitative methods to Masters and PhD students. He is Secretary to the British Academy of Management's Research Methodology Special Interest Group and a Co-organizer of a current ESRC-funded seminar series on "Advancing Research in the Business and Management Field". His publications in the area include the research methods collection The Real Life Guide to Accounting Research, which he co-edited with Christopher Humphrey in 2004 and which is being reissued in paperback next year. Bill Lee is the corresponding author and can be contacted at: [email protected]
Paul M. Collier is an Associate Professor in Accounting at Monash University in Australia. He was previously a Senior Lecturer at the University of Aston in Birmingham, UK, and he held an Advanced Institute of Management Public Sector Fellowship to support his qualitative research studies of accounting in the police.
John Cullen is a Professor of Management Accounting and Associate Dean with responsibility for knowledge transfer at the University of Sheffield's Management School. He has researched and published widely, generally basing his work on case studies of both small and large organizations in the public and private sectors. He has also been innovative in utilizing case studies of his work in his teaching and has published a number of book chapters and articles in journals such as Accounting Education on the benefits of using case studies in teaching.



A researcher’s tale: dealing with epistemological divergence


Janet Bryant and Barbara Lasky
Swinburne University of Technology, Lilydale, Australia


Abstract
Purpose – The paper's purpose is to explore a theoretical and methodological dilemma.
Design/methodology/approach – Commencing doctoral research, and committed to an orthodox grounded theory approach, a unique story was uncovered which, to do it and the research justice, required an alternative form of representation. Intuition decreed that this should be narrative. However, grounded theory and narrative entail epistemologically and ontologically incommensurate paradigms. The paper seeks to consider whether inclusion of the unique story would compromise, or subvert, the already emergent grounded theory. An exploration of the relationship between different epistemological and ontological traditions is also to be made, based on the assumption that method "slurring," and a more eclectic approach to using incommensurate paradigms, may be valuable.
Findings – In transcribing and coding data using strictly orthodox grounded theory methods, the researcher runs the risk of "stripping" the research story of some critical dimension(s). However, by combining a narrative approach with that of grounded theory, the paper allows for the representation of an atypical "Maverick" case alongside other more typical cases.
Originality/value – The paper points out, to the early career qualitative researcher in particular, that it is legitimate to combine seemingly incommensurate methodologies, notably where not to do so would result in the loss of enriching and powerful insights into basic social processes.
Keywords Narratives, Qualitative research, Research methods
Paper type Conceptual paper

Introduction
The aim of this paper is to explore a theoretical and methodological dilemma that became evident to the researcher on completing stage one of a grounded theory project on implementation of best value legislation by local government authorities (LGAs) in Victoria, Australia. Initially there was a strong commitment to using orthodox Glaserian (1978, 1992, 1998, 2001, 2003) "grounded theory" to explore the implementation of best value. Quite early in the research process, however, a "story" presented by a participant was so compelling, and original, that just to "code" the interview transcript without a full inclusion of this particular participant's story would have seriously diminished the research outcomes. By critical voice, the authors refer both "literally" to the critique of state government legislation in terms of its impact on LGAs and the "lived lives" of their constituents, and to the potential of personal stories to demonstrate life as experienced directly by the interviewee. As a result of this particular interview, some limitations of using Glaserian "grounded theory" in a singular and orthodox manner became evident, providing an urgent imperative for closer examination of the contingencies entailed in combining grounded theory with narrative analysis. The paper has a number of sections. The following section provides the first author's reflexive account of her identity, and methodological stance, as researcher within this research. A short note explains the involvement of the second author.


Next, background to the research is presented, and the research problem is outlined. The Maverick interview is profiled and the nature of the methodological crisis is considered, and the emergent epistemological and ontological arguments, with their underpinnings in the literature of grounded theory and narrative, are debated. The paper concludes with some comments on the impact of such a "crisis of representation" on early career qualitative researchers.

The researchers
First author
The first author is a PhD candidate researching reform processes associated with the introduction of best value legislation into local government in Victoria in 1999. Under the guidance of her supervisor, the first author (hereinafter called the researcher, except where it was more appropriate to use the pronoun "I") was committed to applying orthodox Glaserian grounded theory. This certainly seemed an appropriate methodology as her focus was on investigating how actors implement and mediate legislation generated by, from, or within, the institutional structure of the state, as well as taking account of idiosyncratic local contingencies, strategic policies, and the cultural environments of the LGAs concerned. Despite initial commitment to all the axioms of the orthodox Glaserian approach, the imperative to include narrative analysis as part of the current grounded theory project stemmed, serendipitously, from her stumbling "unguarded" into the metaphysical domain accurately dubbed "the crisis of representation" by Denzin and Lincoln (2003, p. 27). This resulted from the unique nature of one particular interview – a "David and Goliath" story of such rich textual quality that the researcher felt that obedient compliance with the coding dictates of grounded theory would excise an important critical voice, diminishing somewhat the research outcomes.

Second author
The second author was not involved in the research here, but was experienced in narrative research within the local government area in the 1990s, during a period of enforced amalgamations and compulsory competitive tendering (CCT) initiatives. As colleagues, the decision to work on this paper together made sense.

The research
In Australia, local councils represent administrative subunits of the states. They function both to implement state-legislated changes and to resolve policy disputes at the sub-national level. During the 1990s, Victoria was driven by a state government whose agenda for local government reform focused on institutional arrangements. According to Martin (1999, p. 19), Victoria was "seen as the most proactive (of all states) having completed boundary reform [through amalgamation of local councils] within a specific timeframe," as well as achieving significant rate cuts and setting minimum standards for CCT. Enforced CCT initiatives and rate-cutting and rate-capping regulations prescribed by the state further exacerbated tensions arising from amalgamation. Vince (1997) later argued that the Victorian experience of amalgamation, with its poorly planned, hastily executed mergers that did not involve councilors, staff and community, resulted in long-term organisational problems that ultimately impacted negatively on service delivery, a point which became obvious in the "Maverick" interview.

As a result of these changes, LGAs experienced a plethora of enforced structural modifications over which they exercised little control. Perhaps the most notable of these was a massive reduction of councils and shires, from 210 LGAs to just 78, in a period of two-and-a-half years. Best value legislation, introduced in Victoria in 1999 by the newly elected Bracks' Labor Government, aimed to override the management strategy of CCT implemented by the prior conservative government. The new legislation implicitly recognised the negative impact of the highly regulative ethos of the CCT period and, more explicitly, acknowledged that a "one size fits all," market-driven approach to service delivery was not necessarily appropriate for achieving modernization of the local government sector. It heralded potential for an altogether more flexible, self-regulatory, democratic and ameliorative approach towards local governance.

From the first stage of data analysis it was evident that a context of uncertainty, and at times confusion, prevailed after the introduction of best value. Participants in the research cited a lack of specified guidelines for personnel responsible for the implementation, although the legislation required demonstration of community consultation and evidentiary documentation of effective service delivery. This constituted a key challenge, as presumably overly prescriptive guidelines would be counter to the self-regulatory ethos and flexibility implicit in the legislation. The greatest challenge noted by participants, however, was to produce a credible review procedure for their services that dovetailed best value program requirements with their organisation's capacity to deliver best value outcomes. The case represented by the "Maverick" interview (as the researcher dubbed it) illustrated that a purely intra-organisational response may not be the whole story with respect to implementation of best value in Victorian local authorities. The reactivity of at least some participants – in this instance, citizens of a rural community who took it upon themselves to re-establish the former (pre-amalgamation) boundaries of their Shire – called for some form of representation. This "Maverick" story goes to the heart of the identity of the social actors under study. In this particular case, the participant, along with local citizens, collectively resisted what was perceived as a threat, or damage, to the fabric of their community, while simultaneously developing processes to meet regulatory requirements of the legislation contributing to that damage.

The research problem
Initial coding operations were well developed, with data collection just beyond the "scoping" stage, and coding of eight in-depth interviews with organisational development personnel from a range of shires and municipalities throughout Victoria underway. In keeping with the analytic inductive approach (Silverman, 1997) of grounded theory, a thematic analysis based on codes drawn from the interview transcripts was undertaken. Organisation of data into categories and themes from the start enables a common thread to be readily identified and substantiated from the data, and Glaser (2001, p. 225) is insistent on "a set of fundamental processes that need to be followed if the study is to be recognised as a product of the grounded theory methodology." These include simultaneous data collection, coding and analysis; a range of subsequent coding operations; the use of constant comparison; and memoing, theoretical sampling, categorisation and densification to delimit a core category, thereby working the data toward identification of a basic social process.


As a result of preliminary data collection and analysis, several themes and a "latent" grounded theory were emergent. Things were generally moving in a predictable fashion. Suddenly, a "story" presented by one participant was so compelling and original that just to "code" the interview transcript would have seemed a travesty. Firstly, it would obfuscate the participant's right to individual voice; secondly, coding would dampen, or indeed diminish, the "David and Goliath" quality of the story. The reduction implicit in coding operations would have led to a "loss of critical voice", and the researcher was conscious that this might seriously diminish research outcomes, including the validity and authenticity of the parsimonious grounded theory apparent even at this early stage of analysis. Critical voice refers "literally" to the critique of state government legislation in terms of its impact on LGAs and the "lived lives" of their constituents, as well as to the potential of personal stories to demonstrate life as experienced directly by the interviewee, or more accurately the "owner" of the narrative.

Intuitively, narrative presented itself as a method which would enable inclusion of this interview, and countervail against the dependence of the orthodox Glaserian method on reifications of "lived experience" (albeit "grounded" ones). Clearly, a closer investigation was called for, as orthodox grounded theory was, methodologically speaking, decidedly not on an intimate footing with narrative, at least on epistemological grounds. Furthermore, it has recently been advanced (Rynes, in Editor's note to Suddaby, 2006, p. 633) that grounded theory in management research is in danger of losing its integrity both because of "overly generic use of the term 'grounded theory' and confusion regarding alternative epistemological approaches to qualitative research." This predicament has articulated into an "erosion versus evolution" debate between the originators of the approach:

The tension ultimately prov[ing] to be a point of departure between the founders of grounded theory, with Glaser favouring creativity and openness to unanticipated interpretations of the data while Strauss (and co-author Juliet Corbin) became advocates of adherence to formal and prescriptive routines for analysing data (Suddaby, 2006, p. 638).

In other iterations of grounded theory methodology, Charmaz (2005) argues: Glaser (2001) treats data as something separate from the researcher and implies that they (data) are untouched by the competent researcher’s interpretations. If perchance researchers somehow interpret their data, then according to Glaser these data are “rendered objective” by looking at many cases (pp. 510-511).

Buoyed by this position, it was felt that one should, and may, proceed without adhering rigidly to the neutrality and axioms of mid-century positivism. Grounded theory and narrative may be sufficiently compatible bedfellows, minimally in the domain of ontology and axiology, to legitimate some form of "marriage," which might accommodate their apparently fundamental epistemological incompatibility. To this end, the epistemological, axiological and ontological foundations of the two methods of inquiry were subjected to closer scrutiny. Several urgent questions immediately surfaced, catapulting me into the domain of reflexivity about the "knowledge-making" enterprise itself (Whitley, 1984). For example, what methodological dilemmas might occur if this story was maintained complete as "narrative" within the grounded theory project? Humphreys (2005), for example, in his discussion of autoethnography and reflexivity, attempts to achieve what Saldana (2003) describes as "a solo narrative . . . reveal[ing] a discovery and retell[ing] an epiphany in a character's life" (pp. 224-5), to illustrate the potential for adding value to qualitative research by using such an approach (p. 841).

How should one deal with epistemological divergence resulting from two research strategies sitting within alternative paradigms of inquiry? Glaserian grounded theory, with objectivist epistemological persuasions, sits theoretically in the interpretivist camp, whereas narrative analysis is a "child" of post-modernist constructionism. Epistemologically speaking at least, narrative sits in this transactional and subjectivist domain, meaning the researcher is essentially a co-creator with participants of the research outcomes. Thus, from an axiological standpoint, both researcher and participants should remain written into the research. Not so with grounded theory. Given these tensions, was it nevertheless legitimate, and valuable, to combine the two methodologies? Would doing so make the research outcome more robust? As an early career qualitative researcher, I was unsure. However, as Calas and Smircich (1999) argue: "Multi-paradigmatic awareness simply facilitates a very modern, metatheoretical discussion around these issues: What philosophy of knowledge is behind "truthful knowledge"? Each paradigm is a foundational claim (a metatheory) about the possibility of true knowledge. Each offers a way toward a more complete understanding or explanation of the social world. Each claims to be the best view of the world "out there". None [author's italics] accounts for the language game in which they may all be embedded" (p. 651). Yet importantly, they point out that the significance of having multiple paradigms in organisation studies is that they do "edge" us towards such reflexivity.

David and Goliath: the "Maverick" interview
The "Maverick" story was a community story, as well as being both an "inter"- and intra-organisational saga. This shire had succeeded, where no other shire or municipality had, in re-claiming its former identity, historical boundaries, and "community status" that the participant made amply clear were negatively compromised by imposed LGA amalgamation, and other public sector management strategies, such as CCT and best value:

Case 6 (Rural): "I picked up the pieces of what happened to the workforce under the forces of CCT. The quality of jobs had deteriorated remarkably and . . . we slowly built back up to a rational view of the world . . . I've lead the de-amalgamation [as]. . . country people were getting a bit pissed of . . . but it took six years for people to twig what happened [in which]. . . no council work done. There [had been] no council for the first couple of years [after the amalgamation]. . . The people they [Government] chose were completely incompetent at running any particular council. They made monumental errors in capital expenditure and about social values. So they set in train the de-amalgamation because they were unaware of what was going on. I'll give you an example. The white lines on our roads were being painted by people from Western Australia. But none of those people worked in our town or spent their money here. Now, every time ten people who live and work here . . . one extra high school teacher is put in the town" (excerpt from interview 6, C).

A powerful story
The "de-amalgamation" story demonstrated the capacity of citizen-based social movements to achieve ends one would have supposed "unthinkable," given the circumstances of the previous decade. It also illustrated the potential "unruliness" of unintended consequences reform processes may generate, even when circumstances render this relatively unlikely. In other cases, such as Case 4 illustrated below, participant responses were typically framed within essentially intra-organisational parameters – the stories were "organisational stories" rather than community or "inter"-organisational accounts, and they lacked the power of story as a site of representation of an epiphany in a character's life. For example:

Case 4 (Rural): "We have got a different mindset and framework [and] we have the pressure of Best Value (BV) to pass this magic test. We discovered you could just tick off a lot of the BV steps. In the process we've discovered a lot more about our services. So I suppose we are a bit more positive that we have built a new style which is relatively efficient . . . and organisation specific" (excerpt from interview 4, with H).

Case 4 (Rural): "We've been a bit slow implementing BV [but] we are doing it seriously now. People aren't seeing it as a threat. They are seeing it as an opportunity" (excerpt from interview 5, with C).

Methodological crisis: the problem of commensurability
After struggling for weeks, firstly to clearly articulate, and secondly to resolve, the methodological issues being faced in trying to "marry" two non-commensurate epistemological domains, I came upon the following quote in Denzin and Lincoln (2003, p. 267). They ask:

Are paradigms commensurable? Is it possible to blend elements of one paradigm into another, so that one is engaged in research that represents the best of both worldviews? The answer, from our perspective, has to be a cautious yes. This is especially so if the models (paradigms) share axiomatic elements that are similar, or resonate strongly between them . . . Commensurability is an issue only when researchers want to "pick and choose" among the axioms of positivist and interpretivist models, because the axioms are contradictory and mutually exclusive.

Pick and choose – that was exactly what I wanted to do! I did not want to treat the two methodologies as mutually exclusive, sensing that used in conjunction they would make a valuable contribution to my research. If these approaches at least shared ontological and axiological axioms, surely there might be a strong case for researchers to "pick" and "choose" and indeed to "slur" epistemologically incommensurate paradigms?

Grounded theory
Orthodox Glaserian grounded theory (Glaser, 1978, 1998, 2001) has the intent of developing middle range theories through identification of the basic social process used by actors to deal with contingencies of their everyday reality. It was an appropriate choice to further the aim of the research – "explication," or how actors solve problems in implementing legislation for public sector reform. Grounded theorists' questions are not concerned with how knowledge is constituted as "truth" by the social actors (the post-structuralist concern), or how their realities are structured through language. The grounded theorist wishes to identify (and theorise) how actors deal with their problems. The aims are pragmatic, that is, to know "what our informants' main concern is and how they seek to resolve it" (Glaser, 2001, p. 177). As methodology, grounded theory invites the researcher to tread similar pathways to those followed by traditional ethnographic researchers – carefully gathering data, and then arranging it in a consistent manner, generally according to the coding dictates of the method. Consequently, to a greater or lesser extent, participants are, ontologically speaking, "written out" of the story. This does not preclude them from "surfacing" indirectly by means of in vivo codes, or in vignettes supporting the emergent theory. The conventions of ethnographic realism also require researchers to write themselves out of the text, but include an expectation that researchers report clearly in the axiological domain, for example by exposition of their values, biases or the potentially "value laden nature" of any particular choice of study. Indeed, for this reason, the paper is presented as "A Researcher's Tale," both examining dilemmas confronted in doing grounded theory, and simultaneously acknowledging how, as researcher, one inevitably becomes part of one's research, regardless of whether or not this is consciously intended in the research design. One reason for this is that in-depth interviewing always provides potential for the researcher to participate in stories of participants as they are "told, lived and co-composed" in the course of the interview. Thus, at a point somewhere halfway through analysing the "scoping" interviews, the researcher was dismally aware that if she continued in a singular fashion with orthodox Glaserian methodology there was a danger of losing the multi-layered, and many-stranded, qualities of the interview material; most particularly of the one interview providing an excellent, if antinomous, case of the basic social process emerging from analysis of data so far.

Basic social process (BSP) is the term used by Glaser (1978, 1998, 2001) to identify the key category, or process, via which social actors confront and solve problems in negotiating their everyday experiences – in this case, interpreting and implementing best value legislation. So, importantly, BSPs exclude neither structural nor temporal dimensions as epistemological or ontological axioms for framing or interpreting social action. Glaser (1978) prescribes that data is worked via open, selective and theoretical coding processes towards ever higher levels of reification, moving it towards increasingly conceptual domains. Ultimately, by means of theoretical coding, researchers move towards generation of a grounded theory or "a set of integrated conceptual hypotheses organised around a core category" (Glaser, 2003, p. 2). In essence, this endeavour is positivistic because it requires "stripping" the data of much of its ontological robustness/connectedness, in the search for nomothetic patterns or regularities. Despite this, the theory that emerges remains "grounded," and indeed, provisional. In short, the participant is "written out" of the text in any sort of ontological sense, and the researcher is even more convincingly banished.

Narrative
In contrast to the aims of Glaserian grounded theory, narrative inquiry sees research as a continuous collaborative act, where research outcomes are idiographic, rather than nomothetic, representations of the lived experiences of the participants. In opposition to grounded theory, narrative privileges the role of "plot" in weaving together recognisable patterns of events, recognising that central to the plot structure are human predicaments and attempted solutions. Narratives are neither true, nor false, but simply forms of "reality representation." The language chosen by the individual, via which they relate their narrative, reflects how that individual sees the world.
The individual's construction of past events and actions serves to enable them to claim identities and construct their lives. Both culturally and organisationally, narratives serve to give cohesion to shared beliefs, and to transmit values. White (1981) observes narratives are "metacodes" for the nature of a shared reality. The corollary of this, ontologically, is that there is a plurality of discursive realities. Thus, "representations" of reality are perceived as just that – "representations." Research outcomes from this standpoint then essentially become just one more form of "representation." Yet in assuming such a standpoint perspective researchers make way for reflexivity and "a critical examination of the way modern (paradigmatic or foundational) knowledge has been constituted, without needing to provide for an alternative knowledge" (Calas and Smircich, 1999, p. 652).

Interestingly, Schwandt (2003) advances an additional epistemological framework to traditional interpretivist accounts that may potentially offer at least one avenue for accommodating some of the tensions outlined above emerging from the struggle to reconcile epistemologically non-commensurate paradigms. Although, according to interpretivist traditions, the interpreter typically objectifies that which is to be interpreted (thereby remaining unaffected by and external to the interpretive process), philosophical hermeneutics offers a "radically different way of representing the notion of interpretive understanding" (p. 300). In this model the role of interpreter is that of "exegete" or one engaged in critical analysis or explanation of a text or some human action (which Schwandt uses interchangeably) based on the method of the hermeneutic circle. Several axioms of philosophical hermeneutics potentially provide a foundation for epistemological reconciliation. Firstly, the conviction that understanding, in the first instance, is not a procedure or rule-governed undertaking; rather, understanding is interpretation – "it is a very condition of being human" (Schwandt, 2003, p. 301). Secondly, understanding requires engagement of one's biases, including a close examination of "our historically inherited and unreflectively held prejudices" (Schwandt, 2003, p. 302) because these function to disable our efforts to understand others as well as ourselves. Thirdly, only by entering into a dialogic encounter with what is "other," or not understood, can we expose ourselves to risking and testing our preconceptions. Finally, the act of understanding is essentially "lived," or "existential," and "does not occur in two separate steps of first acquiring understanding; and secondly applying that understanding" (Schwandt, 2003, p. 303).

What Schwandt (2003) offers the researcher here, then, is the notion of understanding as simultaneously a form of moral-political engagement, and a mode of inquiry not aimed at developing a procedure of understanding, but rather directed to clarifying the conditions in which understanding takes place. Philosophical hermeneutics opposes "naïve realism or objectivism with respect to meaning, and can be said to endorse the conclusion that there is never a finally correct interpretation" (p. 302). This position would be acceptable to some constructivists, including Glaser, yet philosophical hermeneutics recognises meaning as not necessarily constructed (as in created, or assembled) but as an outcome of negotiation; or a "coming to terms" between actors, thus "embodying" the researcher squarely within the research "product". In this sense understanding is always dialogic, participative and bound up with conversation, the form of which an in-depth interview essentially mirrors.
This resolution of the reflexivity conundrum interfaces comfortably with Ellis and Bochner's (2000, p. 773) reflections on autoethnography as "an autobiographical genre of writing and research that displays multiple layers of consciousness, connecting the personal to the cultural." Coffey and Atkinson (1996) point out that narratives and stories are an obvious way for social actors to talk to strangers, in this case, the researcher. To the post-modernist, reality is complex and multi-faceted, but the paradigm is often selected as the most appropriate epistemological framework for accessing behavioural issues, such as the responses of local government personnel to implementing best value. Further, beyond allowing individuals the freedom to explore their own career paths, narratives also function to bring to the surface issues that can then be understood at a level other than the rational and external with which individuals too often order their lives, thus deepening understanding in the moral-political and existential sense that Schwandt flags as the epistemological contribution of philosophical hermeneutics. In contrast with the objectivist epistemological assumptions of interpretivism, the constructivism of the post-structuralist paradigm of inquiry entails:

. . . the view that all knowledge, and therefore all meaningful reality as such, is contingent on human practices, being constructed in, and out of, interaction between human beings and their world, and developed and transmitted within an essentially social context (Crotty, 1998, p. 42).

A commitment to constructivism is subjectivist because it entails the assumption that meaning is essentially "constructed" by participants, in any given situation. This position is essentially common to both the post-structuralist and interpretivist paradigms of inquiry, but it is epistemologically incompatible with the objectivism and the realism of orthodox grounded theory, as espoused most recently by Glaser (2001). For these reasons, I sensed immediately the story of the "Maverick" interview had potential to expand, as it were, the "ontological sufficiency" of the lived experience both of the interviewee, and the community represented in the case of this "lived story." The retelling of the story was surely axiomatic of how public sector reform was managed, in the Victorian local government context? For those who practise narrative as method, interestingly, there was a curiously close overlay in this instance between inclusion of this particular narrative and what Glaser refers to as "fit." By "fit" Glaser (1978, p. 4) means the categories of the theory must fit the data. That is, the data must not be forced to fit pre-existent or pre-conceived categories, or be discarded in favour of keeping extant theories intact.

Slurring
The powerful storied quality of the interview called for an alternative form of representation to the inevitable "reification" entailed in the procedures of "grounded theory." According to Glaser (1978), "open coding" requires "running the data open" by coding different incidents into as many categories as possible, beginning with fracturing the data into analytic pieces that can be raised to a conceptual level. Notably, this method is quite the counter to what Rice and Ezzy (1999, p. 125) consider "proper", namely:

Narrative analysis is distinguished from other forms of qualitative data analysis by its attention to the structure of the narrative as a whole. Traditional thematic analysis typically fragments texts, coding small chunks then collating them. Narrative analysis searches for larger units of discourses and codes their structure and thematic content.

A narrative approach would counterbalance the confronting limitation in coding the Maverick interview, and the sense the researcher had of emasculating this unique story by destroying its ontological substance. “Losing the plot” through fragmentation of the powerful narrative quality of the interview could be avoided.

The possibility of subverting what Glaser (2003) refers to as "relevance," a key criterion for the robustness of a grounded theory, in understanding the take-up of best value was also apparent, as codes established from the data are essentially reifications of the actors' subjective definitions of their world, although directly grounded in them. Whilst coding takes place in close proximity to participants' constructs about reality, the codes do not, strictly speaking, "exactly" replicate these. However, this does not exclude codes from directly "mirroring" vivid phrases participants used to encapsulate their reality, as is frequently seen in grounded theory research. Nor does it preclude use of vignettes or direct quotations from our data. So as method, grounded theory admits merely "glimpses" of the plot.

Glaser (2003, p. 14) argues four key criteria are equally applicable to conducting grounded theory research and for judging or evaluating it. If appropriate procedures are adhered to they will generate "theory that fits, works, is relevant, and readily modifiable." In an attempt to delineate overlays and points of divergence between the two methods, let us first consider these in relation to narrative analysis. The first of these criteria concerns validity and Glaser uses the word "fit," which in grounded theory derives from the method of constant comparison. Via constant comparison, the researcher ensures concepts adequately express patterns in the data they purport to conceptualise. To ensure "fit," the method of constant comparison must be used for the duration of the research. "Fit" is not an explicit issue for the narrative analyst, simply because an extremely close ontological overlay between the act of developing "plot" and the participant's experience of reality renders it redundant. However, narratives must "ring true" to become part of social currency, minimally in the groups that generate them. Thus, a number of authors have sensibly pointed out that people are not free to fabricate narratives "at will": "The stories produced by individual social actors would make no sense if they did not accord, however obliquely, with broader social narratives" (Lawler, 2002, p. 251). So in presenting a narrative, Glaser's criterion of "fit" essentially plays a role by default rather than as an outcome of the principle of constant comparison.

Secondly, for Glaser, "workability" is an indicator that grounded theory generates hypotheses sufficiently accounting for how participants in a substantive area continually resolve their main concerns. So, grounded theory must have explanatory value, not just a descriptive purpose. In this sense, "workability" is also central to narrative analysis, but resolution may be expressed as an element of the plot, for example, or as part of a coda, rather than in a categorical sense, as an identification of a basic social process, or a core category. While Glaserian grounded theorising is predicated on identification of the mode of problem resolution actors utilise to mediate their daily reality, narrative does not preclude such a possibility. Resolution is not strictly necessary as an "outcome" of narrative research, because stories can equally well be flexible, ambiguous, open-ended or have multiple interpretations (Rice and Ezzy, 1999). The third characteristic of Glaserian grounded theory entails "relevance," not just to an academic audience, or to the researcher, but via explication of the participants' main concerns.
Features of good narrative are entirely compatible with this criterion of robust grounded theory. In the case of the "Maverick" transcript, the participant's passion for relational aspects of his community shone throughout; for example, he identifies those disenfranchised by the amalgamation, and other key players in the story, and expounds on the general contribution of various community members in the township's mobilisation to "de-amalgamate." Furthermore, in clarifying "how things work" in rural communities, the participant makes evident the power of stories to bind individuals as they realise they have common experiences underlying their shared identity and linked futures. Stories of this nature have the capacity to enable members of organisations, as well as the reader/"listener," to "step into the story recreating the world it presents, and retaining the experience," in this way making the story their own. The researcher too is compelled towards "auto-ethnographic" representation as defined by Ellis and Bochner (2000). What could be more relevant?

The fourth and final evaluative criterion for Glaser entails modifiability. Grounded theories are always "modifiable" on the basis of new data, and this should not be confused with verification. Grounded theories are never "right" or "wrong." Such data likewise never provides "proof" or "disproof" as such, just new analytic challenges. Although grounded theory requires identification of basic social processes, and explanation of how these are used for problem resolution by the social actors, Glaser acknowledges the provisional nature of such theorising. With grounded theory then, the "potential" for explanation (the positivistic tendency of this) would not necessarily be compromised by the flexibility and open-endedness one celebrates in using the narrative method.

The important point in examining Glaser's four evaluative criteria for robustness in grounded theory is that the gap between narrative and grounded theory as methods for producing particular research outcomes closes. Indeed, the gap is quite minimal, particularly from the axiological and ontological standpoint. Schwandt (2003) provides an avenue for accommodating epistemological divergence between the two methodologies, by including philosophical hermeneutics as a fourth form of interpretivist understanding. "Relevance" in the Glaserian sense is self-evident in the aspects outlined above of the "Maverick" narrative. One of the dilemmas of a coding operation, and indeed adherence to grounded theory procedures generally, is that it may erode the relevance of the "research act" for all participants/stakeholders implicated in the research. The "Maverick" interview transcript that set in train these theoretical and methodological "musings" clearly had qualities making this case sufficiently different from other cases to merit its inclusion as a narrative, rather than "submerging" it in a grounded theory. With the story of "de-amalgamation" unfolding, both temporally and from the standpoint of the participant, it provided both descriptive and explanatory attributes of a good "plot." A richly fabricated story emerged, with "story lines" densely woven around a number of themes, amplified, but dissimilarly "resolved," in other interviews, such as:

Case 15 (Rural): "Local knowledge went out the door here because the ones with useful skills just go in those circumstances [enforced amalgamation] . . . but Council ended up referring back to people who were doing the jobs before in one way or another. In the small communities like O, or distant places like M, you didn't have an 'out there' workforce so you are walking down the street and you might see Joe Blow who maintains roads and let him know directly what's wrong . . . that's how things get done in the country. Now you have to put in a formal call and contractors have to be available on a certain day . . . it still works but whether it works any better with contracting is debatable. So with Best Value a lot of our energy has been bringing services back in-house" (Int. 15, G).

Case 3 (Metropolitan) I think there still is some scepticism about Best Value because CCT put “off” a lot of people, quite literally! It built up a lot of barriers between teams. That made it actually become competitive in-house between units . . . a lot of “them” and “us”. I think communication systems are just now starting to be identified, and still some communication barriers need to be addressed . . . but a lot of councils really did BV like they had done CCT, it was just under a slightly different banner . . . but the approach taken here was very different. The CEO here was very much about developing the organisation and so he just incorporated best value principles into an overall program he wanted to introduce (Int.3.J).

Some concluding comments
The researcher here is a PhD candidate and early career researcher. In the course of conducting her PhD research she suddenly faced a confronting theoretical and methodological dilemma. Propelled into the realm of reflexivity, the challenge was to articulate research outcomes without becoming entrapped in "unreflexive" representational forms and "paradigmatic webs" which may easily typify modernity, and subsequently dictate methodological orthodoxy. Fearful of committing an unforgivable methodological gaffe, but at the same time fascinated by the problem, and recognising that other early career researchers must also, on occasion, face similar dilemmas, she set out to explore if two non-commensurate epistemological domains, in this case orthodox Glaserian grounded theory and narrative, could be merged to provide a more robust research outcome.

The researcher's "crisis of representation" occurred in the process of transcribing and coding a "Maverick" case. Whereas in all other cases adaptive intra-organisational responses to finding solutions to the problems of implementing best value were evident, the "Maverick" interview represented an extra-organisational and community-based response to the imposition of regulatory pressures entailed in the legislation. The "David and Goliath" magnitude of this response made the researcher feel this interview would supply a validating dimension to any theory emerging from the data, representing, so it seemed, a site of resistance to government reforms. What appeared was a phenomenon of episodic proportions, to which only narrative method would do justice, in conjunction with a more "piecemeal," patterned and detailed analysis of micro-processual phenomena more malleable to "grounded theorizing."

Grounded theory, methodologically speaking, is primarily inductivist. Glaser (1998) acknowledges, however, that grounded theory borrows deductive methods from the positivistic/empiricist paradigm, for example during the theoretical sampling stage. He is also positivistic in prescribing that theory is the end-product of the research process. From an epistemological standpoint, however, although grounded theory interfaces comfortably with the epistemological constructivism of an interpretivist paradigm of inquiry, it is incommensurate with the relativistic persuasions and constructionism of narrative analysis. Polkinghorne (1988) says: "Narratives exhibit an explanation instead of demonstrating it" (author's italics). Stories can function to find an intentional state that mitigates (or at least makes comprehensible) a deviation from a canonical cultural pattern. This is possible because the story does not depend on its reference to some extralinguistic reality, but on its openness for negotiating meaning. As Bruner (1990) claims, "stories are especially viable instruments for social negotiation," echoing Schwandt's (2003, pp. 49-50) analysis of the epistemological axioms of philosophical hermeneutics discussed previously. The narrative of the "Maverick" interview presented precisely this opportunity.

Despite some epistemological differences, on closer examination there is a great deal of convergence, or methodological overlay, between these approaches, suggesting that, while grounded theory and narrative are somewhat challenged on grounds of epistemological intimacy, they do have potential to be very compatible ontological "bed-fellows." The "Maverick" case clearly demonstrated potential to provide an adjunct for "fit, relevance, workability and modifiability," the principles Glaser (2003, p. 14) adopts as criteria applicable to conducting and evaluating, as well as judging, grounded theory. "Losing the plot" through fragmentation of the powerful narrative quality of the "Maverick" interview could be avoided.

Despite general epistemological incommensurability of the paradigms in which narrative and grounded theory are typically situated, it has been demonstrated that these two approaches share much in common. It is on this basis, it is argued, that they can potentially make more than good "bed-fellows." Firstly, although narrative inquiry sits more comfortably with the constructivist/subjectivist assumptions of post-structuralism than with the realist/objectivist ones drawn on in grounded theory, it nevertheless shares with grounded theory a multi-layered perspective of reality. So, in terms of ontological assumptions, post-structuralism is commensurate with interpretivism. Interpretivists, for example, readily recognise that in any given context there are multiple realities. This logically entails accepting that multiple methods of inquiry are useful if reality is to be properly understood. For interpretivists, meanings are simply the basis of symbolic exchange and the main "currency" via which actors negotiate daily reality. Parallels here between grounded theory approaches and narrative inquiry are obvious. There is not much of a gap between a method which aims to elicit and attend to the "storied" qualities of the social actor's experience, and one which aims to attend to processual aspects of their daily lives. The difference is really just one of how the researcher attends to "mining" the data.

Apart from sharing these "multi-layered" ontological understandings, a second area of convergence concerns the primacy which both narrative inquiry and grounded theory give to "agentic" attributes of social acts. Both forms of inquiry are premised on the notion that individuals subscribe to, and rely on, symbolic exchange. The medium of exchange may present a slightly different focus. For the narrative analyst this occurs via narratives (stories) used to represent and to understand the world around them, as well as to invest their lives with meaning (Sarbin, 1986), or via symbolic exchanges punctuating mediation of the ordinary events in the daily lives of individuals. In narrative inquiry, events and exchanges are given meaning according to the part they play in either the whole of an individual's life, or for respective parts of their life, whether at work, with family, or in some other organisation. As Eisner (1997, p. 60) says: "We make our experience, not simply have it," a statement few grounded theorists, or interpretivists generally, would care to challenge. Accordingly, narrative inquiry includes a temporal dimension which interpretivists have tended to neglect. "Story telling" then, is not just a site where meaning is constructed by actors. The stories themselves assume ontological significance beyond the immediate situated context in which they are generated. They become repositories of the actors' past, sites where actors invent the present, and a means for social actors to envision a future. In short, the story is a very suitable vehicle for representing the atypical "Maverick" case
alongside the grounded theory practising alchemy which explicates/objectifies the shared basic social process of others responding in more typical ways to the challenges of implementing best value in Victorian local government. It was with considerable relief that the researcher could see, at least insofar as orthodox Glaserian grounded theory and narrative go, it could be argued quite cogently that inclusion of a "Maverick" case within a grounded theory project was not only appropriate, but would enhance the reader's feeling for, and understanding of, all participants' stories in a more inclusive way. As an early career researcher, she could relax, confident in the knowledge that she could defend her position and remain true to all the research participants, no matter what their story.

References
Bruner, J.S. (1990), Acts of Meaning, Harvard University Press, Cambridge, MA.
Calas, M.B. and Smircich, L. (1999), "Past postmodernism? Reflections and tentative directions", Academy of Management Review, Vol. 24 No. 4, pp. 649-71.
Charmaz, K. (2005), "Grounded theory: objectivist and constructivist methods", in Denzin, N. and Lincoln, Y. (Eds), Handbook of Qualitative Research, Sage, Thousand Oaks, CA, pp. 509-35.
Coffey, A. and Atkinson, P. (1996), Making Sense of Qualitative Data: Complementary Research Strategies, Sage, Thousand Oaks, CA.
Crotty, M. (1998), The Foundations of Social Research: Meaning and Perspective in the Research Process, Allen and Unwin, St Leonards.
Denzin, N. and Lincoln, Y. (Eds) (2003), The Landscape of Qualitative Research, 2nd ed., Sage, Thousand Oaks, CA.
Eisner, E.W. (1997), "The new frontier in qualitative research methodology", Qualitative Inquiry, Vol. 3 No. 3, pp. 259-73.
Ellis, C. and Bochner, A.P. (2000), "Autoethnography, personal narrative, reflexivity: researcher as subject", in Denzin, N.K. and Lincoln, Y.S. (Eds), Handbook of Qualitative Research, 2nd ed., Sage, Thousand Oaks, CA, pp. 733-69.
Glaser, B.G. (1978), Theoretical Sensitivity, Sociology Press, Mill Valley, CA.
Glaser, B.G. (1992), Basics of Grounded Theory Analysis: Emergence Versus Forcing, Sociology Press, Mill Valley, CA.
Glaser, B.G. (1998), Doing Grounded Theory: Issues and Discussions, Sociology Press, Mill Valley, CA.
Glaser, B.G. (2001), The Grounded Theory Perspective: Conceptualization Contrasted with Description, Sociology Press, Mill Valley, CA.
Glaser, B.G. (2003), The Grounded Theory Perspective II: Description's Remodelling of Grounded Theory Methodology, Sociology Press, Mill Valley, CA.
Humphreys, M. (2005), "Getting personal: reflexivity and autoethnographic vignettes", Qualitative Inquiry, Vol. 11 No. 6, pp. 840-60.
Lawler, S. (2002), "Narrative in social research", in May, T. (Ed.), Qualitative Research in Action, Sage, London, pp. 242-58.
Martin, J. (1999), "Leadership in local government reform: strategic direction versus administrative compliance", Australian Journal of Public Administration, Vol. 58 No. 3, pp. 10-24.
Polkinghorne, D. (1988), Narrative Knowing and the Human Sciences, State University of New York Press, Albany, NY.

Rice, P. and Ezzy, D. (1999), Qualitative Research Methods, Oxford University Press, South Melbourne.
Saldana, J. (2003), Longitudinal Qualitative Research: Analyzing Change through Time, Rowman and Littlefield, Oxford.
Sarbin, T.R. (Ed.) (1986), Narrative Psychology: The Storied Nature of Human Conduct, Praeger, New York, NY.
Schwandt, T.A. (2003), "Three epistemological stances for qualitative inquiry", in Denzin, N. and Lincoln, Y. (Eds), The Landscape of Qualitative Research, 2nd ed., Sage, Thousand Oaks, CA, pp. 292-331.
Silverman, D. (Ed.) (1997), Qualitative Research: Theory, Method and Practice, Sage, London.
Suddaby, R. (2006), "From the editors: what grounded theory is not", Academy of Management Journal, Vol. 49 No. 4, pp. 633-42.
Vince, A. (1997), "Amalgamations", in Dollery, B. and Marshall, N. (Eds), Australian Local Government: Reform and Renewal, Macmillan Education, Basingstoke, pp. 151-72.
White, H. (1981), "The value of narrativity in the representation of reality", in Mitchell, W.J.T. (Ed.), On Narrative, University of Chicago Press, Chicago, IL, pp. 1-23.
Whitley, R.D. (1984), The Intellectual and Social Organization of the Sciences, Oxford University Press, Oxford.

Further reading
Charmaz, K. (2000), "Grounded theory: objectivist and constructivist methods", in Denzin, N. and Lincoln, Y. (Eds), The Landscape of Qualitative Research, Sage, Thousand Oaks, CA, pp. 249-91.
Glaser, B.G. (1994), More Grounded Theory Methodology: A Reader, Sociology Press, Mill Valley, CA.
Glaser, B.G. (1996), in Kaplin, W.D. (Ed.), Gerund Grounded Theory: The Basic Social Process Dissertation, Sociology Press, Mill Valley, CA.
Glaser, B. and Strauss, A. (1967), Discovery of Grounded Theory, Aldine, Chicago, IL.

About the authors
Janet Bryant teaches sociology and research methods at Swinburne University. Research interests include social policy on youth homelessness; her current doctoral study of changes to public sector management; and participation in action research to augment students' real-world learning via collaborative projects with Swinburne's Centre for Regional Development, including The Role of Regional Universities, and participation in a large intra-institutional Social Indicators Project. Janet Bryant is the corresponding author and can be contacted at: [email protected]
Barbara Lasky, an Associate Professor, is the Academic Leader of the Business Enterprise Group at Swinburne University of Technology, Division of Higher Education Lilydale. Her research interests include issues of organisation behaviour and group dynamics, and action research and narrative methodologies. She has published both nationally and internationally. E-mail: [email protected]

The "singular view" in management case studies

Sue Llewellyn
The Management Centre, The University of Leicester, Leicester, UK, and
Deryl Northcott
The Auckland University of Technology, Auckland, New Zealand

Abstract
Purpose – This paper aims to challenge the conventional wisdom in qualitative case study research that the findings of the case depend on the identification of common themes across the statements of multiple case informants (usually, as expressed at interview).
Design/methodology/approach – This is a methodological paper that uses a published work to illustrate its arguments. It explores research on the meaning and significance of politically and culturally sensitive emergent change.
Findings – The paper finds that, during such change, many respondents may not accurately discern the "direction of travel" in their organization and, hence, gathering evidence on common views may not be a productive research strategy.
Research limitations/implications – It was only possible to use one illustration (politically and culturally sensitive emergent change); other scenarios where the "singular view" may be significant were, therefore, not covered.
Practical implications – Ultimately, the findings of a case study may have to rely on insights from just one respondent.
Originality/value – This paper argues that for some research agenda "singular views" may be more insightful than "common themes." It also discusses the development of research that is prompted by a "singular view."
Keywords Qualitative research, Case studies, Research methods, Interviews
Paper type Conceptual paper

Qualitative Research in Organizations and Management: An International Journal, Vol. 2 No. 3, 2007, pp. 194-207, © Emerald Group Publishing Limited, 1746-5648, DOI 10.1108/17465640710835355

Introduction
Only one person holds the singular view. Along with being unique, the singular view also connotes the idea of an extraordinary insight or a particularly perceptive understanding of the significance of a situation. This paper is concerned with the validity of the "singular view" in the context of qualitative management research that seeks to uncover the sense and significance[1] of organizational situations (or events). This type of research is by no means the only kind of qualitative research undertaken in the field of management. Qualitative management research can also be focussed on discovering the characteristics (or defining features) of situations, events or institutions. There is no hard and fast distinction between "sense and significance" and "characteristics" – indeed, often the latter have to be defined before the former can be debated! Nevertheless, research that is primarily concerned with "sense and significance" is relatively frequent in qualitative work. This distinction between "characteristics" and "sense and significance" (essentially, the difference between "what is this?" and "what does this mean?") is important in the context of the value of the "singular view," hence it is discussed further in the illustrative example (set out later) that forms the core of this paper.

The illustrative example used is a (now) published paper by the authors where the value of the evidence we obtained from our singular respondent was, at first, challenged by the reviewers on the grounds that it was not representative. Subsequently, as tends to be the norm in the pragmatic world of academic publishing, we addressed the reviewers' concerns in the way they indicated would be acceptable to them, through further substantiating evidence (that in our case became available over time). However, we are not necessarily arguing for such "longitudinal triangulation" in this paper. Although such strategies may be helpful and will increase general confidence in research findings, in this paper we wish to explore a situation where reliance on a "singular" view – unsupported by other evidence – is valid. Even in the context of work concerned with the sense and significance of events (rather than their characteristics), qualitative researchers have been reluctant to trust insights unless they are fairly widely held; they have followed quantitative researchers in placing more reliance on majority rather than minority views. This paper discusses why this is the case and argues that under many circumstances encountered in qualitative management research it may actually be necessary to discount the common view in favour of the singular one.

Where circumstances are complex and changing, when conditions pertain that have not been encountered before, when situations are highly politicised and where there are many stakeholders with different agenda, it is hardly surprising that there are multiple and different assessments of "what's going on." Moreover, any view on "what's going on" is a "view from somewhere"; by definition the views of stakeholders will be views from the social and political "stake" that they have in the situation (or event). Amongst these stakeholder views there are likely to be some based on more knowledge than others, some that are formed on superior judgements to others and some from positions that give more insight than others into "what's going on." Hence, one may conclude that amongst the many conflicting views that may arise over the meaning and significance of a situation some views will be "closer to the mark" than others. Ultimately, at a particular point in time, there could be just one person who has "got it right." If it is conceded that the singular view may be the most insightful one, why has case study research (which actually rests on the proposition that there is something to be learned from the single case) always sought multiple sources of evidence within the single case? This puzzle is explored next before the case for the singular view is made (and explained in relation to a specific example of empirical research).

Why have we sought the "common view" in qualitative management case studies?
Qualitative management case studies generally seek to establish "what's going on" in any organization with primary reference to the views of multiple rather than single informants (often in conjunction with multiple rather than single sources of documentary evidence). As argued in the introduction, in terms of establishing the key characteristics (or the "what is this?") of any situation or event this approach is generally the most valid one. However, because of the issues discussed earlier (i.e.
differential stakeholder perspectives, knowledge bases, judgement abilities and access to information), the "more the merrier" method may not be the most productive when it comes to working out the "what does this mean?" aspect of "what's going on." This section discusses two interconnected issues that have tended to stop qualitative researchers sometimes taking a risk and going with a "singular view": these are the history of the development of qualitative research and the role of theory in qualitative work. Both of these matters are of huge significance in their own right; what follows is only an attempt to sketch some brief comments pertaining to the main focus of this paper.

The development of qualitative research
Proponents of qualitative management case study research struggled to establish its legitimacy in the context of the dominant positivist paradigm resting on tenets derived from scientific work, i.e. rigorous testing, quantitative evidence, validity, predictability, reliability, and generalization (Scapens, 1990; Llewellyn, 1992). Detractors of case studies dismissed them with such descriptors as "anecdotal," "unsubstantiated," and "subjective." Most damning of all was the indictment that case studies were "unscientific." But there was an issue that was even more significant than combating this criticism in the development of the case study – this was simply that the case study was a new research approach. Even those advocating the case study inevitably began to understand their new methods in the light of positivism as the "scientific method in social science." Consequently, researchers tended to argue the case for case studies largely by trying to meet some of the criteria for "good research" adopted by the scientific community. Rather than develop new criteria for judging the value of qualitative case study research and some fresh linguistic terms for describing its central tenets[2], qualitative researchers developed their methods largely from within the dominant paradigm of scientific research. This does not deny that there were "battles" between case study researchers and "positivists"; indeed, for some time now, self-avowed positivists have been hard to find within social science (Bryman, 1988). But, despite all the rhetorical distancing from positivism, in the qualitative camp there continued to be a dominant concern with issues around the "representativeness" of data and the "generalizability" of case study findings. And these criteria are, basically, derived from the scientific method. If the intent of the qualitative research under discussion is to discover the key characteristics of a situation or event (i.e. working out "what is this?") then a concern with representativeness and generalizability is, usually, appropriate. But, if the focus is primarily on figuring out the sense and significance of "what's going on" then these criteria are less relevant. Indeed, on occasions, they may impede rather than assist the research investigation. The concern with "representativeness" will be discussed first.

One classic text in the development of research methodology in social sciences is Winch (1990). He challenges John Stuart Mill, whom he quotes as stating ". . . understanding a social institution consists in observing regularities in the behaviour of its participants and expressing these regularities in the form of a generalisation" (Winch, 1990, p. 86). Winch argues that Mill's view is still very influential in the work of contemporary social scientists. "Regularities" (or frequent occurrences) should be the focus of any social research and the occurrence of these regularities makes researchers confident of "generalizing" from their observations. If researchers are primarily concerned with "regularities in behaviour" then representativeness is of prime significance because without adequate "coverage" the regularity (or pattern) in the observed behaviour may not be discernible. This early concern with regularity and, hence, representativeness, continued into interview-based as opposed to observation-based research. The "common theme" in interviews is equivalent to the regularity in behaviour that Mill stressed. Underlying this approach is an assumption that what is not exhibited as regular behaviour, or what is not echoed as a common theme, lacks significance. What is not regular or common becomes, by implication, arbitrary or even senseless – at least so far as the research investigation is concerned. But although Winch (1990, pp. 86-91) was also concerned with regularities[3], he critiqued Mill because Winch was an early exponent of the view that identifying behavioural regularities in social science must encompass the perspective of the agent, which, in turn, makes reference to the cultural context within which the agent acts.

The case study methodology grew from the ideas of Winch (and others) that understanding social phenomena involves an appreciation of the context within which they occur – indeed, the case study method embodies the notion that it is frequently impossible to discern any strict boundary between social phenomena and context. This conceptualisation of "understanding" challenged the idea that the "observation" of social behaviour was sufficient as a research method in social science because it is impossible to have an adequate understanding of context without grasping the context that the human agent perceives for her or his "behaviour." So the case study methodology emphasises contextual understanding but, despite this focus on context, a concern with representativeness (to capture regularities) remains. Perhaps the most quoted source on the case study is Yin (1989, p. 23); here is his definition:

A case study is an empirical inquiry that:
• investigates a contemporary phenomenon within its real-life context; when
• the boundaries between phenomenon and context are not clearly evident; and in which
• multiple sources of evidence are used.

The "what is this?" question of "characteristics" is implicit in Yin's idea of an "investigation". The related question of "what does this mean?" is not ruled out in Yin's formulation of the case study, but is not so evident. The idea of representativeness is expressed through Yin's insistence on "multiple sources of evidence". The implication is that researchers can be more confident of their findings when many informants offer the same conclusions. They are also reassured that they are on the right path when different types of evidence (e.g. observation, interviews, and documentary sources) point in the same direction. Other authors have referred to the garnering of multiple sources of evidence as "triangulation". For example, Sayer (1992, p. 223) describes a ". . . triangulation process in search of inconsistencies, mis-specifications and omissions". The idea is that through triangulation a researcher will be more sure that inconsistencies have been resolved, mis-specifications erased and omissions avoided. But this concern with triangulation (or multiple sources of evidence) to ensure representativeness makes more sense with regard to identifying the "characteristics" of an event than to discerning its meaning. But before pursuing this in more depth, this section turns finally to exploring, briefly, a concept that is closely related to representativeness – the idea of generalization.

Generalization is related to the idea of representativeness because if researchers are sure of the representativeness of their data in case studies, then they will be confident of "generalizing" to other cases where the same circumstances hold. Again Yin (1989, p. 38) has been influential in setting out an understanding of what is implied by generalization in relation to case studies. He makes a distinction between "statistical" and "analytic" (or "theoretical") generalization, stating:

In statistical generalization, an inference is made about a population (or universe) on the basis of empirical data collected about a sample . . . [He warns] A fatal flaw in doing case studies is to conceive of statistical generalization as the method of generalizing the results of the case . . . [He asserts] . . . the method of generalization [in case studies] is "analytic generalization" in which a previously developed theory is used as a template with which to compare the empirical results of the case study. If two or more cases support the same theory, replication may be claimed.

So Yin's idea here is that if the researchers have analysed cases with similar circumstances, and if the results of the cases support the same theory, confidence in this theory is strengthened. The significance of theory for case studies goes back to the emphasis on contextual understanding. As argued above, Winch and others asserted the need to understand context from the point of view of the case participants, but their understanding may not be exhaustive. There may well be wider contextual issues that transcend the understanding of the case participants. To achieve this broader view a theoretical framing of the case is required.

The role of theory in qualitative work
Appropriate theorization frames (or extends) mere information from the case and turns it into useful knowledge, so "theorization (or conceptual framing) is the 'value-added' of qualitative academic research" (Llewellyn, 2003). This value-added "merger" between the views of informants and academic theory is not, however, easily accomplished, nor is it immune from challenge. In interview work in case studies there is an uneasy balance between merely reproducing and legitimising an existing theory (through simply illuminating or giving life to theory through the selective use of quotes) or relying solely on an ethnographic account, which taps only into the limited perspective of the case informants (Llewellyn, 2003; Wainwright, 1997). If qualitative management research takes the statements of informants on "what's going on" as a starting point but adds "value" by extending (or even challenging) their views, it is hardly surprising that there is an emphasis on the "common view" to demonstrate both that the opinions of informants have been adequately canvassed (the representativeness issue) and that their ideas link substantively into the proposed conceptual frame (the validity of theorization issue). Silverman (2001, p. 223), for example, advises qualitative researchers to pay attention to two problems identified by Fielding and Fielding (1986, p. 32):

. . . a tendency to select their data to fit an ideal conception (preconception) of the phenomenon [and] a tendency to select field data which are conspicuous because they are exotic at the expense of less dramatic (but possibly indicative) data.

He goes on to argue, drawing on Mehan (1979) and Bryman (1988), that the representativeness of the interview data selected is of prime significance when assessing qualitative research. The approach to theorization that Silverman is discussing here is a broadly deductive one (i.e. using theory to explain key issues, in particular ones that are only partially understood by the case informants). But this concern with representativeness is also very apparent in the inductive approach to theorizing in case studies and this is explained next.

In their text The Discovery of Grounded Theory, Glaser and Strauss (1967, p. 2) linked the usefulness of qualitative research to the possibility of generating inductive theory; they stated that "The basic theme in our book is the discovery of theory from data systematically obtained from social research". Their description of the data collection process as "systematic" implied that qualitative evidence is collected in a planned and logical manner. The adoption of "systematic-ness", of course, aligned their approach with the scientific method. The later related work undertaken by Strauss and Corbin (1990) went one step further through popularising the notion of applying data coding and analysis techniques to qualitative data. Again the implication was that it was desirable (and, even more significantly, it was possible) to take a scientific approach to the discovery of common themes in qualitative data. Moreover, potentially, the use of these coding methods would allow an "audit trail" of how the theory was generated from the evidence of these common themes. With such an approach, potentially, all researchers would propose the same grounded theory from the same qualitative database.

To summarize this section, the focus on the common view in case studies stemmed from two broad issues: the history of the development of qualitative research and the role of theory. The legitimacy of the case study, as a scientific method[4], was at stake in relation to both of these. As a new methodology, how could the findings of qualitative case studies be defended from accusations of lack of scientific rigour? If theory is the "value-added" of qualitative case study work (and can enable generalization), how can the validity of this theory be established in relation to the findings of the case? The "common view" came to the rescue in answer to both questions. First, reliance on the common view links into the scientific criterion of representativeness and echoes the scientific idea of picking up on the "regularities" or "pattern" in the data. Second, if the theorization of the case can be shown to link into the common view then accusations that theory has been imposed on the case in accordance with the predilections of the researchers can be warded off. The essential problem with this adherence to the "common view" is that it makes much more sense in relation to the "characteristics" of the research phenomenon than in relation to its "meaning and significance." In social science, meaning and significance is usually much more important than characteristics. As argued earlier, ultimately, at a certain point in time, there may be only one respondent who discerns the meaning and significance of an organizational change.

The average hospital: case details and discussion
To illustrate how a singular view can be key to qualitative management research, this paper explores an interview-based empirical project (Llewellyn and Northcott, 2005). The project focused on the introduction of a mandatory form of costing in English hospitals[5] – the National Reference Costing Exercise (NRCE). In 1998, the UK New Labour Government introduced a requirement for all NHS hospitals to report their costs across the range of clinical activities undertaken[6]. Everything from a hip replacement to a course of treatment for bronchopneumonia to the removal of an in-growing toenail would have a cost attached and that cost would be known for each and every hospital.
The exercise was complex and the cost data were vast; by 2002 it had encompassed 2.1 million items (Department of Health, 2002a, p. 39).


Yet despite this complexity, a costing "index" was compiled that ranked hospitals on their relative cost efficiency by assigning each a single score[7]. This index was politically sensitive; the "fact" that a hospital was a "67" or a "156" was publicly available information, so hospitals that were "outliers" sometimes found themselves featuring in the local press as either "very cost efficient and successful" or "failing and wasting resources". The index was also part of a package of indicators used to judge the performance of hospital managers – so careers could be at stake. Because the index put managers under pressure, they responded by exhorting clinicians to consider the cost-efficiency of their clinical practice more carefully, so the NRCE was often the cause of internal hospital conflict.

There is a lot more that could be said about the characteristics of the exercise but, in brief, the research phenomenon under investigation was a mandatory form of hospital costing that took clinical interventions[8] as the cost object. The main powerful stakeholder groups involved were: politicians (who initiated the exercise); regulators (who ran the technical aspects of the cost-data collection); managers (who collected the data and who were held to account for the results); clinicians (whose work activities were being costed); and, latterly, management consultants who came "on board" (as hospitals began to rely on them for help with the analysis and presentation of their costs). The research project set out to investigate the meaning and significance of this costing exercise by asking members of the stakeholder groups what they thought "was going on". A total of 38 interviews were carried out with personnel involved in the NRCE.

Initially at least, stakeholders varied enormously in their assessments of the significance of the NRCE. Clinicians in particular were inclined to be dismissive, even to ridicule the exercise, possibly in the hope that it could not be made to "stick"! In the early days, the reported HRG costs varied tremendously. For example, a primary hip replacement could apparently cost between £213 and £19,960 (Department of Health, 1998). Many clinicians cited this extreme variability in support of the "it may be getting into the press but it won't work" view. One clinician said:

. . . you still see in the [news]papers comments like in some hospitals it [an HRG] is ten times the expense to do in others, which is clearly not true. It just means that the costings are very, very imprecise . . . it [the NRCE] gets rubbished [by some clinicians and managers] because it can't be true – some of the costings that you see. I mean you can't do hips for £190 pounds or something . . . So some of the figures that appear are just ridiculous and the sad thing is that they are taken semi seriously in the [news] articles.
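As a rough guide to how the index scores cited above read (the Department of Health's exact construction is not reproduced in this paper, so the formula below is an illustrative sketch rather than the official definition), a hospital's score can be thought of as its actual costs expressed as a percentage of what the same activity would cost at national average unit costs:

$$\text{Index}_h \approx \frac{\text{actual cost of hospital } h\text{'s activity}}{\text{cost of the same activity at national average unit costs}} \times 100$$

On this reading, a score of 100 is exactly average, a "156" is 56 percent above the average and a "67" is 33 percent below it (see also note 7).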

When the HRG categories were set up, clinicians were involved. This involvement was, in part, to take advantage of their technical expertise, but it was also an attempt to engage their commitment to the NRCE. One clinician responded to the idea of his colleagues "caring" about the NRCE as follows:

Well the ones [clinicians] who care [about costing] can't really agree with each other, but most clinicians couldn't care less anyway! The answer is who gives a monkey's about HRGs?

On the other hand, in the political world, people did “give a monkey’s”; the politicians had instigated the NRCE so, not surprisingly, in political realms it was taken very seriously. One hospital manager reported that:

Everything that moves [now] seems to have an HRG associated with it and, hence, a reference cost . . . a memo on some minor refinements [to the reference costs] was copied from the Chief Statistician at the Department of Health to the Secretary of State! It’s at that level of interest, help!

A finance director at one of the hospitals pointed out that the government were not only serious about the NRCE, but also determined to make it work:

You know that the Secretary of State has a timetable by which he wants reference costs to cover the entire range of NHS activity, that is national policy and that is set.

So stakeholders differed in their assessments of the importance of the NRCE. If the significance of any social phenomenon is low, people are generally not too concerned about what it means: if something is not important, they probably do not need to worry about its meaning. The corollary also holds: if something is (or can be portrayed as) meaningless, then this is strong evidence that it is insignificant. In the case of the NRCE, many clinicians were convinced (or convinced themselves) that the variability in the data rendered the exercise meaningless, so they were inclined to dismiss it as probably insignificant in the longer term. On the other hand, the government was evidently "for real" about the NRCE and was intent on making it significant. What did the NRCE mean for the government? What did they want to achieve through it? Their stated intention was performance management and the raising of standards; in the 1997 White Paper "The new NHS: modern, dependable" they explained:

Our new approach will tackle unacceptable variations in performance and raise overall standards across the NHS. We will achieve this by comparing, not competing, by sharing information and comparing performance; not by financial competition. Unit cost information is a central feature of this new approach.

But some stakeholders thought that the government's intentions were rather more robust! A management consultant opined:

Reference costs give managers the ability to go to doctors and to start to beat them around the head, which is what the government wants managers to do to doctors for the most part . . . now there is real power for managers to take direct action against clinicians – before it was impossible, even when you knew that harm was occurring to patients.

Why did the NRCE give power to managers to control clinicians? The answer is that, although the index reported just one cost figure for each hospital, it was compiled from very detailed information that included costs for individual clinicians. These individual costs allowed managers to challenge clinicians where their practice was more costly than that of their peers for, apparently, no good reason. One manager commented:

So why does Mr X [consultant] cost double Mr Y? Our consultants are very different; one wonders why one does far more day cases than another and why one is far more conservative at keeping patients in than another. The useful information challenges practice and says "your cost per hernia is double what this one is."


And also on the theme of challenging clinical practice a regulator remarked:

At one Trust orthopaedics was very expensive and they boiled it down to one consultant. He did his ward rounds before the physio came around on a Friday afternoon and the consultant wouldn't discharge them until after they had seen the physio. So every one of his patients had a 3 day longer length of stay because none of them were discharged until Monday . . . The nurses realised but there was no common communication – the nurses never got to tell anyone. But when people start talking money, it brings it into common terms and people start listening.

But before the managers could use cost data to challenge clinical practice, the data had to have more credibility than it had at the beginning of the NRCE, or else the clinicians would continue to "rubbish" it. The regulators thought that part of the answer lay in a reconciliation back to the financial accounts. One advised:

We do a check. What I did last year was a 30 percent check on reconciling back to the final accounts. If they were more than 1 percent out they were sent back. Because it is fast track information, we accept a 1 percent variation.

But another, even more powerful, incentive for hospitals to report accurate data was to fund them on the basis of it. This is another view from a regulator:

We have always said that if you reimburse people on the basis of the data that they submit for reference costs then the data they submit will start to become more accurate. If you say that you can do a hip replacement for six pounds and you get reimbursed on that basis then people will start to do it right.

As with the "characteristics" of the NRCE, there is much more that could be said about its meaning and significance. But a few things become clear from the above quotations. In terms of the differential stakeholder perspectives discussed earlier, clinicians were likely to lose power (or at any rate stood to lose some control over their clinical practice) through the introduction of the NRCE. This gave them an incentive to argue that it was going to be insignificant in the longer term; indeed, many clearly believed that it was rather meaningless and, hence, likely to go away. In contrast, politicians, as the "owners" of the exercise, were determined that it would be a significant and meaningful intervention in health care. Regulators were accountable for making the exercise a success; hence they were also likely to express views that accorded significance to the NRCE. Managers were in a more equivocal position. On the one hand, they were expected to challenge clinical practice on the basis of the NRCE; this gave them an incentive to argue for the robustness of the data. On the other hand, this positioning made for internal conflict with the clinicians, so for those managers averse to conflict there may well have been an inducement to downplay the exercise. So, clearly, differing perspectives, knowledges and judgements led stakeholders to view the NRCE in very different ways. As argued earlier, the government's stated intent for the exercise was "tackling variations in performance and raising standards", but was there more to it than this? Initially there was just one respondent who suggested that there was.

The singular view
One clinical director (a clinician with management responsibilities) made the following observation:

I think that the problem with the way things are being generated at the moment is that they [the government] are seeking to make everyone as average as they can and I, personally, don’t think that that is a good thing (Clinical Director at an NHS Trust).

This comment implied that the meaning of the government's proposal to "tackle variations in performance and raise standards" was actually to "make everyone as average as they can". Why might that be the case, and what other evidence from the respondents indirectly supported this view? It is clear from some of the above quotes that the government wanted to use the NRCE to empower managers to take action against clinicians. One view was that reference costs gave managers a weapon to "beat them [clinicians] around the head" with. Another was that reference costs were ". . . useful information [that] challenges [clinician] practice and says 'your cost per hernia is double what this one is'". Such insights suggest that wherever clinical practice looked out of line and, particularly, where idiosyncrasy increased cost (for example, the clinician described above who kept all his/her patients in over the weekend), managers would put pressure on clinicians to conform to standards that brought their costs down. But this evidence suggests pressure to attain lower costs rather than average costs.

One reason why "average costs" may be closer to what the government was aiming for lies in the propensity of some hospitals "not to take the NRCE seriously" and report unrealistically low costs. So long as "some of the figures that appear are just ridiculous", as one of the clinicians quoted above reported, the NRCE lacked credibility. As one of the regulators commented, a possible governmental response was to reimburse hospitals on the basis of their reported costs ("If you say that you can do a hip replacement for six pounds and you get reimbursed on that basis then people will start to do it right"). The problem with this strategy – from the government's perspective – is that it gives an inherent incentive to report high costs. Reimbursing hospitals at an average cost, rather than at their own reported costs, would blunt both the incentive to under-report and the incentive to over-report, which points towards the average as the more plausible target. Hence, along with the direct comment cited above, there was other indirect evidence on the logic of the "average cost" as what the government was aiming for.

In terms of the conduct of the research project, having heard this singular view on "making everyone as average as they can . . . ", other respondents were asked directly if they thought that the government were seeking to "make everyone as average as they can". Some respondents said they did not agree with this and some looked blank, but a few did concur. For example, one manager commented on the "comfort factor" of having an average score on the index (this index was explained earlier):

There is a sense of comfort in being around the middle [on the Index] and not standing out too much. Being cheap isn't bad, but obviously being expensive is going to make you a target and it seems the standard at the moment is to be at 100 index or 99/98 – to be cosy around the middle.

But, even when prompted, only a few respondents supported the idea of the government seeking "to make everyone as average as they can". Nevertheless, to the research team, this view seemed to make sense. Therefore, theoretical backing was sought for the idea. Two main sources were found for the propensity of both categorization and measurement (key aspects of the NRCE) to result in more homogeneity and, hence, more "averageness". Bowker and Star (1999, p. 53) comment that all classification systems increase comparability and "averageness" as people's behaviour and organizational practices are moulded so as to fit into pre-specified categories.


Latour (1993, p. 113) points out that all measures ". . . construct a commensurability that did not exist before their calibration." HRG costing necessitated the classification, counting and coding of clinical activities into categories. Latour's work indicates that the cost measurement of these HRGs to produce the index actually makes clinical work in hospitals more commensurable and more standardized. Nevertheless, despite this later theoretical support and the prompted views of a few other respondents, just one person mooted the initial idea about "averageness". And, on the basis of this singular view, "averageness" became the central notion in the research project for understanding the sense and significance of the NRCE.

Personal reflections and concluding comments
We now return to the earlier arguments about why minority rather than majority views may be the most insightful when it comes to investigating the meaning and significance of organizational events. This paper has argued that in an emergent change scenario, such as the NRCE, there are likely to be multiple and conflicting stakeholder views. Is it legitimate to privilege one view over the others? The contention of this paper is "Yes". A singular view may be based on more knowledge, may reflect superior judgement and may be made from a position that gives more insight into "what's going on". In this research project, a clinical director proposed the singular view about "averageness". The role of clinical director combines clinical and managerial expertise, so it has a wide knowledge base; this position also gives more insight into "what's going on" than either the purely clinical or the solely managerial perspective (Llewellyn, 2001). In addition, it may be surmised that this individual was simply more perceptive and, hence, formed a superior judgement when it came to discerning the "direction of travel" in health care management.

Nevertheless, in the world of academic publishing, reviewers' comments, even for qualitative work, frequently focus on the extent of the evidence base for conclusions drawn and the question of whether the results can be generalized. On the first submission of this paper, our judgements about the government aiming for "averageness" were felt to lack substance. As argued earlier, such responses are driven by expectations that qualitative work can be judged against the criteria applied to scientific research. We were asked to offer further evidence from the politicians themselves that "averageness" was indeed being promoted and to comment on whether our findings could be generalized – for example, to other countries. These reviewer requirements posed difficulties for us! Given that the NRCE was a politically sensitive initiative that was likely to elicit resistance from clinicians, it seems hardly likely that the politicians would publicise any intent to promote "averageness". On generalization to other countries, it was impossible for us to say. The UK was the first country to introduce a mandatory costing initiative like the NRCE and fund on the basis of it. Therefore, at the time we undertook the study, the events surrounding the NRCE were unique. Whether similar costing initiatives would have the same "averaging" effect in other countries is unknown. One possibility is that knowledge of the impact of the NRCE in the UK would lead some clinicians in other countries to oppose costing even more vigorously and, therefore, avoid the "averaging" consequence.
Such an eventuality would not undermine the validity of the result in the UK – it would merely indicate that human beings learn from the experiences of others in how to derail a change that threatens their interests!

As it happened, we were fortunate that, within the extended time period that academic reviewing occupies, two events occurred which lent additional weight to our conclusions. The first was that quantitative evidence emerged of a convergence towards the average cost amongst English hospitals. The second was that the government made an announcement about their further intentions; they revealed that funding to hospitals in the future would be made on the basis of a "national tariff" and the "small print" showed that this tariff would be based on the average cost (Department of Health, 2002b). With these two additional sources of evidence to draw upon in the paper, the reviewers were much more willing to be persuaded and, at the second review stage, the basic premise of the paper was accepted. Although one of the reviewers asked for further amendments, these were not concerned with the extent of the evidence base for our arguments.

This convergence towards the average cost and the government's announcement that they intended to fund on the basis of the average cost were both fortuitous developments so far as our quest to have our paper published was concerned as, in the reviewers' eyes, they were indications that our singular respondent was correct. However, even if these developments had not occurred, our respondent could still have been right. The government could have intended to "make everyone as average as they can" but failed. If, for example, the clinicians had been successful in preventing the government from implementing the NRCE, then the two events that were taken by the reviewers as confirming evidence would not have taken place. This would not necessarily have implied that our respondent was mistaken about the government's intentions and the change agenda being implemented; only that those intentions were not ultimately realised and, hence, that the "direction of travel" in healthcare organizations had shifted from one where "averageness" was becoming the norm to one where the status quo was being re-instated. If the government had failed on this occasion, the value of our "singular view" would not necessarily have become obsolete; it might well function as an indication that the government will try again and, therefore, be predictive of their future intentions.

We have argued here that ideas about representativeness and generalizability are much more applicable when management research is concerned with the characteristics of a phenomenon than with its meaning and significance. Canvassing all opinions and taking the "common view" on the NRCE was useful to fill out the detail on its characteristics. But, when it came to the meaning and significance of the NRCE as a politically inspired change, initially just one person discerned "what was going on".

Notes
1. Although the terms "sense" and "significance" are related in everyday discourse, this paper uses them in somewhat different ways. "Sense" is taken as equivalent to "meaning", which not only follows from individual sense-making, but hinges on how something is connected or related to something else – for example, how an event is connected into an episode or how social actors are related in an organizational structure (Polkinghorne, 1988, p. 6; Llewellyn, 2003).
Assessments of "significance" are taken as equivalent to the judged importance of an event and may well vary more over time than views about "meaning or sense". For example, as organizational events unfold, an "event" that was initially judged to be of little significance may assume much greater importance as the context changes over time.
2. Initially, the task of developing new linguistic terms and judgement criteria for case study research was probably largely impossible – this development had to proceed in conjunction with the development of the method itself.


3. Winch (1990), following Wittgenstein (1953), thinks that regularities follow from the rule-based nature of social life.
4. Another approach that could have been taken by case study researchers was to deny that scientific criteria were relevant to case studies but, as argued earlier, this seemed to position case studies as merely anecdotal, unsubstantiated and subjective. It is outside the scope of this paper to report on new, non-scientific criteria for case studies, but see for example Alvesson and Sköldberg (2000), Llewellyn (2003) and Sayer (1992).
5. The quotes in this section sometimes refer to hospitals as "trusts", a label that reflects their status as semi-autonomous entities within the NHS.
6. The NRCE began in elective surgery but has been progressively expanded since then.
7. An index score of 100 was an "average" cost performance, whereas scores above or below 100 indicated above or below average cost performance, respectively; e.g. a score of 102 reflects costs that are 2 percent above the average, whereas a score of 98 may indicate a more efficient hospital performance.
8. There are a huge number of possible clinical interventions. Consequently, the NRCE costed clusters of them rather than single treatments. Each cluster was named a health resource group (HRG). HRGs are a variant on the diagnostic related groups developed in the USA for pricing healthcare services. The UK National Casemix Office constituted HRGs to ". . . group together treatments that are clinically similar, consume similar quantities of resources and are likely to be similar in cost" (Department of Health, 1998, p. 4).

References
Alvesson, M. and Sköldberg, K. (2000), Reflexive Methodology: New Vistas for Qualitative Research, Sage, London.
Bowker, G.C. and Star, S.L. (1999), Sorting Things Out: Classification and its Consequences, The MIT Press, Cambridge, MA.
Bryman, A. (1988), Quantity and Quality in Social Research, Unwin Hyman, London.
Department of Health (1998), The New NHS – 1998 Reference Costs, NHS Executive, Leeds, November.
Department of Health (2002a), Reference Costs 2002, NHS Executive, Leeds, November.
Department of Health (2002b), Reforming NHS Financial Flows: Introducing Payment by Results, Department of Health, London, October.
Fielding, N.G. and Fielding, J.L. (1986), Linking Data, Sage, London.
Glaser, B.G. and Strauss, A.L. (1967), The Discovery of Grounded Theory: Strategies for Qualitative Research, Aldine de Gruyter, New York, NY.
Latour, B. (1993), We Have Never Been Modern, Harvard University Press, Cambridge, MA, C. Porter (trans).
Llewellyn, S. (1992), "The role of case study methods in management accounting: a comment", British Accounting Review, Vol. 24, pp. 17-31.
Llewellyn, S. (2001), "'Two-way windows': clinicians as medical managers", Organization Studies, Vol. 22, pp. 593-624.
Llewellyn, S. (2003), "What counts as 'theory' in qualitative management and accounting research? Introducing five levels of theorizing", Accounting, Auditing & Accountability Journal, Vol. 16 No. 4, pp. 662-708.
Llewellyn, S. and Northcott, D. (2005), "The average hospital", Accounting, Organizations and Society, Vol. 30 No. 6, pp. 555-83.

Mehan, H. (1979), Learning Lessons: Social Organization in the Classroom, Harvard University Press, Cambridge, MA.
Polkinghorne, D.E. (1988), Narrative Knowing and the Human Sciences, State University of New York Press, New York, NY.
Sayer, A. (1992), Method in Social Science, 2nd ed., Routledge, London.
Scapens, R. (1990), "Researching management accounting practice: the role of case study methods", British Accounting Review, Vol. 20 No. 3, pp. 260-79.
Silverman, D. (2001), Interpreting Qualitative Data, 2nd ed., Sage, London.
Strauss, A.L. and Corbin, J.M. (1990), Basics of Qualitative Research: Grounded Theory Procedures and Techniques, Sage, Newbury Park, CA.
Wainwright, D. (1997), "Can sociological research be qualitative, critical and valid?", The Qualitative Report, Vol. 3 No. 2, available at: www.nova.edu/ssss/QR/QR3-2/wain.html (accessed May 16, 2006).
Winch, P. (1990), The Idea of a Social Science and its Relation to Philosophy, 2nd ed., Routledge, London.
Wittgenstein, L. (1953), Philosophical Investigations, Blackwell, London.
Yin, R.K. (1989), Case Study Research: Design and Method, Revised Edition, Sage, London.

Further reading
Department of Health (1997), The New National Health Service: Modern, Dependable, HMSO, London.

Corresponding author
Sue Llewellyn can be contacted at: [email protected]


Introducing strong structuration theory for informing qualitative case studies in organization, management and accounting research

Lisa Jack
Department of Accounting, Finance and Management, University of Essex, Colchester, UK, and

Ahmed Kholeif
Department of Accounting, Faculty of Commerce, Alexandria University, Alexandria, Egypt

Abstract
Purpose – The aim of this paper is to present a reinforced version of structuration theory, known as strong structuration theory, set out in Stones (2005), as a disciplined approach to qualitative case study research in the organization, management and accounting fields. This framework challenges the belief held by certain critics that structuration theory cannot be used in substantive empirical research but is only a sensitising device or analytical tool.
Design/methodology/approach – A conceptual discussion is the approach of the paper.
Findings – The key concepts of strong structuration theory are outlined and then put in the context, first, of two attempts to apply the framework to empirical research and, second, of two recent papers which address theoretically informed qualitative research and the use of structuration theory in IT studies.
Research limitations/implications – There are some limitations to this paper. The framework offered was not used to set the original research questions in the two case studies employed, as these cases were conducted before the publication of Stones' book in 2005. Also, as weaknesses in the framework can best be assessed using empirical findings, a full evaluation cannot be carried out until such research is undertaken.
Originality/value – This paper draws on recent research and thinking in sociology that have yet to be brought into case studies in the fields of accounting and management in particular.
Keywords Case studies, Accounting, Research methods
Paper type Conceptual paper

The authors would like to acknowledge advice and comments from members of the Management Control Association, colleagues at the University of Essex, Department of Accounting, Finance and Management and the reviewers of earlier versions of their work. In particular, they would like to thank the two anonymous reviewers of this paper for their helpful, and very challenging, comments.

Introduction
In the 20 years since the publication of Giddens' (1984) The Constitution of Society, structuration theory has been widely used in organisation, management and accounting qualitative research. Baxter and Chua (2003, p. 100) observe that "structuration theory has provided a small but distinctive contribution to management accounting"[1]. Pozzebon (2004, p. 268) discusses the growing use of the theory in strategic management studies, and the use of the theory has been developed in information technology research by Orlikowski (1991) and more recently Pozzebon and Pinsonneault (2005). Moreover, a number of organisation studies by Willmott, Roberts and others are included in Bryant and Jary's (1996) definitive collection of Giddens' work on structuration theory and its influence.

However, the use of structuration theory is problematic: the complexity of the theory can mean that its use is somewhat selective and "lop-sided", to use Whittington's (1992, p. 693) term. More crucially, there are fundamental areas of underdevelopment in Giddens' work, such as the relationship between agents, structures and external pressures, and there has been significant debate about the central tenet of the "duality of structure" from critics such as Mouzelis and Archer (Parker, 2000). The applicability of the theory to empirical research has also been considered doubtful by Baumann, Thrift, Gregson and others, who see Giddens as a meta-theorist (Stones, 1996, pp. 115-117), and the majority of studies employ the theory as an analytical device or, as Giddens himself put it, "a sensitising device" (Giddens, 1984, p. 231, 1989, p. 294; Macintosh and Scapens, 1990, p. 469).

A recent book by Stones (2005), a sociologist who has written and debated on these matters over the last 15 years, distils the criticism, debate and enhancements of structuration theory into a form that he terms "strong structuration theory". This is not an alternative to Giddens' theory but an attempt to provide a strengthened version of the theory, drawing on developments among current sociological thinkers, which will be primarily of use in empirical research. Recent reviews of Stones' (2005) Structuration Theory have recognised the work as a substantive and considerable development of the theory (Edwards, 2006, p. 911) and "the most serious attempt to date to give structuration theory a new lease of life" (Parker, 2006, p. 122). Parker (2006) goes as far as to say that Stones provides a "radically different and original theory" from Giddens, despite it being grounded in Giddens' structuration theory.

Anyone attempting to explain or apply structuration theory has to consider the "structure/agency debate [that] continues to haunt organizational studies", as Reed (1997, p. 22) puts it. Stones (2005) does not solve the duality-dualism divide or the problematic of critical realism as put in opposition to structuration theory (Parker, 2000)[2], although he has contended elsewhere that the realism-structure divide is not irreconcilable (Stones, 2001). However, Edwards (2006, p. 913) is of the opinion that Stones (2005) does provide a valuable contribution to the debate and will be used to inform future research studies. Parker (2000, p. 137) acknowledges the contribution to the debate and the positive implications of the work as a research guide but is ultimately unconvinced that the problematic has gone away: he argues for the critical realist approach of Archer and others (Parker, 2000, p. 138). Users of structuration theory must be prepared to accept the duality of structure and the problematic nature of understanding the objective-subjective nature of human beings.
Later in the paper, the use of position-practices in Stones' (2005) formulation of strong structuration theory is outlined. Earlier versions of this paper elicited the enquiry "why not use actor-network theory instead?" Part of the answer lies in whether or not the researcher wishes to investigate the role of structure in the matter under investigation. Actor-network theory (like ethnomethodology and Foucauldian approaches) relies on the disruption of the dichotomy between structure and agency altogether (Steen et al., 2006, p. 303; Reed, 1997, p. 23), whereas structuration theory relies on the identity of action and structure. Here, the authors are to some extent experimenting with theory by asking what the applications to research are if the duality of structure is accepted: the subject of the paper is not a justification of the duality of structure against dualism or non-structural ontologies. However, would-be researchers need to confront this debate for themselves. Actor-network theory is addressed again later in the paper.

The authors believe that the framework offered by Stones (2005) has significant potential for qualitative researchers in organisation, accounting and management and that, conversely, these fields offer prime ground in which to test the worth of the framework. The key strength of Stones' work is that it presents a well articulated, ontologically sound argument for the development of structuration theory, which has a much wider value than other empirically based approaches in the organisation and management field, such as the recent study by Pozzebon and Pinsonneault (2005). The contribution that the authors are seeking to make is not only in the introduction of a new theoretical framework to their field of research. Underlying this is the promotion of the case study as a means of experimenting with theory, testing it and teasing it out using the empirical data, whilst at the same time using the theory to mine ever further to get that data. To that extent, there is much in common with Eisenhardt (1989) on using case studies to build theory and Lee (1989) on case studies as experimentation, rather than case studies as almost quantitative analyses of data (Yin, 1981) or as action research (Park, 2001). It certainly resonates with the more recent exposition by Ahrens and Chapman (2006) on qualitative research and theory, which is discussed below. Both theoretically and practically, however, what is developed here is the role of the researcher as an investigator, not just as an observer, interested stranger or participant.

The first section of this paper simply offers an introduction to Stones' (2005) conception of strong structuration theory, emphasising three key contributions for organisation research, namely the claim that structuration theory can be used meaningfully for empirical work by providing an "ontology in situ" to support Giddens' "ontology in general", the concept of a "sliding ontological scale" and the "quadripartite nature of structuration". Stones' framework for empirical study is then presented, which has at its centre the identification of an "agent in situ" and develops to identify the internal and external agents and structures associated with that pivotal agent, and the importance of using the methodological bracketing of institutional analysis and agents conduct analysis, which is a key element of Giddens' theory. Because the aim of this paper is to set out the theory for consideration by researchers, the discussion that follows is focussed on two areas. The first of these reflects on two attempts to apply Stones' theory in accounting contexts: one on the institutionalisation of accounting practices in UK agriculture in the post war period and the other on the introduction of an IT system as part of a programme sponsored by the EU/Mediterranean programme.
The second area puts what is offered here in the context of two very recent papers, Ahrens and Chapman (2006) on the development of theory and case studies generally and Pozzebon and Pinsonneault (2005) more specifically on the use of structuration theory in IT studies.

An introduction to strong structuration theory
In his book Structuration Theory, Stones (2005) synthesizes criticism of Giddens' structuration theory in the two decades since the publication of The Constitution of Society (Giddens, 1984) to suggest a reinforced ontology that allows substantive empirical research to be developed using the theory. Observing that Giddens operates on the level of "ontology-in-general", Stones (2005, p. 75) argues for the development of structuration theory to encompass "ontology-in-situ" and the "ontic": structure and action are not contemplated in the abstract but observed in concrete situations, through the why, where and what of everyday occurrence, and through understanding the dispositions and practices of agents. A structuration study is one that involves hermeneutics as well as structural analysis (Stones, 2005, pp. 81-2), and preserves the central tenet of the duality of structure (Giddens and Pierson, 1998, p. 78). Stones' term for this reinforced version is "strong structuration theory".

One of the other strengths of Stones' (2005) work is to reintroduce epistemology and methods into structuration theory. Giddens' work on structuration theory was deliberately focused on ontology, which at the time he felt was the more neglected; his approach to empirical method was "minimalist" (Stones, 2005, p. 13; Giddens, 1984, p. 231). The broad epistemological approach in Giddens' structuration theory is that knowledge is socially constructed and that all human beings are knowledgeable agents. Agents' knowledge of their actions, of the structure of the context in which they act and of the conduct that follows is the subject of research, and the purpose of structuration investigations is to elicit that knowledge from actors and from their context. A structuration study is one that considers both hermeneutics and diagnostics of structure (Stones, 2005, p. 133). It also follows that the researcher must be aware of their own context and conduct in understanding actions and diagnosing structure. In this, however, they are no different to any other critical theorist in having to consider distance and de-familiarisation in their reflections (Alvesson and Deetz, 2000, p. 174) and the dangers of privileging certain voices (Putnam et al., 1993, p. 232).

The first key element of strong structuration theory that develops the original theory is the "meso-level" ontological concept. If ontology in general operates at an abstract level, and the ontic at the level of concrete details and specificities (Stones, 2005, p. 77), then the value of the meso-level ontology in situ is that the researcher can analyze action and structure in relative terms: more or less knowledgeability, for example (Stones, 2005, p. 78). It provides a sliding scale on which to locate a particular study (Stones, 2005, p. 78). In an earlier work, Stones presented the idea that structuration studies may be characterized by their depth of contextualization, from an in-depth concrete study of an individual through to an abstract sweep of historical and global phenomena (more characteristic of Giddens' own work), and the sliding scale is a development of this idea (Stones, 1996, pp. 74-5). Meso-level studies "may not cover every nook" (Stones, 1996, p. 83) and the researcher may be placed outside or above the situation under view.
But wherever placed on the scale, the researcher then needs to strive for a "sufficiently discriminating, austerely delimiting, focus of attention on a restricted number of germane points on the historical and geographical landscape" (Stones, 1996, p. 82), within which patterns of action and structure may be drawn out and raised for inspection.

At this point, Stones takes Cohen's development of structuration theory to encompass position-practices and further develops it to point out that the proper realm of position-practices is the meso or intermediate zone (Stones, 1996, p. 83). From there the researcher can examine the networks and relationships between clusters of agents within the delimited landscape they are observing – part of an organization, for example, or a department, or a government. Position-practices were posited by Cohen (1989), drawing on the work of Bhaskar (1979), to provide what Thrift (1985, p. 618) saw as the "missing institutional link" in Giddens' work. Structuration theory hangs on the methodological bracketing of institutional analysis and strategic (or agents) conduct analysis (Giddens, 1984; Scapens and Macintosh, 1996; Stones, 1996, 2005). Within that bracketing, Giddens used the term "social positions" as providing an identity, prerogatives and obligations: specific institutional roles are a sub-set of social positions, but the weakness is that he does not explain how these are fully reproduced in the duality of structure (Cohen, 1989, p. 208). Social identity may explain how structures persist but not how the actions of the incumbents of the positions reproduce those identities – structures run the danger of being reified, which is the problem the duality of structure seeks to avoid. The work of Bhaskar (1979), in turn, envisages practices of actors (in clustered groups) as creating structure, but Cohen dislikes the notion that positions are "slots" into which actors are placed; this ignores the fact that actors can take, modify and abandon roles rather than act within roles assigned to them (Cohen, 1989, p. 209). Stones (2005, p. 62) adopts Cohen's (1989, p. 210) delineation of position-practices[3], which enables the researcher to stress:

. . . the enactment of identities, prerogatives and obligations so as to form a link between structure and agency. To speak, for example, of . . . a Chief Executive Officer, is not only to refer to a positional identity, but also to a set of structured practices which position-incumbents can and do perform [whether the incumbent chooses to act as expected or to do otherwise].

Position-practice relations may be "traced out" or "mapped" (Cohen, 1989, p. 211): examples might be vertical hierarchical relations between levels of employees and management in a firm, or the horizontal relations between clusters of academics and administrators in different disciplines within a university (Cohen, 1989, p. 212). Stones' (2005, p. 94) mappings between clusters of actors are webs of polygonal links between agents in focus and external structures. Stones also claims that these networks of position-practices within the quadripartite model of structuration that he proposes (see below) address another criticism of Thrift (1996, p. 54) that Giddens' "over-emphasis on action as individual . . . never fully considers the ghost of networked others that continually informs action".

At this point, certain readers have asked why this methodology should be used rather than actor-network theory, which is ostensibly very similar in concept. The key difference, as has been mentioned above, is that whilst the central tenet of structuration theory is the duality of structure and whilst critical realists and other critics assert dualism, actor-network theory:

. . . not only effaces the analytical divisions between agency and structure, and the macro- and the micro-social, but it also asks us to treat different materials – people, machines, "ideas" and all the rest – as interactional effects rather than primitive causes (Law, 1992, p. 5).

As in Giddens' structuration theory, Law (1992, p. 7) claims, structure is a verb not a noun, and "is not free-standing, like scaffolding on a building-site, but a site of struggle, a relational effect that recursively generates and reproduces itself". The point is that for actor-network theorists organisations and other structures are networks which come to look like single point actors (Law, 1992, p. 2); for structurationists, agency and structure are always present together, separately identifiable but not identical. Actor-network theory and strong structuration theory both purport to address the ontological and methodological weaknesses in Giddens' structuration theory, particularly its tendency to abstraction and difficulty of application to the "ontic", as Stones (2005) terms it. The difference in approach is that whilst, as Parker (2006, p. 132) notes, Stones' strong structuration theory is "strong because restricted" to meso-level studies, actor-network theory claims to have done away with the need to consider the divide between macro- and micro-social considerations (Law, 1992, p. 7).

Here, the authors are not arguing for the superiority of structuration theory over actor-network theory. They are simply stating that Stones' (2005) framework is a robust and credible theory for interpretative research that has particular potential for case studies in organisation, accounting and management studies. Jones and Dugdale's (2002) paper applying actor-network theory and Giddens' later theories on modernity to the ABC story could have been written using strong structuration theory. It would not necessarily have been a better or a worse paper, but it would have been a different one, because the diagnosis of structure and the emphasis on agents' conduct and context in strong structuration theory would have produced a different account of the emergence of ABC. As Stones (2005, p. 7) states:

[The] networks of relevant relationships can be researched and investigated more or less conventionally, or more or less on the basis of the structural-hermeneutic diagnostics at the heart of structuration.

This notion of position-practices was very relevant in the farm accounting case study discussed below, where there are obvious clusters of actors within and without the organization field, and where the relations between them (and the "ghosts" of past and present actors) impact on outcomes. In the other study, the conceptualization of external structures and resistance in strong structuration theory had the potential to articulate the tensions and outcomes observed in the course of the study.

Stones (2005, p. 75) conceptualizes the duality of structure as "four analytically, separate components", which he labels "the quadripartite nature of structuration" (Figure 1). One aspect of Stones' (2005) strong structuration theory that comes as a surprise to a number of readers is the apparent disappearance of the three modalities of structure identified by Giddens (1984, p. 29), namely signification, legitimation and domination. These are downplayed in Stones' work, and more emphasis is given to the methodological bracketing of institutional analysis and the analysis of strategic conduct in Giddens' theory (Giddens, 1984, p. 289; Stones, 1996, 2005; Scapens and Macintosh, 1996). Stones (1996, 2005) re-terms these as agent's context analysis and agent's conduct analysis, and proposes a quadripartite framework of structure in place of the more recognised S-L-D (tripartite) framework. In essence, an analysis of the conduct and context of different clusters of actors covers the schemes of interpretation, norms and allocation of resources/power that Giddens identifies, but Stones' broader approach allows for a less restricted form of verstehen than the original. These four components are external structures as conditions of action, internal structures (i.e. within the agent), active agency and outcomes (Stones, 2005, pp. 84-5).


[Figure 1. The quadripartite nature of structuration – Agent: (1) External structures; (2) Internal structures: (a) conjuncturally-specific knowledge of external structures, (b) general dispositions or habitus; (3) Active agency/agent's practices; (4) Outcomes. Source: Stones (2005, p. 85)]
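Purely as an illustration, and not as part of Stones' (2005) framework, the four components can be treated as a simple filing template for case evidence. The sketch below, in Python, is one hypothetical way a researcher might organise such material; the class name, field names and example entries are invented for illustration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class QuadripartiteRecord:
    """Illustrative template for filing case evidence under the four
    analytically separate components of structuration (Stones, 2005)."""
    agent_in_focus: str
    # (1) External structures: conditions of action independent of, or
    #     experienced as irresistible by, the agent in focus
    external_structures: List[str] = field(default_factory=list)
    # (2a) Conjuncturally-specific internal structures: knowledge of the
    #      interpretative schemes, power capacities and norms in context
    conjuncturally_specific: List[str] = field(default_factory=list)
    # (2b) General dispositions or habitus
    general_dispositions: List[str] = field(default_factory=list)
    # (3) Active agency: the agent's observed practices
    active_agency: List[str] = field(default_factory=list)
    # (4) Outcomes: structures preserved or changed; consequences
    #     intended or unintended
    outcomes: List[str] = field(default_factory=list)

# Hypothetical usage, loosely echoing the IT-adoption case discussed later
record = QuadripartiteRecord(
    agent_in_focus="accountant at the industrial modernization centre (IMC)",
    external_structures=["EU-sponsored IT programme and its requirements"],
    conjuncturally_specific=["knowledge of local reporting routines"],
    general_dispositions=["professional training and habits of practice"],
    active_agency=["workarounds adopted during system implementation"],
    outcomes=["existing routines largely preserved"],
)
print(record.agent_in_focus, "- outcomes recorded:", len(record.outcomes))

Such a template does no analytical work in itself; it simply keeps the evidence gathered for each component separate so that the subsequent analysis of agents' conduct and context can be traced back to it.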

The researcher must carefully delimit the action-horizons of the agents in situ in order to establish what they and/or the agents regard as the line between external and internal structures (Stones, 2005, p. 84) for the context studied. Position-practices within the external, autonomous structures can be considered in the abstract or substantively. External structures constitute acknowledged and unacknowledged (by the agent in focus) conditions of action and "may be the basis for unintended consequences of action" (Stones, 2005, p. 109); the conditions may constrain or enable action by the agent in focus (Stones, 2005). Where the external structures are completely autonomous of the agent, affecting social conditions regardless of the agent's own wishes (housing markets, for example), actions by the external agents may influence the actions of the agent in focus, but these will be independent causal influences (Stones, 2005, p. 111). Stones distinguishes these occurrences from those where the agent in focus has the physical capacity to resist the external influence but feels that they do not have the ability to resist; these he terms "irresistible external influences" (Stones, 2005, p. 112). The latter was of some importance to the study on IT adoption discussed later in this paper, because whether the accountants in the organisation in focus felt that they had the ability to do otherwise than as the EU body wanted, and whether in the long term they could resist the external influences on action from this external structure, was at the heart of the events described in the case study.

Internal structures in this quadripartite scheme are divided analytically into two components (Stones, 2005, p. 85). The first of these is termed "conjecturally specific internal structures" and the second "general-disposition structures" or "habitus" (after Bourdieu, though the previous term is preferred to distance the theory here from too close an association with practical action) (Stones, 2005, p. 87). Stones (2005, p. 88) envisages the general-dispositional as something the agent draws on without thinking, and so it encompasses:

. . . transposable skills and dispositions, including generalized world-views and cultural schemas, classifications, typifications of things, peoples and networks, principles of action, typified recipes of action, deep binary frameworks of signification, associative chains and connotations of discourse, habits of speech and gesture, and methodologies for adapting this range of particular practices in particular locations in time and space.

The conjecturally-specific relates to the role or position occupied by an agent or cluster of agents (Stones, 2005, p. 89). The virtual structures of legitimation/norms and domination/power come into play here and, in this study, would cover the position and practices of accountants, IT specialists and managers within the industrial modernization centre (IMC), and the rules and routines, the specific contexts of action, that happen within the time and place in which they are situated (Stones, 2005, p. 90). Conjecturally-specific knowledge is gained over time – "that is, knowledge of interpretative schemes, power capacities, and normative expectations and principles of the agents within context" (Stones, 2005, p. 91). Such knowledge is related outwards, towards external structures (Stones, 2005, p. 90) and their overall hermeneutic structures (Stones, 2005, p. 91). When an agent in focus acts – and this is the third component of active agency in the quadripartite paradigm – it is the "active, dynamic moment of structuration" (Stones, 2005, p. 86). The outcomes – the fourth component – are the result of active agency: structures may be changed or preserved, consequences may be intended or unintended, and the agent may be facilitated or frustrated (Stones, 2005, p. 85).

The analytical framework
The quadripartite nature of structuration then becomes a framework for the analysis of empirical material. The starting point, Stones (2005, p. 117) suggests, has to be the internal structures of the agents in focus: the conjecturally-specific and/or the general-dispositional. For example, the researcher could first identify the general-dispositional frames of meaning for the agent in focus. Then, the conjecturally-specific interpretative schemes, norms and allocation of resources of the agents in focus would be analyzed (Stones, 2005, p. 123). This would extend to include their perceptions of the external terrain and their "networked others": the practices observable by each positional group and the relationships between them would lead to an analysis of the agent in focus as being more or less powerful, knowledgeable and critically reflective (Stones, 2005, p. 78), and identify the possibilities and constraints facing them. The next step would be to identify the relevant external structures, and the authority and material resources at their disposal. Whether or not these structures are modifiable, to a greater or lesser extent, by the agents in focus will indicate whether the causal influence of the external on the internal structures is independent or irresistible (Stones, 2005, p. 78). Assessing the extent of possible external-structure resistance to an agent's project (as in the case of the EU and the organisation adopting IT systems sponsored by the EU below) could include considering the number of agents involved in the external structures, the types of power available to them and the intensity of active resistance to the project (Stones, 2005, p. 80). Finally, the researcher should examine the outcomes and analyze the extent to which these were intended or unintended, whether they are more or less important to specified agents (Stones, 2005, pp. 78-80), the extent to which structures (external and internal) have been modified and the extent to which rules and routines have endured.

The role of the researcher and the means of research
In practice, the researcher may employ a number of data collection techniques in the study.
Giddens (1984) in The Constitution of Society illustrated his work with studies drawn from ethnography, surveys and interviews. Stones (2005) uses studies drawn from ethnography and historical research, as well as a re-interpretation of Ibsen's A Doll's House. As in any good sociological research, triangulation of sources provides reassurance and credibility (Yin, 2000, pp. 91-3) and, furthermore, the theory itself provides one point of triangulation (Stones, 1996, pp. 77-8). The use of the framework neither prescribes nor rejects the use of any particular method of data collection. What Stones (2005, p. 81) does envisage, though, is that the researcher is carrying out an investigation. A structuration study is one that engages "at least at a minimal level, with a combination of hermeneutics and structural diagnostics" (p. 133). Such a study also recognises that:

This is in accord with Giddens' (1989, p. 300) reflections on a sensitising broad theory:

. . . every social system, no matter how small or ephemeral, or large-scale and permanent, gains its systematic qualities only through regularities of social reproduction. The ways in which such regularities – which consist of social practices – are organized in and through the behavior of contextually located actors have to be subjected to empirical investigation. Modes of institutional articulation – across time and space – are building blocks of time-space distanciation.

Parker (2006), despite his overall misgivings, does acknowledge that the framework offered forces the researcher to thoroughly engage with the "who did what, where, when, how and why" of the study. This is research as drilling down, as detective work, as systems auditing, as psychotherapy: thorough investigation carried out with a healthy dose of professional scepticism. It is the researcher as the highly sensitised, methodical professional, asking questions until all angles are covered and an intuitive feel for the patterns and subtle abnormalities involved has been achieved. It is not possible to give precise guidelines on how an analysis of agents' schemes can be undertaken, or a diagnosis of structures made. This is a situation where the professional researcher must call on the concepts of the general-dispositional and the conjuncturally-specific to frame questions that will elicit from the subjects, from documentation and from observations their knowledge of themselves, their context and their boundaries. It is building up a very visual, spatial image of the networks and patterns of action involved and separating the intended consequences from the unintended. The epistemological justification for our interpretation must be judged in much the same way as in other areas of critical theory. The aim is not verification or verisimilitude (Putnam et al., 1993, p. 234). Whether the final case study is a "convincing account, one that will persuade the readers that his or her portrayal is a good one", as the ethnographers contend in that paper, or whether it is an account that "produces social transformation through developing insights that allow actors to cope with distorted communication and to participate openly in constructing new meanings", as the critical theorists contend, or whether the researcher manages to make the familiar exotic and problematic (Alvesson and Deetz, 2000, p. 167) or simply unearths patterns and maps that were previously unconsidered, there is justification for the approach. There is a contribution to knowledge.
The metaphor of the investigator may suggest that a solution or explanation is found: the plausibility of the case interpretation may in part be decided by the professionalism, thoroughness and detachment of the investigator, but it does not have to be a right and final solution. It is for the readers ("the jury", to stretch the metaphor), themselves knowledgeable actors, to exercise their own judgement, as should be the situation with any case study. Stones' framework offers significant guidance for the researcher using structuration theory, whilst retaining the spirit of Giddens' original theory, where the study of the day-to-day "immediately means 'opening out' across time and space" and accepting the necessity of "a historical or developmental perspective and a sensitivity to variations in location" (Giddens, 1989, p. 298).

Discussion
Stones (1996, p. 117, 2005, pp. 34-40), unlike Parker (2000, p. 9) who regards the time for structuration theory as finished, sees the theory as the basis for lively research which has barely been exploited as yet. Despite the number of commentators (Bauman, Thrift, Gregson) who see Giddens as a meta-theorist (Stones, 1996, pp. 115-17) whose innovations are most relevant to the large-scale, long-term processes of society, Stones (1996, 2005), Thrift (1996) and others claim that structuration theory can be relevant to more small-scale, short-term empirical work, including the design of such studies; Stones' approach to this has been presented above. In order to commence an evaluation of the contribution that this conception of strong structuration theory could make to future case study work, this discussion first reflects on two attempts to apply the framework to empirical data and then on two current papers on the application of social theory in qualitative research.

Study one
A study on the institutionalisation of farm management accounting practices in the UK in the postwar period (Jack, 2004) was initially approached using fusions of institutional theories and structuration based on Barley and Tolbert (1997) and Burns and Scapens (2000). The data collected revealed that different clusters of actors were acting in different ways that together contributed to the inertial state of the institution at the time of study, and the theoretical frameworks were inadequate to explain the findings of the study. When an early draft of Stones' (2005) chapter on ontology became available, the researcher re-cast the study in terms of the ontology of strong structuration theory and also drew on Stones' (1996, p. 77) earlier work, Sociological Reasoning, particularly the "floater metaphor", which:

Seeks to capture the way in which a certain type of study acquires a broader and longer perspective by means of floating over the surface of events, as if in a hot air balloon, from which one's eye is extensive but lacking in detail.

In brief, the question of why certain accounting practices have persisted over a 50-year time span and a wide space of action – as in the case of the use of the agricultural gross margin in UK agriculture since the 1960s – involved taking a broad view over that time-space and the clusters of actors involved. The researcher needed to float over the structures and over the longue durée of their history. Yet the research also needed to be hermeneutically informed: why had the actors chosen to reproduce the institution over the years in this form?
As noted above, structuration studies are characterised by a concern for both hermeneutics and structural diagnostics (Stones, 2005, p. 81). However, it was clear that in a farming context, accounting practices are not necessarily day-to-day actions or even thought of from one day to the next by some actors. A high level of contextualization and concentration on a very few actors would not expose why the institution persists over such a wide space and time in such a situation. A number of actors from each identified group (farmers, government, and advisors), giving overviews of their own and the industry's practices – a mid-level contextualization – were required. The empirical findings were presented as two investigations into two episodes in the lifecycle of the agricultural gross margin. The first investigation was an historical review of contemporaneous literature and documentation covering the episodes of initiation and implementation c.1960 and the episode of institutionalization, which was completed c.1972 (when Britain joined the European Economic Community and the Common Agricultural Policy (CAP) regime). The second investigation covered the present day, which could be characterised as a period of apparent inertia and as an episode where the institution could be on the verge of change arising from external pressures. Thus, the two episodes of institutionalisation and of current position-practices – which may or may not be on the verge of change – were covered. This follows Stones' (2005, p. 82) advice that:

. . . one could imagine focusing in detail on two events separated by: (i) a long period of time; or (ii) large tracts of space, that can be demonstrated to have a relation, one to the other, which is identifiably a relation of structuration.

This chimes with one of Giddens’ (1984, p. 244) key concepts that “all social life is episodic”. He says that: . . . in referring to the type of social change involved in an episode I mean to indicate both how intensive and how extensive it is . . . how profoundly a series of changes disrupts or reshapes an existing alignment of institutions and how wide-ranging such changes are (p. 246).

By then exploring more deeply the aspects of signification, legitimation and domination within the institution and its clusters of actors, the author was able to achieve a rich analysis of the nature of the accounting practice (Jack, 2005).

Study two
The second case study is about the experience of introducing an ERP system into the IMC, the executive body of the Industrial Modernization Programme (IMP) jointly funded by the EU and the Government of Egypt (Kholeif, 2005). This study, taken from the viewpoint of the agents in focus (the accountants in the Finance Department), covers a short time-span, but within that short time (2001-2005) presents a failed attempt to establish organizational structures. The Finance Department had a dominant role in the running of the organization, but this was insufficient to resist either the external pressures placed by the governments of the EU and Egypt or the internal pressure exerted by the component managers. The ERP system, chosen as the means of interpreting the IMC's role in the IMP, failed to gain legitimacy with either cluster of agents, and the outcomes were a system that suited the way the component managers wished to work and the imposition of the intranet-based system created by those put in place by the EU Commissioners. The Finance Department contained
agents more knowledgeable about the possibilities and functions of an ERP system, but less knowledgeable about the overall aims and ambitions of the Euro-Mediterranean process. The EU, although an external agent, had greater legitimacy than the Finance Department through its broader programmes and established structures, and through its resources in terms of money and established procedures. The component managers had greater legitimacy in their position-practices, as they were more directly carrying out the work of the IMC with businesses; thus they were able to define their role more clearly as being separate from that of the Finance Department and not integrated with it, and so reduced the practices of the Finance Department to those of an information provider. Thus, we have a clear outline of what Stones (2005, p. 75) terms the quadripartite nature of structuration. The external agents (the EU, Egyptian Government and vendor) provided the conditions of action. The agents in focus were attempting to create organizational structures, rules and routines to govern action, but were in turn acting in accordance with general dispositions (as accountants and former corporate employees) and performing conjuncturally specific actions (setting up an ERP system) that reproduced familiar structures from their past. The outcomes were unintended, but arose from the resistance generated between the structures and actions envisaged by the external agents and between internal agents influenced separately by external agents and the agents in focus. The data collection for this study was carried out before the publication of Stones' book in 2005. However, the researchers were informed by, and intended to use, structuration theory and institutional theories in their interpretation. Strong structuration theory gave an enhanced analysis of the actions and structures observed, by allowing the conditions of action set by the external agents to become a more substantial element of the analysis. However, had Stones' (2005, p. 75) framework been used from the start of the project, then it would indeed have guided the questions and framework of the research, sensitizing the researchers to ask further questions concerning the role of the EU and, perhaps, directing the research to include more interviews with the external agents themselves. The sliding scale image would guide the researchers to position the research – to severely delimit it (Stones, 2005, p. 78) – and thus sharpen the analysis obtained. Had the EU Commission (in the Euro-Mediterranean partnership) been chosen as the agent in focus, then we would have moved up the sliding scale and the study would have been more of a "floating" study (Stones, 1996, p. 77) and less deeply contextualized. The relationship between the EU and Egypt stretches over a much longer timescale than the existence of the IMC, and is bound up with general policies of democratization, trade agreements and modernization. We would have "touched down" (Stones, 1996, pp. 77-8) in the IMP and seen just one example of what Lister (1997, p. 70) called "the awkward development of the EU's Mediterranean policy" and of what Youngs (2002, p. 54) claims is the EU's lack of "effort to work out a strategy to encourage a type and form of economic engagement conducive to prompting political change", the underpinning reason behind EU investment in the south Mediterranean region.
Were the researchers to go further up the sliding scale again, into the realm of ontology-in-general, then this same study could be viewed from above as part of the working out of ideologies of democratization and modernization.

In both studies, the use of strong structuration theory ideas enhanced the analysis of the data available, but by themselves these applications are not sufficient to evaluate the framework and its potential fully. In order to evaluate the strengths and the weaknesses fully, research should be carried out that is designed as a structuration study from the outset, following the principles in the offered framework: the potential of this method is that it expands the sensitivities of the researcher to the actors and structures under observation whilst imposing a level of discipline on the qualitative researcher.

In the context of current papers
There are two recent papers which indirectly offer support for the use of strong structuration theory in qualitative case study research. In the first of these, Ahrens and Chapman (2006, p. 837) are concerned with the process of research in which:

. . . to generate findings that are of interest to the wider management accounting research community, the qualitative field researcher must be able to continuously make linkages between theory and findings from the field in order to evaluate the potential interest of the research as it unfolds. This ongoing engaging of research questions, theory, and data has important implications for the ways in which qualitative field researchers can define the field and interpret its activities.

They observe that qualitative field study is not simply empirical but a profoundly theoretical activity, where the task for the researcher and writer is "to express the field as social and not simply to clarify it" (p. 819). Researchers should avoid a banal application of theory to findings that merely implies relevance, and should instead use the findings to draw out new theoretical insight into management practices (accounting in this instance). Their paper is salutary reading for the qualitative researcher. Stones' (2005) framework for substantive research that is theoretically informed is in sympathy with the views of Ahrens and Chapman (2006). What is being offered in the strong structuration framework is an approach (not a prescription) for carrying out field work that envelops data, theory and research problems. The conflation of ontology and method is avoided (Ahrens and Chapman, 2006, p. 822), as Stones' framework is developed from strongly argued ontological grounds and from the insistence on combining hermeneutics and structural diagnosis (Stones, 2005, p. 81). Although only one theory is applied, rather than the multiple theories that Ahrens and Chapman prefer to see (p. 823), that theory is complex and many-layered. Domains must be strictly delineated (Stones, 2005, p. 82; Ahrens and Chapman, 2006, p. 827). Thus, strong structuration theory appears to offer an approach to field studies that is both disciplined and non-trivial, and thus in tune with best practice as set out in Ahrens and Chapman's paper. The second paper, by Pozzebon and Pinsonneault (2005), entitled "Challenges in conducting empirical work using structuration theory: learning from IT research", is interesting because it broadly comes to the same conclusions as Stones (2005). They suggest concrete directions for improving empirical research using structuration theory. Firstly, researchers should home in on three sensitising devices: duality of structure, time/space and actors' knowledgeability. Secondly, employing both narrative and temporal bracketing, either fine-grained or broad-grained, is similar to the idea of levels of contextuality employed by Stones (1996). They observe (p. 1369) that:

The use of ST has helped IT researchers to understand better how technologies provide meaning, are used to exercise power and legitimize certain outcomes to the detriment of others, and how people produce or reproduce or enact organizational practices by using certain technological properties and not others.

These findings, which are closely aligned to the approach taken by Stones, support the potential relevance of strong structuration theory in organisation, management and accounting studies. However, the paper by Pozzebon and Pinsonneault (2005) also indicates a trend in recent structuration studies in management and accounting. Pozzebon (2004), in a review of structuration theory in the management literature, concludes that the theory has "often been appropriated by researchers in strategy not as the primary theoretical foundation but as a broad framework or envelope, as a general premise incorporated into existing approaches or as an integrative theoretical tool". The review also notes a greater concentration of papers around 2000 than earlier, suggesting an increasing adoption of Giddens' ideas: what is noticeable is that the majority of these papers draw on earlier organisational writers and Giddens' earlier works as their primary sources. Very few, if any, draw on criticism or thinking from sociology in the last decade. The latest sociological work cited by Pozzebon and Pinsonneault (2005) is Cohen (1989). Similarly, in accounting research using structuration theory, no papers, with the exception of Scapens and Macintosh (1996), touch on the extensive critical work on Giddens since 1984, even where such works as Bryant and Jary (1996) are cited. Very few in the accounting field draw on his later work on modernity (with the exceptions of Jones and Dugdale, 2002; Seal, 2003; Seal et al., 2004). The primary source for accounting researchers is the Roberts and Scapens (1985) paper. Therefore, the methodological developments of Stones, Cohen, Thrift and others and a number of critical debates have been bypassed: one contribution of this paper is to draw recent sociological research into the organisational domain.

Concluding comments
The use of structuration theory in case studies by organisation, management and accounting researchers is relatively well established, but in order to develop, researchers in these fields need to remain aware of the growing body of work by sociologists in what might be called the school of structuration theorists (Stones, 1998, p. 11). The debates around duality, dualism or the non-necessity of either are still ongoing, and the argument that Giddens' theory is too abstract and underdeveloped for use in empirical study has led a number of social theorists to develop and strengthen the theory. The most significant recent contribution to this ongoing development is Stones' (2005) book, Structuration Theory, in which he argues for a reinforced ontology that enables substantive empirical social studies to be designed and carried out informed throughout by the theory. The book has been recognised as being of considerable use to future researchers (Edwards, 2006; Parker, 2006). The use of structuration theory in accounting, organisation and management studies has been largely as an interpretative tool, often applied at some point mid-way through or towards the end of a study. Here, it has been argued that Stones' (2005) framework could enhance case study work in these fields by both introducing a design stage to those studies that is specifically structurationist and imposing a discipline on the researcher, to ask more penetrating questions of their sources and themselves that will elicit responses about internal and external agents and structures, context and perceptions of conduct.
A structuration study, as Stones (2005, p. 133) says, is one that encompasses both hermeneutics and a diagnosis of structure. The use of strong structuration theory, though, is more than just a promotion of good research practice based on a well-argued ontology and epistemological practice. Because it is relatively untried, using strong structuration with the case study method also allows researchers to experiment with the theory, to test it and, in so doing, to contribute to the ongoing debate and to develop theory themselves. Ahrens and Chapman (2006) have argued that researchers should regard qualitative field study as a profoundly theoretical activity, and the response here is that the strong structuration theory framework fully embraces that idea. The contribution of this paper is to argue that the use of Stones' (2005) framework is particularly suited to case study research and to the practice of developing case studies in accounting, organisation and management that also contribute back into social theory. The position of the researcher is presented as analogous to that of an investigator, elucidating the case through evidence, theory, experience and intuition. Strong structuration theory may address a significant number of the problems perceived to be associated with Giddens' structuration theory, but it does not solve the problematic of duality against dualism, although it does go some way to lessening the gap between them. Just as in mathematics, however, the acceptance or refusal of one axiom can lead to the construction of a new mathematical system, so if the researcher is prepared to "go along with" the duality of structure, the results may be surprising and insightful. Social theory in accounting, organisational and management studies should be respected – but it is also there to be played with and investigated and made part of "what if" trains of thought in the mind, based on the findings of empirical research. Yes, this is a new framework which needs to be evaluated, but organisational, management and accounting research is ideally suited for this task: by their nature, organisations are placed at the meso-level, subject to the pressures of external institutional and societal actors and structures and vulnerable to the actions of their own actors, who might at any point choose to do otherwise.

Notes
1. Baxter and Chua (2003) are looking only at papers published in Accounting, Organizations and Society, although this does contain most of the significant accounting papers on the subject. There has been a growing number of case studies since 2000 (Conrad, 2005; Caglio, 2003) that use structuration theory, but these are almost exclusively based on the earlier seminal papers by Roberts and Scapens (1985) and Macintosh and Scapens (1990). Developments in management studies have been reviewed by Pozzebon (2004).
2. "The critics of Giddens . . . hold that to explain the structuration of social structures by recognising both the contributions of objective processes and human powers of agency does not necessitate abandoning the dualism of structure (object) and agency. Dualism asserts the non-identity of the two, whereas Giddens asserts their identity" (Parker, 2000, p. 9).
To adopt Giddens’s theory, one must subscribe to this concept of the duality of structure, that structures are expressions of action and that there is always an agent involved in the reconstitution or reproduction of structures (Giddens and Pierson, 1998, p. 78). For Giddens it then follows that institutions are structures that are chronically reproduced by knowledgeable actors over time and space.

3. Cohen (1989, p. 210) sets out the minimum definition of institutionalised position-practices as being an observable positional identity with associated prerogatives and obligations; clusters of such practices; other interrelated incumbents of position-practices; reciprocities between incumbents of clusters of position-practices.

References

Ahrens, T. and Chapman, C. (2006), "Doing qualitative field research in management accounting: positioning data to contribute to theory", Accounting, Organizations and Society, Vol. 31 No. 8, pp. 819-41.
Alvesson, M. and Deetz, S. (2000), Doing Critical Management Research, Sage, London.
Barley, S.R. and Tolbert, P.S. (1997), "Institutionalization and structuration: studying the links between action and institution", Organization Studies, Vol. 18 No. 1, pp. 93-117.
Baxter, J.J. and Chua, W.F. (2003), "Alternative management accounting research – whence and whither", Accounting, Organizations and Society, Vol. 28 Nos 2/3, pp. 97-126.
Bhaskar, R. (1979), The Possibility of Naturalism: A Philosophical Critique of the Contemporary Human Sciences, Humanities Press, Atlantic Highlands, NJ.
Bryant, C. and Jary, D. (1996), Anthony Giddens: Critical Assessments, Routledge, London.
Burns, J. and Scapens, R.W. (2000), "Conceptualizing management accounting change: an institutional framework", Management Accounting Research, Vol. 11, pp. 3-25.
Caglio, A. (2003), "Enterprise resource planning systems and accountants: towards hybridisation?", European Accounting Review, Vol. 12 No. 1, pp. 123-53.
Cohen, I.J. (1989), Structuration Theory: Anthony Giddens and the Structuration of Social Life, Macmillan, London.
Conrad, L. (2005), "A structuration analysis of accounting systems and systems of accountability in the privatised gas industry", Critical Perspectives on Accounting, Vol. 16, pp. 1-26.
Edwards, T. (2006), "Developments toward the operationalization of structuration", Organization, Vol. 13 No. 6, pp. 911-3.
Eisenhardt, K.M. (1989), "Building theories from case study research", Academy of Management Review, Vol. 14 No. 4, pp. 532-50.
Giddens, A. (1984), The Constitution of Society, Polity Press, Cambridge.
Giddens, A. (1989), "A reply to my critics", in Held, D. and Thompson, J.B. (Eds), Social Theory of Modern Societies: Anthony Giddens and his Critics, Cambridge University Press, Cambridge.
Giddens, A. and Pierson, C. (1998), Conversations with Anthony Giddens: Making Sense of Modernity, Blackwell, Oxford.
Jack, L. (2004), "The persistence of post-war accounting practices in UK agriculture", unpublished thesis, University of Essex, Colchester.
Jack, L. (2005), "Stocks of knowledge, simplification and unintended consequences: the persistence of post-war accounting practices in UK agriculture", Management Accounting Research, Vol. 16 No. 1, pp. 59-79.
Jones, T.C. and Dugdale, D. (2002), "The ABC bandwagon and the juggernaut of modernity", Accounting, Organizations and Society, Vol. 27, pp. 121-63.
Kholeif, A. (2005), "Enterprise resource planning (ERP) implementation and management accounting change in a transitional country: an interpretive case study from Egypt", unpublished PhD thesis, University of Essex, Colchester.
Law, J. (1992), "Notes on the theory of the actor network: ordering, strategy and heterogeneity", published by the Centre for Science Studies, Lancaster University, Lancaster, available at: www.comp.lancs.ac.uk/sociology/papers/Law-Notes-on-ANT.pdf (accessed 19 February 2007).
Lee, A.S. (1989), "Case studies as natural experiments", Human Relations, Vol. 42 No. 2, pp. 117-37.
Lister, M. (1997), The European Union and the South: Relations with Developing Countries, Routledge, London.
Macintosh, N.B. and Scapens, R.W. (1990), "Structuration theory in management accounting", Accounting, Organizations and Society, Vol. 15 No. 5, pp. 455-77.
Orlikowski, W.J. (1991), "The duality of technology: rethinking the concept of technology in organizations", Organization Science, Vol. 3 No. 3, pp. 398-429.
Park, P. (2001), "Knowledge and participatory research", in Reason, P. and Bradbury, H. (Eds), Handbook of Action Research: Participative Inquiry and Practice, Sage, London.
Parker, J. (2000), Structuration, Open University Press, Buckingham.
Parker, J. (2006), "Structuration's future? From 'All and Everywhere' to 'Who Did What, Where, When, How and Why?'", Journal of Critical Realism, Vol. 5 No. 1, pp. 122-38.
Pozzebon, M. (2004), "The influence of a structurationist view on strategic management research", Journal of Management Studies, Vol. 41 No. 2, pp. 247-72.
Pozzebon, M. and Pinsonneault, A. (2005), "Challenges in conducting empirical work using structuration theory: learning from IT research", Organization Studies, Vol. 26 No. 9, pp. 1353-76.
Putnam, L., Bantz, C., Deetz, S., Mumby, D. and Van Maanen, J. (1993), "Ethnography versus critical theory", Journal of Management Inquiry, Vol. 2 No. 3, pp. 221-35.
Reed, M.I. (1997), "In praise of duality and dualism: rethinking agency and structure in organizational analysis", Organization Studies, Vol. 18 No. 1, pp. 21-42.
Roberts, J. and Scapens, R. (1985), "Accounting systems and systems of accounting: understanding accounting practices in their organisational contexts", Accounting, Organizations and Society, Vol. 10 No. 4, pp. 443-56.
Scapens, R.W. and Macintosh, N.B. (1996), "Response to Boland's interpretative act", Accounting, Organizations and Society, Vol. 21 Nos 7/8, pp. 675-90.
Seal, W. (2003), "Modernity, modernization and the deinstitutionalization of incremental budgeting in local government", Financial Accountability & Management, Vol. 19 No. 2, pp. 93-116.
Seal, W., Berry, A. and Cullen, J. (2004), "Disembedding the supply chain: institutionalized reflexivity and inter-firm accounting", Accounting, Organizations and Society, Vol. 29 No. 1, pp. 73-92.
Steen, J., Coopmans, C. and Whyte, J. (2006), "Structure and agency? Actor-network theory and strategic organization", Strategic Organization, Vol. 4 No. 3, pp. 303-12.
Stones, R. (1996), Sociological Reasoning: Towards a Past-Modern Sociology, Macmillan, London.
Stones, R. (Ed.) (1998), Key Sociological Thinkers, Palgrave Macmillan, Basingstoke.
Stones, R. (2001), "Refusing the realism-structuration divide", European Journal of Social Theory, Vol. 4 No. 2, pp. 177-97.
Stones, R. (2005), Structuration Theory, Palgrave, London.
Thrift, N. (1985), "Bear and mouse or bear and tree? Anthony Giddens' reconstitution of social theory", Sociology, Vol. 19 No. 4, pp. 609-23.
Thrift, N. (1996), Spatial Formations: Theory, Culture and Social Spaces, Sage, London.
Whittington, R. (1992), "Putting Giddens into action: social systems and management agency", Journal of Management Studies, Vol. 29 No. 6.
Yin, R.K. (1981), "The case study crisis: some answers", Administrative Science Quarterly, Vol. 26 No. 1, pp. 58-65.
Yin, R.K. (2000), Case Study Research: Design and Methods, 2nd ed., Sage, London.
Youngs, R. (2002), "The European Union and democracy promotion in the Mediterranean: a new or disingenuous strategy?", Democratization, Vol. 9 No. 1, pp. 40-62.

Corresponding author
Lisa Jack can be contacted at: [email protected]

Case study research and network theory: birds of a feather
Evert Gummesson

Stockholm University School of Business, Stockholm, Sweden

Abstract
Purpose – The purpose of this paper is to advocate that case study research needs to renew itself and employ its full potential as an innovative theory-generating methodology in management disciplines; and to propose that a viable strategy for such renewal is to exploit the power of case study research and network theory as supplementary methodologies.
Design/methodology/approach – The paper is a reflective and synthesising comparative study.
Findings – If one steps down from the tip of the iceberg and inspects the underwater properties of case study research and network theory a common core is found: the recognition of complexity. The methodologies supplement each other, case study research primarily using verbal language and qualitative data, while network theory uses a nodes-and-links language that opens up for verbal, graphic and mathematical treatment. Case study research is primarily associated with qualitative research in social sciences and network theory with quantitative research in both social and natural sciences. By abolishing the unfortunate categories of qualitative/quantitative and natural sciences/social sciences that have been set against each other, and letting them join forces for a common goal – to learn about life – people open up for methodological creativity.
Originality/value – By comparing case study research with network theory on a fundamental level, the paper offers a novel perspective on research. It is a contribution to an overriding desire to improve the understanding of management and society.
Keywords Innovation, Quality, Case studies, Complexity theory, Research methods
Paper type Research paper

Introduction
After long experience with case study research, I have begun to feel increasingly uncomfortable. The feeling is that case study research does not develop as a research methodology. Over several decades, I have also devoted my research to relational aspects of marketing. This has progressively led up to network theory and its application to marketing. I have realized that it is all about life. I am not a student of marketing; I am a student of life – through marketing. Embracing a broad qualitative and humanities-oriented world-view, I believe you can study life through anything; life exists in each cell, each galaxy, each event and each discipline, irrespective of how mundane or grand it may appear. I discovered that network theory and case study research are birds of a feather. As such, they should flock together – but so far they do not. The paper is an invitation to dialogue rather than an effort to tell the reader how things are or should be. Put in internet lingo, there is open-source code like the Linux operating system, which invited computer geeks to develop an initially crude concept and has continued to do so since 1991; and the more recent Wikipedia, presenting itself as "the free encyclopaedia that anyone can edit". Our increased understanding of customer participation and customer-to-customer interaction (C2C), the explosion of user-controlled web sites and communities and the idea of the
future internet as a platform for user participation (Web 2.0), and the current interest in a new service-dominant logic (Vargo and Lusch, 2004; Lusch and Vargo, 2006) support the customers' role as co-creators of our economies. The same role duality applies to scholars as they swing between being producers of science and consumers of science. "Dialogical orientation" (Ballantyne and Varey, 2006) should outrank the hard sell, win-lose debates and defence of established but often outdated concepts, theories and methods. The term management will be used henceforth as the collective designation for business administration (or increasingly just business) and public administration. It includes a series of subdisciplines, among them organization, leadership, accounting, finance and marketing. At the centre of the paper is a treatise on the two methodologies, case study research and network theory. They have basic properties in common; above all they address complexity. As case study research is known among qualitative researchers, only some specific aspects, which I consider less known and in need of dialogue and rethinking, are presented. In contrast, network theory is less practiced in management research, particularly when you consider the contributions from natural sciences. The paper presents links between case study research and network theory and offers an overview of network properties. I do not as yet feel the time is ripe to offer a more structured comparison but rather to draw the readers' attention to an opportunity for exciting developments in research methodology. The paper ends with conclusions and recommendations. A mainstream dividing line in research is whether one should test extant theory or generate new theory. Let me initially establish my conviction that management is in need of more innovative and daring research, quantum leaps and paradigm shifts. As social scientists look up to the scientific approaches in physics, it may be appropriate to quote Einstein: "To raise new questions, new possibilities, to regard old problems from a new angle, requires imagination and marks real advances in science." This is a broader, overriding alternative to the current piecemeal contributions where tiny details and simplistic causal relationships between two or a few variables are studied in a contextual vacuum, deprived of real-life complexity and dynamics. The current research tradition is more rituals-oriented in an effort to stand out as scientific, and it is less result-oriented. This is unnecessary as the demarcation line between theory testing and theory generation is artificial and superfluous. By emphasising innovation and continuous improvements, theory-in-use will be constantly exposed to comparison with new theory. So, may the best one win and the loser withdraw with no hard feelings. The two methodologies could be combined, using network theory within case studies as both a way to generate and structure data and as an analytical technique. Case study research speaks a verbal language while network theory speaks a nodes-and-links language and provides a foundation for graphical, mathematical and computer processing without rejecting verbal language. Network theory offers the traditionally narrative case study the option to take a leap forward by introducing a different type of data generation and processing. Those who master verbal language and write well can offer both rigour and freedom in expression and interpretation.
Because they are different but share the same interest in complexity, cross-fertilization between the two is possible.
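To make the nodes-and-links idea concrete, the following is a minimal, purely illustrative sketch (it is not taken from the paper): case material about actors and their relationships is encoded as a small graph that can then be read verbally, drawn graphically or analysed mathematically. The actor names are invented for the illustration, and the Python networkx library is simply one possible tool choice.

# Minimal sketch only: treating case-study actors and relationships as a network.
# Actor names are hypothetical; networkx is one possible library for the task.
import networkx as nx

G = nx.Graph()

# Nodes are actors identified in the case narrative; each link carries a
# qualitative label taken from interviews, documents or observations.
G.add_edge("Supplier", "Manufacturer", relation="long-term contract")
G.add_edge("Manufacturer", "Distributor", relation="co-marketing")
G.add_edge("Distributor", "Customer community", relation="C2C interaction")
G.add_edge("Manufacturer", "Regulator", relation="certification")

# Verbal treatment: restate the network as plain sentences.
for a, b, data in G.edges(data=True):
    print(f"{a} and {b} are linked by {data['relation']}.")

# Mathematical treatment: a simple structural measure (degree centrality)
# indicates which actors hold the most connections in this small case network.
print(nx.degree_centrality(G))

Such a representation does not replace the narrative case; it merely gives the same material a second, structural reading alongside the verbal one.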

Case study research
Is the quality of case study research in management disciplines higher today than 20 years ago? There is no straightforward answer to the question. An evaluation could include three aspects. The first is whether case study methodology has improved, through incremental polishing and/or quantum leaps. The second is whether researchers have become more skilled in applying the methodology. Has education in case study research improved? Have supervisors of term papers all the way to PhD theses become better coaches? The third concerns examiners, reviewers of journal articles, and members of promotion, grants and certification committees. Do they have adequate knowledge of case study research and qualitative methods in general, or are their minds mainly formatted by a statistical paradigm? Assessing these quality issues and possible changes requires specific study. Even so, it will not be easy to come up with unambiguous conclusions. In an empirical study of the status of qualitative research in the management field, Cassell et al. (2006) found indications that qualitative research was often assessed by the wrong criteria (inappropriate procedural correctness) and by inconsistent use of criteria. On the training side, they found a bias in favour of quantitative approaches. Other indicators of inadequate knowledge and training are that Flyvbjerg (2006) finds it called for to explain five misunderstandings about the basics of case study research – should not these have reached the mainstream by now? Personally, I have found it necessary to structure case study research in management into five foci, all elementary but not yet well applied (Gummesson, 2007). I consider cases central in management research. In my home country, Sweden, cases constitute the most important empirical base for masters and PhD theses at business schools; in many other countries, they are the exception. Case study research is criticized by quantitative researchers for being just conceptual, useful at an exploratory stage but not for proving anything, lacking in rigour, and offering journalism and "anecdotal evidence" with non-generalisable outcomes. Let us hope that a falling number of researchers will remain mesmerized by the hubris of quantitative researchers who take their superiority for granted, allowing little reflection but endless fundamentalist rhetoric. In the absence of further studies and more conclusive evidence, I will discuss some of my favourite ideas related to the three aspects noted above, based on several decades of case study research experience. These ideas come under two headings: complexity and quality/productivity.

Complexity
Business is in constant flux and it is only partially predictable. A research method therefore must allow the study of change processes. A snapshot at one point of time (statics) may be totally inadequate, and a series of discrete snapshots (comparative statics) may be more adequate but still not enough to capture business reality. For an extreme case that contains all the ingredients of a Shakespearean play, follow the developments of the European aircraft manufacturer Airbus and the intricacies in designing, financing, producing and marketing its new super-jumbo passenger jet and its race with the market leader, the American Boeing. The drama is continually reported in the business press.
Even the brief media reports offer enough material to create respect for the complexity that has to be tackled. Our methodology must be responsive to complexity, or research in management will make a contribution neither to theory nor to practice. An example of such responsiveness is Bakir and Bakir (2006), who apply grounded theory to the elusive concept of strategy. They do not buy the conventional and rationalistic simplifications but dive right into the complexity of strategy through "multiparadigm inquiry". Case study research tries to respond to complexity by providing rich and thick descriptions in the sense suggested by Geertz (1973). But the genome of case study research stretches beyond such descriptions. It allows the study of complexity, context, ambiguity and chaos. It allows a holistic, systemic approach with an unlimited number of variables and links. It allows an inductive approach without considering extant theory, but can also be deductive or a combination of the two. It offers freedom in the choice of data generation and analytical techniques with little regulation. It is innovative, entrepreneurial and non-bureaucratic. It endorses the urgency of access to reality and has validity and relevance in focus. Access may be denied for social, physical, or resource reasons, but should not be impaired by the imposition of "approved" but insensitive data collection and analysis techniques. Complexity will be used as a term to represent all those factors, links and other circumstances that make research difficult and uncertain. Although complexity is accepted in the natural sciences, the quantitative mainstream of the social sciences, including the management disciplines, feels uncomfortable with it. Being complex means that multiple factors and relationships are interdependent. Context therefore is a major dimension of complexity. With reference to quantum theory, Zohar (1997, p. 46) points out that "To be known, to be measured, to be used, a quantum entity must always be seen within the larger context of its defining relationships." Pick one or two factors from a context and insulate them and you regress to the mechanical idea that if you study all the details you can screw them together like they were parts of an engine – and there is the whole! The Humpty-Dumpty syndrome, derived from the old nursery rhyme in which Humpty Dumpty fell off a shelf and went to pieces, shows in common-sense language that specialization:

. . . resembles all the king's horses and all the king's men tackling the puzzle created by the fragments of Humpty Dumpty's broken body. . . Despite the fragmentation in professional specialties, professionals and managers are expected to somehow put their – and only their – pieces of Humpty Dumpty back together again. Further, they are to accomplish this task without really understanding what Humpty Dumpty looked like in the first place, or what the other professions can do to make him whole again (Waddock and Spangler, 2000, p. 211).

The fact that Humpty Dumpty was an egg, and assuming that it was raw and not hard-boiled, makes the dilemma even more obvious. How do you put a broken egg together? It is an organic, live phenomenon whose elements mix and merge after a crash and quickly degenerate – much like an organization or a market. We will now proceed to the assessment of the quality and productivity of case study research and how management research can leave behind routinely applied quality criteria and learn from industry.

Quality and productivity
It is often said in texts based on cases that the cases "illustrate" something. You can probably find examples that seem to support any absurdity, but such "exampling" is not proof of anything (Glaser and Strauss, 1967). However, examples can facilitate the understanding of theoretical constructs and complex argumentation by relating them to everyday experiences and familiar metaphors and theory. For scholars, the task of genuine case study research is the provision of empirical data for analyses and conclusions; practitioners have to go further and make decisions, initiate action and achieve results. It is correct, as is often pointed out, that one or a few cases cannot answer the questions of how often, how much, and how many. But is it not better to understand a phenomenon in depth than to know how often the not understood phenomenon occurs? Excellence in case study research requires skills to access data and to analyze and interpret documents, interviews, observations and experiences. It requires that you are critical of the data offered by your sources – but constructively critical. A respondent's answers and comments are always slanted, whether unknowingly and without malice or deliberately, and with various motives and power behind them. Increasingly – which should be obvious to those who are in accounting, corporate strategy or marketing, and to economists and political scientists – politicians, reporters, public relations people and lobbyists manipulate our perception of what is going on or what happened. Politicians are considered liars, it seems, by a majority of people; salespeople are known to be overpromising; and accountants help "cook the books". The phenomenon is not new, but today "spinning" of information has become a profession. Consultants are hired to promote certain "facts" and suppress others, and their only loyalty is to those who pay their invoices. The distortion of fact can be skilfully disguised as documentaries on television or scientific columns in journals. "Truth" in the classic sense is for the weak and gullible. A majority of the professionals probably do not even know what the truth is and whether they are lying and distorting fact. In the paper scripts I get for review, the most common references on case study research are Yin's (1984) book, originally from 1984 and lightly revised for later editions, and the article by Eisenhardt (1989). Although these sources offer important insights and guidance, they seem all too often to be routinely applied as if they were complete and forever valid. If nothing has happened with case study research since the 1980s, we had better watch out. I also fear that not every author has read Yin and Eisenhardt in depth but settled for them because of their ubiquity in reference lists; they are a safe choice. Increasingly, reference lists become mere name-dropping that brings referencing into a vicious circle: the more a reference is listed, the more it is listed – whether it is the most suitable reference for a specific script or not. A new reference by an unknown author in a second tier journal stands little chance. References become a social rather than an intellectual dimension of the researchscape. Objectivity could not do without subjectivity and intersubjectivity. The ultimate decision to release a wine on the market is taken by a single person or a small group of connoisseurs, each with a superior nose and tongue. They sniff and taste.
No better method has been found, which frustrates bureaucratic technology freaks: why can we not get a machine to do it, an objective system with quantitative indicators? The assessment of the quality of academic research is no more scientific than the assessment of wine quality; it is also largely a nose-and-tongue exercise.

There is also talk about the productivity of research, but quality and productivity are interrelated in a complex web. What would a working trade-off between quality and productivity look like? There is no general answer, only specific, context-dependent answers. To force pseudo-productivity, for example by measuring how many articles faculty wrote last year, is detrimental over the long term. Currently, those who want to pursue and maintain an academic career are pressed to publish at least two articles per year in blind-reviewed international journals, preferably in top journals. In a benevolent interpretation, the number is a productivity measure, and the peer review and journal status are quality indicators. Turned into points, scales and rankings, however, these produce a deceptive security; we bury our heads in the sand in the belief that we become invisible. The demands on the researchers can be viewed as efforts to keep quality and productivity under political and bureaucratic rather than scholarly control. In academe, certification bodies and promotion committees have established elaborate systems of credits that include, for example, the ranking of the journal in which you published, whether you wrote it alone or with co-authors, whether these co-authors come from the same department and the same university as you or from elsewhere, and whether your name is listed first, second, etc. This system has given birth to large publishing houses specialising in journals for academic promotion rather than journals for disseminating scientific advancements. From a marketing perspective, publishers are right; they fill a need in the market. There are many catches here which would take separate articles to discuss. Let me just mention one or two. It is recommended that everybody should publish in "international top journals". To become a top journal usually takes decades, and the "international" top journals in management disciplines are American. European journals are scornfully referred to as "second tier journals". New journals covering innovative research and being entrepreneurial are looked down upon. The whole situation is a paradox but driven with ferocious blindness. For example, in marketing everybody wants to publish in the Journal of Marketing (JM). In approximate terms, JM receives 300 new scripts per year and 100 scripts that come back after revision. It can publish 30, meaning that very few academics can get a script accepted in JM, even in a lifetime. Further, there is a backlog of several years. A rejection does not automatically mean bad quality; it is also a matter of what the editor considers a good fit in accordance with JM's mission, and an appealing mix of articles in a certain issue. Each script is reviewed by three reviewers – who are at least in theory blind or anonymous – and the editor. There are certain criteria, but all these require a series of judgement calls from each individual judge, meaning that the review process is guided by a mix of objective, intersubjective, subjective, quantitative and qualitative criteria in unknown proportions and with unknown weights. The good news is that the really bad scripts are weeded out, but the bad news is that the really good ones may be as well. Innovation, breaking with a reigning paradigm, which is a necessity for quantum leaps in science, by definition does not comply with mainstream criteria. At the same time, we cry for innovation. The risk is that we get more-of-the-same mediocrity, albeit the mediocrity is of a high standard.
Quantitative research is a priori defined as superior to qualitative research, a claim which is on such a divine level that no evidence is necessary; it is a God-given fact. A wide-spread rumour claims that you must include a quantitative part to get articles published and that one or a few cases are not enough.
This is only partially true and my prediction is that it is changing in favour of cases. The best thing about the plethora of journals and conferences is that they open up an unregulated market with numerous, decentralized decision-makers guided by different paradigms. People can still take initiatives without being dependent on established, power-playing professors and university bureaucrats, and innovation and entrepreneurship are given a chance. Sometimes an abundance of data that has required numerous interviews, audio recordings and transcripts over a lengthy period of time is mistaken for high quality. In a discussion I had with Barney Glaser, co-creator of grounded theory, he stated that a research project should not take too long – six months at the most. The quality will not forever go up, but after a while starts to decline, and so will productivity. Rich descriptions can become filthy rich; thick ones can become overweight. Academe can learn from business and government organizations, where quality management under different labels, including productivity and financial outcomes, is a big issue. Belatedly, in the 1980s, it was realized in the Western World that quality and productivity cannot be upheld without the concerted effort of every little detail in an organization, its suppliers, distribution network, leadership, human resource management, and so on. Among the quality systems are Lean Production, Six Sigma, ISO, and the quality awards such as the European Quality Award and the US Baldrige Award. They provide extensive lists of criteria, questions and comments – 40-50 pages – which have to be considered by those who seek to improve and maintain quality. They increasingly stress the outcome, while initially the stress was on following procedures and installing enablers. Compared to this variety and richness, mainstream scientific quality criteria stand out as poor. As uncovered in the study by Cassell et al. (2006), it is not unusual that ignorant reviewers of case study research lean on criteria from statistics and hypotheses testing in the conviction that these represent general criteria for scientific evaluation. Three common quality criteria in science are validity, generalisability and reliability, or some variation of these. Validity in its generic sense is of cardinal importance in case study research. Have the researchers been able to capture the phenomenon they are chasing, or have they studied something else? Generalisability is closely related to validity and is sometimes called external validity. It can take place on many levels, from a narrow generalization within a limited substantive area to universal validity. In applied research and consulting, the interest is primarily in specific applications of research results. However, it is desirable and sometimes mandatory that academic research gives a contribution to science in general, albeit a limited one. The favourite of science is reliability. A study with high reliability can be replicated by others and everyone should arrive at roughly the same result. This is usually not attainable when you study complex phenomena, especially when change is a major force.
Still reliability can perform a few odd jobs in case study research: as a police function (uncover dishonest research); as an intelligence test (are the scientists clever or stupid); and as a “validity crutch” (validity seems beyond reach and reliability is established and validity is assumed) (Gummesson, 2000). Others have pointed out the need for “contingent criteria” to evaluate qualitative research in management and that there are institutionalized biases in favour of quantitative research (Lee, 2006). For example, Guba and Lincoln (1994) argued that qualitative research should not be evaluated by means of reliability and validity

alone – those criteria were designed for quantitative research – and suggest trustworthiness, divided in four sub-criteria: credibility, transferability, dependability and conformability. Seale (1999) lists several detailed efforts to establish general quality criteria but advocates that such “criteriology” is non-productive. He sees qualitative research as a craft skill in the same sense as the researcher’s experience and personality are the most valuable instrument and that each methodology is affected by that, including quantitative research. He wants to erase the borders between philosophical, political and theoretical positions; research could benefit from whatever there is, be it positivism, constructivism, or postmodernism. But, he is not as eclectic as I am when I claim that we should include approaches from natural sciences as well. This standpoint will be further explained in the section on network theory. The first quality decision concerns what research to prioritize. It should follow the industrial imperative: “Do the right thing”. But, who decides what is the right thing to focus on? It creates a problem for innovative research as it may antagonize mainstream supervisors, examiners, reviewers and funding bodies. They do not believe in it or they simply do not grasp it. We have to accept that innovative research always involves risk-taking. Although I have argued well over 20 years now that academe should listen to what manufacturing, services and government have learnt about quality I seem to be crying in a desert (Gummesson, 2000). But we are addressing quality in business and management research and why cannot we learn from its practice? Is it still that business schools only go to sociology and ethnography to learn about research and science? It certainly seems so when looking at the references to articles. How then are we going to establish methodology in business and management? Having accepted the choice of a certain research topic, the quality focus moves to the implementation and outcome of the project. In Table I my own efforts to combine lessons from quality management in business and academe, quality criteria and strategies have been assembled in a checklist. It can be used in two ways. The first is during a research project to avoid that early mistakes are repeated and contaminate future stages of the project. It follows another industry imperative “Do it right the first time”. However, it is essential to note that quality management in industry is primarily focused on systematic assessment of continuous and controlled activities and not on innovation. One of the most innovative companies since decades is 3M. It has boasted a creative culture and proved that it gives results. When a Six Sigma programme was implemented profitability stock price and shareholder value rose, mainly because productivity was improved (meaning short-term cost reduction). After a couple of years, the creative environment had declined to a stage where it is now critical to re-establish its vitality (Hindo, 2007). If innovation is restricted to certain rules, it dies by definition, but at certain stages, it can include elements of discipline and standard procedure. The second use is to assess the outcome of a project. Industry has gone from quality control of the finished product to a focus on each element during the process. 
In academe, there is a similar development, albeit not as clear, especially as a true research project contains only a limited amount of repetitive behaviour and a larger element of developmental behaviour. Research should not be the object of just a last-minute verdict. What is required is a delicate balance: making quality certain along the way without interfering with unorthodox and innovative thinking.

Table I. Checklist for quality assurance of case study research

1. Readers should be able to follow the research process and draw conclusions of their own:
- Well written, intelligible report
- A comprehensive account of the research process
- A statement of the problem, purpose and research questions of the study
- A description of methods of data collection, coding, analysis and interpretation procedures
- A well documented and rich description of cases
- Motives for the selection of cases
- Limits of the research project
- Clear presentation of results and conclusions
- Information to the reader if taboo information has been discovered but is made anonymous or disregarded

2. As far as realistically feasible, researchers should present their paradigm and pre-understanding:
- Personal and professional values and whether these have changed in the course of the research
- Values of the system under analysis
- Theories and concepts that govern the project together with the reasons for the choice of these theories and concepts
- The researcher’s prior experience and other pertinent information on the researcher

3. The research should possess credibility:
- Correct data, including correct rendering of statements and views of informants
- How analysis and interpretation are supported by data
- Demonstrated confidence in the theory, concepts and conclusions that are used or generated in the research
- Honest presentation of alternative interpretations and contradictory data
- The avoidance of deliberate or unintentional deception
- The conclusions should accord with one another (internal logical consistency)
- The actors in the cases should be able to recognize what is presented in the report (external logical consistency)
- Presentation of all relevant data and information used in the case study

4. The researcher should have had adequate access:
- Selected methods and techniques should be appropriate to the problem, purpose and research questions
- Used methods and techniques that ensured adequate access to the processes under study
- Account of any difficulties in deploying desired access methods
- Account of any problems and limitations which arose through denied access
- Account of any problems and limitations in access which arose through time and money constraints
- How access limitations have possibly impaired the research

5. An assessment of the generality and validity of the research:
- To what areas the results apply
- How closely the research represents the phenomenon which the researcher aimed to study
- Whether other research confirms or disconfirms the findings
- Whether results bear out or disagree with extant theories and concepts

6. The research should make a contribution:
- Contribute to increased knowledge
- Deal with relevant problems
- Optimize the trade-off between methods, techniques and results
- Be of value to the scientific community, the client, and the public
- Actively be made available to the scientific community, the client, and the public

7. The research process should be dynamic:
- The extent to which the researcher has continuously learnt through own reflection and dialogue with others
- Demonstrated creativity and openness to new information and interpretations
- The ability to switch between deep involvement and distance
- A demonstrated awareness of changes of research design, methods application and so on during the research process

8. The researcher should possess certain personal qualities:
- Commitment to the task of research
- Integrity and honesty, being able to voice his or her conviction
- Flexibility and openness, being able to adjust to changed conditions and new – even disturbing – information

Source: Gummesson (2000), reproduced with permission

The ideal is that a project should not need any quality checks when finished; quality has been taken care of during the journey. That is what supervisors are for: they are coaches. Nobody can score high on each issue; such a demand would inhibit innovation. The goal should be to reach a satisfactory level with regard to the type of research and the imperfections that occur in the practice of research. However, procedural adherence – “I followed the rules and did it by the book” – is not sufficient; it promotes research ritual over results. The outcome of the quality assessment is therefore a weighted impression arrived at through examiner reflection and dialogue. For example, a precise statement of the researcher’s paradigm (point 2 in Table I) may be premature if the researcher is experimenting with a new paradigm. It may not be clear what premises guided the researcher; this requires further study in a new project. If the demand is initially too strict, it will impede innovation. Another example is the demand for adequate access (point 4). There can be many impediments to getting access, but awareness of access failures helps to assess validity (point 5).


As case study research handles complexity, it seems natural that the assessment of quality and productivity is complex, too. I mentioned earlier that the mere listing of criteria and questions on quality applied to organizations soliciting quality awards requires 40 to 50 pages. Mine is not that long, but it could of course be expanded. In its current form, it is a manageable compromise and it goes far beyond what is recommended in methodology handbooks. Ideally, the checklist should also include productivity, the links between productivity and quality, and the generation of new knowledge. However, nobody so far has been able to find guidelines for such a trade-off. It is possible to establish simplistic demands like “two articles per year in blind-reviewed top scientific journals” and “books do not count”. Although such statements may seem rock solid and hands-on, they promote rigidity and tie the hands of scientists rather than stimulate quality, productivity and innovation. They encourage risk avoidance, me-too mainstream research and robot-like testing of extant theory, and they impose arbitrary constraints on what is amenable to research – all because this is convenient to measure. It encourages ritual over results. The ritual should be a supportive enabler; it should facilitate the researcher’s survival in a messy world, but the enabler is just a means, not the end. Strict ritualism discourages true search, which includes risk-taking but can result in discovery and improved knowledge. At the end of the day – even with this convenient rigidity – the overall assessment rests with the personal feelings of the judges and their nose-and-tongue perception of the world. Objective criteria are preferable (if they exist in a truly objective form), and intersubjective, peer-approved criteria can facilitate research for a period (but not forever). Researchers must also be trusted with the ability to understand what is right. Further, we have to acknowledge that the bulk of knowledge has not yet been communicated because it lacks words or numbers. It is experienced by individual researchers or research teams; it is tacit knowledge. Hopefully, it may some day become explicit knowledge, but that may take time. Demanding immediate clarity is “the kiss of death before birth”. To scorn intuition, sound judgment, common sense, and experience is an expression of academic snobbism. Even academic bureaucrats and snobs end up with nose-and-tongue overall assessments, where the tipping point in one direction or the other may be no more than a personal and emotional whim.

Network theory
Establishing the link between cases and networks
My first encounter with network theory came through sociograms in the 1960s. They described how Laura related to John, how John related to Richard, and so on. Since the 1970s, interorganizational networks have been successfully applied to, for example, business-to-business (B2B) marketing. These applications stayed with me and kept pleading for attention. In the early 2000s, I suspected that one of the big problems with relationship marketing, customer relationship management (CRM), and one-to-one marketing was their focus on the dyad – the relationship between a single supplier and a single customer – rather than on the whole context in which the dyad is embedded. I decided to write a book about networks in marketing.
It first came out in Swedish under the title Many-to-many Marketing with the subtitle “From one-to-one to many-to-many in the marketing of the network economy” (Gummesson, 2004).

Network theory became enormously helpful in better understanding relationships and in designing many-to-many marketing. I gradually realized that life is a network of relationships in which interaction takes place. This must of course show in each specific facet of life, including management. An observation that intrigued me was that network theory is both a theory of life and a methodology to explore life. Sometimes I hear that network thinking is just a metaphor, which can be intriguing and enlightening but hardly demands commitment. I am willing to defend a contrary standpoint: networks are the real thing. If I am exposed to reasons to abandon the network paradigm, I will be happy to do so. Until then, as network theory, like no other scientific approach, is helping me to discover a new world, I will remain loyal. Network theory is universal and can be applied to anything.

Setting out on my network Odyssey, I had no clue that network theory had such wide capacity to handle complexity. “Complexity” is derived from the Latin verb complecti (to twine together) and the noun complexus (network). The word “system” is derived from the Greek systema, meaning “a whole composed of many parts”. Complexity, including networks and systems thinking, has spawned a family of natural science approaches, complexity theory. Its members embrace complexity instead of shunning it. Among them are quantum physics, chaos theory, autopoiesis (self-organizing systems), fractal geometry and string theory. I would like to bring case studies into the complexity theory family, thus transcending the boundaries between the social and natural sciences. Natural scientists promote visionary thinking in elevating theory to new heights. They even have the audacity to suggest “a theory of everything” (Barrow, 1992). Social scientists display an inferiority complex towards the natural sciences, and “physics envy”, to use a Freud-inspired term, is widespread. This is unfortunate, as social scientists rarely have any idea what modern physics and mathematics are about. Greenwood and Levin (2005, p. 53) say that “Everyone is supposed to know by now that social research is different from the study of atoms, molecules, rocks, tigers, slime molds, and other physical objects.” However, they have mainstream physics in mind – which has its place – and do not refer to modern physics and controversial yet exciting visions such as string theory.

How about medicine and psychology? Do they not clearly embrace the physical, mental and social sides of life? Orthodox western medicine likes to see itself as a natural science. Social medicine, psychiatry and other border subdisciplines rank low in the pecking order of medical doctors; hard-core, physically oriented doctors such as surgeons and cancer specialists reside in the top positions. Although Hippocrates once founded medical science on seven cases and Freud’s psychoanalysis is based on five cases, orthodox western medicine today rejects clinical cases and condescendingly stamps them as “anecdotal”. They want “evidence-based medicine”, an extreme, narrow form of contextless, statistical cause-and-effect research ground through a bureaucratic apparatus. There is nothing to stop us from using quantitative elements in case study research. For example, a case of a merger can include spreadsheet analyses of financial data from a series of years, and a statistical survey of how employees perceived the merger.
Coviello (2005) combines case study research with network theory and classifies the former as qualitative and the latter as quantitative. This is so in her specific application but the classification is not universally valid.


I have found a close affinity between case study research and network theory, but their grammar is different. Case study research is not a theory. It is usually a verbal, narrative description, sometimes supplemented by graphs, pictures and quantitative elements, that matures into analysis and conceptualization. There is an art and science of text interpretation (hermeneutics) offering guidelines but not strict rules. The grammar of networks is nodes and links. It was not until recently that it dawned on me how close the two methodologies are in addressing the complexity of masses of data, contextual dependency, dynamic situations, and fuzzy variables. It became clear to me that the two are not in conflict but supplement each other. Buchanan (2003, p. 6) makes a connection between social networks and natural sciences:

Networks that have grown up under different conditions to meet markedly different needs turn out to be almost identical in their architecture. Why? A new theoretical perspective is helping to answer this question and is enabling researchers in almost every area of science to begin tackling some of their most challenging and important problems.

He concludes that the network perspective has come into its own:

Physicists have entered into a new stage of their science and have come to realize that physics is not only about physics anymore, about liquids, gases, electromagnetic fields, and physical stuff in all its forms. At a deeper level, physics is really about organization – it is an exploration of the laws of pure form (p. 165, italics added).

As a physicist, Barabási (2002, pp. 200, 208) underscores the application to markets: “. . . understanding network effects becomes the key to survival in a rapidly evolving new economy” and “In reality, a market is nothing but a directed network”.

Properties and concepts of network theory
Network theory has a long history in the social sciences (on social network analysis, see Scott (1991), Degenne and Forsé (1999) and Kilduff and Wenpin (2003); on applications of networks and relational thinking, see Granovetter (1973, 1978, 1985), Schluter and Lee (1993, 2003), Castells (1996), Gladwell (2000), Rosen (2002) and Tanner (2003)). It also has a long history in the natural sciences, in which there is currently a keen interest in general network theory and the dissolution of boundaries between natural and social sciences (Barrow, 1992; Zohar, 1997; Capra, 1997, 2002; Barabási, 2002; Buchanan, 2003; Stacey, 2007). Network theory is used in organization theory (for a state-of-the-art theoretical account, see Czarniawska and Hernes (2005); for a practical application, see Lipnack and Stamps (1994)) and in marketing (Iacobucci, 1996; Möller and Wilson, 1996; Achrol and Kotler, 1999; Christopher et al., 2002; Gummesson, 2004; Coviello, 2005).

Next is an account of concepts and properties of network theory that I have encountered in either or both the social and natural sciences, though more comes from the natural sciences. Why this exposé, and why its length? The rationale is that I want to draw the attention of researchers in management disciplines, who most likely limit their search for methodology to the social sciences literature, to extend their search to the modern natural sciences of the past century and to those authors who already transcend the boundaries between the two. The account is primarily extracted from the sources given in the previous paragraph, but in some instances specific references are given. To contribute to the overview, Table II exhibits an index of the concepts and properties.

Table II. Network theory index in alphabetical order
Cascading failure; Centralised networks; Change; Cluster; Cluster coefficient; Connectors; Context; Contingency; Critical state; Decentralized networks; Degree exponent; Distributed networks; Dynamics; Embeddedness; Error tolerance; Fit-get-rich; Fitness; High tech/high touch; Holistic; Hub; Interaction; Iterative; Links; Nodes; Non-linear; Phase transition; Planned network; Power law; Preferential attachment; Process; Random network; Rich-get-richer; Robustness; Scale-free network; Self-organizing; Six degrees of separation; Small world; Spreading rate; Structure; Systemic; Threshold; Tipping point; Topology; Winner-takes-all
Note: For explanation, see the text

Nodes, hubs (highly connected nodes), links and interaction are the basic elements of all networks. The nodes can represent anything of importance for describing and explaining a phenomenon. Often a node is a person or an organization, but it can just as well be a concept, an event or a machine; it is a matter of researcher discretion. For management disciplines, which deal with both people and technology – especially information technology, which is now brought into every corner of society – it is essential that both human and technological elements can be shown in interaction. One example is the concept of high tech/high touch, which says that when technology becomes more intense in our lives we need to compensate for it with human touch. It can also be used to emphasize the need for technological and human balance. High tech and high touch interact and are supplementary.

The network builds up the whole and the parts at the same time, offering an explicit and orderly way of doing so. It is systemic or holistic and thus caters for the basic property of context. It allows the study of fragmented detail but offers techniques to put detail into context and not leave it hanging, as conventional statistical research does. Each specific network applies the elements and properties in an individual way. Networks therefore come in many shapes and are shaped by many forces. Examples are the centralized network (one hub), the decentralized network (many hubs) and the distributed network (no hub). The topology of networks refers to the network landscape, its size and the structure of nodes and links. Structured descriptions are usually linear, presenting events in steps like a chain (an example is the well-known value chain) or chronologically. Networks are independent of sequence; they are nonlinear and allow iterations, the jumping back and forth between the elements of a phenomenon. Linear equations require that anomalies and other “disturbing” phenomena are sedated and toned down, or the equation cannot be solved – but gone is validity.
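To make this grammar of nodes, links and hubs concrete, here is a minimal Python sketch of my own (the organizations and the cut-off for calling a node a “hub” are invented for illustration and are not taken from the article): it stores a small business network as an adjacency structure, counts each node’s links, and flags the most connected nodes as hubs.

```python
# A minimal sketch of the basic network grammar: nodes, links, degrees, hubs.
# The organizations and the hub cut-off are invented for illustration.
from collections import defaultdict

links = [
    ("Supplier", "Manufacturer"), ("Manufacturer", "Retailer"),
    ("Retailer", "Customer A"), ("Retailer", "Customer B"),
    ("Retailer", "Customer C"), ("Manufacturer", "Bank"),
]

# Build an undirected adjacency structure: node -> set of directly linked nodes.
network = defaultdict(set)
for a, b in links:
    network[a].add(b)
    network[b].add(a)

# A node's degree is its number of links; hubs are the most highly connected nodes.
degrees = {node: len(linked) for node, linked in network.items()}
hub_cutoff = 3  # illustrative threshold for calling a node a hub
hubs = sorted(node for node, degree in degrees.items() if degree >= hub_cutoff)

print("Degrees:", degrees)
print("Hubs:", hubs)  # here the Manufacturer and the Retailer act as hubs
```

Real analyses would of course use richer data and dedicated tools; the point is only that the node-and-link grammar is straightforward to operationalize alongside a case description.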


In early network theory, networks were treated as random. Randomness in the mathematical sense exists in nature and social life as special cases but not as a general characteristic. Business networks can include random dimensions, but they are primarily planned networks, modified by the intentions and behaviour of numerous companies and consumers, by governments and by others. The structures and processes of networks have to be selective to be manageable. This becomes particularly obvious when a network grows. A dyad includes two people and one link. Double it to four people and the number of potential links increases to six; double it again to eight people and the potential links number 28, and so on, the number of potential links growing roughly with the square of the number of members. In principle, networks are scale-free, meaning that their size has no limit. In practice, they are limited by specific conditions and circumstances such as a company owner’s objectives, the size of the market, access to capital, and government regulations.

Clusters are dense groupings of nodes and links within which everybody can easily reach everybody. The cluster coefficient is a measure of closeness. If the coefficient is 1.0, all members of the cluster are in contact with each other. It is zero when one member is related to all the others but those others relate only to this one member. If, in a cluster of four, which allows a maximum of six links, only four links exist, the coefficient becomes 0.67. Granovetter’s (1973) concept of the strength of weak ties shows that society consists of highly connected clusters which are linked to each other by weak yet important ties. His concept of embeddedness (Granovetter, 1985) – everything is embedded in networks and thus connected – supports the notion of the small world, popularly expressed as six degrees of separation (nobody is more than six steps away from everybody else in the world).

Hubs (or connectors) are organizations or people with a particular gift or position to attract others and build contacts with them. Becoming a hub is essential in business. For example, the more people who visit your web site, the more visible you become and the more you can sell. The number of hits and orders a web site receives is an indicator of hub status. Marketing management could then be described as nodes fighting for links. Studies of webpages show that they follow a mathematical power law, which says the same as the well-known 80/20 rule or Pareto principle. For example, 80 per cent of the links in a network go to 20 per cent of the nodes, that is, to the powerful hubs. It can be expressed mathematically by the degree exponent, which for most systems varies between two and three. A study showed that the incoming links to webpages had a degree exponent close to two, telling us how many highly popular pages there were relative to the less popular (Barabási, 2002, p. 68). It has been shown that the more links a hub has, the more likely it is that this hub will be preferred by newcomers; this is known as preferential attachment. It is an inherent growth factor – the bigger you are, the quicker you grow – expressed as the rich-get-richer syndrome. In business, large hubs kill small hubs through competition or swallow them through mergers and acquisitions. This is claimed to be an unavoidable consequence of the networked economy and of growth strategies governed by natural laws (Barabási, 2002, p. 200).
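To make these figures concrete, here is a small, illustrative Python sketch of my own (not part of Gummesson’s argument): it reproduces the potential-link counts and the cluster coefficient mentioned above, and then simulates preferential attachment to show how a small share of nodes ends up holding a disproportionate share of links. The group sizes and simulation parameters are arbitrary assumptions.

```python
# An illustrative sketch of three quantitative ideas from this passage:
# potential links, the cluster coefficient, and preferential attachment.
import random

def potential_links(n):
    """Maximum number of links among n nodes (every pair linked once)."""
    return n * (n - 1) // 2

print([(n, potential_links(n)) for n in (2, 4, 8)])  # [(2, 1), (4, 6), (8, 28)]

def cluster_coefficient(existing_links, n):
    """Share of the possible links within a cluster of n nodes that actually exist."""
    return existing_links / potential_links(n)

print(round(cluster_coefficient(4, 4), 2))  # 4 of 6 possible links -> 0.67

# Preferential attachment: each newcomer links to one existing node, chosen with
# probability proportional to that node's current number of links (rich get richer).
degrees = [1, 1]            # start from a single dyad: two nodes, one link
for _ in range(998):        # add newcomers one at a time, up to 1,000 nodes
    target = random.choices(range(len(degrees)), weights=degrees)[0]
    degrees[target] += 1
    degrees.append(1)

best_connected = sorted(degrees, reverse=True)[: len(degrees) // 5]
share = sum(best_connected) / sum(degrees)
print(round(share, 2))  # the best connected 20 per cent hold a disproportionate share of links
```

Weighting the newcomer’s choice by degree multiplied by a node-specific “fitness” value would give a simple version of the fit-get-rich pattern discussed next.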
Not only size but also fitness explains the attractiveness of a hub. A dominant hub with the most links can be overtaken by a new kid on the block with greater fitness. Very rapidly, Google passed established search engines such as Alta Vista; fitness compensated for its initial lack of hub status. At a slower rate, and thanks to fitness, the Linux open-source operating system has become a threat to the Microsoft monopoly. The fit-get-rich network is scale-free with many hubs: a few big hubs with numerous links co-exist with a large number of small hubs with fewer links; it is an oligopoly.

The winner-takes-all network leaves little to others: a single hub controls the bulk of the links; it is a monopoly. Once, IBM was close to a monopoly in computer hardware and only intervention under anti-trust laws stopped it. Airlines are obvious networks, and in Europe the upstart Ryanair in 2007 passed the long-established British Airways in size. The reason was fitness, manifested in low prices and overall efficiency.

Networks are traditionally perceived as structures, but they are just as much the processes going on in those structures and the processes of changing the structures. So network theory allows for change, the dynamics that characterize life in general and that are a pressing issue in business. Phase transition is about the transfer from disorder to order. Power laws take over in phase transitions, and the laws are general to behaviour in nature and society. For example, there are parallels between atoms and consumers. At a critical point, we have to stop viewing atoms as individuals as they group themselves into communities where the atoms act in unison. We recognize this in marketing: going from the individual to communities or segments that buy the same things for the same reasons. Free capitalist markets are self-organizing: millions of consumers make choices, not independently – they are influenced by the context of network belonging – but the variation is so huge that the choices can only be loosely and temporarily controlled. Companies try to exert control through individual relationship-building; parasocial relationship-building through symbols such as brands and storytelling; availability and distribution networks; and even the creation of physical addiction (examples are medication and sugar). Companies are trapped between order and chaos, and the dream is to reach the state of zero degrees Celsius when the rather disordered liquid water – customers – suddenly changes into a perfectly ordered state – ice.

Nature’s ecosystem has a greater topological robustness and error tolerance than human-made systems. It can sustain basic functions even if many nodes and links go bust. Cascading failure refers to a breakdown in one part of a network that builds up and spreads throughout the network. While sometimes a small node or link can make all the difference, sometimes the breakdown of a large number of nodes and links does not incapacitate a network. There is a tipping point, meaning that events accumulate and reach a point of sudden change. We may not note the signs of the gradual process because we understand too little of network behaviour. The breakdown of a large hub may cause instant failure. If the focus of business, government and the media is on big corporations, the gradual disappearance of small firms and the slowing down of start-ups and entrepreneurial activity will go unnoticed along the way. Once the tipping point is reached, the process may be irreversible and the effect can hit hard.

When an epidemic or an innovation spreads, we want to know the spreading rate – the likelihood that a person will adopt it – and the critical threshold, a quantity determined by the properties of the network in which it spreads. If the spreading rate is below the critical threshold the spread will die out; if it is above, the number of adopters will grow exponentially.
The thresholds of individuals vary widely, but a single person’s behaviour can trigger collective behaviour and cause unexpected and sudden events (Granovetter, 1978). We do not know why mobile phones suddenly spread so fast and why other IT products did not, or died. In management, we rely on plausible explanations and storytelling about events and their links. New theory of critical states may in the future give other explanations, equally applicable to physical and social phenomena (Buchanan, 2003, p. 106).
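As a rough illustration of the spreading-rate idea – a toy model of my own, not one taken from the article – the sketch below lets an innovation spread over a randomly wired network: each adopter tries to convert each non-adopting neighbour with a given probability per round. With a low spreading rate the process tends to die out quickly; with a higher one, adoption tends to take off. The network size, wiring and the two rates are arbitrary assumptions for illustration, and the critical level itself depends on the properties of the network, as the text says.

```python
# A toy adoption-spread simulation: each adopter tries once per round to convert
# each non-adopting neighbour with probability "spreading_rate". The wiring,
# sizes and rates are illustrative assumptions, not values from the article.
import random

def build_random_network(n_nodes=300, links_per_node=3):
    neighbours = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in random.sample(range(n_nodes), links_per_node):
            if i != j:
                neighbours[i].add(j)
                neighbours[j].add(i)
    return neighbours

def spread(spreading_rate, rounds=25):
    neighbours = build_random_network()
    adopters = {0}  # a single initial adopter
    for _ in range(rounds):
        newly_adopted = set()
        for adopter in adopters:
            for other in neighbours[adopter]:
                if other not in adopters and random.random() < spreading_rate:
                    newly_adopted.add(other)
        if not newly_adopted:
            break  # the spread has died out
        adopters |= newly_adopted
    return len(adopters)

# Below some critical level the process usually fizzles out early;
# above it, adoption usually reaches most of the network.
print("low spreading rate :", spread(0.02))
print("high spreading rate:", spread(0.30))
```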


Several of the general, law-like findings that we have discussed are partly counterintuitive: the small world, the spreading rate, and the tipping point. It is therefore essential that the outcomes of studies in modern mathematics, physics and other sciences are also tried on social phenomena. Much to the surprise of natural scientists, certain network laws have been found to be universal and to apply to such differing phenomena as the food web of an ecosystem (who eats whom), the connected neurons in the human brain, the dissemination of innovation in consumer markets, the breakdown of financial markets, and the growth of Google. The world wide web is one of the largest human-made networks; it is an infrastructural network just like the roads, the electricity grid and the telecom system. They all display general structures and processes, although the applications include specific features. There is obviously some organizing principle of the world on a deeper level that transcends the boundaries between the social and natural sciences.

For example, history is usually associated with the humanities; it is the history of people and nations. But anything that is exposed to change has a history, and therefore not only the social sciences but also the natural sciences are dependent on the past. Among them are archaeology, evolutionary biology, geology and astronomy, as well as the economic sciences: economic history, management and economics. History, as we know it from the school-books, is usually based on narrative efforts to link events, find possible causality and make interpretations. The rationale for such storytelling is contingency; unique events occur in unpredictable ways. If life consists of networks and there are laws that control networks, mathematical laws can sometimes be found. As it is often claimed that history – at least in part – determines the present and the future, we need to be able to chart events and their links better than just as accidents and qualitative interpretations. It now seems as if the power laws and hubs of complex systems can help us. It has been demonstrated that dominant web sites, mergers, globalization and financial breakdowns obey the same laws as cells and the self-similar fractals of river networks and tree branches. General laws apply to some extent whether the object is people, corporations, cells, galaxies or the internet.

Discussion
Network theory offers several advantages to researchers. In a systematic and rigorous, yet innovative fashion it can accommodate more dimensions than any other approach that I have come across – with case study research as the runner-up. First, network theory offers an attitude to management thinking. By accepting the network lenses of nodes and links and all the other concepts and properties that network theory has brought to the fore, we can see management in a new and productive light. Second, the application of network theory to management is a supplement to case study research. By trying out network thinking, we will increase our understanding of its potential for research in management. Third, it allows us to work on different levels of sophistication, including verbal and theoretical discourses, field studies, experiments, computer simulations, graphics – all the way from hand-drawn sketches to computer-generated patterns – and mathematical and statistical studies.
It can be applied intuitively and experientially but also in a scientific and scholarly mode. It fits both theoretical and practical requirements.

On the surface, cases and networks may seem very different. However, snorkelling and diving into their underwater world has revealed a close kinship between the two. They share the same mission: to uncover complexity. Case study research and network theory are supplementary, and it is my contention that increased understanding of networks will advance case study research. The two can make a marvellous team. Case study research is associated with the social sciences and qualitative research, while network theory transcends the boundaries between the social and natural sciences and can be used both qualitatively and quantitatively. Social scientists do not seem to tread on the turf of the natural sciences, while natural scientists unabashedly offer comments and conclusions on social phenomena. Network theory is used in the social sciences, but to a lesser degree than case study research. Business schools in general are not well versed in network theory, some not even in case study research. However, I sense quicker development in network theory than in case study methodology.

To really understand a methodology, you need first-hand experience of applying it to the complex reality of management, and you need a good coach and mentor to guide you. Once, after everybody had talked about the difficulty of grasping all the subtleties of grounded theory, its foremost advocate Barney Glaser got tired and said: “Just do it!” I am of the same mind as Barney. Academics often elaborate endlessly and eloquently on abstractions and details instead of getting started and learning in action. A study reported by Johnson et al. (2007) found eight ways of perceiving qualitative management research. Having a clear identity – which quantitative social researchers obviously believe they have – may seem enviable, but it can be a token of fear of complexity, a way for a field to cocoon itself from outside threats. If we deal with a fuzzy reality we have to match it with research techniques that allow for fuzziness, and I think this is what modern physics and mathematics have been doing during the past 100 years. This has gone unnoticed by the quantitative social sciences.

The abundance of literature on methodology gives me an uncanny feeling of being exposed to Harry Potter-like witchcraft and wizardry beyond my intellectual comprehension. For example, The Sage Handbook of Qualitative Research is 1,210 pages and it is a goldmine (Denzin and Lincoln, 2005). But gold does not come in pure form; it is embedded in rock, and the magic trick is to extract the gold grains or occasionally even lumps. Despite the rich content, I often cannot find what I am looking for and I get distracted by less precious metals. At the same time, I feel that it is too much. It suffocates me through its wealth and it alienates me: do I belong here? Of the 59 authors of the handbook, 21 come from education, 17 from sociology, ten from anthropology, eight from communications, and three from other areas. One chapter claims that economics, sociology and political science receive the bulk of social science research money and dominate social science publications (Greenwood and Levin, 2005, p. 53). However, economics and political science are not in the book, and psychology and management are only marginally mentioned. At Stockholm University, Sweden, management, as measured by the number of students, constitutes half of the social sciences and is ten times bigger than economics. In this sense, the handbook is not representative of the social sciences.
In reviewing the Second International Congress of Qualitative Inquiry, Lee (2006) notes that out of about 200 sessions, only one was dedicated to qualitative research in different management disciplines. He further says:


This under-representation appeared to be more attributable to qualitative researchers in management failing to look beyond their normal audiences, rather than any unwillingness of the congress to provide space for people conducting qualitative research in management.

It can be claimed that management has borrowed much of its methodology from other sciences, not least statistics and sociology, and that the methods are universally applicable. However, to deploy a method or research technique productively requires preunderstanding of the institutional conditions of the object under study. For example, to use sampling techniques and run a survey inside a company or of its customers, to interpret the data, and to transform them into decisions, action and eventually financial results requires an understanding of the profit concept and of consumer behaviour. Market research may be the only management area where the application of research techniques has acquired a recognized identity. Another example, recognized only in limited circles, is management action research (Gummesson, 2000; Coghlan and Brannick, 2005). Originally inspired by Clark (1972), I adapted action research from being a method to activate underprivileged groups to solve their own problems with researcher support, to a method for management researchers to gain privileged access through close involvement as decision-makers and actors, simultaneously researching the process for scholarly purposes. The key characteristic then is not primarily “help others help themselves” but “being involved in what’s going on and understanding it through first-hand experience”.

Making research work requires both regulated procedure and judgment calls, both preunderstanding and sensitivity to new data, both logical reasoning and free-wheeling creativity. If too regulated, research becomes mechanical, stifles the minds of students, makes them follow rituals that go against their experience and beliefs, and makes them simplify, reduce and even cheat to bring a study to the finish. For the convenience of academics, a series of dubious strategies have become mainstream; I am even inclined to refer to them as institutionalized deception. They are part of intersubjective peer agreement and their application is little challenged; critique may even jeopardize an academic career. As researchers, we may never have understood the qualitative and subjective assumptions underpinning what we do – our paradigm – or we have forgotten how harmful assumptions can be if unwisely chosen. Harmful and deceptive research strategies include routinely reducing human beings to numbers, statistics, averages, distributions, probabilities, negligible minorities (a negligible 0.1 per cent of China’s population is 1.3 million individuals!), side-effects, anomalies (irritating little things that disrupt approved theories or collapse a “beautiful” structural equation), and contextless fragments. Further, straightforward causality between an independent and a dependent variable is just a special case. It is often no more than co-variation, and truly independent variables are the exception as they are all embedded in a context. Delimitations that routinely exclude small factors and weak links ignore the tipping-point effect.

I agree with the claims of Greenwood and Levin (2005, p. 53) in their demand that the social sciences and universities must re-think their direction and mode of operation:

. . . one can only be amazed by the emphasis that so many conventional social scientists still place on the claim that being “scientific” requires researchers to sever all relations with the observed. Though epistemologically and methodologically indefensible, this view is largely dominant in social science practice . . .
This positivistic credo obviously is wrong and it leads away from producing reliable information, meaningful interpretations, and social actions. . .

They further say that the positivist credo “. . . has been subjected to generations of critique, even from within conventional social sciences. Yet it persists, suggesting that its social embeddedness itself deserves attention” (italics added). This embeddedness in networks of relationships, this contextual dependency, has led me to the psychology and sociology of science. It may be claimed that methods should not be handled individually by each researcher, yet researcher personality always exerts an influence on their practice. I refer to this as persona and researchscape, embracing the individual personalities who inhabit a discipline or a university department, and their behaviour (Gummesson, 2006). Behind the scientific front, the persona and researchscape of real life offer a spectrum of human virtues and frailties. Scientists are no more or less moral and dedicated than other human beings. They, too, are affected by the seven deadly sins and the counteracting seven virtues (in brackets): pride/hubris (humility), envy (satisfaction), wrath (patience), indifference/laziness (diligence), greed (generosity), gluttony (abstinence), and lust (chastity). We can apply yin and yang to each of these “opposites” and recognize that it is the tension between them that makes life vibrate. For example, a dash of greed is necessary to make a profit, but coupled with generosity it makes life so much more pleasant for all stakeholders, and too much humility is just as counterproductive as too much pride. Exposing that the alleged objectivity of scientific research is tinted by human persona widens our understanding of the reality of science. I recommend that every methods book should have a chapter on persona and researchscape.

Western science is full of stereotyped categories. The tendency to turn categories into foes rather than co-existing buddies is self-deceptive. One of the two is appointed superior – the winner – and the other may be tolerated as a temporary alternative, but is in essence a loser. Examples are qualitative/quantitative, natural sciences/social sciences, theory generation/theory testing, goods/services, supplier/customer, hierarchy/network, and competition/cooperation. These categories are neither rooted in the concrete mud of reality – they are pseudo-empirical – nor are they well conceptualized on a higher level of abstraction – they are pseudo-theoretical. They are stuck in the middle, offering sweeping generalizations and paradigmatic assumptions that are taken for granted, often unknowingly. For example, goods and services have become separated in literature and research without proper attention to their interdependency (Grönroos, 2007). The natural sciences/social sciences divide is counterproductive, and it has been mentioned above that modern physics can be viewed as organization theory. The divide is based on a notion that natural life and social life obey different laws. This is true of mainstream social and natural sciences but not of their modern versions. Both case study research and network theory look for patterns, but they approach them differently. In a similar vein, the qualitative/quantitative dichotomy in the social sciences and management is unfortunate. It can be used in select applications but not as a general vantage point for the choice of techniques for data generation; the need for close access is a healthier start (Gummesson, 2000). All quantitative research includes qualitative and subjective assumptions and conclusions.
Interestingly, modern natural sciences and mathematics represent a shift from quantity to quality and they are more concerned with qualitative features than with precise values of variables (Capra, 1997, p. 134).


Qualitative and quantitative, natural and social are not in conflict; they should be treated in symbiosis.

Conclusions and recommendations
In summary, I would like to advance the following conclusions and recommendations:
• Further develop the quality and productivity of case study research as a methodology: the way it is taught in classes and supervised for theses on all levels, the way it is practiced, and the way it is assessed by examiners and others who have the power to officially determine the quality of science.
• Complexity – in the broad sense in which it appears in complexity theory, including context, dynamics, nonlinearity and other related areas, and thus addressing phenomena that are fuzzy, ambiguous, chaotic and unpredictable – is the core phenomenon which unifies case study research and network theory.
• Consider network theory a supplementary and supportive companion to case study methodology; start teaching it, and apply it in research, papers and theses on all levels.
• Management is in need of innovation and theory generation, more so than the testing of contextless detail. Owing to their capacity to address complexity, case study research and network theory are ideal for creating better and more general theory on a higher level of abstraction.
• Bring in the modern natural sciences and weed out old and inadequate mathematical and statistical variants that are non-productive, together with the pompous declarations of “being scientific” when using numbers.
• Accept that management and its subdisciplines constitute the bulk of the social sciences today, and develop more textbooks, classes and conference tracks that are based on experience from research into management issues. Learn from the comprehensive methodological developments in sociology, ethnography, education and other social sciences, but acknowledge that management has its own tradition of developing, adapting and using methodology, and lift this to the fore.
• Stimulate researchers to create a scientific persona and researchscape that is less dependent on bureaucratic regulations and cover-ups for human shortcomings (such as defending the status quo, a narrow world-view, rituals at the cost of results, and greed and envy) and more dependent on a constructive will to contribute to our understanding of management, with the ulterior motive of improving quality of life and offering a better society.

References
Achrol, R.S. and Kotler, P. (1999), “Marketing in the network economy”, Journal of Marketing, Vol. 63, pp. 146-63.
Bakir, A. and Bakir, V. (2006), “Unpacking complexity, pinning down the ‘elusiveness’ of strategy”, Qualitative Research in Organizations and Management, Vol. 1 No. 3, pp. 152-72.
Ballantyne, D. and Varey, R.J. (2006), “Introducing dialogical orientation to the service-dominant logic of marketing”, in Lusch, R.F. and Vargo, S.L. (Eds), The Service-dominant Logic of Marketing: Dialog, Debate, and Directions, M.E. Sharpe, Armonk, NY, pp. 224-35.
Barabási, A-L. (2002), Linked: The New Science of Networks, Perseus, Cambridge, MA.

Barrow, J.D. (1992), Theories of Everything, Vintage, London.
Buchanan, M. (2003), Small World, Phoenix, London.
Capra, F. (1997), The Web of Life, Flamingo/HarperCollins, London.
Capra, F. (2002), The Hidden Connections, HarperCollins, London.
Cassell, C., Symon, G., Buering, A. and Johnson, P. (2006), “The role and status of qualitative methods in management research: an empirical account”, Management Decision, Vol. 44 No. 2, pp. 290-303.
Castells, M. (1996), The Rise of the Network Society, Blackwells, Oxford.
Christopher, M., Payne, A. and Ballantyne, D. (2002), Relationship Marketing, 2nd ed., Butterworth-Heinemann, Oxford.
Clark, P. (1972), Action Research and Organizational Change, Harper & Row, London.
Coghlan, D. and Brannick, T. (2005), Doing Action Research in Your Own Organization, 2nd ed., Sage, London.
Coviello, N.E. (2005), “Integrating qualitative and quantitative techniques in network analysis”, Qualitative Market Research, Vol. 8 No. 1, pp. 39-60.
Czarniawska, B. and Hernes, T. (Eds) (2005), Actor-network Theory and Organizing, Liber & Copenhagen Business School Press, Malmö.
Degenne, A. and Forsé, M. (1999), Introducing Social Networks, Sage, London.
Denzin, N.K. and Lincoln, Y.S. (Eds) (2005), The Sage Handbook of Qualitative Research, 3rd ed., Sage, Thousand Oaks, CA.
Eisenhardt, K.M. (1989), “Building theories from case study research”, Academy of Management Review, Vol. 14 No. 4, pp. 532-50.
Flyvbjerg, B. (2006), “Five misunderstandings about case-study research”, Qualitative Inquiry, Vol. 12 No. 2, pp. 219-45.
Geertz, C. (1973), The Interpretation of Cultures, Basic Books, New York, NY.
Gladwell, M. (2000), The Tipping Point, Abacus, London.
Glaser, B.G. and Strauss, A.L. (1967), The Discovery of Grounded Theory, Aldine, New York, NY.
Granovetter, M.S. (1973), “The strength of weak ties”, American Journal of Sociology, Vol. 78, pp. 3-30.
Granovetter, M.S. (1978), “Threshold models of collective behavior”, American Journal of Sociology, Vol. 83, pp. 1420-43.
Granovetter, M.S. (1985), “Economic action and social structure: the problem of embeddedness”, American Journal of Sociology, Vol. 91, pp. 481-510.
Greenwood, D.J. and Levin, M. (2005), “Reform of the social sciences and of universities through action research”, in Denzin, N.K. and Lincoln, Y.S. (Eds), The Sage Handbook of Qualitative Research, Sage, Thousand Oaks, CA, pp. 43-64.
Grönroos, C. (2007), Service Management and Marketing, Wiley, Chichester.
Guba, E.G. and Lincoln, Y.S. (1994), “Competing paradigms in qualitative research”, in Denzin, N.K. and Lincoln, Y.S. (Eds), The Sage Handbook of Qualitative Research, Sage, Thousand Oaks, CA, pp. 43-64.
Gummesson, E. (2000), Qualitative Methods in Management Research, Sage, Thousand Oaks, CA.
Gummesson, E. (2004), Many-to-many Marketing, Liber, Malmö.
Gummesson, E. (2006), “Qualitative research in management: addressing complexity, context and persona”, Management Decision, Vol. 44 No. 2, pp. 167-79.


Gummesson, E. (2007), “Case study research”, in Gustavsson, B. (Ed.), The Principles of Knowledge Creation Methods, Edward Elgar, Cheltenham.
Hindo, B. (2007), “At 3M, a struggle between efficiency and creativity”, Business Week, June 11, pp. 8-12.
Iacobucci, D. (Ed.) (1996), Networks in Marketing, Sage, Thousand Oaks, CA.
Johnson, P., Buering, A., Cassell, C. and Symon, G. (2007), “Defining qualitative management research: an empirical investigation”, Qualitative Research in Organizations and Management, Vol. 2 No. 1, pp. 23-42.
Kilduff, M. and Wenpin, T. (2003), Social Networks and Organizations, Sage, London.
Lee, B. (2006), “The ‘Qualitative inquiry in the business and management field’ Symposium at the Second International Congress of Qualitative Inquiry”, Qualitative Research in Organizations and Management, Vol. 1 No. 2, pp. 141-5.
Lipnack, J. and Stamps, J. (1994), The Age of the Network, Wiley, New York, NY.
Lusch, R.F. and Vargo, S.L. (2006), “Service-dominant logic: reactions, reflections and refinements”, Marketing Theory, Vol. 6 No. 3, pp. 281-8.
Möller, K. and Wilson, D. (Eds) (1996), Business Marketing: Interaction and Network Approach, Kluwer, Boston, MA.
Rosen, E. (2002), The Anatomy of Buzz, Currency, New York, NY.
Schluter, M. and Lee, D. (1993), The R Factor, Hodder & Stoughton, London.
Schluter, M. and Lee, D. (2003), The R Option, The Relationship Foundation, Cambridge.
Scott, J. (1991), Social Network Analysis, Sage, London.
Seale, C. (1999), “Quality in qualitative research”, Qualitative Inquiry, Vol. 5 No. 4, pp. 465-78.
Stacey, R. (2007), “The challenge of human interdependence”, European Business Review, Vol. 19 No. 4, pp. 292-302.
Tanner, L. (2003), Crowded Lives, Pluto Press, North Melbourne.
Vargo, S.L. and Lusch, R.F. (2004), “Evolving to a new dominant logic for marketing”, Journal of Marketing, Vol. 68, pp. 1-21.
Waddock, S.A. and Spangler, E. (2000), “Action learning in leadership for change”, in Sherman, F. and Torbert, W. (Eds), Transforming Social Inquiry, Transforming Social Action: New Paradigms for Crossing the Theory/practice Divide in Universities and Communities, Kluwer, Boston, MA, pp. 207-28.
Yin, R.K. (1984), Case Study Research: Design and Methods, Sage, Thousand Oaks, CA.
Zohar, D. (1997), Rewiring the Corporate Brain, Berrett-Koehler, San Francisco, CA.

About the author
Evert Gummesson is a Professor of Marketing at the Stockholm University School of Business. His interests embrace services, quality management, relationship marketing and CRM, and currently a network approach to a new logic of marketing, reflected in his latest book Many-to-many Marketing. He is the author of several articles on methodology and theory generation and one book, Qualitative Methods in Management Research. His article (with Christopher Lovelock), “Whither services marketing?”, in the Journal of Service Research won the American Marketing Association’s Award for Best Article on Services in 2004. Evert Gummesson can be contacted at: [email protected]

Obituary
We are very saddened by the death on May 27 this year of Professor Frank Heller, a member of the QROM Editorial Board. Frank had a most illustrious academic career over many decades and the organizational research community in general owes him a substantial debt. However, our memory of him is also more personal. Frank was invariably supportive and encouraging of our work towards raising the profile of qualitative research. He participated in seminars we organised, wrote book chapters for us and reviewed articles – all of this in a spirit of unfailing interest and willingness to help. We also shared very many intellectually stimulating conversations at conferences and in meetings which encouraged us to be both inclusive and reflexive in our work. Most of all, we will remember him for his generosity, humour and great patience. He will be deeply missed but will continue to inspire our work.


Gillian Symon and Catherine Cassell
QROM Joint Editors
