Techniques for Performance Improvement in Organizations (ISBN 9781846636011, 9781846636004)



ISSN 1741-038X

Volume 18 Number 7 2007

Journal of Manufacturing Technology Management
Techniques for performance improvement in organisations
Guest Editors: Chien-Ta Bruce Ho and S.C. Lenny Koh

www.emeraldinsight.com

Journal of Manufacturing Technology Management

ISSN 1741-038X Volume 18 Number 7 2007

Techniques for performance improvement in organisations Guest Editors Chien-Ta Bruce Ho and S.C. Lenny Koh

CONTENTS

Access this journal online ... 783
Editorial review board ... 784
Guest editorial ... 785
Domain-concept association rules mining for large-scale and complex cellular manufacturing tasks
  Wannapa Kay Mahamaneerat, Chi-Ren Shyu, Shih-Chun Ho and C. Alec Chang ... 787
Plug and play (PnP) modelling approach to throughput analysis
  Kim Hua Tan and James Noble ... 807
The cluster approach and SME competitiveness: a review
  Aleksandar Karaev, S.C. Lenny Koh and Leslie T. Szamosi ... 818
An ANN-based DSS system for quality assurance in production network
  Walter W.C. Chung, Kevin C.M. Wong and Paul T.K. Soon ... 836
Total acquisition cost of overseas outsourcing/sourcing: a framework and a case study
  Ninghua Song, Ken Platts and David Bance ... 858
Wireless technologies for logistic distribution process
  Win-Bin See ... 876
The aggregation for enterprise distributed databases: a case study of the healthcare national immunization information system in Taiwan
  Ruey-Kei Chiu, S.C. Lenny Koh and Chi-Ming Chang ... 889

Access this journal electronically
The current and past volumes of this journal are available at www.emeraldinsight.com/1741-038X.htm You can also search more than 150 additional Emerald journals in Emerald Management Xtra (www.emeraldinsight.com). See the page following the contents for full details of what your access includes.

www.emeraldinsight.com/jmtm.htm As a subscriber to this journal, you can benefit from instant, electronic access to this title via Emerald Management Xtra. Your access includes a variety of features that increase the value of your journal subscription.

How to access this journal electronically
To benefit from electronic access to this journal, please contact [email protected] A set of login details will then be provided to you. Should you wish to access via IP, please provide these details in your e-mail. Once registration is completed, your institution will have instant access to all articles through the journal's Table of Contents page at www.emeraldinsight.com/1741-038X.htm More information about the journal is also available at www.emeraldinsight.com/jmtm.htm

Your access includes a variety of features that add to the functionality and value of your journal subscription. Our liberal institution-wide licence allows everyone within your institution to access your journal electronically, making your subscription more cost-effective. Our web site has been designed to provide you with a comprehensive, simple system that needs only minimum administration. Access is available via IP authentication or username and password.

Key features of Emerald electronic journals

Structured abstracts
Emerald structured abstracts provide consistent, clear and informative summaries of the content of the articles, allowing faster evaluation of papers.

Automatic permission to make up to 25 copies of individual articles
This facility can be used for training purposes, course notes, seminars etc. This only applies to articles of which Emerald owns copyright. For further details visit www.emeraldinsight.com/copyright

Online publishing and archiving
As well as current volumes of the journal, you can also gain access to past volumes on the internet via Emerald Management Xtra. You can browse or search these databases for relevant articles.

Key readings
This feature provides abstracts of related articles chosen by the journal editor, selected to provide readers with current awareness of interesting articles from other publications in the field.

Non-article content
Material in our journals such as product information, industry trends, company news, conferences, etc. is available online and can be accessed by users.

Reference linking
Direct links from the journal article references to abstracts of the most influential articles cited. Where possible, this link is to the full text of the article.

E-mail an article
Allows users to e-mail links to relevant and interesting articles to another computer for later use, reference or printing purposes.

Additional complimentary services available

E-mail alert services
These services allow you to be kept up to date with the latest additions to the journal via e-mail, as soon as new material enters the database. Further information about the services available can be found at www.emeraldinsight.com/alerts

Emerald online training services
Visit www.emeraldinsight.com/training and take an Emerald online tour to help you get the most from your subscription.

Xtra resources and collections When you register your journal subscription online, you will gain access to Xtra resources for Librarians, Faculty, Authors, Researchers, Deans and Managers. In addition you can access Emerald Collections, which include case studies, book reviews, guru interviews and literature reviews.

Emerald Research Connections An online meeting place for the research community where researchers present their own work and interests and seek other researchers for future projects. Register yourself or search our database of researchers at www.emeraldinsight.com/ connections

Choice of access
Electronic access to this journal is available via a number of channels. Our web site www.emeraldinsight.com is the recommended means of electronic access, as it provides fully searchable and value-added access to the complete content of the journal. However, you can also access and search the article content of this journal through the following journal delivery services:
EBSCOHost Electronic Journals Service: ejournals.ebsco.com
Informatics J-Gate: www.j-gate.informindia.co.in
Ingenta: www.ingenta.com
Minerva Electronic Online Services: www.minerva.at
OCLC FirstSearch: www.oclc.org/firstsearch
SilverLinker: www.ovid.com
SwetsWise: www.swetswise.com

Emerald Customer Support For customer support and technical help contact: E-mail [email protected] Web www.emeraldinsight.com/customercharter Tel +44 (0) 1274 785278 Fax +44 (0) 1274 785201

JMTM 18,7

784

EDITORIAL REVIEW BOARD

Lynne Baxter, University of York, UK
Nourredine Boubekri, University of North Texas, USA
Felix T.S. Chan, The University of Hong Kong, Hong Kong
Ian Gibson, National University of Singapore, Singapore
A. Gunasekaran, University of Massachusetts, USA
Abdel-Aziz Hegazy, Helwan University, Egypt
Bob Hollier, Manchester Business School, UK
He Jinsheng, Tianjin University, China
Tarek Khalil, University of Miami, USA
Ashok Kochhar, University of Aston, UK
Siau Ching Lenny Koh, University of Sheffield, UK
Doug Love, University of Aston, UK
Douglas K. Macbeth, University of Glasgow, UK
Bart MacCarthy, Nottingham University Business School, Nottingham, UK
Marly Monteiro de Carvalho, Universidade de Sao Paulo, Brazil
Shunji Mohri, University of Hokkaido, Japan
Andy Neely, Cranfield University, UK
Kul Pawar, University of Nottingham, UK
Roy Snaddon, University of Witwatersrand, South Africa
Amrik Sohal, Monash University, Australia
Harm-Jan Steenhuis, Eastern Washington University, USA
Mile Terziovski, The University of Melbourne, Australia
Juite Wang, National Chung Hsing University, Taiwan

Journal of Manufacturing Technology Management, Vol. 18 No. 7, 2007, p. 784. © Emerald Group Publishing Limited, 1741-038X

Guest editorial

About the Guest Editors
Chien-Ta Bruce Ho is an Associate Professor in the Institute of E-Commerce at National Chung Hsing University. His current research interests include customer relationship management, value chain management and performance evaluation. He has authored and co-authored eight books and 25 refereed journal articles in the performance measurement area, and has presented more than 25 papers at national and international conferences. Samples of his work can be found in the Journal of the Operational Research Society, Journal of Air Transport Management, Industrial Management & Data Systems and Production Planning and Control. He is also the Editor of the International Journal of Electronic Customer Relationship Management.

S.C. Lenny Koh is the Director of the Logistics and Supply Chain Management Research Group and a Senior Lecturer in Operations Management at the University of Sheffield Management School, UK. She has 190 publications, including journal papers, books, edited books, edited proceedings, edited special issues, book chapters, conference papers, technical papers and reports. Her work appears in some of the top journals, including the Journal of the Operational Research Society, International Journal of Production Research and International Journal of Production Economics. She is the Editor in Chief of the International Journal of Enterprise Network Management, International Journal of Value Chain Management and International Journal of Logistics Economics and Globalisation, and serves on the editorial boards of several leading journals, including the Journal of Manufacturing Technology Management.

Techniques for performance improvement in organisations

We are pleased to introduce this special issue of the Journal of Manufacturing Technology Management on "Techniques for performance improvement in organisations". This special issue is one of the important deliverables from the 4th International Conference on Supply Chain Management and Information Systems (SCMIS2006), Taiwan, 5-7 July 2006. Suitable papers were invited for submission to this special issue, and the journal's review process was undertaken. This special issue contains seven papers, discussing a range of techniques for improving organisational performance. Below is a brief overview of the papers that appear in this issue.

The first paper, by Mahamaneerat, Shyu, Ho and Chang, provides a novel domain-concept association rules (DCAR) mining algorithm that offers solutions to complex cell formation problems, which consist of a non-binary machine-component (m/c) matrix and production factors, for fast and accurate decision support. The proposed DCAR algorithm considers a wide range of production parameters, which makes it suitable for real-world manufacturing system settings.

Tan and Noble, in their paper, propose a "plug and play" approach for decision modelling. The notion of this approach is to build models from components, based on a "LEGO block" style of manufacturing simulation and analysis. This approach enables managers to rapidly build models, increasing communication and decision support efficiency, and improving productivity.

The third paper, by Karaev, Koh and Szamosi, reviews the effect of a cluster approach on SMEs' competitiveness. The review focuses on the use of a cluster approach among SMEs as a tool for meeting their challenges related to globalisation and trade


Journal of Manufacturing Technology Management, Vol. 18 No. 7, 2007, pp. 785-786. © Emerald Group Publishing Limited, 1741-038X


liberalisation, as well as investigating its contributing factors in the process of increasing their competitiveness. The findings from this paper enable business managers to make more informed decisions regarding the adoption of a cluster approach and entering into cluster-based relations, and assist policy makers in designing more efficient cluster policies.

The fourth paper, by Chung, Wong and Soon, proposes an ANN-enabled decision support system to solve a simple but semi-structured production supply problem in a lens manufacturing environment. A case study approach was used to show how the system is implemented. The authors conclude that a significant improvement in quality level can be achieved by holding the knowledge worker accountable for the decision to stop the production line, rather than having that decision made by default, as in most traditional operations.

The fifth paper, by Song, Platts and Bance, develops a framework of total cost for overseas outsourcing/sourcing in the manufacturing industry, with input from both the academic literature and industrial offshoring practices. An exploratory case study is carried out in a multinational high-tech manufacturer to apply this framework, and practical barriers to implementing the model are identified.

The development of mobile communication technologies and other information technologies can be used to boost real-time capability in logistics. The sixth paper, by See, presents an application of wireless wide area network and personal area network technologies in logistic fleet operation management. The result is a real-world fleet management system that integrates mobile communication and supports real-time logistic information flow management.
The seventh paper, by Chiu, Koh and Chang, provides a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralised database for Taiwan's National Immunization Information System. The authors note that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardisation of message exchange between the distributed databases and the central database.

To summarise, this special issue shows that a broad range of techniques is available for improving organisational performance, from quantitative techniques such as modelling to qualitative techniques such as the cluster approach. Together, these techniques provide a scientific and holistic basis for managerial decision-making for performance improvement in organisations. The Guest Editors would like to thank all the authors for their contributions to this special issue, the reviewers for their valuable comments, and the Editor of the Journal, Professor David Bennett, and the Emerald Editorial Office for their support in making this special issue possible.

Chien-Ta Bruce Ho and S.C. Lenny Koh
Guest Editors

The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-038X.htm

Domain-concept association rules mining for large-scale and complex cellular manufacturing tasks Wannapa Kay Mahamaneerat and Chi-Ren Shyu Computer Science Department, University of Missouri, Columbia, Missouri, USA, and

Received September 2006
Revised January 2007
Accepted March 2007

Shih-Chun Ho and C. Alec Chang
Industrial and Manufacturing Systems Engineering Department, University of Missouri, Columbia, Missouri, USA

Abstract
Purpose – The purpose of this paper is to provide a novel domain-concept association rules (DCAR) mining algorithm that offers solutions to complex cell formation problems, which consist of a non-binary machine-component (MC) matrix and production factors, for fast and accurate decision support.
Design/methodology/approach – The DCAR algorithm first identifies the domain-concept from the demand history and then performs association rule mining to find associations among machines. After that, the algorithm forms machine-cells with a series of inclusion and exclusion processes to minimize inter-cell material movement and intra-cell void element costs, as well as to maximize the grouping efficacy under the constraints of the bill of material (BOM) and the maximum number of machines allowed for each cell.
Findings – The DCAR algorithm delivers either comparable or better results than existing approaches on known binary datasets. The paper demonstrates that the DCAR can obtain satisfactory machine-cells with production costs when extra parameters are needed.
Research limitations/implications – The DCAR algorithm adapts the idea of the sequential forward floating selection (SFFS) to iteratively evaluate and arrange machine-cells until the result is stabilized. The SFFS is an improvement over a greedy version of the algorithm, but can only ensure sub-optimal solutions.
Practical implications – The DCAR algorithm considers a wide range of production parameters, which makes the algorithm suitable for real-world manufacturing system settings.
Originality/value – The proposed DCAR algorithm is unlike other array-based algorithms. It can group a non-binary MC matrix with consideration of real-world factors including product demand, BOM, costs, and the maximum number of machines allowed for each cell.
Keywords Data handling, Flexible manufacturing systems, Group technology, Cellular manufacturing
Paper type Research paper
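The grouping efficacy that the abstract reports maximizing is a standard measure of how block-diagonal a clustered MC matrix is. As an illustration of the usual definition (the matrix and cell assignments below are made up for the example; this is not code from the paper):

```python
# Grouping efficacy for a binary machine-component (MC) matrix:
# tau = (e - e_out) / (e + e_void), where e is the total number of 1s,
# e_out counts 1s outside the diagonal blocks (exceptional elements),
# and e_void counts 0s inside the blocks (void elements).

def grouping_efficacy(mc, machine_cell, part_cell):
    e = e_out = e_void = 0
    for i, row in enumerate(mc):
        for j, val in enumerate(row):
            inside = machine_cell[i] == part_cell[j]  # element lies in its cell's block
            if val == 1:
                e += 1
                if not inside:
                    e_out += 1
            elif inside:
                e_void += 1
    return (e - e_out) / (e + e_void)

# Illustrative 4-machine x 4-part matrix with two perfect cells.
mc = [[1, 1, 0, 0],
      [1, 1, 0, 0],
      [0, 0, 1, 1],
      [0, 0, 1, 1]]
print(grouping_efficacy(mc, [0, 0, 1, 1], [0, 0, 1, 1]))  # -> 1.0
```

A value of 1.0 indicates a perfectly block-diagonal grouping; exceptional elements and voids both pull the measure down.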

Introduction
Owing to changes in customers' demand patterns in contemporary marketplaces, traditional fixed production lines that produce very large batches of products with long production lead times have become out-of-date shop floor plans. Modern manufacturing entities must adopt a flexible production approach to accommodate the

Journal of Manufacturing Technology Management, Vol. 18 No. 7, 2007, pp. 787-806. © Emerald Group Publishing Limited, 1741-038X. DOI 10.1108/17410380710817255


challenge from a competitive and changing market. The flexible manufacturing system (FMS) has emerged as an essential concept: clustering flexible facility assemblages for small batches that can rapidly respond to different product orders and design changes. Cellular manufacturing (CM) is an effective approach for determining functional machine layouts when sequential production lines are no longer practical in small-to-medium batch manufacturing environments. CM posits a common management principle: grouping related manufacturing tasks such that tasks with similar requirements are associated within the same work cells (Askin and Standridge, 1993). In CM, the manufacturing facilities are divided into "cells" where distinctive functional machines produce a family of products or parts.

Grouping machines and parts according to the ideas of Group Technology (GT) is a natural starting point of CM and of cell formation, the fundamental problem of CM system design. Given the entire set of parts and available machines, the objective of cell formation is to configure a set of machine-cells and a partition of parts that streamlines the production flow. By devoting a machine-cell to the manufacturing of a part family, advantages have been reported in many aspects, such as setup time reduction, work-in-process reduction, throughput time reduction, material handling cost reduction, scheduling simplification, and product quality improvement (Dimopoulos and Mort, 2001).

It is well recognized that simply grouping machines from a binary machine-component (MC) incidence matrix is a far cry from real-world situations; other important manufacturing factors should also be considered and recorded in the matrices. Additionally, to mimic the real-world setting, hierarchies of components should be included in the decision-making process. Figure 1 shows an example of such hierarchies, and Table I shows the corresponding bill of material (BOM) matrix.
The numbers in the BOM matrix represent the amount of each component needed by parent components in the hierarchical structure. Figure 1(a)-(c) show the hierarchies of components to produce the final products Pa, Pb, and Pc, respectively. For example, to produce one unit of the final product Pa in Figure 1(a), three P1 and four P2 are needed. This also suggests the operation sequence of Pa. In general, a component may be needed to produce various components and/or final products; for example, three units of P1 are needed to produce a unit of Pa, and nine units of P1 are needed to produce a unit of Pb. The second factor to be considered is the production cost matrix, which includes an

[Figure 1. An example of a bill of material for three final products – Pa, Pb, and Pc, where (a), (b) and (c) show components needed and production sequences to produce Pa, Pb, and Pc, respectively.]

[Bill of material matrix: rows list the final products Pa, Pb, Pc and the parts P1-P11; columns list the components P1-P11. Note: the matrix entries give the units of components needed to produce other components and/or final products.]

aggregation of labouring, material, and handling costs. One more factor to be considered is the maximum number of machines allowed for each machine-cell, from which the area a manufacturing facility will need to locate the cell can be estimated. The above-mentioned matrices and factors should be utilized to generate an efficient production plan, possibly with alternatives for unexpected changes under some circumstances. The production plans will be used in the decision-making process that responds to the users' predefined criteria, which include the demands for products, possible machine breakdowns, and changes in production costs.

Literature review
Approaches that customarily deal with cell formation problems can be briefly categorized into various methods: mathematical programming, array-based algorithms, hierarchical clustering algorithms, non-hierarchical clustering algorithms, and heuristics. Many mathematical models have recently been proposed to deal with particular versions of the cell formation problem (Cao and Chen, 2005; Chan et al., 2004; Chang et al., 2004). The advantage of mathematical programming is that the formulations are capable of considering a variety of manufacturing information, such as space limitations, alternative production sequences, and/or product demand. Harhalakis et al. (1994) and Cao and Chen (2005) represented the physical limitation of the maximum number of machines per cell by a constraint or an upper bound. Balakrishnan and Cheng (2005) proposed a two-stage method that took into consideration the rearrangement cost and product demand of multi-period planning horizons. Sofianopoulou (2006) presented an implementation of CM which was able to evaluate alternative production scenarios by data envelopment analysis (DEA). However, the balance between modelling a meticulous manufacturing system and simplifying the computational complexity is always difficult to maintain. Thus, finding comprehensive and yet feasible approaches is still a challenging research problem.

Table I. A bill of material matrix for Figure 1


Compared to mathematical programming approaches, array-based algorithms, which solve the binary MC matrix problem, are relatively efficient in terms of computational complexity and feasibility. The machine-cells and part families can be obtained simultaneously on the main diagonal of the MC matrix by rearranging the matrix, where the columns correspond to parts and the rows correspond to machines. Unlike the complex mathematical approaches, the MC matrix only provides limited binary information (i.e. zero or one for each element), so important manufacturing information, such as product demands and inter-cell material movement costs, is rarely taken into consideration in these algorithms, which can solve only binary MC matrix problems. Three kinds of approach utilize the binary MC matrix: array-based, hierarchical clustering, and non-hierarchical approaches.

Array-based methods rearrange the rows and the columns of an MC matrix in order to group the machines and the parts. An early contribution to array-based methods was made by Burbidge (1963). Array-based methods are part of the production flow analysis (PFA) procedure for the implementation of the CM system. A computational implementation of the PFA method, named GROUPTEC, and its case study have been reported by Santos and Araújo (2003). Other notable array-based methods include rank order clustering (ROC) (King, 1980), the Bond Energy Algorithm (BEA) (McCormick et al., 1972), and MODROC (Chandrasekharan and Rajagopalan, 1986b).

In contrast, hierarchical clustering methods use similarity or distance information to produce a hierarchy of clusters or partitions. Such methods are normally unable to arrange both machine-cells and part families simultaneously. Pioneering work on hierarchical clustering was proposed by McAuley (1972), and the most recent hierarchical clustering methods are GP-SLCA (Dimopoulos and Mort, 2001) and MOD-SLC (Selim et al., 2003).
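The array-based rearrangement these methods perform can be illustrated in a few lines. The sketch below follows the spirit of rank order clustering (King, 1980), repeatedly sorting rows and columns by the value of their binary patterns until the order stabilizes; the matrix is illustrative, and this is not the paper's implementation:

```python
# Rank-order-clustering-style rearrangement of a binary MC matrix:
# rows and columns are alternately sorted by their bit patterns (read
# in the current opposite-axis order) until both orders stabilize,
# pulling the 1s toward a block-diagonal form.

def roc(mc):
    rows = list(range(len(mc)))
    cols = list(range(len(mc[0])))
    while True:
        # Sort rows by their entries (in current column order), read as a binary word.
        new_rows = sorted(rows, key=lambda r: [mc[r][c] for c in cols], reverse=True)
        # Then sort columns by their entries in the new row order.
        new_cols = sorted(cols, key=lambda c: [mc[r][c] for r in new_rows], reverse=True)
        if new_rows == rows and new_cols == cols:
            return [[mc[r][c] for c in cols] for r in rows]
        rows, cols = new_rows, new_cols

# Illustrative matrix: machines 1 and 3 share parts, as do machines 0 and 2.
mc = [[0, 0, 1, 1],
      [1, 1, 0, 0],
      [0, 0, 1, 1],
      [1, 1, 0, 0]]
```

On this input, `roc(mc)` returns the rows reordered so that the two natural machine groups form diagonal blocks.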
Gupta (1991) argued that hierarchical clustering methods suffer from chaining-effect problems. Similarly, non-hierarchical clustering approaches form machine-groups and part families by using similarity and distance functions. The number of clusters is often assigned a priori for non-hierarchical clustering algorithms. Chandrasekharan and Rajagopalan (1986a) proposed an ideal seed non-hierarchical clustering (ISNC) for binary cell formation problems. Other common algorithms in this field include ZODIAC (Chandrasekharan and Rajagopalan, 1987) and GRAFICS (Srinivasan and Narendran, 1991). Miltenburg and Zhang (1991) reported a comprehensive comparison and evaluation of many known algorithms, including array-based, hierarchical clustering, and non-hierarchical clustering algorithms. Other approaches apply heuristics such as fuzzy logic (Chu and Hayya, 1991), evolutionary algorithms (Joines et al., 1996), and genetic programming (Cheng et al., 1998; Zolfaghari and Liang, 2004) to search for a feasible solution. In addition to the above-mentioned production-oriented algorithms, a hybrid manufacturing system (HMS) has been proposed to solve cell formation problems by Zolfaghari and Roa (2006). The HMS is an integration of CM and job shop. The major advantage of the HMS approach is the ability to produce non-family parts. Recently, data mining techniques, such as association rules (AR) mining, have been applied to the same research problem. Chen (2003) proposed an approach called association rule induction (ARI) by applying the Apriori algorithm (Agrawal et al., 1993) from the data mining field to group machines. Although the above-mentioned contributions are promising, the cell formation task is still challenging in actual

practice because of large-scale MC relationships and difficulties in constructing criterion functions. Therefore, a novel domain-concept association rules (DCAR) mining algorithm is developed in this paper to solve large-scale and complex cell formation problems, where factors such as operation sequences, unit inter-cell material movement costs, demand for products, production quantities, and the maximum number of machines allowed for each cell are considered for fast and accurate decision support. A domain-concept is a partition mechanism for machines based on a prioritized factor list from the complex cell formation setting. For instance, if users would like to see which machines will be grouped within a cell to produce components that are required for some products under some prioritized constraints (such as demands and a BOM), the DCAR algorithm will mine rules from machines with parts that have high demands, followed by machines with parts from a selected list in a BOM. Depending on the number of partitions in the domain, the DCAR algorithm keeps mining secondary prioritized partitions and generates extra rules, which are then utilized for cell formation by their priorities. Rules mined by the DCAR algorithm can be efficiently indexed in a database and utilized to meet the needs of decision support when unexpected changes happen. The details of the DCAR algorithm, its architecture, and its criterion function are presented in the next section.

Research methodology
As shown in Table II, the DCAR algorithm first accumulates the historical demand information (which product, with what quantity) for each product into a demand vector (D), where Di is the total demand value of Pi. The DCAR also utilizes the predefined BOM matrix, as shown in Table I, to calculate the total number of units of each product that need to be produced, as shown in the BOM row of Table II. The algorithm uses these values instead of 0's and 1's in an MC matrix.
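This BOM-driven demand calculation amounts to a standard BOM explosion: demand is multiplied down the hierarchy to obtain gross requirements for every component. A minimal sketch, where the `explode` helper is illustrative (the 3 P1 and 4 P2 per Pa follow the worked example in the text; the P2-to-P3 sub-assembly quantity is made up):

```python
# Gross component requirements from product demand via BOM explosion.
# The BOM maps each parent item to {child: units-per-parent}; walking
# the hierarchy and multiplying quantities yields the totals that
# replace the 0/1 entries of the MC matrix.
from collections import defaultdict

def explode(bom, demand):
    totals = defaultdict(int)
    def walk(item, qty):
        totals[item] += qty
        for child, per_unit in bom.get(item, {}).items():
            walk(child, qty * per_unit)  # multiply demand down one level
    for product, qty in demand.items():
        walk(product, qty)
    return dict(totals)

bom = {"Pa": {"P1": 3, "P2": 4},   # one Pa needs three P1 and four P2 (from the text)
       "P2": {"P3": 5}}            # illustrative sub-assembly quantity
print(explode(bom, {"Pa": 2}))     # -> {'Pa': 2, 'P1': 6, 'P2': 8, 'P3': 40}
```

The same traversal order also reflects the operation sequence implied by the hierarchy, which is why the paper can read sequencing information off the BOM.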
Each row of an MC matrix is a machine that is considered as a transaction to be mined for ARs by the DCAR. Each column is a component that is regarded as an attribute and may be identified as a domain-concept. Furthermore, the algorithm also accepts input matrices of the unit inter-cell material movement cost of each product (Vi) and the unit intra-cell void element cost (Ei), and then minimizes these values when forming machine-cells. It is essential to understand the basic idea of AR mining, introduced by Agrawal et al. (1993), in order to better understand the DCAR algorithm that serves as the backbone of our decision-making process. The pseudo codes of the DCAR algorithm and its procedures are shown in Figure 2. AR mining statistically finds relationships among attributes of the underlying data without prior knowledge or hypothesis. A discovered association rule X → Y tells that mutually disjoint sets of attributes X and Y co-occur, with an observed frequency of X and Y appearing in the same transactions. This frequency is called a support value. Moreover, a rule X → Y also carries the conditional probability of Y given X, which is called a confidence value. The confidence value indicates how often Y occurs when X occurs. To efficiently use the rules for real-time decision support, we developed the DCAR algorithm, which is an extension of the original AR algorithm. In general, AR mining with a domain-concept will report the associations among attributes within each domain-concept, with the support and confidence values, without considering the other criteria. The complex cell formation problem, on the


[Table II. Demand values (Di), resulting BOM calculations by applying Di to Table I values, predetermined unit inter-cell material movement costs (Vi), and predetermined unit intra-cell void element costs (Ei).]

Figure 2. Pseudocode for the DCAR algorithm and the BuildTransactions, AdjustCells, AdjustMachines, and AdjustComponents procedures:

DCAR (parameters: C, M, D, MC, BOM, V, E, max_m)
1.  Identify the x highest demand products, px, in D
2.  (T1, T2, MC) = BuildTransactions(D, px, C, M, MC, BOM)
3.  Execute AR mining to build rules of all machine pairs and obtain ART1, ART2, respectively
4.  FOR (each domain-concept dc, where dc = 1 to 2) {
5.    WHILE ((MC ≠ ∅) AND (ARTdc ≠ ∅)) DO {
6.      Form the tentative MG } }
7.  Calculate F(MG)
8.  (MG', F(MG')) = AdjustCells(MG, C, M, BOM, max_m)
9.  WHILE (F(MG') < F(MG)) DO {
10.   MG = MG'
11.   (MG', F(MG')) = AdjustCells(MG, C, M, BOM, max_m) }
12. RETURN (MG, F(MG))

BuildTransactions (parameters: D, px, C, M, MC, BOM)
1.  Set count = 1, T1 = ∅, T2 = ∅, M' = M, C' = C, D' = D
2.  FOR (ALL Dl ∈ D) {
3.    IF (px IN Dl) {
4.      FOR (ALL pj IN Dl) {
5.        Filter BOM using pj to obtain BOMj
6.        FOR (ALL Ci IN BOMj) {
7.          Identify the quantity needed for Ci. Add this number to quant.
8.          Filter MC using Ci to obtain MCi.
9.          Create Tcount by including Ci and all machines in MCi.
10.         Update M' by excluding the previously selected M in MCi
11.         Update C' by excluding the previously selected C in MCi
12.         count++ }
13.       FOR (q = 1 to quant) {
14.         Add a new transaction Tcount into T1 }
15.       Update the corresponding MC cells with quant } }
16.     ELSE
17.       D' = D' − Dl
18.  }
19. FOR (ALL Dl ∈ D') {
20.   Perform steps 4 to 17 for the machines in M' and the components in C' to build T2. }
21. RETURN (T1, T2, MC)

AdjustCells (MG, C, M, BOM, max_m)
1.  MG' = AdjustMachines(MG, M, max_m)
2.  IF (F(MG') < F(MG)) {
3.    MG' = AdjustComponents(MG', C, BOM)
4.    RETURN (MG', F(MG')) }
5.  RETURN (MG, F(MG))

AdjustMachines (MG, M, max_m)
1.  FOR (ALL Mj in M) {
2.    Identify original_cell of Mj
3.    Initialize min_cost to a large number
4.    selected_cell = 0
5.    FOR (ALL mgk in MG) {
6.      Calculate F(Mj)
7.      IF (F(Mj) < min_cost) {
8.        min_cost = F(Mj)
9.        selected_cell = k } }
10.   IF ((|mg_selected_cell| < max_m) AND (selected_cell ≠ original_cell)) {
11.     Remove Mj from mg_original_cell
12.     Assign Mj to mg_selected_cell
13.     Update MG } }
14. RETURN (MG)

AdjustComponents (MG, C, BOM)
1.  FOR (ALL Ci in C) {
2.    Identify original_cell of Ci
3.    Initialize min_cost to a large number
4.    selected_cell = 0
5.    FOR (ALL mgk in MG) {
6.      Calculate F(Ci)
7.      IF ((F(Ci) < min_cost) AND (Ci is from the same BOM as the other C in mgk)) {
8.        min_cost = F(Ci)
9.        selected_cell = k } }
10.   IF (selected_cell ≠ original_cell) {
11.     Remove Ci from mg_original_cell
12.     Assign Ci to mg_selected_cell
13.     Update MG } }
14. RETURN (MG)
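The greedy reassignment pass at the core of AdjustMachines can be sketched in executable form as follows (our own simplification: a generic cost callback stands in for the F(Mj) evaluation, and the names and toy data are hypothetical):

```python
def adjust_machines(cells, cost, max_m):
    """One AdjustMachines-style pass: move each machine to its cheapest cell.

    cells -- list of sets of machine ids (the current machine-groups)
    cost  -- cost(machine, cell_index) -> float, standing in for F(Mj)
    max_m -- maximum number of machines allowed per cell
    """
    for original_cell, cell in enumerate(cells):
        for m in list(cell):                      # snapshot: we mutate the cells
            # Evaluate the machine against every cell and keep the cheapest.
            selected_cell = min(range(len(cells)), key=lambda k: cost(m, k))
            if selected_cell != original_cell and len(cells[selected_cell]) < max_m:
                cell.remove(m)                    # as in pseudocode lines 10-13
                cells[selected_cell].add(m)
    return cells

# Toy example: M3 is cheaper to run in cell 1, so the pass moves it there.
costs = {("M1", 0): 0, ("M1", 1): 5,
         ("M2", 0): 5, ("M2", 1): 0,
         ("M3", 0): 5, ("M3", 1): 0}
cells = adjust_machines([{"M1", "M3"}, {"M2"}], lambda m, k: costs[(m, k)], max_m=3)
```

Note that, as in the pseudocode, a move is only made when the target cell is different from the machine's original cell and still has room under max_m.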

other hand, has prior knowledge that certain sets of machines should preferably be grouped together to preserve the operation sequence and the hierarchical (PART-OF) relationship among components shown in a BOM. The BOM guides the DCAR in deciding to add only related components and their associated machines into a cell. Rules are then used to form efficient cells with regard to the total inter-cell material movement costs (V) and the total intra-cell void element costs (E). Both V and E directly determine the total cost (F) of the resulting machine-groups. In this paper, we introduce a cost function F that the DCAR attempts to minimize while forming machine-cells. The F function is expected to fulfil two objectives simultaneously: minimization of the inter-cell material movement cost and maximization of the machine utilization when a new machine is added to a cell. F is defined as follows:

F(MG) = \sum_{i=1}^{|C|} (w_N V_i D_i N_i + w_G E_i G_i),   (1)

F(C_i) = \sum_{j=1}^{|M^{mg_k}|} (w_N V_{ij} D_{ij} N_j + w_G E_{ij} G_j),   (2)

F(M_j) = \sum_{i=1}^{|C^{mg_k}|} (w_N V_i D_i N_i + w_G E_i G_i),   (3)

where:
|·| = total number or cardinality;
MG = machine-group matrix that contains mg_k cells, where k is the index of machine-cells, k = 1, 2, ..., |MG|;
C_i = component, where i is the index of components, i = 1, 2, ..., |C|;
M_j = machine, where j is the index of machines, j = 1, 2, ..., |M|;
V_i = unit inter-cell material movement cost of component C_i;
V_{ij} = unit inter-cell material movement cost of component C_i at machine M_j;
D_i = demand of component C_i;
D_{ij} = demand of component C_i at machine M_j;
N_i = number of inter-cell material movements of component C_i, where

N_i = \sum_{k=1}^{|MG|} \sum_{j=1}^{|M|} o_{ij} \times (1 - q_{jk});   (4)

N_j = number of inter-cell material movements at machine M_j, where

N_j = \sum_{k=1}^{|MG|} \sum_{i=1}^{|C|} o_{ij} \times (1 - q_{jk});   (5)

E_i = unit intra-cell void element cost of component C_i;
E_{ij} = unit intra-cell void element cost of component C_i at machine M_j;
G_i = number of void elements of component C_i, where

G_i = \sum_{k=1}^{|MG|} \sum_{j=1}^{|M|} (1 - o_{ij}) \times q_{jk};   (6)

G_j = number of void elements at machine M_j, where

G_j = \sum_{k=1}^{|MG|} \sum_{i=1}^{|C|} (1 - o_{ij}) \times q_{jk};   (7)

w_N = weight of inter-cell material movement cost and w_G = weight of intra-cell cost of void elements, with

w_N + w_G = 1;   (8)

o_{ij} = 1 when component C_i is produced on machine M_j, and 0 otherwise;
q_{jk} = 1 when machine M_j is assigned to machine-group mg_k, and 0 otherwise.

The F(MG), F(Ci), and F(Mj) functions, as shown in equations (1)-(3), represent the costs incurred when we form machine-cells. F(MG) calculates the total cost of the entire MG matrix. Equations (2) and (3) calculate the inter-cell and intra-cell material movement costs for only a portion of the MG matrix. In equation (2), F(Ci) calculates the costs with respect to the cell mgk: it sums the costs incurred for each machine (a row in the MC matrix) if it is assigned to the cell mgk, where |M^mgk| is the total number of machines in the mgk cell. In equation (3), F(Mj) computes the costs similarly, except that it adds up the costs for each component (a column in the MC matrix) assigned to the cell mgk, where |C^mgk| is the total number of components in the mgk cell.

The aforementioned functions are composed of two terms: the inter-cell material movement cost and the intra-cell cost of void elements. The first term of each F function is the weighted summation of the inter-cell material movement cost (VDN). The inter-cell material movement cost is often considered an important measurement for evaluating a CM system. The product demand (D) is included in the computation in order to obtain a practical machine-group arrangement. To compute the inter-cell material movements (N), we apply equations (4) and (5), which capture the inter-cell material movements, i.e. the non-zero elements outside the diagonal cells. The second term of each F function is the weighted intra-cell cost of void elements (G) over all components Ci. A void element is an empty (zero) element inside a diagonal cell. The density of each cell is considered a significant indicator of the efficiency of a cell formation solution: the higher the density of a cell, the better the cell formation. Therefore, minimizing the second term can improve machine utilization. Equations (6) and (7) calculate the number of void elements for the corresponding column (component) i and row (machine) j in the MC matrix. For the experiments conducted in this research, we weight both terms equally, wN = wG = 0.5.
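To make the cost computation concrete, the following sketch evaluates F(MG) by a direct transcription of equations (1), (4) and (6); the function and variable names are ours, and the tiny example data is invented purely for illustration:

```python
def total_cost(o, q, V, D, E, wN=0.5, wG=0.5):
    """F(MG) per equations (1), (4) and (6) -- a direct transcription.

    o[i][j] = 1 if component i is produced on machine j, else 0
    q[j][k] = 1 if machine j is assigned to machine-group k, else 0
    V, D, E -- per-component unit movement cost, demand, and void-element cost
    """
    n_groups = len(q[0])
    total = 0.0
    for i, row in enumerate(o):
        # Eq. (4): inter-cell material movements of component i.
        N_i = sum(row[j] * (1 - q[j][k])
                  for k in range(n_groups) for j in range(len(row)))
        # Eq. (6): void elements of component i.
        G_i = sum((1 - row[j]) * q[j][k]
                  for k in range(n_groups) for j in range(len(row)))
        total += wN * V[i] * D[i] * N_i + wG * E[i] * G_i
    return total

o = [[1, 0], [0, 1]]          # two components, two machines
q = [[1, 0], [0, 1]]          # machine 0 in group 0, machine 1 in group 1
f = total_cost(o, q, V=[1, 1], D=[1, 1], E=[1, 1])   # -> 2.0
```

The double sums are written exactly as in equations (4) and (6), so each term can be checked against the formulas above.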
Identifying the machines to be grouped in a cell is an optimization problem. The DCAR initializes machine-groups by employing the algorithm proposed by Chen (2003), which builds AR from all pairs of machines, places machines into cells according to their highest support values, and places each part into the cell with the maximum number of operations between the part and the machines in the cell. To reduce the chance of obtaining the local optima that are usually associated with greedy algorithms (Cormen et al., 1998), the DCAR selects an MC to be in a cell so as to minimize the overall F as well as to maximize the grouping efficacy, by incorporating the idea of re-evaluating a criterion function from the sequential forward floating selection (SFFS) algorithm (Pudil et al., 1994). The SFFS is a selection procedure that repeatedly includes or excludes features (machines and components) by evaluating a criterion function (F) when it forms a new set of features. By following the idea of SFFS, the DCAR is able to iteratively adjust the machines and components in the currently formed machine-groups, through the AdjustCells, AdjustMachines, and AdjustComponents procedures, to improve the total cost. Figure 3 shows the flowchart of the DCAR algorithm. The input parameters for the DCAR algorithm are as follows: C is a set of components, where Ci ∈ C and i = 1, 2, ..., |C|; M is a set of machines, where Mj ∈ M and j = 1, 2, ..., |M|; D is a demand vector, where Dl ∈ D and l = 1, 2, ..., |D|; MC is a machine-component matrix; BOM is a matrix that represents the BOM structure; V is a unit inter-cell material movement cost matrix, where V incurs costs to the total cost (F) only when the Ni or Nj value is 1; E is an intra-cell void element cost matrix, where E incurs costs to F only when the Gi or Gj value is 1 (details of the cost calculation are in equations (1)-(3)); max_m is an integer indicating the maximum number of machines

[Figure 3. The flowchart of the DCAR algorithm. The demand history matrix is summarized to build the demand vector (D), where each cell is the total demand for a product; the highest demand product is identified as a domain-concept; BuildTransactions splits the data into domain-concept transactions (T1) and other transactions (T2); bottom-up association rules mining on T1 and T2 yields the rules used to form the tentative machine-groups (MG) matrix and calculate the cost F(MG); AdjustCells (AdjustMachines followed by AdjustComponents), driven by the BOM matrix, the machine-component matrix (MC), the unit inter-cell material movement cost matrix (V), and the unit intra-cell void element cost matrix (E), iterates while F(MG') < F(MG); the outputs are the machine-groups matrix (MG) and the total cost F(MG).]

allowed for each cell. Let AR be a set of association rules; ARTdc be a set of AR from a transaction matrix Tdc, where dc = 1 indicates domain-concept based transactions and dc = 2 indicates other transactions (with a setting of two partitions); and MG be a machine-group matrix that contains mgk groups, k = 1, 2, ..., |MG|. The outputs of the DCAR algorithm are MG and F(MG). At line 1 of the DCAR algorithm, the x highest demand products, px, are identified as domain-concepts. For each px, the DCAR algorithm calls BuildTransactions to generate two transaction matrices: T1 (the group of MC transactions that belong to the domain-concept px) and T2 (the group of other MC transactions). Please note that a flexible domain-concept setting with any number of partitions could be used. Lines 2 to 18 of the BuildTransactions algorithm separate the transactions that are derived from the demands (D) to build T1, where line 7 shows that each demand contains components (Ci ∈ C) and their quantities (quant). The variable quant is used at line 17 to update and build a non-binary MC matrix that reflects the quantity of each MCij. At line 18, BuildTransactions updates the unselected demands (D'). This D' is used when the algorithm builds T2 at lines 19-20 by performing steps 4 to 17 using M' (machines that are not associated with px), C' (components that are not associated with px), and D'. BuildTransactions terminates at line 21 and returns T1, T2, and the non-binary MC to the DCAR. After the results of BuildTransactions have been returned, the DCAR continues with the AR mining process at line 3. The DCAR then extracts two sets of AR, one for T1 and another for T2, where each association rule contains two machines.
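For illustration, the support and confidence of such a machine-pair rule can be computed from a transaction list as follows (a toy sketch; the function name and the sample transactions are ours, not from the paper):

```python
def support_confidence(transactions, X, Y):
    """Support and confidence of the rule X -> Y over a list of transactions.

    support(X -> Y)    = fraction of transactions containing both X and Y
    confidence(X -> Y) = P(Y | X) = count(X and Y) / count(X)
    """
    X, Y = set(X), set(Y)
    both = sum(1 for t in transactions if (X | Y) <= set(t))
    has_x = sum(1 for t in transactions if X <= set(t))
    support = both / len(transactions)
    confidence = both / has_x if has_x else 0.0
    return support, confidence

# Toy transactions: the machines visited while producing each component unit.
T = [{"M1", "M2", "M3"}, {"M1", "M2"}, {"M2", "M3"}, {"M1", "M2", "M3"}]
s, c = support_confidence(T, {"M1"}, {"M2"})   # support 0.75, confidence 1.0
```

Here {M1} and {M2} co-occur in three of the four transactions (support 0.75), and every transaction containing M1 also contains M2 (confidence 1.0).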
From lines 4 to 6, the DCAR forms a tentative machine-group (MG) by incorporating Chen's (2003) approach, which first places machines into cells based on the support values and then places components into cells based on the number of operations between components and cells. Moreover, the DCAR also maintains the following: (1) placing machines that come from the set of rules of the domain-concept, ARTdc, before placing the rest of the machines; and (2) arranging components that come from the same BOM sub-structure. At line 7, the DCAR calculates the cost F(MG) of its tentative cell formation MG. From lines 8 to 11, the algorithm iteratively runs AdjustCells to obtain a stabilized cell formation. The DCAR finally returns the final machine-groups MG and the cost F(MG) at line 12. AdjustCells is an evaluation process over machines, using the F(Mj) function through the AdjustMachines sub-procedure, and over components, using the F(Ci) function through the AdjustComponents sub-procedure. AdjustMachines and AdjustComponents work similarly. The former re-evaluates each machine and reassigns it to the cell that incurs the minimum cost, with the criterion that the size of the newly selected machine-group, mg_selected_cell, does not exceed the maximum number of machines allowed for a cell (as indicated by max_m). The latter re-evaluates each component and reassigns it to the cell that incurs the minimum cost, with the criterion that the component is from the same BOM sub-structure as the others in the cell. Both sub-procedures update and return MG to AdjustCells. More detailed explanations follow. The AdjustCells procedure starts by executing its sub-procedure, AdjustMachines, to evaluate each machine, Mj, against the currently assigned components, Ci, of each machine-group, mgk, in MG. In AdjustMachines, all machines are evaluated and reassigned to other cells if the cost F(Mj) is reduced. AdjustMachines returns an

updated machine-group matrix to AdjustCells, which then checks at line 2 whether the new machine-groups matrix (MG') improves on the previous machine-cells (MG). If the cost is improved, AdjustCells executes the AdjustComponents sub-procedure at line 3, which evaluates each component, Ci, against the machines, Mj, of each machine-group, mgk, in MG. However, if there is no improvement in cost, AdjustCells terminates without executing AdjustComponents and returns the original MG and the cost F(MG) to the DCAR at line 5.

Results, analysis and discussions
The experiments were conducted on a standard server with an Intel Xeon IV 2.40 GHz CPU and 1 gigabyte of memory. The DCAR program and its modules are written in the Java programming language (JDK 1.5). Two experiments were conducted to evaluate the DCAR algorithm. The first experiment demonstrates that the DCAR algorithm is able to produce results comparable to existing methods on binary data sets, using a single domain setting without constraints. This experiment was conducted on 20 data sets to demonstrate the grouping efficacy. On average, the computation time for this data collection is approximately 0.153 seconds. The second experiment uses a randomly generated MC matrix with dimensions of 200 machines and 2,000 components, where each value in the matrix represents a multiplied value MCij × Vi that can be any positive number rather than 0 or 1. This data set also includes other input parameters: a product demand vector (D) and a BOM matrix that is generated with two criteria, a maximum fan-out of 20 and a maximum height of 6 for each component path. Owing to space limitations, only a subset of the second collection, with dimensions of 25 machines and 14 components and utilizing the BOM structure from Figure 1 and Table I, is shown in this section.
The average computation time for the experiments using the second data collection is about 6 minutes. The grouping efficacy measure G, introduced by Kumar and Chandrasekharan (1990), is used to evaluate the experimental results of the proposed DCAR algorithm and to compare them with other approaches. The G formula is as follows:

G = 1 - \frac{e_0 + e_v}{e + e_v} = \frac{e - e_0}{e + e_v},   (9)

where e is the total number of non-zero cells in the matrix, e_v is the total number of zero cells inside machine-groups (meaning that no component is produced by the particular machine), and e_0 is the total number of non-zero cells outside the machine-groups (meaning that the component has to be transported among machine-groups). An ideal grouping result has G = 1. Table III shows the grouping efficacy (G) values from the experiments conducted using the known binary MC matrices, but without constraints such as BOM and product demands. The DCAR algorithm takes a binary MC matrix and a maximum number of machines allowed for each cell as its input parameters. The algorithm reports the G values of the tentative cell formations, the iterations, and the final cell formations. Each iteration involves a series of machine movements and component rearrangements. Table IV shows the G values of the same experiments, comparing the DCAR with other approaches. The DCAR approach has results comparable to the ARI and the GP-SLCA. However, neither the ARI nor the GP-SLCA provides a flexible mechanism to take BOM and demands into consideration.
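The grouping efficacy measure can be computed directly from a binary MC matrix and a cell assignment, as in this sketch (the function and argument names are ours):

```python
def grouping_efficacy(mc, machine_cell, component_cell):
    """Grouping efficacy per equation (9): G = (e - e0) / (e + ev).

    mc             -- binary machine-component matrix (rows are machines)
    machine_cell   -- machine_cell[j] is the cell index of machine j
    component_cell -- component_cell[i] is the cell index of component i
    """
    e = e0 = ev = 0
    for j, row in enumerate(mc):
        for i, val in enumerate(row):
            inside = machine_cell[j] == component_cell[i]
            if val:
                e += 1
                if not inside:
                    e0 += 1    # exceptional element: inter-cell movement
            elif inside:
                ev += 1        # void element inside a diagonal block
    return (e - e0) / (e + ev)

# A perfect block-diagonal grouping scores 1.0.
mc = [[1, 1, 0, 0],
      [1, 1, 0, 0],
      [0, 0, 1, 1],
      [0, 0, 1, 1]]
g = grouping_efficacy(mc, [0, 0, 1, 1], [0, 0, 1, 1])   # -> 1.0
```

Any non-zero element outside a block lowers G through e0, and any zero inside a block lowers it through ev, matching the discussion of void elements above.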


[Table III. The details of the DCAR experiments using known binary MC matrices. For each of the 20 data sets — Boctor (1991)-1 to -10, Boe and Cheng (1991), Burbidge (1975), Carrie (1973), Chandrasekharan and Rajagopalan (1987), Chandrasekharan and Rajagopalan (1986a), Chandrasekharan and Rajagopalan (1989)-1, -2, -3 and -5, and Seifoddini (1989) — the table lists the matrix size, the total number of non-zero cells (e), the G values of the tentative MG, of iterations 1 and 2, and of the final MG, and the final MG's characteristics: the maximum number of machines in a cell, the number of cells, and the computation time in milliseconds. The final-MG G values are repeated in the DCAR column of Table IV.]

No.  DCAR   ARI    GP-SLCA  ZODIAC  GRAFICS  MST-GRAFICS  MST    GA-TSP  SLINK  ALINK
1    0.492  n/a    0.509    0.349   0.481    0.447        n/a    n/a     n/a    n/a
2    0.609  0.571  0.618    0.586   0.534    0.508        n/a    n/a     n/a    n/a
3    0.700  0.708  0.700§   0.686   0.675    0.644        n/a    n/a     n/a    n/a
4    0.462  0.478  0.496    0.267   0.449    0.407        n/a    n/a     n/a    n/a
5    0.727  0.727  0.727§   0.727   0.691    0.727        n/a    n/a     n/a    n/a
6    0.766  0.766  0.782    0.764   0.771    0.760        n/a    n/a     n/a    n/a
7    0.732  n/a    n/a      n/a     n/a      n/a          n/a    n/a     n/a    n/a
8    0.579  0.579  0.774§   0.320   0.579    0.530        n/a    n/a     n/a    n/a
9    0.774  0.774  n/a      0.774   0.774    n/a          n/a    n/a     n/a    n/a
10   0.638  n/a    n/a      n/a     n/a      n/a          n/a    n/a     n/a    n/a
11   0.556  0.527  0.568    0.511   n/a      0.471        n/a    0.551   n/a    n/a
12   0.561  0.549  0.568§   0.538   0.544    n/a          n/a    0.539   0.544  0.483
13   0.757  0.751  0.767    0.751   0.751    n/a          n/a    0.753   n/a    n/a
14   0.840  0.842  0.840§   0.839   0.839    n/a          0.831  0.840   n/a    n/a
15   0.852  0.852  0.852§   0.852   0.852    n/a          0.852  0.852   n/a    n/a
16   1.000  1.000  1.000§   1.000   1.000    n/a          1.000  1.000   n/a    n/a
17   0.851  0.851  0.851§   0.851   0.851    n/a          0.851  0.851   n/a    n/a
18   0.735  0.735  0.735§   0.730   0.735    n/a          0.730  n/a     n/a    n/a
19   0.455  0.520  0.479    0.204   0.433    0.466        n/a    0.494   n/a    n/a
20   0.731  0.742  0.731§   0.731   0.731    n/a          n/a    n/a     0.522  0.720

Note: § denotes that the resulting machine-groups contain one or more singletons, where a singleton is a machine-cell that has only one machine

Table IV. The grouping efficacy (G) values as the experimental results comparisons among various approaches using known binary MC matrices

Table V shows the resulting G values and F values of the machine-group matrices for three settings when applying the DCAR algorithm to the subset of the data in the second experiment. Each setting has a different maximum number of machines per cell. For all settings, we execute the DCAR with the complex constraints. Please note that the demand values (Di) are randomly generated and are associated with the BOM shown in Figure 1 and the parameters listed in Table II. The resulting G values and F values consistently agree, i.e. the setting that results in a better (higher) G value also has a better (lower) F value. As shown in Table VI, the DCAR generates an MG matrix with three machine-cells that optimizes the total cost (F) under the max_m constraint of 11. The F value calculated by the DCAR is 54,506.5, and the G value obtained from the experiment with this setting is 0.326. However, it is important to mention that the low G value is a result of four factors:

     2 machine-groups   3 machine-groups   4 machine-groups
     (max_m = 13)       (max_m = 11)       (max_m = 9)
G    0.192              0.326              0.347
F    86,275.5           54,506.5           50,390.5

Table V. The resulting grouping efficacy (G) values and total cost (F) values when applying the DCAR algorithm to the subset of the 200 × 2,000 data set in various settings

[Table VI. MG grouping experimental results with three groups of machines (max_m = 11) from a MC matrix of size 25 × 14, a subset of the 200 × 2,000 data set. The last five rows of the table show the resulting unit inter-cell material movement costs (Vi), the number of inter-cell material movements of component i (Ni), the unit intra-cell void element costs (Ei), the number of void elements of component i (Gi), and the total costs of each component (Fi). The total cost of the whole MG (F) is 54,506.5.]

(1) Fully applying the DCAR algorithm with the objective of rearranging the MC matrix in favour of the highest demand products as domain-concepts.
(2) Rearranging the MC matrix under the restriction of the given BOM.
(3) Allowing only MCs that minimize the cost (F) to be added into cells.
(4) The high sparseness of the generated data set.
Table VII directly compares its MG results with those of Table VI. Here the DCAR generates an MG matrix with four machine-cells that optimizes the total cost (F) under the max_m constraint of 9. This experiment results in an F value of 50,390.5 and a G value of 0.347, both of which indicate that the resulting MG matrix in Table VII is better than the one in Table VI. If there is no max_m constraint, the DCAR

[Table VII: the MG grouping matrix with four machine-groups (max_m = 9); as in Table VI, the last five rows give Vi, Ni, Ei, Gi, and Fi for each component. Note: the total cost of the whole MG (F) is 50,390.5.]

algorithm will execute all possible settings (2 ≤ max_m ≤ |M|) and select the lowest cost for the final formation. In conclusion, from both collections of experiments discussed in this section, the DCAR algorithm has demonstrated that it not only generates comparable grouping efficacy results when applied to the documented data sets, but also possesses the advantages of flexibility, efficiency, and applicability for large-scale and complex CM settings, optimizing costs while maintaining the production requirements based on a given BOM.

Conclusions and future work
The formation of CM is an indispensable procedure for the implementation of FMSs. The proposed DCAR algorithm provides an effective method for such a task. To solve large-scale and complex cell formation problems, the DCAR applies an AR approach with a consideration of real-world factors, which include the MC relationships, the demands for the products, the inter-cell material movement costs, and the intra-cell void element costs. The DCAR forms manufacturing cells by

Table VII. MG grouping experimental results with four groups of machines (max_m = 9) using the same data set as Table VI


grouping the machines and parts according to their associations and relationships, while balancing the highest possible interaction within cells against the lowest inter-cell movements. From the experimental results, we found that the DCAR algorithm can be used to solve various cell formation problems with results that are at least as efficient as other approaches in terms of grouping efficacy values on binary MC matrices. Moreover, the DCAR algorithm has the following main advantages over the other approaches:
. The ability to handle more parameters than just the co-occurrence of MC in terms of binary matrices.
. The ability to efficiently handle larger data sets.
. The capability to optimize machine-groups according to the criterion function while deciding whether to add a machine into a group.
. The ability to allow machine-cells to be re-evaluated by a series of inclusion and exclusion processes to improve a preset criterion value.
Cell rearrangement may be required in various manufacturing situations, such as machine breakdown and machine/part inclusion after the machine-group matrix has been determined. In practice, the DCAR algorithm is capable of dealing with cell rearrangement because it can regenerate the cell formation quickly. System managers can thus use this information to make an appropriate decision. Moreover, since the DCAR algorithm includes an extensive set of manufacturing parameters, the resulting cell formation is more practical. Further improvements to this ongoing research include the following:
. Assisting decision-makers in the definition and validation of BOMs, since there are currently no restrictions on PART-OF relationship structures.
. Prioritizing and weighting the influences of the BOM, the demands, the inter-cell movement costs, and the intra-cell void element costs in the DCAR algorithm.
. Adding the ability to give an incentive when grouping machine-cells based on the BOM and to assign a penalty otherwise.
. Adding the ability to collect production information as feedback and to use this information to further improve machine groupings.

References
Agrawal, R., Imieliński, T. and Swami, A. (1993), “Mining association rules between sets of items in large databases”, Proceedings of the 1993 ACM-SIGMOD International Conference on Management of Data, pp. 207-16.
Askin, R.G. and Standridge, C.R. (1993), Modeling and Analysis of Manufacturing Systems, Wiley, New York, NY.
Balakrishnan, J. and Cheng, C.H. (2005), “Dynamic cellular manufacturing under multiperiod planning horizons”, Journal of Manufacturing Technology Management, Vol. 16 No. 5, pp. 516-30.
Boctor, F.F. (1991), “A linear formulation of the machine-part cell formation problem”, International Journal of Production Research, Vol. 29 No. 2, pp. 343-56.
Boe, W.J. and Cheng, C.H. (1991), “A close neighbour algorithm for designing cellular manufacturing systems”, International Journal of Production Research, Vol. 29 No. 10, pp. 2097-116.
Burbidge, J.L. (1963), “Production flow analysis”, Production Engineering, Vol. 42, pp. 742-52.
Burbidge, J.L. (1975), The Introduction of Group Technology, Wiley, New York, NY.
Cao, D. and Chen, M. (2005), “A robust cell formation approach for varying product demands”, International Journal of Production Research, Vol. 43 No. 8, pp. 1587-605.
Carrie, A.S. (1973), “Numerical taxonomy applied to group technology and plant layout”, International Journal of Production Research, Vol. 11 No. 4, pp. 399-416.
Chandrasekharan, M.P. and Rajagopalan, R. (1986a), “An ideal seed non-hierarchical clustering algorithm for cellular manufacturing”, International Journal of Production Research, Vol. 24 No. 2, pp. 451-64.
Chandrasekharan, M.P. and Rajagopalan, R. (1986b), “MODROC: an extension of rank order clustering for group technology”, International Journal of Production Research, Vol. 24 No. 5, pp. 1221-33.
Chandrasekharan, M.P. and Rajagopalan, R. (1987), “ZODIAC – an algorithm for concurrent formation of part families and machine-cells”, International Journal of Production Research, Vol. 25 No. 6, pp. 835-50.
Chandrasekharan, M.P. and Rajagopalan, R. (1989), “GROUPABILITY: an analysis of the properties of binary data matrices for group technology”, International Journal of Production Research, Vol. 27 No. 6, pp. 1035-52.
Chan, F.T.S., Lau, K.W. and Chan, P.L.Y. (2004), “A holistic approach to manufacturing cell formation: incorporation of machine flexibility and machine aggregation”, Proceedings of the Institution of Mechanical Engineers, Part B: Engineering Manufacture, Vol. 218 No. 10, pp. 1279-96.
Chang, O.K., Baek, J-G. and Baek, J-K. (2004), “A two-phase heuristic algorithm for cell formation problems considering alternative part routes and machine sequences”, International Journal of Production Research, Vol. 42 No. 18, pp. 3911-27.
Chen, M.C. (2003), “Configuration of cellular manufacturing systems using association rule induction”, International Journal of Production Research, Vol. 41 No. 2, pp. 381-95.
Cheng, C.H., Gupta, Y.P., Lee, W.H. and Wong, K.F. (1998), “A TSP-based heuristic for forming machine groups and part families”, International Journal of Production Research, Vol. 36 No. 5, pp. 1325-37.
Chu, C-H. and Hayya, J.C. (1991), “A fuzzy-clustering approach to manufacturing cell formation”, International Journal of Production Research, Vol. 29 No. 7, pp. 1475-87.
Cormen, T.H., Leiserson, C.E., Rivest, R.L. and Stein, C. (1998), “Greedy algorithms”, Introduction to Algorithms, 2nd ed., McGraw-Hill, St Louis, MO, pp. 329-55.
Dimopoulos, C. and Mort, N. (2001), “A hierarchical clustering methodology based on genetic programming for the solution of simple cell-formation problems”, International Journal of Production Research, Vol. 39 No. 1, pp. 1-19.
Gupta, T. (1991), “Clustering algorithms for the design of a cellular manufacturing system – an analysis of their performance”, Computers & Industrial Engineering, Vol. 20 No. 4, pp. 461-8.
Harhalakis, G., Ioannou, G., Minis, I. and Nagi, R. (1994), “Manufacturing cell formation under random product demand”, International Journal of Production Research, Vol. 32 No. 1, pp. 47-64.


Joines, J.A., Culbreth, C.T. and King, R.E. (1996), “Manufacturing cell design: an integer programming model employing genetic algorithms”, IIE Transactions, Vol. 28 No. 1, pp. 69-85.
King, J.R. (1980), “Machine-component grouping in production flow analysis: an approach using a rank order clustering algorithm”, International Journal of Production Research, Vol. 18 No. 2, pp. 117-33.
Kumar, C.S. and Chandrasekharan, M.P. (1990), “Grouping efficacy: a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology”, International Journal of Production Research, Vol. 28 No. 2, pp. 603-12.
McAuley, J. (1972), “Machine grouping for efficient production”, Production Engineering, Vol. 51, pp. 53-7.
McCormick, W.T., Schweitzer, R.J. and White, T.W. (1972), “Problem decomposition and data reorganization by clustering techniques”, Operations Research, Vol. 20 No. 5, pp. 993-1009.
Miltenburg, J. and Zhang, W. (1991), “A comparative evaluation of nine well-known algorithms for solving the cell formation problem in group technology”, Journal of Operations Management, Vol. 10 No. 1, pp. 44-72.
Pudil, P., Novovičová, J. and Kittler, J. (1994), “Floating search methods in feature selection”, Pattern Recognition Letters, Vol. 15, pp. 1119-25.
Santos, N.R. dos and Araújo, L.O. de Jr (2003), “Computational system for group technology – PFA case study”, Integrated Manufacturing Systems, Vol. 14 No. 2, pp. 138-52.
Seifoddini, H. (1989), “Single linkage vs average linkage clustering in machine cells formation application”, Computers & Industrial Engineering, Vol. 16 No. 3, pp. 419-26.
Selim, H.M., Abdel Aal, R.M.S. and Mahdi, A.I. (2003), “Formation of machine group and part families: a modified SLC method and comparative study”, Integrated Manufacturing Systems, Vol. 14 No. 2, pp. 123-37.
Sofianopoulou, S. (2006), “Manufacturing cells efficiency evaluation using data envelopment analysis”, Journal of Manufacturing Technology Management, Vol. 17 No. 2, pp. 224-38.
Srinivasan, G. and Narendran, T.T. (1991), “GRAFICS – a non-hierarchical clustering algorithm for group technology”, International Journal of Production Research, Vol. 29 No. 3, pp. 463-78.
Zolfaghari, S. and Liang, M. (2004), “Comprehensive machine cell/part family formation using genetic algorithm”, Journal of Manufacturing Technology Management, Vol. 15 No. 6, pp. 433-44.
Zolfaghari, S. and Roa, E.V.L. (2006), “Cellular manufacturing versus a hybrid system: a comparative study”, Journal of Manufacturing Technology Management, Vol. 17 No. 7, pp. 942-61.

Corresponding author
Chi-Ren Shyu can be contacted at: [email protected]

To purchase reprints of this article please e-mail: [email protected] Or visit our web site for further details: www.emeraldinsight.com/reprints

The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-038X.htm

Plug and play (PnP) modelling approach to throughput analysis


Kim Hua Tan Nottingham University Business School, University of Nottingham, Nottingham, UK, and

James Noble Department of Industrial and Manufacturing Systems Engineering, University of Missouri, Columbia, Missouri, USA

Received September 2006; Revised January 2007; Accepted April 2007

Abstract
Purpose – The purpose of this paper is to propose a “plug and play” (PnP) approach to decision modelling: an approach to building models from components, in a “LEGO block” style of manufacturing simulation and analysis. The objective is to present ideas central to PnP modelling for throughput analysis.
Design/methodology/approach – Firstly, the PnP concept is introduced and the proposed framework is described. Different techniques adopted in the framework are then explained, and their applications are illustrated using a case example on manufacturing throughput analysis. Finally, the implications of this research are discussed and plans for further work are outlined.
Findings – The proposed PnP approach to rapid decision modelling proved capable of supporting two important goals: productivity improvement, ensuring that managers can rapidly build up models; and increased communication and decision-support efficiency, ensuring that those who should be using the models can and will use them.
Research limitations/implications – This research has so far introduced, described, and explained the PnP approach. The idea of PnP support for manufacturing decision modelling is new and not yet well developed. It is recognised that a number of additional issues must be addressed before the proposed PnP approach is ready to make a practical impact.
Originality/value – The paper introduces a PnP approach to decision modelling.
Keywords Operations and production management, Operations management, Process planning, Modelling
Paper type Research paper

Introduction
In today’s dynamic and competitive manufacturing environment, making decisions based on “gut feel” is no longer adequate. Manufacturing operations have become complex, embodying many of the latest technologies and techniques such as lean production, six sigma, and the theory of constraints. Implementing a change such as introducing a new product can involve a wide range of difficult choices among alternative actions, covering decisions in both structural (capacity, facilities, technology, vertical integration) and operational/infrastructural (quality, production planning, organisation, workforce policies, performance measurement) areas (Hayes and Wheelwright, 1984). It is becoming increasingly difficult to predict the outcomes of decisions and actions with any degree of confidence. Formal modelling approaches are needed to better inform managers and to assist their decision-making process.

A model is an abstraction of reality. However, the process of developing valid models often results in overly complex models that are difficult to understand. Managers would prefer simple models to make sense of complex manufacturing operations (Solberg, 1992). Simple models are difficult to develop, however, so simulation models are most often built for complex manufacturing situations. Simulation packages contain features that enable managers to build visual models to experiment with, in order to study the best possible solution to a problem. Regardless of the type of model, models are vital for managers to get early feedback on actions or decisions such as building new plants. For example, the ability to ramp up a plant for a new pharmaceutical product and bring it to market quickly can help companies gain longer revenues from sales before the product patent expires.

However, the modelling software available today has drawbacks which significantly reduce its value in decision support. Too often the actual costs of designing, building, implementing, and managing models outweigh their purported benefits. The reality of modern manufacturing is that managers need to make decisions, such as investing in a new technology or a plant, under severe time pressure. Decisions which need to be made in a couple of days may take several weeks to model and analyse using a simulation package (Suri and Tomsicek, 1988; Tan et al., 2006). Thus, there is a need for approaches that allow managers to rapidly build up models and perform analysis to support their decisions.

The corresponding author would like to thank the University of Nottingham Research Committee New Lecturers Fund for the support of this research.

Journal of Manufacturing Technology Management
Vol. 18 No. 7, 2007, pp. 807-817
© Emerald Group Publishing Limited 1741-038X
DOI 10.1108/17410380710817264
The need for rapid modelling approaches also arises from the increased flexibility that businesses demand of themselves and, consequently, of their modelling capabilities. Typical business imperatives like productivity, quality, time to market, and the ability to adapt to changes are issues that companies need to address constantly. A cost-effective and timely analysis of manufacturing decisions can lead to an enhanced competitive position. Moreover, if models can be made more accessible through techniques of simplification and visualisation, without undermining their fundamental validity, they stand a better chance of being used by managers. Today, in many companies, models are built by a team of people over a long period of time. Hence, there is a need for a rapid “plug and play” (PnP) modelling framework – a way of building models from components: a “LEGO block” approach to manufacturing simulation and analysis. A PnP framework consists of “basic” units of decision components (modules) that allow managers to rapidly build up a model.

For many years researchers have been looking at the possibility of a rapid modelling approach to decision support (Suri, 1988; Nymon, 1987; Brown, 1988; Peikert et al., 1998). Suri and Tomsicek (1988) developed a set of five compatible tools for rapid modelling and analysis of discrete manufacturing systems. These tools are tightly integrated to allow managers to build models in a short span of time, especially for production lead-time analysis. Peikert et al. (1998) proposed a methodology for quickly investigating problem areas in semiconductor wafer fabrication factories by creating a model of the production area of interest only (as opposed to a model of the complete factory operation). Although these research efforts have been tailored for specific

factory operations and applications, by and large they have shown the possibility of rapid modelling for decision support. Despite these research efforts, however, the research community has not yet offered managers a practical and actionable PnP approach to decision modelling. This paper proposes such an approach; its objective is to present ideas central to PnP modelling. Firstly, the PnP concept is introduced and the proposed framework is described. The different techniques adopted in the framework are then explained, and their applications are illustrated using a case example on manufacturing throughput analysis. Finally, the implications of this research are discussed and plans for further work are outlined.


The plug and play concept
The term “plug and play” is taken from software engineering practice, which emphasises component-based and architecture-driven software development (Bronsard et al., 1997). While the PnP concept in software engineering aims for scalability in design, the main idea behind the PnP approach to decision modelling is to provide the capability for managers to quickly configure a model that closely resembles both the decision-making framework and the decision objectives under which their company functions (Figure 1). The two key issues of a PnP approach to decision modelling are:
(1) components; and
(2) framework.

PnP components (modules)
Using the analogy of the “LEGO block” concept, a PnP approach to modelling requires a set of “basic” decision modules that managers can quickly build on. The idea is to create theoretical models for manufacturing objectives like capacity, quality, flexibility, and throughput from a set of basic decision units. For example, the basic decision units for a capacity module will comprise labour availability, processing time, setup time, scrap, scheduling/sequencing, machine reliability, shift length, number of shifts, etc. Using a database of over 200 variables, Burbidge (1984) advocated a connectance model for production management. Through the development of TAPS, Tan and Platts (2003a, b, 2004) and Tan et al. (2004) have revived the database and made it suitable for manufacturing decision support, as well as a tool for research and development. To date, this production variables database is the most comprehensive available in the literature. Having investigated the database and TAPS, we believe that the

Figure 1. The different foci of PnP concept



variables in the database and TAPS could be further developed to serve as a platform for a PnP rapid modelling approach supporting manufacturing objective deployment. To start with, each variable in the database would be treated as a basic decision “unit” (element) in a model. Serving as basic building blocks (like those in LEGO), these decision elements can be used to construct various models for manufacturing decision analysis. For example, decision units for a capacity model could consist of processing time, setup time, scrap, etc. We propose four main basic PnP modules based on the existing objectives, namely cost, quality, flexibility, and throughput (Figure 2). The PnP model for each main module will be structured according to the four-level network structure of TAPS (Tan and Platts, 2003a). Thus, each main module will consist of the basic decision units that influence the objective of that module. As the manufacturing objectives are highly aggregated, we propose to break the main modules down into sub-modules. For example, under the “Throughput” main module there will be sub-modules such as “capacity”, which comprises the basic decision units – processing time, setup time, machine reliability, shift length, number of shifts, etc.

PnP framework
There also needs to be a blueprint describing the module composition: a framework is needed to guide how all the elements are integrated. There will most likely still be a need for managers to customize the basic modules to allow further refinement of a model. However, there must be limits to what can be modified, to make sure the process does not become overly time-consuming – which typically results in abandonment and defeats the purpose of a rapid modelling approach. Hence, it must be recognised that there is an inherent trade-off between model accuracy/validity and model creation time.
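As a purely illustrative sketch (not part of the framework as published), the main-module / sub-module / decision-unit hierarchy described above could be represented as a small nested data structure; all class and variable names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionUnit:
    """A basic decision element, e.g. processing time or setup time."""
    name: str

@dataclass
class SubModule:
    """A sub-module such as 'capacity', grouping related decision units."""
    name: str
    units: list = field(default_factory=list)

@dataclass
class MainModule:
    """A main PnP module tied to one manufacturing objective."""
    objective: str  # cost, quality, flexibility or throughput
    sub_modules: list = field(default_factory=list)

# A hypothetical 'throughput' module with a 'capacity' sub-module
capacity = SubModule("capacity", [DecisionUnit(n) for n in [
    "processing time", "setup time", "machine reliability",
    "shift length", "number of shifts"]])
throughput = MainModule("throughput", [capacity])

print(throughput.objective, [u.name for u in capacity.units])
```

A PnP tool’s model library could then be populated by composing such objects, with managers picking and selecting units rather than building models from scratch.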
A sufficiently valid model can be created quickly if the correct level of detail is included. Under the proposed framework (Figure 3), a model for initial understanding of the objectives and variables is first established (Tan and Platts, 2003a). Then, “basic” decision models from the PnP modules can be retrieved to serve as references or to enhance the initial model. The variable relationships in the model are then analysed, and potential actions are generated and prioritised. In the proposed framework, PnP modularity means that basic models can be pulled together to support a fully functioning model. A range of examples and applications of each basic model would be available as a reference point, helping managers understand how the model could be applied. Thus, each PnP module contains a library

Figure 2. PnP modules – main modules (cost, quality, flexibility, throughput), sub-modules (e.g. capacity, direct cost, indirect cost) and decision units (e.g. setup time, processing time, shift length, number of shifts)

Figure 3 (panels): Objectives/Variables; Model Development; Plug & Play modules; Analyse Relationships; Prioritisation/Actions

of tested and validated reusable models. To encourage standardization and reusability, there are common entities that serve as initial building blocks, to which problem-unique attributes can be added. Since any time a manager spends on documentation is time taken away from modelling, a PnP software tool would have built-in support for automatic documentation. To gain the maximum benefit from modularity, a highly interactive interface is needed to reduce the time and effort required to build, test and edit a model. For example, managers could use TAPS to develop a model to analyse “production waste” in a manufacturing process (Figure 4). Basically, the model shows the various variables and actions that could increase waste in production. To get further ideas about potential variables and actions, managers could refer to the PnP modules for relevant examples and information. The interactive functions provided in the PnP software allow managers to “pick and select” any relevant variables or actions to enhance the initial model. If a more comprehensive model is found in the modules, managers could simply discard the initial model and rapidly build a better one by making minor modifications to a basic model. Then, various analytical techniques could be used to perform analysis on the developed model – for example, a study of the incremental effect of changes in maintenance policy on the level of production waste.

Case example
The power of the PnP approach is that it allows a model to be configured quickly for a given situation in order to obtain insights into system performance. Figure 5 shows how different PnP modules could be configured for various types of analysis (i.e. capacity, flexibility, cost, delivery, quality, etc.). The PnP framework utilises incremental calculus (Eilon, 1984) to analyse the performance networks that are developed based upon the connectance model variables.
Eilon developed incremental calculus to study the relative or incremental changes that take place among a set of variables. The relative change in a variable is the ratio between the absolute change, or increment, in the variable and the original absolute

Figure 3. A framework for rapid PnP decision modelling

Figure 4. Example of a production waste model under the PnP approach (throughput-related variables such as maintenance, change-over, double-handling, transport defects, re-work, break-down, scrap, obsolescence, manufacturing defects, process loss and waiting time, each weighted by a percentage contribution to waste)

Figure 5. A set of conceptual PnP modules for different types of analysis (capacity, cost, quality, delivery and flexibility, drawing on elements such as process, available time, material and schedule)

value of the variable. The primary motivation for focusing on relative increments rather than the absolute values of the variables is that relationships of a more general nature can be developed from them. A secondary motivation is that incremental calculus is by its very nature more robust: the relative increments are not constrained by their sign, positive or negative, or by the manner in which change takes place, and the change a variable undergoes can be continuous or discrete. Noble and Tanchoco (1995) have shown the value of applying incremental calculus in the justification and design of material handling systems.

The following presents the basics of an incremental calculus analysis. Incremental calculus is based on four main rules, from which it is possible to derive other incremental relationships: addition, subtraction, multiplication and division. Let y* be the relative incremental change, y* = δy/y, where δy = ynew − yold.

The addition rule in incremental calculus is:

y* = (x1 + x2)* = k1·x1* + k2·x2*   (1)

where:

k1 = x1/(x1 + x2), k2 = x2/(x1 + x2), k1 + k2 = 1

The subtraction rule in incremental calculus is:

y* = (x1 − x2)* = k1·x1* − k2·x2*   (2)

where:

k1 = x1/(x1 − x2), k2 = x2/(x1 − x2), k1 − k2 = 1

The multiplication rule in incremental calculus is:

y* = (x1·x2)* = x1* + x2* + x1*·x2*   (3)

The division rule in incremental calculus is:

y* = (x1/x2)* = (x1* − x2*)/(1 + x2*)   (4)
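The four rules can be verified numerically against direct recomputation. The sketch below is an illustrative check (not code from the paper); `rel` computes a relative increment from old and new values:

```python
def rel(old, new):
    """Relative increment y* = (new - old) / old."""
    return (new - old) / old

x1_old, x2_old = 10.0, 4.0
x1_new, x2_new = 12.0, 5.0
x1s, x2s = rel(x1_old, x1_new), rel(x2_old, x2_new)  # 0.2 and 0.25

# Addition rule (1): weights k1, k2 come from the old values
k1 = x1_old / (x1_old + x2_old)
k2 = x2_old / (x1_old + x2_old)
add_rule = k1 * x1s + k2 * x2s
assert abs(add_rule - rel(x1_old + x2_old, x1_new + x2_new)) < 1e-12

# Subtraction rule (2)
k1 = x1_old / (x1_old - x2_old)
k2 = x2_old / (x1_old - x2_old)
sub_rule = k1 * x1s - k2 * x2s
assert abs(sub_rule - rel(x1_old - x2_old, x1_new - x2_new)) < 1e-12

# Multiplication rule (3)
mul_rule = x1s + x2s + x1s * x2s
assert abs(mul_rule - rel(x1_old * x2_old, x1_new * x2_new)) < 1e-12

# Division rule (4)
div_rule = (x1s - x2s) / (1 + x2s)
assert abs(div_rule - rel(x1_old / x2_old, x1_new / x2_new)) < 1e-12
```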

(Note: y*, x1*, x2* represent the relative incremental changes in the variables y, x1, x2.)

An incremental calculus approach enables managers to quickly and quantitatively analyse the relationships between variables in a performance model. For example, managers could use the PnP approach to build a model for overall factory capacity (Figure 6). The following example is a simplified model based on an actual company case study. Basically, the model shows that both process capability and available time have a direct impact on overall capacity. The variables linked to process capability are machine rate, machine uptime, inventory level and raw material quality. The variables linked to available time are worker skill level, number of workers, shift length, and number of setups. Using incremental calculus, the incremental change in capacity can be modelled as:

Figure 6. A conceptual PnP configuration for capacity analysis (capacity split into process – rate, reliability, material quality, inventory – and available time – skill, manpower, regular/overtime shifts, setups, batch size, rush jobs)

Capacity (C) = Process Capability (P) × Available Time (T)   (5)

where:

Process Capability (P) = Process Des (D) × Material Des (M)
Process Des (D) = McRate (R) × Uptime (U)
Material Des (M) = InvLevel (I) × MatlQuality (Q)   (6)

Available Time (T) = (RegTime (RT) + OTTime (OT)) − SetupTime (ST)
RegTime (RT) = RegSkill (K1) × RegWorkers (W1) × RegShift (H1)
OTTime (OT) = OTSkill (K2) × OTWorkers (W2) × OTShift (H2)
SetupTime (ST) = SetupTime (S) × NumSetups (N)   (7)

such that:

C* = P* + T* + P*·T*   (8)

P* = D* + M* + D*·M*, where D* = R* + U* + R*·U* and M* = I* + Q* + I*·Q*   (9)

T* = [(RT + OT)/(RT + OT − ST)] × {[RT/(RT + OT)]·RT* + [OT/(RT + OT)]·OT*} − [ST/(RT + OT − ST)]·ST*   (10)

where:

RT* = K1* + W1* + H1* + K1*·W1* + K1*·H1* + W1*·H1* + K1*·W1*·H1*
OT* = K2* + W2* + H2* + K2*·W2* + K2*·H2* + W2*·H2* + K2*·W2*·H2*
ST* = S* + N* + S*·N*

and where:

McRate (R) = machine rate per hour;
Uptime (U) = uptime percentage as a function of machine reliability;
InvLevel (I) = (1 − InvPercentage), where InvPercentage is the cube root of the absolute value of the positive/negative deviation from the inventory level required for a CONWIP production line (WIP = Throughput Rate × Time in System), limited to a ±50 per cent maximum deviation;
MatlQuality (Q) = incoming material yield;
RegSkill (K1) = average percentage of standard skill level for regular-shift workers;
RegWorkers (W1) = number of regular-shift workers;
RegShift (H1) = length of a regular shift (in hours);
OTSkill (K2) = average percentage of standard skill level for overtime-shift workers;
OTWorkers (W2) = number of overtime-shift workers;
OTShift (H2) = length of an overtime shift (in hours);
SetupTime (S) = time per setup;
NumSetups (N) = number of setups as a function of production batch size and rush jobs.

Consider the following example of using the capacity PnP analysis, encountered by the authors in a recent industrial case study. The company is a custom roadway sign production shop. The shop has three primary types of signs and associated production processes: silk screen, white-on-green, and structural signs. One of the bottleneck processes for both the silk screen and white-on-green signs is the squeeze roll process, which applies a reflective film onto the sign’s structural sheet metal. The current scenario for this process has four regular-time workers (each with an average skill level of 100 per cent) and no overtime workers. The machine is capable of producing 35 parts per hour and has an up-time of 95 per cent. The current product mix requires two setups per day (i.e. changing the colour of the film). The company is currently using a material that has a 100 per cent yield, with acceptable inventory levels.
This results in an adjusted rate of 33.25 units per hour, for a total production of 1,030 per day. The PnP capacity analysis is used to explore the overall impact on capacity when a new, less costly material is utilized that can be processed at a rate of 40 per hour, but has a 90 per cent yield rate and an increased processing complexity that reduces the average worker skill level to 95 per cent of standard. The result of this material substitution is a 2.9 per cent increase in overall machine rate per hour (34.2), due to a 14.3 per cent increase in machine rate combined with a 10 per cent decrease in material yield. However, due to the 5 per cent decrease in skill level, available time decreases 5 per cent to 30.4 hours per day. Therefore, the net result of this material change is a 2.5 per cent decrease in overall output capacity (1,005 per day). Figure 7 shows the machine rate increase required to compensate for different levels of quality decrease. Overall, this analysis revealed to the company that the economic benefit of adopting a less costly material that can be “processed” faster is questionable at best.
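As an illustrative reconstruction (not code from the study), the sketch below applies the incremental rules to the figures quoted above; it recovers the 2.9 per cent process-capability gain and a net capacity decrease of about 2.3 per cent, close to the reported 2.5 per cent (the small gap reflects rounding in the published figures):

```python
def rel(old, new):
    """Relative increment y* = (new - old) / old."""
    return (new - old) / old

# Baseline vs proposed material (figures from the case)
r_star = rel(35, 40)      # machine rate: +14.3%
u_star = rel(0.95, 0.95)  # uptime unchanged
q_star = rel(1.00, 0.90)  # material yield: -10%

# Multiplication rule applied twice (inventory level unchanged)
d_star = r_star + u_star + r_star * u_star
p_star = d_star + q_star + d_star * q_star
print(f"process capability change: {p_star:+.1%}")  # about +2.9%

t_star = rel(1.00, 0.95)  # 5% skill drop cuts available time by 5%

# C* = P* + T* + P*T*
c_star = p_star + t_star + p_star * t_star
print(f"capacity change: {c_star:+.1%}")  # about -2.3%
```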

Figure 7. Relative change in machine rate required to compensate for reduced quality (x-axis: quality decrease, 0.00-0.10; y-axis: per cent increase in machine rate required to maintain 1,030 parts per day)

In conclusion, this type of rapid PnP modelling approach enables managers to explicitly define linkages among variables and quickly obtain relationships between key decision variables, supporting more informed decision making.

Conclusion and further work
Models should be used by decision makers to support decision making. But the current state of the art in developing and using modelling and simulation packages typically demands at least the following skill categories: simulation language programmer, statistician, modeller, etc. It is rare that managers possess all these skills and the time to build a comprehensive usable model from scratch. The proposed PnP approach for rapid decision modelling aims to support two important goals: productivity improvement, to ensure that managers can rapidly build up models; and increased communication and decision-support efficiency, to ensure that those who should be using the models can and will use them. This research has so far introduced, described, and explained the PnP approach. The idea of PnP support for manufacturing decision modelling is new and not yet well developed. We recognise that a number of additional issues must be addressed before the proposed PnP approach is ready to make a practical impact. Further work will primarily focus on two areas:
(1) developing a PnP software tool to operationalise the proposed framework; and
(2) testing its feasibility in a range of companies.

References
Bronsard, F., Bryan, D., Kozaczynski, W., Liongosari, E.S., Ning, J.Q., Ólafsson, A. and Wetterstrand, J.W. (1997), “Toward software plug-and-play”, Proceedings of the 1997 Symposium on Software Reusability, ACM Press, Boston, MA, pp. 19-29.
Brown, E. (1988), “IBM combines rapid modeling technique and simulation to design PCB factory-of-the-future”, Industrial Engineering, June, pp. 23-6.
Burbidge, J.L. (1984), “A classification of production system variables”, in Hubner, H. (Ed.), IFIP Production Management Systems: Strategies and Tools for Design, Elsevier Science Publishers B.V., North Holland.
Eilon, S. (1984), The Art of Reckoning – Analysis of Performance Criteria, Academic Press, London.
Hayes, R.H. and Wheelwright, S.C. (1984), Restoring Our Competitive Edge: Competing Through Manufacturing, Wiley, New York, NY.
Noble, J.S. and Tanchoco, J.M.A. (1995), “Marginal analysis guided design justification: a material handling example”, International Journal of Production Research, Vol. 33 No. 12, pp. 3439-54.
Nymon, J. (1987), “Using analytical and simulation modeling for early factory prototyping”, Proceedings of the Winter Simulation Conference, pp. 721-4.
Peikert, A., Thoma, J. and Brown, S. (1998), “A rapid modeling technique for measurable improvements in factory performance”, paper presented at the Winter Simulation Conference, pp. 1011-6.
Solberg, J.J. (1992), “The power of simple models in manufacturing”, Manufacturing Systems: Foundations of World-Class Practice, National Academy Press, Washington, DC, pp. 215-23.
Suri, R. (1988), “RMT puts manufacturing at the Helm”, Manufacturing Engineering, February.
Suri, R. and Tomsicek, M. (1988), “Rapid modeling tools for manufacturing simulation and analysis”, Proceedings of the 20th Conference on Winter Simulation, San Diego, CA, pp. 25-32.
Tan, K., Lim, C., Platts, K. and Koay, H. (2006), “Managing manufacturing technology investments: an intelligent learning system approach”, International Journal of Computer Integrated Manufacturing, Vol. 19 No. 1, pp. 4-13.
Tan, K.H. and Platts, K. (2003a), “Linking objectives to action plans: a decision support approach based on the connectance concept”, Decision Sciences, Vol. 34 No. 3, pp. 569-93.
Tan, K.H. and Platts, K. (2003b), “Winning decisions: translating business strategy into action plans”, CMIL, University of Cambridge, Cambridge.
Tan, K.H. and Platts, K. (2004), “A connectance based approach for managing manufacturing knowledge”, Industrial Management & Data Systems Journal, Vol. 104 No. 2, pp. 158-68.
Tan, K.H., Platts, K. and Noble, J. (2004), “Building performance through in-process measurement: toward an ‘indicative’ scorecard for business excellence”, Journal of Productivity and Performance Management, Vol. 53 No. 3, pp. 233-44.

Corresponding author
Kim Hua Tan can be contacted at: [email protected]





The cluster approach and SME competitiveness: a review


Aleksandar Karaev
South East European Research Centre, Research Centre of the University of Sheffield and CITY Liberal Studies, Thessaloniki, Greece

Received September 2006 Revised January 2007 Accepted March 2007

S.C. Lenny Koh Management School, University of Sheffield, Sheffield, UK, and

Leslie T. Szamosi
South East European Research Centre, Research Centre of the University of Sheffield and CITY Liberal Studies, Thessaloniki, Greece

Abstract
Purpose – The purpose of this paper is to review the effect of a cluster approach on SMEs’ competitiveness. The primary objective is to examine the use of a cluster approach among SMEs as a tool for meeting the challenges of globalisation and trade liberalisation, and to investigate its contribution to increasing their competitiveness.
Design/methodology/approach – The methodology used is a literature review of published materials. The literature analysis was performed with attention to research objectivity, coverage and authority.
Findings – There is strong evidence to suggest that cluster policy brings an additional positive effect to existing SME policy in industrialised economies. Such effects have not been extensively researched in developing (transition) countries, however, particularly from the point of view of the SMEs (the main actors in the cluster development process) and whether their performance has improved as a result of cluster effects.
Originality/value – The findings will assist business managers in making more informed decisions about adopting a cluster approach and entering into cluster-based relations, and will assist policy makers in designing more efficient cluster policies. The academic value lies in expanding knowledge of the impact of clusters on economic development in transition countries, thereby helping to fill gaps in the existing body of knowledge.
Keywords Competitive strategy, Cluster analysis, Small to medium-sized enterprises, Agglomeration
Paper type Literature review

Journal of Manufacturing Technology Management
Vol. 18 No. 7, 2007, pp. 818-835
© Emerald Group Publishing Limited 1741-038X
DOI 10.1108/17410380710817273

Introduction
Over the last few years, trade liberalization and globalization have significantly increased customer expectations and competition between companies. Nearly simultaneously, global markets have begun offering an abundance of opportunities for SMEs (Gradzol et al., 2005). In order to respond to the increased pressures of globalization and benefit from global market opportunities, SMEs face two main challenges: first, to transform themselves and increase their individual competitiveness (Fassoula, 2006); and second, due to their

limited size, to take advantage of the synergy effects created by entering into cooperative relations with other SMEs and related partner institutions. At the same time, governmental policy at the macro level is attempting to improve the competitiveness of national economies by creating favourable framework conditions for economic activity and promoting various instruments and measures for SME development. According to the literature based on experience in industrialized countries, the cluster concept has been shown to be an efficient instrument for strengthening regional and national economies, but its applicability for improving the competitiveness of participating SMEs has yet to be fully examined.

This paper reviews whether cluster-based strategies can really improve the competitiveness of SMEs, with the primary objective of examining the use of a cluster approach among SMEs as a tool for meeting the challenges of globalization and trade liberalization, as well as investigating its contribution to increasing their competitiveness. The paper discusses the literature on cluster theory and on policies for the development of clusters in industrialized economies as a foundation for further research in this field. This is followed by an overview of perceptions of clusters in different countries, preconditions for cluster formation, and benefits for cluster members. The research does not attempt to answer conclusively what a cluster is, or is not, but to integrate various definitions that may help in understanding the complexity of the concept. Recognizing their importance in economic development, many governments have been active in designing and implementing policies and incentives to support SME development through both financial and non-financial instruments.

Research methodology

The aim of the literature review was to describe and critically appraise studies and articles reporting on clusters and on cluster-based strategies implemented through cluster policies. The review focused on gathering evidence on the relationship between clusters and SME competitiveness. Besides searching a cross-section of electronic databases, the review was conducted by collecting published materials from traditional sources. To examine the effect of clusters on SMEs' competitiveness, more than 250 articles from peer-reviewed journals were selected over a period of 18 months. A broad search strategy was used, covering separate electronic databases (e.g. EBSCO and FirstSearch) and the collection of hard copies of documents from field experts, academics and partner institutions. The key terms used in the search were "clusters" and "competitiveness". In addition, official documents from governmental institutions and reports from international donor organizations provided a solid foundation for examining the cluster phenomenon. A further source of information was two conferences attended by one of the authors:

(1) The Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, held at the University of Modena and Reggio Emilia.

(2) The East-West Cluster Conference, organized by OECD LEED in the region of Friuli Venezia Giulia, Italy.

Cluster approach and SME competitiveness 819

JMTM 18,7

From these conferences, more than 50 conference papers were reviewed. During the first stage of the literature review, the intention was to develop a clearer understanding of clusters. For that purpose, besides taking notes, parts of articles were summarised and classified into different categories. However, since the sources were secondary, this research should serve as a base for collecting primary data in further research.

Cluster definitions

Over the last decade, clusters have been widely recognized as one way of overcoming the size limitations of SMEs and as an important instrument for improving their productivity, innovativeness and overall competitiveness. Despite the fact that numerous studies have been conducted in various countries, a common understanding of the cluster concept has yet to be achieved.

According to Porter (1990), widely considered one of the most prominent authorities in the field, national clusters are formed by firms and industries linked through vertical (buyer/supplier) and/or horizontal (common customers, technology, etc.) relationships, with the main players located in a single nation/state. Porter (1998) later expanded this definition by including institutions (formal organizations) such as universities. Accordingly, geographical proximity has been seen as a conduit for facilitating the transmission of knowledge and the development of institutions, which in turn may enhance cluster effectiveness. In Porter's (1998) view, clustering can encourage an enhanced division of labour among firms, with physical proximity among numerous competing producers encouraging innovation. Other authors hold that clusters refer to geographically bounded concentrations of interdependent firms, which should have active channels for business transactions, dialogue and communication (Rosenfeld, 1997). Without active channels, even a critical mass of related firms is not generally perceived as a local production or social system, and therefore does not operate as a cluster per se (Rosenfeld, 1997). Clusters consist of private enterprises of various sizes, including producers, suppliers and customers, plus labour, government, professional associations, and academic, research or training institutes.
The United Nations Industrial Development Organization (UNIDO, 2000) applies a cluster definition based on sectoral and geographical concentrations of enterprises that produce and sell a range of related or complementary products and face common challenges and opportunities. These concentrations give rise to external economies, such as the emergence of specialized suppliers of raw materials and components or the growth of a pool of sector-specific skills, and can foster the development of specialized services in technical, managerial and financial matters (UNIDO, 2000).

In Japan, two types of clusters can be distinguished: first, the so-called jiba-sangyo (localized industrial communities of the traditional type), where SMEs link to each other as industrial clusters; and second, the geographically concentrated sangyo-shuseki (industrial agglomerations in a particular locality) of more recent origin, where SMEs gather together in support of each other in a new industrial activity, around a large-sized enterprise as input suppliers, or around an academic community of universities and research institutions (Ozawa, 2003).

Frequently, the terms clusters and industrial districts are used interchangeably, but it is posited that the main difference between them is that industrial districts are more input oriented,

securing geographically available inputs for production, whereas clusters are based on generating optimal competitive conditions for firms (Preissl and Solimene, 2003).

It is difficult to determine precisely which factors are prerequisites for cluster development and which appear as a result of the clustering process. Geographical proximity of markets and suppliers, the existence of a pool of specialized labour, the presence of input equipment, the availability of specific natural resources and infrastructure, low transaction costs due to geographic proximity among actors, and access to information have been commonly cited as requirements for cluster creation (Gallo and Moehring, 2002). One factor that adds to the complexity of comparing clusters is their varying geographical coverage, which leads to situations where regional clusters are greater in size and population than national clusters in smaller countries. A cluster's boundaries depend mainly on the linkages between cluster participants and on the complementarities across industries and institutions that are most important to competition, and they do not have to comply with political boundaries (Porter, 1998).

Successful examples of cluster approaches are offered both by regions focusing on "traditional products" such as furniture, ceramics and food (Northern Italy) and by regions with predominantly high-technology product outputs (Silicon Valley). Many successful case studies indicate that the coordination of economic activities, depending on the intensity of cooperation in the form of clusters, can also strengthen the competitiveness of national economies in particular. Cluster-based economic development has proven highly successful in both smaller and larger EU countries.
A good example of increasing regional competitiveness is the so-called "Chair Triangle" in the Udine area of the Friuli Venezia Giulia region in Northern Italy, which accounts for 80 per cent of total Italian chair production and 50 per cent of total European production. It covers an area of 100 km2, in which 1,200 companies are concentrated, with 15,000 employees and an annual turnover of €2.5 billion (OECD, 1997).

Clusters and geographical proximity

It has long been recognized that related firms and industries tend to locate in geographical proximity, but they will concentrate in a location only if the agglomeration brings them benefits greater than the costs of locating in the area (Wolter, 2003). Some authors have distinguished between geographical benefits and agglomeration economies: geographical benefits are related to a certain geographical location (e.g. specialized labour, infrastructure, etc.), whereas agglomeration economies describe how these and other factors are created by an increasing number of firms (Wolter, 2003). Geographical proximity creates competitive advantages for SMEs that closely cooperate and compete, since the host of linkages among cluster members results in a whole greater than the sum of its parts (Porter, 1998). Competitors within the cluster benefit from agglomeration effects in that they gain cost advantages and have access to resources that are not available to competitors located outside the cluster (Pouder and St John, 1996). The geographic concentration of clusters contributes to developing additional financial benefits and technological externalities (Belleflamme et al., 2000). Technological externalities are defined as those consequences of activity which directly influence the production function in ways other than through the market




(Martin and Sunley, 1996). As a result of geographical proximity, communication between cluster members is strengthened and the exchange of knowledge is intensified. Besides codified knowledge, which can be transferred easily through different communication media, informal or so-called tacit knowledge is exchanged rather accidentally, because senders and receivers are not aware of its relevance before they are involved in the communication process (Bergman and Feser, 1999). This randomised information flow is transformed into a meaningful context through such tacit knowledge (Preissl and Solimene, 2003). Since it constitutes part of the assets of cluster companies, tacit knowledge is bound to geographic locations. Tacit knowledge enhances trust between cluster members and, together with trust, represents the intangible assets of the cluster (Portes and Landolt, 1996). Unlike financial and physical assets, intangible assets are hard for competitors to imitate, which makes them a powerful source of sustainable competitive advantage (Kaplan and Norton, 2004). Tacit knowledge and social capital can also have a negative influence, creating entry barriers for companies outside the cluster; for example, when crucial business information is available only to existing companies inside the cluster (Portes and Landolt, 1996).

The rapid advances in information and communication technologies tend to develop virtual links between SMEs, contributing towards collaborative relationships with trading partners and easing virtual manufacturing processes (Chiu et al., 2006), thereby implying that clusters do not necessarily have to be locally defined entities (Preissl and Solimene, 2003). In addition, SMEs have been able to reduce costs and to improve the level of service to their customers through enhanced usage of information technologies (Kumar and Petersen, 2006).
In contrast, however, the introduction of business-to-business (B2B) trading networks increases the global market participation of firms from peripheral countries but does not appear to reduce the importance of geographical concentrations (Zaheer and Manrakhan, 2001). Although geographical proximity loses some importance in the age of internet-based technologies because of easier access to information, some valuable non-codified, tacit knowledge can still be obtained exclusively within a cluster (Preissl and Solimene, 2003). Since codified data, unlike tacit knowledge, convey only half the story, this is partly why information and communication technologies (ICT) do not decrease the importance of geographically concentrated clusters (Andersson et al., 2004). Geographical proximity, informal communication and face-to-face contacts still matter and create competitive advantage, even though transportation and communication costs decline.

Geographical proximity decreases transaction costs (for example, the costs of delivery), since all stakeholders in a value chain and other related institutions are close to each other. Transportation costs are reduced due to the shorter distances, which by definition reduce the risks and therefore the insurance costs (Preissl and Solimene, 2003). According to the same authors, the costs of obtaining information can be significantly reduced due to easy access to information about cluster members and their specific competencies and reliability. The concentration of more firms in an area initially decreases local costs, because their presence leads to the emergence of more providers of infrastructure, business services and so on. In some cases, however, congestion costs may occur, since infrastructure and other local factors cannot grow without limits (Wolter, 2003). Clusters can emerge in locations where there is specific infrastructure, enabling

the participants to benefit from it. This can include specialised training institutions, communal infrastructure, telecommunications, etc. At the same time, the developed infrastructure contributes to attracting new cluster members willing to benefit from it. The existence of a cluster also stimulates the formation of local supporting institutions oriented towards satisfying the specific needs of cluster participants. Clusters often include strategic alliances with universities, research institutes, suppliers of corporate services (brokers, consultants) and customers (Porter, 1998). Proximity helps to establish cooperative linkages between companies by enhancing mutual learning and knowledge creation, and knowledge can "spill over" between local firms due to the easier (informal) contact between them (Wolter, 2003). The exchange of information between firms allows for further exploitation of knowledge externalities (Bagella et al., 1998).

Competitiveness and business performance indicators

The concept of clusters is nearly always related to competitiveness (Porter, 1990), but a distinction should be made between the competitiveness of a nation, a region, an industry and a single company at the micro level. This distinction is important, since different indicators should be used for measuring competitiveness at the micro and macro levels. The competitiveness of a certain region depends on the nature of the business environment in which firms or industries emerge (Porter, 1990). To assess the competitiveness of nations, the World Economic Forum developed the Global Competitiveness Report, which defines competitiveness as the ability of a country to achieve sustained high rates of growth in gross domestic product (GDP) per capita (Schwab and Porter, 2003). Another definition views competitiveness as a measure of the "levers" that a country has to promote sustained improvements in its well-being, given global competition (Sachs et al., 2000).
As defined in the EU Competitiveness Report (2003), competitiveness is the ability of an economy to provide its population with high and rising standards of living and a high level of employment, on a sustainable basis, for all those willing to work. According to Zanakis and Becerra-Fernandez (2005), the primary macro competitiveness indicators are a lower country risk rating, higher computer usage, higher gross domestic investment, savings and private consumption, more imports of goods and services than exports, increased purchasing power parity, GDP, a larger and more productive (but not less expensive) labour force, and higher R&D expenditures.

At the micro level, Porter (1990) argues that a firm can gain competitive advantage over its rivals in two ways, namely through cost advantage and through differentiation. By lowering costs, Porter means the ability of a firm to produce and sell comparable products more efficiently than its competitors, while differentiation is the ability to fulfil customer expectations by providing unique products or services. In any of these definitions, at the macro or the micro level, the central element is productivity. Intellectual capital and its relation to innovation capacity are common factors in the different schemes for assessing competitiveness (Solleiro and Castañón, 2005; Gloet and Terziovski, 2004). Hamel and Prahalad (2005) link sustainable competitive advantage with core competence and define it as an advantage that one firm has relative to competing firms. While most of the research focuses on identifying the factors that determine an organisation's competitiveness (Barney and Zajac, 1994),




some approaches focus more on survival (Barnett and Pontikes, 2004) as a primary determinant of competitiveness.

The interaction between competitive and cooperative attitudes in a cluster has been identified as an important element of cluster dynamics (Porter, 1998). As previously discussed, a cluster combines competing firms in the same industry as well as business partners with compatible competencies. Competitive pressure is an important driver of innovation, while cluster members cooperate along other cluster links (e.g. in a supply chain or an export promotion programme). Thus, some cluster members interact as partners, others as competitors, and these roles can change based on market requirements. These complex roles were explained by authors who underlined that firms of different sizes may find themselves working towards compatible interests when they target different, but related, markets (Amorim et al., 2003). Clusters influence competition, first, by increasing the productivity of companies based in the cluster; second, by promoting innovation; and third, by stimulating the formation of new businesses, which expands and strengthens the cluster itself (Porter, 1998). The competitive intensity within the cluster is emphasized by Pouder and St John (1996), who argued that competition may become more intense among clustered than non-clustered firms, because cluster firms compete directly for human, financial and technological resources. A cluster creates benefits for its members that are not available to companies outside the cluster (Camisón, 2003). Although the market is the main regulator of competition in clusters, specialized institutions and business associations can regulate certain aspects (Dwivedi and Varman, 2003).

Specialisation and innovation

High concentrations of SMEs, on both the supply and the demand sides, together with cluster support institutions, can contribute to high levels of specialisation.
As with infrastructure, the existence of specialised companies attracts potential cluster participants, and once attracted, they generate additional pressure for further specialisation. This phenomenon has been interpreted as "economies of specialisation" (Preissl and Solimene, 2003). Sectoral specialization and geographical concentration were perceived by Dwivedi and Varman (2003) as instruments for creating a collective reputation, which also makes access to local and national clusters more attractive for SMEs. The influence of specialisation on quality and efficiency was recognised as early as 370 B.C. by Xenophon, who wrote: "He who devotes himself to a very highly specialized line of work is bound to do it in the best possible way" (Ott, 1996).

Trade liberalisation, rapid technological change and globalisation create additional pressures on SMEs to specialise and to concentrate on their core competencies (Deavers, 1997), but their survival also depends heavily on their innovation capacity (Joyce and Woods, 2003). Naturally, the degree and the type of innovation depend on the type of industry, the size of the firm and its level and degree of specialisation. Geographical proximity, shared infrastructure and strong links between cluster firms can create a specific innovative environment (Pouder and St John, 1996). Innovation is so closely related to clusters that some authors even define them through the innovation process. For example, Preissl and Solimene (2003) defined clusters as a set of interdependent organisations that contribute to the realisation of innovations in an economic sector or industry. In this definition, it is obvious that there is no

geographic orientation; the decisive criterion is that the relevant actors take part in the same activity, which then leads to innovation. According to Furman et al. (2000), an innovation orientation is of paramount importance for the global competitiveness not only of a cluster but also of a nation. They define national innovative capacity as the potential of a country, both as a political and an economic entity, to produce and commercialise a flow of innovative technology at a given point in time. In an economic environment characterised by dramatic change, the ability to explore emerging opportunities by launching and learning from strategic experiments is more critical for survival than ever (Govindarajan and Trimble, 2004). A flexible organisation provides ways for a company to pursue innovation and allows for adaptability to changing circumstances (Goold and Campbell, 2002). SMEs have to realise that their flexibility, their capacity to react to changes and their continuing openness to innovation will be factors of crucial importance for the future (Muir, 1995). Furman et al. (2000) believe that the organisation that adapts to changes most effectively will be rewarded with consequent growth in sales, profits and, possibly, employment. In high-tech industries and industries with clustering dynamics, process innovations, which show a high level of local cooperation with suppliers and universities, are more frequent (Brenner, 2003).

Entrepreneurial environment as a precondition for SME development

In the literature there are numerous examples that emphasize the importance of an appropriate business environment, which may be a base for the appearance of the critical mass of SMEs that is a precondition for cluster formation. Conversely, the lack of such an environment may be a significant barrier to implementing a cluster approach.
For example, in order to replicate the success of the northern clusters, the Italian Government initiated the formation of industrial districts in certain areas of Southern Italy, but this top-down approach failed because of the lack of an entrepreneurial environment (Castillo and Fara, 2002). An example of a failed cluster initiative, one of the so-called "cathedrals in the desert", was the petrochemical plants in this area. The absence of relevant social and economic foundations in the surrounding environment was suggested as one of the reasons for their failure to achieve results similar to those of the industrial districts in Northern Italy (Castillo and Fara, 2002).

An entrepreneurial environment based on openness to criticism, new ideas and risk taking was encouraged even in ancient times in Mieza, where a generation of leaders was created under the supervision of Aristotle (Bose, 2003), who stressed that the key to risk-taking is an open atmosphere where challenges to authority and ideas are accepted. Protecting an atmosphere of openness was a critical element of Mieza's educational environment, regardless of how direct and strong the criticism might have been (Bose, 2003). A so-called "learning organization" likewise requires an environment where experimenting with new approaches is encouraged and errors are not perceived as failures (Love et al., 2004). Such an environment is appropriate for the formation of a critical mass of SMEs as a base for cluster development.

Clusters create an appropriate environment for new start-ups for a variety of reasons. Entrepreneurs working within a cluster can easily perceive unsatisfied needs in their geographical area and, using the assets, skills, inputs and staff that are often readily available at the cluster location, they can establish a new enterprise (Porter, 1998). Furthermore, local financial institutions and investors are already




familiar with the cluster and may be less risk averse towards cluster members. An entrepreneurial environment encourages an entrepreneurial spirit in ways that generate opportunities and create conditions for establishing new SMEs, and a critical mass of SMEs is a crucial factor for cluster development.

The role of trust building for cluster development

In contrast to the previously described opinions, Ceglie (2003) argued that geographical concentrations of SMEs operating in the same sector are not sufficient to produce "external economies". In his view, other factors, such as trust building and constructive dialogue among cluster actors, the exchange of information, the identification of common strategic objectives, and agreement on a joint development strategy and its systematic and coherent implementation, are of paramount importance for building an efficient cluster. For strengthening cooperation between cluster firms, formal institutions such as business associations, labour associations and specialized institutions are considered very important (Dwivedi and Varman, 2003). Raising the level of trust between businesses that are cluster members is a strategic determinant of the successful development of clusters. Industrial districts, an organizational model similar to yet distinct from clusters, as discussed earlier, emphasize the contextual significance of shared social institutions and the importance of relationships based on trust and on the sustained reproduction of co-operation between intra-district agents (Camisón, 2003). High trust levels also decrease transaction costs, reducing the costs of legal disputes and administrative procedures. To achieve this, rules of business conduct need to be developed on several levels, together with functioning measures (both ethical and legal) to sanction them.
Dwivedi and Varman (2003), however, found that informal institutions also play a significant role in the exchange of shared values and norms, which may serve as a starting point for creating work ethics and business practices.

Evaluation

More than 250 articles from peer-reviewed journals, reports from official governmental institutions and international donor organisations, and 50 conference papers were reviewed over the preceding 18 months. In the literature, various, and in some cases quite opposite, perceptions have been identified of how different factors influence cluster development. Table I presents a cross-section of key authors and their frequently cited themes, as a critique of the previous literature on clusters.

To help in understanding the complexity of the cluster concept, the literature review gives an overview of various cluster definitions, without attempting to conclusively answer what a cluster is or is not. The literature provides varying perceptions of the importance of geographical proximity for cluster effects, especially in light of the rapid development of information and communication technology. The concept of clusters is always related to competitiveness, but a distinction should be made between the competitiveness of a nation, a region, an industry and a single company at the micro level. In spite of all the findings from the literature supporting the claim that clusters bring positive effects to the economy of a geographical location, there is a lack of substantive evidence that the economic progress of industries and regions is a result of an organised

Author (Year)

Theme

The United Nations International Development Organization – UNIDO (2000)

Cluster definition

Wolter (2003)

Belleflamme et al. (2000)

Chiu et al. (2006) and Preissl and Solimene (2003)

Cluster approach for SMEs’ competitiveness Critique Clusters are sectoral and geographical concentrations of enterprises that produce and sell a range of related or complementary products and, face common challenges and opportunities

Without active channels for business transactions, dialogue and communication, even a critical mass of related firms does not operate as a cluster per se (Rosenfeld, 1997)

Geographical proximity

Difference between geographical benefits and agglomeration economies – geographical benefits are related to a certain geographical location (e.g. specialized labour, infrastructure, etc.), whereas agglomeration economies (benefits) describe how these and other factors are created by increasing the number of firms

How big should be the area where geographical concentration takes place, in order to produce geographical benefits and what is the required number of companies for creating agglomeration effects

Geographical proximity

The geographic concentration of clusters contributes to developing additional financial benefits and technological externalities

Clusters do not necessarily have to be locally defined entities, due to the rapid advances in information and communication technologies (ICT) (Preissl and Solimene, 2003)

The rapid advances in ICT tend to develop virtual links between SMEs, contributing towards realizing collaborative relationships with trading partners and easing the virtual manufacturing processes thus clusters do not necessarily have to be locally defined entities (Preissl and Solimene, 2003)

ICT and trading networks increases the global market participation of firms from peripheral countries, but does not appear to reduce the importance of locational clusters (Zaheer and Manrakhan, 2001) ICT does not decrease the importance of geographically concentrated clusters since data that are codified, but tacit, convey only half the story (Andersson et al., 2004) (continued)

Cluster approach and SME competitiveness 827

Table I. A summarised critique of previous literature on cluster

JMTM 18,7

Theme

Pouder and St John (1996)

Geographical proximity

Competitors within the cluster will benefit from agglomeration effects in a way where they will gain cost advantages and have access to resources that are not available to competitors not located in the cluster

The literature does not provide clear evidence what is happening with companies which are located in the same area, but are not cluster members. Does it mean that the companies which are located in the same area are by definition cluster members ?

Table I. Cluster approach for SMEs' competitiveness

Author (Year): Porter (1990)
Theme: Competitiveness
Claim: The competitiveness of a certain region depends on the nature of the business environment in which firms or industries emerge.
Critique: Competitiveness is a result of the entrepreneurial activity of individual firms, but also of an appropriate structural policy, a functioning competition policy and adequate infrastructure (Schwanitz et al., 2002).

Author (Year): Porter (1998)
Theme: Competitiveness
Claim: Clusters contribute towards increasing the competitiveness of industries, regions, and even whole nations.
Critique: It remains open whether the existing cluster initiatives are contributing towards increasing the individual competitiveness of participating SMEs.

Author (Year): Porter (1990)
Theme: Competitiveness
Claim: At the micro level, a firm can gain competitive advantage over its rivals in two ways, namely cost advantage and differentiation.
Critique: Hamel and Prahald (2005) link sustainable competitive advantage with core competence and define it as an advantage that one firm has relative to competing firms.

Author (Year): Pouder and St John (1996)
Theme: Innovation
Claim: Geographical proximity, shared infrastructure and strong links between cluster firms create a specific innovative environment.
Critique: There is no clear evidence whether an innovative environment creates geographical concentration by attracting related companies, or whether it is a result of an already existing concentration of companies in a certain area.

Author (Year): Porter (1998)
Theme: Entrepreneurial environment
Claim: Clusters create an environment appropriate for new start-ups for a variety of reasons. Entrepreneurs working within a cluster can easily perceive unsatisfied needs in their geographical area and, using the needed assets, skills, inputs and staff that are often readily available at the cluster location, can establish a new enterprise.
Critique: Using clusters as an instrument for creating an entrepreneurial environment contradicts those authors (Castillo and Fara, 2002) who believe that clusters should be set up in areas with an already existing entrepreneurial environment in order to further increase their competitiveness.

Author (Year): Camison (2003)
Theme: Trust building
Claim: The industrial district, as an organizational model similar to the cluster, emphasizes the contextual significance of shared social institutions and the importance of relationships based on trust and on the sustained reproduction of co-operation between intra-district agents.
Critique: For strengthening the cooperation between cluster firms, formal institutions such as business associations, labour associations and specialized institutions, as well as informal ones, are considered to play a significant role in exchanging shared values and norms (Dwivedi and Varman, 2003).

Author (Year): Ceglie (2003)
Theme: Trust building
Claim: In addition to the geographical concentration of SMEs operating in the same sector for producing "external economies", other factors, such as building of trust and constructive dialogue among cluster actors, are of paramount importance for building an efficient cluster.
Critique: The lack of a holistic view of all pre-conditions to facilitate cluster adoption and performance may result in a discrete analysis of trust building as the underlying factor for a cluster approach to work in the SMEs' environment. No evidence is demonstrated to show to what extent trust building affects cluster performance on competitiveness.

Author (Year): Poole (1998)
Theme: Cluster policy
Claim: In the process of cluster development, international donor organizations can play a very important role, especially in the developing countries.
Critique: There are some indicators, however, that clusters with high levels of dependence on foreign assistance are less autonomous, have weaker capabilities and have difficulties in achieving long-term sustainability (Birkinshaw and Hood, 2000).

JMTM 18,7


cluster approach or due to some other external factors. The applicability of successful cluster policies in developing countries has not yet been substantiated, due to the relatively short time frame of implementation. The literature also does not show whether becoming a cluster member is a matter of good strategic business decision-making or is determined only by the location where a company operates. Despite the abundance of research on the benefits for cluster members, there is insufficient research on the competitiveness of companies which, although based in the same geographical location, do not participate in cluster activities.

Conclusions and future research
According to Schwanitz et al. (2002), competitiveness means the ability of individual firms, or of whole sectors, regions and even countries, to assert themselves successfully in the domestic and global market. The same authors also suggested that competitiveness is a result of the entrepreneurial activity of individual firms, but also of an appropriate structural policy, a functioning competition policy and adequate infrastructure. Governments – both national and local – have considerable roles to play in the promotion of a clustering approach. Besides creating the framework conditions, setting the rules for competition and promoting entrepreneurial spirit, they should actively engage in, and promote, such an approach (Porter, 1998). All cluster participants need assistance in strengthening their levels of cooperation, increasing mutual trust and developing an effective private/public dialogue. In the process of role definition, international donor organizations can play a very important role, especially in developing countries, but they should confine themselves to providing support measures only, taking into consideration the sustainable development of the country in the sense that economic benefits are available for everyone (Poole, 1998).
There are some indicators, however, that clusters with high levels of dependence on foreign assistance are less autonomous, have weaker capabilities and have greater difficulties in achieving long-term sustainability (Birkinshaw and Hood, 2000).

The literature has shown the benefits of establishing clusters as an efficient tool for overcoming the size limitations of SMEs. Geographical proximity brings so-called agglomeration effects in terms of higher specialization, innovation and knowledge transfer, which result in cost reduction and improved competitiveness of industrial sectors, regions and nations. The relationships between variables such as competitiveness, performance, specialization, innovation and trust are shown in the conceptual model in Figure 1. There are some examples of the failure of cluster policy, but in general there is strong evidence that joining forces in clusters brings additional benefits for the SMEs that make such strategic decisions. According to best practice from countries with a long tradition of SME clusters, certain preconditions for cluster development have to be fulfilled, rather than relying on top-down initiatives driven by regional or national authorities. Although there is an abundance of literature on cluster-related issues, most of it covers experiences in industrialised countries where clusters have shown some positive effects. Regarding transition countries, there is in reality no strong evidence that a cluster policy brings additional positive effects beyond existing SME policies.

Cluster approach and SME competitiveness

(Figure 1 depicts the conceptual model: preconditions for cluster formation in the region – geographical proximity, entrepreneurial culture, a critical mass of firms and trust building – feed into clusters, which deliver perceived benefits/measures for competitiveness – productivity, specialization, innovation, costs and trust – leading to competitiveness.)

Such effects have not yet been researched, especially from the point of view of the SMEs, the main actors in the cluster development process, in relation to whether or not their performance has been improved as a result of participation. Another literature gap is that, although the competitiveness of clusters and their members has been widely researched, the literature does not cover the competitiveness of companies that made a decision to remain outside clusters, the so-called non-cluster members. It is not yet confirmed whether SMEs face any disadvantage by deciding not to enter a particular cluster, or whether there is any type of "knock-on" effect. Since there is a lack of evidence on the relation between clusters and competitiveness, especially from the perspective of the key cluster actors – the SMEs, who are supposed to be the main beneficiaries of clusters – the academic community needs to review in more depth whether the existing cluster initiatives are:
. contributing towards improving the performance of participating SMEs, especially in the developing countries;
. contributing towards increasing the individual competitiveness of participating SMEs; and
. creating additional benefits which are not accessible to non-cluster members.
Research is needed to show how cluster participants are performing in relation to non-cluster ones from the same industry, and to compare the performance of companies before and after joining a cluster. It would also be interesting to compare the satisfaction of companies with different cluster-supporting initiatives and institutions. Besides the governmental policy support of cluster development, international donor organizations also implement projects, especially in the developing countries,

Figure 1. Conceptual model of the relationships among cluster variables


aimed at transferring experiences from countries where clusters contribute to economic development. Hence, special attention should be paid to investigating the experience of SMEs with cluster support projects offered by international donor organizations.

References
Amorim, M., Rocha Ipiranga, A. and Scipiao, T. (2003), "The construction of governance among small firms: a view from the developing world", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Andersson, T., Serger, S.S., Soervik, J. and Hansson, W.E. (2004), The Cluster Policies Whitebook, IKED, Malmo.
Bagella, M., Becchetti, L. and Sacchi, S. (1998), "The positive link between geographical agglomeration and export intensity: the engine of Italian endogenous growth?", Economic Notes, Italy, pp. 1-34.
Barnet, W.P. and Pontikes, E.G. (2004), "The Red Queen: history-dependent competition among organisations", Research in Organizational Behavior, Vol. 26, pp. 1-35.
Barney, J.B. and Zajac, E.J. (1994), "Competitive organisational behaviour: toward an organisationally-based theory of competitive advantage", Strategic Management Journal, Vol. 15, pp. 5-9.
Belleflamme, P., Picard, P. and Thisse, J.F. (2000), "An economic theory of regional clusters", Journal of Urban Economics, Vol. 48 No. 1, pp. 158-84.
Bergman, E.M. and Feser, E.J. (1999), "Industry clusters: a methodology and framework for regional development policy in the United States", in Boosting Innovation: The Cluster Approach, OECD Publications, Paris.
Birkinshaw, J. and Hood, N. (2000), "Characteristics of foreign subsidiaries in industry clusters", Journal of International Business Studies, Vol. 31 No. 1, pp. 141-55.
Bose, P. (2003), Alexander the Great's Art of Strategy: The Timeless Leadership Lessons of History's Greatest Empire Builder, Gotham Books, New York, NY.
Brenner, T. (2003), "Innovation and cooperation during the emergence of local industrial clusters: an empirical study in Germany", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Camisón, C. (2003), "Shared competitive and comparative advantages: a competence-based view of the competitiveness of industrial districts", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Castillo, D. and Fara, G.M. (2002), "Social capital and clusters", Proceedings of the East-West Cluster Conference, OECD LEED, Grado, Italy.
Ceglie, G. (2003), "Cluster and network development: examples and lessons from UNIDO experience", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Chiu, M., Lin, H.W., Nagalingam, S.V. and Lin, G.C. (2006), "Inter-operability framework towards virtual integration of SME in the manufacturing industry", International Journal of Manufacturing Technology and Management, Vol. 9 Nos 3/4, pp. 328-49.
Deavers, K. (1997), "Outsourcing: a corporate competitiveness strategy, not a search for low wages", Journal of Labor Research, Vol. 8 No. 4, pp. 503-19.
Dwivedi, M. and Varman, R. (2003), "Nature of trust in small firm clusters: a case study of Kanpur saddlery cluster", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Fassoula, E.D. (2006), "Transforming the supply chain", Journal of Manufacturing Technology Management, Vol. 17 No. 6, pp. 848-60.
Furman, J., Porter, M. and Stern, S. (2000), "Understanding the drivers of national innovative capacity", Academy of Management Proceedings, pp. 1-7.
Gallo, C. and Moehring, J. (2002), "Innovation and clusters", Proceedings of the East-West Cluster Conference, OECD LEED, Grado, Italy.
Gloet, M. and Terziovski, M. (2004), "Exploring the relationship between knowledge management practices and innovation performance", Journal of Manufacturing Technology Management, Vol. 15 No. 5, pp. 402-9.
Goold, M. and Campbell, A. (2002), "Do you have a well-designed organization?", Harvard Business Review, March.
Govindarajan, V. and Trimble, C. (2004), "Strategic innovation and the science of learning", MIT Sloan Management Review, Vol. 45 No. 2, p. 67.
Gradzol, J.R., Gradzol, C.J. and Rippey, S.T. (2005), "An emerging framework for global strategy", International Journal of Manufacturing Technology & Management, Vol. 7 No. 1, p. 11.
Hamel, G. and Prahald, C.K. (2005), "Strategic intent", Harvard Business Review, Vol. 83 No. 7, pp. 148-61.
Joyce, P. and Woods, A. (2003), "Managing for growth: decision making, planning, and making changes", Journal of Small Business and Enterprise Development, Vol. 10 No. 2, pp. 144-51.
Kaplan, R.S. and Norton, D.P. (2004), "Measuring the strategic readiness of intangible assets", Harvard Business Review, Vol. 82 No. 2, pp. 31-43.
Kumar, S. and Petersen, P. (2006), "Impact of e-commerce in lowering operational costs and raising customer satisfaction", Journal of Manufacturing Technology Management, Vol. 17 No. 3, pp. 283-302.
Love, P.E.D., Edwards, D.J. and Irani, Z. (2004), "Nurturing a learning organisation in construction: a focus on strategic shift, organizational transformation, customer orientation and quality centred learning", Construction Innovation, Vol. 4 No. 2, pp. 113-26.
Martin, R. and Sunley, P. (1996), "Paul Krugman's geographical economics and its implications for regional development theory: a critical assessment", Economic Geography, Vol. 72 No. 3, pp. 259-92.
Muir, J. (1995), "Managing change", Work Study, Vol. 44 No. 2, pp. 16-18.
OECD (1997), Globalisation and SME, Synthesis Report, Organization for Economic Cooperation and Development.
Ott, J.S. (1996), Classic Readings in Organizational Behaviour, 2nd ed., Wadsworth, Belmont, CA.
Ozawa, T. (2003), "Economic growth, structural transformation, and industrial clusters: theoretical implications of Japan's postwar experience", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Poole, A. (1998), "Opportunities for change", Structural Survey, Vol. 16 No. 4, pp. 200-4.
Porter, M. (1990), The Competitive Advantage of Nations, The Free Press, New York, NY.
Porter, M. (1998), "Clusters and the new economy of competition", Harvard Business Review, Vol. 76 No. 6, pp. 77-91.



Portes, A. and Landolt, P. (1996), "The downside of social capital", American Prospect, May-June, pp. 18-21.
Pouder, R. and St John, C.H. (1996), "Hot spots and blind spots: geographical clusters of firms and innovations", The Academy of Management Review, Vol. 21 No. 4, pp. 1192-226.
Preissl, B. and Solimene, L. (2003), "Innovation clusters: virtual links and globalization", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Rosenfeld, S.A. (1997), "Bringing business clusters into the mainstream of economic development", European Planning Studies, Vol. 5, pp. 3-23.
Sachs, J., Zinnes, C. and Eilat, Y. (2000), "Benchmarking competitiveness in transition economies", Harvard Institute for International Development, No. 62, p. 5.
Schwab, K. and Porter, M. (2003), The Global Competitiveness Report: 2002-2003, World Economic Forum, Geneva, and Oxford University Press, New York, NY.
Schwanitz, S., Müller, R. and Will, M. (2002), Competitiveness of Economic Sectors in EU Accession Countries: Cluster-oriented Assistance Strategies, Deutsche Gesellschaft für Technische Zusammenarbeit (GTZ), Eschborn.
Solleiroa, J.L. and Castanon, R. (2005), "Competitiveness and innovation systems: the challenges of Mexico's insertion in the global context", Technovation, Vol. 25 No. 9, pp. 1059-70.
UNIDO (2000), Promoting Enterprise through Networked Regional Development, United Nations Industrial Development Organization, UNIDO Publications, Vienna.
Wolter, K. (2003), "A life cycle for clusters? The dynamics governing regional agglomerations", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.
Zaheer, S. and Manrakhan, S. (2001), "Concentration and dispersion in global industries: remote electronic access and the location of economic activities", Journal of International Business Studies, Vol. 32 No. 4, pp. 667-87.
Zanakis, S.H. and Becerra-Fernandes, I. (2005), "Competitiveness of nations: a knowledge discovery examination", European Journal of Operational Research, Vol. 166 No. 1, pp. 185-211.

Further reading
DTI (2004), "Small Business Service", Department of Trade and Industry, available at: www.sbs.gov.uk/default.php?page=/analytical/statisticsfaq.php (accessed 18 September 2005).
Pezzeti, R. and Primavera, S. (2003), "The internationalization of Italian industrial district firms in Mexico", Proceedings of the Conference on Clusters, Industrial Districts and Firms: The Challenge of Globalization, University of Modena and Reggio Emilia.

About the authors
Aleksandar Karaev, Economist, MBA, is an experienced and motivated professional in the field of economic development. For the last six years he has been employed by the German organization for technical cooperation (GTZ), which implements projects in the Republic of Macedonia within the framework of bilateral development cooperation between the Federal Republic of Germany and the Republic of Macedonia. From 2000 to 2005 he worked as Project Coordinator of the project for Private Sector Promotion, and since 2005 he has been coordinating the project for Regional Economic Development. The focus of his work is project management, especially in the field of small and medium enterprises and regional development. He is currently a PhD student at the South-East European Research Centre (SEERC), Thessaloniki, Greece, an affiliated institution of the

University of Sheffield. Aleksandar Karaev is the corresponding author and can be contacted at: [email protected]
S.C. Lenny Koh is the Director of the Logistics and Supply Chain Management Research Group and an Associate Professor/Senior Lecturer in Operations Management at the University of Sheffield Management School, UK. She holds a Doctorate in Operations Management and a First-class honours degree in Industrial and Manufacturing Systems Engineering. Her research interests are in the areas of production planning and control (ERP and ERPII), uncertainty management, modern operations management practices, logistics and supply chain management, e-business, e-organisations, knowledge management, sustainable business and eco-logistics. She has 180 publications comprising journal papers, books, edited books, edited proceedings, edited special issues, book chapters, conference papers, technical papers and reports. She is the Editor-in-Chief of the International Journal of Enterprise Network Management, International Journal of Value Chain Management and International Journal of Logistics Economics and Globalisation, and the Associate Editor of the International Journal of Systems Science, Enterprise Information Systems and International Journal of Operational Research. E-mail: S.C.L. [email protected]
Leslie T. Szamosi is a Senior Lecturer and Academic Director of the EMBA program at City College (Affiliated Institution of the University of Sheffield). He holds a PhD from Carleton University (Ottawa, Canada) in the area of Organizational Behaviour, as well as an MMS in International Business from the same institution. He has worked in both the private and public sectors in Canada, and as a private consultant for diverse organisations in both Europe and North America, such as the European Union, Digital Electronics (now part of Compaq Computers), Human Resources Development Canada, Industry Canada, and Post and Telecommunications of Kosovo.
He has published extensively in the areas of Organisational Development, Change Management, and International Business Development and acts as a referee for a wide range of academic conferences and journals. E-mail: [email protected]





Received August 2006; revised January 2007; accepted April 2007.

An ANN-based DSS system for quality assurance in production network
Walter W.C. Chung, Kevin C.M. Wong and Paul T.K. Soon
Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong

Abstract
Purpose – The purpose of this paper is to propose an integrated model of decision support system (DSS), artificial neural network, information and communication technologies and statistical process control (SPC) to enable stakeholders with different special interests to agree on, and commit to, the decision to stop the production line when something goes wrong somewhere in a supplier network environment.
Design/methodology/approach – A DSS is proposed to capture exceptional signals at source on deterioration of product quality, alerting staff to the preventive actions needed before problems get out of hand. Supervisors are given a set of guidelines to support making the decision. Real-time SPC and rule-based decision support procedures are used to trigger pre-defined exceptional signals for forwarding to the most appropriate person (the knowledge holder in the problem domain) to make the decision to stop the line. The servers at all remote sites are internet-connected and provide real-time quality data to the regional supply chain manager. A case study is described to show how this is implemented in a lens manufacturing company.
Findings – A significant improvement in quality level can be achieved by holding the knowledge worker accountable for the decision to stop the production line, rather than letting it be made by default as in most traditional operations.
Practical implications – The paper provides a concept for structuring decision support activities so that the person responsible for the decision to stop the production line is held accountable by all stakeholders.
Originality/value – Practitioners can replicate the approach used in this paper in their own situations involving decisions to be made on unstructured problems and unclear responsibilities.
Keywords: Statistical process control, Decision support systems, Neural nets, Problem solving, Decision-making, Communication technologies
Paper type: Case study

Journal of Manufacturing Technology Management, Vol. 18 No. 7, 2007, pp. 836-857. © Emerald Group Publishing Limited, 1741-038X. DOI 10.1108/17410380710817282

Introduction
In a regional network of supply chains, the site production manager not only plays an important role in monitoring the whole production process but also provides timely information to the regional supply chain manager to ensure quality products are made. The regional supply chain manager should revise the latest regional supply plan in anticipation of any decision to stop production suddenly at one site, or to transfer the demand forecast from one site to another, to achieve inter-site quality performance.

The authors acknowledge the support offered by the Central Grant of The Hong Kong Polytechnic University under Project Number G-YA19.

However, deciding when to stop the production line is not straightforward, because the decision involves balancing the different interests of many colleagues, which often renders it ineffective in stopping production in time to remedy the problem. The authors propose an artificial neural network (ANN)-enabled decision support system (DSS) to solve a simple but semi-structured production supply problem in a lens manufacturing environment.

The company is a leading multinational manufacturer of corrugated CCD lenses, with plants and subcontractors in many parts of the world including Korea, China, Japan, Malaysia and Hong Kong. It has been using statistical process control (SPC) to serve customers that request statistical reports. The regional supply chain manager needs to maintain the plan for a constant supply to meet global customer demand. The supply and demand networks are very complicated, and shifting supply from one site to another may incur huge transportation costs and long replenishment times. The root cause is known to lie with the site production manager, who struggles over whether to stop the production line when the quality level drifts from good to poor. It takes time to prepare handwritten control charts, and the time allowed to perform meaningful data analysis is limited. The company could see that SPC software would be useful and wanted to develop a tailor-made system for its purpose. The in-house developed product was able to provide all the charts needed and responded flexibly to the demands of its employees. A computer-based and internet-enabled statistical process control software (e-SPC) is now used in many divisions of the company. Not only do the manufacturing plants use the product, but the corporate quality staff use it to analyze monthly reports sent electronically by several divisions.
Existing method of quality assurance
Records of "quality checks" are made manually and stored in Excel files. However, the quality-check results are mostly outdated, producing a large number of inferior products while the quality level gradually drifts into the "out of control" region. There are four grades of product quality, and the most commonly produced products fall into grades B, C and D. The grade definitions are:
. A Grade – superior quality;
. B Grade – acceptable quality;
. C Grade – inferior quality; and
. D Grade – defective quality, to be destroyed.
It was found that the poorly graded products were mainly caused by the creation of surplus/buffer stock to avoid delays in fulfilling orders, resulting in:
. a high volume of customer complaints after goods arrive at the site;
. high compensation – normally two to three times the original production cost; and
. cash flow problems due to excess stock and delays in payment.
In order to improve the company's liquidity and maintain customer loyalty, the top management discovered that the key problem lies in the difficulty with deciding

ANN-based DSS system for QA



"when" to cease production. The decision to put production on hold is not always clearly defined for all cases. Hence, the decision to stop a production line is extremely difficult to make. More often than not, the production manager is reluctant to call a stop to production even when product quality is seen to be getting worse. Barriers contributing to a delay in deciding to stop the production line are:
. Too many parties have an interest in keeping production going: customers, marketing, PMC, procurement, QA, suppliers and the GM/plant manager.
. Too many concerns/constraints to consider, such as late delivery, tight job schedules, customer complaints and compensation for defective goods produced.
. Too many factors contributing to the problem: quality of raw materials, input or setting errors on SPC, etc.
. Too little information to support decision-making: no real-time data, no communication among departments, no clear boundaries of responsibility.
. No data or timely reports to prompt action to reverse the worsening situation of defective products being made.
Currently, there are three levels in the decision hierarchy (Figure 1). At the strategic decision-making level, the decision to stop the production line is clear and is mainly dictated by products manufactured out of specification. Other determinants are the "Rule Base" and policy settings. Decision-making at this level aims to minimize production idle time and control the defect rate. At the operational decision-making level, the decision to stop the production line is also clear-cut: it is driven mainly by the lack of raw materials and resources. At the tactical decision-making level, the decision to stop the production line is less distinct. Many areas may contribute to the need to call a halt to the production line, but arriving at this recommendation requires more knowledge and information to assist the decision-making process.
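The three-level split in the decision hierarchy can be expressed as a small rule-based check. The sketch below is illustrative only: the rule set, thresholds and names (`LineStatus`, `stop_line_decision`) are assumptions for exposition, not the company's actual "Rule Base".

```python
# Hypothetical sketch of a rule-based stop-line decision support procedure.
# Thresholds and field names are illustrative assumptions, not from the paper.
from dataclasses import dataclass

@dataclass
class LineStatus:
    out_of_spec: bool          # product parameter outside specification (strategic rule)
    materials_available: bool  # raw materials/resources on hand (operational rule)
    defect_rate: float         # fraction of C/D grade output in the last batch
    spc_out_of_control: bool   # SPC chart signalled a special cause

def stop_line_decision(s: LineStatus) -> str:
    # Strategic level: an out-of-specification product dictates an immediate stop.
    if s.out_of_spec:
        return "STOP: product out of specification"
    # Operational level: without materials or resources the line cannot run anyway.
    if not s.materials_available:
        return "STOP: operational limitation (materials/resources)"
    # Tactical level: less clear-cut; combine signals and escalate to the
    # knowledge holder rather than letting the decision be made by default.
    if s.spc_out_of_control and s.defect_rate > 0.25:
        return "ALERT: escalate to site production manager with SPC evidence"
    return "CONTINUE: no stop condition triggered"

print(stop_line_decision(LineStatus(False, True, 0.40, True)))
# prints: ALERT: escalate to site production manager with SPC evidence
```

The point of the sketch is that only the tactical branch needs human judgement; the strategic and operational branches are deterministic, which matches the paper's claim that those two levels are clear-cut.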
Problem definition
At present, the actual grade distribution is 10 percent Grade A, 30 percent Grade B, 40 percent Grade C and 20 percent Grade D (Figure 2).

Figure 1. Decision hierarchy triangle: strategic decisions (stop the production line when the product parameter is out of specification), tactical decisions (clarify the situation and simplify the considerations involved before making the decision to stop the production line) and operational decisions (stop the production line due to operational limitations).

Almost 60 percent of the products are made at an unsatisfactory grade. Many factors complicate the decision at the tactical level that the production line should be stopped. The site production manager needs to consider the concerns of the company: the marketing department is concerned with meeting delivery orders on time in order to sustain good customer relationships and safeguard customer loyalty; the production manager receives pressure from the PMC, whose main concern is attending to market needs and ensuring that the number of products manufactured is enough to fulfil the agreed quotas; and the QA department applies pressure to ensure that products are of the highest possible quality. Taking a broader view, the regional supply chain manager needs to be informed in a timely manner so as to revise the supply plan by shifting supply from the problematic plant to another plant. The decision to stop the production line is a difficult one for the production manager to make without sufficient data and evidence to support it. The causes of the problems can be summarized as follows:
. The traditional view has been that the production manager is fully responsible for all production activities, especially the decision to stop a production line to improve product quality.
. Owing to the lack of vital information, the production manager is unwilling to stop the line and bear the immediate consequences of lowered production quantity and hence increased unit production cost.
. He is inclined to take a chance and wait for the variations to go away without his direct intervention, preferring to delay the decision to stop the line.


In most situations, many people are involved in agreeing on making a "good" quality product. The question, however, is: "How can a compromise solution be found that meets the expectations of all stakeholders?" (Figure 3).

Literature review
Firstly, statistics is a collection of techniques for gathering information about a process or population and analysing the information contained in a sample from that population in order to make decisions (Montgomery, 2005). Statistical methods play a vital role in quality control and improvement. They provide the principal means by which a product is sampled, tested and evaluated, and the information in those data is used to control and improve the manufacturing process. Furthermore, statistics is the language in which development engineers, manufacturing, procurement, management and other functional components of the business communicate about quality.
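As a minimal illustration of the statistical monitoring discussed in this review, the sketch below derives control limits from historical in-control data and flags out-of-control points. It is not from the paper: the data are hypothetical, and the three-sigma limits are computed from the sample standard deviation rather than the moving-range estimate normally used for individuals charts.

```python
# Illustrative sketch: 3-sigma control limits for an individuals chart,
# with the basic Shewhart rule (a point beyond the control limits).
import statistics

def control_limits(samples):
    """Return (LCL, centre line, UCL) from historical in-control data."""
    centre = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(samples, lcl, ucl):
    """Return the indices of points that fall outside the control limits."""
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]  # in-control history
lcl, cl, ucl = control_limits(baseline)
new_batch = [10.0, 10.1, 12.5, 9.9]  # 12.5 simulates a special cause
print(out_of_control(new_batch, lcl, ucl))  # prints: [2]
```

In the paper's setting, such a flagged point would be the pre-defined exceptional signal forwarded to the knowledge holder, rather than triggering an automatic line stop.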

Figure 2. Distribution in grade of products made: Grade A 10 percent, Grade B 30 percent, Grade C 40 percent, Grade D 20 percent.

Figure 3. Dilemmas of the production manager in finding a compromising solution: the "stop production?" decision sits among the competing interests of the customer, sales and marketing, PMC, procurement, quality assurance, the supplier, the general manager and the plant manager.

SPC is an approach that uses statistical techniques to assist operators, supervisors and managers in managing quality and eliminating special causes of variability in a process (Oakland, 2003). SPC (Quality America Inc., 2006) is the primary analysis tool of quality improvement: the applied science that helps one collect, organize and interpret the wide variety of information available to a business. Whether applied to track revenues, billing errors or the dimensions of manufactured components, SPC can help one measure, understand and control the variables that affect a business process. Thus, SPC can be used to analyze the variation in whatever process is being measured (Montgomery, 2005). It has seven major tools:
(1) histogram or stem-and-leaf plot;
(2) check sheet;
(3) Pareto chart;
(4) cause-and-effect diagram;
(5) defect concentration diagram;
(6) scatter diagram; and
(7) control chart.
Although these seven tools are an important part of SPC, they cover only its technical aspects. SPC builds an environment in which all individuals in an organization seek continuous improvement in quality and productivity. This environment is best developed when management becomes involved in the process; once it is established, routine application of the seven tools becomes part of the norm of doing business, and the organization is well on its way to achieving its quality improvement objectives. The control chart measures how consistently the process is performing and alerts the user to whether or not an adjustment should be attempted. The SPC chart compares process performance with the customers' requirements, providing a process capability index as an ongoing, accurate direction for quality improvement (Montgomery, 2005). The control charts and the resulting process capability index quickly evaluate the

results of quality initiatives designed to improve process consistency. As part of an ongoing cycle of continuous process improvement, SPC can help fine-tune processes towards the virtually error-free six sigma level. The most important use of a control chart is to improve the process (Montgomery, 2005). The control chart surfaces assignable causes; management, operator and engineering action will usually be necessary to eliminate them. In identifying and eliminating assignable causes, it is important to find the underlying root cause of the problem and to attack it, and developing an effective system for corrective action is an essential component of an effective SPC implementation. The process improvement activity using the control chart is shown in Figure 4. SPC involves complex mathematics, but it is easy to do because computers are ideally suited to the task (Quality America Inc., 2006). They can be used to collect, organize and store information, perform the calculations, and present the results in easy-to-understand graphs called control charts. Computers accept information typed in manually, read from scanners or manufacturing machines, or imported from other computer databases. The resulting control charts can be examined in greater detail, incorporated into reports, or sent to users across the internet. A computer collecting information in real time can detect very subtle changes in a process and give warning in time to prevent process errors before they occur. SPC can help one better understand how to reduce the variation in any business process (Quality America Inc., 2006). Greater consistency in fulfilling the customer's requirements leads to greater customer satisfaction, and reduced variation in internal processes leads to less time and money spent on rework and waste. The resulting quality improvement can directly yield greater profitability and security for the business.
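The control-chart logic described above can be sketched in a few lines. The subgroup readings and the decision to use an X-bar chart with the standard A2 constant are illustrative assumptions, not the paper's implementation.

```python
import statistics

# Illustrative subgroups of five thickness readings (mm), one per sampling interval.
subgroups = [
    [6.0, 6.1, 5.9, 6.0, 6.2],
    [6.1, 6.0, 6.0, 5.9, 6.1],
    [6.3, 6.2, 6.4, 6.3, 6.5],   # a drifting subgroup
]

xbars = [statistics.mean(g) for g in subgroups]    # subgroup means
ranges = [max(g) - min(g) for g in subgroups]      # subgroup ranges

grand_mean = statistics.mean(xbars)
r_bar = statistics.mean(ranges)

A2 = 0.577  # standard X-bar chart constant for subgroup size n = 5
ucl = grand_mean + A2 * r_bar   # upper control limit
lcl = grand_mean - A2 * r_bar   # lower control limit

# Flag subgroups whose mean falls outside the control limits.
out_of_control = [i for i, x in enumerate(xbars) if not (lcl <= x <= ucl)]
```

Here the third subgroup's mean falls above the upper control limit, which is exactly the kind of signal that would prompt the search for an assignable cause.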
SPC is thus one of the essential tools for maintaining an advantage in today's competitive marketplace. Secondly, an artificial neural network (Stergiou and Siganos, 2006) can be defined as an information processing paradigm inspired by the way biological

Figure 4. Process improvement using the control chart (Source: Douglas, 2005) [flow diagram: input -> process -> output, with a measurement system that detects an assignable cause, identifies the root cause of the problem, implements corrective action, and verifies and follows up]


nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and this is true of ANNs as well. Based on Jen-Tong Sun (2003), much research has focused on finding the best combination of different types of ANN architecture and traditional process control methods to meet different manufacturing objectives, such as process improvement or process optimization, to ensure quality assurance. Such research digs only into the micro view of the problem. The macro view of the quality assurance problem, however, is how to develop a complete, self-learning system that stops an unfavorable situation from continuing, analyses the possible root causes, proposes a sequence of alternatives with the ability to learn how to improve the situation, and instructs the knowledge holder to take corrective actions. Since neural networks are best at identifying patterns or trends in data, they are well suited to prediction and forecasting needs including:
. sales forecasting;
. industrial process control;
. customer research;
. data validation;
. risk management; and
. target marketing.
Thirdly, with information and communication technologies (ICT), supply chain members play important roles in supply chain management: they share information with various parties through the use of ICT. Companies are trying out all possibilities in using ICT to increase their product values in terms of faster time and lower costs.
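The "learning by example" behaviour described for ANNs above can be illustrated with a single artificial neuron, a perceptron, which is far simpler than the networks the paper refers to; the AND pattern and learning rate are didactic choices, not the authors' design.

```python
# Train a single perceptron to reproduce the logical AND pattern by example.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # "synaptic" weights, adjusted during learning
b = 0.0          # bias term
lr = 1.0         # learning rate

def predict(x):
    # Fire (output 1) when the weighted sum of inputs exceeds the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few passes over the examples
    for x, target in samples:
        error = target - predict(x)      # the error drives the weight adjustment
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error
```

After training, the neuron has "learned" the pattern purely from the labelled examples, mirroring the synaptic-adjustment idea in the text.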
ICT applications can be classified into four levels: commercial application components, transaction-based components, commerce components and message services (MetricStream, Inc., 2006):
(1) Commercial application components. All business functions are included, with all the features needed to run a supply chain; examples of this type of application are ERP and logistics information systems.
(2) Transaction-based components. The focus is on transactions, in real time or in batch; examples are sales distribution systems, HR systems and call center systems.
(3) Commerce components. These are designed to help improve a critical area of a business model; examples are EDI programs and e-banking.
(4) Message services. Such ICT components are generic and can release information when an exception triggers them.
In manufacturing, companies are often concerned about product quality at the outsourcing parties. Information sharing and communication between these parties is

in high demand, especially real-time information sharing. In general, ICT provides information for controlling, monitoring and automating assembly lines, but in certain cases ICT can act as a tool for re-engineering business models. An ICT application can be run successfully on the basis of the message services it provides. The main objectives of an ICT-enabled message service are to:
. make data more understandable as valuable information, in a format presentable enough to fit the business need;
. reduce the cost of data interchange, where management is particularly focused on time delay;
. use a systematic approach to monitor a new process and improve or form a new value chain;
. form value chains by offering services and products to new customers; and
. exchange information among various sites.
Furthermore, a review of the literature on contract management in manufacturing (Chung et al., 2004) reflects the future of the firm in organizing work with best practice in a production network. With short product life-cycles, ever-changing technology and cut-throat price competition in today's electronics industry, companies are making contract manufacturing the core of success in their supply chain management. Outsourcing assembly lines and dispersed manufacturing can cut costs and give companies flexibility. However, managing quality in production with outsourced partners becomes difficult. Companies need the visibility to control product quality as early as possible, to avoid shipping defects to customers. The ideal approach would be to have quality control managers monitor the quality from each site: the earlier they have information about the quality of the components, the better positioned they are to adjust their final products in terms of pricing and marketing strategies.
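The exception-triggered message service described above can be sketched as a small function; the specification limit, the message format and the notification hook are illustrative assumptions, not part of the paper's system.

```python
# Minimal sketch of an ICT message service: release information only when
# an exception (an out-of-specification reading) triggers it.
SPEC_LIMIT_MM = 10.0   # assumed upper specification limit for the part

def message_service(site, readings, notify):
    """Scan sampled readings from one site; push an alert on any exception."""
    alerts = []
    for reading in readings:
        if reading > SPEC_LIMIT_MM:
            msg = f"[{site}] reading {reading}mm exceeds spec ({SPEC_LIMIT_MM}mm)"
            notify(msg)          # e.g. an SMS or e-mail gateway in a real system
            alerts.append(msg)
    return alerts

# Usage: collect messages in a list instead of sending them.
sent = []
message_service("Factory 1", [6.2, 11.0, 5.9], sent.append)
```

In-specification readings pass silently; only the exception produces a message, which is the defining property of a message-service component.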
This is particularly important in today's dispersed manufacturing (So and Chung, 2005), as companies cannot afford to have components shipped from offshore manufacturing sites without knowing their quality. When dealing with contractors, tight quality control is mandatory, but how to monitor and control it offsite is a major difficulty. Contract management plays a major role for manufacturers outsourcing activities to offshore factories. Typical concerns when handling contract management include:
. What is the quality of the components when they arrive?
. How many top-quality products will arrive at their sites?
. How much does it cost the companies to make these components out of the total raw material provided to the contractors?
. Who is the best outsourced partner?

Framework for building a decision support system to support production
The concepts involved in building a DSS to support production and ensure quality draw on the following knowledge domains:
. artificial neural networks (ANNs);
. rule-based DSS system (RBS);


. problem domain with knowledge management system (KMS); and
. quality function deployment (QFD).

Artificial neural networks (ANNs)
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, are widely used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be considered an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections for new situations of interest and answer "what if" questions. Other advantages of ANNs include:
. Adaptive learning. An ability to learn how to do tasks based on the data given for training or initial experience.
. Self-organization. An ANN can create its own organization or representation of the information it receives during learning.
. Real-time operation. ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured to take advantage of this capability.
. Fault tolerance via redundant information coding. Partial destruction of a network leads to a corresponding degradation of performance, but some network capabilities may be retained even with major network damage.

Rule-based system (RBS)
An expert system can be designed around a set of rules (Marakas, 1998) that determine what action is implemented when a certain situation is encountered. The rules can be provided by human experts or derived from mathematical formulas. A rule-based expert system is normally established by acquiring human experts' knowledge. It allows some areas of redundancy, overlap and even contradiction among rules to coexist. Even though this might increase the number of rules in the rule base, it gives great flexibility in building and maintaining a working rule set for the decision-making process.
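A rule base of the kind described can be sketched as an ordered list of (condition, action) pairs; the specific rules and the parties they name are illustrative assumptions, not the paper's rule set.

```python
# Each rule pairs a condition over the situation with the party and action to take.
# Overlapping rules are tolerated: the first matching rule fires.
rules = [
    (lambda s: s["signal"] == "out_of_spec",  "Production Manager: stop line"),
    (lambda s: s["signal"] == "cross_region", "Marketing Manager: review orders"),
    (lambda s: s["signal"] == "4_up_trend",   "QA Manager: investigate"),
]

def decide(situation):
    # Scan the rule base in order and return the first applicable action.
    for condition, action in rules:
        if condition(situation):
            return action
    return "No rule matched: escalate to General Manager"
```

Because the rules are plain data, human experts can add, reorder or contradict entries without touching the inference loop, which is the flexibility the text points to.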

Problem domain
Figure 5 shows the architecture of a KMS. A problem domain identifies the boundary that helps to create teams with the appropriate knowledge content to solve a problem (Gunasekaran et al., 2003); it is a key element of a KMS. This can be invaluable to an organization when a team of staff is networked together according to their roles, tasks and project deliverables, with access to an electronic library of best practices that they share and update on a project-by-project basis. A networked team with specific knowledge of the problem domain will be more effective in finding a solution than teams with general business knowledge but no specific knowledge of that domain.

Figure 5. Architecture of a KMS [diagram: domain knowledge drawn from the organization behaviour, production management, marketing & sales, strategic management, financial and information management disciplines feeds a DSS engine that takes a case/problem as input and delivers a solution to end users]

Quality function deployment (QFD)
The QFD approach is employed to relate different business processes, detect key problems, identify possible solutions and establish the importance of each alternative (Gianturco and Behravesh, 1988). The steps in the QFD approach are summarized as follows:
(1) Draw the relations diagram (Figure 6). The purpose of this diagram is to find an appropriate solution by clarifying the causal relationships in problems with complex, interrelated causes - in other words, to make the problem domain clear.
(2) Assign an influential weighting to each difficulty, ranging from 1 (least difficult) to 5 (most difficult); the higher the degree of difficulty, the larger the number assigned. This is a relatively subjective rating, which may differ with each company's situation and environment (Table I).
(3) Identify possible solutions to tackle the difficulties (Figure 7).
(4) Draw the matrix diagram (Figure 8). Its purpose is to clarify problems by analyzing them multi-dimensionally.
(5) Finally, formulate a strategy by implementing the ranked solutions to achieve good decision-making support for stopping the production line (Figure 9).

Proposed method
The aim in building an ICT message-service-based networked real-time DSS is for a manufacturer to monitor the outsourced lens tooling processes for quality control. The system is able to analyze the output data of a process and detect any output that exceeds the desired specifications, or any pattern showing that the process is going out of control. Such information can be tracked from the various outsourcing sites, each using an online SPC (Figure 10). The results are sent to the regional center using either the "push" method or the "pull" method. The regional production manager can view all the processes on screen and stay updated on quality during the production or tooling phase. If there



Figure 6. Relations diagram - difficulties associated with making a good decision to stop the production line [diagram: the central goal "to make a good decision to stop the production line" linked to interrelated difficulties such as passive response; costly, hard-to-use data collection; difficulty collecting up-to-date quality issues and supply situations across sites; lack of quality awareness; hi-end equipment; tight customer schedules; quality commitment; aged machines and facilities; poor customer service; unclear multi-site supply situations; marketing budget; supplier relationship; inconvenient purchasing channel; cooperation and reluctance to accept responsibility; machinery maintenance and materials planning schedules; internal communication; operation planning; unreliable delivery; unreliable incoming materials quality; an unclear problem domain; and machine problems]

Table I. Influential weighting

Difficulty                               Rating
Marketing                                4
Operation planning                       3
Unclear problem domain                   5
Supplier relationship                    2
Unclear multi-sites supply situation     3
Quality commitment                       5
Budget                                   1
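Steps (2)-(4) of the QFD approach, weighting each difficulty and scoring each candidate solution through the matrix diagram, can be sketched as below. The relationship strengths (strong = 9, medium = 3, weak = 1) follow common QFD practice, and the entries are illustrative, not the paper's matrix.

```python
# Influential weighting of each difficulty (step 2), as in Table I.
weights = {"Marketing": 4, "Operation planning": 3, "Unclear problem domain": 5}

# Relationship strength between each solution and each difficulty (step 4);
# strong = 9, medium = 3, weak = 1 is a common QFD convention.
matrix = {
    "DSS": {"Marketing": 3, "Operation planning": 9, "Unclear problem domain": 9},
    "SPC": {"Marketing": 1, "Operation planning": 9, "Unclear problem domain": 3},
}

# Total score per solution = sum of weight x strength; rank high to low (step 5).
totals = {
    sol: sum(weights[d] * strength for d, strength in rel.items())
    for sol, rel in matrix.items()
}
ranked = sorted(totals, key=totals.get, reverse=True)
```

The highest-scoring solution is implemented first, which is how the ranked list in the matrix diagram turns into a strategy.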

is an occasion when product quality goes out of specification or leans towards the out-of-bound region, an alert message is sent to the relevant parties, who can react to provide the earliest solution or offer alternatives to fix the problem. If the process is not declared "out of control", the process input is used for data analysis. Since each outsourcing site has its own set of unique data, the data are sent to the regional center for explanation. The company can then use this

Figure 7. Possible solutions derived from the relations diagram [mapping of solutions to difficulties:
. Rule-based DSS: internal communication; unclear problem domain; reluctance to accept responsibility; cooperation.
. Real-time SPC: lack of quality awareness; passive response; machine problems; unreliable incoming materials quality; tight customer schedule.
. Vendor relationship management: supplier relationship; inconvenient purchasing channel; unreliable supply delivery; unreliable incoming materials quality.
. Customer relationship management: tight customer schedule; dynamic market demand; poor customer service.
. Information and communication technology: difficulty collecting up-to-date quality data in different supply sites; difficulty collecting up-to-date information on different sites' supply situations.]

Figure 8. Matrix diagram to support making a good decision to stop the line [matrix relating each weighted difficulty to the candidate solutions (DSS, SPC, VRM, CRM, installment), with relationship strengths marked strong, medium or weak:

Difficulty (weight)                              Total score    Rank
Marketing (4)                                    18             1
Operation planning (3)                           12             4
Unclear problem domain (5)                       14             3
Unclear multi-site supplier relationship (2)     12             4
Quality commitment (5)                           15             2
Budget (1)                                       9              5

Solution totals: DSS 111 (rank 1), SPC 69 (rank 2), CRM 50 (rank 3), VRM 47 (rank 4), installment 9 (rank 5)]

measurement to perform an analysis of process performance for each of its outsourcing partners. If any signal shows that the situation is "out of control/specification", the output data are forwarded to the ANN system, which extracts patterns and detects trends, possible reasons or faults. After investigation,



the response from the ANN engine is input to the KMS to develop a list of alternatives to stop the situation getting worse. The list of alternatives is then input to the rule-based DSS to identify the most appropriate party to take the related actions, especially the stop-line decision, within the problem domain.

Design of the system
If the process is performing within the control limits, the process input is used for data analysis such as the Cpk calculation. Otherwise, the "out of control" signal triggers the DSS to initiate an investigation (Table II).

Features of the system
Whenever a signal from production shows an "out of control/specification" situation, the output data prompt the knowledge worker to detect possible reasons or faults. After investigation, the knowledge worker's response is input to the rule-based DSS, which identifies the most appropriate party to make the stop-line decision within the problem area.
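The Cpk calculation mentioned for in-control data can be sketched as follows; the specification limits and the sample readings are illustrative assumptions.

```python
import statistics

# Assumed specification limits for the monitored dimension (mm).
USL, LSL = 10.0, 5.0

def cpk(readings):
    """Process capability index computed from in-control sample data."""
    mu = statistics.mean(readings)
    sigma = statistics.stdev(readings)
    # Distance from the mean to the nearer specification limit, in units of 3 sigma.
    return min(USL - mu, mu - LSL) / (3 * sigma)

sample = [7.4, 7.6, 7.5, 7.3, 7.7, 7.5, 7.6, 7.4]
index = cpk(sample)
```

A higher index means the process variation sits comfortably inside the specification band; a value below about 1.33 would normally prompt improvement work.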

Figure 9. Proposed strategy to achieve a good line stop decision [flowchart from Start to End through steps including: set up project team; set up internet connection infrastructure; identify the SPC control parameters; define the rules in the DSS; implement CRM system; in-house training; make a good line stop decision]

Figure 10. Proposed process control and decision-making flow [flow diagram: user inputs and time-varying environment inputs (e.g. control limits, materials change, workmanship) feed the machine/controller and process; process outputs (mean) undergo measurement manipulation (SPC); if faults or out-of-range readings are detected, the data pass to the ANN and KM modules and then to the rule-based DSS, which decides who is suitable to make the stop line decision within the problem domain]

By implementing suitably designed ICT, the system can bring the control of a group of manufacturing units under one roof. An electronic network allows information sharing among the various parties: the ICT component of the system delivers a message service carrying all the sampled data from the outsourcing factories back to the manufacturer. Whenever an "out of specification" signal is encountered, a message is sent to the control office via the network, and the regional production manager and other parties in the regional office can react before things get out of hand. Furthermore, by using this networked ICT application, the control office has all the real-time SPC data and graphical displays available at any input site. The input SPC data are "pushed" to the control office, and the office can also "pull" data from any single site; the regional supply chain manager can view more data on the performance of each site or trigger a further investigation. Each outsourcing factory has a real-time SPC server with suitably designed ICT, and the regional office has a server connecting it to all remote sites, with functions to display graphical charts for all of them (Figure 11). The data are synchronized among the servers by replication at a user-predefined time interval. Replication keeps the same data stored on multiple systems in a distributed computing environment and allows the transfer tasks to be executed repeatedly. The servers located at the various factories use database replication methods to transfer the

Table II. Guideline of "out of control" signals

Out of control signal                                    Notes
Out of specification control                             The customer will demand extra compensation on discovery of Grade D products
Cross region control (CR)                                Product quality deteriorates from region A to B or from B to C
Four points up/down trend control within same region     The quality may be deteriorating but still "within" the control region

Figure 11. Diagram showing the use of the distributed ICT real-time SPC application [diagram: the regional supply chain office (General Manager, Regional SCM Manager, Sales Manager) runs an ICT application server with dashboard and multi-site display functions, connected over the internet to ICT application servers taking SPC input from the assembly lines of Outsource Factories 1, 2 and 3; active replication copies data from the factories to the regional center every 15 minutes, while passive replication lets the regional office request a specific set of data, e.g. by selection criteria over a certain time interval, from a factory]


latest input data to the server in the regional supply chain office. Using this method, data that do not exist on the requesting server are sent from another server. In the normal case, the input data are transferred to the regional supply chain office via active replication, in which the same request is processed at every replica so that data are sent to the regional office. In addition, for special needs, the regional supply chain office can send a request using passive replication, which is usually performed on a one-to-one basis between two servers. Passive replication should seldom be applied in this real-time distributed ICT system; one example would be a remote user trying to access the database offsite who needs the real-time data, in which case passive replication would be applied. Inside the system, a rule-based DSS deals with the decision-making processes by simulating human behavior. The system performs reasoning using pre-established rules in a well-defined domain: the authority to stop the production line is checked against the pre-defined rules to find and notify the most suitable decision maker within the problem domain.

Case study
The features of the system can be illustrated by walking through the following scenarios with a workable prototype (up to measurement manipulation):
. at the "Lapping" production process, part "FL-10000" is made by machine 1;
. a QC operator takes samples from the production line and the readings are captured into the system;
. an abnormal signal (RED color) is observed; and
. how should the production manager react to control the different "out-of-control" situations?

Scenario 1 - controlling the "out of specification" signal
Figure 12 shows an input screen displaying the "out of specification" out-of-control signal. The system takes the following actions:
. Show a "STOP by Policy" alert message and pop up a Production Manager password access window to be acknowledged by the Production Manager.
. The data are shown in RED to draw everyone's attention; once action is taken and updated in the system, the color changes from RED to GREEN.

Scenario 2 - controlling the "cross region" signal
Figure 13 shows an input screen displaying the "cross region" out-of-control signal. The system takes the following actions:
. SMS and e-mail the marketing manager and all related departments.
. The data are shown in RED to draw everyone's attention; once action is taken and updated in the system, the color changes from RED to GREEN.

Scenario 3 - controlling the "up/down trend" using the "4 up/4 down" method
Figure 14 shows an input screen displaying the "up/down trend" out-of-control signal. The system takes the following actions:

Figure 12. Input screen showing the "out of specification" out-of-control signal [screen: input of process, machine, parts, readings, customer order and QC badge; Region A: 5mm to 6mm, Region B: 6mm to 7.5mm, Region C: 7.5mm to 10mm, Region D: over 10mm; out of specification - the reading measured is 11mm]

Figure 13. Input screen showing the "cross region" out-of-control signal [screen: Regions A-D as in Figure 12; cross region - readings changed from 5.1mm to 6.2mm (from A to B)]

. Pop up a production manager password access window letting him select "STOP", "Investigate" or "Arrange meeting".
. The data are shown in RED to draw everyone's attention; once action is taken and updated in the system, the color changes from RED to GREEN.

Scenario 3 (continued) - what can the production manager do on a "4 UP" signal?
Figure 15 shows an input screen displaying the initial decision for the production manager when the "4 UP" signal is triggered.
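The three out-of-control checks walked through in these scenarios, out of specification, cross region, and four points up, can be sketched together. The region boundaries follow the input screens (A: 5-6mm, B: 6-7.5mm, C: 7.5-10mm, D: over 10mm); the rest of the logic is an illustrative sketch, not the prototype's code.

```python
def region(reading_mm):
    """Map a reading onto the grade regions shown on the input screens."""
    if reading_mm <= 6.0:
        return "A"
    if reading_mm <= 7.5:
        return "B"
    if reading_mm <= 10.0:
        return "C"
    return "D"   # over 10mm: out of specification

def signals(readings_mm):
    """Return the out-of-control signals raised by a run of readings."""
    raised = []
    regs = [region(r) for r in readings_mm]
    if "D" in regs:
        raised.append("out_of_spec")                      # scenario 1
    if any(a != b for a, b in zip(regs, regs[1:])):
        raised.append("cross_region")                     # scenario 2
    for i in range(len(readings_mm) - 3):
        window = readings_mm[i:i + 4]
        same_region = len({region(r) for r in window}) == 1
        if same_region and all(x < y for x, y in zip(window, window[1:])):
            raised.append("4_up_trend")                   # scenario 3
            break
    return raised
```

For example, a reading of 11mm raises the out-of-specification signal, a move from 5.1mm to 6.2mm raises the cross-region signal, and four rising readings inside region B raise the 4 UP signal.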



Figure 14. Input screen showing the "up/down trend" out-of-control signal [screen: Regions A-D as in Figure 12; 4 UP - quality deteriorating in four consecutive readings within the same region]

Figure 15. Input screen showing the initial decision for the production manager if the "4 UP" signal is triggered [screen: 4-points signal - the production manager can trigger an input screen to enter his initial decision for handling the situation]

The system takes the following action: the production manager can activate an input screen to make an initial decision - immediately stopping production, passing the decision to the knowledge holder, or holding a meeting first.

Scenario 3 (continued) - after a "4 UP/4 down" signal, what comes next after the initial decision?
Figure 16 shows an input screen to pseudo the ANN and KM modules when the "out-of-control" signal is triggered.


Figure 16. An input screen to pseudo the ANN and KM modules when the "out-of-control" signal is triggered [screen: 4-points signal - after the meeting, the QA manager assigns the final stop line decision to the knowledge holder]

After the meeting, the causes will have been identified and the knowledge holder takes a prompt decision to stop the production line.

Implemented results achieved
The use of SPC improves product quality during inspection; the UCL and LCL of the SPC chart set the tolerance on product variance. The application of the ANN in this DSS can quickly retrieve the causes of a failure and turn them into explicit knowledge, so that the company can avoid recurrences of the same errors. The use of ICT improves the communication channels to the various parties by SMS or e-mail via GPRS or the internet. Connecting the SPC systems of the various sites through the internet or a WAN enables regional management to monitor overall production quality (under lean manufacturing, dispersed manufacturing or outsourcing). The DSS gives management notification and relevant information on possible causes, so that they can respond immediately. Beyond monitoring product quality, users can expect this DSS to serve as a tool for finding the root causes of product variance: SPC performs and displays the inspection results, the ANN analyses them, and KM retains and reproduces the expert knowledge on the particular process from the early stage of its service. Problem-solving effort and time are drastically reduced. By analyzing the KMS data on the root causes of variance, the company can use a Pareto diagram to find the most frequent causes, and management can design preventive actions to avoid recurrences of the controllable factors, e.g. parts maintenance, temperature and staff skill. After the problem domain is defined and an effective quality system is clarified, the different parties involved have a clear responsibility regarding who will make the decision to stop the production line when something goes wrong. QA and


plant staff can quickly respond and take swift action to address any quality issues. Visual displays are installed at all production lines, and all control charts are visually displayed for any plant staff; visibility is increased, providing real-time quality measurements so that they can take fast remedial action when quality issues arise (Figure 17). A win-win situation is established: internally, the noise from the various departments is reduced because they share the responsibility for the decision to stop the production line; externally, customers are happier because they receive higher-quality products. Decision-making on quality-related issues is faster, since SPC is used and a clear problem domain, derived from TQM, is defined. Better key performance indicators changed the quality culture, encouraging everyone in the organization to continuously strive for higher quality standards. Within the organization, most people are now aware of the importance of SPC and enjoy the benefits of its use; there was a change in culture and attitude towards quality. Double-loop learning occurs when they change practices or review the governing variables in the whole operation of the organization - for example, every department shares in decisions about quality issues and becomes more effective and efficient in dealing with quality-related problems. A reduction in the defect and rejection rates contributed to a reduction in inventory level, and the theory of JIT, or zero stock, applies to planning on the production line.

Future research
Development of process performance tools
The ICT real-time process-monitoring system sends all SPC input data to a database in the central office. The regional production manager can retrieve these data to evaluate the process performance of each outsourcing partner, and can calculate and project the profit margin of a product, its cost of production and the defect rate at each outsourcing partner. The data can be presented in report format via OLE in major reporting tools or a spreadsheet. Such tools can enhance the sub-contract management capability of a company with a number of outsourcing activities across various manufacturing processes.

Empowering contract management
The outsourcing parties can be made aware that their process performance is being closely monitored by the manufacturer. With real-time contract management in place, vendor relationship management is enabled: both the contractor and the sub-contractors are motivated to maintain high product quality, even when the tiny but critical tooling process they perform is for a small component in a product.
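The Pareto step described under "Implemented results achieved", finding the most frequent root causes from the KMS records, can be sketched as below; the cause log is hypothetical.

```python
from collections import Counter

# Hypothetical root-cause log accumulated in the KMS for one process.
causes = ["worn tool", "temperature", "worn tool", "staff skill",
          "worn tool", "temperature", "material batch"]

counts = Counter(causes)
pareto = counts.most_common()         # most frequent causes first

# Cumulative percentage, as read off the curve of a Pareto diagram.
total = sum(counts.values())
cumulative = []
running = 0
for cause, n in pareto:
    running += n
    cumulative.append((cause, round(100 * running / total)))
```

The head of the list identifies the "vital few" causes on which preventive actions should be concentrated.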
The data can be presented in report format, via OLE, in major reporting tools or a spreadsheet. Such tools can enhance the subcontract management capability of a company whose outsourcing activities span various manufacturing processes.

Empowering contract management. The outsourcing parties can be made aware that their process performance is being closely monitored by the manufacturer. With real-time contract management in place, vendor relationship management is enabled: both the contractor and the subcontractors are motivated to maintain high product quality, even when the tooling process concerned is a tiny but critical step for a small component in a product.

Figure 17. Results achieved after the implementation of the DSS system: increased visibility, prompt decision making and a win-win situation
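The real-time SPC monitoring described here ultimately reduces to computing control limits from a stable reference period and flagging measurements that fall outside them. The sketch below illustrates the idea only; it is not the paper's implementation, and it uses the sample standard deviation as a simplified stand-in for the moving-range estimate normally used for individuals charts.

```python
from statistics import mean, stdev

def control_limits(samples, sigma=3):
    """Shewhart-style chart limits from in-control reference samples."""
    centre = mean(samples)
    spread = stdev(samples)  # simplified; classical charts use moving ranges
    return centre - sigma * spread, centre, centre + sigma * spread

def out_of_control(measurements, lcl, ucl):
    """Return (index, value) pairs that fall outside the control limits."""
    return [(i, x) for i, x in enumerate(measurements) if not lcl <= x <= ucl]

# Reference data from a stable period, then live readings from one line.
reference = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
live = [10.0, 10.1, 12.5, 9.9]

lcl, centre, ucl = control_limits(reference)
alarms = out_of_control(live, lcl, ucl)  # the 12.5 reading is flagged
```

In the system described, each flagged point would trigger a message to the relevant parties rather than merely being collected in a list.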

Integration with ERP
The defect ratio and scrap figures are important to the procurement, purchasing and accounting departments. Integrating the data into an ERP system would help the whole organization better understand where to improve, with the goal of creating innovative processes and reducing costs so that overall productivity can be increased. Procurement can use the scrap figures to justify purchase orders and to choose the ordering methods that best fit their needs, keeping raw material and inventory at their lowest.

Conclusion
Quality assurance in production is a key factor in supply chain management, and outsourcing various manufacturing processes is a vital strategy in today's supply chains. A manufacturing company may lose control of quality assurance when there are delays in reporting. To compensate, in regional supply situations, a DSS with an ICT message-service network is essential to support the operation of various outsourcing posts through a series of online SPC systems. This concept of a DSS with an ICT message-service network was tested in a small company manufacturing and tooling CCD micro lenses, with concerns about production quality. The company uses the system to control product quality on its various production lines; test results are entered manually and shown as output containing messages about the reasons for production variance. The decision makers in the control centre can see when products go out of bounds, or when production diverges from normal specifications, at any of the company's factories in different parts of the world. The networked system gives all parties a chance to play a responsible role in quality control: the dispersed manufacturing partners, the site production manager, marketing and sales, the PMC manager and even the General Manager all become monitors of production quality.
When production diverges from specification, the relevant parties are notified immediately so that they can decide how to react. The preset rules define which managers should be informed and what kinds of action they should be able to take if necessary. Moreover, messages can be sent back immediately to the related parties in the regional centre. Because the system runs over the internet, the total cost of ownership is relatively low, and there is effectively no geographic limit and little delay in message and data transfer. This networked DSS for quality control in a production network is application software developed by a third party to support the company's quality control method. The combination of the SPC method with a DSS and ICT technology segments the responsibilities for production quality: the system clearly determines which parties in the problem domain need to be notified to act on a given signal. With all internal parties aware of the quality required, its application extends to the external parties: customers on the demand side and suppliers on the supply side. Eventually, with all parties participating in quality control, high product quality, high customer satisfaction and good vendor relationships can be expected. In addition, by utilizing the data from the various outposts, the networked system can act as a dashboard for the regional production manager to evaluate the performance of different outsourcing partners. More importantly, the company's image, profit margin and market share can be maintained.
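The preset notification rules described above can be pictured as a small lookup from signal type to the roles that must be messaged. The signal names and roles below are illustrative assumptions; the paper does not publish the actual rule table.

```python
# Hypothetical escalation table: SPC signal type -> roles to notify.
# The paper describes preset rules but not their contents, so these
# mappings are invented for illustration.
ESCALATION_RULES = {
    "warning": ["site production manager"],
    "out_of_control": ["site production manager", "regional production manager"],
    "line_stop": ["site production manager", "regional production manager",
                  "PMC manager", "general manager"],
}

def recipients(signal):
    """Look up who should be messaged for a given SPC signal."""
    return ESCALATION_RULES.get(signal, [])
```

A real deployment would attach a delivery channel (e-mail, SMS, and so on) to each role; the point here is only that the routing decision is a static, auditable table rather than ad hoc judgment.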


Corresponding author Walter W.C. Chung can be contacted at: [email protected]




Total acquisition cost of overseas outsourcing/sourcing: a framework and a case study


Ninghua Song, Ken Platts and David Bance Department for Engineering, Institute for Manufacturing, University of Cambridge, Cambridge, UK

Received August 2006 Revised January 2007 Accepted April 2007

Abstract
Purpose – The purpose of this paper is to develop a framework of the total acquisition cost of overseas outsourcing/sourcing in the manufacturing industry. This framework contains categorized cost items that may occur during the overseas outsourcing/sourcing process.
Design/methodology/approach – First, interviews were carried out with practitioners who have experience of overseas outsourcing/sourcing, in order to obtain inputs from industry. The framework was then built up from the combined inputs of the literature and the practitioners. Finally, the framework was tested by a case study in a multinational high-tech manufacturer, to establish both its feasibility and usability.
Findings – A practical barrier to implementing this framework is a shortage of information. The predictability of the cost items in the framework varies. How to deal with the trade-off between accuracy and applicability is a problem that future research needs to solve.
Originality/value – There are always limits to the generalizations that can be made from a single case. Despite these limitations, however, this case study is believed to show the general requirement of modelling the uncertainty and dealing with the dilemma between accuracy and applicability in practice.
Keywords Modelling, Outsourcing, Purchasing, Manufacturing systems, Cost effectiveness
Paper type Research paper

Journal of Manufacturing Technology Management
Vol. 18 No. 7, 2007, pp. 858-875
© Emerald Group Publishing Limited 1741-038X
DOI 10.1108/17410380710817291

Introduction
Overseas outsourcing can be costly. Recently, many UK manufacturers have transferred production to low-cost regions around the world, including Mexico, India and the People’s Republic of China (PRC). Among the various motives for these international outsourcing/sourcing projects, cost effectiveness is the one most frequently mentioned. However, the cost savings may not be as great as they seem (Gilley and Rasheed, 2000). Although cheaper labour can be obtained, other extra expenses also arise. Furthermore, the greater the distance between the host location and the outsourcer, the greater the uncertainties and risks. These uncertainties and risks may lead to unexpected costs that offset the gains from cheaper labour, or even result in losses to the outsourcer. In order to have a complete picture of all the potential costs of offshore outsourcing projects, companies should adopt a total cost model. However, the systematic use of cost measurement in outsourcing is quite rare in practice (Lindholm and Suomala, 2004), and the total cost of outsourcing is not widely addressed in the academic literature. As Ellram and Maltz (1995, p. 58) state,

“Most research done on how the outsourcing decision is or should be made focuses at a macro level … Yet little research has been done to explore how cost is determined in outsourcing decisions”. This is still the case in the overseas outsourcing/sourcing field, and this research therefore aims to bridge that gap.

Literature review
In transaction cost theory and the general purchasing literature, several approaches have been proposed for modelling the costs that occur during the general purchasing or sourcing process. Transaction cost economics (Coase, 1937; Williamson, 1975) is the earliest theory addressing this topic. Although many attempts have been made to operationalise transaction cost theory, its complexity means they have only been able to address a few specific transactions (Calfee and Rubin, 1993). From the late 1980s, researchers in purchasing began to propose making the supplier selection decision from a total cost point of view, instead of focusing only on the purchase price. Monczka and Trecha (1988) introduce the supplier performance index: the sum of the purchase price and the non-performance cost, divided by the purchase price. “Non-performance cost” refers to the additional costs incurred by the buying organization to correct deficiencies when a supplier fails to meet delivery, quality and price requirements. Since the early 1990s, several authors have proposed adopting the concept of total cost of ownership (TCO) in the purchasing field. Smytka and Clemens (1993) divide the total costs into two categories: external and internal. The external costs include price, discount terms, ordering costs, transportation, supplier visits, tooling and technical support. The internal costs include inventory cost, delivery expediting cost, line-down cost and non-conformance cost.
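The supplier performance index is straightforward to compute from these two quantities. A minimal sketch with invented figures (neither the suppliers nor the numbers come from the literature cited):

```python
def supplier_performance_index(purchase_price, nonperformance_cost):
    """SPI = (purchase price + non-performance cost) / purchase price.
    An SPI of 1.0 means no non-performance cost; higher is worse."""
    return (purchase_price + nonperformance_cost) / purchase_price

# Illustrative: a nominally cheaper supplier can carry a worse SPI,
# and a higher effective total cost.
spi_a = supplier_performance_index(100_000, 5_000)   # 1.05
spi_b = supplier_performance_index(95_000, 14_250)   # 1.15
```

Here supplier B quotes the lower price (95,000 against 100,000) but, once non-performance costs are added, costs more in total (109,250 against 105,000) — exactly the distortion that price-only supplier selection hides.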
Carr and Ittner (1992) contend that suppliers’ value should be defined in terms of the costs of purchasing, holding inventory, poor quality and delivery failure. From another perspective, Ellram (1993) proposes a framework that divides the cost components into pre-transaction, transaction and post-transaction categories. In addition, Ellram argues that there are several barriers to applying TCO in practice, including the lack of readily available accounting and costing data, owing to the allocation mechanisms of traditional cost accounting systems. Combining TCO with activity-based costing, Degraeve et al. (2005) develop a framework of activities, resources consumed and resource drivers for allocating the relevant costs, according to the value chain and purchasing activities. In contrast to TCO, life cycle costing (LCC) (Taylor, 1981; Lindholm and Suomala, 2005; Woodward, 1997; Emblemsvåg, 2003) emphasizes estimating costs on a whole-life-cycle basis and monitoring the incurred cost through a product’s life cycle. Additionally, LCC treats costs as time-dependent variables and stresses dealing with uncertainty and risk. Different again from TCO and LCC, zero base pricing (Burt et al., 1990) focuses heavily on understanding the supplier’s cost structure. The above literature addresses sourcing activities in general; there are also contributions specific to overseas sourcing. In the fields of global outsourcing/sourcing and foreign direct investment (FDI), much academic effort has gone into entry mode selection (Anderson and Gatignon, 1986; Burpitt and Rondinelli, 2004; Elango and Sambharya, 2004), the effect of the protection of intellectual property rights (IPR) on global sourcing and FDI (Javorcik, 2004; Seyoum, 1996; Smith, 2001), and


location selection (Pongpanich, 2000; Burpitt and Rondinelli, 2004; Chadee et al., 2003; Coughlin et al., 1991). Some authors address the performance consequences of global sourcing. For example, Min et al. (1994) underline the risks and complexities involved in the global sourcing process compared with sourcing from a domestic supplier, including the political atmosphere, tariff barriers, variations in ethical and quality standards, currency exchange rates, and cultural and communication obstacles. Rangan (2000) argues that, while a foreign supplier’s production cost levels will generally be lower, and the quality of its products may or may not be higher, the transaction cost levels attached to international transactions are almost invariably higher. Mol et al. (2005) state that international outsourcing is a balancing act between lower production costs abroad and lower transaction costs locally. Kotabe and Murray (2004) explore the potential limitations and negative consequences of global outsourcing, and mention the gradual loss of design and manufacturing abilities caused by heavy dependence on global suppliers on an arm’s-length basis. However, the topic of modelling the total cost of global sourcing has not been widely addressed.

Methodology
In order to develop a total cost framework for overseas outsourcing/sourcing practice, this research combined inputs from the literature and from practitioners. A review of the general purchasing literature, and a specific review of the global outsourcing/sourcing literature, provided an initial list of costs. Interviews with practitioners from seven manufacturing companies and a consulting company, all with experience of overseas outsourcing or sourcing, provided the practitioner input. The interviews were semi-structured.
Each discussion was guided by pre-designed questions, but frequent departures from the agenda were made in order to explore new and particularly interesting points raised in the course of each interview. Based on the inputs from both the academic literature and the interviews, a framework of the total cost of overseas outsourcing or sourcing was developed. The framework was then tested by a case study in a multinational high-tech manufacturer to establish both its feasibility and usability (Platts, 1993).

Interviews with industrial practitioners and framework development
The practitioner interviews explored: their general perceptions of offshoring; how they estimated offshoring costs; how they handled risk and uncertainty during cost estimation; how they measured and controlled the offshoring costs; whether they had any disappointments or shocks during the offshoring process; what costs were under- or overestimated; and what their requirements or suggestions for the cost model were. The main contributions from these interviews to the total cost framework are summarized as follows:
- The cost of sending UK managers overseas. According to a consultant, this is usually double the cost of hiring managers in the UK. Related costs include travel expenses, health insurance, international school fees for their children and so on.
- Changes in payment terms. In some low-cost regions, such as developing countries in Eastern Europe and Asia, payment terms may be only 5-15 days, much shorter than the typical 30-60 days in the UK. In three practitioners' experience, some manufacturers will not even begin production until they are paid, fully or partly.
- Counterfeit products. For example, one case company was producing shampoo in the PRC and used a supplier to produce the shampoo bottles. Later, they found that the supplier had provided the same bottles to a smaller producer, who produced counterfeit shampoo under the case company's brand.
- Infrastructure (energy, steel, transportation, etc.). For example, in a consultant's experience there is huge diversity between regions of the PRC: around the Yangzi River Delta and the Pearl River Delta the infrastructure is satisfactory, but labour costs are much higher than in other regions.
- Travel expenses between the two countries. According to the seven case companies' experience, this item of ongoing expenditure is the one usually underestimated.
- Increasing overheads for telephone calls, faxes and video conferences.
- Corruption in local government. One case company decided to select a supplier in a less developed country; it turned out that they had to select the one recommended by the local government, because the local government was a heavy investor in that supplier.
- IPR protection. Because the legal systems for IPR protection in some less developed countries are immature, or not enforced, companies may well be unable to obtain satisfactory results through the courts. To avoid this risk, six of the interviewed companies did not let any one supplier get hold of the whole manufacturing process: they divided the manufacturing processes among many suppliers, kept the important processes on their own sites, or finished the manufacturing in a country where IPR law is enforceable.
- Risk of currency fluctuation.
- Culture and language issues. According to the seven case companies, developing a relationship is essential when dealing with Chinese partners. To do business with a Chinese supplier, it is necessary first to be introduced, to become familiar through meeting socially and to build up trust, and only then to begin to talk about business; it is usual to give gifts to build up the relationship. Furthermore, language difficulties can obstruct effective communication. One case company hesitates to communicate with their Chinese supplier by telephone, because it sometimes leads to misunderstanding. Some practitioners said that Asian people say “yes” to show that they are listening, not necessarily to show that they have understood entirely or are giving a positive answer; sometimes people are simply reluctant to say that they do not understand. One interviewed managing director said that each time she writes e-mails to her Chinese supplier, she covers only one issue per e-mail, writing several e-mails to address all the problems, in order to avoid misunderstanding. One case company complained that their local Chinese employees were reluctant to tell headquarters when there were problems, and also hesitated to tell suppliers directly that their quality would not be acceptable.
- Finding the right partner/supplier usually takes longer than expected, and the costs of dealing with the wrong partners are usually underestimated. As one interviewed global sourcing manager put it: “suppliers may not have the skills and experience they claimed and cannot control the manufacturing process effectively. They may manage to replicate the sample as required, but in full production, products are of such low quality that they cannot be accepted”.
- The learning curve for quality improvement. In one case company's experience: “The initial quality was okay, but our supplier worked hard to improve it. Six months later, it got even better than other European factories”.

Based on the inputs from both the academic literature and the preliminary interviews, a framework of the total cost of overseas outsourcing or sourcing was developed (Table I). In this framework, the cost items are classified into six categories. Additionally, each cost item is classified as either a “one-off” or an “ongoing” cost, which helps in estimating the internal rate of return, payback time and net present value of offshoring projects, to support decision making.
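The one-off/ongoing split supports exactly this kind of appraisal: treat the one-off items as an up-front outlay and the net ongoing savings as an annual cash flow. A sketch with invented figures (not taken from the paper's case):

```python
def npv(one_off_cost, annual_net_saving, years, discount_rate):
    """Net present value of an offshoring project: an up-front cost now,
    then a constant net saving at the end of each of `years` years."""
    return -one_off_cost + sum(
        annual_net_saving / (1 + discount_rate) ** t for t in range(1, years + 1)
    )

def payback_years(one_off_cost, annual_net_saving):
    """Simple (undiscounted) payback time in years."""
    return one_off_cost / annual_net_saving

# Invented figures: 120k GBP of one-off set-up costs, 50k GBP net annual saving.
project_npv = npv(120_000, 50_000, years=5, discount_rate=0.10)
payback = payback_years(120_000, 50_000)  # 2.4 years
```

In practice the "annual net saving" would itself be the sum over the framework's ongoing categories (price, administration, logistics, quality, supplier management), with the one-off categories forming the outlay.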

Table I. Framework of the total cost of overseas outsourcing/sourcing

Information collection, supplier selection and negotiation (one-off):
- Gather information and codify knowledge of the process transferred
- Package the process for IP protection
- Modify and pilot the process outsourced or re-sourced (including modification due to different climate)
- Search for and visit supplier
- Quality audit cost
- Tooling cost
- Negotiation with supplier
- Add supplier to internal IT system
- Invest in suppliers' IT systems (e.g. MRP, ERP, TCM, etc.)

Price (ongoing):
- Price (knowing supplier's cost structure)
- Discount term
- Tax and duty
- Benefit from payment terms changes
- Currency fluctuation

Administration (ongoing):
- Ordering process
- Payment/billing process

Logistics and inventory (ongoing):
- Transportation
- Expediting
- Lost sales owing to late deliveries
- Holding and administrative costs related to early delivery
- Receiving (including moving heavier packaging for shipment protection)
- Inspection
- Holding inventory (heating costs, warehouse maintenance, etc.)
- Insurance
- Obsolescence
- Capital charge for keeping inventory

Quality issue (ongoing):
- Rejection, return and re-receiving
- Defective material disposition
- Rework
- Scrap
- Line down
- Repackaging
- Retesting
- Warranties and customer complaint handling
- Outgoing credit note
- Loss of brand reputation
- Learning curve for quality improvement should be considered

Supplier management (ongoing):
- Supplier training and technical support
- Co-operation with supplier for innovation
- Update forecast and convey it to suppliers
- Performance review and meeting
- Renegotiation
- Costs of phone calls, faxes, video conferences
- Litigation
- Impact on residual supply from the previous supplier

Other costs:
- Personnel recruitment and training (one-off)
- Send staff to work abroad (one-off and ongoing)
- Get rid of redundant capacity and labour (one-off)
- Dealing with inferior infrastructure (one-off and ongoing)
- Special regulations from local government (environmental policy, working hours and pattern restriction, etc.) (ongoing)
- Culture and language issues (additional costs for training and re-designing of job processes, performance measurement system, etc.) (ongoing)
- Cost of dealing with counterfeit products (one-off and ongoing)
- Loss from IPR infringement
- Dealing with corruption of local government (ongoing)

Sources: (1) Academic literature: Smytka and Clemens (1993), Carr and Ittner (1992), Ellram (1993), Degraeve et al. (2005), Monczka and Trecha (1988), Burt et al. (1990), Grant (1999), Minshall (1999), Pongpanich (2000) and Min et al. (1994). (2) Interviews with industrial practitioners

A case study
A case study was carried out in Company A to test the framework in practice: to determine whether it was comprehensive (did it cover all the costs experienced in practice?) and usable (was it possible to find the necessary information for quantifying the cost items?). Company A is a high-tech multinational manufacturer. Its turnover is between £150m and £200m, and it has 1,700 employees worldwide. For one of its products, Company A has been buying in kits and parts from more than 100 suppliers and assembling them in its own factory. Currently, 95 per cent of its suppliers are in the UK. In the future, Company A plans to re-source 80 per cent of its components from the PRC. As a pilot for this project, Company A has re-sourced one of its product components, cabinets, from the PRC since October 2004. In estimating the cost savings of this re-sourcing project, Company A considered only the direct cost


items, including price, freight, duty and tax. Other cost items in the proposed total cost model were not considered; hence, it is unclear how much money has actually been saved by the project. This research therefore aims to estimate the total cost saving of the cabinet re-sourcing project during its first year. Archival records and semi-structured interviews with managers and senior managers in the relevant departments were used to estimate the total cost reduction on the basis of the total cost framework proposed by this research.

Logistics flows before and after the cabinet re-sourcing project. The logistics flows of cabinets before and after the PRC sourcing project are shown in Figures 1 and 2. Before October 2004, the supplier of cabinets was in the UK. Annually, this supplier supplied the factory in the UK with about 8,000 cabinets and the factory in the PRC with about 4,000. For the factory in the UK, the supplier came, checked the stocks and delivered the next day. For the factory in the PRC, cabinets were sent by ship once a week, with a transportation time of six weeks; about two weeks' stock was held at the PRC factory and six weeks' stock was held in transit. Now the supplier of cabinets is in the PRC, supplying the same products in the same quantities as before. For the factory in the PRC, the supplier delivers cabinets twice a week by road. For the factory in the UK, cabinets are usually transported by ship, once a week with a six-week lead time; air transport is used only in emergencies. A hub (a third-party warehouse) is hired to receive and store products from the PRC and send them to Company A's UK factory every day.

Total cost savings of the cabinet overseas sourcing project. The developed total cost framework (Table I) was used to identify the costs incurred by Company A in the overseas sourcing of the cabinet; each element in the framework was addressed. The total cost savings were estimated by comparing the situations before and after Company A's overseas sourcing project. Table II summarizes the savings (or additional costs) for each item in the framework and shows the assumptions underlying the calculations.

Figure 1. Logistics flow of cabinets before PRC re-sourcing: the UK supplier delivered daily by road to the UK factory (about 8,000 units annually) and weekly by ship, with a six-week lead time, to the PRC factory (about 4,000 units annually)

Figure 2. Logistics flow after PRC re-sourcing: the PRC supplier delivers twice a week by road to the PRC factory (about 4,000 units annually) and weekly by ship, with a six-week lead time, to a hub (third-party warehouse), which delivers daily by road to the UK factory (about 8,000 units annually)
Table II. Total cost savings of the cabinet PRC re-sourcing project. (The table lists the calculation assumption behind each item; the original table also rates the predictability of each item as high, medium or low.)

Information collection, supplier selection and negotiation:
- Gather information and codify knowledge of the process transferred: not occurred
- Package the process for IPR protection: not occurred
- Modify and pilot the process outsourced or re-sourced (including modification due to different climate): standard cost in Company A (ECO cost), 3,600 GBP
- Search for and visit supplier, and negotiation with supplier: travel expenditure per trip (including air ticket and accommodation) × number of trips (7,500 GBP); people's time cost, i.e. wage per month × time devoted to this project (14,100 GBP)
- Quality audit cost: sample inspection and validation cost, i.e. standard cost per hour × time spent (1,125 GBP)
- Tooling cost: paid by the supplier
- Add supplier to internal IT system: included in the ECO cost
- Invest in suppliers' IT systems: supplier purchased the IT system

Extended price:
- Savings from price (CIF), tax and duty: −388,000 GBP (saving)
- Payment terms changes: benefit currently = amount owing to supplier × current payment term × capital cost rate; benefit previously = amount owing to supplier × previous payment term × capital cost rate; cost saving from payment term changes = benefit currently − benefit previously (−18,200 GBP, saving)
- Currency fluctuation: based on the first month's (October 2004) exchange rate, the sum of the gains and losses of each month (November 2004 to September 2005) due to exchange rate fluctuation (−27,045 GBP, saving)

Administration cost:
- Ordering and billing process: comparison of the amount of the finance staff's work; the amount of work has not changed

Logistics and inventory:
- Payment to the third-party warehouse (hub), beyond budget: 195,000 − 123,000 = 72,000 GBP
- Transportation: included in the CIF price and the payment to the hub
- Expediting: cost of transportation by air (3,000 GBP)
- Lost sales owing to late deliveries: not occurred
- Holding and administrative costs related to early delivery: not occurred
- Receiving (including moving heavier packaging for shipment protection): included in the payment to the hub
- Inspection: no inspection in Company A's factory
- Holding inventory (heating costs, warehouse maintenance, etc.): included in the payment to the hub
- Insurance: included in the payment to the hub
- Obsolescence: not occurred
- Capital charge for keeping inventory: (current average monetary value of inventory − previous value) × capital cost rate (12,840 GBP)

Quality failure cost:
- Quality failure cost: no additional cost compared with before
- Rework of cabinets because of wrong drawings: rework cost per cabinet × number of cabinets reworked (9,000 GBP)

Supplier management cost:
- Supplier training and technical support: no additional cost compared with before, because of the existence of an engineering support group in the PRC factory
- Co-operation with supplier for innovation: not occurred
- Update forecast and convey it to suppliers: included in the communication cost
- Performance review and meeting: travel expenditure each time × number of meetings (2,400 GBP)
- Renegotiation: not occurred
- Costs of phone calls, faxes and video conferences: video conference between the sourcing director in China (who manages the local supplier) and the sourcing manager in the UK, i.e. rate per hour × duration of each conference × number of conferences per year (2,800 GBP)
- Litigation: not occurred
- Impact on residual supply from the previous supplier: no impact in this case

Other costs:
- Personnel recruitment and training: 670 GBP
- Send staff to work abroad: not occurred
- Get rid of redundant capacity and labour: not occurred
- Infrastructure: no additional cost, because of the location selected
- Special regulation from local government: no additional cost, because of the location selected
- Culture and language: no additional cost, because a British wholly owned supplier was selected
- Loss because of counterfeit products or IPR infringement: not occurred
- Dealing with local government's corruption: not occurred


the calculations. Table II also indicates the degree of uncertainty associated with each cost. For example, the cost change due to a contracted change in payment terms can be accurately predicted, whereas the cost change due to currency fluctuations is far less predictable.

Information collection, supplier selection and negotiation. In Company A, the costs related to supplier selection and negotiation were not directly available, because they were treated as general overhead and were not recorded at the project level. This research estimated these costs by talking with Company A's buyers and by referring to documentation such as the supplier selection procedure, the quality audit procedure and engineering change orders (ECOs).

Search for supplier. First, quality engineers specify a clear capability requirement. Based on this requirement, the buyer in Company A generates a long list of potential suppliers, from which a short list is created and one or two suppliers are chosen to be visited. However, because cabinet sourcing was Company A's first sourcing project in PRC, the relevant knowledge was limited. The first two to three suppliers were therefore identified by asking which suppliers other relevant companies were using. It took a buyer about one month to select the final supplier to visit.

Quality audit and supplier visit. This process took another two months. During it, drawings and a sample were provided to the supplier. A quality engineer was involved throughout to ensure that the supplier had the capability to provide products of reliable and consistent quality. A senior sourcing manager also spent a week visiting the supplier.

Terms and conditions negotiation. With little PRC sourcing experience, it took Company A three months to negotiate terms, considerably longer than negotiating with its European suppliers. Price, volume, currency fluctuation, legislation and frequency of delivery were discussed.
Tooling cost was paid by the suppliers. Overall, the costs incurred during the above processes comprised travel expenditure, accommodation, the cost of staff time and quality auditing costs. Travel expenditure covered five trips to PRC: two by the buyer, two by the engineer and one by the senior sourcing manager, at an average of 1,000 GBP for flights and 500 GBP for accommodation per trip. Overall, the travel expenditure was 7,500 GBP. Staff time cost was calculated by multiplying the hourly rate by the time spent on the project. The buyer's wage is 28,000 GBP per year, the senior sourcing manager's 54,000 GBP per year and the engineer's 31,000 GBP per year. Overall, staff cost was 14,100 GBP. The cost of the quality audit comprised sample inspection and validation. In Company A, there are standard hourly rates for validation (25 GBP/hour) and inspection (15 GBP/hour), derived from the average cost of engineers' wages per hour and the cost of the equipment involved. The quality validation and inspection cost was therefore 1,125 GBP.

Gather and codify information and add new supplier to IT system. In Company A, each time a supplier is changed or a drawing is modified, the ECO process is launched. All changes are reported by ECO and communicated through the whole company; updating the IT system is incorporated in the ECO process. The average cost for an ECO is 1,500-1,800 GBP, comprising the cost of people's time. In this project there were two ECOs, one for supplier validation and one for changing the design to correct a wrong drawing, at a total cost of 3,600 GBP.
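The one-off selection and negotiation arithmetic can be rolled up in a few lines. The trip costs and standard audit rates come from the text; the split of validation and inspection hours is an assumption chosen only to price to the reported 1,125 GBP audit total, and the staff-time figure is carried as reported because the underlying hours are not disclosed.

```python
# One-off costs of supplier selection and negotiation (Company A case).
# Per-trip costs and hourly audit rates are taken from the case text;
# the audit-hour split is an assumption chosen to match the reported total.

TRIPS = 5                               # 2 buyer + 2 engineer + 1 senior sourcing manager
FLIGHT_GBP, HOTEL_GBP = 1_000, 500      # average per trip

travel = TRIPS * (FLIGHT_GBP + HOTEL_GBP)            # 7,500 GBP

VALIDATION_RATE, INSPECTION_RATE = 25, 15            # GBP per hour, standard rates
validation_hours, inspection_hours = 30, 25          # assumed split (prices to 1,125 GBP)
audit = VALIDATION_RATE * validation_hours + INSPECTION_RATE * inspection_hours

staff_time = 14_100                                  # GBP, reported (hours not disclosed)

total_one_off = travel + audit + staff_time
print(travel, audit, total_one_off)                  # 7500 1125 22725
```

The 22,725 GBP roll-up excludes the two ECOs (3,600 GBP), which the case accounts for separately.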

Supplier's IT system. Company A needs its suppliers to have a compatible design system in order to share engineering drawings, and asked the supplier to purchase an appropriate system.

Intellectual property (IP) protection. For the cabinet, Company A has taken no action to protect the IP, because the cabinet is thought to have nothing to do with its core competence (the printing head technique). However, Company A has more than 40 competitors in PRC. A competitor may ask the supplier to provide the same cabinet for its own products, or copy Company A's cabinet and then compete with Company A. If Company A takes no action to protect the cabinet IP, it risks losing IP and, in the worst case, losing local market share.

Price. In this category of costs, the only ones formally recorded in Company A are price, tax and duty. Other costs were either not considered by Company A or subjectively judged unchanged. The CIF (cost, insurance and freight) price is paid to the Chinese supplier, who is hence in charge of transportation. The annual sum of price (CIF), tax and duty is 1,488,000 GBP. When sourcing from the UK supplier, the annual purchasing price was 1,876,000 GBP. The direct cost saving (price, transportation, tax and duty) from the China sourcing project is, therefore, 388,000 GBP annually.

The payment terms. The terms changed from 30 days for both the UK and PRC factories to 60 days for the factory in UK and 45 days for the factory in PRC. This change in payment terms was valued as interest on capital: according to economic value added (Stewart, 1991), it is the amount owing to the supplier times the capital cost rate. The benefit from the payment term changes was about 18,200 GBP.

Currency fluctuation. Before the PRC sourcing project, cabinets were bought from the UK supplier in GBP.
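The price-related arithmetic above can be checked in a few lines. The direct saving follows directly from the two annual prices; the payment-term function is a sketch of the economic-value-added valuation, and applying the 12 per cent capital cost rate (the rate the case uses elsewhere for inventory capital) to payment terms is an assumption.

```python
# Direct cost saving, and an economic-value-added view of a payment-term
# change (Stewart, 1991): benefit = amount owing to the supplier x capital
# cost rate. The 12 per cent rate is the case's inventory capital rate,
# reused here as an assumption.

uk_annual_price = 1_876_000   # GBP: previous annual purchasing price, UK supplier
prc_annual_cif  = 1_488_000   # GBP: annual CIF price + tax + duty, PRC supplier

direct_saving = uk_annual_price - prc_annual_cif     # 388,000 GBP, as reported

def payment_term_benefit(annual_spend, extra_days, capital_rate=0.12):
    """Interest value of paying `extra_days` later on `annual_spend`."""
    return annual_spend * extra_days / 365 * capital_rate
```

Applied to the cabinet spend alone with the 15 extra days granted to the PRC factory, the function yields roughly 7,300 GBP; the reported 18,200 GBP benefit is larger, so it evidently covers more than the cabinet purchases modelled here.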
After the project, cabinets for products to be sold in PRC are purchased in local currency (RMB); those for products to be sold in other global regions are purchased in USD. The fluctuation of the RMB-GBP and USD-GBP rates during one year could result in either a gain or a loss. The effect of fluctuating exchange rates was valued by taking the first month's (October 2004) exchange rate as the basis and calculating the gain or loss for every month. The final result shows that from October 2004 to September 2005 the gain was 27,045 GBP.

Administration cost. Ordering and billing process. After the PRC sourcing project, the work of the financial department changed in the following respects:
. adding freight and duty to the material cost;
. calculating manually the actual cost savings against the forecast (based only on price, freight, tax and duty); and
. handling about 80 per cent fewer invoices, because the ordering frequency changed from daily to once a week.
Overall, the amount of work has remained about the same, and hence no cost changes have occurred.

Logistics and inventory. In UK, a third-party warehouse (the Hub) is paid to hold the entire inventory and send cabinets and other components to the factory every day. Hence, the receiving and material handling process for the UK factory has not changed from the process before PRC re-sourcing. However, there are extra costs associated with packaging. Cabinets from PRC are much more heavily packaged for protection during the


transportation. It takes more time to remove the packaging, and there are additional disposal costs; the Hub is paid for doing this work on behalf of Company A. The actual payment to the Hub for cabinets turned out to be 195,000 GBP, 59 per cent higher than the budgeted amount (123,000 GBP) included in the transportation cost. The Hub is also in charge of receiving cabinets at a UK port and sending them to Company A's factory every day. The reason for the excess cost was higher-than-budgeted inventory, held to buffer the longer lead time, which increased storage cost in the Hub.

Expediting. Company A used to operate a "make to order" system, which does not support a long-term forecasting process, so forecasting is now done manually. Owing to inaccuracy in the forecast, expediting transportation by airfreight was required once during the first year, at an extra cost of 3,000 GBP.

Capital charge for inventory. Before China sourcing, the average inventory was 70,000 GBP; after China sourcing, it increased to 177,000 GBP. The capital cost of this increased inventory is about 12,840 GBP, valued at an annual capital cost rate of 12 per cent.

Obsolescence. With the increased inventory there is also an increased risk of obsolescence. For cabinets, the obsolescence risk is nearly zero, because they are used in every one of Company A's products.

Quality issue. The quality of cabinets from the supplier in PRC, according to the buyer in Company A, "is better than that from the previous supplier". Yet the quality failure costs before and after PRC sourcing are found to be nearly the same. This is because, in the past, the supplier took defective cabinets away, reworked them and sent them back, so the quality cost for Company A was minimal.
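The inventory-side arithmetic reported here can be reproduced directly; all figures are taken from the case, in a minimal sketch:

```python
# Inventory-related cost changes from the longer PRC supply lead time.
# All figures are those reported in the case; 12 per cent is the annual
# capital cost rate Company A applies to tied-up capital.

CAPITAL_RATE = 0.12

inventory_before = 70_000      # GBP average inventory, UK sourcing
inventory_after  = 177_000     # GBP average inventory, PRC sourcing

capital_charge = (inventory_after - inventory_before) * CAPITAL_RATE   # ~12,840 GBP

hub_budget, hub_actual = 123_000, 195_000
hub_excess = hub_actual - hub_budget                                   # 72,000 GBP
hub_overrun_pct = hub_excess / hub_budget                              # ~59 per cent
print(round(capital_charge), hub_excess)
```

The 59 per cent overrun quoted in the text corresponds to 72,000/123,000.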
When the PRC sourcing project began, the historical record of the previous supplier's quality problems was sent to the new supplier, who was asked to make sure these problems would not recur. Additionally, more people at the new supplier's factory inspect the final product for Company A (there is no inspection at Company A's site), so very few defective cabinets are sent to Company A. At the very beginning, there was a problem caused by Company A having a wrong specification in a drawing: about 180 cabinets required reworking at a cost of 50 GBP each, and an ECO was raised to change the wrong drawing (as mentioned in the paragraph "Gather and codify information and add new supplier to IT system").

Supplier management. Previously, to maintain relationships with the UK suppliers, meetings were held once a week, with minimal travel expenditure because Company A's location was very close to the UK cabinet supplier's site. Now, meetings are held every two weeks between a global sourcing manager of Company A, based in PRC, and the new supplier in another city, incurring additional travel expenditure of 2,400 GBP annually. Ongoing engineering support from the UK is not normally required, because Company A has a factory in PRC with an engineering support team. However, because the headquarters is still in UK and the main supplier quality development (SQD) projects are carried out there, engineers from UK still need to visit the supplier at least once a year to understand its processes and problems. This is budgeted at 2,000 GBP per visit, including travel expenditure and the engineer's wage cost. During product innovation projects, travel to PRC will be much more frequent and the cost will increase.

The cost of communication between PRC and UK is mainly the cost of video conferencing. This cost is not directly available in Company A, because it is treated as general overhead and not recorded at the project level; it was estimated by asking the relevant people about the frequency and cost rate of this form of communication. The video conference costs 70 GBP per hour (including equipment), is held once a week, and usually lasts about one hour. While the conference is not focused only on the cabinet, the purchasing director who organizes it estimates that about 80 per cent of the time over the past year concerned the cabinet. The cost is therefore about 2,800 GBP.

Other costs. Recruiting. Since the PRC sourcing project began, 11 new staff have been recruited at Company A's UK headquarters: three buyers and eight engineers. At the factory in PRC, four new staff have been hired: one sourcing manager, one buyer and two sourcing engineers. Two more sourcing engineers will be recruited in UK, and one more buyer and one more engineer in PRC. However, the new staff are not devoted only to the cabinet project: more than 20 components have been re-sourced to PRC during the past year, so the recruiting cost should be allocated across all of them. The allocated recruiting cost for the cabinet sourcing project was about 670 GBP. No staff from the UK headquarters were sent to work in PRC.

Redundant capacity and people. Because the cabinet was re-sourced to PRC, the in-house process has not changed; hence there is no redundant capacity to dispose of.

Infrastructure. The cabinet supplier is based in Su Zhou, in the Yang Zi River Delta, where the infrastructure is considered satisfactory.

Culture and language. The cabinet supplier is a British wholly owned company; cultural difference, therefore, is not a problem.
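The videoconference estimate follows the stated rate, duration, weekly frequency and cabinet share of conference time; assuming a 52-week year (an assumption, not stated in the case) gives a figure slightly above the rounded 2,800 GBP reported:

```python
# Estimating a communication cost buried in overhead:
# rate per hour x duration x frequency x share of time on the cabinet.
# The 52-week year is an assumption; the case reports "about 2,800 GBP".

RATE_GBP_PER_HOUR = 70
HOURS_PER_CALL = 1
CALLS_PER_YEAR = 52        # weekly calls, 52 weeks assumed
CABINET_SHARE = 0.80       # purchasing director's estimate

video_cost = RATE_GBP_PER_HOUR * HOURS_PER_CALL * CALLS_PER_YEAR * CABINET_SHARE
print(round(video_cost))   # 2912, i.e. about 2,800 GBP as reported
```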
Dealing with local government. Company A's factory in PRC is located in Shanghai, where laws and regulations are the most transparent in PRC. Therefore, there is no additional cost for dealing with local government.

Discussion
Considering only price, transport, tax and duty, the direct cost saving of this China sourcing project in Company A was 388,000 GBP per year. When the total cost is considered, the saving is less significant, at 294,210 GBP; and had it not been for a windfall gain from currency fluctuations, the saving would have been smaller still. This shows the necessity of carrying out total cost analysis for decision-making, especially when the direct cost saving (price, freight, tax and duty) is not significant, say less than 10 per cent. Although some of the additional costs can be predicted relatively easily and will remain fairly stable, there are a number of sources of volatile costs. These are difficult to predict and introduce a significant amount of risk into the outsourcing project. The main volatile costs are discussed below.
. In this case, because of the character of cabinets, Company A did not take any action to protect the IPR, which is a main concern for many manufacturers when their products are outsourced or sourced to less developed countries. Protections such as patenting the process or splitting the process across different suppliers incur additional costs; on the other hand, if no IPR protection action is taken, the potential loss may be great.



. Currency fluctuation during the past year brought Company A a benefit of about 27,000 GBP; but this kind of benefit cannot be counted on, as currency fluctuation can equally bring a loss of the same or even larger amount. Companies should consider hedging to mitigate such risks.
. Because of the mature design, the obsolescence risk of the component (cabinets) re-sourced by Company A was nearly zero, so this major side effect of the soaring inventory that results from offshoring did not appear in this case. However, for a product that undergoes frequent engineering changes or has large variety and fluctuating customer demand, the risk of obsolescence is much higher when a large amount of inventory is held.
. The component re-sourced was not technically complicated, and it was easy for the supplier to guarantee the quality. For other overseas outsourcing/sourcing cases, however, reliable quality is always a big concern.
. Before the re-sourcing project, Company A already had a factory in PRC. With this presence, it was much easier to find local suppliers and provide technology support, so the expenditure for engineering support turned out to be nearly the same as before.
. With a sourcing director who was a Chinese national, the cultural differences that often hinder an overseas sourcing process were greatly relieved.
. Located in Shanghai, where legal systems are very transparent, Company A encountered no local government corruption; this may not be the case in other regions of PRC or in other developing countries.

During the case study, the biggest challenge was the shortage of data. Most of the information needed is either unavailable from the accounting system or buried in general overhead and not available at the individual project level. For example:
. The cost of people's time for supplier selection is not available from the existing accounting system. To quantify it, the staff involved in the project were asked to estimate the time they contributed; this time was then multiplied by their wage rates.
. The benefit from payment term changes and the capital cost of inventory are not quantified at all in the company. They were calculated by multiplying the tied-up capital by the capital cost rate.
. The benefit (or loss) from currency exchange rate fluctuation is not quantified in the company. It was calculated by working out the gain (or loss) in each calendar month (see the relevant section for detail).
. Quality costs were estimated by talking with the company's operations director to discover what problems had occurred and the cost of dealing with them.
. Video conference cost and travel expenditure are accounted for in the company's overhead. Video conference cost was calculated as specified in the section "Supplier management"; travel expenditure was quantified from invoices and receipts (e.g. flight tickets, hotel invoices, etc.).

This shortage of data is consistent with the literature (Ellram and Siferd, 1998; Lindholm and Suomala, 2004). In addition, the interviews revealed that, because different departments pursue different goals, purchasing staff still pay most attention to direct costs such as price, freight, tax and duty; they lack understanding of what costs should be included in the total cost model. This also supports the points of Ellram and Siferd (1998). Another issue emerged from the case study: how to deal with the trade-off between accuracy and applicability. Because of the lack of directly available data and the large number of cost items to be considered, it took nearly two weeks to collect the information. The question is, when companies use the model by themselves, will they have the time, resources and patience to gather all the information for the long list of cost items? As a purchasing project manager in Company A said:
We don't need so scientific a model that several weeks are needed to populate it. What we need is a model which is just accurate enough for our decision-making and does not occupy too much time and resource.
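One lightweight way to act on this trade-off is to tag each cost item with an importance rating and populate only the items above a chosen threshold. The sketch below is hypothetical; the item names, ratings and threshold are illustrative, not part of the case study:

```python
# Hypothetical sketch: rate cost items by importance so decision-makers can
# populate only the items that matter most. Names and ratings illustrative.

COST_ITEMS = {
    "price, freight, tax and duty": 3,   # 3 = high importance
    "inventory capital charge":     2,
    "supplier selection effort":    2,
    "communication":                1,
    "recruiting allocation":        1,
}

def items_to_populate(items, min_importance):
    """Return the cost items worth collecting data for at this threshold."""
    return [name for name, rating in items.items() if rating >= min_importance]

# A quick screen keeps the model "accurate enough" without weeks of data
# collection; lowering the threshold refines the estimate later.
shortlist = items_to_populate(COST_ITEMS, 2)
```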

Future research, therefore, will endeavour to rate the cost items according to their importance, so that during offshoring decision-making management can concentrate on the relatively important cost items and may neglect the less important ones. In the case study, the total cost model was used to estimate historical cost. Another application of the model is to estimate future cost for offshoring decision-making and budgeting. How to handle uncertainty is then an issue, because part of the input data has to be defined on the basis of estimations and assumptions about the development of costs (Lindholm and Suomala, 2005).

Conclusions
Based on inputs both from the academic literature and from interviews with practitioners, this research has proposed a total cost framework consisting of the cost items that may occur during the overseas outsourcing/sourcing process. A case study has been carried out to demonstrate the feasibility and usability of the framework. As always, there are limits to the generalizations that can be made from a single case. Despite these limitations, however, we believe this case study has shown the general requirement of modelling uncertainty and of dealing with the dilemma between accuracy and applicability in practice. The above represents the first stage of the research. The next step is to create a mechanism for dealing with uncertainty in estimation and to rate the importance of the cost items in the proposed framework; further testing of the cost model will then be carried out through multiple case studies. With these future research outcomes, we expect to contribute to both academia and industrial practice in this field.

References
Anderson, E. and Gatignon, H. (1986), “Modes of foreign entry: a transaction cost analysis and propositions”, Journal of International Business Studies, Fall, pp. 1-26.
Burpitt, W.J. and Rondinelli, D.A.
(2004), “Foreign-owned companies’ entry and location strategies in a U.S. market: a study of manufacturing firms in North Carolina”, Journal of World Business, Vol. 39, pp. 136-50.


Burt, D.N., Norquist, W.E. and Anklesaria, J. (1990), Zero Base Pricing, Probus, Chicago, IL.
Calfee, J.E. and Rubin, P.H. (1993), “Nontransactional data in managerial economics and marketing”, Managerial & Decision Economics, Vol. 14 No. 2, pp. 163-73.
Carr, L.P. and Ittner, C.D. (1992), “Measuring the cost of ownership”, Journal of Cost Management, Vol. 6 No. 3, p. 7.
Chadee, D.D., Qiu, F. and Rose, E.L. (2003), “FDI location at the subnational level: a study of EJVs in China”, Journal of Business Research, Vol. 56, pp. 835-45.
Coase, R.H. (1937), “The nature of the firm”, Economica, Vol. 4 No. 16, pp. 386-405.
Coughlin, C.C., Terza, J. and Arromdee, V. (1991), “State characteristics and the location of foreign direct investment within the United States”, Review of Economics and Statistics, Vol. 73 No. 4, pp. 675-83.
Degraeve, Z., Labro, E. and Roodhooft, F. (2005), “Constructing a total cost of ownership supplier selection methodology based on activity-based costing and mathematical programming”, Accounting & Business Research, Vol. 35 No. 1, pp. 3-27.
Elango, B. and Sambharya, R.B. (2004), “The influence of industry structure on the entry mode choice of overseas entrants in manufacturing industries”, Journal of International Management, Vol. 10, pp. 107-24.
Ellram, L.M. (1993), “Total cost of ownership: elements and implementation”, International Journal of Purchasing & Materials Management, Vol. 29 No. 2, pp. 3-11.
Ellram, L.M. and Maltz, A. (1995), “The use of total cost of ownership concepts to model the outsourcing decision”, International Journal of Logistics Management, Vol. 6 No. 2, pp. 55-66.
Ellram, L.M. and Siferd, S.P. (1998), “Total cost of ownership: a key concept in strategic cost management”, The Journal of Business Logistics, Vol. 19 No. 1, pp. 55-84.
Emblemsvåg, J. (2003), Life Cycle Costing: Using Activity-based Costing and Monte Carlo Methods to Manage Future Costs and Risks, Wiley, Hoboken, NJ.
Gilley, K.M. and Rasheed, A.
(2000), “Making more by doing less: an analysis of outsourcing and its effects on firm performance”, Journal of Management, Vol. 26 No. 4, pp. 763-90.
Grant, E. (1999), Fitness for Transfer – Assessing Manufacturing Technologies for Relocation, University of Cambridge Institute of Manufacturing, Cambridge.
Javorcik, B.S. (2004), “The composition of foreign direct investment and protection of intellectual property rights: evidence from transition economies”, European Economic Review, Vol. 48, pp. 39-62.
Kotabe, M. and Murray, J.Y. (2004), “Global sourcing strategy and sustainable competitive advantage”, Industrial Marketing Management, Vol. 33, pp. 7-14.
Lindholm, A. and Suomala, P. (2004), “The possibilities of life cycle costing in outsourcing decision making”, Frontiers of E-business Research, Vol. 1, pp. 226-41.
Lindholm, A. and Suomala, P. (2005), “Learning by costing: sharpening cost image through life cycle costing”, paper presented at the 7th Manufacturing Accounting Research Conference, Tampere, May 30-June 1.
Minshall, T. (1999), Manufacturing Mobility – A Strategic Guide to Transferring Manufacturing Capability, University of Cambridge Institute of Manufacturing, Cambridge.
Min, H., LaTour, M. and Williams, A. (1994), “Positioning against foreign supply sources in an international purchasing environment”, Industrial Marketing Management, Vol. 23, pp. 371-82.

Mol, M.J., van Tulder, R.J.M. and Beije, P.R. (2005), “Antecedents and performance consequences of international outsourcing”, International Business Review, Vol. 14, pp. 599-617.
Monczka, R.M. and Trecha, S.J. (1988), “Cost-based supplier performance evaluation”, Journal of Purchasing and Materials Management, Vol. 24 No. 1, pp. 2-7.
Platts, K.W. (1993), “A process approach to researching manufacturing strategy”, International Journal of Operations & Production Management, Vol. 13 No. 8, pp. 4-17.
Pongpanich, C. (2000), Manufacturing Location Decision – Choosing the Right Location for International Manufacturing Facilities, University of Cambridge Institute of Manufacturing, Cambridge.
Rangan, S. (2000), “The problem of search and deliberation in international exchange: microfoundations to some macro patterns”, Journal of International Business Studies, Vol. 31 No. 2, pp. 205-22.
Seyoum, B. (1996), “The impact of intellectual property rights on foreign direct investment”, The Columbia Journal of World Business, Spring, pp. 50-9.
Smith, P.J. (2001), “How do foreign patent rights affect U.S. exports, affiliate sales and licenses?”, Journal of International Economics, Vol. 55, pp. 411-39.
Smytka, D.L. and Clemens, M.W. (1993), “Total cost supplier selection model: a case study”, International Journal of Purchasing & Materials Management, Vol. 29 No. 1, pp. 42-9.
Stewart, G.B. (1991), The Quest for Value, Vol. III, Harper Business, New York, NY.
Taylor, W.B. (1981), “The use of life cycle costing in acquiring physical assets”, Long Range Planning, Vol. 14 No. 6, pp. 32-43.
Williamson, O.E. (1975), Markets and Hierarchies: Analysis and Antitrust Implications, The Free Press, London.
Woodward, D.G. (1997), “Life cycle costing: theory, information acquisition and application”, International Journal of Project Management, Vol. 15 No. 6, pp. 335-44.
Further reading
Ellram, L.M.
(1995), “Total cost of ownership: an analysis approach for purchasing”, International Journal of Physical Distribution & Logistics Management, Vol. 25 No. 8, pp. 4-20.
Samli, A.C., Browning, J.M. and Busbia, C. (1998), “The status of global sourcing as a critical tool of strategic planning: opportunistic versus strategic dichotomy”, Journal of Business Research, Vol. 43, pp. 177-87.
Wouters, M., Anderson, J.C. and Wynstra, F. (2005), “The adoption of total cost of ownership for sourcing decisions – a structural equations analysis”, Accounting, Organizations and Society, Vol. 30, pp. 167-91.
Corresponding author
Ninghua Song can be contacted at: [email protected]



The current issue and full text archive of this journal is available at www.emeraldinsight.com/1741-038X.htm


Wireless technologies for logistic distribution process
Win-Bin See

Received August 2006; revised January 2007; accepted April 2007

Aerospace Industrial Development Corporation, Taichung, Taiwan

Abstract
Purpose – The purpose of this paper is to present the integration of logistic management with information and communication technologies to improve the effectiveness of logistic fleet operations substantially. The work presented here shows a real-world fleet management system that integrates mobile communication and supports real-time logistic information flow management.
Design/methodology/approach – First, the application of information and mobile communication technologies in providing effective logistic distribution service is introduced. Then the proposed real-time fleet management system (RTFMS) architecture is depicted, the technology profiles for the mobile data terminal (MDT) and the logistic information system are described, and the considerations of various wireless mobile communication technologies for the logistic distribution process are addressed. Finally, the implications of this paper are discussed and plans for further work are outlined.
Findings – The proposed architecture for a real-world logistic fleet management system, the RTFMS, can serve as a reference architecture for real-time logistic fleet management design. The major components of the RTFMS have been described in UML use cases to facilitate reuse of this design. The RTFMS architecture, with its associated information flows and timing considerations, could be used for architecture adaptation in similar applications. Wireless technologies provide logistic fleet management with bi-directional real-time information flows, as shown in this paper, and this should stimulate new ideas in logistics management and service models.
Research limitations/implications – This paper provides a reference model, with an implementation, for adopting wireless technologies in the logistic distribution process.
However, the services provided by each specific system will depend on the stakeholders in the specific chain of logistics service provider and consumer.
Originality/value – The work presented here shows a real-world fleet management system that integrates mobile communication and supports real-time logistic information flow management.
Keywords Fleet management, Wireless, Communication technologies, Distribution management
Paper type Technical paper

Journal of Manufacturing Technology Management Vol. 18 No. 7, 2007 pp. 876-888 q Emerald Group Publishing Limited 1741-038X DOI 10.1108/17410380710817309

Introduction
Adapting appropriate information and mobile communication technologies to logistic fleet management can effectively improve fleet resource utilization and customer satisfaction. A modern logistic system requires real-time monitoring of, and interaction with, fleet vehicles to attain high fleet utilization and to respond quickly to customers' needs. The enabling technologies that support these real-time logistic requirements are mobile communication, the global positioning system (GPS), geographical information systems (GIS) and embedded real-time system design and implementation. A wireless wide area network (WAN) communication system provides the real-time mobile information link between the logistic information centre and all vehicles in the logistic service fleet. Short-range wireless communication technology integrates vehicle-borne sensors and gadgets into an ad hoc network and substitutes for a cumbersome wiring harness.

Logistic fleet management using a mobile communication infrastructure enables real-time transfer of transportation status information. Modern vehicle dispatching systems integrate mobile data communication and embedded real-time system technology to achieve dynamic dispatching and real-time monitoring. In See et al. (2002), the modular mobile dispatching system (MMDS) was introduced, which realizes logistic fleet dispatching through the integration of dispatching centre software, the dispatching control centre system (DCCS), and a set of vehicle-borne data terminals, the modular mobile data terminals (MMDTs). Figure 1 shows the MMDS architecture; it adopts general packet radio service (GPRS) mobile communication. In See et al. (2003), a dispatching system capable of accommodating multiple mobile communication protocols was introduced; the protocols are trunking radio, global system for mobile communications (GSM) short-message service (SMS), and GPRS packet data communication. In See and Chen (2003), dispatching system affordability was addressed from the perspective of total cost of ownership.

In the design of a modern logistic management system, a generic system architecture that could serve as a baseline framework for assessing new technology adoption would be very helpful. This paper presents the real-time fleet management system (RTFMS) architecture and uses it as a reference architecture for modern mobile logistic fleet management design and analysis. RTFMS consists of three major constituents: the logistic information service (LIS) system, the vehicle-borne mobile data terminal (MDT), and the mobile communication infrastructure. The LIS system serves as the data communication hub for the MDTs and maintains a reliable and smooth communication link for the fleet. The LIS also provides fleet dispatching functions and information query services to enterprise internal and external users.
In modern logistic fleet management, the MDT plays a very

Wireless technologies


[Figure 1. MMDS architecture with GPRS: MMDTs link over the GPRS service infrastructure (BTS, BSC, SGSN/GGSN) and the internet to the DCCS; a PAN hangs off one MMDT. Notes: BTS – base transceiver station; BSC – base station controller; G/SGSN – gateway/serving GSN; WAN – wide area network; PAN – personal area network]


important role as a vehicle-borne sensor integrator: it packs GPS and mobile communication capabilities into a tiny box at an affordable unit price. The technology profile for each RTFMS constituent is presented in turn, and the RTFMS information flows are depicted to facilitate system response time analysis. In this paper, unified modelling language (UML) notation (Booch et al., 1999; www.uml.org/) is used to describe the RTFMS fleet management system. UML provides several diagram types for describing system behaviour and structure; the most frequently used are the use-case, sequence, and collaboration diagrams. A use-case diagram describes a usage scenario of the system from a specific aspect. A sequence diagram focuses on the time-ordered messages passed among related objects to accomplish a specific system function. A collaboration diagram is another presentation of object interactions, showing how objects are interconnected through messages. The rest of this paper is organized as follows. Section 2 presents the RTFMS architecture. Section 3 describes the technology profiles of the MDT and the logistic information system. Section 4 presents the considerations of various wireless mobile communication technologies for the logistic distribution process. Section 5 concludes.

Real-time fleet management system
Wireless mobile communication technology, furnished by the GPRS and 3G data communication infrastructures, can be combined with real-time system technology to furnish bi-directional information transfer between the logistic fleet management centre and all vehicles in the fleet. Based on previous field experience in logistic fleet management system design, and following the progress of the related technologies, this work proposes the RTFMS architecture. RTFMS consists of three major constituents: the vehicle on-board MDT, the LIS system, and the mobile communication infrastructure.
Figure 2 shows the RTFMS architecture and three information flows, annotated TA, TB, and TC, which are used to assess the timing behaviour of the RTFMS. TA represents the time required for a message to travel from an MDT back to the LIS database. TB represents the processing time of an LIS user service request. TC represents the application event announcement time. The rest of this section describes the RTFMS architecture from three aspects:

(1) the RTFMS information flows;
(2) LIS services and further value-added service integration of the RTFMS; and
(3) system response time and system resource considerations in RTFMS deployment.

RTFMS information flows
Through the MDT, the vehicle driver holds a continuous virtual connection to the LIS database server, and through it to all RTFMS system users. The mobile communication operator provides the path for this continuous virtual connection, and an internet service provider (ISP) bridges the gap between the LIS data centre and the mobile communication operator. This seamless communication path between the MDT micro-controller and the computers in the LIS service centre enables real-time, automated logistic information flow and processing. The TA information flow, shown on the left of Figure 2, gives the travelling time required for a message to go from the MDT back to the LIS database. The V-shaped flow traces the message from the vehicle MDT to the LIS database server: the MDT connects to the base station (the baseStation object) through its mobile communication hardware module; the message then passes through the network gateway of the mobile communication operator (the mComGateway object) and enters the internet; it then passes through the enterprise internet gateway (the enterpriseGateway object) and reaches the LIS application system server. The communication middleware on the LIS application server receives the message, associates the connection with the vehicle identification, and stores the message in the indexed database server. The travelling time for the left branch of the V-shaped flow is influenced by the spot mobile communication traffic volume and the operator's resource allocation; the travelling time for the right branch depends on the enterprise's available fixed-wire network resources, plus the LIS communication middleware processing time, which includes the data table access time on the database server. Outgoing traffic from the LIS communication middleware (the commSW object) to the MDT shares the same communication path in the reverse direction. The TB information flow gives the LIS user service request processing time: TB starts when a user initiates a service query request; the request goes to the LIS and reaches the application software (the appSW object), which processes it and sends the required information back to the requester. TB is conditioned by the computing and networking resources the enterprise commits to handling incoming messages from MDTs and user service queries and responses.

[Figure 2. RTFMS architecture and information flow: the MDT, baseStation, mComGateway, ISP, enterpriseGateway, commSW, appSW, and Database objects with the inhouseUser and externalUser actors; the legend distinguishes the query service, vehicle communication, and event announcement flows]
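The composition of the TA flow from its network segments can be sketched numerically. The hop names follow the objects in Figure 2, but every latency figure below is an illustrative assumption, not a measurement from the paper:

```python
# Hypothetical per-hop latencies (milliseconds) along the V-shaped TA path.
TA_HOPS_MS = {
    "mdt_to_base_station": 60,         # wireless uplink; varies with spot traffic
    "base_to_mcom_gateway": 20,        # mobile operator core network
    "mcom_gateway_to_enterprise": 30,  # public internet segment via the ISP
    "enterprise_to_commsw": 5,         # enterprise fixed-wire network
    "commsw_db_store": 15,             # middleware processing + indexed DB insert
}

def flow_time_ms(hops):
    """One-way message time as the sum of its segment latencies."""
    return sum(hops.values())

ta = flow_time_ms(TA_HOPS_MS)   # one MDT-to-database traversal
round_trip = 2 * ta             # e.g. a message out plus its acknowledgement back
```

Under these assumed figures a single TA traversal costs 130 ms; the point of the sketch is that TA is dominated by whichever segment the operator or the enterprise under-provisions.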


The TC information flow represents the application sensor event trigger and report time. An application event indicates the occurrence of a special vehicle situation; typical events are a driver-activated vehicle emergency alert, a vehicle straying from a predefined geographical zone, or vehicle-borne sensor data going beyond a prescribed threshold. The application software checks the vehicle situation data in the database for prescribed events of interest with respect to the designated vehicle.
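The prescribed-event checks that the application software runs against fresh vehicle data can be illustrated with two of the examples just listed; the zone coordinates and the temperature threshold here are hypothetical:

```python
def astray_event(lat, lon, zone):
    """True when the position falls outside a rectangular working zone."""
    lat_min, lat_max, lon_min, lon_max = zone
    return not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max)

def threshold_event(value, threshold):
    """True when a vehicle-borne sensor reading goes beyond its threshold."""
    return value > threshold

# Illustrative working zone (a bounding box around a service area) and checks:
WORK_ZONE = (24.9, 25.2, 121.4, 121.7)
inside = astray_event(25.03, 121.56, WORK_ZONE)   # False: vehicle in its zone
astray = astray_event(24.50, 121.56, WORK_ZONE)   # True: announce a TC event
too_warm = threshold_event(9.2, threshold=8.0)    # True: container over limit
```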

LIS services and further value-added service integration
The implementation of an LIS service can involve several information flows. For instance, the fleet dispatching service uses the TA and TB message flows: the dispatcher issues the dispatch message over TB, the LIS sends the message to the vehicle driver over TA, and the vehicle driver acknowledges the dispatch over the TA message path; the information flow time for a complete message dispatch is therefore the summation of these flow times. The functions of the LIS data centre include fleet dispatching and the provision of information services to enterprise internal and external users. The MDT provides a user interface that presents the dispatching messages from the LIS service centre, and the LIS service centre provides web-based logistic service information to enterprise internal and external users, shown as the inhouseUser and externalUser objects in Figure 2. The RTFMS architecture focuses on providing an integrated mobile and static communication flow for fleet dispatching management and cargo delivery status information. The LIS database can be used as a fleet management information repository, and the logistic service provider can further integrate it with backend information systems to provide value-added services; the customisations depend on the management and service strategy of the provider. There are two approaches to further integration of the RTFMS: first, additional software components can be added directly into the RTFMS architecture to customize the service behaviour; second, the LIS data can be relayed to a backend management information system server and integrated there.

System response time and system resource considerations
The required response time of the RTFMS has been defined as three seconds, measured from the sending of a message on the vehicle side to its receipt by the LIS server.
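A quick arithmetic sanity check on this budget, and on the aggregate load a fleet would put on the LIS server. The packet size, link rate, fleet size, and reporting interval below are assumptions for illustration, not requirements stated by the paper:

```python
def serialization_ms(payload_bytes, link_kbps):
    """Time to clock a payload over a link, in milliseconds."""
    return payload_bytes * 8 / link_kbps   # bits / (kbit/s) equals milliseconds

# Assumed: a ~300-byte status packet over a GPRS-class 115 kbps link,
# plus a nominal 100 ms of network delivery delay.
tx_ms = serialization_ms(300, 115)   # ~20.9 ms on the air interface
one_way_ms = tx_ms + 100             # comfortably under the 3,000 ms budget

# Aggregate arrival rate at the LIS server, assuming a 3,000-vehicle fleet
# with each vehicle reporting once every two minutes:
msgs_per_second = 3000 / 120                       # 25 status messages per second
inbound_kbps = msgs_per_second * 300 * 8 / 1000    # ~60 kbps of inbound data
```

Under these assumptions the per-message time is dominated by network delay rather than serialization, and the server-side load is modest; doubling the position resolution or the fleet size scales the arrival rate linearly.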
The GPRS service is an extension of the original GSM communication system, which provides mobile voice communication. Some related timing figures for voice communication (Katz et al., 2006) can serve as reference points. A public switched telephone network (PSTN) phone, which communicates over a dedicated link established between caller and receiver, has a round-trip latency below 150 milliseconds; in recent Voice-over-Internet-Protocol (VoIP) phones, the one-way latency target is under 200 milliseconds. Typical parcel delivery status information from a logistic vehicle can be encoded into a data packet of around 300 bytes, which the GPRS infrastructure can deliver on the order of 100 milliseconds. In the full-fledged deployment phase of the RTFMS, communication and computing resource utilization has to be watched closely to find possible system bottlenecks and resolve them accordingly. The resources for the TA information flow can be divided into three factors:

(1) the mobile factor – from the MDT to the enterprise ISP gateway;
(2) the ISP factor – from the ISP gateway to the LIS database; and
(3) the data server factor – the LIS database server itself.

The LIS data server's computing and communication capacity has to accommodate the incoming data packets from the fleet vehicles. The ISP factor stands for the external internet bandwidth capacity between the mobile communication provider and the enterprise ISP gateway. The mobile factor can suffer from the mobile communication provider's GSM/GPRS base-station bandwidth resource allocation. The resources for the TB and TC information flows relate to the ISP factor and the data server factor of TA. As logistic information services are provided from the LIS database repository, the volume of service requests escalates demands on both the network bandwidth and the computing resources for database transactions.

Mobile data terminal and logistic information system
In deploying a logistic fleet management solution, each vehicle in the fleet needs an MDT installed to establish the wireless virtual connection between the vehicle and the enterprise fleet management centre. The enterprise needs to construct an information technology infrastructure that integrates the fleet MDTs and provides the LIS to the various users who need the underlying information. The UML (Booch et al., 1999) use cases of the MDT and the LIS system are described as follows.

Mobile data terminal
To reduce the vehicle driver's workload, real-time fleet management requires an on-board computing device that automates communication control and sensor data acquisition while providing an easy-to-read message display and simple driver interactions. This kind of on-board device is called an MDT. The MDT interacts with the vehicle driver and acts as an online remote agent in the real-time fleet management infrastructure, helping the dispatching centre present messages to the driver.
The driver produces cargo delivery status information, which the MDT sends back to the dispatching centre. The MDT micro-controller autonomously acquires the GPS position data and sends it back to the dispatching centre, which uses this data to plot the geographical position of the vehicle with the support of a digital map and GIS software. Modern logistic service requires support for a growing number of input devices and sensors connected to the vehicle. For instance, some goods need to be shipped in an air-conditioned container controlled within a prescribed temperature range, so temperature sensors are installed to monitor and control the container temperature; continuous remote sensor monitoring and automatic abnormal-event notification help to ensure delivery service quality. Other examples of MDT-controlled devices are bar code readers, printers, and radio frequency identification (RFID) readers. Embedded micro-controller system technology enables the effective integration of these sensors and devices at affordable cost and with reasonable overhead for driver intervention. Figure 3 shows the typical MDT operating scenarios as UML use cases. Wireless WAN communication technology provides a real-time virtual connection between each vehicle and the central management database server of the LIS system.

[Figure 3. MDT use cases. Actors: MDT/vehicle driver and LIS/dispatcher, linked by a virtual connection. Use cases: 1. communicate with LIS centre DB; 2. collect and send sensor data; 3. display dispatching message; 4. acknowledge dispatching assignment; 5. produce cargo delivery status]

Use case 1 shows the virtual connection between the MDT and the LIS database server. In use case 2, the MDT autonomously collects sensor data and sends it back to the LIS database server without driver intervention. In use case 3, the MDT receives the dispatching message and displays it on its screen. In use case 4, the vehicle driver acknowledges the dispatching command message by pressing an acknowledge key. In use case 5, the vehicle driver reports cargo shipment status with the aid of interface devices, such as a bar-code reader, an RFID reader, or a manual keypad. Considering the technology profile of the MDT, there have been two categories of MDT implementations: the industrial MDT, and the personal digital assistant (PDA) from the consumer electronics sector. In general, an industrial MDT provides a larger-font display and better interface customisation to accommodate the related sensors and readers, and it offers a simpler interface than a PDA, without additional features such as the personal organizer functions of Microsoft PocketPC or Palm OS. Recently, PDAs that integrate several wireless communication protocols, GPS, and GIS have been becoming popular at competitive prices. Different fleet operators can have different workflows, and both industrial MDTs and PDAs need some level of customisation in software and/or hardware interfaces to cope with the different workflows and usage scenarios. In choosing among industrial MDT and PDA models, the total cost of ownership shall be considered; for instance, the maintenance cost over the expected operating period of the fleet dispatching system shall be evaluated before actual deployment. Being consumer electronics, PDAs have relatively short product lifecycles, so the maintenance of a PDA used as an MDT could be an issue that needs to be addressed carefully.
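Use cases 2 and 5 — autonomous sensor reporting and driver-produced status — can be sketched as one MDT reporting cycle. The field names, stub readings, and the refrigeration set-point are all invented for illustration; a real terminal would read its GPS module and container sensor and push the packet over the GPRS modem:

```python
import json
import time

TEMP_LIMIT_C = 8.0              # assumed set-point for a refrigerated container

def read_gps():
    return 25.03, 121.56        # stubbed GPS fix

def read_container_temp():
    return 6.5                  # stubbed container temperature (deg C)

def build_status(vehicle_id):
    """One autonomous status report (use case 2), with an abnormal-event flag."""
    lat, lon = read_gps()
    temp = read_container_temp()
    return {
        "vehicle": vehicle_id,
        "ts": int(time.time()),
        "lat": lat,
        "lon": lon,
        "temp_c": temp,
        "alert": temp > TEMP_LIMIT_C,   # lets the LIS trigger a TC event
    }

packet = json.dumps(build_status("TPE-0042")).encode()  # uplink payload
```

Even with the JSON overhead, such a report fits comfortably inside the ~300-byte status-packet class discussed for GPRS delivery.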
Logistic information services
The LIS system serves as the data communication hub for the MDT-equipped vehicles, maintaining reliable and smooth communication data streams with all vehicles. The LIS

system also provides the fleet dispatching function and serves requests from all RTFMS information users, who include enterprise internal management people and external users concerned with the shipment status of the delivered goods. Figure 4 shows the UML use cases of the LIS system. In use case 1, the LIS sets up a virtual connection to all vehicles on top of the communication infrastructure, which is provided by the mobile communication service provider and the ISP. A communication-processing middleware on the LIS system maintains all the MDT connections, associates each connection with a vehicle identification, and directs the bi-directional information flows; incoming vehicle status information is stored in the LIS database for further processing and to serve RTFMS user queries. In use case 2, the fleet management dispatcher sends a logistic service assignment to a specific vehicle, and the MDT presents the message to the vehicle driver; with the underlying internet infrastructure, dispatching can proceed from any web-connected PC terminal. In use case 3, the fleet dispatcher monitors the vehicle status, such as the vehicle's geographical position. In use case 4, the LIS system prompts the fleet dispatcher with a vehicle sensor event for exception handling; sensor events can be defined to trigger automatically according to the sensor data collected from the MDT, for example an over-temperature situation in a low-temperature cargo container, or a stray from the pre-defined working zone. During the cargo delivery flow, the consigner who entrusts the parcel for delivery, and the expected recipient, may be interested in knowing the delivery status. The consigner queries the cargo shipment status through a web-based interface, and the LIS application responds with the query result based on the most recent information reported by the MDT and stored in the database; use case 5 covers this usage scenario.
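The bookkeeping done by the communication-processing middleware — binding each live connection to a vehicle identification and filing incoming status reports so that queries (use case 5) can be answered — can be sketched as follows. The class and method names are illustrative, not part of the paper's design:

```python
class CommMiddleware:
    """Toy model of the LIS communication middleware's connection handling."""

    def __init__(self):
        self.conn_to_vehicle = {}   # connection id -> vehicle identification
        self.reports = {}           # vehicle identification -> status reports

    def register(self, conn_id, vehicle_id):
        """Associate a newly opened MDT connection with its vehicle."""
        self.conn_to_vehicle[conn_id] = vehicle_id

    def on_message(self, conn_id, message):
        """Store an incoming status report under the sending vehicle."""
        vehicle = self.conn_to_vehicle[conn_id]
        self.reports.setdefault(vehicle, []).append(message)
        return vehicle

    def latest(self, vehicle_id):
        """Answer a user query with the most recent stored report."""
        history = self.reports.get(vehicle_id, [])
        return history[-1] if history else None

mw = CommMiddleware()
mw.register(conn_id=17, vehicle_id="TPE-0042")
mw.on_message(17, {"lat": 25.03, "lon": 121.56})
mw.on_message(17, {"lat": 25.05, "lon": 121.60})
```

A production middleware would of course persist the reports in the indexed database server rather than in memory; the sketch only shows the connection-to-vehicle association that the text describes.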
The LIS system use cases can be extended and customized to provide more services as required by the system users. However, additional services and changes to existing service requirements can change the system resource requirements, which include the

[Figure 4: the LIS use-case diagram. Actors: MDT/vehicle driver, LIS/dispatcher, and consigner/receiver; a virtual connection links the LIS and its DB to the vehicles. Use cases: 1. communicate with vehicles; 2. dispatch vehicle; 3. monitor vehicle status; 4. handle vehicle sensor event; 5. query cargo delivery status]
Figure 4. LIS use cases


communication bandwidth, computation power, and the database system capacity. Take vehicle geographical route tracking as an example: the amount of vehicle position data that can flood into the LIS database server depends on the required GPS position data resolution and on the number of vehicles being tracked. The volume of user queries to be served should also be examined in LIS system resource planning.

Adoption of wireless communication technologies
The advancement and availability of wireless communication technologies have changed the shape of logistic dispatching operations drastically, and the evolution is still underway. These technologies can be categorized into WAN infrastructure and personal area network (PAN) organization (Callaway et al., 2002; Bisdikian, 2001). GSM, GPRS, and 3G belong to the category providing WAN connections; recent PAN wireless communication schemes are Bluetooth (Bisdikian, 2001) and Zigbee (Callaway et al., 2002). RFID is another technology that provides non-line-of-sight, wireless access to the identification of shipped goods (Weinstein, 2005). The application aspects of wireless WAN and PAN technologies in logistic fleet operations are described as follows.

WAN mobile communication infrastructure
GSM, GPRS, and 3G (third-generation) communications provide several alternative mobile data communication services: GSM SMS, circuit switched data (CSD), high-speed circuit switched data (HSCSD) (Vrdoljak et al., 2000), GSM GPRS (Sarikaya, 2000), and the recently deployed 3G (Wisley et al., 2002; Varma et al., 2003). Data terminal equipment uses AT commands (ETSI, 1999) to control the GSM module for sending and receiving text messages such as SMS. Table I compares a set of attributes among these mobile communication data services. In GSM SMS, each message carries a string of up to 160 characters. SMS is a "store and forward" service: a short message is sent to an SMS message centre first and then forwarded to the recipient.
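The AT-command exchange for sending one text-mode short message can be sketched as the byte strings a terminal would write to the GSM module. AT+CMGF and AT+CMGS are standard commands from the GSM AT command set; the phone number and message body below are made up, and the modem I/O itself is omitted:

```python
CTRL_Z = b"\x1a"   # terminates the message body in text mode

def sms_command_sequence(number, text):
    """Byte strings for one text-mode SMS: set mode, address, then body."""
    if len(text) > 160:
        raise ValueError("a GSM short message carries at most 160 characters")
    return [
        b"AT+CMGF=1\r",                           # select SMS text mode
        b'AT+CMGS="' + number.encode() + b'"\r',  # recipient; modem replies '>'
        text.encode() + CTRL_Z,                   # body; Ctrl-Z submits it
    ]

cmds = sms_command_sequence("+886912345678", "POS 25.03N 121.56E OK")
```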
The communication operator charges for the SMS service by the number of messages sent. In a vehicle location tracking application with a time resolution of one location transfer every two minutes, eight hours of tracking needs 240 SMS messages, which costs around NT$720 per vehicle per day (approximately US$1 = NT$34). This is too expensive, and it has kept SMS from

Table I. Attribute comparison among various mobile communication data services

GSM SMS     – message length: 160 bytes per message; speed: 9.6 Kbps; connection type: store and forward; charge unit: message count
GSM CSD     – message length: user defined; speed: CSD 9.6 Kbps, HSCSD 14.4-57.6 Kbps; connection type: dialup, circuit connected; charge unit: connection time
GPRS (2.5G) – message length: IP packet size; speed: 115 Kbps; connection type: dialup, connected packet based; charge unit: data size sent
3G          – message length: IP packet size; speed: 144 Kbps in mobile, 2 Mbps in fixed; connection type: dialup, connected packet based; charge unit: data size sent
adoption in continuous vehicle tracking applications. However, the SMS service charge is still reasonable for an "on-demand position query" application. GSM CSD uses a GSM voice channel connection to transfer modulated data at a rate of 9.6 kbps. HSCSD is a variant of CSD that allows a user to take up to four CSD channels for data transfer; the number of channels taken multiplies the communication cost. GSM CSD needs a real-time circuit connection between sender and receiver, and a mobile data call to a PSTN fixed-wire telephone can take around 15-25 seconds to set up; the connection between sender and receiver is then occupied for the whole duration of the data transfer. Accordingly, CSD has the disadvantages of a low data rate, a long set-up time, and circuit occupation. GPRS is an extension of GSM and provides good geographical coverage in Taiwan, where GPRS networks have been installed since 2003. The packet-based mobile data services of GPRS are much cheaper than the GSM SMS or CSD approaches; with an appropriate GPRS message flow design, the monthly mobile communication service expenditure can be kept within NT$300 per vehicle. NTT DoCoMo deployed the first commercial 3G wideband code-division multiple access (WCDMA) system in Japan in 2001, realizing the Universal Mobile Telecommunications System (UMTS) 3G standard. The International Telecommunication Union requires that a 3G system support data services at 144 Kbps in outdoor mobile situations and 2 Mbps in fixed indoor situations. The CDMA 2000 1X system can deliver data at 144 Kbps, and in the second phase the CDMA 2000 1X EV-DO will provide 2.4 Mbps; WCDMA has a theoretical download data rate of 384 Kbps. In Taiwan, all major mobile communication operators have provided 3G service since the end of 2006, and vehicle fleet MDT providers have integrated 3G into their data terminals.
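The service-charge comparison above can be checked with a little arithmetic. The per-message rate of roughly NT$3 is inferred from the NT$720/240-message example; the exchange rate and the GPRS monthly figure are the ones quoted in the text:

```python
# SMS-based tracking: one position report every 2 minutes for an 8-hour shift.
messages_per_day = 8 * 60 // 2             # 240 short messages
sms_cost_ntd_day = messages_per_day * 3    # at ~NT$3 per message -> NT$720/day
sms_cost_usd_day = sms_cost_ntd_day / 34   # at roughly US$1 = NT$34

# A month of daily SMS tracking versus the quoted GPRS flat expenditure:
sms_cost_ntd_month = sms_cost_ntd_day * 30    # NT$21,600 per vehicle
gprs_cost_ntd_month = 300                     # quoted GPRS figure per vehicle
ratio = sms_cost_ntd_month / gprs_cost_ntd_month
```

Under these inferred figures, continuous tracking by SMS costs on the order of seventy times the GPRS approach, which is the economic argument the text makes.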
However, there is no substantial difference between GPRS and 3G mobile communication service charges, so the adoption of 3G in logistic fleet management will have to wait for application scenarios that demand bandwidth higher than GPRS. With the bandwidth growth of 3G and the cost decline of electronic devices, real-time image and video transfer could be candidate technologies for integration into the MDT in the real-time logistic application domain.

PAN mobile communication organization
A logistic vehicle may collect and deliver goods incrementally along a sequence of stops. At each delivery or collection stop, the goods involved must be identified and registered with the associated logistic operation. Traditionally, each item is identified with a printed barcode; recently, RFID has been gaining acceptance. To achieve real-time, on-the-spot reporting of goods delivery status back to the logistic service centre, goods identification must be collected at both receipt and delivery; a stationary RFID reader installed beside the cargo door could be a solution for RFID tag read-out. Some logistic vehicles need to provide a specific storage environment for the goods under delivery, such as refrigeration. Sensors that detect the condition of these specific operating environments and report to the vehicle operator and the fleet management centre help to ensure the quality of service; a temperature sensor, for example, can be installed without a wiring harness and report the sensed value via a Bluetooth gateway to the vehicle PAN master.
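The fan-in at the vehicle PAN master — frames arriving from the PAN gateways of the RFID reader, the environment sensor, and the operator's gadget, snapshotted for the WAN gateway — can be sketched as below. Device names follow Figure 5; all payload values are invented:

```python
class PanMaster:
    """Toy PAN master: keeps the latest frame from each PAN-attached device."""

    def __init__(self):
        self.readings = {}

    def on_pan_frame(self, device, payload):
        """Record the most recent payload received from a PAN gateway."""
        self.readings[device] = payload

    def uplink_report(self):
        """Snapshot handed to the WAN (GPRS) gateway for the LIS centre."""
        return dict(self.readings)

master = PanMaster()
master.on_pan_frame("rfid_reader", ["TAG-0001", "TAG-0002"])  # cargo-door read-out
master.on_pan_frame("op_env_sensor", {"temp_c": 4.1})         # container condition
report = master.uplink_report()
```

Whether the per-device links are Bluetooth, Zigbee, or a wired reader interface is hidden behind the gateways, which is the flexibility the PAN organization is meant to provide.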


Accordingly, a modern logistic vehicle will be equipped with various user interface devices and sensors. Recent PAN mobile communication technologies provide low-cost, low-power, ad hoc connections, making the installation of such devices affordable and flexible. Figure 5 shows a PAN organization in the logistic vehicle operating environment. In this organization, a GPRS communication device works as the WAN gateway connecting the vehicle to the logistic information centre. The WAN gateway has an associated PAN gateway that works as the PAN master, communicating with the other PAN gateways that transfer information from the RFID reader, the operating environment sensors (the Op-Env-Sensor object), and the vehicle operator's handheld mobile gadget used for other delivery information collection activities. The selection of Bluetooth (Bisdikian, 2001), Zigbee (Callaway et al., 2002), or RFID and its reader (Weinstein, 2005) will depend on the attributes of the information transferred and on the availability and cost of the devices to be integrated.

Conclusion and further work
This work proposed the architecture for a real-world logistic fleet management system, the RTFMS, which can serve as a reference architecture for real-time logistic fleet management design. The major components of the RTFMS have been described in UML use cases to facilitate the reuse of this design. The RTFMS architecture has been illustrated with information flows showing the routes of messages and of service requests and responses, and the timing considerations for each route have been clearly identified and addressed. The considerations for the selection of vehicle-borne data terminals and mobile communication technologies have been presented with the associated technology profiles.
This reference architecture has been condensed from field experience in the actual deployment of mobile fleet management systems to the vehicle fleets of some major logistic service providers in Taiwan, where more than 3,000 such vehicles currently operate daily island-wide. Using the RTFMS as a domain

[Figure 5. PAN organization on a logistic vehicle: the WAN gateway's PAN-Master links via PAN gateways to the RFID reader, the Op-Env-Sensor, the barcode reader, and the operator interface used by the vehicle driver]
reference system architecture, a fleet management system architect can assess and postulate the requirements for physical components, system scale, and system performance. The real-time logistic fleet management system provides real-time data services for all stakeholders in the chain of logistic service providers and consumers. Since all these service interactions are based on the internet infrastructure, future work in this direction would investigate composing the logistic fleet management services with other supply chain services from the aspect of service-oriented architecture (Alonso and Casati, 2005; Perrey and Lycett, 2003; Baglietto et al., 2002) to promote further service integration. This paper described the adoption of wireless WAN and short-range vehicle-borne PAN technologies; the developments in mobile technologies are changing the shape of the logistic distribution process, and their progress deserves our continuous attention.

References
Alonso, G. and Casati, F. (2005), "Web services and service oriented architectures", Proceedings of the 21st International Conference on Data Engineering (ICDE 2005).
Baglietto, P., Maresca, M., Parodi, A. and Zingirian, N. (2002), "Deployment of service oriented architecture for a business community", Proceedings of the 6th International Conference on Enterprise Distributed Object Computing (EDOC'02), pp. 293-304.
Bisdikian, C. (2001), "An overview of the Bluetooth wireless technology", IEEE Communications Magazine, Vol. 39 No. 11, pp. 86-94.
Booch, G., Rumbaugh, J. and Jacobson, I. (1999), The Unified Modeling Language User Guide, Addison-Wesley, Chicago, IL.
Callaway, E. et al. (2002), "Home networking with IEEE 802.15.4: a developing standard for low-rate wireless personal area networks", IEEE Communications Magazine, Vol. 40 No. 8, pp. 70-7.
ETSI (1999), AT Command Set for GSM Mobile Equipment (ME) (GSM 07.07 Version 4.4.1), Digital Cellular Telecommunications System (Phase 2), European Telecommunications Standards Institute.
Katz, D., Lukasiak, T., Gentile, R. and Meyer, W. (2006), "Design your own VoIP solution with a Blackfin processor – add enhancements later", Analog Dialogue 40-04, April, available at: www.analog.com/analogdialogue
Perrey, R. and Lycett, M. (2003), "Service-oriented architecture", Proceedings of the 2003 Symposium on Applications and the Internet Workshops (SAINT'03 Workshops), IEEE Press, Piscataway, NJ, p. 116.
Sarikaya, B. (2000), "Packet mode in wireless networks: overview of transition to third generation", IEEE Communications Magazine, Vol. 38 No. 9, pp. 164-72.
See, W.B. and Chen, S.J. (2003), "An affordable dispatching system", Proceedings of the 6th Asia-Pacific Intelligent Transportation System (ITS) Forum, Taipei, Taiwan, p. 283.
See, W.B., Hsiung, P.A., Lee, T.Y. and Chen, S.J. (2002), "Modular mobile dispatching system (MMDS) and logistics", Proceedings of the 2002 Annual Conference on National Defense Integrated Logistics Support, Taipei, Taiwan, pp. 365-71.
See, W.B., Yang, J.Y., Hsiung, P.A. and Chen, S.J. (2003), "Multiple-protocol mobile data terminal for logistic dispatching applications", Proceedings of the 2003 Annual Conference on National Defense Integrated Logistics Support, Taoyuen, Taiwan, pp. 348-53.


Varma, V.K., Wang, K.D., Chua, K.C. and Paint, F. (2003), "Integration of 3G wireless and wireless LANs", IEEE Communications Magazine, Vol. 41 No. 11, pp. 72-3.
Vrdoljak, M., Vrdoljak, S.I. and Skugor, G. (2000), "Fixed-mobile convergence strategy: technologies and market opportunities", IEEE Communications Magazine, Vol. 38 No. 2, pp. 116-21.
Weinstein, R. (2005), "RFID: a technical overview and its application to the enterprise", IEEE IT Professional, May/June, pp. 27-33.
Wisley, D., Eardley, P. and Burness, L. (2002), IP for 3G: Networking Technologies for Mobile Communications, Wiley, New York, NY.

Corresponding author
Win-Bin See can be contacted at: [email protected]


The aggregation for enterprise distributed databases: a case study of the healthcare national immunization information system in Taiwan

Ruey-Kei Chiu
Department and Institute of Information Management, Fu-Jen Catholic University, Taipei, Taiwan

Received September 2006
Revised February 2007
Accepted April 2007

S.C. Lenny Koh Supply Chain Management Research Group, Management School, University of Sheffield, UK, and

Chi-Ming Chang
Center for Disease Control, Department of Health, Taipei, Taiwan

Abstract
Purpose – The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.
Design/methodology/approach – The work is based on a case study of enterprise distributed database aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication aggregated the distributed databases into the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme, but update transactions were only allowed on the distributed databases.
Findings – It was found that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardization of message exchange between the distributed and central databases.
Research limitations/implications – The transaction records are stored and transferred in a standardized XML format. This is more time-consuming in record transformation and interpretation, but it offers higher transportability and compatibility across platforms in data refreshment with equal performance. The distributed database designer should manage these issues as well as assure the quality.
Originality/value – The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML messages for transaction exchange.
Keywords Distributed databases, Health services, Information systems, Taiwan
Paper type Research paper

Journal of Manufacturing Technology Management
Vol. 18 No. 7, 2007, pp. 889-903
© Emerald Group Publishing Limited 1741-038X
DOI 10.1108/17410380710817318

1. Introduction
In this digital era, as organizational structures shift from centralization to decentralization and as modern digital firms increasingly use the emerging technologies of computer networks and databases to build enterprise-wide information systems, enterprise databases have become widespread, stored at a number of sites. Each site logically consists of a single, autonomous data processing unit, while the data processing units at different sites are interconnected by a computer network. Sheth and Larson (1990) indicated that many organizations need to deal with distributed, pre-existing, heterogeneous, and autonomous databases in this age of explosive computerization. They need methods that incorporate evolutionary database expansion to promote data sharing among users, applications, and system development. Amoroso et al. (1993) surveyed the distributed database management issues encountered by large enterprises in the USA. They found that managing and manipulating distributed data was an inevitable problem during the implementation of enterprise information systems. This calls for a distributed database management methodology and technology that allows the enterprise to carry out the integration of such databases. Modern database and application designers need to consider how to effectively manage an enterprise's increasing dependence on distributed databases and how to maintain data consistency among those databases.

The objective of this research was to provide a feasible solution for incrementally aggregating distributed databases and maintaining data consistency between the centralized database and its participating distributed databases, for the increasingly dispersed, nationwide databases of the new generation of Taiwan's internet-based National Immunization Information System (NIIS).

2. The history of NIIS
Since the early 1980s, the Department of Health (DOH) of Taiwan has been building a National Health Information Network (HIN). The HIN is a nationwide broadband network system aiming to streamline the information flow that supports the general affairs of public health control, management, and administration.
By leveraging this national health network, nationwide medical and health information interchange and transmission can be conducted more efficiently and effectively. In the 1990s, a stand-alone Primary Health Information System (PHIS) was developed on MS-DOS (Microsoft Disk Operating System) and deployed nationwide to 357 townships' and cities' District Health Centers (DHCs). It took almost five years to complete this deployment and popularize PHIS operations nationwide (Yuan et al., 2003). Owing to the lack of data interchange capability between these stand-alone systems, each DHC issued a yellow paper card to each individual for recording his or her lifetime vaccination history and required demographics such as name, date of birth, correspondence address, registration address, parent/guardian name and occupation, and so on. These hand-written data were then manually keyed into the database at the DHC where each individual was registered. Whenever there was a change of registration address, or a vaccination was given at a health center other than the registered one, the individual's vaccination records were manually transferred by post or facsimile and then entered into the database of the center of his or her registered address. This tedious process is called the "shifting and homing" of vaccination records. This legacy immunization system not only suffered from a lack of intercommunication capability, but the absence of a unique, centralized database to support the real-time process of shifting and homing vaccination records

began to reveal many inefficiencies. Nor could vaccination records be accessed and shared among health centers through the network. As a result, it was very difficult for the immunization authority, the Center for Disease Control (CDC), to keep current and complete information for effective vaccination operation, management, and decision-making. At the beginning of 2002, the Division of Immunization of the CDC set forth a new project plan to redesign an information system to more effectively support the National Immunization Management and Control Program. The new system was officially named the National Immunization Information System, later shortened to NIIS. In reality, development of the NIIS did not formally start until May 2002. The scope was set to include all the functions of the immunization sub-systems already existing in the legacy DOS version, but many new extension functions were added for the new system requirements. In particular, the system was developed with a new data model and system paradigm in which each individual's vaccination records can be remotely retrieved from a centralized database, hosted in the data center of each bureau, when vaccine inoculation is conducted. Meanwhile, the system adopted a web-based client/server architecture with graphical user interfaces and standardized XML-formatted documents for message interchange. As a result, the new system can be fully operated using a web browser with higher system visualization and performance. In addition, transaction data are updated to the centralized database in real time over the HIN as they are generated. The process of shifting and homing vaccination records among bureaus can thus be effectively managed, and data synchronization maintained in real time, by leveraging this centralized database and new system paradigm for NIIS operation and management.
The NIIS data framework with its network operational environment in the first stage of development is conceptually shown in Figure 1 (Yuan et al., 2003). In the first stage of NIIS development, the immunization data records belonging to the district health centers, clinics, and hospitals superintended by a bureau were centralized into the NIIS database located at the bureau and maintained by the bureau's IS specialists. The first stage of system development and testing was completed in the first quarter of 2003. The system was then deployed nationwide, covering 26 bureaus of health in counties and major cities (except Taipei City) and the 375 district health centers superintended by these bureaus of health. After completing the implementation of NIIS at the 375 DHCs in August 2003, the CDC of Taiwan set out to build an integrated central database for its widespread distributed immunization databases located at the 26 bureaus of health (BOH). By doing so, the demand for agile report generation and ad hoc queries for efficiently handling increasingly diversified and internationalized immunization affairs could be effectively supported. The development was also intended to respond to the various data demands of health administrative staff and decision makers in the DHCs, CDC, and DOH, of healthcare application developers, and of public health informatics researchers. As a whole, the main goals of building a national centralized immunization database were set forth as follows (Yuan et al., 2003):
. to enhance the availability and accessibility of nationwide vaccination data;
. to improve the efficiency of daily vaccination operation and management;
. to enhance the coverage rate of vaccinations at district health centers and clinics;

Figure 1. NIIS data framework with its network operational environment

. to analyze people's acceptance behaviors and intentions in rejecting vaccination; and
. to provide a complete immunization data source for nationwide data statistics and analysis, report and information generation.

The challenges of building such a centralized database rely heavily on the technologies and methodologies, database frameworks, and data collecting techniques, together with the data delivery process to support data aggregation from a variety of distributed databases, and on a data refresh model to maintain data consistency among the databases (Lenz and Reichert, 2007).

3. Methodologies for database aggregation and refreshing
Distributed database aggregation and integration need a well-defined integrated data schema and methods to maintain data consistency and synchronization between the distributed databases and the integrated/aggregated database (Brazhnik, 2007). In this section, prior studies on distributed databases and methodologies for aggregation and integration, as well as the methodologies and mechanisms for distributed database refresh models, are reviewed. This led to the establishment of a feasible data framework and refresh model for the case implementation.

3.1 Distributed databases and methodologies for aggregation
In terms of distributed databases, Connolly and Begg (1998) defined a distributed database system as a logically interrelated collection of shared data which is physically distributed across different locations over a computer network. Franklin et al. (1997) also noted that a distributed database system can be logically viewed as a single database but is physically located at different sites, allowing users to query and access data through the network. The main issues in managing a distributed database system include distributed database design, semantic data and transparency control, distributed transaction management, concurrency control and reliability, and query processing and optimization (Özsu and Valduriez, 1999). Each site's database is controlled by its own autonomous database management system, and keeping the data consistent is a necessity in a distributed database system. Thomas et al. (1990) indicated that a well-designed mechanism should be able to handle seamless integration of widespread enterprise distributed databases located at different sites by leveraging database systems, operating systems, and data communications. They suggested that the key to successful integration of heterogeneous databases was to have standard language tools and protocols to manipulate the distributed databases, and that a deliberately integrated database schema and a specification for mapping data from the distributed databases to the integrated database should exist before integration. Sheth and Larson (1990) proposed an integrated reference data schema that can integrate heterogeneous and autonomous database systems, whereby both local applications and global applications accessing multiple database systems located at different sites can be effectively supported. This reference schema is generally accepted as the basic structure of federated database systems, or at least as a point of comparison for other specific schemas. They suggested that an integrated data schema may be built by transferring the database schemas located at different sites for use as a reference




in defining the schema of the integrated database. Based on this reference schema, the integrated database designer may define an integrated database schema for the purpose of distributed database systems integration. Once the integrated database schema is defined and implemented, the database designer further needs to set up a filtering and transforming mechanism to conduct data extraction from the distributed databases, followed by data transformation, before the final transfer of data to the integrated database.

3.2 Refresh model for distributed database aggregation
Traiger et al. (1982) were among the early researchers to explore how to handle transaction management and maintain data consistency for distributed database aggregation and integration. They proposed a single sequential execution model extended from the notions of system schedule and system clock. It can handle various kinds of transparency among distributed database replications, such as location transparency, replica transparency, concurrency transparency, and failure transparency. Adiba and Lindsay (1980) proposed an efficient refresh algorithm for database snapshots, marking each record kept in the transaction log with a timestamp during the data refresh process between snapshots of distributed databases and their source databases. A snapshot presents part of a database at a remote site through selective data replication. In addition, Chiu and Boe (1995) proposed a special-purpose refresh mechanism combining a timestamp mechanism, transaction logs, and a push algorithm to refresh materialized views at client sites from source databases at the server site in a client-server environment. They claimed that the effectiveness of their proposal was no worse than any commercial solution available at the time. Franklin et al. (1997) stated that in order to ensure data consistency among the different sites involved in the refresh process in a client/server database system, a sound and well-structured model is needed to control data exchange and access between clients' and servers' refresh models and applications. They proposed the use of a data buffer and a lock algorithm to effectively conduct data exchange between the two sides. Gamal-Eldin et al. (1988) mentioned that it is particularly important to consider both the data retrieval strategy and the refresh algorithm in order to maintain data consistency and integrity in a distributed database system. In turn, Mao and Chu (2007) studied a phrase-based vector space model for automatic retrieval of medical documents. Pacitti et al. (2001) proposed a specific lazy replication scheme, called lazy master replication, to enforce consistency among replicated database copies. In this scheme, one replica copy is designated as the primary copy, stored at a master node, and update transactions are only allowed on that replica. The scheme is characterized by ownership, configuration, transaction model, propagation, and refreshment. The principle of the algorithm is to let refresh transactions wait for a certain "deliver time" before being executed at a node holding replicas. They claimed that this proposal can effectively maintain data consistency while minimizing the performance degradation due to the synchronization of refresh transactions during refreshment. They also indicated that the performance advantage of the lazy replication scheme has made

lazy replication widely accepted in practice, for example in data warehousing and collaborative applications on the web (Anzbock and Dustdar, 2005).

4. The database framework for NIIS database aggregation
Considering effective data management and control as well as the data access efficiency of local users, the NIIS databases for each county or large city shown in Figure 1 are intentionally designed with a hybrid strategy. An entire county population's vaccination records are centralized into a NIIS database residing at the county's bureau, but this database is partially replicated to each district health center for local access, mainly for vaccine inventory control and management. The bureau's database is the primary copy; hence, the daily vaccination transactions are updated directly on this primary database. Both sides use the same type of database management system to maintain consistency between each data replica and the central primary database. To aggregate the bureaus' distributed databases into the national centralized immunization database, the 26 bureaus' vaccination databases are selectively replicated to build the NIIS central database located at the CDC site. In other words, the data and data schema for the centralized database are selectively replicated from those of the bureaus' databases. A scheme similar to the one proposed by Pacitti et al. (2001) is applied in building this centralized database. By leveraging the implementation of this central database, it is expected that the management of nationwide immunization affairs and the support of research can be achieved more efficiently, while the data access performance and the storage cost invested at all sites remain near optimal. The NIIS data framework, logically representing its hierarchical and nationally distributed databases, is shown in Figure 2.

Figure 2. NIIS data framework


As shown in Figure 2, each DHC also maintains a local database hosting the vaccine inventory and the center's personal demographics data, replicated from its bureau's database, to keep track of each individual's vaccination schedule. Apart from this, the vaccination-relevant data records for a county or major city are kept in the centralized bureau database. Both databases are implemented in Microsoft SQL Server 2000 running on Windows Server 2003. Each local NIIS user at a DHC may access vaccination records online through a broadband network line linked to the HIN and then to the bureau during daily vaccination operations. The nationwide NIIS centralized database is implemented and hosted in the CDC's data center on an Oracle Database Server running on the Sun Solaris Enterprise System platform. Basically, selective data replication is adopted to aggregate the distributed databases into the central database. The database schemas defined for the NIIS centralized database are partially replicated from the bureaus' database schema, including more than 20 tables, but they are slightly remodeled using join and projection operations. The schemas of this centralized database are defined and implemented before the data are initially loaded. Once the centralized database schemas are implemented, the database designer can start data extraction from the distributed databases, followed by data transformation, before the data are loaded into the database, by leveraging an existing commercial extraction, transformation, and loading tool, commonly called an ETL tool. This process is done one by one for each distributed database. The reason we chose an existing ETL tool instead of developing our own is that it provides more efficient and more sophisticated methods for cleaning data and filtering unneeded records before data are loaded into the centralized database.
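The extract-transform-load flow just described can be illustrated with a minimal Python sketch. The table name, field names, and the cleaning rule below are illustrative assumptions, not the actual NIIS schema; in the real implementation a commercial ETL tool performs this work.

```python
# Minimal ETL sketch for aggregating one bureau's records into a
# central store. Table and field names are illustrative, not the real
# NIIS schema; a commercial ETL tool replaces all of this in practice.

def extract(bureau_rows):
    """Extract: pull raw vaccination rows from a bureau database."""
    return list(bureau_rows)

def transform(rows, bureau_id):
    """Transform: drop incomplete records (cleaning/filtering) and
    append the bureau identifier, mimicking the join/projection
    remodelling of the central schema."""
    cleaned = [r for r in rows if r.get("record_id") and r.get("vaccine")]
    return [{**r, "bureau_id": bureau_id} for r in cleaned]

def load(central_db, rows):
    """Load: append the transformed rows to the central table."""
    central_db.setdefault("vaccination", []).extend(rows)
    return len(rows)

central = {}
bureau_rows = [
    {"record_id": 1, "vaccine": "HepB"},
    {"record_id": None, "vaccine": "MMR"},     # incomplete: filtered out
]
loaded = load(central, transform(extract(bureau_rows), bureau_id="B01"))
print(loaded)                                  # 1
print(central["vaccination"][0]["bureau_id"])  # B01
```

The filtering step in `transform` stands in for the data cleaning that the paper attributes to the ETL tool; each bureau's database would be processed through this pipeline in turn.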
The use of an existing tool not only accelerates the establishment of the centralized database but also enhances the quality of the data stored in it, although it may increase the implementation cost.

5. The data refresh model for NIIS database aggregation
In this section, the data refresh model that maintains data consistency between the NIIS central database and its distributed databases located at the bureaus of health is outlined. The data refresh model is designed and implemented on the assumption that the distributed database systems participating in the aggregation activity are heterogeneous. The algorithm built for the data refresh model is analogous to a lazy replication scheme (Pacitti et al., 2001), but update transactions are only allowed on the distributed databases.

5.1 The refresh model
The refresh model with its major components is shown in Figure 3. As can be seen from Figure 3, the central database system uses the central log file to record the data for refreshment. Similarly, at each bureau's database system site, a bureau log file is installed and maintained to keep the transaction records which have been changed but not yet refreshed to the central database. Two major types of refresh module are implemented for this model. One type (Refresh Module A) is installed at each bureau site as a data collection agent; it is periodically triggered by the other type (Refresh Module B), installed at the central site, to retrieve the transaction records from each bureau's transaction

Figure 3. The major components of the NIIS refresh model

log file. The transactions retrieved are then passed to the data transformation service (DTS), which is also activated by Refresh Module B. The data transformation service then takes responsibility for conducting the data extraction, transfer, and loading between the two log files. In addition, Refresh Module A is responsible for converting each transaction into XML format and logging it into the bureau's log file, while Refresh Module B is responsible for reading the transaction records from the central log file and activating the refresh stored procedures to conduct the precise refresh process according to the transaction type specified in each transaction record. The process and data flow for data refresh are conceptually illustrated in Figure 4. As can be seen in Figures 3 and 4, Refresh Module A at the bureau site also plays the role of storing transaction records in the bureau's log file, represented in XML

Figure 4. The processes and data flow for data refresh



format for each transaction. Two transaction records are stored for each transaction: one recording the data before the transaction occurred and one recording the data after it. Both are stored in XML format so that they can be transmitted over an open network during the refresh. The contents of each stored transaction record include the transaction data, transaction time, table name, and transaction type (i.e. insertion, deletion, or update). To start the process of central database refreshment, Refresh Module B at the central database site periodically triggers Refresh Module A at each bureau site at a prescribed interval to retrieve the transaction records stored in each bureau's log file. This retrieval is controlled by a timestamp algorithm (Lindsay et al., 1986) to avoid duplicate retrieval. The transaction records collected from each bureau's log file are then passed through the process of data transformation, which is activated by Refresh Module A as well. Data transformation includes three activities: the extraction of the transaction record, the transformation and transfer of the transaction record to the central database over the HIN of Taiwan, and the loading of the transaction record into the central log file at the central database site.

5.2 The data structure of log files
The data structure of a transaction record in the central log file includes the fields OID, TableName, RecordID, BureauID, OldData, NewData, TranType, and TimeStamp, which are described in Table I. The data structure of a bureau's log file is similar, except that the field BureauID does not exist. OldData and NewData contain all fields from the source table. The bureau identification (BureauID) is appended to each transaction record to identify the bureau to which it belongs while it resides in the central database.
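As a concrete illustration, a transaction's before- and after-images might be serialized to XML along the following lines. The element names, attribute names, and field values here are illustrative assumptions; the paper specifies only that the transaction data, transaction time, table name, and transaction type are stored in XML.

```python
# Sketch of serializing one update transaction as an XML log record.
# Element and attribute names are illustrative assumptions; the paper
# only specifies that before- and after-images, table name, time, and
# transaction type are stored in XML format.
import xml.etree.ElementTree as ET

def make_log_record(table, tran_type, old_data, new_data, tran_time):
    rec = ET.Element("Transaction", TableName=table,
                     TranType=tran_type, TranTime=tran_time)
    for tag, data in (("OldData", old_data), ("NewData", new_data)):
        image = ET.SubElement(rec, tag)   # before/after image of the row
        for field, value in data.items():
            ET.SubElement(image, field).text = str(value)
    return ET.tostring(rec, encoding="unicode")

xml_rec = make_log_record(
    table="Vaccination", tran_type="xp_update",
    old_data={"RecordID": "123", "Dose": "1"},
    new_data={"RecordID": "123", "Dose": "2"},
    tran_time="2003-08-01T10:00:00")
print(xml_rec)
```

A record of this shape is what Refresh Module A would append to the bureau's log file for each committed transaction.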
In addition, there is a timestamp for each record, used to note the time it was successfully retrieved and loaded into the central log file. The TimeStamp in each log file is also used when periodically maintaining the contents of the log files. In this research, we adopt the timestamp control algorithm proposed by Lindsay et al. (1986) for maintaining the bureau's log file. The complete process of retrieving transaction records from each bureau site, and each stored refresh procedure with respect to the transactions of

Table I. The data structure of each transaction record in the central log file

Field name  Description
OID         The ordering identification, automatically generated when a transaction record is inserted
TableName   The name of the table to which the transaction record belongs
RecordID    The identification of the record to be refreshed in the target table
BureauID    The bureau identification appended to each transaction record to identify the bureau to which it belongs
OldData     The content of the data record before refreshing, recorded in XML format
NewData     The content of the data record after refreshing
TranType    The transaction type: insertion (xp_insert), updating (xp_update), or deletion (xp_delete)
TimeStamp   A note of the most recent time the record was retrieved and maintained for central database refreshment

insertion, deletion, and updating are introduced in the following sections. Each stored procedure is implemented using standard Structured Query Language (SQL); therefore, it is not limited to the DBMS platform on which it is executed, which increases the transportability of this refresh system.

5.3 The collection of transaction records from bureau sites
During the collection of transaction records at a bureau site, each transaction record retrieved is tagged with the bureau's BureauID to identify its source, so that during central database refreshment the transaction can be correctly refreshed into the corresponding bureau's tables. The algorithm for this data collection process from each bureau site is shown in Figure 5.


5.4 Central database refreshment To do the central database refreshment, the central database system periodically activates its Refresh Module B to read the transaction records one by one stored in its log file. For each refresh type of transaction record (TranType), the central database system executes xp_insert, xp_update, xp_delete stored procedures to refresh the database until the end of transaction is encountered. The process of central database refreshment is shown in Figure 6. Process of collecting transaction records from each bureau site Step 1: TRIGGER Refresh Module A at each bureau site for data retrieval at a prescribed interval. Step 2: RETRIEVE the transaction record from a bureau’s log file from the last retrieval timestamp until the end of current log file. FOR each transaction record retrieved, APPEND BureauID to indicate its bureau source. ACTIVATE the Data Transformation Service to EXTRACT the required fields from each transaction record. TRANSFER the extracted record to the central site. LOAD each transaction record to the central log file by appending when the transaction record is successfully completed, otherwise redo for a prescribed number of times before stopping. RETURN a status message to indicate the status of ? receiving?. BACKUP the transaction record and MODIFY Timestamp to note the time if a success, otherwise REDO for a prescribed number of times before stopping. Step 3: SET Refresh Module A to idle state waiting for next activation.

Figure 5. The process of collecting transaction records
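A simplified sketch of the Figure 5 collection loop, with the log-record layout and the timestamp watermark reduced to in-memory structures (all names and the record layout are illustrative assumptions):

```python
# Sketch of the Figure 5 collection step: Module B triggers Module A at
# each bureau, which returns all log records newer than the last
# retrieval timestamp; each record is tagged with its BureauID before
# being appended to the central log. Layout is a simplified assumption.

def collect_from_bureau(bureau_log, bureau_id, last_ts):
    """Refresh Module A: return records newer than the watermark,
    tagged with the bureau's identifier."""
    return [dict(r, BureauID=bureau_id)
            for r in bureau_log if r["TimeStamp"] > last_ts]

def refresh_cycle(bureaus, central_log, watermarks):
    """Refresh Module B: poll every bureau, load the central log, and
    advance each bureau's timestamp watermark on success."""
    for bureau_id, log in bureaus.items():
        new = collect_from_bureau(log, bureau_id,
                                  watermarks.get(bureau_id, 0))
        central_log.extend(new)              # load step of the DTS
        if new:
            watermarks[bureau_id] = max(r["TimeStamp"] for r in new)

bureaus = {
    "B01": [{"OID": 1, "TranType": "xp_insert", "TimeStamp": 10},
            {"OID": 2, "TranType": "xp_update", "TimeStamp": 20}],
}
central_log, marks = [], {"B01": 10}   # OID 1 was collected previously
refresh_cycle(bureaus, central_log, marks)
print(len(central_log), marks["B01"])  # 1 20
```

Advancing the watermark only after a successful load mirrors the paper's use of the timestamp to avoid duplicate retrieval.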


Main process: central database refreshment
Step 1: ACTIVATE Refresh Module B.
Step 2: READ the transaction records in the central log file one at a time until end of file.
  FOR each record read:
    CONVERT the XML-formatted transaction record into plaintext and put it into a temporary PlainTextRecord.
    EXECUTE the stored procedure xp_insert, xp_update, or xp_delete depending upon the record's transaction type.
    BACKUP the transaction record and MODIFY TimeStamp to note the time if successful; otherwise ROLLBACK and REDO for a prescribed number of times before stopping.
Step 3: SET Refresh Module B to an idle state waiting for the next activation.

Figure 6. The main process: central database refreshment
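A minimal sketch of this refreshment loop, with the xp_insert/xp_update/xp_delete stored procedures stood in for by dictionary operations (the table model and key choice are illustrative assumptions, not the actual NIIS schema):

```python
# Sketch of the Figure 6 refresh step: each central-log record is
# applied to the target table according to its TranType, standing in
# for the xp_insert / xp_update / xp_delete stored procedures. Tables
# are modelled as dicts keyed by (BureauID, RecordID), an illustrative
# assumption only.

def apply_record(tables, rec):
    table = tables.setdefault(rec["TableName"], {})
    key = (rec["BureauID"], rec["RecordID"])
    if rec["TranType"] == "xp_insert":
        table[key] = rec["NewData"]
    elif rec["TranType"] == "xp_update":
        table[key] = rec["NewData"]          # overwrite with after-image
    elif rec["TranType"] == "xp_delete":
        table.pop(key, None)

tables = {}
log = [
    {"TableName": "Vaccination", "BureauID": "B01", "RecordID": 7,
     "TranType": "xp_insert", "NewData": {"Dose": 1}},
    {"TableName": "Vaccination", "BureauID": "B01", "RecordID": 7,
     "TranType": "xp_update", "NewData": {"Dose": 2}},
]
for rec in log:                              # read the log one record at a time
    apply_record(tables, rec)
print(tables["Vaccination"][("B01", 7)])     # {'Dose': 2}
```

Keying on (BureauID, RecordID) reflects the paper's point that the bureau identifier is needed so each transaction refreshes the correct bureau's rows.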

Because the transaction records are represented in XML format, when the refresh process is conducted each transaction record retrieved from the central log file must have its content converted from XML into plaintext format by executing the stored procedure sp_xml_preparedocument in Refresh Module B. The plaintext transaction record is stored in a temporary area called PlainTextRecord for use by the subsequent refresh process. After the completion of the refreshment, the content of PlainTextRecord is cleared.

6. Benchmark and discussion
In order to measure the efficiency of this approach, we also conducted a benchmark test against Microsoft SQL replication in an experimental client-server environment set up on an intranet. The architecture consists of one database server and a number of client-site databases representing distributed databases. All client databases have the same schemas, with numbers of data records mimicking the NIIS databases at distributed sites. The database schema on the server is mapped from the clients', and the data records are aggregated from all client databases. Microsoft SQL Server 2000 mainly handles replicated database synchronization among replication sites using Transact-SQL. It includes two major modes for maintaining transactional consistency. One is immediate transactional consistency, where data consistency is maintained immediately after a transaction occurs. The other is latent transactional consistency, which allows a delay between the transaction occurrence and the enforcement of transactional consistency. Latent transactional consistency is analogous to the lazy replication scheme designed in this research; therefore, the benchmark test compares only the use of latent transactional consistency for database refresh. The benchmark test aims to compare the execution times involved in two major areas of database refreshment.
One area is “transaction data collection at each bureau site”. The other is “central database refreshment”. The numbers of

transaction records are selected as 50,000, 100,000, 150,000, and 200,000 at each client site for these two tests, in order to compare the average refresh times between the latent transactional consistency replication functions provided by SQL Server and the database refresh model proposed in this research. The results of the test, in terms of total execution time versus different numbers of transaction records, are shown in Table II. As we can observe from Table II, the total execution time of MS SQL Server is better than that of our research model when there are fewer than 100,000 records; when there are more than 100,000 records, our model shows a better execution time. Nevertheless, there are no significant differences between MS SQL Server replication and the methods developed in our research model. In our research model, the collection of the bureaus' transactions is triggered remotely by the central refresh module. The transactions are transferred and stored in standardized XML format so that they have high transportability across different database platforms. Consequently, during central database refreshment, the refresh module also has to interpret each XML-formatted transaction record retrieved from the transaction log files before it can be used for refreshment. In comparison with SQL Server replication, the logic of our research model is therefore more complicated, but also more open and transportable. Although the execution time of our research model appears worse than that of SQL Server replication when the number of records is low, our model shows better results once the number exceeds a certain threshold, around 150,000 records.

Enterprise distributed databases 901

7. Conclusions
We have presented a feasible data framework and database refresh model to effectively support the aggregation of distributed databases for the national immunization information system (NIIS) of Taiwan. The refresh model is designed on the assumption that the database management systems used to implement the central database and each of the distributed databases may differ. A latent transactional consistency algorithm is applied in the refresh system because immediate synchronization between the central database and the distributed databases is not required in NIIS. The transaction data for database refreshment are formatted as standardized XML files, stored in the log files at the local and central sites, and exchanged between the two sites during refreshment. The refresh is periodically activated by the refresh module installed at the central site. Another type of refresh module, installed at each bureau site, is triggered by

Table II. Comparisons in total execution times (seconds) vs different record numbers

No. of records                                         50,000  100,000  150,000  200,000
Latent transactional consistency of MS SQL Server        2.96     4.15     5.10     6.16
Refresh method of this research                          3.41     4.30     5.04     5.93
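The crossover reported around 150,000 records can be made explicit by fitting a simple linear cost model (fixed overhead plus per-record cost) to the Table II figures. This is an illustrative analysis of the published numbers, not part of the original benchmark.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Table II data: record counts vs total execution time (seconds).
counts = [50_000, 100_000, 150_000, 200_000]
sql_times = [2.96, 4.15, 5.10, 6.16]    # MS SQL Server latent consistency
ours_times = [3.41, 4.30, 5.04, 5.93]   # refresh method of this research

a_sql, b_sql = linear_fit(counts, sql_times)
a_ours, b_ours = linear_fit(counts, ours_times)

# The proposed model has a higher fixed overhead (XML interpretation) but a
# lower per-record cost, so the lines cross where the total times are equal.
crossover = (a_ours - a_sql) / (b_sql - b_ours)   # in records
```

The fit puts the break-even point near 140,000 records, consistent with the threshold of roughly 150,000 observed in the discussion above.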

JMTM 18,7


the refresh module at the central site. Whenever a bureau's refresh module is triggered to conduct a refresh, the transaction records logged in the bureau's log file are collected and loaded into the central log file. The collection is performed by a specially designed data transformation service comprising three activities: extracting each transaction record from the bureau's log file, transferring it from the bureau site to the central site over the health information network, and loading it into the central log file. A timestamp mechanism is applied to effectively control transaction retrieval from the two log files and to maintain their contents.

Although this data system model is designed specifically for the aggregation of Taiwan's NIIS databases, we believe it can be applied to similar implementations without difficulty, because the proposed algorithm is not restricted to a specific database management system and a standardized XML message is used for transaction exchange. However, the effectiveness of a database refresh model for distributed databases depends not only on the design of the refresh algorithm but also on the performance of the computer networks and the reliability of the information technologies employed to support the implementation and its subsequent operation. The reliability of this data system model has not yet been widely verified; the manager of the NIIS database must therefore address these issues to assure the performance, reliability, and feasibility of the system before it is widely deployed.
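The timestamp-controlled collection summarized in the conclusions can be sketched as follows. This is a simplified, hypothetical illustration: in-memory lists stand in for the bureau and central log files, and the record layout is assumed.

```python
def collect_and_load(bureau_log, central_log, last_refresh_ts):
    """Pull every bureau transaction newer than the watermark into the
    central log, then advance the watermark (the timestamp mechanism)."""
    new_records = [r for r in bureau_log if r["ts"] > last_refresh_ts]
    central_log.extend(new_records)       # load into the central log file
    return max((r["ts"] for r in new_records), default=last_refresh_ts)

# Hypothetical bureau log: timestamped transaction records.
bureau_log = [
    {"ts": 1, "stmt": "INSERT ..."},
    {"ts": 2, "stmt": "UPDATE ..."},
    {"ts": 3, "stmt": "DELETE ..."},
]
central_log = []
watermark = 1                             # already refreshed up to ts = 1
watermark = collect_and_load(bureau_log, central_log, watermark)
```

Only records newer than the watermark travel to the central site, which is what lets the refresh run periodically (lazily) instead of synchronizing on every transaction.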

Corresponding author
Ruey-Kei Chiu can be contacted at: [email protected]

