Optimization and Inverse Problems in Electromagnetism
ISBN 978-1-78441-187-9, 978-1-78441-186-2
English, 360 pages, 2014

This special issue presents a selection of papers related to the 12th International Workshop on 'Optimization and Inverse Problems in Electromagnetism' (OIPE 2012).

COMPEL
The international journal for computation and mathematics in electrical and electronic engineering
ISSN 0332-1649
Volume 33 Number 3 2014

Guest Editors: Professor Luc Dupré and Dr Guillaume Crevecoeur

Contents

Editorial advisory board

SPECIAL SECTION: OIPE 2012

Guest editorial

Topology optimization of rotor poles in a permanent-magnet machine using level set method and continuum design sensitivity analysis
Piotr Putek, Piotr Paplicki and Ryszard Pałka

High-speed functionality optimization of five-phase PM machine using third harmonic current
Jinlin Gong, Bassel Aslan, Frédéric Gillon and Eric Semail

Bi-objective optimization of induction machine using interval-based interactive algorithms
Dmitry Samarkanov, Frédéric Gillon, Pascal Brochet and Daniel Laloy

Framework for the optimization of online computable models
Benoit Delinchant, Frédéric Wurtz, João Vasconcelos and Jean-Louis Coulomb

A modified lambda algorithm for optimization in electromagnetics
Piergiorgio Alotto, Leandro dos Santos Coelho, Viviana C. Mariani and Camila da C. Oliveira

Optimal household energy management using V2H flexibilities
Ardavan Dargahi, Stéphane Ploix, Alireza Soroudi and Frédéric Wurtz

Optimization of EMI filters for electrical drives in aircraft
Baidy Touré, Laurent Gerbaud, Jean-Luc Schanen and Régis Ruelland

Adaptive level set method for accurate boundary shape in optimization of electromagnetic systems
Kang Hyouk Lee, Seung Geon Hong, Myung Ki Baek, Hong Soon Choi, Young Sun Kim and Il Han Park

Optimal shape design of flux barriers in IPM synchronous motors using the phase field method
Jae Seok Choi, Takayuki Yamada, Kazuhiro Izui, Shinji Nishiwaki, Heeseung Lim and Jeonghoon Yoo

A modified immune algorithm with spatial filtering for multiobjective topology optimisation of electromagnetic devices
Takahiro Sato, Kota Watanabe and Hajime Igarashi

Use of compensation theorem for the robustness assessment of electromagnetic devices optimal design
Alessandro Formisano, Raffaele Fresa and Raffaele Martone

n-level output space mapping for electromagnetic design optimization
Ramzi Ben Ayed and Stéphane Brisset

Multi-level design of an isolation transformer using collaborative optimization
Alexandru C. Berbecea, Frédéric Gillon and Pascal Brochet

Stochastic modeling error reduction using Bayesian approach coupled with an adaptive Kriging-based model
Ahmed Abou-Elyazied Abdallh and Luc Dupré

Radial output space mapping for electromechanical systems design
Maya Hage Hassan, Ghislain Remy, Guillaume Krebs and Claude Marchand

Multiobjective approach developed for optimizing the dynamic behavior of incremental linear actuators
Imen Amdouni, Lilia El Amraoui, Frédéric Gillon, Mohamed Benrejeb and Pascal Brochet

Simple sensitivity calculation for inverse design problems in electrical engineering
Zoran Andjelic

Drive optimization of a pulsatile total artificial heart
André Pohlmann and Kay Hameyer

Ant colony optimization for the topological design of interior permanent magnet (IPM) machines
Lucas S. Batista, Felipe Campelo, Frederico G. Guimarães, Jaime A. Ramírez, Min Li and David A. Lowther

Adaptive unscented transform for uncertainty quantification in EMC large-scale systems
Moises Ferber, Christian Vollaire, Laurent Krähenbühl and João Antônio Vasconcelos

Topology optimization of magnetostatic shielding using multistep evolutionary algorithms with additional searches in a restricted design space
Yoshifumi Okamoto, Yusuke Tominaga, Shinji Wakao and Shuji Sato

REGULAR JOURNAL PAPERS SECTION

Minimum energy control of descriptor positive discrete-time linear systems
Tadeusz Kaczorek

Current spectrum estimation using Prony's estimator and coherent resampling
Michał Lewandowski and Janusz Walczak

Multi-physics optimisation of an energy harvester device for automotive application
Elvio Bonisoli, Francesco Di Monaco, Stefano Tornincasa, Fabio Freschi, Luca Giaccone and Maurizio Repetto

Conformal antennas arrays radiation synthesis using immunity tactic
Sidi Ahmed Djennas, Belkacem Benadda, Lotfi Merad and Fethi Tarik Bendimerad

Model-free discrete control for robot manipulators using a fuzzy estimator
Mohammad Mehdi Fateh, Siamak Azargoshasb and Saeed Khorashadizadeh

ISBN 978-1-78441-186-2
www.emeraldinsight.com

Access this journal online: www.emeraldinsight.com/compel.htm

EDITORIAL ADVISORY BOARD

Professor O. Bíró, Graz University of Technology, Graz, Austria
Professor J.R. Cardoso, University of Sao Paulo, Sao Paulo, Brazil
Professor C. Christopoulos, University of Nottingham, Nottingham, UK
Professor M. Clemens, Helmut-Schmidt University, Hamburg, Germany
Professor J.-L. Coulomb, Laboratoire de Génie Electrique de Grenoble, Saint-Martin-d'Hères, France
Professor X. Cui, North China Electric Power University, Beijing, China
Dr L. Davenport, Advanced Technology Centre, BAE Systems, Bristol, UK
Professor A. Demenko, Poznań University of Technology, Poznań, Poland
Professor E. Freeman, Imperial College, London, UK
Professor K. Hameyer, RWTH Aachen University, Aachen, Germany
Professor N. Ida, University of Akron, Akron, Ohio, USA
Professor A. Kost, Technische Universität Berlin, Berlin, Germany
Professor T.S. Low, Republic Polytechnic, Singapore
Professor D. Lowther, McGill University, Montreal, Quebec, Canada
Professor O. Mohammed, Florida International University, Miami, Florida, USA
Professor A. Razek, Laboratoire de Génie Electrique de Paris, Paris, France
Professor G. Rubinacci, Università di Napoli Federico II, Napoli, Italy
Professor M. Rudan, University of Bologna, Bologna, Italy
Professor M. Sever, The Hebrew University, Jerusalem, Israel
Dr J. Sturgess, ALSTOM Grid Research & Technology, Stafford, UK
Professor W. Trowbridge, Honorary President of the International Compumag Society
Professor T. Tsiboukis, Aristotle University of Thessaloniki, Thessaloniki, Greece
Professor T. Weiland, Technical University of Darmstadt, Darmstadt, Germany
Professor Zi-Qiang Zhu, University of Sheffield, UK

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 p. 709 © Emerald Group Publishing Limited 0332-1649

COMPEL 33,3


Guest editorial
Optimization and inverse problems in electromagnetism

On behalf of the Editorial Board, we are pleased to present a selection of papers related to the 12th International Workshop on "Optimization and Inverse Problems in Electromagnetism" (OIPE 2012), held in Ghent, Belgium, from September 19 to 21, 2012. The aim of the OIPE workshop is to discuss recent developments in optimization and inverse methodologies and their applications to the design and operation of electromagnetic devices. It is intended to provide an occasion for experts in electromagnetism and other areas (e.g. engineering, mathematics, physics), involved in research or industrial activities, to discuss the theoretical aspects and the technical relevance of optimization and inverse problems, in the general framework of innovation in electromagnetic methods and applications. The 12th edition was organized by the Department of Electrical Energy, Systems and Automation within the Faculty of Engineering and Architecture of Ghent University. Among the 107 presentations in the conference program, 65 manuscripts were submitted for peer review. Twenty-one of these papers were selected for publication in the present volume. The high scientific and technical quality of the workshop is well reflected in the quality of the manuscripts contained in this special issue of COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering.
We wish to express our gratitude to the members of the editorial board – Hartmut Brauer, Jean-Louis Coulomb, Paolo Di Barba, Alessandro Formisano, Laurent Gerbaud, Aleš Gottvald, Jens Haueisen, Laurent Krähenbühl, David Lowther, Christian Magele, Iliana Marinova, Raffaele Martone, Maurizio Repetto, Marek Rudnicki, Bruno Sareni, Antonio Savini, Slawomir Wiak, Frédéric Wurtz, Ivan Yatchev – as well as to all reviewers, who provided the necessary volunteer time and expertise to conduct a fair and detailed review, ensuring high publication standards for the selected manuscripts. We would also like to thank Professor Jan Sykulski, Editor-in-Chief of COMPEL, for his kind support during the whole process of preparation of this special issue.

Professor Luc Dupré and Dr Guillaume Crevecoeur
Department of Electrical Energy, Systems and Automation (EESA), Electrical Energy Laboratory (EELAB), Ghent University, Ghent, Belgium

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 p. 710 © Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-10-2013-0333

The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm

Topology optimization of rotor poles in a permanent-magnet machine using level set method and continuum design sensitivity analysis

Piotr Putek
Department of Mathematical Analysis, Ghent University, Ghent, Belgium, and

Piotr Paplicki and Ryszard Pałka
Department of Power Systems and Electrical Drives, West Pomeranian University of Technology, Szczecin, Poland

Abstract
Purpose – In this paper, a numerical approach to topology optimization is proposed for the design of permanent magnet excited machines with improved high-speed features. For this purpose, a modified multi-level set method (MLSM) is proposed and applied to capture the shape of the rotor poles on a fixed mesh using FE analysis. The paper aims to discuss these issues.
Design/methodology/approach – The framework is based on the theories of topological and shape derivatives for the magnetostatic system. During the iterative optimization process, the shape of the rotor poles and its evolution are represented by the level sets of a continuous level set function φ. The shape optimization of the iron and magnet rotor poles is achieved by combining continuum design sensitivity analysis with the level set method.
Findings – To obtain an innovative design of rotor poles composed of different materials, the modified MLSM is proposed. An essential advantage of the proposed method is its ability to handle topology changes on a fixed mesh by nucleating small holes in the design domain, which leads to a more efficient computational scheme than the standard level set method.
Research limitations/implications – The proposed numerical approach to the topology design of the 3D model of a PM machine is based on a simplified 2D model, under the assumption that the eddy currents in both the magnet and iron parts are neglected.
Originality/value – The novel aspect of the proposed method is the incorporation of Total Variation regularization into the MLSM, whose distribution is additionally modified using gradient derivative information, in order to stabilize the optimization process and penalize oscillations without smoothing edges.

Keywords: Topology, Design optimization, Torque, Permanent-magnet machine, Sensitivity analysis, Shape optimization of rotor poles
Paper type: Research paper

1. Introduction
Nowadays, permanent-magnet (PM) machines with a brushless direct-current control system are, owing to their advantages, widely used in applications that require a high power density, such as hybrid electric vehicles, compressors or wind renewable energy systems (Gieras and Wing, 2008; Hughes, 2006; May et al., 2011). In this type of motor, designers aim above all to minimize torque fluctuations, which are a main source of vibration, noise and speed fluctuations. In PM machines, the cogging torque (CT) results primarily from the interaction between the stator-slot-driven air-gap permeance harmonics and the magnet-driven magnetomotive force harmonics (Jahns and Soong, 1996). Moreover, the cogging torque and the harmonic content of the back-electromotive force (EMF), besides the saturation of the magnetic circuit and converter-related issues (Chen et al., 2002), are well-known sources of torque ripple (TR) in the developed electromagnetic torque. This, in consequence, may significantly affect machine performance. Therefore, an effective design procedure is desired for high-performance, low-torque-ripple applications, particularly electric vehicles.
Various techniques have been reported in the literature to reduce the cogging torque and to minimize the harmonic content (Bianchi and Bolognani, 2002; Li and Slemon, 1988; Favre et al., 1993; Zhu and Howe, 2000). These include skewing the slots and/or magnets, employing dummy slots or teeth, optimizing the magnet pole-arc to pole-pitch ratio, shifting the magnets, employing a fractional number of slots per pole and shaping the magnets. For example, Bianchi and Bolognani (2002) described a unified approach to the analysis of cogging torque minimization techniques, including the potential impact of these methods on the back-EMF. Li and Slemon (1988), in turn, studied the impact of the pole width and the tooth/slot ratio on reducing the cogging torque in a PM machine. Favre et al. (1993) applied a comprehensive approach to minimize the ripple and cogging torque. However, all these classical methods for shape optimization either assume a parameterization (e.g. Zhu and Howe, 2000; Di Barba et al., 2012) by means of a few variables, such as the height of the magnet, the angular internal/external width of the magnet or the inner rotor radius, or require re-meshing after changing the geometry of the model.
Recently, topology optimization methods have become the subject of very intensive research in the design of electromagnetic devices (e.g. Putek et al., 2012; Kim and Park, 2010; Lim et al., 2012). They allow finding an optimal shape of complicated electro-mechanical structures, taking simultaneously into account the distributions of all materials under consideration. Besides this, their application overcomes the difficulty of having to trace points on moving interface curves, because they can easily handle shape variations, including the topological changes of merging, splitting and disappearing of connected materials (Kim et al., 2008; Osher and Sethian, 1988). From this point of view, their application to the design of the rotor poles of a PM brushless DC motor seems very promising.
The purpose of this paper is to optimize an Electric Controlled Permanent Magnet Excited Synchronous Machine (May et al., 2011), which has a certain field weakening capability at high speeds and therefore could be used in modern drives for electro-mobiles. In this kind of application, low levels of noise and vibration are requirements as important as high torque, power and efficiency. Since the shape of the rotor poles mainly determines the torque characteristic, this work deals with designing the iron and magnet rotor poles. For this purpose, a modified level set method is proposed and applied to capture the shape of the rotor poles on a fixed mesh using FE analysis. To achieve this, the topological derivative is incorporated into the standard level set method. In this way, the topology optimization process is accelerated, allowing the modified level set algorithm to escape from local minima.

This work was supported by the Ministry of Education and Science, Poland, under Grant No. N510 508040 ("The Electric Controlled Permanent Magnet Excited Synchronous Machine (ECPSM) with application to electro-mobiles").

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 711-728 © Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-09-2013-0286

2. Numerical model of PM motor
As a case study, a special construction called ECPSM, invented by May et al. (2011), is considered. Figure 1 shows the geometry of the machine, which consists of PM excitation, single-tooth windings and an additional circumferential excitation coil (auxiliary coil) fixed on the stator in the axial center of the machine, between the pole structures of the rotor. Details of the machine can be found in Table I. A proper drive of the auxiliary coil, using e.g. a 2Q DC chopper, enables control of the effective excitation field, which produces induced voltages in the armature winding between zero and a maximum value limited only by the saturation of the iron core. In this way, it is possible to obtain a field weakening capability of 1:4 or even 1:5, which is a very important requirement for electro-mobile applications.
When the PM machine structure without skewing of either the stator slots or the rotor is considered, the 2D FEM with the magnetic vector potential A as unknown can be used for the description of the field distribution:

\nabla \times (\nu \, \nabla \times \mathbf{A}) + \sigma \frac{\partial \mathbf{A}}{\partial t} = \mathbf{J}_s \qquad (1)

Here, σ is the conductivity, ν(B) is the reluctivity (Sergeant et al., 2008) and J_s stands for the forced current density in the three-phase single-tooth windings. The magnetic induction is defined as B = ∇ × A. In the magnet and iron parts of the rotor and in the air gap, in turn, the equations are written as follows:

\nabla \times (\nu \, \nabla \times \mathbf{A}) = 0 \quad \text{in the air-gap and iron areas}
\nabla \times (\nu_{PM} \, \nabla \times \mathbf{A}) - \nabla \times (\nu_{PM} \mathbf{M}) = 0 \quad \text{in the permanent-magnet areas} \qquad (2)

where ν_PM is the reluctivity of the PM, whereas M stands for the remanent flux density of the PM. The eddy currents in both the magnet and iron parts are not taken into account. The rotation of the PM motor is modeled using the arbitrary Lagrangian-Eulerian method (Braess and Wriggers, 2000), which allows overcoming the necessity of re-meshing the whole domain at every time step. Thus, during rotation, the meshes of stator and rotor remain fixed with respect to each other. Moreover, it was possible to apply a periodic boundary condition (Zhu et al., 1993) to the created model segment of the ECPS machine (Figure 2).

Figure 1. Cross-section of ECPSM with the surface-mounted PM rotor and stator structure exhibiting the three-phase single-tooth windings with the fixed excitation control auxiliary coil (labeled parts: armature yoke, magnets N and S, iron poles, shaft, stator cores 1 and 2, auxiliary control coil, armature windings)

Table I. Main parameters of the ECPSM topology
2p (number of poles): 12
r_ostat (outer radius of the stator): 67.5 mm
r_istat (inner radius of the stator): 41.25 mm
l_as (axial length of one part of the stator): 35.0 mm
w_oslot (width of the slot opening): 4.0 mm
n_s (number of slots): 36
M (number of phases): 3
t_m (thickness of magnets, NdFeB, B_r = 1.2 T): 3.0 mm

3. Representation of the multi-level set method (MLSM)
In this work, the two-dimensional model described by Equations (1) and (2) is considered; thus a point in space is represented by r⃗. Furthermore, for the description of the ECPSM geometry, the MLSM of Vese and Chan (2002) was used. The level set method, first proposed by Osher and Sethian (1988), has recently found wide application in electrical engineering to address design, shape and topology optimization problems (Kim and Park, 2010; Lim et al., 2012; Yamada et al., 2010). The basic concept of the proposed approach, used for the cogging torque reduction of the ECPS motor, follows the papers of Vese and Chan (2002) and Chan and Tai (2004). However, in contrast to the work of Lim et al. (2012), where a fictitious interface energy was incorporated, here the Total Variation regularization technique was combined with the standard MLSM in order to stabilize the optimization process and penalize oscillations without smoothing edges. This last effect might be especially useful in the case of design optimization of nonlinear, ferromagnetic materials (Cimrak and Melicher, 2007) (Figure 3). In order to take into account the properties of three different materials, namely the PM and iron poles of the rotor as well as the air between them, the MLSM had to be used (Vese and Chan, 2002). In this way, it was possible to describe, by the zero-level sets of φ_i, i = 1, 2,
This last effect might be especially useful in case of design optimization of nonlinear, ferromagnetic materials (Cimrak and Melicher, 2007) (Figure 3). In order to take into account the properties of three different materials such as the PM and iron poles of rotor as well as air between them, the MLSM had to be used (Vese and Chan, 2002). In this way, it was possible to describe by a zero-level set of fi i ¼ 1, 2, Surface: Magnetic flux density, norm [T] Contour: Magnetic potential, z component [Wb/m] Boundary: Magnetic potential, z component [Wb/m Arrow: Magnetic Flux density

Figure 2. The fundamental sector of the ECPS machine, showing the stator, air gap and rotor with slots S1-S6 (surface: magnetic flux density, norm [T]; contour/boundary: magnetic potential, z-component [Wb/m]; arrows: magnetic flux density)

Figure 3. Signed distance functions in the considered regions (iron pole region and magnet pole region, each over the domain Ω) at the fourth iteration of the optimization process

two interfaces Γ_i between at most four different regions. Furthermore, these domains were defined as follows:

\Omega_1 = \{\vec{r} \in \Omega \mid \phi_1 > 0 \text{ and } \phi_2 > 0\}, \quad \Omega_2 = \{\vec{r} \in \Omega \mid \phi_1 > 0 \text{ and } \phi_2 < 0\},
\Omega_3 = \{\vec{r} \in \Omega \mid \phi_1 < 0 \text{ and } \phi_2 > 0\}, \quad \Omega_4 = \{\vec{r} \in \Omega \mid \phi_1 < 0 \text{ and } \phi_2 < 0\} \qquad (3)

where the signed distance function took the form:

\phi_i(\vec{r}) = \begin{cases} \pm \min(|\vec{r} - \vec{r}_I|), & \vec{r}_I \in \Omega_{i+2} \setminus \partial\Omega_{i+2} \\ 0, & \vec{r}_I \in \partial\Omega_{i+2} \end{cases} \qquad (4)

Thus, the material properties of each region could be represented as follows:

\nu_r(\phi_1, \phi_2) = \nu_{r1} H(\phi_1) H(\phi_2) + \nu_{r2} H(\phi_1)(1 - H(\phi_2)) + \nu_{r3} (1 - H(\phi_1)) H(\phi_2) + \nu_{r4} (1 - H(\phi_1))(1 - H(\phi_2))
B_r(\phi_1, \phi_2) = B_{r1} H(\phi_1) H(\phi_2) + B_{r2} H(\phi_1)(1 - H(\phi_2)) + B_{r3} (1 - H(\phi_1)) H(\phi_2) + B_{r4} (1 - H(\phi_1))(1 - H(\phi_2)) \qquad (5)

where ν_rn and B_rn for n = 1, …, 4 are the relative magnetic reluctivities, calculated for both regions (PM, Fe) of the rotor poles, and the remanent flux densities, considered only for the PM domain, while H(φ) denotes the Heaviside step function. Furthermore, taking into account that the zero-level set moves only in the direction n⃗_i normal to itself on ∂Ω_{i+2}, the evolution of both signed distance functions was described by the so-called Hamilton-Jacobi-type equation (Osher and Sethian, 1988):

\frac{\partial \phi_i}{\partial t} = -\vec{v}_i \cdot \nabla \phi_i(\vec{r}, t) = -v_{n,i} \, |\nabla \phi_i| \qquad (6)

where n⃗_i = ∇φ_i/|∇φ_i| and v⃗_i := dr⃗/dt denotes the velocity of the zero-level set, whose normal component v_{n,i} = v⃗_i · n⃗_i corresponds to the defined objective functional and the governing primary system with its boundary conditions.
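The two-level-set material map of Equation (5) is easy to prototype on a grid. The sketch below blends four per-region values with sharp Heaviside factors of φ1 and φ2; the numeric values are illustrative only, not taken from the paper:

```python
import numpy as np

def heaviside(phi):
    """Sharp Heaviside step H(phi): 1 where phi > 0, else 0."""
    return (phi > 0).astype(float)

def mix_regions(phi1, phi2, v1, v2, v3, v4):
    """Blend four per-region values as in Equation (5):
    v = v1*H1*H2 + v2*H1*(1-H2) + v3*(1-H1)*H2 + v4*(1-H1)*(1-H2)."""
    h1, h2 = heaviside(phi1), heaviside(phi2)
    return (v1 * h1 * h2 + v2 * h1 * (1.0 - h2)
            + v3 * (1.0 - h1) * h2 + v4 * (1.0 - h1) * (1.0 - h2))

# Example: one point in Omega_1 (phi1 > 0, phi2 > 0), one in Omega_3 (phi1 < 0, phi2 > 0)
phi1 = np.array([0.3, -0.2])
phi2 = np.array([0.5, 0.4])
nu = mix_regions(phi1, phi2, 1.0, 2.0, 3.0, 4.0)  # -> [1.0, 3.0]
```

Because each point makes exactly one of the four products equal to one, the same map doubles as the region classifier of Equation (3).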

4. Optimization problem
Let us consider the equation for the magnetic vector potential (2) in a weak formulation with a suitable test function \varphi, in the following way:

a(\varphi, \mathbf{A}) = l(\varphi) \qquad (7)

where the bilinear energy form and the linear load are expressed by:

a(\varphi, \mathbf{A}) = \int_\Omega \nu \, \nabla\times\varphi \cdot \nabla\times\mathbf{A} \; d\Omega, \qquad l(\varphi) = \int_\Omega (\nu_{PM} \mathbf{M}) \cdot \nabla\times\varphi \; d\Omega \qquad (8)

for \varphi, A ∈ H¹(Ω). Because the cogging torque can be expressed as the derivative of the co-energy with respect to the mechanical angular position of the rotor, the problem of cogging torque minimization in the 2D magnetostatic system can be reformulated into the equivalent form of a magnetic energy variation minimization:

\min_{\phi_1, \phi_2} F(\phi_1, \phi_2) = W_{ref}(\phi_1, \phi_2) + \beta_1 \, TV(\phi_1) + \beta_2 \, TV(\phi_2)
= \frac{1}{2}\int_\Omega \mathbf{B}(\phi_1,\phi_2)\cdot\mathbf{H}(\phi_1,\phi_2) \; d\Omega + \beta_1 \int_\Omega |\nabla\phi_1| \; d\Omega + \beta_2 \int_\Omega |\nabla\phi_2| \; d\Omega \qquad (9)

subject to the following constraints:

a(\varphi, \mathbf{A}) = l(\varphi) \quad \text{with the boundary conditions for (2)} \qquad (10)

B_{r,rms} = \frac{1}{L}\int_{\theta_1}^{\theta_2} B_r^2 \; dl \le a \qquad (11)

Here, B(φ1, φ2) and H(φ1, φ2) denote the magnetic induction and the magnetic field, TV(φ) stands for the Total Variation regularization (Vogel and Oman, 1998) with the two coefficients β1 and β2 controlling the complexity of the zero-level set function, while B_r,rms is the rms value of the magnetic flux density calculated in the air-gap along the path L. It is worth mentioning that such a definition of the objective functional is very effective from the computational viewpoint. As the energy belongs to a self-adjoint operator, the calculation of its total derivative requires only one analysis of the primary system (Putek et al., 2012; Kim et al., 2008, 2004). Furthermore, in order to minimize the objective functional defined by Equation (9), the design sensitivity of each level set function is necessary. For this purpose, the total derivative of the magnetic energy W_ref(φ1, φ2) was derived by applying the augmented Lagrangian method, the material derivative concept and the adjoint variable method (Kim et al., 2004; Park et al., 1992):

\frac{dW_{ref}}{dt} = \int_\gamma [G_i(\mathbf{B}, \mathbf{M})] \, V_n \; d\gamma \qquad (12)

Taking into account Equation (12), the continuum design sensitivity analysis (CDSA) formula for the magnetostatic system, together with the derivative of the regularization functional, is given by:

v_{n,1} = \frac{\partial F}{\partial \phi_1} = G_1(\mathbf{B}_1, \mathbf{B}_1)\,\Delta W' - \beta_1 \frac{\partial TV(\phi_1)}{\partial \phi_1}
= -\big[(\nu_1 - \nu_2)\,\mathbf{B}_1 \cdot \mathbf{B}_1\big]\Delta W' + \beta_1 \, \nabla\cdot\Big(\frac{\nabla\phi_1}{|\nabla\phi_1|}\Big)\,\delta(\phi_1) \qquad (13)

v_{n,2} = \frac{\partial F}{\partial \phi_2} = G_2(\mathbf{B}, \mathbf{M})\,\Delta W' - \beta_2 \frac{\partial TV(\phi_2)}{\partial \phi_2}
= -\big[(\nu_1 - \nu_2)\,\mathbf{B}_1 \cdot \mathbf{B}_1 - (\mathbf{M}_3 - \mathbf{M}_1)\cdot\mathbf{B}_1\big]\Delta W' + \beta_2 \, \nabla\cdot\Big(\frac{\nabla\phi_2}{|\nabla\phi_2|}\Big)\,\delta(\phi_2) \qquad (14)

where M denotes the magnetization, ΔW' = W' − W'_0 is the difference between the co-energy calculated for the chosen position and the constant co-energy regardless of position, and δ(φ) is the Dirac delta function. Since φ is approximated by a piecewise constant value over the mesh elements, in the implementation of the algorithm the regularization functional TV(φ) was used as in Chan and Tai (2004):

TV(\phi) = \sum_i \sum_j \sqrt{(\phi_{i,j} - \phi_{i-1,j})^2 + (\phi_{i,j} - \phi_{i,j-1})^2 + \varepsilon h^2} \qquad (15)

Here, h is of the order of the mesh size and ε is a positive constant, for example h²ε = 10⁻¹⁰, introduced to avoid division by zero for the (i, j) voxel. Furthermore, in order to avoid solving another sub-optimization problem for imposing the constraint defined by Equation (11), two area constraints, one for each rotor pole, were introduced in the practical implementation as follows:

S_{Fe}(\phi_1, \phi_2) = \int_\Omega H(\phi_1)(1 - H(\phi_2)) \; d\Omega \le S_{Fe,max}
S_{PM}(\phi_1, \phi_2) = \int_\Omega (1 - H(\phi_1)) H(\phi_2) \; d\Omega \le S_{PM,max} \qquad (16)
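The discrete TV functional of Equation (15) can be checked numerically. A minimal sketch, assuming backward differences on a uniform voxel grid with boundary terms dropped, and with the small constant εh² guarding the square root:

```python
import numpy as np

def total_variation(phi, eps_h2=1e-10):
    """Discrete Total Variation of Equation (15):
    sum over voxels (i, j) of sqrt((phi[i,j] - phi[i-1,j])**2
                                   + (phi[i,j] - phi[i,j-1])**2 + eps_h2)."""
    dx = np.zeros_like(phi)
    dy = np.zeros_like(phi)
    dx[1:, :] = phi[1:, :] - phi[:-1, :]  # backward difference in i
    dy[:, 1:] = phi[:, 1:] - phi[:, :-1]  # backward difference in j
    return float(np.sqrt(dx**2 + dy**2 + eps_h2).sum())

# A step of height 1 across a 2x2 grid contributes ~1 per crossing voxel
phi = np.array([[0.0, 0.0],
                [1.0, 1.0]])
tv = total_variation(phi)  # ~2, plus negligible eps terms
```

A flat φ yields an almost-zero TV (only the εh² terms survive), which is exactly the behavior that lets the penalty suppress oscillations without punishing smooth regions.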

where S_Fe,max and S_PM,max are the maximal area constraints for the two rotor poles. The same technique was used for this purpose in Kim and Park (2010). When this constraint condition of constant areas is imposed, the velocity fields expressed by Equations (13) and (14) are transformed, according to the Lagrange multiplier technique, from v_{n,i} to v̂_{n,i}:

\hat{v}_{n,1} = v_{n,1} - v_{n,10}, \qquad v_{n,10} = \int_\gamma v_{n,1} \; d\Gamma_1 \Big/ \int_\gamma d\Gamma_1
\hat{v}_{n,2} = v_{n,2} - v_{n,20}, \qquad v_{n,20} = \int_\gamma v_{n,2} \; d\Gamma_2 \Big/ \int_\gamma d\Gamma_2 \qquad (17)


where v_{n,10} and v_{n,20} are the mean velocity fields calculated around the boundaries Γ1 and Γ2, respectively.

5. Incorporating the topological gradient into the MLSM
A proper initialization of the above-mentioned multi-level-set-based algorithm can significantly accelerate the optimization process. Moreover, the combination of both kinds of derivative information yields an efficient algorithm that not only has more flexibility in shape changing (Allaire et al., 2004) but also allows escaping local minima for a given topological class of shapes (He et al., 2007). The topological gradient measures the influence of small holes, and thus of topology changes, in the considered domain Ω. It can be defined as follows (Schumacher et al., 1996):

g(\vec{r}) = \lim_{\delta \to 0} \frac{F_{obj}(\Omega \setminus B(\vec{r}, \delta)) - F_{obj}(\Omega)}{\delta(\Omega)} \qquad (18)

where δ(Ω) = −area(B(r⃗, δ)) and Ω\B(r⃗, δ) = {y ∈ Ω : ‖y − r⃗‖ ≥ δ}; F_obj(Ω) is an objective function defined on the domain Ω, while B(r⃗, δ) is the ball centered at the point r⃗ with radius δ. Furthermore, after some manipulation using asymptotic analysis (Sokołowski and Żochowski, 1999) and the CDSA (Kim et al., 2004), the difference in the objective function before and after creating a hole takes the form (Putek et al., 2012):

g_i(\vec{r}) = \begin{cases} 2\,(\nu_1 - \nu_2)\,\mathbf{B}_1 \cdot \mathbf{B}_1 & \text{in } \Omega_2 \setminus B(\vec{r}, \delta) \\ 2\,\big[(\nu_1 - \nu_2)\,\mathbf{B}_1 \cdot \mathbf{B}_1 - (\mathbf{M}_3 - \mathbf{M}_1)\cdot\mathbf{B}_1\big] & \text{in } \Omega_3 \setminus B(\vec{r}, \delta) \end{cases} \qquad (19)

From the comparison of Equations (19) and (12), it can be concluded that in this problem the computation of the topological gradient does not require any additional computational load. However, instead of adding to Equation (6) a special forcing term dependent on the topological gradient, as was done in Burger et al. (2004), a modification of the evolution of φ_i according to the following scheme is proposed:

- If φ_i(r⃗; t) < 0 and g_i(r⃗; t) < 0, then φ̂_i(r⃗; t) = min(φ_i(·; t)), which means that a hole is created at the point r⃗.
- If φ_i(r⃗; t) > 0 and g_i(r⃗; t) > 0, then φ̂_i(r⃗; t) = max(φ_i(·; t)), which means that material is added at the point r⃗.
- If φ_i(r⃗; t) < 0 and g_i(r⃗; t) > 0, or φ_i(r⃗; t) > 0 and g_i(r⃗; t) < 0, then the standard level set scheme is not modified, φ̂_i(r⃗; t) = φ_i(r⃗; t), because this is only a temporary state in the optimization process and there is not enough information to clearly classify the point into Ω2, Ω3 or Ω4.

Therefore, in such a state, the interfaces and the domains between them should be strictly represented by the level sets of a continuous function φ, which is the key feature of the level set approach. Moreover, in order to ensure an appropriate approximation of the values ν_r and B_r in every iteration of the optimization process, a normalization of φ_i is performed before using Equation (5), such that φ̂_i(r⃗; t) ∈ ⟨−1, 1⟩. Hence, the modified level set evolution is given by the following equation:

\frac{\partial \hat{\phi}_i}{\partial t} = -\Big( K_i\big(G_i(\mathbf{B}, \mathbf{M})\,\Delta W'\big) + \beta_i \, L_i\Big(\frac{\partial TV(\hat{\phi}_i)}{\partial \hat{\phi}_i}\Big) \Big) \, |\nabla \hat{\phi}_i| \qquad (20)
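The three sign rules above amount to a pointwise modification of φ_i driven by the topological gradient g_i. A minimal sketch on a flat array (the grid geometry and the paper's regions Ω2/Ω3 are not modelled here):

```python
import numpy as np

def topological_update(phi, g):
    """Apply the sign rules of Section 5:
    phi < 0 and g < 0  ->  phi := min(phi)  (create a hole),
    phi > 0 and g > 0  ->  phi := max(phi)  (add material),
    otherwise keep phi unchanged (standard level set evolution)."""
    out = phi.copy()
    out[(phi < 0) & (g < 0)] = phi.min()
    out[(phi > 0) & (g > 0)] = phi.max()
    return out

phi = np.array([-0.1, 0.2, -0.5, 0.4])
g = np.array([-1.0, 1.0, 1.0, -1.0])
new_phi = topological_update(phi, g)  # -> [-0.5, 0.4, -0.5, 0.4]
```

Only points where the level set sign and the topological gradient sign agree are forced to the extreme values; points with mixed signs are left to the continuous Hamilton-Jacobi evolution, as the text requires.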

where the normalization functions are defined as follows:

K_i(x) = k_i \int_{\Omega_r} x \; d\Omega_r \Big/ \int_{\Omega_i} \big|G_i(\mathbf{B}, \mathbf{M})\,\Delta W'\big| \; d\Omega_i, \qquad
L_i(x) = l_i \int_{\Gamma_r} x \; d\Gamma_r \Big/ \int_{\Gamma_i} \Big|\frac{\partial TV(\phi_i)}{\partial \phi_i}\Big| \; d\Gamma_i \qquad (21)

Here, Ω_r denotes the rth cell of Ω_i and Γ_r is the rth boundary cell of Γ_i, while k_i and l_i are introduced for scaling purposes. However, both δ(φ) and the Heaviside step function are, in a precise sense, limits of smooth functions (Chan and Tai, 2004). Therefore, in order to tackle this problem, in the computations they were replaced, following the recommendation of Osher and Sethian (1988), by smeared-out versions:

H_\tau(\phi) = \frac{1}{2}\Big(1 + \frac{2}{\pi}\arctan\frac{\phi}{\tau}\Big), \qquad \delta_\tau(\phi) = \frac{1}{\pi}\,\frac{\tau}{\phi^2 + \tau^2} \qquad (22)

with τ > 0 being a parameter that controls how steep the approximation is around zero, chosen to be of the order of the mesh size. In the practical implementation, instead of the steepest-descent method, a regularized version of the Gauss-Newton algorithm (Putek et al., 2012) was used for the solution of Equation (20). Due to the well-known convergence properties of the Gauss-Newton algorithm (Nocedal and Wright, 1999; Cheney et al., 2005), it is possible in this way to reduce the number of iterations of the multi-level-set-based optimization procedure. As the co-energy was calculated at the ith position in every iteration, the Tikhonov technique (Tikhonov et al., 1995) with the L-curve method (Hansen, 1992) was used for regularization purposes. The evolution of the zero-level sets and the convergence history shown in Figures 5 and 6 indicate the features of the modified multi-level set scheme. Thus, the modified first-order Hamilton-Jacobi equation, with a special term responsible for controlling the length of the level sets, allows adjusting the sign of a level set function in such a way that topology changes, such as creating a hole or adding material, are also possible during the evolution of φ.

6. Multi-level-based topology algorithm
The optimization procedure consists of the following steps:
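The smeared Heaviside and delta of Equation (22) are straightforward to implement; a small sketch, with the delta obtained as the exact derivative of the Heaviside:

```python
import numpy as np

def heaviside_tau(phi, tau):
    """Smeared Heaviside of Equation (22): 0.5*(1 + (2/pi)*arctan(phi/tau))."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / tau))

def delta_tau(phi, tau):
    """Smeared Dirac delta, the derivative of heaviside_tau:
    (1/pi) * tau / (phi**2 + tau**2)."""
    return tau / (np.pi * (phi**2 + tau**2))

tau = 0.05                    # of the order of the mesh size (illustrative value)
h0 = heaviside_tau(0.0, tau)  # 0.5 exactly at the zero-level set
```

As τ shrinks, H_τ tightens toward the sharp step and δ_τ concentrates its unit mass around φ = 0, which is what makes the boundary terms in Equations (13)-(14) computable on a fixed mesh.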

(3) (4)

The definition of the problem that involves the assumption of the objective function and the specification of the area constrains (Equations (9)-(11) and (16)). The determination of the design domain, which are comprised of the division of the initial domains into small cells, called voxels and setting the iteration counter k ¼ 1. The FE analysis of the primary and dual systems (Equation (8)). If the counter is not equal 1, then jump to the step 6, otherwise go to the next step.

(5)

The topological gradient initialization that consists of:

(2)

.

the calculation of the TG in the center of all voxels (Equation (19));

.

sorting ascending voxels by the TG value;

.

removing material by the assumed area ratio; and

.

initializing the signed distance functions f1 and f2.


COMPEL 33,3

(6) Compute the objective function (Equations (9) and (15)); then check convergence: if the magnetic flux density has reached the critical value (Equation (11)), stop the procedure; otherwise increment the counter (k = k + 1) and go to the next step.
(7) The topology change (Equation (20)), which relies on:
• calculating the sensitivities with respect to the objective function (Equations (13) and (14));
• computing the TG in the center of all voxels (Equation (19));
• modifying the field velocity using the Lagrange multiplier technique (Equation (17)); and
• updating the level set function according to the modified scheme (Equation (20)), and then going back to step 3.
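The smeared-out Heaviside and delta approximations of Equation (22) are straightforward to implement; the sketch below is a minimal illustration, with tau playing the role of the mesh-size-order smoothing width:

```python
import math

def heaviside_smeared(phi, tau):
    # H_tau(phi) = 1/2 + (1/pi) * arctan(phi / tau)  -- Equation (22)
    return 0.5 + math.atan(phi / tau) / math.pi

def delta_smeared(phi, tau):
    # delta_tau(phi) = (1/pi) * tau / (phi^2 + tau^2), the derivative of H_tau
    return tau / (math.pi * (phi * phi + tau * tau))

# Far from the zero level set H_tau saturates to 0 or 1, while
# delta_tau concentrates its mass around phi = 0.
tau = 0.01  # of the order of the mesh size
print(heaviside_smeared(-1.0, tau), heaviside_smeared(1.0, tau))
print(delta_smeared(0.0, tau))
```

Note that delta_smeared is exactly the analytical derivative of heaviside_smeared, which is what the level set update relies on.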

7. Numerical experiment
The optimization procedure shown in Figure 4 was then validated and applied to the topology optimization of the ECPS machine (May et al., 2011). The evolution of the signed distance function with its zero-level set describing the shape of one pole pair is given in Figures 5 and 6(a-d).

[Figure 4. Flowchart of the optimization procedure based on the multi-level-set method and the topological gradient: (1)-(2) problem definition (define an objective function, determine a volume constraint); (3) specify the design domain (parameterization, k = 1); (4)-(5) FE analysis of the primary and adjoint systems; if k = 1: (6)-(9) topological gradient initialization (compute the topological gradient, sort the voxels according to the TG value, remove material using the volume ratio, initialize the level set functions f1, f2); compute the objective function and check convergence (the volume constraint): if converged, END; otherwise (10)-(13) modified multi-level-set optimizer (compute sensitivities with respect to the OF, compute the topological gradient, impose the constraint conditions, update the level set functions, k = k + 1).]
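Read as pseudocode, the flowchart of Figure 4 amounts to the loop below. Every callable here (fe_analysis, topological_gradient, objective, flux_density, update_level_sets) is a hypothetical stand-in for the corresponding flowchart block, not the actual FE implementation:

```python
def init_signed_distance(removed_voxels):
    # Placeholder: a real implementation builds the signed distance
    # functions f1, f2 from the voxel classification.
    return set(removed_voxels)

def optimize(max_iter, area_ratio, b_crit,
             fe_analysis, topological_gradient, objective,
             flux_density, update_level_sets):
    level_sets, J = None, None
    for k in range(1, max_iter + 1):
        primal, dual = fe_analysis(level_sets)          # FE analysis of both systems
        if k == 1:                                      # topological gradient initialization
            tg = topological_gradient(primal, dual)     # TG per voxel, Eq. (19)
            voxels = sorted(tg, key=tg.get)             # ascending TG value
            removed = voxels[:int(area_ratio * len(voxels))]
            level_sets = init_signed_distance(removed)
        J = objective(primal)                           # Eqs (9)/(15)
        if flux_density(primal) >= b_crit:              # convergence test, Eq. (11)
            break
        level_sets = update_level_sets(primal, dual, level_sets)  # Eq. (20)
    return level_sets, J
```

This is only a control-flow sketch: the real procedure couples the level set update to the adjoint sensitivities and the Lagrange multiplier constraint handling described above.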

Additionally, the convergence behaviour of the proposed topology optimization algorithm is shown in Figure 7. In Cimrak and Melicher (2007), where the standard level set method with total variation regularization was applied to the optimal shape design of magnetic random access memories, the implemented algorithm needed about 100 iterations to find an optimal solution. Furthermore, for the optimal configuration represented by the red line in Figure 5, a numerical model was built in Flux2D. Thus, the initial and optimized

[Figure 5. Evolution of the shape of one pole pair described by the zero-level set during seven iterations, where the green shape with the red-line border marks the optimal solution. (a) First iteration; (b) third iteration; (c) fifth iteration; (d) seventh iteration.]

[Figure 6. Evolution of the distance function and the shape of the rotor poles.]

[Figure 7. The convergence history for the optimization of the rotor poles using the modified level set algorithm (normalized rms value of Tcogg [%] vs number of iterations).]

configurations are shown in Figures 8 and 9, while the field distributions in those models are shown in Figures 10 and 11, respectively. Furthermore, based on the result of the optimization conducted in 2D, the full 3D model of the ECPS machine was built, as depicted in Figures 12 and 13. In this case too, the obtained results confirmed the effectiveness of the proposed methodology; however, an optimization of the full 3D model should be performed in order to further reduce the value of the cogging torque. Finally, the cogging torque, the rms value of the magnetic flux density and the electromagnetic torque for both models were calculated (Figure 14). The performance in terms of the cogging torque, the normal component of the magnetic flux density in the air-gap with the armature current, and the electromagnetic torque, calculated for both structures before and after optimization, is shown in Figures 15 and 16. The period of the cogging torque waveform depends on the motor slot/pole combination; in this case, the cogging torque needed to be computed over one period only to fully assess the interaction of the stator teeth with the rotor poles.
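The dependence of the cogging-torque period on the slot/pole combination follows the standard relation T_period = 360°/lcm(N_slots, 2p) (cf. Zhu and Howe, 2000). The slot/pole numbers below are illustrative only, not the actual ECPS machine data:

```python
from math import gcd

def cogging_period_deg(n_slots, n_poles):
    # Fundamental cogging-torque period in mechanical degrees:
    # one cogging cycle per lcm(slots, poles)-th of a full revolution.
    lcm = n_slots * n_poles // gcd(n_slots, n_poles)
    return 360.0 / lcm

# Illustrative combinations (hypothetical machines):
print(cogging_period_deg(36, 6))   # 10.0 degrees
print(cogging_period_deg(12, 10))  # 6.0 degrees
```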

[Figure 8. Mesh in the initial model of one pole pair of the ECPS machine (M1).]

[Figure 9. Mesh in the optimized model of one pole pair of the ECPS machine (M2).]

In order to determine the maximum value of the electromagnetic torque, a static analysis of the variation of the electromagnetic torque vs the rotor position from 0 to 60° was performed with an armature current of constant value (not depending on the rotor position). The slot currents were replaced, with sufficient accuracy, by the slots' local linear current density values as follows: JA = 5 A/mm² in slot S2 of phase A; JB = JC = 2.5 A/mm² in slots S1 and S3 of phases B and C; JA = 5 A/mm² in slot S5 of phase A; JB = JC = 2.5 A/mm² in slots S4 and S6, according to Figure 2. Finally, some essential results of the optimization obtained from the 2D analysis are summarized in Table II.


8. Conclusions
In this work the shapes of the iron pole and the PM pole were simultaneously investigated in order to minimize the level of noise and vibration in the ECPSM that can be used in modern drives for electro-mobiles. The proposed methodology, based on the MLSM and the topological gradient method, allows reducing the cogging torque of

[Figure 10. Magnetic field distribution in the 2D model before optimization (M1; flux density scale 0-3.0 T).]

[Figure 11. Magnetic field distribution in the 2D model after optimization (M2; flux density scale 0-3.0 T).]


[Figure 12. Magnetic field distribution in the 3D model (before optimization in 2D); magnetic flux density vector in T.]

[Figure 13. Magnetic field distribution in the 3D model (after optimization in 2D); magnetic flux density vector in T.]

[Figure 14. Cogging torque vs rotor position (0-10 degrees) for the initial (M1) and the optimized (M2) configuration of the ECPS machine.]

[Figure 15. Magnetic flux density in the air-gap under the magnet and iron poles with the armature current, calculated for both configurations before (M1) and after optimization (M2).]

[Figure 16. Electromagnetic torque vs rotor position with the armature current density, calculated for both configurations before (M1) and after optimization (M2).]

Table II. Values of the cogging torque, the electromagnetic torque and the magnetic flux density Br in the air-gap with armature current, before and after optimization

                        Before optimization (M1)        After optimization (M2)
Considered quantities   Tcogg (Nm)  T (Nm)  Br (T)      Tcogg (Nm)    T (Nm)       Br (T)
Rectified mean values   0.12        0.58    0.56        0.07 (41% ↓)  0.56 (3% ↓)  0.53 (5% ↓)
Rms values              0.16        0.64    0.59        0.08 (50% ↓)  0.62 (3% ↓)  0.57 (3% ↓)
Minimal values          0.32        1       0.85        0.12 (62% ↓)  1            0.85
Maximal values          0.32        1       0.74        0.12 (62% ↓)  1            0.74
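The percentage reductions quoted in Table II follow directly from the before/after values of the cogging torque (the table truncates to whole percent):

```python
def reduction_pct(before, after):
    # Relative reduction in percent.
    return 100.0 * (before - after) / before

# Cogging torque entries of Table II (before -> after):
pairs = {"rectified mean": (0.12, 0.07),
         "rms": (0.16, 0.08),
         "min/max": (0.32, 0.12)}
for name, (before, after) in pairs.items():
    print(name, int(reduction_pct(before, after)))  # 41, 50, 62
```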


the initial structure of the PM machine while respecting the required level of the magnetic flux density. The simulation results obtained for the optimized machine configuration are depicted in Figures 14, 15 and 16. Additionally, the mass of the machine was also reduced: by 15 percent for the PM pole and by 25 percent for the iron pole. The main results of the optimization are summarized in Table II. The unique design challenges of the proposed methodology were also underlined. Based on the presented preliminary results for the simplified model, the full 3D design optimization of the new machine type, with the approach for field weakening of the PM, remains to be performed; this is our future research goal.

References

Bianchi, N. and Bolognani, S. (2002), "Design techniques for reducing the cogging torque in surface-mounted PM motors", IEEE Trans. Ind. Appl., Vol. 38 No. 5, pp. 1259-1265.

Chen, S., Namuduri, C. and Mir, S. (2002), "Controller-induced parasitic torque ripples in a PM synchronous motor", IEEE Trans. Ind. Appl., Vol. 38 No. 5, pp. 1273-1281.

Cimrak, I. and Melicher, V. (2007), "Sensitivity analysis framework for micromagnetism with application to optimal shape design of magnetic random access memories", Inverse Problems, Vol. 23 No. 2, pp. 563-588.

Di Barba, P., Mognaschi, M.E., Pałka, R., Paplicki, P. and Szkolny, S. (2012), "Design optimization of a permanent-magnet excited synchronous machine for electrical automobiles", JAEM, Vol. 39 Nos 1-4, pp. 889-895.

Favre, E., Cardoletti, L. and Jufer, M. (1993), "Permanent-magnet synchronous motors: a comprehensive approach to cogging torque suppression", IEEE Trans. Ind. Appl., Vol. 29 No. 6, pp. 1141-1149.

Gieras, F. and Wing, M. (2008), Permanent Magnet Motor Technology, John Wiley & Sons Ltd, New York, NY.

Hughes, A. (2006), Electric Motors and Drives, Elsevier Ltd, New York, NY.

Jahns, T.M. and Soong, W.L. (1996), "Pulsating torque minimization techniques for permanent magnet AC motor drives – a review", IEEE Trans. Ind. Electron., Vol. 43 No. 2, pp. 321-330.

Kim, Y.S. and Park, I.H. (2010), "Topology optimization of rotor in synchronous reluctance motor using level set method and shape design sensitivity", IEEE Trans. on Applied Superconductivity, Vol. 20 No. 3, pp. 1093-1096.

Kim, D., Ship, K. and Sykulski, J. (2004), "Applying continuum design sensitivity analysis combined with standard EM software to shape optimization in magnetostatic problems", IEEE Trans. on Magn., Vol. 40 No. 6, pp. 1156-1159.

Kim, D.H., Lee, S.B., Kwak, B.M., Kim, H.G. and Lowther, D. (2008), "Smooth boundary topology optimization for electrostatic problems through the combination of shape and topological design sensitivities", IEEE Trans. on Magnetics, Vol. 44 No. 6, pp. 1002-1005.

Li, T. and Slemon, G. (1988), "Reduction of cogging torque in permanent magnet motors", IEEE Trans. Magn., Vol. 24 No. 6, pp. 2901-2903.

Lim, S., Min, S. and Hong, J.P. (2012), "Low torque ripple rotor design of the interior permanent magnet motor using the multi-phase level-set and phase-field concept", IEEE Trans. on Magn., Vol. 48 No. 2, pp. 907-909.

May, H., Pałka, R., Paplicki, P., Szkolny, S. and Canders, W.-R. (2011), "New concept of permanent magnet excited synchronous machines with improved high-speed features", Archives of Electrical Engineering, Vol. 60 No. 4, pp. 531-540.

Osher, S.J. and Sethian, J.A. (1988), "Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations", J. Comput. Phys., Vol. 79 No. 1, pp. 12-49.

Putek, P., Paplicki, P., Slodička, M. and Pałka, R. (2012), "Minimization of cogging torque in permanent magnet machines using the topological gradient and adjoint sensitivity in multi-objective design", JAEM, Vol. 39 Nos 1-4, pp. 933-940.


Sergeant, P., Crevecoeur, G., Dupré, L. and Van den Bossche, A. (2008), "Characterization and optimization of a permanent magnet synchronous machine", COMPEL, Vol. 28 No. 2, pp. 272-284.

Zhu, Z.Q. and Howe, D. (2000), "Influence of design parameters on cogging torque in permanent magnet machines", IEEE Trans. Energy Convers., Vol. 15 No. 4, pp. 407-412.


Further reading

Allaire, G., Jouve, F., Gournay, F. and Toader, A.M. (2004), "Structural optimization using topological and shape sensitivity via a level set method", Internal Report 555, Ecole Polytechnique, pp. 1-21.

Braess, H. and Wriggers, P. (2000), "Arbitrary Lagrangian Eulerian finite element analysis of free surface flows", Computer Methods in Applied Mechanics and Engineering, Vol. 190 Nos 1/2, pp. 95-109.

Burger, M., Hackl, B. and Ring, W. (2004), "Incorporating topological derivatives into level set methods", J. Comput. Phys., Vol. 194 No. 1, pp. 344-362.

Chan, T.F. and Tai, X.-C. (2004), "Level set and total variation regularization for elliptic inverse problems with discontinuous coefficients", Journal of Computational Physics, Vol. 193 No. 1, pp. 40-66.

Cheney, M., Isaacson, D., Newell, J.C., Simske, S. and Goble, J. (2005), "NOSER: an algorithm for solving the inverse conductivity problem", International Journal of Imaging Systems and Technology, Vol. 2 No. 2, pp. 66-75.

Cimrak, I. and Van Keer, R. (2010), "Level set method for the inverse elliptic problem in nonlinear electromagnetism", J. Comput. Phys., Vol. 229 No. 24, pp. 9269-9283.

Hansen, P. (1992), "Analysis of discrete ill-posed problems by means of the L-curve", SIAM Review, Vol. 34 No. 4, pp. 561-580.

Haug, E., Choi, K. and Komkov, V. (1986), Design Sensitivity Analysis of Structural Systems, Academic Press, New York, NY.

He, L., Kao, C.-Y. and Osher, S. (2007), "Incorporating topological derivatives into shape derivatives based level set methods", Journal of Computational Physics, Vol. 225 No. 1, pp. 891-909.

Nocedal, J. and Wright, S.J. (1999), Numerical Optimization, Springer-Verlag, New York, NY.

Park, I., Lee, B. and Hahn, S. (1992), "Design sensitivity analysis for nonlinear magnetostatic problems using finite element method", IEEE Trans. on Magn., Vol. 28 No. 2, pp. 1533-1535.

Putek, P., Crevecoeur, G., Slodička, M., Van Keer, R., Van de Wiele, B. and Dupré, L. (2012), "Space mapping methodology for defect recognition in eddy current testing-type NDT", COMPEL, Vol. 31 No. 3, pp. 881-893.

Schumacher, A., Kobelev, V. and Eschenauer, H. (1996), "Bubble method for topology and shape optimization of structures", J. Struct. Optim., Vol. 8, pp. 42-51.

Sokołowski, J. and Żochowski, A. (1999), "Topological derivatives for elliptic problems", Inverse Problems, Vol. 15 No. 1, pp. 123-124.


Tikhonov, A., Goncharsky, A., Stepanov, V. and Yagola, A. (1995), Numerical Methods for the Solution of Ill-Posed Problems, Kluwer Academic Publishers, Dordrecht.

Vese, L.A. and Chan, T.F. (2002), "A multiphase level set framework for image segmentation using the Mumford and Shah model", Int. J. Comput. Vis., Vol. 50 No. 3, pp. 271-293.

Vogel, C.R. and Oman, M.E. (1998), "Fast, robust total variation-based reconstruction of noisy, blurred images", IEEE Transactions on Image Processing, Vol. 7 No. 6, pp. 813-824.

Yamada, T., Izui, K., Nishiwaki, S. and Takezawa, A. (2010), "A topology optimization method based on the level set method incorporating a fictitious interface energy", Comput. Methods Appl. Mech. Eng., Vol. 199 Nos 45-48, pp. 2876-2891.

Zhu, Z.Q., Howe, D., Bolte, E. and Ackermann, B. (1993), "Instantaneous magnetic field distribution in brushless permanent magnet DC motors. I. Open-circuit field", IEEE Trans. on Magn., Vol. 29 No. 1, pp. 124-135.

Corresponding author
Dr Piotr Putek can be contacted at: [email protected]


Bi-objective optimization of induction machine using interval-based interactive algorithms


Dmitry Samarkanov Ecole Centrale de Lille, Cite´ Scientifique, Villeneuve d’Ascq, France

Fre´de´ric Gillon L2EP, Optimisation, Ecole Centrale de Lille, Lille, France

Pascal Brochet Laboratoire d’Electrotechnique et d’Electronique de Puissance de Lille, Ecole Centrale de Lille, Lille, France, and

Daniel Laloy
Jeumont Electric, Jeumont, France

Abstract
Purpose – Discrete, highly constrained optimization of an induction machine, taking into consideration two objective functions: efficiency and total cost of production. The paper aims to discuss these issues.
Design/methodology/approach – Interactive and semi-interactive interval-based optimization methods were used. Two concepts of multi-objective discrete optimization are proposed.
Findings – The proposed methodology and algorithms allow the decision maker (DM) to participate in the process of optimal design and therefore decrease the total time of the optimization process. The search procedure is straightforward and does not require special skills from the DM. The presented methods were successfully verified on a problem of optimal design with discrete variables.
Research limitations/implications – Three interval algorithms suitable for inverse problems are researched and verified. They can generally be used for multi-objective problems. The dominance principles for interval boxes are shown in the paper. The proposed algorithms are based on the idea of hybridization of exact and evolutionary methods.
Practical implications – The proposed approaches were successfully implemented within a computer-aided application which is used by a manufacturer of high-power induction machines.
Originality/value – The concept of Pareto domination using interval boxes can be treated as original. The paper researches several elimination rules and discusses the differences between the approaches.
Keywords Algorithms, Optimization, Design optimization, Optimal design, Induction machine, Iterative methods
Paper type Research paper

I. Introduction
Today there exists a large variety of interactive approaches for resolving bi-objective optimization problems. In Miettinen et al. (2008) the authors present a classification of such techniques into three groups which basically differ in the way of converging to the Pareto-optimal solution set. Based on this classification, in this paper we research two of them, applying each to the problem of optimal design of induction machines. The first algorithm is based on the reference points approach (Miettinen et al., 2010), where the decision maker (DM) directly participates in the process of solution elicitation. The second is based on an objective function classification procedure (Miettinen et al., 2008).

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 729-744 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-10-2012-0254


As described in Baril and Yacout (2012), all interactive algorithms generally have two typical distinctive steps: optimization and decision making. The latter is important and can in turn be classified into several categories according to the information which is used by the DM. At present, methods based on some sort of ranking of intermediate solutions (Miettinen et al., 2008, 2010; Zhou and Sun, 2012; Laukkanen et al., 2012; Teghem et al., 2000) have gained considerable popularity. Generally, an achievement scalarizing strategy (Hartikainen, 2011; Sun et al., 2011; Sinha and Korhonen, 2010) is used in such algorithms as a reference direction metric. In Hartikainen (2011) it was proved that interval arithmetic (IA) can be a good alternative in simulation-based optimization problems where a computationally expensive (black-box) simulator is substituted by a coarse meta-model. In that case, it can be reasonable to represent the outputs of the surrogate model in the form of intervals. This idea was developed and presented in Limbourg and Aponte (2005). In a similar manner, we propose the use of explicit formulations of the objective and constraint functions in the optimization loop, however replacing all real-valued variables by interval ones. As shown in Legriel (2011), better diversity characteristics can be obtained during the stage of exploration of the design space, and the approach proved to be suitable for hybrid algorithms, i.e. for the joint usage of exact and evolutionary methods. In Sinha and Korhonen (2010) it was illustrated that IA allows one to securely eliminate the unpromising parts of the design space, therefore conducting a more focussed search. Generally, in the case of interval-based algorithms there is no need to fabricate any special metric for calculating the reference direction (Korhonen and Karaivanova, 1999); however, proper domination rules must be created. In Samarkanov et al. (2012) we already presented one way of using the interval notation within discrete optimization algorithms. This work is a continuation of that research; this time, however, we propose the utilization of IA as the mathematical backend in the optimization loop. Using this strategy, one is obliged to have an explicit model, but the approach is more robust. In our opinion, IA-based interactive approaches have not obtained enough attention in the electrical engineering community. However, taking into consideration that these algorithms are relatively easy to integrate within existing design workflows, we propose them as a possible alternative for resolving bi-objective optimization problems. In addition to the interactive approaches, in this paper we also propose one non-interactive interval-based optimization algorithm. In this method we use the dominance principles for interval boxes (Limbourg and Aponte, 2005). We start by defining the problem and giving a brief introduction to IA. A discussion of the three presented algorithms is given subsequently. Finally, we present the results and conclusions.

II. Problem definition
The problem can be stated in the following way: find the set of optimal configurations of the IM taking into consideration the customer specification parameters. Figure 1 illustrates the problem in an informal way. The number of variables equals seven, and each of them has a discrete set of feasible values. The two objective functions are the total losses of the IM and its cost of production. At our disposal we have an analytical model (coarse but computationally inexpensive) and a simulation tool (exact but very expensive in execution). The total number of constraints equals 25. For brevity, they will not all be listed here; however, it should be highlighted that they

are divided into two groups: explicit and implicit. The first group consists of constraints stated in the form of customer requirements: power, polarity, starting current, maximal torque. Generally (but not always) these are equality constraints and should be satisfied in the optimal configuration. The second group, of implicit constraints, is composed of the physical, mechanical, geometrical and electrical limitations which must be satisfied for the IM; the clients have only a vague idea of these, as they go beyond the scope of the customer specification form. In comparison to the work Samarkanov et al. (2012), where the interval notation was used for resolving the mono-objective problem, in this paper IA will be applied for treating the bi-objective case (Figure 2). More formally, the problem can be formulated in the following way:

\min_X \; \big\{ (1-\eta), \; C_{pr} \big\} \quad \text{subject to} \quad G(X) = \big(g_1(X), g_2(X), \ldots, g_n(X)\big) \le 0, \; n = 25 \qquad (1)


where \eta is the efficiency of the IM (%), C_{pr} is the cost of production of the IM, and \forall i: x_i \in O_i, where O_i is the finite feasible set for the ith variable.

[Figure 1. Layout of the optimization process. Variables: x1 – magnetic length; x2 – number of conductors in a slot; x3 – width of stator conductor; x4 – height of stator conductor; x5 – height of rotor bar; x6 – width of rotor bar; x7 – number of parallel branches. Model: coarse analytic model of the induction machine or exact simulator (special software). Objective functions: y1 = min(losses), y2 = min(cost of production), subject to constraints.]

[Figure 2. Cross-section of the induction machine and the stator slot.]


II.I Calculation of the efficiency of the induction machine
A conventional analytical model is used for the calculation of the technical parameters of the IM. It does not take into account several physical phenomena, such as saturation or the influence of harmonics, which generally should not be ignored; in our application, however, this model gives a sufficient level of accuracy. The following formula is used for the computation of the efficiency:

\eta = \frac{P_n}{P_n + P_{mec} + P_{FE} + P_{JR} + P_{JS} + P_{add}} \qquad (2)

where P_n is the rated power of the machine, P_{mec} the mechanical losses, P_{FE} the iron losses, P_{JR} the rotor copper losses, P_{JS} the stator copper losses and P_{add} the additional losses. Please see Irimie (2008) for more details about the technical model.

II.II Calculation of the total costs of the induction machine
The calculation of the total cost of the IM, C_{pr}, is decomposed into two main steps: computation of the material costs and calculation of the workforce costs. The following formula can be used for obtaining the material costs of the IM:

C_{mat} = a_{iron} \, \big(M_{iron}^{stator} + M_{iron}^{rotor}\big) + a_{CU} \, \big(M_{CU}^{stator} + M_{CU}^{rotor}\big) + C_{add} \qquad (3)

where a_{iron} and a_{CU} are the prices of iron and copper, M designates the corresponding mass, and C_{add} is the price of all additional equipment of the IM (cooling exchangers, bearings, etc.).

III. Interactive approaches in interval-based optimization problems
III.I General information
IA has already been used for resolving optimization problems (Sun et al., 2011; Limbourg and Aponte, 2005; Fontchastagner et al., 2009). It can generally be applied for evaluating a function with input values defined in interval form. This can be practical in problems where a meta-modelling strategy is employed (Hartikainen, 2011). For examples of applications, refer to Sun et al. (2011) and Limbourg and Aponte (2005), where it is used in an algorithm for imprecise multi-objective problems. In Sinha and Korhonen (2010) IA is employed for calculating the ranking metric within evolutionary algorithms. In Fontchastagner et al. (2009) the authors showed an example of optimization of an electric machine. In our application, we use two different models for evaluating the parameters of the induction machine: a coarse one and a fine one. The coarse model is imprecise but computationally inexpensive; the fine one is prohibitively expensive in execution. In Limbourg and Aponte (2005) it was proved that the imprecision introduced by coarse models in multi-objective optimization problems can be successfully managed by applying IA. It should be highlighted that IA can also be used for calculating the aspiration levels of the objective functions during the optimization loop. For example, for the calculation of the Nadir Z^{nad} and utopian Z^{**} points, IA requires only one evaluation of the objective function. It can easily be proved that in the bi-objective case

Z^{nad} = \big( \overline{f}_1(X), \overline{f}_2(X) \big), \qquad Z^{**} = \big( \underline{f}_1(X), \underline{f}_2(X) \big)

where \underline{f}_1, \underline{f}_2 and \overline{f}_1, \overline{f}_2 are the real bounds of the objective function values obtained using IA.
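Equations (2) and (3) can be checked numerically; the loss and price figures below are purely illustrative, not data from the actual machine:

```python
def efficiency(p_n, p_mec, p_fe, p_jr, p_js, p_add):
    # Equation (2): rated power over rated power plus all losses.
    return p_n / (p_n + p_mec + p_fe + p_jr + p_js + p_add)

def material_cost(a_iron, a_cu, m_iron_stator, m_iron_rotor,
                  m_cu_stator, m_cu_rotor, c_add):
    # Equation (3): iron and copper masses times their prices,
    # plus the cost of additional equipment (coolers, bearings, ...).
    return (a_iron * (m_iron_stator + m_iron_rotor)
            + a_cu * (m_cu_stator + m_cu_rotor) + c_add)

# Illustrative numbers only (losses in kW for a fictitious machine):
eta = efficiency(p_n=1000.0, p_mec=8.0, p_fe=12.0,
                 p_jr=10.0, p_js=15.0, p_add=5.0)
print(round(eta, 4))  # 0.9524
print(material_cost(1.0, 8.0, 500.0, 300.0, 100.0, 50.0, 2000.0))  # 4000.0
```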

III.II IA in the problem of design of induction machine: two examples
In this section we illustrate two simple examples of IM parameter calculation using IA. For calculating the characteristics of the IM we use the analytical model (Irimie, 2008).

Example 1: find the intervals for the starting current ratio Is/Ir and the power factor cos φ of the IM, where the input variable is the magnetic length of the machine l, defined as the interval of possible values [700, 1300] (mm). Figures 3 and 4 illustrate the results. The interval boxes of the functions Is/Ir(l) and cos φ(l) are obtained by applying IA to the coarse model. The points are calculated using the fine model (simulator). It can be seen that the exact results always lie within the interval boxes, but not necessarily at the top (bottom) of a box.
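The interval evaluations of Examples 1 and 2 rely on the natural interval extension of the model formulas; a minimal, generic sketch of such arithmetic is shown below (this is not the paper's IM model, just the enclosure mechanism):

```python
class Interval:
    """A closed interval [lo, hi] with the basic IA operations."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        o = _as_iv(other)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __mul__(self, other):
        o = _as_iv(other)
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        o = _as_iv(other)
        assert not (o.lo <= 0.0 <= o.hi), "division by interval containing 0"
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def _as_iv(x):
    return x if isinstance(x, Interval) else Interval(x, x)

# Evaluating an expression with an interval input encloses every
# point result: here f(l) = l*l + 2 over l in [700, 1300].
l = Interval(700.0, 1300.0)
print(l * l + 2.0)  # [490002.0, 1690002.0]
```

This enclosure property is exactly why, in the figures, the fine-model points always lie inside the coarse-model interval boxes.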


[Figure 3. Calculation of the starting current ratio Is/Ir vs slip s using interval arithmetic, for l = 700 mm and l = 1,300 mm.]

[Figure 4. Calculation of the power factor cos φ vs slip s using interval arithmetic, for l = 700 mm and l = 1,300 mm.]


Example 2: find the intervals for the efficiency η of the IM, where the input variables are:

(1) height of the stator conductor, hcu = [1.0, 2.32]; and
(2) width of the stator conductor, wcu = [7.0, 12.0].

Figure 5 illustrates the results for the case when the initial interval of each variable was divided into six sub-intervals, with the consequent use of IA for the Cartesian product h_{cu}^{inv} × w_{cu}^{inv}. The points in Figure 5 were obtained by using the fine model directly; the interval boxes were calculated using the coarse analytic model. The relative error of over-evaluation for the sixth box equals e2/e1 = 2.9 percent, which is a very promising result. This example demonstrates the suitability of IA as an effective means of converging toward the optimal configuration of the IM.

III.III Interval-based elimination rules
In this section we briefly recapitulate the main elimination principles for the mono- and bi-objective cases (Limbourg and Aponte, 2005). Generally, during the elicitation procedure three hypothetical situations of domination are possible for intervals (mono-objective case) and interval boxes (multi-objective case):

(1) strict dominance: interval box I1 strictly dominates the other, I2 (I1 ≻ I2);
(2) vague dominance: interval box I1 vaguely dominates the other, I2; and
(3) incomparable situation: a box neither dominating nor dominated (I1 ∥ I2).

Hereafter, for all formal statements we use the following notation for the dominance:

(1) mono-objective case: (I1 ≻_IN I2) – I1 strictly dominates I2; and
(2) bi-objective case: (I1 ≻_IP I2) – I1 strictly dominates I2.

Mono-objective elimination principle. In order to develop principles of domination, in Figure 6 we illustrated the three possible situations of interval arrangements for the mono-objective problems. In the case A we can affirm that interval I1 strictly dominates I2 (minimization problem) and therefore I2 can be eliminated. For the case B in the discrete optimization problems we could not be able to claim that I1 dominates I2 and for this reason interval I2 normally should not be

Figure 5. Calculation of efficiency of induction machine as a function of stator conductor height and width

eliminated. In Limbourg and Aponte (2005) authors show that the final results will not be suffering for the majority of problems if I2 is eliminated but this would eventually permit to decrease the time of execution. In our algorithm we use the same strategy and claim that for the case of vague dominance (B) the vaguely dominated interval (I2 in case of minimization) must be eliminated. Therefore we extend the strict dominance principle. In the incomparable situation neither box might be eliminated. Taking into consideration all cases considered above, the domination principle for the mono-objective minimization case can be described in the following way:     I1 4IN I2 , ðI 1 pI 2 Þ ^ I 1 pI 2 ^ I 1 6¼ I 2   where I1 ; I2 are intervals and I 1 ; I 2 ; I 1 ; I 2 2 R

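Rule (4) is easy to encode mechanically. The following minimal check (our sketch, minimization case, intervals as (lo, hi) pairs) reproduces the three situations of Figure 6:

```python
def dominates_mono(i1, i2):
    """Extended mono-objective dominance rule (4) for minimization:
    i1 dominates i2 when both of its bounds are no worse and the
    intervals differ (covers both strict and vague dominance)."""
    return i1[0] <= i2[0] and i1[1] <= i2[1] and i1 != i2

# Case (a), strict dominance: [1, 2] dominates [3, 4]
assert dominates_mono((1, 2), (3, 4))
# Case (b), vague dominance of overlapping intervals: [1, 3] dominates [2, 4]
assert dominates_mono((1, 3), (2, 4))
# Case (c), incomparable: [1, 4] vs [2, 3] -- neither dominates
assert not dominates_mono((1, 4), (2, 3))
assert not dominates_mono((2, 3), (1, 4))
```

Note that the rule deliberately eliminates vaguely dominated intervals as well, following the extension of the strict principle described above.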

Bi-objective elimination principle. Figure 7 illustrates the principles for the bi-objective class of problems, which can be treated as a natural extension of the mono-objective rules considered above. Using the interval notation, the domination principle for the bi-objective minimization case is:

I1 >_IP I2 ⇔ ∀(i, j) ∈ 1…n: (Ii >_IN Ij) ∨ (Ii ∥ Ij), and ∃(i, j) ∈ 1…n: (Ii >_IN Ij)   (5)

where i ≠ j and Ii, Ij are intervals.
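Rule (5) extends the mono-objective check componentwise across the two objectives; again this is an illustrative sketch rather than the authors' implementation:

```python
def dominates_mono(i1, i2):
    # Mono-objective rule (4), minimization, intervals as (lo, hi).
    return i1[0] <= i2[0] and i1[1] <= i2[1] and i1 != i2

def incomparable(i1, i2):
    # Neither interval dominates the other (the "parallel" situation).
    return not dominates_mono(i1, i2) and not dominates_mono(i2, i1)

def dominates_pareto(box1, box2):
    """Bi-objective rule (5): box1 dominates or is incomparable with box2
    in every objective, and strictly dominates in at least one."""
    pairs = list(zip(box1, box2))
    return (all(dominates_mono(a, b) or incomparable(a, b) for a, b in pairs)
            and any(dominates_mono(a, b) for a, b in pairs))

b1 = [(1.0, 2.0), (1.0, 2.0)]   # interval box: one interval per objective
b2 = [(3.0, 4.0), (3.0, 4.0)]
assert dominates_pareto(b1, b2)
assert not dominates_pareto(b2, b1)
```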

III.IV Calculation of exact solutions in the reduced variable domain
At the final stage of all the presented algorithms there is a step of exact solution search. In our application we provide the possibility of using several algorithms there:

(1) NSGA-II;

(2) the branch-and-bound algorithm coupled with the ε-constrained method (Samarkanov et al., 2012); and

(3) exhaustive enumeration (Samarkanov et al., 2012).

Figure 6. (a) – strict dominance; (b) – vague dominance; (c) – incomparable situation

Figure 7. (a) – strict dominance; (b) – vague dominance; (c), (d) – incomparable situation


To better demonstrate the idea of searching for solutions in the reduced domain, we treated the Kursawe benchmark example (Nebro, 2007; Samarkanov et al.) with discrete variable sets and the following initial parameters:

• Number of variables: 2.

• Feasible set for each variable: randomly generated in the interval [−5.0, 5.0], with 1,000 feasible values per variable.
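The Kursawe benchmark used above can be sketched as below. We assume the commonly used form of the test function (the paper does not restate its definition), instantiated with the two variables and the [−5.0, 5.0] range given in the text:

```python
import math
import random

def kursawe(x):
    """Commonly used form of the Kursawe test function (assumed form)."""
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    return f1, f2

# Discrete feasible set: 1,000 random values per variable in [-5.0, 5.0].
random.seed(0)
feasible = [[random.uniform(-5.0, 5.0) for _ in range(1000)] for _ in range(2)]

print(kursawe([0.0, 0.0]))  # (-10.0, 0.0)
```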

In Figures 8 and 9 we illustrate the results obtained using the interval-based optimization algorithm (IA-P from Section V.II). Figure 8 shows the projection of the Pareto-optimal interval boxes onto the variable domain, and Figure 9 shows the same boxes in the objective space. The Pareto-optimal solutions were obtained using exhaustive enumeration within the reduced variable domain. Please refer to Samarkanov et al. for further discussion of the results.

IV. Finding the set of initial configurations
Identification of initial solutions requires a strong formalization of the customer specification CS. In our application we developed a requirement form which is common for the considered manufacturer. It consists of three main parts: general, performance, and additional requirements. As soon as the form is successfully completed by the DM, the search is conducted within the database DB, which contains the already calculated configurations (Samarkanov et al., 2012). Once the enquiry process is finished, two cases are possible: either the demanded solution already exists or it does not. In the first case, the optimization process is not started and the found standard solutions are returned to the DM. In the second case, the process proceeds to the search for a set of candidate configurations, which will later be used in the optimization process.

Figure 8. Interval boxes in the objective space and exact solutions (points) obtained within reduced variable domain


For searching the set of candidate configurations K, the L2 metric (Miettinen et al., 2008) is employed:

minimize ( Σ_{i=1..k} |Zi − Zi*|² )^{1/2}, subject to Z ∈ DB and Z ∈ CS   (6)

where Zi are the parameters of a configuration in DB, Zi* are the corresponding values specified in CS, and k is the total number of specified parameters in CS. Generally it is preferable to select several candidate configurations; that is why we use (6) to retrieve at least five different candidate solutions from the DB. These configurations are used in the optimization loop as the initial ones. As soon as the set K is obtained, the initialization of the intervals for each variable starts. Formally, this process can be defined in the following way: for each variable x_i in X^k, find x_i^min = min(x_i^1, …, x_i^k) and x_i^max = max(x_i^1, …, x_i^k) among all configurations from K, where k is the total number of candidate configurations.

V. Algorithms
Generally, the stopping criterion for all the considered algorithms is the maximal number of iterations imax, which is defined by the user. IA (Moore et al., 2009) is used for the evaluation of the objective and constraint functions. In order to distinguish between real and interval values, we use the notation x and x^inv, respectively. For all the presented algorithms, the same steps are repeated during the optimization loop:

(1) evaluation of the objective and constraint functions using IA; and

(2) ranking and elimination of dominated interval boxes.
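The candidate retrieval of Section IV (metric (6) plus the interval initialization) can be sketched as follows. The database layout and the parameter names ("power", "h_cu") are illustrative assumptions, not the paper's data model:

```python
import math

def retrieve_candidates(db, spec, n_candidates=5):
    """Rank database configurations by the L2 metric (6) against the
    customer specification and keep the closest ones."""
    def distance(cfg):
        return math.sqrt(sum((cfg[p] - v) ** 2 for p, v in spec.items()))
    return sorted(db, key=distance)[:n_candidates]

def init_intervals(candidates, variables):
    """Initialize each design-variable interval [min, max] over the
    retrieved candidate configurations (Section IV)."""
    return {v: (min(c[v] for c in candidates), max(c[v] for c in candidates))
            for v in variables}

db = [{"power": 10.0, "h_cu": 1.0},
      {"power": 12.0, "h_cu": 1.4},
      {"power": 30.0, "h_cu": 2.5}]
k = retrieve_candidates(db, {"power": 11.0}, n_candidates=2)
print(init_intervals(k, ["h_cu"]))  # {'h_cu': (1.0, 1.4)}
```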

Figure 9. Reduced variable domain obtained using interval-based optimization algorithm

The first approach to be considered is the case when the DM defines the maximal number of iterations. The ranking and elimination of interval boxes are carried out automatically, and the DM does not participate in those processes; we therefore classify it as a non-interactive approach based on interval dominance principles (Section III.III). Figure 10 illustrates the workflow for this type of algorithm. The second is the interactive algorithm, where the ranking procedure is carried out by the DM. There we exploit the idea of achievement sets, which contain the eliminated interval boxes. This technique is quite common in interactive algorithms (Sinha and Korhonen, 2010) and allows extracting any intermediate configurations stored in the archive set. This can be useful when the DM would like to go back to an earlier iteration to change his (her) ranking preferences. Figure 11 illustrates the workflow for this type of optimization strategy.

V.I Algorithm based on interval dominance principle (A-ID)
In this non-interactive algorithm we use the interval domination principles described in Section III.III. As the division rule we use the most straightforward one, which consists in the progressive division of the intervals into two halves. As the stopping criterion we take the maximal number of iterations imax, which is specified by the DM at the beginning of the algorithm. Formally, the algorithm can be described in the following way:

(1) Find the set I of initial configurations and, for each design variable, initialize the interval x_i^inv = [x_i^min, x_i^max] (Section IV).

(2) Using IA (Section III.I), find the utopian Z** and nadir Z^nad objective vectors.

(3) Ask the DM for the maximal number of iterations imax (s)he would like to carry out and, using the division rule (Samarkanov et al., 2012), calculate the number of divisions η_i for each interval i.

Figure 10. Workflow for the non-interactive optimization algorithm



(4) Uniformly divide each interval of design variable x_i^inv into η_i sub-intervals and find the Cartesian product P(X) = x_1^inv × x_2^inv × … × x_k^inv, where k is the total number of design variables. Put P(X) in the list L.

(5) For each configuration Xi from L:

• For the objective space: obtain Z^inv = [F1(Xi), F2(Xi)] = [[inf f1(Xi), sup f1(Xi)], [inf f2(Xi), sup f2(Xi)]] and put it in the list I.

• For the constraint space: obtain G^inv and put it in the list Γ.

(6) Ranking and elimination: using the interval-dominance bi-objective elimination rule (Section III.III), delete the unpromising and unfeasible interval boxes from I and Γ, and their images from the list L.

(7) If the list L is not empty AND the stopping criterion is not met: for each l_i = (x_1^inv, …, x_k^inv)_i from L, find η_i using the division rule and go to (4). Else, if the stopping criterion is satisfied: proceed to the calculation of exact solutions (Section III.IV) in the reduced variable domain l_1 ∪ … ∪ l_k, with k = sizeof(L).

Figure 11. Workflow for the interactive optimization algorithm


Else, if L is empty: quit with no solution.

V.II Interactive preference-based bi-objective algorithm (IA-P)
For this algorithm the flowchart is the same as for the previous one (A-ID), except for step (6):

(6) Ranking and elimination:
6.1. Invite the DM to rank the obtained interval boxes into two groups: promising and unpromising.
6.2. Update the archive set A by inserting into it the unpromising interval boxes.
6.3. Eliminate the unpromising and unfeasible interval boxes from I and Γ, and their images from the list L.

In this algorithm we do not use the elimination rules developed above: according to the main idea, it is up to the DM to specify the most promising interval boxes in which the search procedure should be carried out.

V.III Interactive bi-objective algorithm based on classification principle (IA-CP)
In this algorithm we exploit the idea of classification of the objective functions, thereby transforming the bi-objective problem into two mono-objective ones. The DM is engaged in the procedure of selecting the most promising interval boxes right after the first mono-objective problem is resolved. The main points of the algorithm are:

(1) Classification: ask the DM to specify the objective function Z that (s)he would like to optimize first.

(2) Repeat steps (1)-(3) of the A-ID algorithm.

(3) Uniformly divide each interval of design variable x_i^inv into η_i sub-intervals and find the Cartesian product P(X). Put P(X) in the list L.

(4) For each configuration Xi from L:

• For the objective space: obtain Z^inv = Z^inv(Xi) = [inf Z(Xi), sup Z(Xi)] and put it in the list I.

• For the constraint space: obtain G^inv and put it in the list Γ.

(5) Using the interval-dominance mono-objective elimination rule (Section III.III), delete the unpromising and unfeasible interval boxes from I and Γ, and their images from the list L.

(6) If the list L is not empty AND the stopping criterion is not met: for each l_i = (x_1^inv, …, x_k^inv)_i from L, find η_i using the division rule and go to (3). Else, if the stopping criterion is satisfied:

• Elimination: show the DM the distribution of the interval boxes for Z^inv; invite the DM to specify the most promising ones and find their image O in the variable space.

• Proceed to the calculation of exact solutions (Section III.IV) in the reduced variable domain given by the union of the boxes in O.

Else, if L is empty: quit with no solution.
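As a companion sketch (ours, not the authors' code), the skeleton shared by these loops — divide each interval, evaluate enclosures with IA, eliminate dominated boxes, repeat for imax iterations — can be written for a mono-objective minimization with an illustrative objective f(x) = Σ x_i:

```python
from itertools import product

def subdivide(iv, eta):
    """Uniformly divide interval iv = (lo, hi) into eta sub-intervals."""
    lo, hi = iv
    step = (hi - lo) / eta
    return [(lo + i * step, lo + (i + 1) * step) for i in range(eta)]

def f_interval(box):
    """Illustrative IA enclosure of f(x) = sum(x_i) over an interval box."""
    return (sum(lo for lo, _ in box), sum(hi for _, hi in box))

def dominates(i1, i2):
    """Extended mono-objective dominance, rule (4), for minimization."""
    return i1[0] <= i2[0] and i1[1] <= i2[1] and i1 != i2

def interval_loop(intervals, eta=2, imax=3):
    """Divide / evaluate / eliminate for imax iterations (A-ID skeleton)."""
    boxes = [tuple(intervals)]
    for _ in range(imax):
        refined = []
        for box in boxes:
            refined.extend(product(*(subdivide(iv, eta) for iv in box)))
        encl = {b: f_interval(b) for b in refined}
        boxes = [b for b in refined
                 if not any(dominates(encl[o], encl[b])
                            for o in refined if o != b)]
    return boxes

# One variable on [0, 1]: each pass keeps the lower half.
print(interval_loop([(0.0, 1.0)]))  # [((0.0, 0.125),)]
```

In the interactive variants (IA-P, IA-CP), the elimination line would be replaced by the DM's ranking of the displayed boxes.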

VI. Results of optimal design
We applied all three developed algorithms to the problem of the optimal design of the IM (Section II). Figures 12-14 show the results of using the A-ID and IA-P approaches. First, we calculated all possible configurations for one client demand using the exact model (simulator). The set of all feasible configurations is depicted in Figures 12-14 as green points. Figure 13 shows the Pareto-optimal front obtained using interval boxes. As can be concluded, the A-ID technique allows converging to the solutions. The results of the interactive IA-P method are illustrated in Figure 14. The initial configuration is marked as point A. Using this method, we obtained four Pareto-optimal boxes, which cover only part of the real front. This is normal, as it was the DM who guided the search toward that area of the objective space.


Figure 12. Results of using A-ID method for the problem of optimal design of induction machine

Figure 13. Pareto-front using A-ID method for the problem of optimal design of induction machine

Figure 14. Results of using IA-P method for the problem of optimal design of induction machine

For testing IA-CP we took the original problem but limited the number of variables to the first four (Figure 1). For the presented case, the DM specified the maximal number of iterations imax = 1 and fixed the efficiency of the IM, η, as the first-priority function (which must be maximized). The number of divisions of the interval of each variable was equal to two. Figure 15 shows the results of the mono-objective optimization right after one iteration of the algorithm (p. 7.2, Section V.III). There, the number of interval boxes equals 2^(number of variables) = 2^4 = 16, where "2" is the number of divisions. As an example, we simulated the case when the DM chose the four most promising interval boxes (numbered), in which we subsequently used the EE technique coupled with the industrial simulator (fine model) to find the exact solutions (Figure 16). From the considered case we can conclude that even after one iteration it is generally possible to obtain a set of solutions better than the initial one.

Figure 15. Usage of IA-CP approach for the problem of optimal design of induction machine: first stage

[Figure 16 plot: initial configuration, Pareto front and enumerated configurations, shown as relative cost of production versus (1 − η).]

VII. Summary and conclusions
In this paper we presented three IA algorithms for solving bi-objective, highly constrained, discrete optimization problems. Two of the presented approaches are interactive algorithms which permit obtaining the global solution. We applied them to the problem of the techno-economic optimization of an induction machine and validated the results using several industrial examples. We showed that interactive approaches can be used successfully in multi-objective optimization problems.

References

Baril, C., Yacout, S. et al. (2012), "An interactive multi-objective algorithm for decentralized decision making in product design", Optimization and Engineering, Vol. 13, pp. 121-150.

Fontchastagner, J., Lefevre, Y. and Messine, F. (2009), "Some co-axial magnetic couplings designed using an analytical model and an exact global optimization code", IEEE Transactions on Magnetics, Vol. 45 No. 3, pp. 1458-1461.

Hartikainen, M. (2011), "Approximation through interpolation in nonconvex multiobjective optimization", Ser. Jyvaskyla Studies in Computing, Jyvaskylan yliopisto, Vol. 141, pp. 15-34.

Irimie, D. et al. (2008), "Comparative loss analysis of small three-phase cage induction motors", 2010 XIX International Conference on Electrical Machines (ICEM), September, pp. 1-4.

Korhonen, P. and Karaivanova, J. (1999), "An algorithm for projecting a reference direction onto the nondominated set of given points", IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 29 No. 5, pp. 429-435.

Laukkanen, T., Tveit, T.-M., Ojalehto, V., Miettinen, K. and Fogelholm, C.-J. (2012), "Bilevel heat exchanger network synthesis with an interactive multi-objective optimization method", Applied Thermal Engineering, Vol. 48, pp. 301-316.

Legriel, J. (2011), "Multi-criteria optimization and its application to multi-processor embedded systems", PhD thesis, University Joseph Fourier, Grenoble.

Limbourg, P. and Aponte, D. (2005), "An optimization algorithm for imprecise multi-objective problem functions", The 2005 IEEE Congress on Evolutionary Computation, Vol. 1, September, pp. 459-466.

Figure 16. Usage of IA-CP approach for the problem of optimal design of induction machine: second stage


Miettinen, K., Ruiz, F. and Wierzbicki, A. (2008), "Introduction to multiobjective optimization: interactive approaches", in Branke, J., Deb, K., Miettinen, K. and Slowinski, R. (Eds), Multiobjective Optimization, Ser. Lecture Notes in Computer Science, Springer, Berlin and Heidelberg, Vol. 5252, pp. 27-57.

Miettinen, K., Eskelinen, P., Ruiz, F. and Luque, M. (2010), "Nautilus method: an interactive technique in multiobjective optimization based on the nadir point", European Journal of Operational Research, Vol. 206 No. 2, pp. 426-434.

Moore, R.E., Kearfott, R.B. and Cloud, M.J. (2009), Introduction to Interval Analysis, SIAM, Philadelphia.

Nebro, A.J. et al. (2007), "Multi-objective optimization using grid computing", Soft Computing – A Fusion of Foundations, Methodologies and Applications, Vol. 11 No. 6, pp. 531-540.

Samarkanov, D., Gillon, F. and Brochet, P., "Benchmarking bi-objective optimization problems", available at: http://samarkanov.com/thesis/bi_objective.html

Samarkanov, D., Gillon, F., Brochet, P. and Laloy, D. (2012), "Optimal design of induction machine using interval algorithms", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 31 No. 5, pp. 1492-1502.

Sinha, A. and Korhonen, P. et al. (2010), "An interactive evolutionary multi-objective optimization method based on polyhedral cones", Learning and Intelligent Optimization, Ser. Lecture Notes in Computer Science, Vol. 6073, pp. 318-332.

Sun, J., Gong, D. and Sun, X. (2011), "Optimizing interval multi-objective problems using IEAs with preference direction", Neural Information Processing, Ser. Lecture Notes in Computer Science, Vol. 7063, pp. 445-452.

Teghem, J., Tuyttens, D. and Ulungu, E. (2000), "An interactive heuristic method for multi-objective combinatorial optimization", Computers & Operations Research, Vol. 27 Nos 7/8, pp. 621-634.

Zhou, T. and Sun, W. (2012), "Optimization of wind-PV hybrid power system based on interactive multi-objective optimization algorithm", 2012 International Conference on Measurement, Information and Control (MIC), Vol. 2, pp. 853-856.

Corresponding author
Dr Dmitry Samarkanov can be contacted at: [email protected]



Framework for the optimization of online computable models

Benoit Delinchant and Frédéric Wurtz, Electrical Engineering Lab, Grenoble University, Grenoble, France
João Vasconcelos, Evolutionary Computation Lab, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil, and
Jean-Louis Coulomb, Electrical Engineering Lab, Grenoble University, Grenoble, France

Abstract
Purpose – The purpose of this paper is to make models easily accessible in order to test and compare the optimization algorithms we develop.
Design/methodology/approach – For this, the paper proposes an optimization framework based on software components, web services, and plugins to exploit these models in different environments.
Findings – The paper illustrates the discussion with optimizations, in Matlab and R (www.r-project.org), of a transformer described on and exploitable from the internet.
Originality/value – The originality is to make the coupling of simulation models and optimization algorithms easy to implement using software components, web services, and plugins.
Keywords Cloud computing, Optimization
Paper type Research paper

Introduction
The conventional ways of sharing models are no longer suited to the digital world in which we are living. Sharing is usually done by means of publications, in which errors or deficiencies may occur, leading to very limited reuse. Models can be shared on a web site, offering a more detailed description than the published version and a good opportunity to correct or complete the model. It is also common, in benchmarks, to make available a data file related to a particular software tool. The latter solution depends on the availability of the tool in question, especially in the case of commercial software that requires purchasing a license. Thus we see that these "traditional" means of sharing models have drawbacks. We propose an optimization framework to overcome these difficulties by reusing computable models through their coupling with optimization algorithms. This optimization framework is also designed not to favour any particular environment, providing the freedom to create, test, and compare various algorithms and implementations. In this paper we use several tools for optimization in electromagnetics: specific tools coming from electrical engineering laboratories — GOT[1] (Coulomb, 2008), which is dedicated to the optimization of indirect numerical models by response surfaces, and CADES[2] (Delinchant et al., 2004, 2012), dedicated to the direct optimization of semi-analytical models using automatic Jacobian computation — and optimization algorithms developed in commercial frameworks: Matlab and R (www.r-project.org).

The authors wish to thank Pierre-Yves Gibello and Victor Soares for web server development.

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 745-758. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/COMPEL-10-2012-0211


How can we develop algorithms without being constrained by the environment? How can we reuse models easily without worrying about the software tool required for their execution? Our idea for sharing optimization models is based on the concepts of software components and web services.

Software programming paradigms
Software component (SC)
The software component approach consists in making a standalone piece of computer program so that its distribution and reuse are facilitated. A software component standard specifies its interfaces, which define the kind and format of the information that will be exchanged by the component. This standardization approach makes sharing models easy because the interfaces are normalized and known. Moreover, a software component must also specify its:

• packaging and encapsulation methodology: how it can be generated and how it is composed; and

• deployment and introspection mechanism: how to use the component, interrogate it, and discover its available services.

In the areas of electromagnetic field computation and building simulation (Gaaloul et al., 2011), we have developed a software component norm, "Interface for Component Architecture (ICAr)", and associated tools. Such a software component owns the following services:

• to introspect variable names;

• to set input values;

• to get output results; and

• to get the Jacobian (optional).
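The four services listed above can be mirrored by a small interface sketch. The Python names below are our illustration only — the real ICAr norm is Java/OSGi-based and its actual API is not reproduced here:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional

class ComputableComponent(ABC):
    """Sketch of the services an ICAr-like software component exposes
    (illustrative method names, not the actual ICAr API)."""

    @abstractmethod
    def input_names(self) -> List[str]: ...        # introspect variables

    @abstractmethod
    def set_inputs(self, values: Dict[str, float]) -> None: ...

    @abstractmethod
    def outputs(self) -> Dict[str, float]: ...

    def jacobian(self) -> Optional[Dict[str, Dict[str, float]]]:
        return None                                # optional service

class Doubler(ComputableComponent):
    """Trivial model y = 2x, standing in for a real exported model."""
    def __init__(self): self._x = 0.0
    def input_names(self): return ["x"]
    def set_inputs(self, values): self._x = values["x"]
    def outputs(self): return {"y": 2.0 * self._x}
    def jacobian(self): return {"y": {"x": 2.0}}

c = Doubler()
c.set_inputs({"x": 3.0})
print(c.outputs())  # {'y': 6.0}
```

An optimizer that talks only to this interface can drive any model exported to the standard, which is the point of the normalization.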

The SC technology used is the OSGi (Open Services Gateway initiative) software component model. This standard is founded on the Java language and defines a model to manage the life cycle of an application, with services, inside an execution framework. Many applications of OSGi exist, but none in the field of engineering models and especially in the field of optimization. OSGi brings many advantages to the capitalization and reuse of our models, such as dependency checking if a model requires other models to be evaluated. Software components can automatically treat and manage different issues, such as communication between software tools and the heterogeneity of programming languages. They can be exchanged between partners, or simply made available for download on sites dedicated to the capitalization/reuse of models. This also allows capitalizing and making available "on the shelf" complex models in the case of composed systems. So we can summarize the expected benefits of the software component standard in terms of:

(1) Capitalization/reuse:

• models or shortcuts available on the shelf and ready to reuse; and

• reuse of robust codes that are already tested and whose results are guaranteed.

(2) Composition:

• construction of systems by assembling many components that are already available;

• improved modularity, more abstract and functional development, dynamic extensions by composition, etc.; and

• portable and distributable systems, as software components are executable code.


Plugin and plug-out paradigm
Several commercial tools are able to generate an ICAr component from various modelling methods. These tools own what we have called a plug-out (an exporting functionality):

• CADES: semi-analytical modelling (Delinchant et al., 2004, 2007, 2012);

• MacMMems: magnetic MEMS (Delinchant et al., 2009);

• Reluctool: reluctance network (Bertrand et al., 2010);

• Flux2D/3D: electromagnetic FEM (Rezgui et al., 2012); and

• Got-it[3]: response surface modelling.

On the other side, a tool that has already implemented this standard will receive and use any ICAr component without any difficulty if it owns a plugin (an importing functionality). Using models in existing frameworks is then possible by developing such a plugin, but this depends on the openness of the framework, for example:

• easy if Java technology is available (as in Matlab);

• using the Java Native Interface (JNI) for C/C++ or Fortran tools (like the R project); and

• through file exchange (for old tools).

In the following figure, a user can download a software component owning a specific model and integrate it into an optimization framework thanks to the concept of a plugin. These plugins are pieces of computer code included in programs which transparently exploit ICAr software components (SC) or web services (WS) (Figure 1). This approach has some limitations linked to possible dependencies. Indeed, if the component requires dependencies, like components generated from the Flux2D/3D or Got-it software, the user must own these dependencies to run the model. In order to avoid this drawback, we have enhanced our model capitalization approach with the web service paradigm.

Figure 1. Software component usage in a framework through a plugin


Web services (WS)
The computing capacity of a model, encapsulated in a software component, can be directly exploited remotely on a web server (Tarricone and Esposito, 2006). Thus, a model can be transmitted either as a file (the software component) or simply as an internet address (the web service). A web service can perform a model calculation remotely, removing the main difficulties associated with direct reuse (copy errors, software licenses, etc.). Indeed, if we consider a finite element simulation that we want to share, but which requires installing huge software (hundreds of megabytes) and also requires a commercial licence, we may deposit it on a dedicated web server where all the resources are installed and where the commercial licence can be rented only for the hours of usage (hopefully at a very low price). The following figure illustrates such an architecture, in which a designer may use a distributed model with its associated libraries and resources. It is used through classical web technologies (Apache/Tomcat/PHP/MySQL) and a plugin based on web services. We have defined the same semantics as those of the software component (introspection, computation, etc.), but implemented not through an API (programming interface) but through HTTP requests using the very simple and accessible REST/JSON protocol and encoding (Figure 2). We may highlight the main opportunities and benefits offered when the computing capabilities of models are available on remote web servers:

• Possibility of unlimited storage capacity and more efficient computing capabilities.

• Dependability: services and shared data are stored in a reliable and highly efficient infrastructure.

• Scalability and easier maintenance: a shared service can be improved or maintained remotely without any need for redeployment or local effort on each post.

• Lower costs: availability of services that are unitarily expensive, whose cost is reduced when shared between many users (computing licenses, heavy calculations for optimization, etc.).

• Sustainable development: pooling of computing resources on shared and virtualized computers, which allows energy savings.

• Innovation facilities ("mashup" assembly of many services, or models, generating new uses).
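A remote model exposed over REST/JSON could be invoked as sketched below. The server URL, route and payload schema are hypothetical, since the paper does not publish them; only the general exchange pattern (POST JSON inputs, read JSON outputs) follows the text:

```python
import json
import urllib.request

# Hypothetical entry point of an uploaded model (illustrative URL).
BASE = "https://example.org/icar/transformer"

def compute(inputs):
    """POST the input values as JSON and read the outputs back
    (REST/JSON exchange as described in the text)."""
    req = urllib.request.Request(
        BASE + "/compute",
        data=json.dumps(inputs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example call (requires a live server):
# outputs = compute({"n_turns": 120, "core_section": 4e-4})
```

The same introspection/set-inputs/get-outputs semantics as the software component apply; only the transport differs.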

Figure 2. Web service usage in a framework through a plugin

Optimization framework
Model creation and diffusion
Thanks to this concept of making models available anywhere, automatically, as executable black boxes, we have applied it to create an optimization framework

with benchmark models and plugins/plug-outs available for several modelling and optimization tools. From the modelling point of view, when a model is created it is also exported as a software component. It can then be uploaded into the cloud using our web interface (https://icars.g2elab.fr) (Figure 3, left) and the management protocol that we have defined in the server, as can be seen in the server architecture (Figure 3, right). In the web interface, login rights are managed in order to guarantee the secure use of each model through the HTTPS protocol. A logged-in user can upload ICAr components to the server and manage his library. He can define rights, or not, in order to secure the use of his online models. He can then copy/paste the URL (Uniform Resource Locator) which defines the entry point for using the uploaded model, and diffuse it to people. Please be aware, however, that this solution only gives access to the computing capacity and not to the knowledge embedded in the model, which is generally required to adapt the model to a new context of use. This last point, which is complementary to the approach proposed here, is not detailed in this work, but readers who want to deepen it may consult the project Dimocode (Dargahi et al., 2010) (www.dimocode.fr), dedicated to engineering model knowledge sharing.


Model use and reuse
Our optimization frameworks (CADES and GOT) are natively ICAr compliant, so we have developed an ICAr adaptation which is also able to communicate with web services. For other environments such as Matlab or R, in which we can develop and use optimization algorithms, plugins can be based on the ICAr software component or on the web service. Several of them have been made for different environments (Excel, iShigh, Modelica/Dymola, Portunus, etc.), and especially for Matlab and R, which are used in this paper, as we will now show through a transformer design using optimization.

Application to benchmarking optimization of a transformer
Online model of a transformer
The electromagnetic model is based on a classical equivalent circuit approach. The main hypothesis concerns the computation of the leakage inductance. In order to approximate it, we consider the flux lines perfectly straight; the magnetic field increases/decreases linearly in the windings and remains constant between both windings, as described in Figure 4. The other part of the model is the economic one, based on the material cost and the cost of losses during transformer operation.

Figure 3. Left: web interface to manage ICAr software components on the web-server; right: web-server architecture (Apache HTTP server, Apache Tomcat web server, web application, ICAr repository and proxies, MySQL server)

COMPEL 33,3


The equations of the model are captured in an open language (Modelica). These equations are then transformed by CADES to automatically generate the computer code for calculating the model and the corresponding Jacobian. The software component is then created, corresponding to a single sharable file. This file can then be deployed on the internet (Figure 3, left) as a web service. The model is finally described on the Dimocode platform, where the modelling assumptions, equations and parameters are documented (Figure 5, left). Associated with this description of the device, there are also descriptions of the different optimization specifications. A final page provides various links: the URL of the web service, the ICAr component to download, and the Modelica source file of the model. The model can thus be downloaded as Modelica equations or as an ICAr software component, or used remotely as a web service. This is illustrated in the CADES framework (Figure 5, right), where a sensitivity analysis is performed using the Jacobian information.

Plugin installation
We want to optimize this model in Matlab and R for several specifications (constrained or not, mono- or multi-objective). For this, we must install the corresponding "plugins", which are available for download on the Dimocode site together with a detailed description. In both cases (software component and web service) we use Java classes, since a JVM (Java Virtual Machine) is available in Matlab. Our Java library therefore has to be declared in the Matlab javaclasspath in order to be used from Matlab M-files. Some M-file samples are provided showing how to compute, plot (2D/3D) and optimize a simple model; these samples can easily be adapted to other models. The same has been done in R, where any ICAr file can be used with simple function calls.

Model optimization in Matlab
We are now able to optimize the transformer with the following specifications:
(1) Design variables:
. Induction (B) ∈ [1, 2] T;
. Current density (J) ∈ [0.1, 5] A/mm2;
. Winding height (h) ∈ [0.1, 1] m; and
. Secondary winding turns (N2) ∈ [10, 100].
(2) Objective functions:
. fob = {material cost, losses cost}.
(3) Constraints:
. Length < 1.2 m;
. Height < 1.2 m;
. Short-circuit primary voltage (U1CCpu) = 6 percent.

Figure 4. Leakage flux approximation (left: 2D FEM simulation; middle: straight flux lines approximation; right: magnetic field plot along winding radius)

Figure 5. Left: internet page (Dimocode) describing the model; right: model sensitivity study using CADES

Figure 6 gives the mono-objective function (average of both costs) of the constrained mono-objective optimization using the sequential quadratic programming (SQP) and interior point (IP) algorithms. These two algorithms are very well known in the field of gradient-based optimization. According to the MathWorks documentation, the implemented SQP algorithm solves a quadratic programming (QP) subproblem at each iteration, updating an estimate of the Hessian of the Lagrangian at each iteration using the BFGS formula as defined by Powell (1978). It then performs a line search using a merit function, and the QP subproblem is solved using an active set strategy similar to that described in Gill et al. (1981). The IP algorithm approximates the original inequality-constrained problem by a sequence of equality-constrained problems which are easier to solve (Byrd et al., 2000). The algorithm then uses one of two main types of steps at each iteration: by default a direct step (or Newton step) attempting to solve the Karush-Kuhn-Tucker (KKT) equations via a linear approximation, or, if that fails, a conjugate gradient (CG) step using a trust region. A first observation about these two algorithms applied to our model is the higher number of iterations of the IP algorithm. The other result is that SQP reached a better solution (starting from the same initial point). The aim of this paper is not to go deeper into the analysis, only to show that such comparisons can be made easily (Figure 6). In Figure 7, we can see that the constraints are satisfied in both cases and that the solutions are very similar (B: induction, J: current density, h: winding height, N2: secondary winding turns). It is interesting to observe the different paths followed by the two algorithms.
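A comparable SQP-vs-interior-point experiment can be reproduced outside Matlab. The sketch below uses SciPy's "SLSQP" (an SQP implementation) and "trust-constr" (a trust-region method with interior-point-style constraint handling) on a toy constrained problem; the transformer model itself is not reproduced here, so the objective and constraint are invented stand-ins.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy stand-in objective (NOT the transformer cost model):
def cost(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# One inequality constraint, x0 + x1 <= 3, expressed in each solver's convention
ineq_sqp = {"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]}   # >= 0 form
ineq_ip = NonlinearConstraint(lambda x: x[0] + x[1], -np.inf, 3.0)

x0 = np.array([2.0, 0.0])        # same starting point for both algorithms
bounds = [(0.0, 5.0), (0.0, 5.0)]

res_sqp = minimize(cost, x0, method="SLSQP", bounds=bounds, constraints=[ineq_sqp])
res_ip = minimize(cost, x0, method="trust-constr", bounds=bounds, constraints=[ineq_ip])

# Both should converge to the constrained optimum (0.75, 2.25); the iteration
# counts (res_sqp.nit vs res_ip.nit) can then be compared as in the paper.
```

Running the two methods from the same starting point and comparing `nit` and the final objective mirrors the Figure 6 experiment.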

Figure 6. Comparison of the mono-objective (total price) optimization using sequential quadratic programming (SQP) and interior point (IP) algorithms (objective function value vs iterations, with a zoom on the last iterations)

SQP tries many different solutions, seems to memorize the best ones, and finally converges to the required accuracy. On the contrary, IP tries several solutions at the very beginning and then reaches convergence, but much more slowly. A second comparison has been made regarding the use of the Jacobian. Indeed, Matlab algorithms are able to compute very accurate Jacobian approximations using an adaptive finite difference step. The result is that this requires more iterations, 79 vs 33 with the exact Jacobian (Enciu et al., 2009; Pham-Quang and Delinchant, 2012), but the behavior is exactly the same and the required accuracy (relative and absolute tolerances fixed to 1e-5) was reached without numerical issues. The algorithms used until now are mono-objective, but multi-objective ones can also be used to solve this problem. This has been done at the UFMG Evolutionary Computation Group (Federal University of Minas Gerais, Brazil), which has developed several genetic strategies using differential evolution (DE) (Price et al., 2005), estimation of distribution algorithm (EDA) (Inza et al., 2000), and a hybrid version of both (DE/EDA). The algorithm configuration is the following: population = 300; generations = 20. The algorithm stops when the generation number is reached, and the results are non-dominated solutions (an approximated Pareto front), as can be seen in Figure 8. The three solutions are quite equivalent, showing a good spread of points. It is interesting to locate on the Pareto front the tradeoff between cost of material and cost of losses obtained with the mono-objective optimization (cost of losses = €8,539, cost of material = €2,519). It is clear that the total cost optimization tries to reach a low cost of losses, since it is the larger of the two values; but if a tradeoff is sought, the Pareto front gives a good tool for making the choice. We can notice that the result given by the deterministic algorithm is more accurate, as expected (Figure 8).
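The non-dominated filtering underlying these Pareto fronts can be sketched in a few lines (a generic illustration, not the UFMG implementation; all candidate costs except the mono-objective solution quoted above are invented):

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing both objectives.

    A point p is dominated if some other point q is no worse in both
    objectives (and differs from p)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# (material cost, losses cost) candidates; only the first pair comes from the
# text (the mono-objective solution), the rest are illustrative numbers:
candidates = [(2519.0, 8539.0), (3000.0, 8000.0), (2600.0, 9000.0), (3200.0, 7900.0)]
front = pareto_front(candidates)
# (2600, 9000) is dominated by (2519, 8539); the other three form the front.
```

A practical multi-objective run applies such a filter to the final population to obtain the approximated Pareto front plotted in Figure 8.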

Figure 7. Comparison of the constraints and design variables using the sequential quadratic programming (SQP, left) and interior point (IP, right) algorithms


Model optimization in R
In this section we test another optimization technique, based on surrogate models. The optimization procedure is efficient global optimization (EGO) (Jones et al., 1998), based on the sequential optimization of the expected improvement (EI) criterion using a genetic algorithm followed by a deterministic gradient-based algorithm (Mebane and Sekhon, 2009). The surrogate is a Kriging model, since it gives an estimation of the interpolation uncertainties (Krige, 1951). The EI criterion is then based on the reduction of these uncertainties as well as on the decrease of the objective function. These algorithms are available in R[4] (statistical computing software) through the DICE libraries[5]. We have therefore used the R plugin so that the transformer model available online appears as a simple function in the R software. Figure 9 shows the initial sampling (X1 = magnetic induction, X2 = current density) and the first three iterations of the EGO algorithm, which tries to maximize the EI criterion. Figure 10 shows the result, with the initial sampling, 15 new samples due to the EGO iterations, and finally an optimization on the surrogate model to find the final solution. This solution is not far from the one found with the SQP algorithm. The difficulty in such an optimization is that the Kriging interpolation is not very reliable with few points, especially in high dimension. Here, we have reduced the design space dimension from four to two in order to plot the results; with four dimensions it is quite hard to find a solution to this constrained optimization, due to the physical non-linearities and to those introduced by the penalty functions (Smoothly Clipped Absolute Deviation (SCAD); Li and Sudjianto, 2005) (Figure 10).

Conclusions
We have proposed a reusable framework, based on paradigms of software programming, which allows easy use and interoperability of engineering models. The ICAr software component standard, dedicated to optimization, has been proposed. This standard provides the ability to make models available by downloading a file, or to use them remotely as web services. This solution offers the advantage of facilitating the reuse of models as black boxes, where only parameters and inputs can be modified. We have also shown that the web service solution allows the use of models with dependencies, such as finite element simulation software.
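For reference, the expected improvement criterion driving the EGO iterations has a closed form when the Kriging prediction at a point is treated as Gaussian with mean mu and standard deviation s: EI = (f_min − mu)·Φ(z) + s·φ(z), with z = (f_min − mu)/s. A minimal sketch of this textbook form (not the DICE/DiceOptim implementation):

```python
import math

def expected_improvement(mu, s, f_min):
    """EI for minimization, given a Gaussian surrogate prediction N(mu, s^2)
    and the best objective value f_min observed so far."""
    if s <= 0.0:
        # No uncertainty: the improvement is deterministic
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / s
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (f_min - mu) * cdf + s * pdf

# At the current best point (mu == f_min) the EI reduces to s/sqrt(2*pi),
# so a point with high predicted uncertainty can still be attractive:
ei_at_best = expected_improvement(mu=10.0, s=1.0, f_min=10.0)
```

Maximizing this quantity over the design space is what balances exploitation (low mu) against exploration (high s) in EGO.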

Figure 8. Multi-objective optimization (material cost vs losses cost) with differential evolution (DE), estimation of distribution algorithm (EDA), and a hybrid version of both (DE/EDA), and the comparison with the result found by the SQP mono-objective optimization

Figure 9. Initial sampling and the first three iterations of the EGO optimization algorithm (contour plots of X2 vs X1)

Figure 10. EGO algorithm applied to the transformer constrained mono-objective optimization (B (T) vs J (A/mm2): initial sampling of nine points, 15 iterative samples from the EGO iterations, the optimization result on the surrogate, and the comparison with the SQP result on the direct model)

We have detailed a particular use case corresponding to the comparison of optimization algorithms available or developed in different software environments, such as Matlab and R, for the design of a transformer. These environments fit into our optimization framework through the use of plugins exploiting the ICAr components and the related web services. These plugins are generic and can be used with any model. The next step of this work is to promote this architecture and make it available to anyone. Then, new plugins and plug-outs will be developed by other people and the interoperability capacity will grow. We may also address other kinds of modelling semantics, such as system optimization and optimal control including dynamic simulations. The road to the practical use of such an open environment at an industrial level is still very long. Interoperability is not yet transparent for users, who are still using independent simulation software without any optimization capacities. On the other hand, the online simulation business model is rising quickly, especially with cloud computing opportunities. But innovation is a step-by-step process, because several psychological – more than technical – obstacles have to be overcome.

Notes
1. Genuine Optimization Tool: http://forge-mage.g2elab.grenoble-inp.fr/project/got
2. Component Architecture for the Design of Engineering Systems: www.cades-solutions.com/cades
3. www.cedrat.com
4. R: www.r-project.org
5. Consortium DICE (Deep Inside Computer Experiments): http://dice.emse.fr/

References
Bertrand, D.P., Gerbaud, L., Wurtz, F. and Morin, E. (2010), "A method and a tool for fast transient simulation of electromechanical devices: application to linear actuators", MOMAG, August 29-September 1, Vila Velha.
Byrd, R.H., Gilbert, J.C. and Nocedal, J. (2000), "A trust region method based on interior point techniques for nonlinear programming", Mathematical Programming, Vol. 89 No. 1, pp. 149-185.
Coulomb, J.L. (2008), "Optimization", in Meunier, G.
(Ed.), The Finite Element Method for Electromagnetic Modeling, Chapter 14, John Wiley & Sons, Inc., Hoboken, NJ, pp. 547-593.
Dargahi, A., Pourroy, F. and Wurtz, F. (2010), "Towards controlling the acceptance factors for a collaborative platform in engineering design", IFIP Advances in Information and Communication Technology, Vol. 336, pp. 585-592.
Delinchant, B., Estrabaud, L., Gerbaud, L. and Wurtz, F. (2012), "Multi-criteria design and optimization tools", in Roboam, X. (Ed.), Integrated Design by Optimization of Electrical Energy Systems, Chapter 5, Wiley ISTE, London, pp. 193-245.
Delinchant, B., Wurtz, F., Magot, D. and Gerbaud, L. (2004), "A component-based framework for the composition of simulation software modeling electrical systems", Journal of Simulation, Society for Modeling and Simulation International, Special Issue: Component-Based Modeling and Simulation, Vol. 80 No. 8, pp. 347-356.
Delinchant, B., Duret, D., Estrabaut, L., Gerbaud, L., Nguyen Huu, H., Du Peloux, B., Rakotoarison, H.L., Verdière, F. and Wurtz, F. (2007), "An optimizer using the software component paradigm for the optimization of engineering systems", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 26 No. 2, pp. 368-379.


Delinchant, B., Rakotoarison, H.L., Ardon, V., Chadebec, O. and Cugat, O. (2009), "Gradient based optimization of semi-numerical models with symbolic sensitivity: application to a simple ferromagnetic MEMS switch device", International Journal of Applied Electromagnetics and Mechanics, Vol. 30 Nos 3-4, pp. 189-200.
Enciu, P., Wurtz, F., Gerbaud, L. and Delinchant, B. (2009), "Automatic differentiation for electromagnetic models used in optimization", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 28 No. 5, pp. 1313-1326.
Gaaloul, S., Delinchant, B., Wurtz, F. and Verdière, F. (2011), "Software components for dynamic building simulation", Proceedings of Building Simulation 2011, 12th Conference of IBPSA, Sydney, November 14-16.
Gill, P.E., Murray, W. and Wright, M.H. (1981), Practical Optimization, Academic Press, London.
Inza, I., Larrañaga, P., Etxeberria, R. and Sierra, B. (2000), "Feature subset selection by Bayesian network-based optimization", Artificial Intelligence, Vol. 123 No. 1, pp. 157-184.
Jones, D.R., Schonlau, M. and Welch, W.J. (1998), "Efficient global optimization of expensive black-box functions", Journal of Global Optimization, Vol. 13, pp. 455-492.
Krige, D.G. (1951), "A statistical approach to some basic mine valuation problems on the Witwatersrand", Journal of the Chemical, Metallurgical and Mining Society of South Africa, Vol. 52 No. 6, pp. 119-139.
Li, R. and Sudjianto, A. (2005), "Analysis of computer experiments using penalized likelihood in Gaussian Kriging models", Technometrics, Vol. 47 No. 2, pp. 111-120.
Mebane, W.R. Jr and Sekhon, J.S. (2009), "Genetic optimization using derivatives: the rgenoud package for R", Journal of Statistical Software, Vol. 42 No. 11, pp. 1-26.
Pham-Quang, P. and Delinchant, B. (2012), "Java automatic differentiation tool using virtual operator overloading", in Forth, S., et al.
(Eds), Recent Advances in Algorithmic Differentiation, Emerald Group Publishing Limited, Bingley, pp. 241-250.
Powell, M.J.D. (1978), "A fast algorithm for nonlinearly constrained optimization calculations", in Watson, G.A. (Ed.), Numerical Analysis, Vol. 630, Lecture Notes in Mathematics, Springer, Berlin Heidelberg.
Price, K.V., Storn, R.M. and Lampinen, J.A. (2005), Differential Evolution: A Practical Approach to Global Optimization, Natural Computing Series, Springer, Berlin Heidelberg.
Rezgui, A., Gerbaud, L. and Delinchant, B. (2012), "VHDL-AMS to support DAE-PDE coupling and multilevel modeling", IEEE Transactions on Magnetics, Vol. 48 No. 2, pp. 627-630.
Tarricone, L. and Esposito, A. (2006), Advances in Information Technologies for Electromagnetics, Springer-Verlag Inc, New York, NY.

Corresponding author
Dr Benoit Delinchant can be contacted at: [email protected]



A modified lambda algorithm for optimization in electromagnetics

Optimization in electromagnetics

Piergiorgio Alotto, Dipartimento di Ingegneria Industriale, Università di Padova, Italy

Leandro dos Santos Coelho


Industrial and Systems Eng. Graduate Program, Pontifical Catholic University of Parana, Curitiba, Brazil and Department of Electrical Eng. (PPGEE), Federal University of Parana (UFPR), Curitiba, Brazil

Viviana C. Mariani Department of Electrical Eng. (PPGEE), Federal University of Parana (UFPR), Curitiba, Brazil and Mechanical Eng. Graduate Program (PPGEM), Pontifical Catholic University of Parana, Curitiba, Brazil, and

Camila da C. Oliveira, Industrial and Systems Eng. Graduate Program, Pontifical Catholic University of Parana, Curitiba, Brazil

Abstract
Purpose – The purpose of this paper is to show, with the help of widely used analytical and application-oriented benchmark problems, that a novel and relatively uncommon optimization method, lambda optimization, can be successfully applied to the solution of optimization problems in electromagnetics. Furthermore, an improvement to the method is proposed and its effectiveness is validated.
Design/methodology/approach – An adaptive probability factor is used within the framework of lambda optimization.
Findings – It is shown that in the framework of lambda optimization (LO) the use of an adaptive probability factor can provide high-quality solutions with small standard deviation on the selected benchmark problem.
Research limitations/implications – Although the chosen benchmarks are considered to be representative of typical electromagnetic problems, different test cases may give less satisfactory results.
Practical implications – The proposed approach appears to be an efficient general purpose stochastic optimizer for electromagnetic design problems.
Originality/value – This paper introduces and validates the use of an adaptive probability factor in order to improve the balance between the explorative and exploitative characteristics of the LO algorithm.
Keywords Optimization techniques, Optimal design
Paper type Research paper

I. Introduction
Optimization algorithms featuring stochastic elements are nowadays commonly called metaheuristics, and many of them, like particle swarm optimization (PSO), genetic algorithms (GA) and evolution strategies (ES) just to name a few, are known to be powerful methods for the solution of difficult optimization problems related to the

This work was supported by the National Council of Scientific and Technologic Development of Brazil – CNPq – under Grant Nos 476235/2011-1/PQ and 304785/2011-0/PQ and University of Padova PRAT2011 Grant No. CPDA115285.

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 759-767 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-10-2012-0224


design of electromagnetic devices, and have been studied extensively in the last three decades, with increasing interest in recent years (see e.g. Al-Aaawar et al., 2011; Crevecoeur et al., 2010; Watanabe et al., 2010). A recently introduced metaheuristic which has not received much attention in the electromagnetic optimization community, and which has shown interesting performance in other application areas, is the lambda optimization (LO) algorithm proposed in Cui et al. (2009, 2010). The objective of this paper is to review the basic algorithmic features of the relatively uncommon LO optimizer and to present a modified and improved LO (MLO) variant which tries to find a good balance between the explorative and exploitative behavior of the algorithm. Both algorithms are then tested on a brushless direct current (DC) wheel motor benchmark problem described in Brisset and Brochet (2005). Furthermore, the performance of both algorithms is compared with that of other well-established metaheuristics as well as with deterministic optimizers. The rest of this paper is organized as follows. Section II provides a detailed description of the LO algorithm on the basis of a very simple test case, while Section III is devoted to the application of LO to a multiminima analytical test problem. Section IV describes the brushless DC wheel motor benchmark problem and presents the optimization results for the LO and MLO algorithmic variants, and the paper concludes with a brief discussion in Section V.

II. The LO algorithm
A. Introduction, definitions and general aspects
The popular GA are based on a binary encoding of the features of candidate solutions and thus represent them as strings of 0s and 1s.
The LO algorithm, on the other hand, is inspired by the observation that such a representation is a very simple one and thus requires mutation and crossover operators to generate new candidate solutions, while a representation based on a more diversified set of symbols may allow the generation of highly diversified individuals without resorting to such operators. In LO the symbols used to represent the degrees of freedom of the problem are taken from the set {0,1,2,3,4} (this is called the element set) and symbols s_k are assembled in strings of the form s_1, ..., s_p, s_{p+1}, ..., s_{2p}, ..., s_{(n-1)p+1}, ..., s_{np} in order to represent candidate solutions in an n-dimensional space (each block of p symbols encodes a variable). Thanks to this representation, each variable x_j can be obtained from its string representation according to:

x_j = x_j^min + (x_j^max − x_j^min) · Σ_{k=1}^{p} (5^{p−k}/5^p) · s_{(j−1)p+k}   (1)

or x = x^min + Δx·M·s in matrix form. Obviously, the first symbol in each variable's representation is the most significant one, i.e. its change produces the largest change in the variable, while the last symbol is the least significant one. The most important element of the LO algorithm is the so-called λ operator, which is at the basis of the theoretical properties of the method. The λ operator performs a positive shift (with wrap-around) of all symbols of a string, e.g. λ(0,2,1,4,3) = (1,3,2,0,4), and is fundamental since the repeated application of λ will

result in a Markov chain which is endowed with peculiar statistical properties, among them the fact that it has a unique stationary distribution which will be reached asymptotically. This final distribution can be guessed after some iterations, and this can be used in the optimization algorithm, as will become clear in the following. If λ is applied four times to a string, thus generating a complete set of possible rotations, the operation is called λ-spreading. This operation is the LO equivalent of mutation in GA, with the important difference that in this case the operation is deterministic. λ can also be applied to individual symbols of a string, e.g. in the case of the so-called λ-comparison: if the fitness of individual I1 is better than the fitness of individual I2, all symbols of I2 which are not equal to those in the corresponding locations in I1 are subjected to λ. The idea behind this operation is that "bad genes" in an individual should change while "good genes" should remain. Inverse λ-comparison uses the same principle but applies a negative shift. If a generic individual I2 is λ-compared with a generic individual I1, each symbol has a probability of 0.2 of remaining unchanged and a probability of 0.8 of changing; therefore, in general, λ-comparison produces individuals with a high number of modified string elements.

B. Algorithm
The LO algorithm will be presented on the basis of a very simple test problem, i.e. the minimization of f(x1,x2) = (x1−1)² + (x2−1)² with bounds [−1000, 1000] on each variable. For the sake of ease of description we will use a population of nind = 4 individuals, although larger populations would be used in real problems. Furthermore, also for ease of description, each variable will be encoded with ns = 3 symbols, although longer strings are used for the solution of real problems (typically 11 symbols for each degree of freedom), i.e. each candidate solution is represented by a string of length ls = 2·ns = 6.
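Equation (1) and the λ operator are simple enough to sketch directly. The following illustration (not the authors' implementation) decodes one variable from its base-5 symbols and applies λ and λ-comparison:

```python
def decode(symbols, x_min, x_max, p):
    """Map one variable's p base-5 symbols to a real value, as in equation (1)."""
    frac = sum(5 ** (p - k) * s for k, s in enumerate(symbols, start=1)) / 5 ** p
    return x_min + (x_max - x_min) * frac

def lam(string):
    """The lambda operator: positive shift with wrap-around of every symbol."""
    return tuple((s + 1) % 5 for s in string)

def lam_compare(better, worse):
    """Lambda-comparison: shift only the symbols of the worse individual that
    differ from the corresponding symbols of the better one."""
    return tuple(w if w == b else (w + 1) % 5 for b, w in zip(better, worse))

shifted = lam((0, 2, 1, 4, 3))                    # the example from the text
zero_val = decode((0, 0, 0), -1000.0, 1000.0, 3)  # all zeros -> lower bound
```

Note that with all symbols at 4 the decoded value is x_min + (1 − 5^(−p))·(x_max − x_min), slightly below the upper bound, which is a property of this base-5 encoding.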
Step 1: initialization. In total, three populations Q, E1, E2 are randomly generated. Each consists of nind individuals, encoded with ls symbols, i.e. each population is represented by a 4 × 6 matrix containing integers from the set {0,1,2,3,4}. For each variable the search range [x_j^min, x_j^max] is initialized. The maximum number of allowed function evaluations and a threshold P in [0.5, 0.999], e.g. P = 0.5, are chosen. The iteration counter is initialized to 1.

Step 2: λ-spreading. λ-spreading is applied to Q, E1, E2, thus deterministically generating a total of 12 new populations. The total number of individuals at this point is 15·nind = 60 and forms the complete population E, represented by a 60 × 6 matrix.

Step 3: mapping and evaluation. All individuals in E are evaluated after being mapped to design space according to:

x = x^min + Δx·M·s   (2)

where x^min is the vector of lower bounds on the degrees of freedom, Δx is their range and M·s is defined in (1). This operation can be carried out in parallel and, due to the large size of E (usually much larger than the populations of other stochastic algorithms; see e.g. Alotto (2011) for an example of the use of micropopulations in the context of differential evolution), big speedups with respect to a sequential evaluation are possible.


Step 4: ranking. Put the nind = 4 best individuals of E in population B, the nind = 4 worst individuals of E in population W, and the nind = 4 good individuals lying between 60 percent (3/5) and 66 percent (2/3) of the ordered individuals of E in population G.

Step 5: frequency analysis.


For population B the first symbol in each variable's representation (the most significant symbol) is examined and the occurrence frequency f_k of each integer k of the element set is computed.

Step 6: search range restriction. If f_k ≥ P and the best individual in the population has k as the most significant symbol in the representation of the corresponding variable, then the search range of that variable is shrunk to the appropriate 1/5 of the current search range, thus changing [x_j^min, x_j^max]. The rationale behind this step is that if a symbol has a high frequency in all the best individuals of E, i.e. in B, the optimum will likely be in a neighborhood of the current best solution and therefore the search should concentrate in that area. In terms of Markov chain theory, we are guessing that one symbol of the representation is approaching its stationary state and therefore we restrict the search. After [x_j^min, x_j^max] have been updated, the same operation takes place in terms of the string representation by substituting the first (most significant) column with the second, and so on, until the least significant one, which is randomly generated. This operation is performed on the best population B and on G (good individuals), which are then substituted into W. The reason for this is that the old individuals in W are probably outside the new search range anyway.

Step 7: packed λ-comparison. λ-comparison is applied to triplets (packs) of individuals in B and in W, so that both B and W are updated; inverse λ-comparison is applied to a copy of B, thus generating a new population T.

Step 8: population update. Replace Q, E1, E2 with B, W, T, respectively.

Step 9: convergence check. If the maximum number of function evaluations has been reached, terminate the procedure; otherwise increment the iteration counter and return to Step 2.

The application of the LO algorithm to the described simple test problem on a sample run generates the results shown in Table I.
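The search-range restriction of Step 6 can be sketched as follows, reading "the appropriate 1/5" as the fifth of the interval selected by the most significant symbol k (an assumption of this illustration):

```python
def shrink_range(x_min, x_max, k):
    """Restrict [x_min, x_max] to its k-th fifth (k in 0..4), i.e. the
    subinterval selected by the most significant symbol of the encoding."""
    step = (x_max - x_min) / 5.0
    return x_min + k * step, x_min + (k + 1) * step

# Starting from the bounds [-1000, 1000] of the simple test problem, selecting
# the middle fifth (k = 2) reproduces the first restricted range of Table I:
lo, hi = shrink_range(-1000.0, 1000.0, 2)
```

Repeated application of this operation is what makes each range in Table I one fifth of the previous one (when the frequency test triggers).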
Figure 1 shows the behavior of the optimal value of the objective function and the dimensions of the search space during the iterations. Although Table I and Figure 1 refer simply to a sample run on a very simple specific example, some of the behaviors can be considered typical of LO, namely:

(1) The convergence of the algorithm (decrease of fopt and of the dimensions of the search space) is normally fast, due to the greediness of the search range restriction, especially for small values of P. In fact, small values of P promote exploitation while larger values promote exploration. This fact can be used to develop adaptive strategies, as will be explained in the case of the motor benchmark.

(2) The dimensions of the search space do not always decrease at each iteration for each variable (e.g. iterations three and four for variable x2).

(3) The algorithm may generate ranges [x_j^min, x_j^max] which do not bracket the real minimum, e.g. in the case of variable x1 from iteration five onwards, and such undesirable behavior is irreversible. In the presented case this is mainly due to the very coarse representation of each variable with three symbols only, but it can happen in general and is unavoidable.


C. Modified Lambda algorithm
As will be shown in Section IV, the standard LO algorithm is quite sensitive to the choice of P, i.e. the threshold responsible for the shrinking of the search space. Therefore we propose a modified lambda optimization (MLO) by introducing a dynamic value for P, which varies linearly between 0.6 and 0.3 during the iterations. Thus, the algorithm starts with a more exploratory behavior (a larger value of P means that the likelihood of shrinking the search space is decreased) and ends with a more exploitative one (a smaller value of P implies a higher likelihood of quickly shrinking the design space). It should be noted that, like all optimization algorithms, LO and MLO also suffer from the curse of dimensionality: since the hypervolume of the search space increases exponentially with the number of degrees of freedom, problems become

Obj. evals

1 2 3 4 5 6 7

Range x1

60 120 180 240 300 360 420

[200.0, [40.0, [8.0, [1.6, [0.32, [0.32, [0.832,

200.0] 40.0] 8.0] 1.6] 0.96] 0.96] 0.96]

Range x2

Dx1, Dx2

fopt

[200.0, 200.0] [40.0, 40.0] [8.0, 8.0] [8.0, 8.0] [1.6, 1.6] [0.96, 1.6] [0.96,1.088]

1.60E5 6.40E3 2.56E2 5.12E1 2.05E0 4.10E-1 1.64E-2

5858.0 821.2 5.072 1.422 0.208 0.018 0.015

Table I. Results for a sample run
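The greedy range-restriction loop summarized in Table I can be sketched in a few lines of Python. This is an illustrative toy with a simplified shrink rule and invented parameter names (`n_ind`, `p_start`, `p_end`), not the authors' published implementation; it only shows how a linearly decreasing threshold P trades exploration for exploitation while the search box contracts around the incumbent best point.

```python
import random

def mlo_sketch(f, bounds, n_ind=10, iters=7, p_start=0.6, p_end=0.3, seed=1):
    """Toy range-shrinking loop in the spirit of LO/MLO (simplified, hypothetical).

    Each iteration samples a population uniformly in the current box and then,
    per variable, shrinks the box around the best point found so far whenever a
    random draw exceeds the threshold P.  P decreases linearly, as in MLO.
    """
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_f = None, float("inf")
    for it in range(iters):
        # linearly decreasing threshold: exploratory first, exploitative later
        p = p_start + (p_end - p_start) * it / max(iters - 1, 1)
        for _ in range(n_ind):
            x = [rng.uniform(a, b) for a, b in zip(lo, hi)]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        for j in range(len(bounds)):
            if rng.random() > p:  # larger P -> shrinking less likely (exploration)
                half = 0.25 * (hi[j] - lo[j])
                lo[j] = max(lo[j], best_x[j] - half)
                hi[j] = min(hi[j], best_x[j] + half)
    return best_x, best_f, list(zip(lo, hi))

# sample run on a sphere function over the same initial box as Table I
best_x, best_f, final_box = mlo_sketch(
    lambda x: x[0] ** 2 + x[1] ** 2, [(-200.0, 200.0), (-200.0, 200.0)]
)
```

Because the box is only ever shrunk around the current best point, the search ranges contract monotonically, mirroring the behavior of the Δx1·Δx2 column in Table I.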

[Figure 1 here: log-scale plot of Min(f) and the size of the search space versus the number of objective function evaluations (50-450).]

Figure 1. Optimal value of the objective function and dimensions of the search space during iteration

COMPEL 33,3


extremely hard to solve even for a few tens of variables. However, both LO and MLO work quite well with high-dimensional search spaces if appropriately large populations are used (as a rule of thumb the population should consist of at least twice as many individuals as the number of degrees of freedom).

III. Analytical benchmark
The analytical benchmark refers to the well-known six-hump camel back function f(x1, x2) = 4x1^2 - 2.1x1^4 + x1^6/3 + x1x2 - 4x2^2 + 4x2^4. The function has features typical of many real problems, namely a bowl-shaped large-scale behavior, shown in Figure 2(a), with a relatively flat plateau which has a rather undulated small-scale behavior with several local minima, shown in Figure 2(b). The function has two global minima at [-0.089842, 0.712656] and [0.089842, -0.712656] with value f = -1.031628453 and an additional four local minima. MLO was run with nind = 7 and l = 11 (11 symbols to encode each degree of freedom). Figure 3 shows the position of the optimal solutions obtained over 30 runs of the algorithm with the maximum number of objective function evaluations set to 100 (Figure 3(a)) and 500 (Figure 3(b)). The picture clearly shows that even with a small number of function evaluations, the areas of the global minima are correctly identified by the algorithm, while a larger, but still reasonable, number of function evaluations allows a good precision.
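The benchmark function is easy to transcribe for experimentation; the following short sketch is a reader's aid (not part of the original paper) that checks the minimum value quoted above:

```python
def camel(x1, x2):
    """Six-hump camel back function of Section III."""
    return 4 * x1**2 - 2.1 * x1**4 + x1**6 / 3 + x1 * x2 - 4 * x2**2 + 4 * x2**4

# the two symmetric global minima reported in the text
f1 = camel(-0.089842, 0.712656)
f2 = camel(0.089842, -0.712656)
# both evaluate to approximately -1.031628453
```

The symmetry f(x1, x2) = f(-x1, -x2) is why the two global minima come in a mirrored pair.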

[Figure 2 here: surface plots of f(x1, x2). Notes: (a) Large-scale behavior; (b) small-scale behavior.]

Figure 2. Six-hump camel back function

[Figure 3 here: scatter plots of the optimal solutions in the (x1, x2) plane. Notes: (a) Maximum 100 function evaluations; (b) maximum 500 function evaluations.]

Figure 3. Optimal solutions over 30 runs

The ability of the algorithm to escape local minima is also clearly shown, since none of the runs terminated in one of the four local minima.

IV. Brushless DC wheel motor benchmark
A brushless DC wheel motor benchmark was presented in Brisset and Brochet (2005) and several optimization results obtained with various algorithms are available in the literature. Furthermore, the code for computing the objective function is publicly available (http://l2ep.univ-lille1.fr/come/benchmark-wheel-motor.htm), thus making the comparison independent of differences in the calculation of the objective function. These features make this benchmark ideal for comparing the performances of different techniques. The problem is characterized by five continuous degrees of freedom including geometric as well as electromagnetic parameters (stator diameter, air gap induction, conductor current density, teeth magnetic induction, stator back iron induction), and consists in maximizing the efficiency η of the motor (this is equivalent to a minimization of the motor losses). Furthermore, the problem is subject to six inequality constraints related to technological and operational considerations regarding the specific wheel motor. The parameters of both LO and MLO have been set with population size equal to ten and string length equal to 11. The maximum number of objective function evaluations is set to 1,000 and constraints were handled by a penalty method in both the LO and MLO approaches. In the classical LO algorithm the confidence probability P was set to several constant values between 0.5 and 1. Table II reports the results obtained by LO and MLO over 30 runs. Comparisons with other stochastic techniques are possible for this benchmark too. Table III shows the available results for sequential quadratic programming (SQP)

Optimization method | Maximum (best) η % | Mean η % | Minimum (worst) η % | SD × 10^4
LO (P = 0.5) | 95.30 | 95.16 | 95.00 | 9.84
LO (P = 0.6) | 95.27 | 95.15 | 95.00 | 8.69
LO (P = 0.7) | 95.24 | 95.11 | 94.92 | 8.60
LO (P = 0.8) | 95.27 | 95.10 | 94.92 | 9.00
LO (P = 0.9) | 95.26 | 95.08 | 94.88 | 10.83
LO (P = 1) | 95.19 | 95.07 | 94.88 | 9.78
MLO | 95.32 | 95.18 | 95.00 | 9.81

Table II. Simulation results of η maximization in 30 runs

Algorithm | η (%) | Evaluations of η
SQP | 95.32 | 90
GA | 95.31 | 3,380
GA and SQP | 95.31 | 1,644
ACO | 95.32 | 1,200
PSO | 95.32 | 1,600
ICA | 95.31 | 800
LO (P = 0.5) | 95.30 | 1,100
MLO | 95.32 | 1,100

Table III. Results of optimization using different methods

(http://l2ep.univ-lille1.fr/come/benchmark-wheel-motor/OptRest.htm; Moussouni et al., 2007), a GA (Moussouni et al., 2007), ant colony optimization (ACO; Moussouni et al., 2007), PSO (Moussouni et al., 2007) and the imperialist competitive algorithm (ICA; dos Santos Coelho et al., 2012). It can be noted that the MLO approach converged to the same solution found by SQP and ACO, which is most probably the global optimum of the problem. MLO stands out as the winner among stochastic optimizers in terms of the function evaluations required to reach the optimum (ICA, which appears to be the best in the table, is affected by rather unsatisfactory values for worst-case performance and standard deviation over multiple runs).

V. Conclusions
This paper provides a detailed description of the rather new LO algorithm, highlights some of its features and shows its application to both analytical and application-oriented benchmark problems. Furthermore, a modified version (MLO) is presented with the aim of balancing the explorative and exploitative characteristics of the algorithm. The performance of MLO on a challenging electromagnetic optimization problem is shown to be superior to those of other well-established techniques. Work is under way to apply MLO to other challenging benchmark problems and to extend it to the multiobjective case.

References
Al-Aaawar, N., Hijazi, T.M. and Arkadan, A.A. (2011), "Particle swarm optimization of coupled electromechanical systems", IEEE Transactions on Magnetics, Vol. 47 No. 5, pp. 1314-1317.
Alotto, P. (2011), "A hybrid multiobjective differential evolution method for electromagnetic device optimization", COMPEL, Vol. 30 No. 6, pp. 1815-1828.
Brisset, S. and Brochet, P. (2005), "Analytical model for the optimal design of a brushless DC wheel motor", COMPEL, Vol. 24 No. 3, pp. 829-848.
Crevecoeur, G., Sergeant, P., Dupré, L. and Van de Walle, R. (2010), "A two-level genetic algorithm for electromagnetic optimization", IEEE Transactions on Magnetics, Vol. 46 No. 7, pp. 2585-2595.
Cui, Y., Cuo, R. and Guo, D. (2009), "A naïve five-element string algorithm", Journal of Software, Vol. 4 No. 9, pp. 925-934.
Cui, Y., Cuo, R. and Guo, D. (2010), "Lambda optimization", Journal of Uncertain Systems, Vol. 4 No. 1, pp. 22-33.
dos Santos Coelho, L., Afonso, L.D. and Alotto, P. (2012), "A modified imperialist competitive algorithm for optimization in electromagnetics", IEEE Transactions on Magnetics, Vol. 48 No. 2, pp. 579-582.
Moussouni, F., Brisset, S. and Brochet, P. (2007), "Some results on design of brushless DC wheel motor using SQP and GA", International Journal of Applied Electromagnetics and Mechanics, Vol. 26 Nos 3-4, pp. 233-241.
Moussouni, F., Brisset, S. and Brochet, P. (2007), "Comparison of two multi-agent algorithms: ACO and PSO", Proceedings of the 13th International Symposium on Electromagnetic Fields in Mechatronics, Electrical and Electronic Engineering, Prague, September 13-15.

Watanabe, K., Campelo, F., Iijima, Y., Kawano, K., Matsuo, T., Mifune, T. and Igarashi, H. (2010), “Optimization of inductors using evolutionary algorithms and its experimental validation”, IEEE Transactions on Magnetics, Vol. 46 No. 8, pp. 3393-3396. Web references Available at: http://l2ep.univ-lille1.fr/come/benchmark-wheel-motor.htm Available at: http://l2ep.univ-lille1.fr/come/benchmark-wheel-motor/OptRest.htm Corresponding author Dr Piergiorgio Alotto can be contacted at: [email protected]

To purchase reprints of this article please e-mail: [email protected] Or visit our web site for further details: www.emeraldinsight.com/reprints


The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm


Simple sensitivity calculation for inverse design problems in electrical engineering


Zoran Andjelic
POLOPT Technologies, Baden, Switzerland

Abstract
Purpose – The purpose of this paper is to present a simple approach for the calculation of sensitivities in free-form inverse design problems. The approach is based on an analogy with similar tasks used in signal-processing analysis. In the proposed case it is not required to solve an adjoint problem, as in most similar optimization tasks. The simulation engine used in the background is a Fast Boundary Element Method. The approach is validated on some known benchmark problems.
Design/methodology/approach – Inverse design is recognized nowadays as a crucial scientific grand challenge. Contrary to the conventional approach ("Given the structure, find the properties") it pursues a new paradigm ("Given the desired property, find the structure"). The inverse class of problems has a broad application area, from material, medical and bio problems to the engineering class of problems. When dealing with inverse design in free-form optimization of engineering problems, the typical approach is to solve the adjoint problem. This mostly requires the costly calculation of gradients, which makes the whole optimization procedure rather expensive due to the high computational burden required for their solution.
Findings – A novel Simple Sensitivity Approach is proposed to obtain, in a fast way, the response (sensitivity) function of the analyzed structure. The simulation engine used in the background is the Fast Boundary Element Method.
Originality/value – Novel approach for inverse design when performing free-form optimization of engineering problems.
Keywords Optimization, Design optimization, Boundary element method, Free-form optimization, Inverse design, Sensitivity calculation
Paper type Research paper

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering
Vol. 33 No. 3, 2014, pp. 768-776
© Emerald Group Publishing Limited
0332-1649
DOI 10.1108/COMPEL-10-2012-0229

Introduction
Inverse design is recognized nowadays as a crucial scientific grand challenge. Contrary to the conventional approach ("Given the structure, find the properties") it pursues a new paradigm ("Given the desired property, find the structure"). The inverse class of problems has a broad application area, from material, medical and bio problems to the engineering class of problems. Engineering inverse design tasks are usually treated either by solving adjoint problems or by use of evolutionary algorithms. The adjoint approach requires a considerable investment for its development (formulation, programming, testing); once this is done, adjoint methods coupled with a simple descent algorithm can locate an optimal solution. The final solution depends on the initialization, so the adjoint method is likely to be trapped in local minima. On the other hand, evolutionary algorithms bear negligible development cost and accommodate readily any "external" evaluation software but, despite their robustness, require a great number of function evaluations to reach the optimal solution: the cost increases with the number of design variables. In fluid mechanics a typical task of inverse design is to make use of a prescribed pressure, velocity or boundary layer quantity distribution along

the shape contours and to seek the shape that reproduces this distribution at given flow conditions (Canelas et al., 2010). In Lesnic and Martin (2005), the authors present an inverse approach for the treatment of electromagnetic casting of molten metals. The approach is based on Simultaneous Analysis and Design (a mathematical programming formulation) and is solved using a feasible directions interior-point algorithm. In Feijoo et al. (2004), the authors treat an inverse acoustic problem by solving an adjoint problem. At each iteration step the computation of the shape derivatives of the cost function with respect to the parameters of the optimization problem is required. In Faugeras et al. (1999), the authors used an adjoint problem to analyse electroencephalography (EEG) and magnetoencephalography (MEG) problems in medicine. They solved the minimization problem by a descent algorithm using the information on the gradient obtained with the adjoint state approach. The direct problem in MEG is to compute the magnetic field B(r) given the primary current J(r), the conductivity σ(r) and the potential V(r). The inverse problem in MEG is: given measures of some components of the magnetic field at some points q1, …, qn, and the conductivity σ(r), to estimate the primary current J. In Engl et al. (2000), the authors optimize a 2D permanent magnet structure using a constrained gradient-based inverse approach. Again, to calculate the sensitivity it was necessary to solve the adjoint problem. The treatment of inverse problems in connection with BEM and structural mechanics problems is given in Arkadan and Subramaniam-Sivanesan (1996). The theoretical background of the regularization of the ill-posedness of inverse problems is discussed in Andjelic et al. (2012). In this paper a novel Simple Sensitivity Approach (SSA) is proposed for the inverse class of problems. It makes it possible to obtain the response (sensitivity) function of the analyzed structure without calculation of the adjoint problem and therefore provides a much faster and simpler approach for inverse design in free-form optimization of real-world engineering problems. To the author's best knowledge such an approach has not been reported in the literature so far. The simulation engine used in the background is the Fast Boundary Element Method (Andjelic et al., 2007; Andjelic and Sadovic, 2007).

Inverse design problems


Design sensitivity analysis of magnetic systems
To analyze the sensitivity of the magnetic system we use the analogy with the sensitivity calculation in signal processing (SP). Figure 1 shows the control loop for sensitivity estimation in the controller design.

[Figure 1 here: feedback control loop with external input R(s), feedback error E(s), controller Gc(s), control signal U(s), disturbance d(s), process Gp(s), output Y(s) and noise N(s).]

Figure 1. Feedback control loop for the controller design

In SP the sensitivity function ε(s) = ε(jω) is defined as:

ε(s) = E(s) / (R(s) − d(s))     (1)

where E(s) is the frequency (jω) dependent feedback error calculated as E(s) = R(s) − Y(s), R(s) is the external input and d(s) is the disturbance. The objective of such controller design in SP tasks is to keep the error between the controlled output and the external input as small as possible. To calculate the sensitivity for the field optimization tasks we use the analogy to the above SP scheme. Let us take as an example the quantities from the magnetic problem, Figure 2 (exactly the same would hold for any other class of applications such as electrostatic, acoustic, etc.). The sensitivity S of changing the field H in the space of interest (Figure 3) with the changes of the geometry of the magnetized body can be calculated following the analogy with the above scheme. In our "field controller" the input HG is the given (prescribed/desired) field distribution in the space of interest, and HC is the calculated field in the same space. The Integrator performs the numerical integration, and the Controller compares the calculated and prescribed field values in the space of interest.

[Figure 2 here: field-optimization control loop with given field HG, feedback error E, Controller F(Hg, Hc), Integrator, disturbance ΔH, noise N and output Hc = Hc−ΔH + ΔH.]

Figure 2. Calculation of the sensitivity function in the field optimization problems

[Figure 3 here: magnetized body with N surface nodes and a space of interest with M points.]

Figure 3. "Disturbances" caused by the magnetized body in the space of interest

In magnetic problems the magnetic field Hc(j) in the space point j of interest can be calculated using BEM (Andjelic and Sadovic, 2007) as:

Hc(j) = (1/4π) ∫Γ σ(i) K(i, j) dΓ     (2)

Here σ(i) is the charge density distribution over the magnetized body and K(i, j) is a corresponding geometry-dependent kernel. The most important component in our scheme is the calculation of the disturbance. To explain it we can consider the calculated field in the space point j as a sum of the contributions of the magnetic body charge densities over the magnetic body:

Hc(j) = Σ_{i=1}^{N} ΔHj(i) = ΔHj(1) + ΔHj(2) + … + ΔHj(N)     (3)

Here N is the number of points (mesh nodes) with calculated charge densities on the magnetic body. Or, applying (3) for all space points j = 1, …, M:

Hc(j = 1) = ΔH1(1) + ΔH1(2) + … + ΔH1(Ni)
Hc(j = 2) = ΔH2(1) + ΔH2(2) + … + ΔH2(Ni)
…
Hc(j = M) = ΔHM(1) + ΔHM(2) + … + ΔHM(Ni)     (4)

Here we raise a question: How much does the calculated charge density in the node "i" of the magnetized body disturb the field distribution in the space of interest? Or, better to say: What/where is the maximal disturbance caused by such a charge in the entire space of interest? The maximal disturbance term for each node on the magnetized body is obtained by scanning the corresponding contributions when calculating the field in each point of the space of interest. For example, the maximal disturbance that node No. 2 of the magnetized body (Figure 4) causes in the space of interest is obtained by comparing the calculated contributions from the charge in node 2 in all space points. The disturbance ΔHCmax for each node at the magnetized body is then calculated as:

ΔHCmax(i) = max[ΔHC(i, j); j = 1, …, Nj]     (5)

Note that this approach does not require any additional calculation, but just a memorizing of the maximal contributions taken during the standard post-processing field calculation process. This is a unique and valuable feature of the proposed approach in comparison to the gradient-based approaches used nowadays. Finally, the sensitivity for each node i on the magnetized body can be calculated as:

S = S(i) = (HG − HC) / (HG − ΔHCmax)     (6)

where HG is the external input (given field) and HC is the calculated field in the space of interest.
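In code, the bookkeeping behind equations (3)-(6) (and the displacement of equation (7)) amounts to a handful of array operations during field post-processing. The sketch below uses synthetic contribution data in place of the BEM solver; the array sizes, the dummy normals and the use of a single mean calculated field in equation (6) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 8                         # nodes on the body, points in the space of interest
dH = rng.uniform(0.1, 1.0, (N, M))  # ΔHj(i): contribution of node i at space point j

H_c = dH.sum(axis=0)                # Eq. (3): calculated field at each space point
dH_max = dH.max(axis=1)             # Eq. (5): maximal disturbance per body node

H_g = 22.0                          # prescribed field, as in the sphere example below
H_c_ref = H_c.mean()                # single representative calculated field (assumption)
S = (H_g - H_c_ref) / (H_g - dH_max)    # Eq. (6): sensitivity per body node

normals = np.tile([0.0, 0.0, 1.0], (N, 1))  # dummy outward unit normals
D = S[:, None] * normals            # Eq. (7): nodal displacement vectors
```

The point of the approach survives even in this toy: `dH_max` is obtained by merely keeping the running maximum of contributions that the solver computes anyway, so no adjoint solve or gradient evaluation appears anywhere.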


[Figure 4 here: metal sphere (μ = 1,000) in a homogeneous field H = 100 A/m, with a virtual surface with the desired homogeneous field Hd = 22 A/m.]

Figure 4. Metal sphere R = 1 m and permeability μ = 1,000 in the homogeneous field H = 100 A/m
Notes: The virtual plane with the desired field of Hd = 22 A/m is at the distance 2 m from the sphere's center

The displacement vector for each mesh node can then be calculated as the product of the calculated sensitivity and the normal vector in the mesh node:

D = S · n     (7)

The displacement in the direction of the normal vector is also physically meaningful, as a displacement in the tangential direction would not contribute to any change in the geometry form.

Example: magnetized sphere
In this elementary example we consider a metallic body having initially a spherical form (R = 1 m) in a homogeneous magnetic field H = 100 A/m. Our objective is to find a new form of the magnetized body that will provide a homogeneous field distribution (desired field) of Hd = 22 A/m over a virtual surface of 7 × 7 m at a distance of 2 m from the sphere's center. The free-form optimization run has been performed governed by the above-described procedure for sensitivity calculation. A threshold of 3 percent with respect to the desired value (Hg = 22 A/m) has been set. Note that before optimization the field value of 21 A/m was reached only in the central point of the plane above the sphere. Figure 5 shows the changes in the distribution of the calculated magnetic field in the space of interest caused by the changes of the form of the magnetized body. The error of 3 percent has been reached after 32 iterations. It has to be noted that the optimization has currently been conducted preserving the initial topology. Another possible approach would be to adapt the mesh after each iteration by either introducing new elements (extrusion of the geometry) or collapsing old elements (shrinking of the geometry). This approach is recommended when the

[Figure 5 here: field distribution in the space of interest at iterations 0, 5, 10, 15, 20, 25, 30 and 32.]

Figure 5. Changes of the field distribution in the space of interest with the changes of the form of the magnetized body
Notes: After the 32nd iteration the error has reached the given threshold of 3 per cent

significant changes in geometry size appear. After each iteration an appropriate mesh smoothing usually has to be performed. Figure 6 shows the error calculated as the difference between the calculated and desired field in the space of interest. Looking at Figure 5 and the graph in Figure 6 it can be seen that the major "work" was to extrude the magnetic body from the spherical shape to the final shape, which is close to a quadratic form with a longitudinal dimension of almost four times the initial spherical radius.

Example: levitated rod problem
As one more illustration, an example of the optimization of the levitated rod problem is shown. The model geometry is "borrowed" from Se-Hee et al. (2002), Figure 7. In our case the objective is to find a form of the magnetic pole shoes that provides a constant field distribution over the prescribed space of interest. In this case the prescribed space of interest is the area on the levitated rod within an angle of 130° (Figure 8)[1]. The optimization has been performed for three different values of the given field Hg = 150, 200 and 250 A/m.

[Figure 6 here: error 100 × (Hc − Hd)/Hd versus iterations (0-35).]

Figure 6. Error calculated during the optimization process for the entire space of interest

[Figure 7 here: model with core, coil, movable magnetic pole shoes and the levitated rod (space of interest).]

Figure 7. Model of the "Levitated rod" problem

[Figure 8 here: levitated rod with the 130° arc over which a constant field is required.]

Figure 8. Objective function is a constant field distribution over the area of 130°

To obtain the form of the magnetic pole shoes that provides the desired field distribution over the levitated rod we have used the above procedure for sensitivity calculation in combination with the module for free-form optimization (Andjelic et al., 2008; Se-Hee et al., 2002). Figure 9 shows the field distribution over the space of interest after 61 iterations. The field in the observed area of 130° varies below the given threshold of 10 percent. Figure 10 shows the optimal form of the pole shoes for the three different given values of Hg. With increasing field values the pole shoes "come" closer to the levitated rod (from left to right).

Conclusions
The paper elaborates a simple approach for sensitivity calculation in free-form optimization of engineering problems. The approach does not require the solution of the adjoint problem. The sensitivity is obtained as natural information delivered through the post-processing field calculation process. This enables efficient and robust optimization of real-world problems without any additional effort. The background solution engine used for the optimization is in our case a Fast Boundary Element Method. The proposed method for sensitivity calculation has a generic character and

[Figure 9 here: field distribution (scale ×10^2, 0.5-2.0) over the levitated rod, panels (a)-(c).]

Figure 9. Field distribution over the space of interest (levitated rod) for the prescribed value of Hg = 200 A/m
Notes: (a) Hg = 150 A/m; (b) Hg = 200 A/m; (c) Hg = 250 A/m

can be used with FEM or any other numerical method as the numerical engine in the background. The approach is also independent of the application class and can be used for the optimization of not only magnetic but also dielectric, acoustic or similar classes of problems.

Note
1. More than 130° is hard to achieve as the magnetic field has a "zero" value at the "equator" (Figure 9).

Figure 10. Front view to the movable geometry


References
Andjelic, Z. and Sadovic, S. (2007), "Reduction of breakdown appearance by automatic geometry optimization", IEEE Conference on Electrical Insulation and Dielectric Phenomena, Vancouver, October 14-17.
Andjelic, Z., Smajic, J. and Conry, M. (2007), "BEM-based simulations in engineering design", Boundary Element Analysis, Mathematical Aspects and Applications, Vol. 29, ISBN 3-540-47465-X, Springer Verlag, pp. 281-352.
Andjelic, Z., Of, G., Steinbach, O. and Urthaler, P. (2012), "Fast BEM for industrial applications in magnetostatics", in Langer, U., Schanz, M., Steinbach, O. and Wendland, W.L. (Eds), Lecture Notes in Applied and Computational Mechanics, Vol. 63, Springer-Verlag.
Andjelic, Z., Pusch, D., Schoenemann, T. and Sadovic, S. (2008), "Multi-load optimization in electrical engineering design, Part 1: Simulation", EngOpt, International Conference on Engineering Optimization, Rio de Janeiro, June 1-5.
Arkadan, A.A. and Subramaniam-Sivanesan, S. (1996), "Shape optimization of PM devices using constrained gradient based inverse problem methodology", IEEE Transactions on Magnetics, Vol. 32 No. 3, pp. 1222-1225.
Canelas, A., Roche, J.R. and Herskovits, J. (2010), "Shape optimization for inverse electromagnetic casting problems", INRIA-00544699, Ver. 1.
Engl, H.W., Hanke, M. and Neubauer, A. (2000), Regularization of Inverse Problems, ISBN 0-7923-4257-0, Kluwer Academic Publishing.
Faugeras, O., Clement, F., Deriche, R., Keriven, R., Papadopoulo, T., Roberts, J., Vieville, T., Devernay, F., Gomes, J., Hermosillo, G., Kornprobst, P. and Lingrand, D. (1999), "The inverse EEG and MEG problems: the adjoint state approach I: the continuous case", Rapport de recherche No. 3673, INRIA.
Feijoo, G.R., Oberai, A.A. and Pinsky, P.M. (2004), "An application of shape optimization in the solution of inverse acoustic scattering problems", Inverse Problems, Vol. 20 No. 3, pp. 199-228.
Lesnic, D. and Martin, L. (2005), "A tutorial on inverse analysis with boundary elements", in Colaço, M.J., Orlande, H.R.B. and Dulikravich, G.S. (Eds), Inverse Problems, Design and Optimization, Vol. 1, ISBN 85-7650-029-9, E-papers Publishing House Ltd.
Se-Hee, L., Dong-Hun, K., Joon-Ho, L., Byung-Sung, K. and Il-Han, P. (2002), "Shape design sensitivity for force distributions of magnetic systems", IEEE Transactions on Applied Superconductivity, Vol. 12 No. 1, pp. 1471-1474.

Further reading
Kampolis, I., Papadimitriou, D.I. and Giannakoglou, K.C. (2005), "Evolutionary optimization using a new radial basis function network and the adjoint formulation", in Colaço, M.J., Orlande, H.R.B. and Dulikravich, G.S. (Eds), Inverse Problems, Design and Optimization, Vol. 1, ISBN 85-7650-029-9, E-papers Publishing House Ltd.

Corresponding author
Professor Zoran Andjelic can be contacted at: [email protected]



Optimal household energy management using V2H flexibilities

Ardavan Dargahi

V2H flexibilities


Grenoble Electrical Engineering laboratory (G2Elab), Grenoble Institute of Technology, Grenoble, France

Stéphane Ploix
Laboratory of Grenoble for Sciences of Conception, Optimisation and Production (G-SCOP), Grenoble Institute of Technology, Grenoble, France

Alireza Soroudi Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran, and

Frédéric Wurtz
Electrical Engineering Lab, Grenoble University, Grenoble, France

Abstract
Purpose – The use of energy storage devices helps consumers to utilize the benefits and flexibilities brought by smart networks. One of the major energy storage solutions is using electric vehicle batteries. The purpose of this paper is to develop an optimal energy management strategy for a consumer connected to the power grid equipped with Vehicle-to-Home (V2H) power supply and a renewable power generation unit (PV).
Design/methodology/approach – The problem of energy flow management is formulated and solved as an optimization problem using a linear programming model. The total energy cost of the consumer is optimized. The optimal values of the decision variables are found using the CPLEX solver.
Findings – The simulation results demonstrate that if optimal decisions are made regarding the V2H operation and the management of the power produced by the solar panels, then the total energy payments are significantly reduced.
Originality/value – The gap that the proposed model is trying to fill is the holistic determination of an optimal energy procurement portfolio by using various embedded resources in an optimal way. The contributions of this paper are threefold: first, the introduction of mobile storage devices with a periodical availability depending on driving schedules; second, offering a new business model for managing the generation of PV modules by considering the possibility of grid injection or self-consumption; third, considering Real Time Pricing in the suggested formulation.
Keywords Battery storage, Building power management, Linear optimization, Power system economics, Solar power, Vehicle-to-home (V2H)
Paper type Research paper

1. Introduction
Car fleet electrification has been supported by a number of prior studies, e.g. Lindly and Haskew (2002) and Ford (1995), as a suitable alternative to push down airborne pollutants in dense urban environments. Nevertheless, as concluded in Karplus et al. (2010), the expected decreases in tailpipe emissions would be outweighed by the additional CO2 emissions due to the increased electric power output necessary to support electric vehicles (EVs). Hence, a widespread deployment of EVs could also add a huge load to power

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering
Vol. 33 No. 3, 2014, pp. 777-792
© Emerald Group Publishing Limited
0332-1649
DOI 10.1108/COMPEL-10-2012-0223


generation and distribution infrastructures and also introduce line overloading and voltage stability problem. Since the early twenty-first century, new concepts are born to change consumer’s perception on EV as a transport only technology which becomes a huge energy consumer while charging its battery. The Vehicle-to-Grid (V2G) technology moves from EVs common pattern of use and opens the opportunity to regard them as valuable resources. According to the new researches (Turton and Moura, 2008) personal vehicles are in average engaged only 4-7 per cent of their lifetime for transportation purpose in a day and in the remaining time they are in rest on private or public parking lots. At this time each parked EV contains underutilized energy storage capacity which can also feed electricity to the grid when needed. For interested readers the principle of V2G power transmission can be found in Kempton and Tomice´ (2005) with more details. V2G could also extend to small private houses or residential and commercial buildings called, respectively, Vehicle-to-Home (V2H) or vehicle-to-building (V2B) (San Roman et al., 2011; Beer et al., 2012). In such case, instead of restoring the content of EV battery directly to the grid to serve a large number of the sites, one or several electric cars supply a single building. V2H concept would be regarded as the gateway towards a plausible convergence between transport and construction for a more flexible development. In this way, EVs can get charged during off-peak period at home or workplace from renewable energy resources located in buildings. The stored energy in the car battery can also supply the building at the high demand hours. This operation could relieve the electricity network at the peak hours and if properly performed can also create economic value for vehicle owners and building operators. 
The integration of EV storage and distributed energy resources into the building power supply poses new technical and economic opportunities and challenges, as follows:

(1) Opportunities:
. avoiding high electricity market prices in real-time dynamic pricing paradigms;
. reducing the risk of power outages and increasing the reliability of supply; and
. reducing the environmental risk by using clean energy resources instead of polluting conventional power plants.

(2) Challenges:
. probable increase of operating costs due to inappropriate energy management decisions;
. uncertainty handling of renewable energy resources;
. reduction of the lifetime of the EV's battery due to multiple charging and discharging cycles; and
. unavailability of the energy storage capacity (EV) due to the mobility of the vehicle.

The gap that the proposed model tries to fill is the holistic determination of an optimal energy procurement portfolio by using the various embedded resources in an optimal way. The contributions of this paper are threefold:

(1) the introduction of mobile storage devices with a periodic availability depending on driving schedules;
(2) a business model for managing the generation of PV modules by considering the possibility of grid injection or self-consumption; and
(3) the consideration of Real Time Pricing in the suggested formulation.

The rest of this paper is organized as follows: the energy system under study is described in Section 2. Section 3 models the storage system of the EV. The detailed formulation of the optimal management problem is explained in Section 4, followed in Section 5 by a description of the examined system usage scenarios. Finally, Section 6 presents and discusses the results, and the main conclusions are drawn in Section 7.


2. Study case description and assumptions
The optimal power management proposed in this paper is illustrated by a theoretical example of a small-scale V2H power supply system consisting of a residential building with a decentralized generation facility and a plug-in electric car. Figure 1 schematically illustrates the power architecture of the considered power system as well as the direction of the power flows through the whole system. The notations used in this figure are explained as follows:

. Pg: power flow from the grid (kW);
. Pg2b: power flow from grid to building (kW);
. Pg2v: power flow from grid to electric car (kW);
. Ps2v: power flow from PV plant to electric car (kW);
. Ps2b: power flow from PV plant to building (kW);
. Ps2g: solar surplus released to the grid (kW);
. Pv2b: power flow from vehicle to building (kW);
. Peng: power flow fed to the electric engine (kW);
. bin/out: net input/output power flow of the battery (kW); and
. SOC: state of charge of the battery (kWh).

Figure 1. Schematic of V2H integration into building energy supply (utility grid, rooftop PV and EV battery feeding the home appliances)
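The power flows listed above obey simple nodal balances implied by the arrows of Figure 1. A minimal Python sketch of this bookkeeping follows; the balance relations are inferred from the figure rather than stated explicitly in the text, and the numerical values are invented for illustration (Ps denotes the total PV output, per Figure 1):

```python
# Illustrative bookkeeping for the power flows of Figure 1 (all in kW).
# The balance relations below are inferred from the figure's arrows,
# not stated explicitly in the paper.

def building_balance(Pg2b, Ps2b, Pv2b, load):
    """Building is fed by grid, PV and vehicle; the result is the surplus over the load."""
    return Pg2b + Ps2b + Pv2b - load

def pv_split(Ps, Ps2b, Ps2v, Ps2g):
    """PV output is split between building supply, car charging and grid export."""
    return Ps - (Ps2b + Ps2v + Ps2g)

# Example: 3 kW of PV fully self-consumed, 1 kW drawn from the grid for a 3 kW load.
assert pv_split(3.0, 2.0, 1.0, 0.0) == 0.0
assert building_balance(1.0, 2.0, 0.0, 3.0) == 0.0
```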

The building is supposed to be located in Grenoble, France. A rooftop installation consisting of 25 photovoltaic panels is disposed on the southern face of the roof. An electric Nissan Leaf is at the inhabitants' disposal to ensure daily commutes over short and medium distances. It is powered by a lithium-ion battery pack whose characteristics are given in Table I. The operating mode of the performed case study is defined based on the following assumptions:

. The excess photovoltaic power can be injected into the electric grid for selling.
. Solar electricity used on the premises, either for recharging the car or for powering the building, is also rewarded per unit. The self-consumption bonus (SCB) is one of the multiple schemes to accelerate the development of decentralized green power generation.
. The household load is assumed inelastic, i.e. appliances consume constant power within a considered time interval.
. The car is recharged only at the home station; the possibility of refueling anywhere else is omitted.
. For the sake of simplicity, AC/DC and DC/AC conversion losses in the different parts of the system are neglected.

3. Battery storage model
Unlike the stationary storage technologies commonly applied in housing applications, the EV battery packs used in V2H and V2B systems are in constant motion when the car is driven from one point to another. Although V2H technology relies on the power supply ability of electric cars, the primary task of a family car is to address the mobility needs of its users. During the driving period, the car can neither refuel its battery nor deliver V2H or V2B power. So the amount of energy that the battery is able to absorb or release for V2H support in a time interval depends on the fraction of the time interval wherein the car is plugged in. Figure 2 depicts the position in time of a given trip performed by the EV. Let Ω be the set of journeys assigned to the electric car, ω ∈ {1, ..., NΩ}, where ω is the journey

Table I. Specification of the Nissan Leaf battery

Characteristic       Symbol  Value
Technology           -       Lithium-ion
Nominal capacity     Cn      24 kWh
Charging rate        CR      30%
Discharging rate     DR      30%
Efficiency           ηb      95%
Depth of discharge   DOD     90%
Life cycle           -       160,000 km

Figure 2. Position of a daily commute vs time periods (a trip of duration D(ω) starting at Strt(ω) overlaps the slots (t-2) ... (t+1) with partial durations d(ω, t))

(mission) index and NΩ is the number of missions. We define the potential duration d′(ω, t), corresponding to the duration of journey ω within the time interval [t, (t+1)] of length Δ, as follows:

d′(ω, t) = Min[Strt(ω) + τ(ω), (t+1)Δ] − Max[Strt(ω), tΔ]   (1)

where Strt(ω) represents the start time of trip ω, τ(ω) the total duration of the mission and d′(ω, t) the potential duration of the trip in time period t. The duration is effective only if d′(ω, t) is positive, as enforced by:

d(ω, t) = Max[d′(ω, t), 0]   (2)

Accordingly, the storage and V2B power flows should be limited with respect to the partial duration of the commutes scheduled for the electric car, as described in (3) and (4):

0 ≤ b_in^t ≤ (1 − Σ_{ω∈Ω} d(ω, t)/Δ) b_in^max   (3)

0 ≤ P_v2b^t ≤ (1 − Σ_{ω∈Ω} d(ω, t)/Δ) b_out^max   (4)

b_in^max and b_out^max stand for the maximum charge and discharge capacities of the battery in a given time interval, respectively, and are defined as:

b_in^max = Cn CR / Δ   (5)

b_out^max = Cn DR / Δ   (6)

The respective values of the battery nominal capacity (Cn) and the charging (CR) and discharging (DR) rates can be found in Table I.
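A minimal Python sketch of equations (1)-(6) follows. It assumes, as in the reconstruction above, that the plugged-in fraction of a slot is the total trip overlap divided by the slot length Δ; the trip times, slot length and battery data are illustrative:

```python
# Sketch of equations (1)-(6): effective trip duration within a time slot
# and the resulting charge/discharge power limits. Variable names follow
# the paper; the example trip (a 12:15-14:45 commute) is made up.

def trip_duration(strt, tau, t, delta):
    """Overlap of trip [strt, strt+tau] with slot [t*delta, (t+1)*delta] -- eqs (1)-(2)."""
    d0 = min(strt + tau, (t + 1) * delta) - max(strt, t * delta)
    return max(d0, 0.0)

def charge_limits(trips, t, delta, Cn, CR, DR):
    """Max charge/discharge power in slot t, scaled by the plugged-in fraction -- eqs (3)-(6)."""
    away = sum(trip_duration(s, tau, t, delta) for (s, tau) in trips) / delta
    b_in_max = Cn * CR / delta      # eq (5)
    b_out_max = Cn * DR / delta     # eq (6)
    return (1.0 - away) * b_in_max, (1.0 - away) * b_out_max

# One 2.5 h trip starting at 12:15, hourly slots (delta = 1 h):
print(trip_duration(12.25, 2.5, 12, 1.0))  # 0.75: the car is away 45 min of slot [12, 13]
```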

As illustrated, the battery charging may occur in several ways: either via the photovoltaic power or via the grid. This is also valid for discharging, which may be performed for the purpose of supplying the building or for driving. According to Figure 1, the corresponding equations for the power flows into and out of the car storage are:

b_in^t = P_s2v^t ηb + P_g2v^t ηb   (7)

b_out^t = P_v2b^t / ηb + P_eng^t / ηb   (8)

On the other hand, when the car is driven, the energy accumulated in the battery is fed to the engine to propel the car forward, and thus the available battery capacity drops. The discharge caused by vehicle use in each time period, considering a constant power drain, is:

P_eng^t = Σ_{ω∈Ω} ED_ω d(ω, t) / τ(ω)   (9)

ED_ω represents the energy demand of commute ω and can be calculated as the product of the vehicle consumption, reported to be around 0.25 kWh/km (Grosjean and Perrin, 2012), and the driving distance D_ω in km, yielding equation (10):

ED_ω = η_veh D_ω   (10)

Monitoring the residual content of the battery, also called the State of Charge (SOC), is important for a good estimation of the available driving range of the EV and of the extent to which the latter can serve the building's internal loads via V2H or V2B. A very simple and practical method is used for SOC assessment: the battery content at the end of a time period equals its content at the previous time step plus the net energy exchanged during the period:

SOC^t = SOC^(t−1) + b_in^t − b_out^t   (11)
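Equations (7), (8) and (11) amount to simple bookkeeping. A sketch in Python, assuming hourly slots so that power (kW) and energy (kWh) coincide numerically; the flow values and the initial SOC are invented:

```python
# Sketch of equations (7), (8) and (11): battery power flows and
# state-of-charge update over one slot. eta_b = 0.95 is the battery
# efficiency from Table I; all other numbers are illustrative.

def soc_step(soc, Ps2v, Pg2v, Pv2b, Peng, eta_b=0.95):
    """Return the SOC after one slot."""
    b_in = (Ps2v + Pg2v) * eta_b          # eq (7): charging flows, efficiency applied
    b_out = (Pv2b + Peng) / eta_b         # eq (8): discharging flows, efficiency applied
    return soc + b_in - b_out             # eq (11)

# Charge 5 kWh from PV, then drive a 5 km errand (0.25 kWh/km, per eq (10)):
soc = soc_step(12.0, 5.0, 0.0, 0.0, 0.0)      # 12 + 5*0.95 = 16.75 kWh
soc = soc_step(soc, 0.0, 0.0, 0.0, 5 * 0.25)  # engine draws 1.25 kWh from the pack
```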

4. Optimal energy flow management
4.1 Generic linear programming (LP) problem formulation

Table II. Summary of investigated scenarios

Scenario  EV connection  V2H power supply  Daily commute profile
Baseline  No             No                No trip
EVL       Available      No                12 h 15-14 h 45 / 30 km; 19 h-19 h 30 / 5 km
EVLS      Available      Available         12 h 15-14 h 45 / 30 km; 19 h-19 h 30 / 5 km

The problem of energy flow management can be formulated and solved as an optimization problem called "multi-source energy dispatch", which corresponds in nature to standard problems known in electrical engineering, such as "economic dispatch" and

"optimal power flow" (Wood and Wollenberg, 1996; Pardalos and Resende, 2002). A similar problem is suggested for a multi-source building in Warkozek et al. (2012). A particular formulation of the problem is tailored to our assumed V2H-enabled building by using LP. The canonical form of a linear optimization problem is given as:

maximize or minimize f(x) = cᵀx   (12a)

subject to Ax ≤ b   (12b)

and x ≥ 0   (12c)

where the vector c gathers the cost coefficients of the decision variables, the matrix A and the vector b encode the system constraints, and the vector x collects the power-flow decision variables.
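A toy instance of the canonical form (12a)-(12c) can be solved by inspecting the vertices of the feasible region, which is sufficient for two variables. The prices and limits below are invented for illustration and are not the paper's full dispatch model:

```python
# Toy LP in the canonical form (12a)-(12c): buy energy in two price slots
# to cover a 10 kWh demand with a 6 kWh cap on the cheap slot.
# Constraints in Ax <= b form: -x1 - x2 <= -10 (demand), x1 <= 6 (cap), x >= 0.

c = (0.10, 0.20)                        # cost coefficients (EUR/kWh), invented

def feasible(x1, x2):
    return x1 + x2 >= 10 and x1 <= 6 and x1 >= 0 and x2 >= 0

# An LP optimum lies at a vertex of the feasible polytope; here there are two:
vertices = [(6.0, 4.0), (0.0, 10.0)]
best = min((v for v in vertices if feasible(*v)),
           key=lambda v: c[0] * v[0] + c[1] * v[1])
print(best)   # (6.0, 4.0): fill the cheap slot first
```

A real instance with tens of slot variables would be handed to an LP solver, but the structure of c, A and b is exactly the one written here.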

A_i(r, z_i, I) = (μ0 I / (k_i π)) √(a_i/r) [(1 − k_i²/2) K(k_i) − E(k_i)]

B_ri(r, z_i, I) = −∂A_i/∂z = (μ0 I k_i z_i / (4π r √(a_i r))) [−K(k_i) + ((a_i² + r² + z_i²)/((a_i − r)² + z_i²)) E(k_i)]

B_zi(r, z_i, I) = (1/r) ∂(r A_i)/∂r = (μ0 I k_i / (4π √(a_i r))) [K(k_i) + ((a_i² − r² − z_i²)/((a_i − r)² + z_i²)) E(k_i)]   (9)

where K and E are the complete elliptic integrals of the first and second kinds, respectively. Then, the vector potential of the system can be deduced easily by adding A1 and A2. The energy stored by the two filamentary loops can be deduced from the mutual flux as follows:

We12 = (1/2) I2 Φ12,  with Φ12 = 2π a2 A1(a2, (z2 − z1), I1), so that

We12 = π a2 I2 (μ0 I1 / (k π)) √(a1/a2) [(1 − k²/2) K(k) − E(k)]   (10)

where:

k = √( 4 a1 a2 / ((a1 + a2)² + (z2 − z1)²) )
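The complete elliptic integrals in equation (9) can be evaluated without special-function libraries via the arithmetic-geometric mean. A self-contained Python sketch of K, E and the standard single-loop vector potential used in equation (9) follows; the loop radius, current and observation point are arbitrary:

```python
# K(m) and E(m) by the arithmetic-geometric mean (Abramowitz & Stegun 17.6),
# then the azimuthal vector potential A of one filamentary loop as in eq (9).
import math

def ellip_KE(m, tol=1e-15):
    """Complete elliptic integrals K and E of parameter m = k**2."""
    a, b, c = 1.0, math.sqrt(1.0 - m), math.sqrt(m)
    s, n = 0.5 * c * c, 0               # running sum of 2**(n-1) * c_n**2
    while abs(c) > tol:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        n += 1
        s += 2 ** (n - 1) * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

MU0 = 4e-7 * math.pi

def loop_A(r, z, a, I):
    """Vector potential of a loop of radius a carrying current I, at (r, z)."""
    k2 = 4 * a * r / ((a + r) ** 2 + z ** 2)
    k = math.sqrt(k2)
    K, E = ellip_KE(k2)
    return MU0 * I / (k * math.pi) * math.sqrt(a / r) * ((1 - k2 / 2) * K - E)

print(loop_A(1.0, 0.5, 2.0, 100.0))     # A in Wb/m for an arbitrary test point
```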

2. kth model
This model considers that each coil is made of Nk wires, each carrying the current I = J·dS, as shown in Figure 3. This approximation allows the calculation of the different magnetic parameters at each point of space with a small discrepancy (Rezzoug et al., 1992). The vector potential Ac at P(x, y, z) generated by one coil and the energy Wb stored by the system can thus be written:

Ac(r, z_i, I) = Σ_{i=1..Nk} A_i(r, z_i, I) = Σ_{i=1..Nk} (μ0 I / (k_i π)) √(a_i/r) [(1 − k_i²/2) K(k_i) − E(k_i)]   (11)

Wb = Σ_{i≠j} We_ij = Σ_{i≠j} (1/2) I_i Φ_ji   (12)

With the same approach, the magnetic stray field can be obtained easily.

3. nth model
The nth model has the highest accuracy. In our case it is a 2D FEM with a fine mesh, comprising 552,202 nodes and 275,219 elements. One evaluation of this model takes 784 seconds.

4. Accuracy and computing time
The accuracy and computing time of the models increase with the number of filamentary loops used to model one coil. Therefore, many models with different levels of accuracy can be obtained easily by changing the number of loops, as shown in Table I. In this table, the error refers to the discrepancy between the analytical models' outputs and those of the 2D FEM.

Figure 3. Subdividing a coil into Nk filamentary loops of radii a_1 ... a_N, each carrying the current I = J·dS

Figure 4 shows a Pareto front consisting of the trade-offs between the models' error and computing time, confirming that each model presents an optimal compromise between error and time. To justify the coherence of the analytical models, Figure 5 compares the magnetic field calculated within the outer coil by the analytical model using 400 filamentary loops with that of the 2D FEM. The magnetic maps of the outer coil show that the discrepancy between the magnetic field values calculated by the medium model and the 2D FEM is less than 1 percent. This confirms the coherence of the analytical models built.

Table I. Models' accuracy and computation time

Type  Number of loops  Error (%)  Time (s)
m1    1                254        0.016
m2    100              7.2        1.61
m3    400              0.3        24.8
m4    900              0.14       127.53

Figure 4. Trade-offs between the models' error and computing time (log(Error) vs time in seconds, models m1 to m4)

Figure 5. Comparison between the analytical model using 400 filamentary loops (right) and a 2D FEM (left). Notes: the colour code refers to the flux density in Tesla; the axes refer to the coil's radial and axial coordinates (m)

5. Optimization problem
The goal of the optimization problem is to find the design configurations that give a specified value of stored magnetic energy and a minimal magnetic stray field. Mathematically, this is formulated as:

min over (R2, h2/2, d2):  OF = B²stray / B²norm + |Energy − Eref| / Eref   (14)

with 2.6 m ≤ R2 ≤ 3.4 m, 0.204 m ≤ h2/2 ≤ 1.1 m and 0.1 m ≤ d2 ≤ 0.4 m,

s.t. |J2| ≤ (−6.4 |B| + 54) A/mm²

where B²stray = (Σ_{i=1..22} B²stray,i)/22, Eref = 180 MJ and Bnorm = 3 mT, with Bstray,i the magnetic field measured at one of 22 points located ten meters from the device.

6. Optimization results
In order to minimize the risk of being trapped in a local optimum, the optimization with the surrogate analytical model is launched from ten starting points. The optimization uses the Sequential Quadratic Programming (SQP) method, which obtains an optimum in a very short time. Tables II and III show the results obtained with different numbers of models. Table II shows the computation time and the number of evaluations of each SMES model. Table III compares the optimal solutions found by the nL-OSM algorithm with an optimal solution found in the literature (Alotto et al., 1996). The n-level OSM algorithm and the global optimization algorithm used in Alotto et al. (1996) converge to the same solution. This confirms that multi-start optimization is an efficient way to reduce the risk of local optima when using a local optimization algorithm such as SQP. The two-level OSM and the n-level OSM algorithms converge to the same solution. However, the computation time has decreased by up to a factor of three, thanks to the addition of

Table II. Computation time and evaluation number of each SMES model

            Evaluation number of each model
Algorithm   m5   m4   m3   m2   m1      Time (s)
5L-OSM      2    2    2    10   1,225   1,849
4L-OSM      2    2    3    -    1,230   1,849
3L-OSM      2    4    -    -    1,452   2,101
2L-OSM      7    -    -    -    2,476   5,527

Table III. Optima found by the different algorithms

Algorithm              R2 (m)  h2/2 (m)  d2 (m)   OF       B²stray (T²)  Energy (MJ)
(Alotto et al., 1996)  3.08    0.2390    0.3940   0.08808  7.9138e-7     180.0277
2L-OSM                 3.08    0.2405    0.3937   0.08910  7.9382e-7     180.1622
3L-OSM                 3.08    0.2405    0.3937   0.08920  7.9530e-7     180.1572
4L-OSM                 3.08    0.2405    0.3936   0.08960  7.9503e-7     180.2215
5L-OSM                 3.08    0.2405    0.3936   0.08963  7.9501e-7     180.2535

models with intermediate accuracy and computation time. The computation time of the nL-OSM stops decreasing when the number of intermediate models exceeds two in the case of the SMES; indeed, the computing times of the 5L-OSM and the 4L-OSM are equal. In the case of the SMES, the 4L-OSM is therefore the best algorithm in terms of computing time. This result cannot be generalized to other cases, because the efficiency of the nL-OSM in terms of computing time is strongly related to the accuracy and the computing time of the coarse and medium models built. It is very important to highlight that the computing time of the nL-OSM algorithm can be greater than that of the (n−1)L-OSM if the medium models are chosen incorrectly: first findings from the SMES case show that, to obtain an additional gain in computing time, the introduced medium model (k) must satisfy two conditions. First, the medium model (k) must be at least three times more accurate and ten times slower than the coarse model. Second, the medium model (k) must be at least seven times faster and three times less accurate than the medium model (k+1).

7. Conclusion
The n-level OSM algorithm reduces computing time in comparison with the traditional OSM algorithm thanks to the n−2 models inserted between the coarse and the fine model. The inserted models can be obtained easily by changing the mesh size in FEM models, the number of elements in lumped-mass models, or the number of assumptions in analytical models. Using the different models with their different granularities, the n-level OSM algorithm converges to the same optimum found by the conventional OSM algorithm, with a reduction of computing time by a factor of three and without decreasing the performance of the SM technique. The gain in computing time of the adapted SM algorithm stops increasing when the number of medium models becomes large.
To avoid this situation, the authors propose to search for an automatic diagnostic of the models in order to find the optimal number of medium models to introduce between the coarse and fine ones.

References

Alotto, P., Kuntsevitch, A.V., Magele, C., Molinari, G., Paul, C., Preis, K., Repetto, M. and Richter, K.R. (1996), "Multiobjective optimization in magnetostatics: a proposal for benchmark problems", IEEE Transactions on Magnetics, Vol. 32 No. 3, pp. 1238-1241.

Alotto, P.G., Baumgartner, U., Freschi, F., Jaindl, M., Kostinger, A., Magele, Ch., Renhart, W. and Repetto, M. (1996), "SMES optimization benchmark: TEAM workshop problem 22", available at: www.compumag.org/jsite/images/stories/TEAM/problem22.pdf

Bandler, J.W., Biernacki, R.M., Chen, S.H., Grobelny, P.A. and Hemmers, R.H. (1994), "Space mapping technique for electromagnetic optimization", IEEE Transactions on Microwave Theory and Techniques, Vol. 42 No. 12, pp. 2536-2544.

Ben Ayed, R. and Brisset, S. (2012), "Multidisciplinary optimization formulations benefits on space mapping techniques", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 31 No. 3, pp. 945-957.

Crevecoeur, G., Sergeant, P., Dupré, L. and Van de Walle, R. (2009), "Optimization of an octangular double-layered shield using multiple forward models", IEEE Transactions on Magnetics, Vol. 45 No. 3, pp. 1586-1589.

Echeverria, D. (2007), "Multi-level optimization: space mapping and manifold mapping", PhD thesis, Universiteit van Amsterdam, Amsterdam, available at: http://pangea.stanford.edu/Becheverr/smpapers/tesis00.pdf (accessed March 29, 2007).


Echeverria, D., Lahaye, D., Encica, L. and Hemker, P.W. (2005), "Optimization in electromagnetics with the space mapping technique", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 24 No. 3, pp. 952-966.

Echeverria, D., Lahaye, D., Encica, L., Lomonova, E.A., Hemker, P.W. and Vandenput, A.J.A. (2006), "Manifold-mapping optimization applied to linear actuator design", IEEE Transactions on Magnetics, Vol. 42 No. 4, pp. 1183-1186.

Encica, L., Paulides, J.J.H., Lomonova, E.A. and Vandenput, A.J.A. (2008a), "Aggressive output space-mapping optimization for electromagnetic actuators", IEEE Transactions on Magnetics, Vol. 44 No. 6, pp. 1106-1109.

Encica, L., Paulides, J.J.H., Lomonova, E.A. and Vandenput, A.J.A. (2008b), "Electromagnetic and thermal design of a linear actuator using output polynomial space mapping", IEEE Transactions on Industry Applications, Vol. 44 No. 2, pp. 534-542.

Koziel, S., Bandler, J.W. and Madsen, K. (2006), "Space-mapping-based interpolation for engineering optimization", IEEE Transactions on Microwave Theory and Techniques, Vol. 54 No. 6, pp. 2410-2421.

Koziel, S., Bandler, J.W. and Madsen, K. (2008), "Quality assessment of coarse models and surrogates for space mapping optimization", Optimization and Engineering, Vol. 9 No. 4, pp. 375-391.

Neittaanmäki, P., Rudnicki, M. and Savini, A. (1996), Inverse Problems and Optimal Design in Electricity and Magnetism, Oxford University Press, New York, NY, pp. 325-348.

Rezzoug, A., Caron, J.P. and Sargos, M. (1992), "Analytical calculations of flux induction and forces of thick coils with finite length", IEEE Transactions on Magnetics, Vol. 28 No. 5, pp. 2250-2252.

Tran, T.V., Brisset, S. and Brochet, P. (2009), "A new efficient method for global discrete multilevel optimization combining branch-and-bound and space mapping", IEEE Transactions on Magnetics, Vol. 45 No. 3, pp. 1590-1593.

Tran, T.V., Moussouni, F., Brisset, S. and Brochet, P. (2010), "Adapted output space-mapping technique for a bi-objective optimization", IEEE Transactions on Magnetics, Vol. 46 No. 8, pp. 2990-2993.

Wilson, M.N. (1983), Superconducting Magnets, Oxford Science Publications, Clarendon Press, Oxford.

Corresponding author
Dr Stéphane Brisset can be contacted at: [email protected]


High-speed functionality optimization of five-phase PM machine using third harmonic current


Jinlin Gong, School of Electrical Engineering, Shandong University, Jinan, China and Arts et Métiers ParisTech-L2EP, Lille, France
Bassel Aslan, Arts et Métiers ParisTech-L2EP, Lille, France
Frédéric Gillon, ECLille, L2EP, Villeneuve d'Ascq, France, and
Eric Semail, Arts et Métiers ParisTech-L2EP, Lille, France

Abstract
Purpose – The purpose of this paper is to apply surrogate-assisted optimization techniques in order to improve the performance of a five-phase permanent magnet machine in the context of a complex model requiring significant computation time.
Design/methodology/approach – An optimal control of four independent currents is proposed in order to minimize the total losses while respecting the operating constraints. Moreover, some geometrical parameters are added to the optimization process, allowing a co-design between control and dimensioning.
Findings – The optimization results prove the remarkable effect on iron and magnet losses of using the degree of freedom offered by a five-phase structure. The performance of the five-phase machine with concentrated windings is notably improved at high speed (16,000 rpm).
Originality/value – The effectiveness of the method solves the challenge of taking the eddy-current losses in magnets and iron into account inside the control strategy. In fact, magnet losses are a critical point for protecting the machine from demagnetization in the flux-weakening region.
Keywords Field weakening, Five-phase PM machine, Surrogate-assisted optimization
Paper type Research paper

Introduction
Multiphase drives are used in different areas, such as electrical ship propulsion (Sadeghi et al., 2011), aerospace (Huang et al., 2012) and hybrid-electric vehicles (Parsa and Toliyat, 2007). Compared to traditional three-phase drives, they present specific advantages: tolerance to faults, especially those coming from power electronics devices; lower pulsating torque; and splitting the power across more inverter legs, especially for very high power drives or for 10-15 kW very low voltage (<60 V) drives in the automotive sector. Moreover, in comparison with three-phase drives, supplementary degrees of freedom

This project was supported by the Laboratory of Electrical Engineering and Power Electronics (L2EP), France. It is a cooperation project between the control team and the optimization team of the laboratory.

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 879-893 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-10-2012-0220


that are favorable to optimization appear, concerning the current control (Levi, 2008). In this paper, a five-phase machine designed for automotive applications (Lu et al., 2012) is considered. This machine presents fractional-slot concentrated windings because of their high torque/volume ratio, high efficiency and simple winding structure (El-Refaie, 2010). However, high rotor losses (in magnets and iron) are one of the undesired parasitic effects that can appear with this kind of machine winding because of the high level of space harmonics, whose impact is particularly significant at high speed in the flux-weakening zone (Han et al., 2007; Yamazaki and Fukushima, 2011; Chong et al., 2011). These rotor losses reduce the efficiency of the machine and, furthermore, they can cause magnet heating, which increases the risk of magnet demagnetization, leading finally to full breakdown. Research has been done to develop an optimal flux-weakening strategy (choosing the optimal current vector) for three-phase permanent magnet (PM) machines (Chen and Chin, 2003; Sue and Pan, 2008; Sul, 2011), and a few works exist for multiphase machines (Parsa et al., 2005; Sun et al., 2010; Lu et al., 2011; Xuelei et al., 2011). In the cited research, copper losses are always the first criterion to be minimized, while iron and magnet losses are not taken into account. The reason for this absence is the lack of an accurate analytical model for the calculation of the eddy-current losses and the necessity of a finite element model to calculate them. As a consequence, the corresponding optimizations are only reliable at low speeds and with classical integral-slot winding machines whose magnetomotive force is clean of harmful parasitic harmonics. In Chedot et al. (2007), an optimization is done for a three-phase machine taking into account copper losses and iron losses using FEM software. The present paper concerns the design of a five-phase wye-coupled high-speed drive.
For a five-phase machine, it is possible to produce torque with only the first harmonic of current; only two degrees of freedom are then necessary. Two degrees of freedom then remain, since four independent currents can be imposed. The two remaining degrees of freedom can be used to optimize either the power required from the Voltage Source Inverter (VSI) or the losses in the machine. It can be shown that, in the case of no third-harmonic electromotive force, the injection of a third-harmonic current has no impact on the torque but has an effect on the machine losses or on the power delivered by the VSI. In this paper, the objective is to maximize the efficiency of a low-voltage five-phase machine with concentrated windings, considering iron and also magnet losses, in which the fundamental current, the third-harmonic current and two geometrical parameters are taken into account simultaneously. The applied optimization procedure protects the machine from full breakdown by adding a constraint on the total rotor losses. Since classical empirical analytical formulas for losses in magnets or iron are not reliable when several current harmonics are injected, it is necessary to use FEM (www.ansys.com/Products/Simulation+Technology/Electromagnetics/Electromechanical+Design/ANSYS+Maxwell) for the calculation of the total losses. However, despite the evolution of computer performance, direct optimization with FEM is still complex and time-costly. The surrogate-model-assisted optimization approach consists of replacing the FEM by a fast analytical model (Giurgea et al., 2007). Two ways of employing the surrogate model are applied in this paper: the response surface methodology (RSM) approach and an optimization technique, the efficient global optimization (EGO) algorithm. However, due to the inaccuracy of the surrogate model, the solution found is not always accurate enough.
The EGO algorithm, one of the surrogate-assisted algorithms, has been used successfully in the field of electromagnetic design optimization (Gong et al., 2011;

Berbecea et al., 2010). It uses the FEM in conjunction with a progressively built surrogate model whose accuracy increases along the search for the optimal design (Schonlau et al., 1998). In this way, EGO benefits from both the rapidity of the surrogate model and the accuracy of the FEM. In this paper, the work is structured in three parts. In the first part, the components and the FEM of the five-phase PM machine are introduced and the optimization problem is presented. The effects of the geometry and the control strategy are combined in a common goal in order to improve the performance of the drive. In the second part, the optimization tools used in this paper, the RSM strategy and the EGO algorithm, are introduced briefly. In the third part, the optimization process is divided into three parts in order to illustrate progressively the special characteristics of the five-phase PM machine in flux-weakening mode.

Component and model
Despite the advances in computing power over the last decade, the expense of running analysis codes remains non-negligible. Single evaluations of finite element analysis codes can take from a few minutes up to hours, even days, depending on the desired type of simulation. Complex numerical models are often non-robust, the analysis and even the mesh generation of such models failing for certain design configurations. Moreover, numerical models such as FEM are confronted with numerical noise, which affects or alters the convergence of the optimization algorithm, especially in the case of gradient-based algorithms. In a word, the direct integration of FEM within an optimal design process is difficult. On the contrary, surrogate models, mostly interpolating models, are noise-free. The main purpose of the use of surrogate models within an optimization process consists, through a significant reduction of the overall time of the optimization process, in avoiding heavy simulations with long computation times.
But due to the inaccuracy of the surrogate models, the optimal solution found is not exact and, more importantly, it might be infeasible, violating the constraints. In order to benefit from both the accuracy offered by the FEM and the fast prediction of a surrogate model, different optimization strategies assisted by surrogate models have been proposed (Wang and Shan, 2007). In this paper, two surrogate-assisted optimization strategies are used, RSM and EGO, depending on the complexity of the optimization problem.

Model
The global objective of a machine is to produce torque with good efficiency in a required speed range. In the case of a five-phase PM machine, the electromagnetic torque can be computed by:

T_em = K (E1 I1 cos φ1 + E3 I3 cos φ3) / Ω   (1)

where Ek is the k-th harmonic of the electromotive force; Ik the k-th harmonic of current; K a constant linked to the number of phases; Ω the speed; and φk the phase shift between the k-th harmonics of the electromotive force and the current. Thus, if the currents are controlled, there are four degrees of freedom to define a torque (I1, φ1, I3, φ3). The study is based on a FEM (www.ansys.com/Products/Simulation+Technology/Electromagnetics/Electromechanical+Design/ANSYS+Maxwell), which allows computing the various kinds of losses developing in the machine. The losses in the iron


and in the magnets are directly obtained by the FEM. This computation is particularly important at high speed, when the magnetic flux densities vary at high frequencies. Figure 1 shows the studied global model with its different inputs and outputs; certain outputs are deduced directly from the FEM, while others are calculated using analytical equations.

Figure 1. Inputs and outputs of the studied global model of the PM 5-phase machine (inputs: I1, φ1, I3, φ3, R (rotor radius), W (tooth width) and speed; outputs of the finite element magnetic model: stator Joule, iron and total losses; rotor iron, magnet and total losses; maximum phase voltage; torque; mechanical power)

The first four inputs of the model represent the two supplying current harmonics, of first and third order, while the two main

dimensions of the geometrical model structure can be modified using the last two inputs. The control methods of efficiency maximization in electrical machines consider classically only Joule losses. This is due to the necessity to have analytical equations in real time calculations (Chen and Chin, 2003; Sue and Pan, 2008). The problem is that, at high-speed iron and magnet losses are no more negligible comparing to Joule losses. Moreover, certain new concentrated winding topologies generate high level of iron and magnet losses which makes the classical control methods much less efficient. Thanks to the FEM, a high-speed optimum control strategy will be obtained looking for a compromise between the different losses with the respect of local constraints linked to the heating. Generally, in common machine structures, the third harmonic of the electromotive force is small comparing to the fundamental one. For this reason the classical method of efficiency maximization put the third current harmonic equal to zero to avoid more Joule losses since this harmonic is not able to produce torque. This paper investigates the effect of the third current harmonic on the global efficiency at high speed, taking into account not only circuits copper losses but also iron and magnet losses. Furthermore, this study is done considering a limited DC bus voltage (where a phase voltage limitation is imposed) which allows checking the influence of the third current harmonic on flux weakening operation. Optimization problem The objective of this paper is to minimize the total losses in order to improve the machine efficiency by taking into account the combination of the control variables and the geometrical ones. Five-phase structure adds a freedom degree to the control strategy of synchronous machine by allowing injecting the third harmonic of current. 
This property increases the number of input parameters of the flux-weakening strategy from two in the case of a three-phase machine (fundamental current amplitude and phase, (I1, φ1)) to four in the case of a five-phase machine (I1, φ1, I3, φ3) (Sadeghi et al., 2011). The added parameters can have a remarkable effect on the iron and magnet losses of concentrated-winding structures, especially with the influence of iron non-linearity. Additionally, two dimension variables are added in order to take the machine structure into account in the optimization: the rotor radius and the stator tooth width (tooth width + slot width = constant; see Figure 1). Both added parameters have a remarkable effect on the objective function. Increasing the rotor radius decreases the height of the stator slots, causing more copper losses (smaller copper section), and vice versa. Increasing the stator slot width (by decreasing the tooth width) enlarges the copper section, leading to lower copper losses but to higher iron saturation. Furthermore, the magnetic structure of the machine depends widely on the two optimized dimensions, which gives these parameters an important influence on the machine torque and on the eddy-current losses. There are four inequality constraints and one equality constraint. The rotational speed of the machine is fixed at 16,000 rpm, which is the maximal allowed speed in the flux-weakening zone. The motor power should be more than 10 kW. In order to avoid demagnetization of the magnets, the rotor losses due to eddy currents in the magnets and iron should be below 400 W. The stator losses, consisting of copper and iron losses, should be below 800 W. The voltage per phase should be below 70 V, due to the limit of the DC bus voltage supply.
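The inequality constraints just listed can be captured in a few lines. The sketch below (Python, not the authors' code) checks a candidate design's FEM outputs against the stated limits; the example values are the FEM-evaluated EGO solution reported later in Table I.

```python
# Limits from the problem statement; the candidate values below are the
# FEM-evaluated EGO solution reported in Table I (illustrative use only).
LIMITS = {
    "power_min_W": 10_000.0,       # Power >= 10 kW
    "rotor_losses_max_W": 400.0,   # eddy-current losses in magnets + rotor iron
    "stator_losses_max_W": 800.0,  # copper + iron losses in the stator
    "phase_voltage_max_V": 70.0,   # DC bus voltage limitation
}

def is_feasible(out, limits=LIMITS):
    """True if a candidate design's outputs respect all inequality constraints."""
    return (out["power_W"] >= limits["power_min_W"]
            and out["rotor_losses_W"] <= limits["rotor_losses_max_W"]
            and out["stator_losses_W"] <= limits["stator_losses_max_W"]
            and out["phase_voltage_V"] <= limits["phase_voltage_max_V"])

candidate = {"power_W": 10_020.0, "rotor_losses_W": 379.7,
             "stator_losses_W": 798.7, "phase_voltage_V": 72.5}
print(is_feasible(candidate))  # False: the 72.5 V phase voltage exceeds 70 V
```

Such a check is what "constraint violation within acceptable tolerance" refers to later in the EGO loop, possibly with relaxed limits.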

The optimization problem is presented in the following equation:

min_{I1, φ1, I3, φ3, R, W}  Total Losses
s.t.  Speed = 16,000 rpm;  Power ≥ 10 kW;  Losses_rotor ≤ 400 W;
      Losses_stator ≤ 800 W;  max|U_phase| ≤ 70 V        (2)
With I1 ∈ [0, 230] (A); φ1 ∈ [−85, −60] (°); I3 ∈ [0, 25] (A); |φ3| ∈ [0, 90] (°); R ∈ [35, 60] (mm); W ∈ [3, 13] (mm), where Speed is the maximum rotational speed; Power the power generated by the machine at maximum speed; Losses_rotor the losses in the rotor (iron + magnets); Losses_stator the losses in the stator (iron + windings); U_phase the needed phase voltage; and Total Losses = Losses_stator + Losses_rotor. The chosen ranges for the phases φ1 and φ3 reflect the fact that the machine operates in flux-weakening mode; R is the rotor radius and W the stator tooth width (see Figure 1 for the machine shape).

Optimization tools
Two methodologies based on surrogate models are used to solve the constrained optimization problem. With both methods, the iterative process is carried out with a human in the loop: the FEM and the optimization tools are not linked by programming, which allows a study at each iteration.

RSM
To perform an optimization quickly and easily on a complex model that is not necessarily well linked with the optimization tool, a convenient approach is a methodology based on response surfaces (RS), also called surrogate models. The nature of the RS can be manifold (Kreuawan, 2008). In this work, Kriging models are used for their good performance when the number of samples is low. A surrogate model must be built for the objective function (Total Losses), but also for each constraint of the optimization problem (Power, Losses_rotor, Losses_stator, U_phase). The optimization process is performed through these surrogate models, which are fast to evaluate and thus greatly reduce the computation time. However, due to the inaccuracy of the surrogate models, the solution found is not always sufficiently accurate with respect to the objective function or the constraints. Therefore, it is wise to enhance the accuracy of the model using further function calls (infill or update points): new samples coming from the fine model (FEM) are added.
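For illustration, an ordinary-Kriging-style predictor with a Gaussian correlation function (the form given in the Appendix, Equation (8)) can be sketched as follows. The correlation parameter `theta` and the test data are assumptions; this is a minimal sketch, not the toolchain used by the authors.

```python
import numpy as np

def gauss_corr(X1, X2, theta=1.0):
    """Gaussian correlation between two sample sets (rows are points)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-theta * d2)

def kriging_fit(X, y, theta=1.0, nugget=1e-10):
    """Precompute the terms of y^(x) = B^ + r' R^-1 (y - f B^) (cf. Appendix)."""
    R = gauss_corr(X, X, theta) + nugget * np.eye(len(X))
    f = np.ones(len(X))
    Rinv = np.linalg.inv(R)
    B_hat = (f @ Rinv @ y) / (f @ Rinv @ f)   # generalized least-squares mean
    return {"X": X, "theta": theta, "Rinv": Rinv, "resid": y - f * B_hat, "B": B_hat}

def kriging_predict(model, Xnew):
    r = gauss_corr(Xnew, model["X"], model["theta"])
    return model["B"] + r @ model["Rinv"] @ model["resid"]

# A Kriging model interpolates: at the sampled points it returns the data exactly.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([1.0, 0.2, 0.9])
m = kriging_fit(X, y)
print(np.allclose(kriging_predict(m, X), y, atol=1e-6))  # True
```

The interpolating (noise-free) behavior shown here is exactly why such models are attractive as stand-ins for noisy FEM evaluations.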
A more or less sophisticated method can be used to add one new sample, or a set of new samples, in order to increase the accuracy (Goupy, 1999). However, with this technique, finding the global optimum is not guaranteed if no exploration mechanism is applied; if the optimization problem is smooth, the technique is effective.

EGO
The EGO algorithm is a surrogate-based optimization algorithm which uses Kriging models as surrogates for the fine model in order to guide the search for the optimal solution. At each iteration, an improvement of the solution is sought through an internal optimization loop based on the surrogate models. This optimization consists in the maximization of an Infill Criterion (IC) whose expression is based on the

Kriging model prediction ŷ and an estimate of the prediction error ŝ (Kreuawan, 2008). The considered IC naturally balances the exploration of the design space, which improves the quality of the Kriging surrogate model, and the exploitation of promising regions of the design space in the search for improving solutions. In this way, the number of fine model (FEM) calls is drastically reduced, and the optimal trade-off solutions are obtained at an affordable computational cost. The role of the surrogate model within the algorithm is to guide the search for improving solutions. The computational flow diagram of the EGO algorithm can be found in Berbecea et al. (2010); it is described in eight steps as follows:

• Step 1. Initialization of the sampling plan: select the initial designs of the sampling plan using a Latin Hypercube strategy (generally a good choice for this kind of surrogate model).

• Step 2. Fine model evaluation: evaluate the designs of the sampling plan with the fine model.

• Step 3. Kriging model construction: build the Kriging models (see Appendix) for each objective and constraint function.

• Step 4. Improvement point search: find the infill point using the IC, expressed in the following equation:

max_x  E[I(x)] · ∏ P_exp(x)    subject to g_inexp(x) ≤ 0        (3)

where E[I(x)] is the Expected Improvement (EI), which measures how far the estimated response is expected to fall below the current minimal objective function value; P_exp(x) is the cumulative distribution function giving the probability of feasibility of an expensive constraint; and g_inexp denotes a constraint that is inexpensive in terms of evaluation time. Details on the IC can be found in Kreuawan (2008) and Berbecea et al. (2012).

• Step 5. Infill point fine model evaluation: evaluate the infill point determined at the previous step using the fine model (FEM).

• Step 6. Best objective value: if the objective at the infill point is lower than the best objective so far and the constraint violation is within acceptable tolerance, set this point as the new best point.

• Step 7. Sampled data addition: add the infill point to the sampled data set.

• Step 8. Stop criterion verification: if the maximum iteration number is reached, the algorithm ends; otherwise, return to Step 3 and repeat.
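The expected improvement that drives Step 4 can be computed directly from the Kriging prediction ŷ and error estimate ŝ. Below is a minimal sketch of the standard EI formula using only the standard library; it is an illustration, not the authors' implementation.

```python
import math

def expected_improvement(y_hat, s_hat, f_min):
    """EI = (f_min - y^)*Phi(u) + s^*phi(u), u = (f_min - y^)/s^; 0 when s^ = 0."""
    if s_hat <= 0.0:
        return 0.0
    u = (f_min - y_hat) / s_hat
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))          # normal CDF
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)   # normal PDF
    return (f_min - y_hat) * Phi + s_hat * phi

# An already-sampled point (zero predicted error) brings no expected improvement:
print(expected_improvement(y_hat=1.0, s_hat=0.0, f_min=1.0))  # 0.0
# A large predicted error keeps some expected improvement even when y^ > f_min:
print(expected_improvement(y_hat=1.2, s_hat=0.5, f_min=1.0) > 0.0)  # True
```

The two printed cases anticipate the exploitation/exploration discussion that follows.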

The EI criterion was first used by Schonlau et al. (1998). It quantifies the amount of improvement expected to be attained by sampling at a certain point. The mathematical formulation of the EI criterion is given in the following equation:

EI = E[I(x)] = (f_min − ŷ) Φ((f_min − ŷ)/ŝ) + ŝ φ((f_min − ŷ)/ŝ)  if ŝ > 0,  and EI = 0 if ŝ = 0        (4)

where φ and Φ represent the normal probability density function and the normal cumulative distribution function, respectively. Within the expression of the EI we can distinguish two terms, corresponding respectively to the exploitation of the surrogate models


(first term) and to the exploration of the design space (second term). When the predicted error ŝ is zero (i.e. at an already sampled point), the EI becomes null, meaning that no improvement is expected at this point. If the predicted error ŝ is non-zero but small and the predicted value ŷ is much smaller than the current best known value f_min, then the first term of expression (4) becomes predominant: the search is performed locally, exploiting the good accuracy of the surrogate model prediction. Otherwise, if the predicted error ŝ is large, the second term of (4) takes over, exploring areas of the design space where the surrogate model is highly inaccurate. Thus, the optimization algorithm is applied not directly to the surrogate model but to the EI, which provides two complementary mechanisms (exploitation/exploration) allowing a more robust convergence. The use of the surrogate model makes it possible to greatly reduce the number of evaluations of the fine model (here the FEM used to compute the losses).

Design process based on optimization
Three flux-weakening optimization problems are formulated gradually, according to the number of design variables. The first problem, presented in Equation (5), takes into account only the fundamental current. The second one, presented in Equation (6), takes into account both the fundamental and the third harmonic current, according to the special characteristics of the five-phase machine. The optimization process starts with the selection of an initial sampling plan of 50 points. The initial sampling plan is evaluated using the fine model, and the objective and constraint function values are obtained. Next, for each objective and constraint function, a Kriging surrogate model is fitted over the initial sampling plan. A first optimization test is performed directly on the surrogate models.

According to the optimization results for the two problems, two optimization strategies are used: an exploration surrogate model strategy (EGO) and an exploitation one (RSM). The final optimization results of the two problems are compared. This approach clearly shows the advantage, for the control of multiphase machines, of using all the available degrees of freedom. The third problem, which takes into account not only the control variables (the fundamental and third harmonic currents) but also the two dimension variables, is presented in Equation (2). The final comparison between the three problems shows the advantages of this optimization approach for the five-phase PM machine.

Initial control solution
The optimization problem considered in this case involves only the fundamental current. There are two design variables: (I1, φ1). Equation (5) presents the optimization problem. This problem is the simplest of the three; its purpose is to check the finite element model of the PM machine, and it also allows finding the optimal solutions under constraints:

min_{I1, φ1; I3 = 0}  Total Losses
s.t.  Speed = 16,000 rpm;  Power ≥ 10 kW;  Losses_rotor ≤ 400 W;
      Losses_stator ≤ 800 W;  max|U_phase| ≤ 70 V        (5)

with I1 ∈ [0, 230] (A); φ1 ∈ [−85, −60] (°), and the meaning of the variables as defined previously.

An initial set of 25 designs was chosen using a full factorial design. These designs were then evaluated in parallel, on the available computer cores, by the FEM. The Kriging models for the objective and each constraint function were built individually using the initial points. Figure 2 presents the RS of the total losses function for this optimization problem. The initial set of 25 designs is marked with black dots. The optimal solution of this model (green triangle in Figure 2) was sought using the Sequential Quadratic Programming (SQP) algorithm, and then validated using the FEM. According to the expert, the optimization result corresponds well to the experiment. Another 25 points, surrounded by the red dotted rectangle in Figure 2 and located around the optimal one, were selected and evaluated by the FEM. New Kriging models for the objective and constraint functions were then fitted with the 50 points; the optimal solution with the new models is marked in Figure 2 by the blue square. The model of the objective function presented in Figure 2 is complex. Therefore, the exploration surrogate model strategy is employed: the EGO algorithm is used on the Kriging model with the 50 initial points. Instead of optimizing directly on the surrogate model, the EGO algorithm maximizes the EI in order to find the infill point that improves the model in its most uncertain zone. Once a point is found, it is evaluated with the FEM and added to the set of sampled data in order to build new Kriging models on the increased data set. The model accuracy increases progressively with the growth of the sampled data. The algorithm stops when the stop criterion is satisfied, returning the final optimal solution, which is validated by the FEM. Considering the time-consuming FEM, a total budget of 50 fine model evaluations is imposed. The final solution obtained with the EGO algorithm is marked by the red star.
With the set of 25 points, no solution was found satisfying all the constraints. The voltage constraint was therefore relaxed to 100 V instead of 70 V, which allowed finding a solution (I1, φ1) = (114.16, 80.3) that satisfies the constraints with acceptable tolerance (9,886 W for the power instead of 10,000 W). After adding 25 points around the initial optimal solution, a new optimization with these 50 points was done with the voltage constraint of 70 V, and an optimal solution respecting the constraints was found: (I1, φ1) = (144.3, 82.2). Table I presents a comparison: for each optimal value (I1, φ1) found by the Kriging model for a set of 25 or 50 points, the values of power, losses and voltage are given, first as predicted by the Kriging model and second as computed by the FEM.
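The EGO procedure used here can be condensed into a short loop. In the sketch below a cheap analytic function stands in for the FEM, EI is maximized by random candidate search, and all numerical settings (test function, correlation parameter, budget) are illustrative assumptions rather than the authors' setup.

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(0)

def fine_model(x):                        # stand-in for the expensive FEM
    return (x - 0.3) ** 2 + 0.5 * np.sin(5 * x)

def corr(a, b, theta=10.0):               # Gaussian correlation matrix
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

def expected_improvement(y_hat, s_hat, f_min):
    if s_hat <= 1e-12:
        return 0.0
    u = (f_min - y_hat) / s_hat
    Phi = 0.5 * (1 + erf(u / sqrt(2)))
    phi = exp(-0.5 * u * u) / sqrt(2 * pi)
    return (f_min - y_hat) * Phi + s_hat * phi

X = rng.uniform(0, 1, 5)                  # Step 1: initial sampling plan
y = fine_model(X)                         # Step 2: fine-model evaluations
for _ in range(20):                       # budget of infill evaluations
    R = corr(X, X) + 1e-6 * np.eye(len(X))    # Step 3: Kriging model
    Rinv = np.linalg.inv(R)
    ones = np.ones(len(X))
    B = (ones @ Rinv @ y) / (ones @ Rinv @ ones)
    best_c, best_ei = 0.0, -1.0
    for c in rng.uniform(0, 1, 200):      # Step 4: maximize EI over candidates
        r = corr(np.array([c]), X)[0]
        y_hat = B + r @ Rinv @ (y - ones * B)
        s2 = max(1.0 - r @ Rinv @ r, 0.0)     # simplified prediction variance
        e = expected_improvement(y_hat, sqrt(s2), y.min())
        if e > best_ei:
            best_c, best_ei = c, e
    X = np.append(X, best_c)              # Steps 5-7: evaluate and add infill point
    y = np.append(y, fine_model(best_c))
print(round(float(y.min()), 3))           # best objective value found
```

The key feature, as in the text, is that each fine-model call is spent on the point the EI deems most promising, rather than on a fixed grid.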


Figure 2. Kriging model and optimization results (total losses (W) vs I1 (A) and φ1 (°); initial designs, optimum of the first problem with 25 points, optimum with 50 points, and EGO result)

Table I. Optimal solution with the first optimization problem (per case: Kriging prediction / FEM evaluation / relative error)

| Case | I1 (A) | φ1 (°) | Power (W) | Losses_rotor (W) | Losses_stator (W) | U_phase (V) | Total losses (W) |
| 25 points | 114.16 | 80.3 | 9,886 / 7,456 / 32.6% | 384.8 / 288.6 / 33.3% | 657.9 / 641.6 / 2.5% | 100 / 109.6 / 8.8% | 956.9 / 930.2 / 2.9% |
| 50 points | 144.30 | 82.2 | 9,886 / 6,551 / 50.9% | 394.9 / 376.3 / 4.9% | 783.3 / 798.7 / 1.9% | 70 / 71.1 / 1.5% | 1,174.3 / 1,149.2 / 2.2% |
| Final solution (with EGO) | 142.90 | 75.9 | 10,020 | 379.7 | 798.7 | 72.5 | 1,178.5 |

Relative errors are provided in order to compare the results obtained with the Kriging model to those calculated with the FEM, the FEM results being taken as reference. With the set of 25 points, the Kriging model leads to an error of more than 30 percent for the power and the rotor losses. With the set of 50 points, which allows satisfying the voltage constraint of 70 V, the error is small for all variables except the power (50.9 percent). With EGO, the solution (I1, φ1) = (142.9, 75.9) satisfies all the constraints if a tolerance of 2.5 V (under 5 percent) is accepted on the voltage. Using the RSM, none of the optimal solutions found verifies the constraints when re-evaluated with the FEM; a feasible solution can, however, be found with the EGO algorithm. Analysis of the results of the optimization process shows that the voltage constraint is the most binding. As a consequence, it was decided to explore the impact of injecting third harmonic currents in order to relieve the pressure due to the DC bus voltage. In the following part, the optimization of a second problem is presented: the constraints and the objective are the same, but there are four design variables instead of two.

Control with the third harmonic
Compared with the initial solution, with the third harmonic current the surrogate model is much smoother and the voltage constraint is easier to satisfy; the optimization problem is therefore less constrained, less saturated and thus much easier, and the RSM approach is sufficient to find the optimal results. The optimization problem with four design variables is presented in Equation (6). Problems (5) and (6) have the same objective and constraints as problem (2):

min_{I1, φ1, I3, φ3}  Total Losses
s.t.  Speed = 16,000 rpm;  Power ≥ 10 kW;  Losses_rotor ≤ 400 W;
      Losses_stator ≤ 800 W;  max|U_phase| ≤ 70 V        (6)

with I1 ∈ [0, 230] (A); φ1 ∈ [−85, −60] (°); I3 ∈ [0, 25] (A); |φ3| ∈ [0, 90] (°), and the meaning of the variables as defined previously.

As in the first optimization problem, a first set of 25 points is selected. The Kriging model of the objective function with the 25 initial designs (black points) is presented in Figure 3. The Kriging model with four design variables is less complicated than the previous one with two design variables. The first optimal solution of this model (green triangle in Figure 3) is sought using the SQP algorithm with a multi-start strategy. The solution validated by the FEM is marked with a red filled star. The two solutions (Kriging model and FEM) are very close, so the Kriging model can be considered sufficiently accurate. The exploitation surrogate model optimization strategy is hence chosen for this problem: the infill points at the optimum predicted by the surrogate model are progressively added to the sampling plan. Table II presents the improvement of the optimization process over the iterations, comparing at each step the optimal solution predicted by the Kriging models with its FEM evaluation. All the optimal solutions respect the constraints on the surrogate models, but the FEM results do not satisfy them until the run with 45 points. The first line presents the results with 25 points, where both the power and the voltage constraints are violated. After adding ten points to the sampling plan, only the voltage constraint (≤70 V) of the FEM evaluation is violated.
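The exploitation strategy described here (refit the surrogate, infill at its predicted optimum, evaluate with the fine model, repeat) can be sketched with a simple quadratic response surface; the stand-in objective, seed and iteration counts are illustrative assumptions, not the machine model.

```python
import numpy as np

rng = np.random.default_rng(1)

def fine_model(x):                         # stand-in for the FEM evaluation
    return (x - 0.7) ** 2 + 0.05 * np.cos(8 * x)

X = rng.uniform(0, 1, 6)                   # initial sampling plan
y = fine_model(X)
grid = np.linspace(0, 1, 401)              # bounds of the design variable
for _ in range(5):                         # exploitation iterations
    coeffs = np.polyfit(X, y, deg=2)       # quadratic response surface fit
    x_star = grid[np.argmin(np.polyval(coeffs, grid))]  # RS-predicted optimum
    X = np.append(X, x_star)               # infill: evaluate it with the fine model
    y = np.append(y, fine_model(x_star))
print(round(float(X[np.argmin(y)]), 2))    # best sampled design found
```

Because every infill lands at the surrogate's predicted optimum, the sample set densifies around one basin; this is exactly why the strategy suits smooth, weakly multimodal problems but needs the EGO-style exploration term otherwise.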


Figure 3. Kriging model with 25 samples and optimization results with four design variables (total losses (W) vs I1 (A) and φ1 (°); initial designs, optimum with 25 points, validation result with FEM)

Table II. Optimal solution with the second optimization problem (per row: Kriging prediction / FEM evaluation / relative error)

| Points | I1 (A) | φ1 (°) | I3 (A) | φ3 (°) | Power (W) | Losses_rotor (W) | Losses_stator (W) | U_phase (V) | Total losses (W) |
| 25 | 126.7 | 78.4 | 15.02 | 30 | 9,886 / 9,684 / 2.1% | 202.0 / 201.2 / 0.4% | 643.6 / 677.9 / 5.1% | 70 / 72.5 / 3.4% | 845.6 / 879.0 / 3.8% |
| 35 | 128.4 | 78.4 | 13.93 | 25.7 | 9,886 / 10,941 / 9.6% | 207.9 / 209.5 / 0.8% | 648.6 / 662.6 / 2.1% | 70 / 72.1 / 2.9% | 870.3 / 872.2 / 0.2% |
| 45 (final) | 129.2 | 79.1 | 13.48 | 19.5 | 9,953 / 9,953 | 213.3 / 213.3 | 666.4 / 666.5 | 69.93 / 69.91 | 879.6 / 879.8 |


The exploitation surrogate model optimization allows finding a feasible solution for the four design variables. The two optimization problems are compared in this part: Table III presents the comparison between the optimal solutions. By injecting the third harmonic current, the voltage constraint is respected while the mechanical torque is maintained; furthermore, the total losses in the machine decrease by 25 percent. This comparison illustrates well the advantage, for five-phase machines, of injecting the third harmonic component. Two optimization strategies have been employed, one for each problem: exploration surrogate model optimization and exploitation. The choice of the most appropriate strategy depends on the model complexity: if the model to be approximated is smooth and not complex, the exploitation strategy (RS) can be employed; otherwise, the exploration one (EGO) should be used.

Adding a geometrical degree of freedom
The flux-weakening control strategy was established in the previous part; in this part, the shape design optimization is presented. The objective is to optimize the design of the five-phase high-speed machine. Compared with the four design variable problem, two dimension variables are added in order to take the machine structure into account: the rotor radius and the stator tooth width (tooth width + slot width = constant; see Figure 1). The same objective and constraints are considered as in the two previous problems; the optimization problem is presented in Equation (2). As the number of design variables increases, it becomes difficult to obtain an accurate surrogate model. There are two ways to enhance the accuracy of a surrogate model: increasing the number of sampling points and using an appropriate sampling strategy. In our case, an initial set of 70 points generated with a Latin Hypercube strategy is selected for the six design variable problem.
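A Latin Hypercube plan such as the 70-point, six-variable one used here can be generated with the standard stratified construction. The sketch below assumes the variable bounds given in the problem statement (with the flux-weakening sign convention for φ1); it is an illustration, not the authors' sampling code.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube plan: each variable's range is split into n_samples
    strata, and each stratum is sampled exactly once (shuffled per column)."""
    rng = rng or np.random.default_rng()
    bounds = np.asarray(bounds, dtype=float)           # shape (n_vars, 2)
    # Independent permutation of stratum indices per column:
    perms = np.argsort(rng.random((n_samples, len(bounds))), axis=0)
    u = (perms + rng.uniform(size=perms.shape)) / n_samples   # u in [0, 1)
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Bounds of the six design variables (ranges from the problem statement):
bounds = [(0, 230), (-85, -60), (0, 25), (0, 90), (35, 60), (3, 13)]
plan = latin_hypercube(70, bounds, rng=np.random.default_rng(42))
print(plan.shape)  # (70, 6)
```

Unlike a full factorial design, the number of points here is independent of the number of variables, which is what makes the strategy attractive as dimensionality grows.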
The EGO algorithm is used in order to reach a global optimum and to obtain an accurate surrogate model around it. A total budget of 200 fine model evaluations is imposed during the EGO optimization process. Table IV presents the comparison between the four and six design variable problems; the initial dimension parameters are used for the problem with four variables.

Table III. Comparison between the first two optimization problems

First problem 142.9 75.9 Second problem 129.2 79.1

I1 (A)

Table IV. Optimal solution comparison

j1 (1)

Four variables Six variables

j1 (1)

I3 (A)

I3 (A)

j3 (1)

0

0

6.0

10,020

379.7

798.7

72.5

1,178.5

13.5

19.5

5.9

9,953

213.3

666.5

69.9

879.8

j3 (1)

Torque Power Lossesrotor Lossesstator Uphase Total (Nm) (W) (W) (W) (V) losses (W)

R W Power Lossesrotor Lossesstator Uphase Total losses (mm) (mm) (W) (W) (W) (V) (W)

129.2 79.1 13.5

19.5 45.0

7.0

9,953

213.3

666.5

69.9

879.8

159.4 76.2

71.4 43.0

4.2

10,640

163.5

675.8

63.8

839.3

5.2

After adding the two dimension parameters, the performance of the high-speed machine at the optimal solution is notably improved. The critical rotor losses are reduced (−23 percent) while all the constraints are respected. Moreover, the final optimal solution can work with a lower DC bus voltage supply (−9 percent) and a higher mechanical power (+7 percent). This work is done for a single high-speed operating point; if the PM machine had to be designed for variable speeds, it would be necessary to reformulate the optimization problem, which is a natural continuation of this work.

Conclusions
Multi-phase machines are widely used in the automotive sector for reasons of reliability, smooth torque and power splitting. Among the different kinds of multi-phase machines, the synchronous PM machine appears as an attractive solution because of its high torque/volume ratio. Five-phase PM machines add a degree of freedom which, in certain cases, increases the flexibility of the control. Indeed, by injecting a relatively low third harmonic of current (about 10 percent of the fundamental), the DC bus voltage constraint of the PWM VSI is easier to respect. Moreover, the influence of the higher harmonic currents on the rotor losses, which is non-negligible, can be taken into account, and thus their influence on the total losses. The optimization results have shown the remarkable effect of using the degree of freedom offered by a five-phase structure on the iron and magnet losses: the total losses are notably reduced (−25 percent). Moreover, thanks to this optimization procedure the rotor losses are decreased far below the imposed limit (−47 percent), which increases the protection of the machine against magnet demagnetization. The RSM and the EGO algorithm can be employed, respectively, depending on the complexity of the surrogate model: the RSM approach is more effective for a simple problem where the surrogate model is smooth.
Nevertheless, the EGO algorithm is preferred for solving a more complex problem, as it progressively approaches the global optimal solution of the FEM with a small evaluation budget. By combining the control variables with two geometric parameters, a more complex optimization problem was formulated and solved in order to explore the sensitivity of the result to the geometric parameters. The performance of the five-phase machine with concentrated windings is notably improved at high speed (16,000 rpm).

References
Berbecea, A.C., Kreuawan, S., Gillon, F. and Brochet, P. (2010), “A parallel multi-objective efficient global optimization: the finite element method in optimal design and model development”, IEEE Transactions on Magnetics, Vol. 46 No. 8, pp. 2868-2871.
Berbecea, A.C., Ben-Ayed, R., Gillon, F., Brisset, S. and Brochet, P. (2012), “Comparison of efficient global optimization and output space mapping on the bi-objective optimization of a safety isolating transformer”, IEEE Transactions on Magnetics, Vol. 48 No. 2, pp. 791-794.
Chedot, L., Friedrich, G., Biedinger, J.-M. and Macret, P. (2007), “Integrated starter generator: the need for an optimal design and control approach. Application to a permanent magnet machine”, IEEE Transactions on Industry Applications, Vol. 43 No. 2, pp. 551-559.
Chen, J.-J. and Chin, K.-P. (2003), “Minimum copper loss flux-weakening control of surface mounted permanent magnet synchronous motors”, IEEE Transactions on Power Electronics, Vol. 18 No. 4, pp. 929-936.



Chong, L., Dutta, R., Rahman, M.F. and Lovatt, H. (2011), “Experimental verification of core and magnet losses in a concentrated wound IPM machine with V-shaped magnets used in field weakening applications”, Electric Machines & Drives Conference (IEMDC), IEEE International, May 15-18, pp. 977-982.
El-Refaie, A.M. (2010), “Fractional-slot concentrated-windings synchronous permanent magnet machines: opportunities and challenges”, IEEE Transactions on Industrial Electronics, Vol. 57 No. 1, pp. 107-121.
Forrester, A., Sobester, A. and Keane, A. (2008), Engineering Design via Surrogate Modeling (ISBN: 978-0-470-06068-1), John Wiley and Sons, Southampton.
Giurgea, S., Zire, H.S. and Miraoui, A. (2007), “Two stage surrogate model for finite-element-based optimization of permanent-magnet synchronous motor”, IEEE Transactions on Magnetics, Vol. 43 No. 9, pp. 3607-3613.
Gong, J., Berbecea, A., Gillon, F. and Brochet, P. (2011), “Optimal design of a double-sided linear induction motor using an efficient global optimization”, International Symposium on Linear Drives for Industry Applications (LDIA), July 3-6, Eindhoven.
Goupy, J. (1999), Plans d’expérience pour surface de réponse (ISBN: 2-10-003993-8), DUNOD, Paris.
Han, S.-H., Soong, W.L. and Jahns, T.M. (2007), “An analytical design approach for reducing stator iron losses in interior PM synchronous machines during flux-weakening operation”, Industry Applications Conference, 42nd IAS Annual Meeting, Conference Record of the IEEE, September 23-27, pp. 103-110.
Huang, X., Goodman, A., Gerada, C., Fang, Y. and Lu, Q. (2012), “Design of a five-phase brushless DC motor for a safety critical aerospace application”, IEEE Transactions on Industrial Electronics, Vol. 59 No. 9, pp. 3532-3541.
Kreuawan, S. (2008), “Modeling and optimal design in railway applications”, PhD dissertation, Ecole Centrale de Lille, Lille, available at: http://tel.archives-ouvertes.fr/index.php?halsid=5oc95e2243gc3k9gestnu5km27&view_this_doc=tel-00363633&version=2
Kreuawan, S., Gillon, F. and Brochet, P. (2007), “Efficient global optimization: an efficient tool for optimal design”, Proceedings of the 16th International Conference on the Computation of Electromagnetic Fields (Compumag), June 24-28, Aachen.
Levi, E. (2008), “Multiphase electrical machines for variable-speed applications”, IEEE Transactions on Industrial Electronics, Vol. 55 No. 5, pp. 1893-1909.
Lu, L., Semail, E., Kobylanski, L. and Kestelyn, X. (2011), “Flux-weakening strategies for a five-phase PM synchronous machine”, European Power Electronics Congress (EPE), Birmingham, September.
Lu, L., Aslan, B., Kobylanski, L., Sandulescu, P., Meinguet, F., Kestelyn, X. and Semail, E. (2012), “Computation of optimal current references for flux-weakening of multi-phase synchronous machines”, IEEE IECON’2012, International Conference on Industrial Applications of Electronics, Montreal, September.
Parsa, L. and Toliyat, H.A. (2007), “Fault-tolerant interior-permanent-magnet machines for hybrid electric vehicle applications”, IEEE Transactions on Vehicular Technology, Vol. 56 No. 4, pp. 1546-1552.
Parsa, L., Kim, N. and Toliyat, H. (2005), “Field weakening operation of a high torque density five phase permanent magnet motor drive”, IEEE International Electric Machines and Drives Conference, pp. 1507-1512.
Sacks, J., Welch, W.J., Mitchell, T.J. and Wynn, H.P. (1989), “Design and analysis of computer experiments”, Statistical Science, Vol. 4 No. 4, pp. 409-435.
Sadeghi, S. and Parsa, L. (2011), “Multiobjective design optimization of five-phase halbach array permanent-magnet machine”, IEEE Transactions on Magnetics, Vol. 47 No. 6, pp. 1658-1666.

Schonlau, M., Welch, W.J. and Jones, D.R. (1998), “Global versus local search in constrained optimization of computer models”, IMS Lecture Notes, Vol. 34, pp. 11-25, available at: http://books.google.fr/books?id=IvEA6CDt21kC&pg=PA11
Sue, S.M. and Pan, C.T. (2008), “Voltage-constraint-tracking-based field-weakening control of IPM synchronous motor drives”, IEEE Transactions on Industrial Electronics, Vol. 55 No. 1, pp. 340-347.
Sul, S.-K. (2011), Control of Electric Machine Drive Systems (Chapter 5), IEEE Press Series on Power Engineering, John Wiley & Sons, Hoboken, NJ.
Sun, Z., Wang, J., Jewell, W. and Howe, D. (2010), “Enhanced optimal torque control of fault-tolerant PM machine under flux-weakening operation”, IEEE Transactions on Industrial Electronics, Vol. 57 No. 1, pp. 344-353.
Wang, G.G. and Shan, S. (2007), “Review of metamodeling techniques in support of engineering design optimization”, Journal of Mechanical Design, Vol. 129 No. 4, pp. 370-380.
Xuelei, S., Xuhui, W. and Wei, C. (2011), “Research on field-weakening control of multiphase permanent magnet synchronous motor”, Electrical Machines and Systems (ICEMS), International Conference, Beijing, August 20-23, pp. 1-5.
Yamazaki, K. and Fukushima, Y. (2011), “Effect of eddy-current loss reduction by magnet segmentation in synchronous motors with concentrated windings”, IEEE Transactions on Industry Applications, Vol. 47 No. 2, pp. 779-788.

Web reference
Available at: www.ansys.com/Products/Simulation+Technology/Electromagnetics/Electromechanical+Design/ANSYS+Maxwell (accessed September 19, 2012).

Appendix. Basis of the Kriging method
The Kriging method was first developed by D. Krige and was introduced into the field of computer science and engineering by Sacks et al. (1989) (Chen and Chin, 2003). In a Kriging model, an unknown function y can be expressed as in the following equation:

y = B(x) + Z(x) \qquad (7)

where B(x) is a regression or polynomial model giving the global trend of the modeled function y, and Z(x), a model of a stochastic process, gives the local deviations from the global trend. The Gaussian correlation function is chosen in order to control the smoothness of the model. The mean square error (MSE) is the expected value of the difference between the true response and the estimated one. By minimizing the expected MSE, the expression for the Kriging model is:

\hat{y}(x) = f\hat{B} + r^{T} R^{-1} (y - f\hat{B}) \qquad (8)

where f is a unit vector with length equal to the number of sampled points, \hat{B} is the estimator for the regression model, r is the correlation vector between a new location x to be estimated and the sample point locations, R is the correlation matrix of the sampled points, and y is the true response vector of the sampled points.

Corresponding author
Professor Eric Semail can be contacted at: [email protected]


COMPEL 33,3


Topology optimization of magnetostatic shielding using multistep evolutionary algorithms with additional searches in a restricted design space

Yoshifumi Okamoto and Yusuke Tominaga
Department of Electrical and Electronic Systems Engineering, Utsunomiya University, Utsunomiya, Japan

Shinji Wakao
Department of Electrical Engineering and Bioscience, Waseda University, Tokyo, Japan, and

Shuji Sato
Department of Electrical and Electronic Systems Engineering, Utsunomiya University, Utsunomiya, Japan

Abstract

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 894-913 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-10-2012-0202

Purpose – The purpose of this paper is to improve the multistep algorithm using an evolutionary algorithm (EA) for the topology optimization of magnetostatic shielding, and to reveal the effectiveness of the methodology by comparison with a conventional optimization method. Furthermore, the design target is to obtain a novel shape of magnetostatic shielding.
Design/methodology/approach – EAs based on random search allow engineers to define general-purpose objectives with various constraint conditions; however, many iterations of the FEA are required to evaluate the objective function, and it is difficult to realize a practical solution without island and void distributions. The authors therefore proposed a multistep algorithm with design space restriction, and improved it in order to obtain a better solution than the previous one.
Findings – A variant model of the optimized topology derived from the improved multistep algorithm is defined to clarify the effectiveness of the optimized topology. The upper curvature of the inner shielding contributes to the reduction of the magnetic flux density in the target domain.
Research limitations/implications – Because the converged topology has much pixel-element unevenness, a special smoother to remove the unevenness will play an important role in the realization of practical magnetostatic shielding.
Practical implications – The optimized topology will give a useful detailed structure of magnetostatic shielding.
Originality/value – First, while the conventional algorithm could not find a reasonable shape, the improved multistep optimization can capture one. Second, an additional search is attached to the multistep optimization procedure. It is shown that the performance of the improved multistep algorithm is better than that of the conventional algorithm.
Keywords Optimization, Optimal design, FE method
Paper type Research paper

This work was supported by JSPS ( Japan Society for the Promotion of Science) Grant-in-Aid for Young Scientists (B) Grant Number 25820099.

Introduction
Topology optimization enables the design of novel geometries for electrical machines because the optimized solution does not depend on conventional design information. Homogenization-theory-based topology optimization was first proposed by Bendsøe and Kikuchi in the field of structural analysis (Bendsøe and Kikuchi, 1988). Subsequently, Bendsøe extended the homogenization-based method to the material-density-based approach (Bendsøe, 1989), and this approach was introduced into electromagnetic systems (Dyck and Lowther, 1996; Byun et al., 1999; Labbé and Dehez, 2010). The material-density-based formulation permits the generation of intermediate material densities, whereas the utilization of an evolutionary algorithm (EA) offers the possibility of searching for a definite solution composed of a binary material distribution without sensitivity computation of the objective function. This paper proposes a topology optimization method for a magnetostatic shielding problem in order to reduce the magnetic flux density as much as possible. The design is achieved via multistep (MS) EAs in which the resolution is gradually improved using several kinds of finite element meshes. As a result, it has been shown that the convergence characteristics of the objective function and the converged value improve considerably (Okamoto et al., 2012). However, the proposed method was previously applied only to the design of a typical magnetic circuit problem to increase the electromagnetic force; it was unknown whether the method is effective for a magnetic shielding design composed of layers thinner than the element size of the arranged finite elements. By reducing the constraint value for the shielding area at each optimization step, the effective topology of the magnetostatic shielding is computed.
Only the genetic algorithm (GA) (Holland, 1975) had previously been implemented for the MS procedure; hence, a comparison with the immune algorithm (IA) (Farmer et al., 1986; Campelo et al., 2007) is drawn. Furthermore, an additional search (AS) in the restricted design space is added to the conventional MS procedure to produce a better solution for the next optimization step. As a result, it is shown that the proposed method is effective for topology optimization of the magnetostatic shielding model.

Multistep optimization procedure
Finite element analysis with non-conforming connection
The weak form G_i of the Maxwell equation in a magnetostatic field is as follows:

G_i = \int_{\Omega} (\nabla \times N_i) \cdot ([\mu]^{-1} \nabla \times A) \, dV - \int_{\Omega_c} N_i \cdot J_0 \, dV = 0 \qquad (1)
where N_i is the edge-based vector shape function, which is equivalent to the 2-D nodal shape function when the finite element mesh of the entire region is constructed by piling up a layer of first-order finite elements such as a quadrilateral mesh. The variables A, [\mu], and J_0 are the magnetic vector potential, permeability tensor, and current density vector, respectively. The regions \Omega and \Omega_c are the entire analyzed region and the magnetizing winding region. The number of finite elements of a topological design domain tends to increase, which further lengthens the already large elapsed time for topology optimization. While the mesh of the design domain for topology optimization is finely discretized, a coarse mesh of the outer space is preferable in order to reduce the elapsed time for optimization. In particular, the degree of flexibility in mesh density is lower when adopting rectangular discretization. Thus, a non-conforming mesh connection (Kometani et al., 2000;


Okamoto et al., 2008) is an effective numerical method to connect the optimization region with the outer space. The non-conforming connection using a sub-mesh is achieved using the following linear combination:

A_{ab} = \int_{l_{ab}} \left( \sum_{k} N_k A_k \right) \cdot dl \qquad (2)
where A_{ab} is the magnetic vector potential value on edge ab of the slave-side mesh, and l_{ab} is the length of edge ab. Interpolation is carried out by a line integral from the coarser mesh (slave side) to the finer one (master side) on the connection interface. When the non-conforming connection is applied to the specified interface, the global matrix maintains its sparse symmetric properties. Therefore, a sparse linear iterative solver based on Krylov subspaces can be applied to the finite element weak form with a non-conforming connection. The symmetric Gauss-Seidel preconditioned minimized residual method based on the three-term recurrence formula of the CG type (MRTR method) (Abe and Zhang, 2008) with Eisenstat's technique (Eisenstat, 1981) is adopted as the linear solver for (1).

Evolutionary algorithms
The evolutionary heuristic algorithm is more versatile than sensitivity-based methods because the sensitivity analysis and program re-implementation usually caused by a change in the optimization target are not needed. In this paper, the GA and IA are applied to a 2-D magnetostatic shielding problem. Table I lists the main parameters of the GA and IA. The phenotype information of the ON-OFF material in an element is expressed by a 1-0 locus in the genotype, and the fitness is defined as the inverse of the objective function. When there is no improvement in the best solution over 200 generations, or when the generation number reaches the maximum tolerance of 30,000, the evolutionary search is stopped.

GA
The evolutionary factors of the GA mainly consist of uniform crossover, an elite strategy, and roulette selection with exponential scaling. The scaled fitness value F_{si} of an individual i is determined using a scaling power p:

F_{si} = \left( F_i \Big/ \sum_{k} F_k \right)^{p} \qquad (3)

where F_i is the fitness value of individual i, calculated as the inverse of the objective function W. The mutation rate is set either constant or dynamically changing. While the constant mutation rate is set to the inverse of the number of design variables n_d, the dynamic mutation rate p(t) at generation number t is computed as follows:

p(t) = (p_0 - 1/n_d) e^{-\gamma t} + 1/n_d \qquad (4)

where p_0 and \gamma are the initial mutation rate and the factor of the exponent, respectively. This equation is a variant formulation of the simplified Hesser's mutation rate (Hesser and Männer, 1991):

p(t) = p_0 e^{-\gamma t} \qquad (5)

Table I. Evolutionary algorithm parameters
Evolutionary algorithm   Population number   Crossover ratio   Power of scaling   Elite number   T_p    T_p1   T_p2
GA, IA                   48                  0.8               10                 2              0.1    0.05   0.2

Hesser’s mutation rate has a horizontal asymptote at zero. Therefore, the possibility of a trapped individual to get out of a local minimum becomes lower than that in the proposed dynamic mutation rate in (4), because the mutation rate has an asymptote at 1/nd. Figure 1(a) shows the procedure for the conventional GA. When the dynamic mutation rate is considered, the mutation rate changes every generation.


IA
The main variables of the IA are the affinity F between antigen and antibody, the similarity C between antibodies, the density Y of an antibody, and the differentiation of the suppressor T cell. The density Y of antibody v is evaluated as:

Y_v = \frac{1}{N} \sum_{w=1}^{N} p_{vw}, \qquad p_{vw} = \begin{cases} 1 & (C_{vw} \ge T_{p1}) \\ 0 & (C_{vw} < T_{p1}) \end{cases} \qquad (6)

where the similarity C_{vw} between antibodies v and w is computed by the Hamming distance, N is the number of antibodies, and T_{p1} is a threshold value for the judgment of p_{vw}. The selection ratio for the roulette is based on:

E_v = \frac{F_v}{Y_v \sum_{k=1}^{N} F_k} \prod_{k=1}^{N_s} (1 - \chi_{vk}), \qquad \chi_{vk} = \begin{cases} C_{vk} & (C_{vk} \ge T_{p2}) \\ 0 & (C_{vk} < T_{p2}) \end{cases} \qquad (7)

[Figure 1. Evolutionary algorithm procedures: (a) GA: generate initial population; compute objective function W_i of each individual i by FEM; check convergence; store elite individuals; selection; crossover; mutation. (b) IA: generate initial population; compute W_i by FEM; check convergence; evaluate similarity; differentiation to T cells; selection.]


where N_s is the number of preserved antibodies in the suppressor T cell and T_{p2} is a threshold value for the judgment of \chi_{vk}. Figure 1(b) shows the conventional procedure of the IA. As in the GA procedure, the dynamic mutation rate is changed every generation using (4).

Multistep procedure with AS in restricted space
Figure 2 shows the conventional multistep procedure with design space restriction (DSR). First, a solution is searched for using the coarsest mesh, as shown in Figure 2(a). Next, the rough solution in Figure 2(b) is directly allocated to the doubly divided mesh, as shown in Figure 2(c). The next designed element is specified by DSR in Figure 2(c). As a result of the DSR scheme, the initial number of design variables (n_d = 60) in a fine mesh can be reduced to n_d = 34. It is possible to obtain a practical solution by increasing the number of optimization steps even when the problem is composed of many design variables. Figure 3 shows the MS procedure with an AS. First, a solution is searched for in the coarsest mesh, as shown in Figure 3(a). While the previous rough solution is allocated to the doubly divided mesh in the conventional multistep procedure in Figure 2(c), the AS is performed with DSR to allocate a better solution to the next fine mesh, as shown in Figures 3(c) and (d). By allocating a coarse solution to a fine mesh, the optimization is performed as shown in Figures 3(e) and (f). Finally, the AS with DSR is carried out again in Figures 3(g) and (h). Figure 4 shows the flowchart for multistep optimization. First, the multistep number N_s is determined (step 1). Next, the design domain is set on the ith finite element mesh (step 2), and a solution is searched for by an EA (step 3). When the AS is scheduled in the multistep procedure (step 4), the allocation of the solution (step 5) to the ith finite element mesh is considered. After DSR (step 6), the AS (step 7) is performed. When the step number i reaches N_s, the multistep optimization is stopped.
Otherwise, the optimal solution is allocated to the (i+1)th finite element mesh. Next, DSR (step 10) is performed, and the constraint value S_i for the magnetostatic shielding volume is gradually reduced, owing to the shortened element size, as the optimization step progresses:

S_{i+1} = k_a S_i \qquad (8)

where the constant k_a is the annealing factor, which is set to 0.65. This value was determined by numerical experiments consisting of several trials (k_a = 0.6, 0.65, 0.7) (Tominaga et al., 2011). Next, the operation returns to step 2, and the aforementioned procedure is iterated until the convergence criterion (step 8) is satisfied.

[Figure 2. Conventional multistep optimization procedure: (a) coarse mesh (n_d = 15); (b) 1st global solution; (c) allocation of the coarse solution and DSR (n_d = 34); (d) 2nd global solution. Legend: air element, iron element, designed element.]

[Figure 3. Procedure for multistep optimization with an additional search: (a) coarse mesh (n_d = 15); (b) 1st global solution; (c)-(d) AS with DSR (n_d = 13); (e) allocation of the coarse solution and DSR (n_d = 34); (f) 2nd global solution; (g)-(h) AS with DSR, fine solution (n_d = 41).]

[Figure 4. Flowchart for multistep optimization with an additional search (steps 1-11).]

Optimized model
Box shield model
Figure 5 shows the standard box shield model. The shield thickness in the standard model is 2 mm, and the 2-D topology of the magnetostatic shielding is optimized in the design domain to reduce the maximum magnetic flux density in the target domain by distributing magnetic material with a linear relative permeability \mu_r set to 10^3. The current I of the magnetizing winding is 1,000 AT. The elapsed time is reduced by an independent definition of the design domain using a non-conforming mesh connection. The boundary condition A \times n = 0 is imposed on the boundary y = 0, and the condition A \cdot n = 0 is imposed on the boundary x = 0 (Figure 6). In this optimization problem, the optimization step number N_s is set to 4, and S_i in the ith step is determined from the known value S_4 in Table II by sequentially dividing S_{i+1} by k_a. Figure 7 shows the finite element meshes of the four steps. As the optimization step progresses, the element length in the design domain becomes half of that of the previous step, while the element length in the target and the other domains is identical in all meshes owing to the non-conforming mesh connection. The number of

[Figure 5. Standard model of the box shield model (unit: mm): coil, two-layer magnetic shielding, and target domain.]

[Figure 6. Optimized model of the box shield model (unit: mm): design domain, coil, target domain, and non-conforming boundary.]

Table II. Parameters of evolutionary algorithms
                                                       Standard model   Step number i in MSEA: 1st    2nd     3rd     4th
Number of total elements                               65,208           860      1,726    5,042   17,986
Number of elements in design domain                    -                240      960      3,840   15,360
Constraint value of shielding area S_i x 10^4 (m^2)    4.040            14.71    9.553    6.215   4.040

elements in the design domain, which is equal to n_d without DSR, is quadrupled at each optimization step, as shown in Table II.

Objective function
The topology optimization for magnetostatic shielding is formulated as follows:

Minimize \; |B_{max}|^2
Subject to \; S_{iron} = \iint_{\Omega_{iron}} dS \le S_i \qquad (9)

[Figure 7. Finite element meshes for multistep optimization: (a) 1st, (b) 2nd, (c) 3rd, and (d) 4th step, each with an enlarged view of the design domain.]

where B_max is the maximum flux density in the target region, S_iron is the area of the magnetostatic shielding, and \Omega_{iron} is the shielding region. The objective function W that needs to be minimized can be defined as follows:

W = |B_{max}|^2 + P(S_{iron}) \qquad (10)

where P(S_iron) is the penalty function with respect to S_iron:

P(S_{iron}) = \begin{cases} 0 & (S_{iron} < S_i) \\ k_p (S_{iron} - S_i) & (S_{iron} \ge S_i) \end{cases} \qquad (11)

where the constant k_p is the penalty factor applied when S_iron is greater than or equal to S_i, and is set to 10^{10}.

Optimization results
Performance of the non-conforming mesh connection
A non-conforming mesh connection (Kometani et al., 2000; Okamoto et al., 2008) is effective for multistep optimization because a finite element mesh needs to be created only in the design domain. This section shows the effectiveness and accuracy of the non-conforming mesh connection for the magnetostatic shielding problem. Figure 8 shows the finite element meshes for verifying the accuracy of the non-conforming mesh connection. The outer space and the region between the target domain and the magnetostatic shielding are roughly discretized in order to reduce the computational cost. The mesh distribution of the magnetostatic shielding in the conforming mesh is the same as that in the non-conforming mesh. Figure 9 shows the distributions of the magnetic flux density along the y-axis for these two meshes. The non-conforming distribution has characteristics similar to those of the conforming one.
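The penalized objective of equations (10)-(11) and the annealed area constraints of equation (8) can be sketched together as follows. The helper names are illustrative assumptions; k_p = 10^{10}, k_a = 0.65, and S_4 = 4.040 x 10^{-4} m^2 follow the text and Table II.

```python
def objective(B_max, S_iron, S_i, k_p=1e10):
    """Penalized objective W of equations (10)-(11):
    W = |B_max|^2 + P(S_iron), where P = 0 below the area constraint S_i
    and k_p * (S_iron - S_i) at or above it (k_p = 1e10 as in the text)."""
    penalty = 0.0 if S_iron < S_i else k_p * (S_iron - S_i)
    return B_max ** 2 + penalty

def constraint_schedule(S_final, n_steps, k_a=0.65):
    """Annealed area constraints of equation (8), S_{i+1} = k_a * S_i,
    recovered backwards from the known final value (S_4 in Table II)."""
    S = [S_final]
    for _ in range(n_steps - 1):
        S.append(S[-1] / k_a)   # S_i = S_{i+1} / k_a
    return list(reversed(S))    # [S_1, ..., S_n]
```

Running `constraint_schedule(4.040e-4, 4)` reproduces the per-step constraint values listed in Table II (14.71, 9.55, 6.215, 4.040, in units of 10^{-4} m^2).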

[Figure 8. Finite element meshes for verifying the accuracy of a non-conforming mesh connection: (a) conforming; (b) non-conforming.]

[Figure 9. Distributions of |B_y| on the y-axis for the conforming and non-conforming meshes, with an enlarged view near y = 0.1 m (|B_y| about 8.1-8.2 mT).]

The relative error of the non-conforming mesh compared to the conforming mesh is approximately 0.5 percent in the neighborhood of the shielding. The discrepancy in the maximum flux in the target domain between the conforming and the non-conforming mesh is <0.1 percent, as shown in Table III. Furthermore, the non-conforming mesh connection can reduce the elapsed time to less than one-fifth of that of the conforming mesh connection, owing to the reduction in the number of finite elements.

Table III. Numerical results
Mesh type        NoE             DoF             Preconditioned MRTR iterations   B_max (mT)   Elapsed time (s)
Conforming       10,788 (1.00)   10,672 (1.00)   354                              351.41       0.89 (1.00)
Non-conforming    3,001 (0.28)    2,791 (0.26)    83                              351.62       0.14 (0.16)

Parallelization performance of EA
A large elapsed time is required for topology optimization based on an EA. However, it is not difficult to parallelize the EA procedure, which is composed of multiple individuals in one generation, as in the GA or IA. Therefore, we use a PC with 12 threads composed of two XEON X5680 CPUs running at 4.0 GHz with 48 GB of memory. The fitness of each individual in one generation is computed by a maximum of 12 threads using the message passing interface (MPI). The finite element mesh shown in Figure 7(d) is adopted for the performance evaluation. Figure 10 shows the scalability of the parallelized GA when the maximum generation is set to 10 and the random number sequence is fixed. We find that high scalability is obtained: the elapsed time could be reduced to approximately one-tenth compared to the case with one process element (PE). Therefore, a parallelized EA with 12 threads is used in the following topology optimizations.
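The per-generation fitness evaluation parallelized here can be sketched generically as follows. This is not the authors' MPI code: a thread pool is used purely as an illustrative stand-in for the 12 MPI processes, and `dummy_fitness` replaces the expensive FEM run.

```python
from concurrent.futures import ThreadPoolExecutor

def dummy_fitness(individual):
    # cheap stand-in for one FEM evaluation of the objective function
    return sum(individual)

def evaluate_generation(population, fitness, n_workers=12):
    """Evaluate every individual of one generation concurrently.
    The paper distributes the per-individual FEM runs over 12 MPI
    processes; a thread pool is used here as an illustrative stand-in."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fitness, population))
```

Because the individuals of one generation are independent, the evaluation order does not matter and `map` preserves the population ordering of the results.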


Optimization of the constants for the dynamic mutation rate
The constants p_0 and \gamma for the dynamic mutation rate (4) are taken as all combinations of p_0 = 0.05, 0.1, 0.2 and \gamma = 0.005, 0.01, 0.05, 0.1. A survey for generally optimal p_0 and \gamma is carried out in this section. The finite element mesh (n_d = 240) shown in Figure 7(a) is adopted as the verification problem, and both the elapsed generation number and the converged W are estimated as averaged values over ten trials. Table IV shows the results of the elapsed generation number and the converged W with and without dynamic mutation in a single-step optimization. The shaded cells indicate a converged W better than in the constant-mutation case. The constant mutation rate is set to 1/n_d, which is equal to 4.166 x 10^{-3}. The converged W is mildly improved by means of the dynamic mutation rate, and the converged W using (4) is generally better than that using (5). This tendency also holds for the results using the IA. The reason for the improvement of W is that the mutation rate in the first half of the run is generally larger than in the constant case, which helps the search escape trapping at a local minimum. The best combinations of (4), (p_0, \gamma) = (0.05, 0.01) and (0.1, 0.1), are adopted as the parameters for the dynamic mutation of the GA and IA, respectively, in the following optimizations.

Optimization using IA
Figure 11 shows the optimization results by means of a single-step optimization. The contour of the magnetic circuit could not be followed owing to the large number of design variables. In contrast, the multistep IA (MSIA) follows the rough shape composed of two layers shown in Figure 12, and the inner shielding is determined as the optimization step progresses. An island distribution of the magnetic core is generated at the upper tip of the design domain. Figure 13 shows the optimization results by means of multistep optimization with AS. The first solution in Figure 13(a) is the same as the solution shown in Figure 12(a).
The performance of the solution could be slightly improved by the AS. For example, the island distribution of the magnetic element

[Figure 10. Scalability of the parallelized GA: measured versus ideal scalability and elapsed time (s) against the number of PEs (1-12).]
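As a quick arithmetic note on the scalability result above (the variable names below are illustrative): the stated reduction of the elapsed time to roughly one-tenth at 12 process elements corresponds to a parallel efficiency of about 0.83.

```python
# The text reports the 12-PE elapsed time as roughly one-tenth of the
# single-PE case; the corresponding speedup and parallel efficiency are:
speedup = 10.0                 # elapsed time reduced to ~1/10
n_pe = 12                      # number of process elements
efficiency = speedup / n_pe    # ~0.83, i.e. close to ideal scalability
```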

[Table IV. Parameter survey for the dynamic mutation rate: averaged elapsed generation number and converged W (x 10^9) for (a) GA and (b) IA, comparing the constant rate 1/n_d, Hesser's rate (5), and the variant rate (4) over p_0 = 0.05, 0.1, 0.2 and \gamma = 0.005, 0.01, 0.05, 0.1, with ratios to the constant-mutation case in parentheses.]

[Figure 11. Optimized topology for the magnetostatic shielding by means of single-step IA.]

[Figure 12. Optimized topology for the magnetostatic shielding by means of MSIA without AS: (a) 1st, (b) 2nd, (c) 3rd, (d) 4th step.]

in Figure 13(a) can be terminated using the AS to some extent, which improves the objective function value. The better solution is therefore used for the next optimization step. Finally, the inner part of the magnetic shielding is almost eliminated. Table V shows the effectiveness of DSR in the MSIA: n_d is effectively reduced in all optimization steps. Figure 14 shows the convergence characteristics of W. While the characteristic of the conventional IA does not converge well owing to the many design variables (n_d = 15,360), the objective function using MSIA with DSR is effectively reduced. The discontinuous jumps of W in the MSIA are caused by the allocation of the optimized solution to finer meshes as the optimization progresses. Finally, the solution obtained by MSIA with AS has the best converged value among the three optimal designs. Table VI shows the optimization results. While the solution of the conventional IA does not satisfy the constraint condition on the magnetostatic shielding area (S_iron \le 4.04 x 10^{-4} m^2), the solutions of the MSIA satisfy the constraint condition on the area

[Figure 13. Optimized topology for the magnetostatic shielding by means of MSIA with AS: (a) 1st, (b) 1st after AS, (c) 2nd, (d) 2nd after AS, (e) 3rd, (f) 3rd after AS, (g) 4th, (h) 4th after AS.]

Table V. Effectiveness of DSR in MSIA: number of design variables n_d (ratio to the case without DSR in parentheses)
Without AS:
  Step          1st          2nd          3rd           4th
  Without DSR   240 (1.00)   960 (1.00)   3,840 (1.00)  15,360 (1.00)
  With DSR      240 (1.00)   329 (0.34)   679 (0.18)    1,393 (0.091)
With AS:
  Step          1st          1st AS       2nd          2nd AS       3rd          3rd AS       4th           4th AS
  Without DSR   240 (1.00)   -            960 (1.00)   -            3,840 (1.00) -            15,360 (1.00) -
  With DSR      240 (1.00)   116 (0.48)   327 (0.34)   265 (0.28)   659 (0.17)   546 (0.14)   1,335 (0.10)  1,161 (0.076)

of magnetostatic shielding. The maximum flux density is effectively reduced by means of the multistep optimization. While the elapsed time of the multistep optimization with AS is longer than that without AS, the maximum flux value obtained with AS is lower than that obtained without AS.

Optimization using GA
Figure 15 shows the optimization results using single-step optimization. The contour of the magnetic circuit could not be followed for the many design variables, as in the case for

[Figure 14. The convergence characteristics for the optimization using IA: log10 W versus generation number (0-80,000) for the conventional IA, MSIA without AS, and MSIA with AS, with an enlarged view of the multistep characteristics.]

Table VI. The optimization results by means of IA (steps 1st-4th and totals; ratios to the conventional IA in parentheses)
Elapsed time (h):        Conventional 18.8 (1.00); Without AS: 0.0829 / 0.188 / 1.61 / 16.0, total 17.9 (1.06); With AS: 0.103 / 0.343 / 2.24 / 21.6, total 24.3 (1.37)
Elapsed generation:      Conventional 30,000 (1.00); Without AS: 4,800 / 5,286 / 14,126 / 30,000, total 54,212 (1.81); With AS: 5,999 / 19,105 / 12,428 / 40,898, total 78,430 (2.61)
S_iron x 10^4 (m^2):     Conventional 7.36; Without AS: 14.70 / 9.562 / 6.211 / 4.038; With AS: 14.57 / 9.547 / 6.215 / 4.038
|B_max| (mT):            Conventional 3.00 x 10^3 (1.00); Without AS: 41.5 / 61.8 / 105.0 / 427.0 (0.14); With AS: 21.5 / 39.9 / 86.7 / 306.0 (0.10)
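As an arithmetic check on the generation counts reported in Table VI (variable names below are illustrative), the step-wise values sum to the stated totals, and the totals relative to the 30,000 generations of the conventional IA reproduce the tabulated ratios:

```python
# Generation counts per optimization step, as reported in Table VI (IA)
gen_without_as = [4800, 5286, 14126, 30000]   # MSIA without AS, steps 1-4
gen_with_as = [5999, 19105, 12428, 40898]     # MSIA with AS, steps 1-4

total_without = sum(gen_without_as)           # 54,212 generations
total_with = sum(gen_with_as)                 # 78,430 generations

# Ratios against the 30,000 generations of the conventional IA
ratio_without = total_without / 30000         # ~1.81
ratio_with = total_with / 30000               # ~2.61
```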

[Figure 15. Optimized topology for the magnetostatic shielding using single-step GA.]

the IA shown in Figure 11. When the multistep optimization is applied, the topology of the two-layered magnetic shielding can be followed, as shown in Figure 16. The cross-sectional area of the magnetostatic shielding becomes narrower because the constraint condition on the magnetostatic shielding is gradually tightened according to (8), and uneven shapes of the magnetostatic shielding appear. Figure 17 shows the topology changes of the magnetostatic shielding using the multistep GA (MSGA) with AS. The first topology is the same as the result using MSGA without AS in Figure 16(a). The topology unevenness of the magnetostatic shielding in Figure 16(d) is removed by the AS, as shown in Figure 17(h). Table VII shows the results of DSR in the MSGA; here, too, the DSR effectively reduces n_d. Figure 18 shows the convergence characteristics of W. The characteristic of the conventional GA slowly converges to a local solution; the drastic decrease in W after 18,000 generations is attributed to satisfying the constraint condition on the magnetostatic shielding area. In the multistep optimizations, the converged value with AS is lower than that without AS, because a better solution can be allocated at each step of the multistep optimization with AS. Therefore, the converged value is slightly improved. Table VIII shows the optimization results. The area constraint for the magnetostatic shielding is satisfied in all results, and the number of elapsed generations for the MSGA is less than that of the MSIA. Furthermore, the converged B_max of the MSGA is less than that of the MSIA, which shows that the GA is more effective than the IA for this problem. The converged B_max in MSGA with AS is, in turn, less than that without AS.

Performance of the optimized model
Figure 19 shows the distributions of the magnetic flux densities. The flux densities of the optimized models are considerably lower than that of the standard model.
While the upper-center flux using MSIA with AS, shown in Figure 19(b), is increased owing to the extinction of the center part of the inner magnetic shielding, the maximum flux is reasonably reduced in the result using MSGA with AS, shown in Figure 19(c). Figure 20 shows the comparison of the B_y distributions of the standard and optimized models along the y-axis. The characteristics of the magnetic flux density in the target region are improved by the multistep optimizations, and the characteristic of MSGA with AS is superior to that of MSIA with AS.

[Figure 16: Optimized topology for the magnetostatic shielding by means of MSGA without AS — panels (a)-(d) show the 1st to 4th steps]

[Figure 17: Optimized topology for the magnetostatic shielding by MSGA with AS — panels (a)-(h) show each step (1st-4th) and its shape after AS]
Comparison between optimized model and variant model
Figure 21(a) shows the optimized topology obtained by MSGA with AS. There is a curvature in the y-direction on the upper side of the inner shield, while the shielding shape elsewhere is roughly straight. It is therefore worth checking whether the curved part actually reduces the maximum flux density in the target region. We define the variant model of the optimized topology in Figure 21(b). Table IX shows the comparison of the performance of the various models. The maximum flux density is increased in the variant model. It is clear that the curvature of the upper side of the inner shield in the optimized model is effective for the reduction of the maximum flux in the target region.

Table VII. Effectiveness of DSR in MSGA

COMPEL 33,3

[Figure 18: Convergence characteristics for the optimization using GA — log10 W vs. generation number for the conventional GA, MSGA without AS and MSGA with AS]

[Table VIII: Optimized results using GA — elapsed time (h), elapsed generations, Siron × 10^4 (m²) and |Bmax| (mT) per step, for the conventional GA and for MSGA without and with AS]

[Figure 19: Magnetic flux densities |B| for various models — (a) standard, (b) MSIA with AS, (c) MSGA with AS]

[Figure 20: Comparison of |By| distributions between the standard and optimized models along the y-axis, with an enlarged view of the target domain]

[Figure 21: Optimized model using MSGA with AS and the variant model, with enlarged views]

Table IX. Performance of the optimized model

                      Standard model   Variant model   Optimized model
Siron × 10^4 (m²)     4.040            4.029           4.040
|Bmax| (mT)           351.6 (1.000)    82.24 (0.234)   80.06 (0.228)

Conclusions
We proposed a multistep optimization method with AS for the topological design of a magnetostatic shielding model. The obtained results are as follows:

(1) The non-conforming mesh connection using a linear combination is an effective analysis method for multistep optimization, because only the mesh in the design domain needs to change in order to maintain computation accuracy, and the total elapsed time can be reduced by connecting rough meshes in the space outside the main region.

(2) The evolutionary algorithm is implemented on a PC with 12 cores using parallelization. The fitness computation of each individual using finite element analysis is assigned to a single thread by MPI. As a result, the computation speed is improved tenfold compared with sequential computation.

(3) An improvement of the dynamic mutation proposed by Hesser is formulated. The optimized parameter is roughly estimated for the topology optimization of the magnetostatic shielding problem and implemented in IA and GA. As a result, there is some improvement in the converged objective functions.

(4) MSGA and MSIA are applied to the topology optimization of magnetic shielding. While the conventional algorithm could not find a reasonable shape, the multistep optimization can capture one.

(5) An AS is attached to the multistep optimization procedure. As a result, it is shown that the solution is improved when AS is implemented in the conventional multistep optimization procedure. In the 2-D design problem of magnetic shielding, the performance of GA is better than that of IA.

(6) The variant model of the optimized topology using MSGA with AS is defined to clarify the effectiveness of the optimized topology. The upper curvature of the inner shielding contributes to the reduction of the magnetic flux density in the target domain.
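The MPI farming pattern in item (2) — one fitness evaluation per worker — can be sketched as follows. This is a stand-in using a thread pool rather than MPI, and `fitness` is any hypothetical callable (in the paper, a finite element analysis), so it illustrates the pattern rather than the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(fitness, population, workers=12):
    """Evaluate each individual's fitness in its own worker, mirroring the
    one-individual-per-MPI-thread farming described in item (2). A thread
    pool is used here as a portable stand-in for the paper's MPI setup."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))
```

With 12 workers and the cost dominated by the solver call, wall-clock time per generation scales with the slowest batch of 12 individuals, which is consistent with the roughly tenfold speed-up reported.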

References
Abe, K. and Zhang, S.-L. (2008), "A variant algorithm of the Orthomin(m) method for solving linear systems", Applied Mathematics and Computation, Vol. 206 No. 1, pp. 42-49.
Bendsøe, M.P. (1989), "Optimal shape design as a material distribution problem", Struct. Optim., Vol. 1, pp. 193-202.
Bendsøe, M.P. and Kikuchi, N. (1988), "Generating optimal topologies in structural design using a homogenization method", Comput. Methods Appl. Mech. Eng., Vol. 71 No. 2, pp. 197-224.
Byun, J.K., Hahn, S.Y. and Park, I.H. (1999), "Topology optimization of electrical devices using mutual energy and sensitivity", IEEE Trans. Magn., Vol. 35 No. 5, pp. 3718-3720.
Campelo, F., Watanabe, K. and Igarashi, H. (2007), "3D topology optimization using an immune algorithm", The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 26 No. 3, pp. 677-688.
Dyck, D.N. and Lowther, D.A. (1996), "Automated design of magnetic devices by optimizing material distribution", IEEE Trans. Magn., Vol. 32 No. 3, pp. 1188-1193.
Eisenstat, S.C. (1981), "Efficient implementation of a class of preconditioned conjugate gradient methods", SIAM J. Sci. Stat. Comput., Vol. 2 No. 1, pp. 1-4.
Farmer, J.D., Packard, N. and Perelson, A. (1986), "The immune system, adaptation and machine learning", Physica D: Nonlinear Phenomena, Vol. 2 No. 1, pp. 187-204.
Hesser, J. and Männer, R. (1991), "Towards an optimal mutation probability for genetic algorithms", in Schwefel, H.-P. and Männer, R. (Eds), Parallel Problem Solving from Nature, Vol. 496, Springer, Berlin Heidelberg, pp. 23-32.
Holland, J. (1975), Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, MI.
Kometani, H., Sakabe, S. and Kameari, A. (2000), "3-D analysis of induction motor with skewed slots using regular coupling mesh", IEEE Trans. Magn., Vol. 36 No. 4, pp. 1769-1773.

Labbé, T. and Dehez, B. (2010), "Convexity-oriented mapping method for the topology optimization of electromagnetic devices composed of iron and coils", IEEE Trans. Magn., Vol. 46 No. 5, pp. 1177-1185.
Okamoto, Y., Tominaga, Y. and Sato, S. (2012), "Topological design for 3-D optimization using the combination of multistep genetic algorithm with design space reduction and nonconforming mesh connection", IEEE Trans. Magn., Vol. 48 No. 2, pp. 515-518.
Okamoto, Y., Himeno, R., Ushida, K., Ahagon, A. and Fujiwara, K. (2008), "Dielectric heating analysis method with accurate rotational motion of stirrer fan using nonconforming mesh connection", IEEE Trans. Magn., Vol. 44 No. 6, pp. 806-809.
Tominaga, Y., Okamoto, Y. and Sato, S. (2011), "Optimal topology design for magnetic shielding using multistep genetic algorithm", The Papers of Joint Technical Meeting on Static Apparatus and Rotating Machinery, IEEJ Japan, SA-11-49, RM-11-62, pp. 17-22 (in Japanese).

Corresponding author
Dr Yoshifumi Okamoto can be contacted at: [email protected]

To purchase reprints of this article please e-mail: [email protected] Or visit our web site for further details: www.emeraldinsight.com/reprints


The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm


Adaptive unscented transform for uncertainty quantification in EMC large-scale systems

Moises Ferber, Christian Vollaire and Laurent Krähenbühl
Université de Lyon, Ampère CNRS UMR5005, ECL, Ecully, France, and

João Antônio Vasconcelos
UFMG – Laboratório de Computação Evolucionária, Belo Horizonte, Brazil

Abstract
Purpose – The purpose of this paper is to introduce a novel methodology for uncertainty quantification in large-scale systems. It is a non-intrusive approach based on the unscented transform (UT), but it requires far fewer simulations from an EM solver for certain models.
Design/methodology/approach – The uncertainty propagation is carried out adaptively instead of considering all input variables at once. First, a ranking of the input variables is determined; then a classical UT is applied successively, considering one more input variable each time. Convergence is reached once the most important variables have been considered.
Findings – The adaptive UT can be an efficient alternative for uncertainty propagation in large-dimensional systems.
Originality/value – The classical UT is unfeasible for large-scale systems. This paper presents a new possibility to use this stochastic collocation method for systems with a large number of input dimensions.
Keywords Sensitivity analysis, Electromagnetic compatibility, Uncertainty estimation, Power devices
Paper type Research paper

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering
Vol. 33 No. 3, 2014, pp. 914-926
© Emerald Group Publishing Limited 0332-1649
DOI 10.1108/COMPEL-10-2012-0212

Introduction
Non-intrusive uncertainty quantification methodologies are becoming a popular approach for uncertainty propagation in complex systems. Many recent papers show the application of stochastic collocation methods or polynomial chaos decomposition to estimate the output variance due to parameter uncertainty in many fields of science (van Dijk, 2002; Stievano et al., 2012; Dongbin et al., 2005; Moro et al., 2011; Gaignaire et al., 2010, 2012; Preston et al., 2009; Beddek et al., 2012; Tartakovsky and Dongbin, 2007; Osnes and Sundnes, 2012; Freitas et al., 2010; Hansen et al., 2009; Hildebrand and Gevers, 2004; Silly-Carette et al., 2009). Additionally, the recent development and application of computational tools for electromagnetic compatibility allow accurate simulations of large-scale EMC systems. Many numerical techniques of forward modeling in EMC can be mentioned, such as the finite element method, the transmission line method and the partial element equivalent circuit. For instance, a model of a power converter can take into account the intrinsic parasitic effects of components, the capacitive and inductive coupling of PCB tracks and the non-linear behavior of semiconductors. Thus, the conducted electromagnetic interference determined using such a model is very accurate. However, the computational cost of one accurate simulation is usually very high. Moreover, many parameters of EMC models are actually known only up to a certain precision. This parametric uncertainty is not taken into account by EMC solvers. One possible approach is to develop a non-intrusive methodology that uses the results

of the solvers for different input scenarios and computes statistics of the output. An example of a methodology that has been applied successfully in EMC is the unscented transform (UT) (Julier and Ulmann, 2004; de Menezes et al., 2008, 2009; Ajayi et al., 2008). Although the UT is much more efficient than Monte-Carlo simulation for computing the statistical moments (mean, variance, skewness and kurtosis), it is not appropriate for large-scale systems: the required number of simulations increases exponentially with the number of dimensions of the model. For instance, a 10-D model would require at least 2,808 simulations to estimate the statistical moments of one output variable using a fourth-order UT approximation (de Menezes et al., 2008). In this context, an adaptive UT for the uncertainty quantification of large-scale EMC models, one that exploits the dominance of some dimensions over others, is an interesting alternative to the classical UT. This novel methodology will be presented, applied to three different large-scale models and compared to the Monte-Carlo method.

UT
The UT consists of estimating the statistical moments of an output random variable using evaluations of the model at well-chosen input values called sigma points ($S_i$). A detailed description of the methodology can be found in Julier and Ulmann (2004). The mean ($\bar{G}$) and the variance ($\sigma_G^2$), for instance, are given by (1) and (2), respectively:

$$\bar{G} = E\{G(U + \hat{u})\} = w_0\, G(U) + \sum_{i=1}^{N} w_i\, G(U + S_i) \qquad (1)$$

$$\sigma_G^2 = E\{(G(U + \hat{u}) - \bar{G})^2\} = w_0\,(G(U) - \bar{G})^2 + \sum_{i=1}^{N} w_i\,(G(U + S_i) - \bar{G})^2 \qquad (2)$$

where U is a vector with the average input values, $\hat{u}$ is a vector of zero-mean random input variables, E{·} is the expectation operator and $w_i$, for i = 1, …, N, are weights. The expressions for higher-order statistical moments can be found in Julier and Ulmann (2004). There are different ways of determining the sigma points, which correspond to the weights and the values of the input parameters at which the system must be evaluated. One possible set of sigma points is given by the solution of the system of non-linear equations in (3), where k is the order of approximation:

$$w_0 = 1 - \sum_i w_i, \qquad \sum_i w_i\, S_i^k = E\{\hat{u}^k\} \qquad (3)$$

where $w_i$ (i = 0, …, N_sp) are the weights, $S_i$ (i = 1, …, N_sp) are the input values at which the system must be evaluated around the average, k is the order of the transform and N_sp is the number of sigma points. A sigma point is characterized by its weight and parameter value. The solution of (3) gives the minimum number of sigma points needed to correctly estimate the output statistical moments, and is thus the computationally cheapest.
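For a single zero-mean, unit-variance Gaussian input, (3) can be solved by hand up to order k = 4; the minimal set is the classical three-point rule sketched below (our worked example, not taken from the paper):

```python
import math

def gauss_sigma_points_1d():
    """Minimal sigma points solving (3) up to k = 4 for one zero-mean,
    unit-variance Gaussian input: S = +-sqrt(3) with w = 1/6 each, and
    w0 = 2/3 at the origin (the three-point Gauss-Hermite rule)."""
    s = math.sqrt(3.0)
    return [(2.0 / 3.0, 0.0), (1.0 / 6.0, -s), (1.0 / 6.0, s)]

def moment(points, k):
    """E{u^k} as estimated by the sigma points; the Gaussian moments
    for k = 1..4 are 0, 1, 0 and 3."""
    return sum(w * p ** k for w, p in points)
```

Only three model evaluations are needed in 1-D, which is exactly why solving (3) directly is attractive when it is tractable.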



However, as the number of dimensions of the system increases, the solution of (3) becomes complex. There is a different set of sigma points which is much simpler to determine and is given by the following expressions (Julier and Ulmann, 2004):


$$S_{i,1} = (\pm 1, \pm 1, \ldots, \pm 1)\sqrt{\frac{N_{rv}+2}{N_{rv}}}\; r, \qquad w_1 = \frac{N_{rv}^2}{2^{N_{rv}}\,(N_{rv}+2)^2}$$

$$S_{i,2} = (\pm 1, 0, \ldots, 0)\sqrt{N_{rv}+2}\; r,\;\; (0, \pm 1, \ldots, 0)\sqrt{N_{rv}+2}\; r,\; \ldots,\; (0, 0, \ldots, \pm 1)\sqrt{N_{rv}+2}\; r, \qquad w_2 = \frac{1}{(N_{rv}+2)^2} \qquad (4)$$

where Nrv is the number of random variables and r is the vector of input standard deviations. The set of sigma points given by (4) eliminates the need to solve a highly complicated system of non-linear equations, but it requires more evaluations of the system. Table I shows numerical values of $w_i$ and $S_i$ from (4) for problems with 1 to 5 random variables, to three-digit precision. Thus, the UT consists briefly of calculating a set of sigma points by (3) or (4), evaluating the system at this set, and applying (1) and (2) to obtain the average and standard deviation of the output variable. The only issue with the methodology arises for high-dimensional systems, since the number of sigma points increases very quickly with the dimensionality.

Adaptive UT
An alternative method to the classical UT is described as follows: rank the input parameters by their influence on the output variable; apply the UT considering only the most important variable, and then successively one more variable at a time, following the order of importance; and stop when convergence is reached. In a large-scale system in which only a few input parameters must be considered, this adaptive approach is very effective. Figure 1 presents an overview of the adaptive UT. There are several approaches to classifying the importance of the input parameters for a given output variable, for instance Morris (1991) and Campolongo et al. (2007). In this paper, we consider the simplest estimate, which is based on

Table I. Sigma points and weights given by (4) with σ = 1 and Nrv = 1, …, 5

Nrv   w1      Si,1                       w2      Si,2
1     0.056   (±1) × 1.732               0.111   (±1) × 1.732
2     0.063   (±1, ±1) × 1.414           0.063   {(0, ±1) and (±1, 0)} × 2.000
3     0.045   (±1, ±1, …, ±1) × 1.291    0.040   (0, …, ±1, …, 0) × 2.236
4     0.028   (±1, ±1, …, ±1) × 1.225    0.028   (0, …, ±1, …, 0) × 2.449
5     0.016   (±1, ±1, …, ±1) × 1.183    0.020   (0, …, ±1, …, 0) × 2.646

the partial derivative of the output with respect to each input and on the amount of uncertainty of each input. Thus, the expression is:

$$\mathrm{rank}_i = \left|\frac{\partial G}{\partial x_i}\right| \sigma_i \qquad (5)$$

where $\mathrm{rank}_i$ is the importance of the ith input variable, $|\partial G/\partial x_i|$ is the partial derivative of the system G with respect to the ith input and $\sigma_i$ is the ith standard deviation. In this manner, both the derivative and the amount of uncertainty are taken into account. For instance, an input variable with a strong influence on the output but very low uncertainty, or one with a weak influence on the output and strong uncertainty, may be ranked lower than an input variable with medium influence and uncertainty. The partial derivative is computed by the approximate expression:

$$\left|\frac{\partial G}{\partial x_i}\right| \approx \left|\frac{G(x_0) - G(x_0 + \Delta x_i)}{\Delta x_i}\right| \qquad (6)$$

where $x_0$ is the vector of nominal parameters and $\Delta x_i$ is a small variation of the ith parameter. In general, the value of $\Delta x_i$ does not have a significant impact on the ranking procedure, as shown in the results section for Model 1. The convergence criterion is computed through the variance of the response for the mean and the standard deviation, as follows:

$$\mathrm{var}\{\text{mean of last } N \text{ samples}\} < \varepsilon, \qquad \mathrm{var}\{\text{var of last } N \text{ samples}\} < \varepsilon \qquad (7)$$

where var{·} is the variance of a random variable, mean{·} is the average of a random variable, N is an arbitrary positive integer and ε is a given tolerance. In this paper, the parameters N and ε were fixed to 500 and 0.001; these values must be chosen by the user depending on the required accuracy. In other words, convergence is reached when the variance of the average value of the output is less than a given tolerance and the variance of the standard deviation is less than another given tolerance. This makes for a much fairer comparison with Monte Carlo: many papers set a high number of simulations for the Monte-Carlo method without checking its convergence. The adaptive UT has been implemented and tested in three different large-scale models.

[Figure 1: Overview of the adaptive unscented transform — given a large-scale EMC model Y = G(x1, …, xN) (e.g. SABER, PEEC, TLM), sort x1 to xN by influence on the output Y and set i = 1; apply the UT on Gi(x1, …, xi); if the convergence criterion is not reached, set i = i + 1 and replace G(x1, …, xN) by Gi(x1, …, xi) again; otherwise the statistical moments are computed]
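Combining the sigma-point set (4), the moment estimates (1)-(2), the ranking (5)-(6) and the loop of Figure 1, a compact sketch might read as follows. This assumes independent Gaussian inputs; function and variable names are ours, and convergence is checked here by comparing successive estimates rather than with the windowed test (7):

```python
import itertools
import math

def sigma_points(sigmas):
    """Sigma points and weights from expression (4), assuming independent
    zero-mean Gaussian inputs with standard deviations `sigmas`."""
    n = len(sigmas)
    w1 = n ** 2 / (2 ** n * (n + 2) ** 2)      # weight of each corner point
    w2 = 1.0 / (n + 2) ** 2                    # weight of each axis point
    pts = []
    r1 = math.sqrt((n + 2) / n)
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        pts.append((w1, [s * r1 * sg for s, sg in zip(signs, sigmas)]))
    r2 = math.sqrt(n + 2)
    for i in range(n):
        for s in (-1.0, 1.0):
            p = [0.0] * n
            p[i] = s * r2 * sigmas[i]
            pts.append((w2, p))
    w0 = 1.0 - sum(w for w, _ in pts)          # centre weight, from (3)
    return w0, pts

def ut_mean_var(G, means, sigmas):
    """Mean and variance of G via (1) and (2): 1 + 2^n + 2n model calls."""
    w0, pts = sigma_points(sigmas)
    g0 = G(list(means))
    ev = [(w, G([m + d for m, d in zip(means, p)])) for w, p in pts]
    avg = w0 * g0 + sum(w * g for w, g in ev)
    var = w0 * (g0 - avg) ** 2 + sum(w * (g - avg) ** 2 for w, g in ev)
    return avg, var

def rank_inputs(G, x0, sigmas, dx_frac=0.1):
    """Order the inputs by |dG/dx_i| * sigma_i, cf. (5) and (6)."""
    g0 = G(list(x0))
    scores = []
    for i, si in enumerate(sigmas):
        dx = dx_frac * (si if si else 1.0)     # finite-difference step
        x = list(x0)
        x[i] += dx
        scores.append((abs(g0 - G(x)) / dx * si, i))
    return [i for _, i in sorted(scores, reverse=True)]

def adaptive_ut(G, x0, sigmas, eps=1e-3):
    """Figure 1 loop: free one input at a time, most influential first,
    and stop once the (mean, variance) estimate moves by less than eps."""
    order = rank_inputs(G, x0, sigmas)
    prev = None
    for k in range(1, len(order) + 1):
        active = order[:k]
        def Gk(xa, active=active):
            x = list(x0)
            for j, v in zip(active, xa):
                x[j] = v                       # other inputs stay nominal
            return G(x)
        cur = ut_mean_var(Gk, [x0[j] for j in active],
                          [sigmas[j] for j in active])
        if prev is not None and max(abs(a - b) for a, b in zip(cur, prev)) < eps:
            return cur, k                      # converged with k of N inputs
        prev = cur
    return prev, len(order)
```

Since each step costs 1 + 2^k + 2k solver calls, stopping at a small k is precisely the saving the adaptive scheme targets for large N.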


The first two models are analytical formulas with different kinds of non-linearity and dimensionality. The third model is a relatively high-fidelity model of a DC-DC converter which takes into account many parasitic effects and imperfections.

Results
Model 1 – quadratic polynomial
The first model to be analyzed is given by the following expression:

$$G(\vec{x}) = 1 + \sum_{i=1}^{N_{rv}} a_i x_i^2 \qquad (8)$$

where Nrv = 100, the $x_i$ (i = 1, …, Nrv) follow independent normal probability density functions (PDFs) with standard deviations of 5 and 15 percent of the average, and the coefficients $a_i$ (i = 1, …, Nrv) are chosen so that there are five dominant input variables. Figure 2 presents the relative effect of all input variables on the output given by (5), with $\Delta x_i$ set to 0.1 of the interval size for i = 1, …, Nrv, while Figure 3 presents the effect of changing the value of $\Delta x_i$ on the ranking result, for parameters 60 to 82. Notice, in Figure 3, that parameters 75 and 77, for instance, have a higher ranking than the other parameters regardless of the $\Delta x_i$ chosen. The coefficients $a_i$ are randomly chosen to produce a generic model, but the average of five of the coefficients is 30 times the average of the rest. In this manner, a model with a subset of dominant input variables is created, shown by red arrows in Figure 2. The results of the Monte-Carlo method and of the adaptive UT are presented and compared in Figures 4 and 5 for the two uncertainty scenarios. Figures 4 and 5 show the rapid convergence of the adaptive UT when computing the average and the standard deviation of the output (Table II).

[Figure 2: Model 1 – ranking input variables (STD × Grad vs. index of variable)]

Model 2 – logarithm, fourth power and inner product
The second model to be analyzed is given by the following expression:

$$G(\vec{x}) = 20\log_{10}\!\left(\sum_{i=1}^{N_{rv}} a_i x_i + \sum_{i=1}^{N_{rv}} b_i x_i^2 + \sum_{i=1}^{N_{rv}} c_i x_i^3 + \sum_{i=1}^{N_{rv}} d_i x_i^4 + e_1\prod_{i=1}^{7} x_i + e_2\prod_{i=9}^{N_{rv}} x_i\right) \qquad (9)$$
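For concreteness, the two analytical benchmarks can be generated along these lines. The 30× dominance factor follows the paper's description of Model 1; the coefficient magnitudes for Model 2 are arbitrary placeholders of our own, since the paper does not list them:

```python
import math
import random

def make_model1(n=100, dominant=5, seed=0):
    """Quadratic benchmark (8): G(x) = 1 + sum_i a_i x_i^2, with `dominant`
    coefficients scaled up 30x so a small subset dominates the output."""
    rng = random.Random(seed)
    a = [rng.random() for _ in range(n)]
    for i in rng.sample(range(n), dominant):
        a[i] *= 30.0                            # the dominant inputs
    return lambda x: 1.0 + sum(ai * xi * xi for ai, xi in zip(a, x))

def make_model2(n=200, seed=1):
    """Benchmark (9): 20*log10 of a quartic polynomial plus two products
    over input subsets; coefficient magnitudes here are illustrative."""
    rng = random.Random(seed)
    a, b, c, d = ([rng.random() for _ in range(n)] for _ in range(4))
    e1, e2 = rng.random(), rng.random()
    def G(x):
        s = sum(ai * xi + bi * xi ** 2 + ci * xi ** 3 + di * xi ** 4
                for ai, bi, ci, di, xi in zip(a, b, c, d, x))
        s += e1 * math.prod(x[:7]) + e2 * math.prod(x[8:])
        return 20.0 * math.log10(s)
    return G
```

Both factories return plain callables, so they can be fed directly to any of the estimators discussed above.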

[Figure 3: Comparison of Δxi (5, 10, 15 and 20 percent of interval width) on the ranking result]

[Figure 4: Results for Model 1, 5 percent standard deviation — average value and standard deviation vs. number of simulations, Monte Carlo vs. adaptive UT]

[Figure 5: Results for Model 1, 15 percent standard deviation — average value and standard deviation vs. number of simulations, Monte Carlo vs. adaptive UT]

Table II. Comparison between Monte Carlo and adaptive unscented transform – Model 1

Input uncertainty (%)   Methodology    Mean       SD        No. of solver calls
5                       Monte Carlo    382.8046    9.5272   44,200
5                       Adaptive UT    382.8015    9.3268   444
15                      Monte Carlo    823.8560   75.5047   200,000
15                      Adaptive UT    822.6413   74.6990   444

where Nrv = 200, the $x_i$ (i = 1, …, Nrv) follow independent uniform PDFs and the coefficients $a_i$ (i = 1, …, Nrv) are chosen so that there are eight dominant input variables. Figure 6 presents the relative effect of all input variables on the output given by (5) (Table III and Figures 7 and 8).

Model 3
The third model to be analyzed is a model of a DC-DC power converter. Its schematic is shown in Figure 9. The input variables of this model are the voltage source, resistances, capacitances, inductances and the semiconductor parameters of the diode and MOSFET. The output variable is the FFT of the voltage across the resistor R3, in dB at 20 kHz, in the line impedance stabilization network. This output is a measure of the maximum conducted

[Figure 6: Model 2 – ranking input variables (STD × Grad vs. index of variable)]

Table III. Comparison between Monte Carlo and adaptive unscented transform – Model 2

Input uncertainty (%)   Methodology    Mean      SD       No. of solver calls
20                      Monte Carlo    76.7156   0.6776   10,000
20                      Adaptive UT    76.5069   0.6882   2,557
40                      Monte Carlo    78.4996   1.4039   10,000
40                      Adaptive UT    77.7902   1.5621   2,557

[Figure 7: Results for Model 2, 20 percent interval — average value and standard deviation vs. number of simulations, Monte Carlo vs. adaptive UT]

EMI. Thus, this problem is an example of a parametric uncertainty study of a power converter for the assessment of conducted EMI. The characteristics of the model are as follows: Nrv = 45, the $x_i$ (i = 1, …, Nrv) follow independent uniform PDFs, and there are eight dominant input variables. Figure 10 presents the relative effect of all input variables on the output given by (5). The relevant input variables of Figure 10 are described in more detail in Table IV. It can be seen that the parasitic effects of the converter have a low impact on the conducted EMI at 20 kHz compared with the nominal component values. The results presented in Figure 11 show that the convergence of the adaptive UT for the average value was achieved considerably faster than with the traditional Monte-Carlo method. In Figure 12, the improvement brought by the adaptive UT is clear, especially for the assessment of the standard deviation. The results of Table V show good agreement between the two methodologies and encourage further study of the adaptive UT.

Figure 8. Results for Model 2, 40 percent interval

Figure 9. Model 3 – power converter schematic


Table IV. List of important components

Index   Parameter                 Mean
1       Input voltage             200 V
2       Inductance, L1            47.68 mH
3       Inductance, L2            47.68 mH
4       Capacitance, C3           270.65 nF
5       Resistance, R3            50 Ω
6       Resistance, R4            50 Ω
7       Capacitance, C4           270.65 nF
8       C decoupling, Cdec1       899.35 nF
9       C decoupling, Cdec2       899.35 nF
10      Load resistance, R11      132.97 Ω

Figure 10. Model 3 – sensitivity analysis


Figure 11. Results for Model 3, 10 percent interval

[Figure 12: Results for Model 3, 30 percent interval — average value and standard deviation vs. number of simulations, Monte Carlo vs. adaptive UT]

Table V. Comparison between Monte Carlo and adaptive unscented transform – Model 3

Input uncertainty (%)   Methodology    Mean (dB)   SD       No. of solver calls
10                      Monte Carlo    2.9224      0.9895   1,000
10                      Adaptive UT    2.9371      0.9605   389
30                      Monte Carlo    2.9169      2.9849   45,000
30                      Adaptive UT    2.8436      3.0328   701
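The Monte-Carlo figures in Tables II-V were obtained with the stopping rule (7) rather than a fixed budget; a minimal sketch of such a baseline might look as follows (Gaussian sampling is assumed here, although some of the paper's models use uniform PDFs, and the seed is fixed only for repeatability):

```python
import random

def stable(hist, window=500, eps=1e-3):
    """Criterion (7): variance of the last `window` estimates below eps."""
    tail = hist[-window:]
    mu = sum(tail) / len(tail)
    return sum((h - mu) ** 2 for h in tail) / len(tail) < eps

def monte_carlo(G, means, sigmas, window=500, eps=1e-3, max_iter=20000):
    """Monte-Carlo estimate of the mean and variance of G, stopped by (7)."""
    rng = random.Random(1)
    s1 = s2 = 0.0
    mean_hist, var_hist = [], []
    for n in range(1, max_iter + 1):
        y = G([m + rng.gauss(0.0, s) for m, s in zip(means, sigmas)])
        s1 += y
        s2 += y * y
        m = s1 / n                     # running mean
        v = s2 / n - m * m             # running (biased) variance
        mean_hist.append(m)
        var_hist.append(v)
        if n >= 2 * window and stable(mean_hist, window, eps) \
                           and stable(var_hist, window, eps):
            break
    return m, v, n
```

Counting the returned n against the adaptive UT's solver calls reproduces the kind of comparison reported in the tables.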

Conclusions
An adaptive collocation method has been presented and successfully tested on three different large-scale models, whereas most traditional methodologies for uncertainty quantification can be unfeasible for high-dimensional or computationally expensive models. It showed a much faster convergence rate than the Monte-Carlo method for similar accuracy. Moreover, the comparison with Monte Carlo was made for a given accuracy, and is thus fairer than in other papers, where one generally sets a very high number of simulations for Monte Carlo even though convergence is reached much earlier. The first two models were analytical and were used to illustrate the general idea of the methodology. The third model was a DC-DC converter with uncertainty in all its parameters. The adaptive UT turned out to be a good alternative for a fast assessment of the average and standard deviation of the conducted EMI.

References
Ajayi, A., Ingrey, P., Sewell, P. and Christopoulos, C. (2008), "Direct computation of statistical variations in EM problems", IEEE Trans. Electromagn. Compat., Vol. 50 No. 2, pp. 325-332.
Beddek, K., Clenet, S., Moreau, O., Costan, V., Le Menach, Y. and Benabou, A. (2012), "Adaptive method for non-intrusive spectral projection – application on a stochastic eddy current NDT problem", IEEE Transactions on Magnetics, Vol. 48 No. 2, pp. 759-762.
Campolongo, F., Cariboni, J. and Saltelli, A. (2007), "An effective screening design for sensitivity analysis of large models", Environmental Modelling and Software, Vol. 22 No. 10, pp. 1509-1518.
de Menezes, L., Ajayi, A., Christopoulos, C., Sewell, P. and Borges, G.A. (2008), "Efficient computation of stochastic electromagnetic problems using unscented transforms", IET Science, Measurement & Technology, Vol. 2 No. 2, pp. 88-95.
de Menezes, L.R.A.X., Thomas, D. and Christopoulos, C. (2009), "Statistics of the shielding effectiveness of cabinets", Proceedings ESA Workshop on Aerospace EMC, Florence, April 1, 6pp.
Dongbin, X., Kevrekidis, I.G. and Ghanem, R. (2005), "An equation-free, multiscale approach to uncertainty quantification", Computing in Science & Engineering, Vol. 7 No. 3, pp. 16-23.
Freitas, S.C., Trigo, I.F., Bioucas-Dias, J.M. and Gottsche, F.-M. (2010), "Quantifying the uncertainty of land surface temperature retrievals from SEVIRI/Meteosat", IEEE Transactions on Geoscience and Remote Sensing, Vol. 48 No. 1, pp. 523-534.
Gaignaire, R., Scorretti, R., Sabariego, R.V. and Geuzaine, C. (2012), "Stochastic uncertainty quantification of eddy currents in the human body by polynomial chaos decomposition", IEEE Transactions on Magnetics, Vol. 48 No. 2, pp. 451-454.
Gaignaire, R., Crevecoeur, G., Dupré, L., Sabariego, R.V., Dular, P. and Geuzaine, C. (2010), "Stochastic uncertainty quantification of the conductivity in EEG source analysis by using polynomial chaos decomposition", IEEE Transactions on Magnetics, Vol. 46 No. 8, pp. 3457-3460.
Hansen, N., Niederberger, A.S.P., Guzzella, L. and Koumoutsakos, P. (2009), "A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion", IEEE Transactions on Evolutionary Computation, Vol. 13 No. 1, pp. 180-197.
Hildebrand, R. and Gevers, M. (2004), "Quantification of the variance of estimated transfer functions in the presence of undermodeling", IEEE Transactions on Automatic Control, Vol. 49 No. 8, pp. 1345-1350.
Julier, S.L. and Ulmann, L.K. (2004), "Unscented filtering and non-linear estimation", Proc. IEEE, Vol. 92 No. 3, pp. 401-422.
Moro, E.A., Todd, M.D. and Puckett, A.D. (2011), "Experimental validation and uncertainty quantification of a single-mode optical fiber transmission model", Journal of Lightwave Technology, Vol. 29 No. 6, pp. 856-863.
Morris, M.D. (1991), "Factorial sampling plans for preliminary computational experiments", Technometrics, Vol. 33 No. 2, pp. 161-174.
Osnes, H. and Sundnes, J. (2012), "Uncertainty analysis of ventricular mechanics using the probabilistic collocation method", IEEE Transactions on Biomedical Engineering, Vol. 59 No. 8, pp. 2171-2179.
Preston, J.S., Tasdizen, T., Terry, C.M., Cheung, A.K. and Kirby, R.M. (2009), "Using the stochastic collocation method for the uncertainty quantification of drug concentration due to depot shape variability", IEEE Transactions on Biomedical Engineering, Vol. 56 No. 3, pp. 609-620.
Silly-Carette, J., Lautru, D., Wong, M.-F., Gati, A., Wiart, J. and Fouad Hanna, V. (2009), "Variability on the propagation of a plane wave using stochastic collocation methods in a bio electromagnetic application", IEEE Microwave and Wireless Components Letters, Vol. 19 No. 4, pp. 185-187.
Stievano, I.S., Manfredi, P. and Canavero, F.G. (2011), "Stochastic analysis of multiconductor cables and interconnects", IEEE Transactions on Electromagnetic Compatibility, Vol. 53 No. 2, pp. 501-507.

Uncertainty quantification in EMC 925

COMPEL 33,3

Tartakovsky, D.M. and Dongbin, Xiu (2007), “Guest editors’ introduction: stochastic modeling of complex systems”, Computing in Science & Engineering, Vol. 9 No. 2, pp. 8-9.

926

Corresponding author Moises Ferber can be contacted at: [email protected]

van Dijk, N. (2002), “Numerical tools for simulation of radiated emission testing and its application in uncertainty studies”, IEEE Transactions on Electromagnetic Compatibility, Vol. 44 No. 3, pp 466-470.

To purchase reprints of this article please e-mail: [email protected] Or visit our web site for further details: www.emeraldinsight.com/reprints

The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm

Ant colony optimization for the topological design of interior permanent magnet (IPM) machines


Lucas S. Batista, Felipe Campelo, Frederico G. Guimarães and Jaime A. Ramírez

Departamento de Engenharia Elétrica, Universidade Federal de Minas Gerais, Belo Horizonte, Brasil, and

Min Li and David A. Lowther Department of Electrical and Computer Engineering, McGill University, Montreal, Canada Abstract Purpose – The purpose of this paper is to apply an ant colony optimization approach for the solution of the topological design of interior permanent magnet (IPM) machines. Design/methodology/approach – The IPM motor design domain is discretized into a suitable equivalent graph representation and an Ant System (AS) algorithm is employed to achieve an efficient distribution of materials into this graph. Findings – The single-objective problems associated with the maximization of the torque and with the maximization of the shape smoothness of the IPM are investigated. The rotor of the device is discretized into a 9 × 18 grid in both cases, and three different materials are considered: air, iron and permanent magnet. Research limitations/implications – The graph representation used enables the solution of topological design problems with an arbitrary number of materials, which is relevant for 2D and 3D problems. Originality/value – From the numerical experiments, the AS algorithm was able to achieve reasonable shapes and torque values for both design problems. The results show the relevance of the mechanism for multi-domain topology optimization of electromagnetic devices. Keywords Topology optimization, Ant colony optimization, IPM motor design Paper type Research paper

1. Introduction Interior permanent magnet (IPM) motors have become popular in recent years for applications that require variable speed and torque, such as the design of electrical machines in hybrid electric vehicles (HEVs). Due to the current price hike of fuel, HEVs emerge as an effective alternative to conventional cars equipped with internal combustion engines, making the design of highly efficient and reliable electrical machines one of the main challenges for the HEV industry. Among the different types of electrical machines, IPM motors are the most suitable for use in the traction drive of an HEV, since they present the advantages of high-torque density and efficiency and, This work was supported by the Foreign Affairs and International Trade DFAIT, Canada, and by the following Brazilian agencies: State of Minas Gerais Research Foundation: FAPEMIG (Grants Pronex: TEC 01075/09, Pronem: 04611/10); Coordination for the Improvement of Higher Level Personnel: CAPES; National Council of Scientific and Technologic Development: CNPq (Grants 306910/2006-3, 141819/2009-0, 475763/2012-2, 30506/2010-2) CNPq Grant no. 472769/2013-8.

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 927-940 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-08-2013-0285


in contrast to surface mount permanent magnet (PM) motors, create some areas of saturation in the rotor which contribute a reluctance torque component to the total output torque. This feature extends the constant-power speed range of the motor and improves the performance in high-speed operation (Rahman et al., 2012; Rahman, 2007). In recent years, the finite element method (FEM) has been applied to the analysis and design of IPM motors to provide accurate performance estimation. In many works, FEM analyses are combined with metaheuristics-based optimization procedures or analytical approaches to look for efficient IPM designs (Bianchi and Canova, 2002; Qinghua et al., 2002; Sim et al., 1997). In others, the performances of various IPM rotor structures with flux barriers are investigated (Stumberger et al., 2008). In 2000, Ohnishi and Takahashi built a FEM model considering the rotation of the motor, and designed the motor using an experimental design method (Ohnishi and Takahashi, 2000). A contribution where the shape of the surface of the iron core in an IPM motor was optimized to reduce the cogging torque, using a continuum design sensitivity analysis approach, can be found in Kim et al. (2003). While in Kim et al. (2003) only the surface of the rotor is allowed to change within a small range, Takahashi et al. (2010) proposed a new design scheme using the ON/OFF topology optimization approach to find new magnetic circuits of IPM motors. In the latter, two designs are actually presented, one obtained through surface optimization and another representing the design of the field barrier using the topology optimization approach. However, the structures of motors resulting from conventional topology optimization do not present smooth surfaces and, therefore, tend not to be practical for manufacture (Campelo et al., 2008).
More recently, an ant colony-based strategy was proposed with a graph representation for the topology optimization problem that is adequate for two-dimensional (2D) and 3D design spaces with an arbitrary number of materials (Batista et al., 2011). The same mechanism is employed in this paper for the topological design of IPM motors, albeit with a few modifications in the optimization algorithm. The design of an IPM machine is a complicated and multiphysics task, involving mechanical, thermal, and electromagnetic issues. Moreover, the designer has to take into account specifications related to the size and the weight of the machine, the desired output torque, and the cost of the PM. Note also that the proposed strategy could be useful for other topological optimization problems; for instance, an interesting application regards the optimum design of inverse electromagnetic problems (Rudnicki and Wiak, 2003). This paper focusses on the topological design of the rotor of an IPM motor of specific dimensions. The design region of the rotor is discretized and each cell can be assigned a different material: air, steel (M19 stainless steel) or PM (NdFeB). With this discretized formulation, the topology design problem becomes a combinatorial optimization problem (COP), which can be efficiently solved by using heuristic and metaheuristic methods. In this work, we define the material distribution problem as equivalent to finding an optimal path in a graph, making ant colony algorithms suitable as a design technique (Batista et al., 2011; Dorigo et al., 2006; Dorigo and Blum, 2005). We employ two different formulations for the topology optimization of the IPM motor in our experiments. The first one involves the maximization of the torque, subject to a constraint on the volume of PM used. The second consists of the maximization of the smoothness subject to torque and volume constraints, using a finer discretization.
The obtained topologies illustrate the applicability of the ACO method for complex topology design. The algorithm was able to find viable designs for the IPM motor. The resulting topology can be further refined by using any shape optimization method.

This work is organized as follows: Section 2 presents the mathematical background of topology optimization problems and shape smoothness; Section 3 provides an overview of the ACO method; Section 4 describes the adaptation of the Ant System (AS) for topology optimization; Section 5 details the IPM motor design problem; Section 6 presents the numerical results obtained with two different formulations for the topology optimization problem, showing two different and interesting configurations for the design of the IPM motor; finally, Section 7 summarizes our conclusions.

2. Mathematical background

2.1 Topology optimization problem
Let the design space, or design domain, Ω ⊂ R^d be a finite, bounded subset of the d-dimensional geometric space, with d = 2 or 3, and let c ∈ Ω denote a point within this subspace. Also, let S = {1, …, n} denote the discrete set of n possible states for each point c, and x(Ω): R^d → S^|Ω| be a function that attributes a state e ∈ S to each point within the design space. By considering the state of a given point as the material properties at that point, it is possible to define the general topology optimization problem as:

\[ \begin{aligned} \text{Find:} \quad & x^{*}(\Omega) = \arg\max_{x} \; f(x(\Omega)) \\ \text{subject to:} \quad & x(\Omega) \in S^{|\Omega|} \ \text{and the problem constraints} \end{aligned} \tag{1} \]

where f(x(Ω)): S^|Ω| → R is the performance functional that is to be maximized, and the problem constraints are mathematical representations of the system requirements and limitations. In other words, the problem of topology optimization is essentially a problem of finding the optimal distribution of an arbitrary finite number of materials within a bounded subspace of R^d.

2.2 Shape smoothness
Despite the potential for the discovery of innovative solutions presented by topology optimization approaches, computational methods employed for the solution of the problem defined in (1) are prone to yield shapes that often present non-smooth characteristics, such as checkerboard patterns, that make them unfeasible from a manufacturing point of view (Campelo, 2009). This is particularly true for methods that employ space discretization, such as the method that will be used later in this work. To address this drawback, we present the derivation of a coefficient that quantifies the smoothness of a given shape. This coefficient can then be used to drive the optimization algorithms toward more regular shapes, which are in principle easier to manufacture and manipulate. For the derivation of this coefficient, we define smoothness as the ratio between the area of non-background material (i.e. materials other than air) and the total length of interface between different materials in a discretized material matrix. This definition considers shapes with fewer holes, fewer disjoint material blocks, and more regular boundaries as smoother, and therefore as more desirable from a manufacturing perspective. The definitions that follow are for the 2D case only, but their extension to 3D domains should be straightforward.

2.2.1 Smoothness coefficient. Consider an integer matrix M, with n_r rows and n_c columns, representing the distribution of materials within a given space. The value of


any given element m(i, j) ∈ {0, 1, …, K−1} represents one of a number of possible materials, with 0 representing a background filling material, e.g. air. Any given element m_{i,j} is considered as having sides measuring 1/n_r × 1/n_c, an area of 1/(n_r n_c) and a total perimeter of 2/n_r + 2/n_c, which means that M is considered as having unit area, length and width. The total area of the kth material is given by:

\[ A_k = \frac{1}{n_r n_c} \sum_{i=1}^{n_r} \sum_{j=1}^{n_c} g^{k}_{ij} \tag{2} \]

where g^k_{ij} is an indicator variable that assumes value 1 if m(i, j) = k, and 0 otherwise. The total area of non-background material is given by A(M) = Σ_{k=1}^{K−1} A_k. The boundary perimeter of element m_{i,j} is given by the total length of its edges that are interfaces with elements containing different materials. This implies that the total interface length of a given matrix, S_b(M), is given by the sum of the boundary perimeter lengths of all its elements (corrected so as not to count the same interface twice). From the above definitions, the smoothness coefficient of a 2D matrix of materials M can be defined as the ratio between the total area of non-background material (i.e. materials other than air) and the total length of interface between different materials. This definition can be summarized as:

\[ \nu(M) = 4\,\frac{A(M)}{S_b(M)} \tag{3} \]

where A(M) and S_b(M) represent the total area and interface boundary length, respectively. In this definition a full matrix (i.e. no air) is considered as having unit area, and the multiplication by four forces this coefficient to assume values between 0 (for the limit case of an infinitely fine discretization with a checkerboard pattern of materials) and 1 (for a full matrix composed of only one kind of material). To avoid some problems when including this coefficient in optimization procedures, the empty matrix (all air) is defined as having ν(M) = 0.

3. Ant colony optimization (ACO)
The ACO method was proposed by Dorigo et al. for the solution of hard COPs (Dorigo et al., 1991, 1996, 2006; Dorigo and Blum, 2005; Dorigo, 1992). As a metaheuristic, the ACO algorithm is often used to find good, while not necessarily globally optimal, solutions to hard COPs in reasonable computational time (Blum and Roli, 2003; Glover and Kochenberger, 2003). The mechanisms of the ACO draw inspiration from the collective and coordinated foraging behavior observed in many ant species, in which given paths or trails become more attractive through a positive feedback loop: the probability of an ant choosing a given path increases with the number of ants that have previously chosen the same track. Deneubourg et al. (1990) have shown that indirect communication among ants via pheromone trails enables them to find shortest paths between their nest and food sources. This characteristic has inspired the definition and use of artificial ant colonies to search for approximate solutions to hard COPs. Several ACO algorithms have been proposed in the literature, including the original one, named AS (Dorigo et al., 1991, 1996; Dorigo, 1992), and the two most successful

variants, called MAX-MIN AS (Stützle and Hoos, 2000) and Ant Colony System (Dorigo and Gambardella, 1997a, b; Gambardella and Dorigo, 1996). The AS algorithm is discussed next.

3.1 Ant system (AS)
The AS (Dorigo et al., 1991, 1996; Dorigo, 1992) was the first ACO algorithm presented in the literature. In order to show its effectiveness for the solution of COPs, the traveling salesman problem (TSP) was considered as a concrete example. Essentially, a TSP can be modeled as a graph G = (N, E), where N denotes the set of cities (nodes) and E represents the set of connections (edges) between cities. The main goal is to find the minimal-length closed tour that visits each city once. To solve this problem, each ant is considered a simple agent that obeys the following rules:

- the next city to go to is chosen with a probability that is a function of the amount of pheromone laid on the connecting edge and of the city distance;
- aiming to get a Hamiltonian cycle, transitions to already visited cities are forbidden until a tour has been completed (a tabu list is used to ensure this constraint); and
- when a tour is completed, the ant lays pheromone on each edge visited.

During the construction of a solution, ants select the next city to be visited based on a stochastic mechanism. In short, the probability p^k_{ij}(t) with which an ant k, at time t, goes from city i to city j is calculated as:

\[ p^{k}_{ij}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}(t)]^{\beta}}{\sum_{h \notin tabu_k} [\tau_{ih}(t)]^{\alpha}\,[\eta_{ih}(t)]^{\beta}} & \text{if } j \notin tabu_k \\[2ex] 0 & \text{if } j \in tabu_k \end{cases} \tag{4} \]

where τ_{ij}(t) is the intensity of pheromone laid on connection (i, j) at time t, η_{ij}(t) is the heuristic information of connection (i, j) at time t, and tabu_k represents the set of visited cities for the kth ant. The parameters α and β control the relative importance between the intensity of pheromone and the heuristic information, which (in the case of the TSP) is given by η_{ij} = 1/d_{ij}, where d_{ij} denotes the Euclidean distance between the ith and the jth cities. Since this heuristic information is inversely proportional to the distance between the cities (i, j), this parameter is also called visibility information. Let m and n be the number of ants and cities, respectively. Supposing that all the m ants move to their next city in one iteration, then at every n iterations of the algorithm, which represent a cycle, each ant has completed a tour. At this point, the trail intensity is updated according to:

\[ \tau_{ij}(t + n) = (1 - \rho)\,\tau_{ij}(t) + \sum_{k=1}^{m} \Delta\tau^{k}_{ij} \tag{5} \]

where ρ represents the pheromone evaporation rate[1], and Δτ^k_{ij} denotes the quantity of pheromone laid on edge (i, j) by the kth ant between time t and t + n:

\[ \Delta\tau^{k}_{ij} = \begin{cases} Q/L_k & \text{if the } k\text{th ant uses edge } (i, j) \text{ in its tour between time } t \text{ and } t + n \\ 0 & \text{otherwise} \end{cases} \tag{6} \]


where Q is a constant and L_k is the cost of the tour found by the kth ant. Next, the ACO concepts presented here are adapted to solve structural topology optimization problems.

4. AS approach for topology optimization
The ACO metaheuristic used in this paper has some similarities with the one presented in Batista et al. (2011), and is described in this section. The main contribution of our AS method lies in the way the topology optimization problem is mapped onto a discretized design space. The graph representation, shown in Figure 1, illustrates the idea for a two-dimensional model with an arbitrary number of possible states per cell. The design domain Ω ⊂ R^d is discretized into cells c_{ij}, i = 1, …, p and j = 1, …, q, and each cell of this topological matrix is represented as a transition between nodes in a directed graph, with each connecting edge corresponding to a given element property (material) e_u ∈ S that can be assumed by the cell. Based on this representation, an AS algorithm is used to find an optimal tour from the first to the last cell. In this way, each candidate solution (ant) will represent a path, i.e. a valid topology, which is equivalent to a given distribution of materials. The resulting shape is then evaluated according to a given performance function, and the internal parameters of the AS (e.g. pheromone intensities and heuristic information) are updated according to the quality of the solutions. The main stages of the AS algorithm are highlighted next.
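To make the mapping concrete, the sketch below (Python; the function names are mine, not from the paper) decodes an ant's tour — one material choice per cell, taken in a fixed row-major cell order — into a p × q material matrix, and scores it with the smoothness coefficient ν(M) of equation (3), with 0 as the background (air) state.

```python
def decode_tour(choices, p, q):
    """Decode an ant's tour (one chosen edge/material per visited cell,
    in row-major cell order) into a p x q material matrix."""
    assert len(choices) == p * q
    return [[choices[i * q + j] for j in range(q)] for i in range(p)]

def smoothness(M):
    """Smoothness coefficient nu(M), eq. (3): 4 * (non-background area) /
    (total internal interface length), on a unit-square domain.  A full
    single-material matrix is defined to have nu = 1, the empty matrix 0."""
    nr, nc = len(M), len(M[0])
    area = sum(1 for row in M for cell in row if cell != 0) / (nr * nc)
    boundary = 0.0
    for i in range(nr):
        for j in range(nc):
            if i + 1 < nr and M[i][j] != M[i + 1][j]:
                boundary += 1.0 / nc  # horizontal interface, length = cell width
            if j + 1 < nc and M[i][j] != M[i][j + 1]:
                boundary += 1.0 / nr  # vertical interface, length = cell height
    if boundary == 0.0:
        return 1.0 if area > 0 else 0.0
    return 4.0 * area / boundary
```

For a 3 × 3 matrix with a single iron cell in the centre, the area is 1/9 and each of the four interfaces measures 1/3, so ν = 4·(1/9)/(4/3) = 1/3.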

[Figure 1 about here. Caption: Graph representation of the design space. Note: each connecting edge e_u between nodes corresponds to a specific material property that can be assumed by the cell c_{ij}.]
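Reading the figure as code: each cell's outgoing edges carry one pheromone value per material, and an ant picks which edge to traverse with a pheromone-weighted roulette wheel — the node transition rule that Section 4.2 formalizes as equation (7). A minimal sketch, with illustrative names:

```python
import random

def choose_material(tau_cell, alpha=1.0, rng=random):
    """Pick a material index for one cell with probability proportional
    to tau_cell[u] ** alpha -- a roulette wheel over the cell's edges."""
    weights = [t ** alpha for t in tau_cell]
    return rng.choices(range(len(tau_cell)), weights=weights)[0]

def build_topology(tau, alpha=1.0, rng=random):
    """One ant's tour: traverse the cells in order, choosing one edge
    (material) per cell; the result is a complete material matrix."""
    return [[choose_material(tau[i][j], alpha, rng)
             for j in range(len(tau[0]))]
            for i in range(len(tau))]
```

With all the pheromone of every cell concentrated on a single edge, every ant reproduces the same topology, which is exactly the stagnation condition used later as a stop criterion.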

4.1 Initialization of the AS algorithm
The initial pheromone laid on all connecting edges is set to a small value, τ^{e_u}_{ij}(t = 0) = 1/n, in which n is the number of different materials and e_u, u = 1, …, n, a specific element (material).

4.2 Node transition rule
Each ant begins its trip from the first cell and performs a tour until the last cell, according to a node transition rule. The transition probability p^{(k,e_u)}_{ij}(t) inside cell (i, j), for the kth ant at time t, using the connecting edge e_u, is stated as:

\[ p^{(k,e_u)}_{ij}(t) = \left[\tau^{e_u}_{ij}(t)\right]^{\alpha} \Bigg/ \sum_{u=1}^{n} \left[\tau^{e_u}_{ij}(t)\right]^{\alpha} \tag{7} \]

Notice that the heuristic information (visibility) is not considered in (7), which means that the material property of each cell (i, j) is selected probabilistically, using a simple roulette-wheel approach based on the pheromone trail only.

4.3 Pheromone updating rule
Since the topological matrix has p × q cells, and all the m ants pass through each cell in one iteration, each ant completes a tour every pq steps (also referred to as a cycle). At this moment the trail intensity is updated as:

\[ \tau^{e_u}_{ij}(t + pq) = (1 - \rho)\,\tau^{e_u}_{ij}(t) + \sum_{k=1}^{m} \Delta\tau^{(k,e_u)}_{ij} \tag{8} \]

in which Δτ^{(k,e_u)}_{ij} denotes the quantity of pheromone laid on cell (i, j), edge e_u, by the kth ant during the current cycle:

\[ \Delta\tau^{(k,e_u)}_{ij} = \begin{cases} \Theta(Q, L_k) & \text{if the } k\text{th ant uses edge } e_u \text{ in its tour between time } t \text{ and } t + pq \\ 0 & \text{otherwise} \end{cases} \tag{9} \]

where Θ(·,·) is a quality measure function, Q is a constant, and L_k is the cost of the solution achieved by the kth ant.

4.4 Visibility information
Some information about the distribution of materials of a generated topology can be used to enforce the smoothness of its shape. In this work, the material distribution matrix of a topology is used to estimate new transition probability values for all its edges, such that the original topology may be improved. Let E^k_{ij}(t) = (e_{(x,y)})_{x,y}, x ∈ {i−1, …, i+1} and y ∈ {j−1, …, j+1}, be the local material distribution around cell (i, j) of the candidate topology represented by the kth ant at time t, and e_{(x,y)} be the element (material) assumed by the cell (x, y). The visibility η^{(k,e_u)}_{ij} inside cell (i, j), through the edge e_u, can be given by:

\[ \eta^{(k,e_u)}_{ij}(t) = \frac{\left|\{z \in E^{k}_{ij}(t) : z = e_u\}\right| - \left|\{z = e_{(i,j)} : z = e_u\}\right|}{\left|E^{k}_{ij}(t)\right| - 1} + 1 \tag{10} \]


where the first part of η^{(k,e_u)}_{ij} denotes the percentage of cells in E^k_{ij} that assume the material property e_u. The new transition probability values are then calculated using (11), and are employed to probabilistically rearrange the material distribution of the topology represented by the kth ant:

\[ p^{(k,e_u)}_{ij}(t) = \frac{\left[\tau^{e_u}_{ij}(t)\right]^{\alpha} \left[\eta^{(k,e_u)}_{ij}(t)\right]^{\beta}}{\sum_{u=1}^{n} \left[\tau^{e_u}_{ij}(t)\right]^{\alpha} \left[\eta^{(k,e_u)}_{ij}(t)\right]^{\beta}} \tag{11} \]

Eventually, when applied to border cells, the mask E^k_{ij} will extend beyond the borders of the topology. In this case, the missing cells are padded with the material property of the local outside topology domain, e.g. air. To avoid premature convergence, this operator is performed on only a few randomly selected candidate solutions of the population (0.05 × m in this work).

4.5 Filtering strategy
A filtering mechanism is applied to the pheromone matrix to reduce the material-preference variation among adjacent cells, thus enabling smooth topology estimates (Campelo et al., 2008). In short, a simple averaging filtering strategy is performed, such that each pheromone intensity value τ^{e_u}_{ij}, associated with the cell (i, j) and connecting edge e_u, is replaced by the mean value of its corresponding neighbors, including itself. This has the effect of eliminating pheromone trail intensity values which are unrepresentative of their surroundings. Let T^{e_u}_{ij}(t) = (τ^{e_u}_{(x,y)})_{x,y}, x ∈ {i−1, …, i+1} and y ∈ {j−1, …, j+1}, be the pheromone trail intensities around cell (i, j) and associated with the connecting edge e_u. Also, let K = (1/9) U_{3×3}, with U a square unit matrix, be the averaging kernel used in the mean filtering. Then, computing the straightforward convolution of T^{e_u}_{ij}(t) with the kernel K carries out the mean filtering process for the trail intensity τ^{e_u}_{ij}. This procedure is performed for all the connecting edges of all cells of the pheromone matrix. As previously, when border cells are considered, the mask T^{e_u}_{ij}(t) will extend beyond the borders of the topology; in this case, the pheromone intensity τ^{e_u}_{ij} associated with each connecting edge e_u of a missing cell (i, j) is set to represent the material property of the local outside topology domain. Since this artificial change in the pheromone intensities can exert a strong influence on the ant colony behavior, the filtering mechanism is applied only at every 20 cycles of the algorithm.
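The two smoothing operators of Sections 4.4 and 4.5 — the neighbourhood visibility of equation (10) and the 3 × 3 mean filter with kernel K = (1/9)U — can be sketched as follows (Python; how the "outside" material is encoded as a padding value is my assumption):

```python
def visibility(M, i, j, material, outside=0):
    """Eq. (10): share of the 3x3 neighbourhood of cell (i, j), excluding
    the cell itself, that carries `material`, plus one.  Mask cells that
    fall beyond the borders are padded with `outside` (air here)."""
    nr, nc = len(M), len(M[0])
    neigh = [M[x][y] if 0 <= x < nr and 0 <= y < nc else outside
             for x in range(i - 1, i + 2) for y in range(j - 1, j + 2)]
    same = sum(1 for z in neigh if z == material)
    if M[i][j] == material:      # remove the centre cell from the count
        same -= 1
    return same / (len(neigh) - 1) + 1.0

def filter_pheromone(tau, outside_tau):
    """Section 4.5: replace each intensity tau[i][j][u] by the mean of its
    3x3 neighbourhood (kernel K = (1/9) * U_{3x3}); mask cells beyond the
    borders take the intensities `outside_tau` of the outside material."""
    nr, nc, nm = len(tau), len(tau[0]), len(tau[0][0])
    out = [[[0.0] * nm for _ in range(nc)] for _ in range(nr)]
    for i in range(nr):
        for j in range(nc):
            for u in range(nm):
                acc = 0.0
                for x in range(i - 1, i + 2):
                    for y in range(j - 1, j + 2):
                        acc += (tau[x][y][u]
                                if 0 <= x < nr and 0 <= y < nc
                                else outside_tau[u])
                out[i][j][u] = acc / 9.0
    return out
```

On a uniform all-iron matrix, the visibility of iron at an interior cell is 8/8 + 1 = 2, and a uniform pheromone field is left unchanged by the filter, as expected of a mean filter.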
4.6 Elitism approach
In general, an elitism operator aims to preserve the best solutions achieved so far by a method. In the AS algorithm used in this paper, the ant that represents the most suitable topology of the current cycle is stored and, if better than the best solution of the next cycle, replaces the worst topology of the new population, thus maintaining the convergence of the method.

4.7 Stop criteria
The search process iterates until a predefined maximum number of cycles nc_max is reached, or until stagnation behavior is observed, i.e. all ants making the same tour.

The pseudocode in Algorithm 1 summarizes the steps discussed above.

Algorithm 1: Ant system for topology optimization
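As an illustrative outline of how the stages of Sections 4.1-4.7 fit together (a sketch under my own naming, not the authors' Algorithm 1 listing; a toy cost function stands in for the finite element evaluation, and the elitism is simplified to replacing each cycle's worst ant with the best-so-far solution):

```python
import random

def ant_system_topology(nr, nc, n_materials, cost, n_ants=50, rho=0.8,
                        q=100.0, alpha=1.0, nc_max=150, seed=1):
    """Outline of the AS of Section 4.  `cost` maps a material matrix to
    a positive cost L_k; deposits use Theta(Q, L_k) = Q / L_k."""
    rng = random.Random(seed)
    # 4.1: initial pheromone 1/n on every edge of every cell
    tau = [[[1.0 / n_materials] * n_materials for _ in range(nc)]
           for _ in range(nr)]
    best, best_cost = None, float("inf")
    for _cycle in range(nc_max):
        colony = []
        for _ in range(n_ants):
            # 4.2: a tour is one pheromone-roulette material choice per cell
            topo = [[rng.choices(range(n_materials),
                                 weights=[t ** alpha for t in tau[i][j]])[0]
                     for j in range(nc)] for i in range(nr)]
            colony.append((topo, cost(topo)))
        colony.sort(key=lambda tc: tc[1])
        if colony[0][1] < best_cost:
            best, best_cost = colony[0]
        # 4.6: elitism -- the cycle's worst ant is replaced by the best-so-far
        colony[-1] = (best, best_cost)
        # 4.3: evaporation followed by deposits of Q / L_k, eqs (8)-(9)
        for i in range(nr):
            for j in range(nc):
                for u in range(n_materials):
                    tau[i][j][u] *= (1.0 - rho)
        for topo, L in colony:
            for i in range(nr):
                for j in range(nc):
                    tau[i][j][topo[i][j]] += q / L
        # 4.7: stop on stagnation (all ants built the same topology)
        if all(t == colony[0][0] for t, _ in colony):
            break
    return best, best_cost
```

With a toy cost such as 1 + (number of cells differing from an all-iron target), the loop drives the colony toward the target grid; in the paper the real L_k comes from the finite element torque evaluation.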


5. The IPM motor design problem
The design of an IPM machine is a complicated task which involves the consideration of many aspects, such as mechanical, thermal, and electromagnetic issues. In this work, we investigate only the topological design of the rotor of an IPM motor of specific dimensions. The design can be started with an empty space (or any arbitrary topology), and the goal of the design is to find a layout of the rotor which yields the desired outputs. The topological design approach is capable of finding novel structures of the machine, free of initial assumptions about its shape. Figure 2(a) shows the stator structure of a three-phase, four-pole IPM machine; the magnetic flux density plot for a particular case is also presented. Due to the symmetry of the device, only a quarter of the machine is modeled for analysis. The length of the motor in the third dimension is 65 mm. The rotating speed of the motor is set to 1,800 rpm. The number of turns of wire in the excitation coils is 140, and the currents in the windings are 3 A. The three-phase sinusoidal currents in the stator windings are given as:

\[ \begin{cases} I_U = I_m \sin(\omega t + \theta) \\ I_V = I_m \sin(\omega t - 120^{\circ} + \theta) \\ I_W = I_m \sin(\omega t + 120^{\circ} + \theta) \end{cases} \tag{12} \]

where I_m is the magnitude of the current, and θ is used to control the advance angle between the stator magnetic field and the rotor field. This model is solved using non-linear finite element analysis and the output torque is calculated using the Maxwell stress tensor by MagNet (2012). In Figure 2(b), the design region of the rotor is discretized into 9 × 18 cells. Each cell can be assigned a material from air, steel (M19 stainless steel) or PM (NdFeB). This discretization is able to represent an approximate structure of a physical device.
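Equation (12) can be checked numerically. A small sketch follows (the 60 Hz electrical frequency is my assumption, consistent with a four-pole machine at 1,800 rpm; I_m = 3 A as in the text):

```python
import math

def phase_currents(t, i_m=3.0, f_hz=60.0, theta=0.0):
    """Three-phase stator currents of eq. (12) at time t (seconds)."""
    w = 2.0 * math.pi * f_hz  # electrical angular frequency (rad/s)
    return (i_m * math.sin(w * t + theta),
            i_m * math.sin(w * t - 2.0 * math.pi / 3.0 + theta),
            i_m * math.sin(w * t + 2.0 * math.pi / 3.0 + theta))
```

The three currents of a balanced set sum to zero at every instant, which is a quick sanity check on the signs of the ±120° offsets.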

[Figure 2 about here. Caption: Optimization model for the IPM motor design problem. Notes: (a) magnetic flux density plot, showing the stator core and the u, v, w phase coils; (b) discretization of the IPM design domain (annotated dimensions: 8 mm, 19.75 mm, 0.5 mm, 56 mm).]
There are two important attributes of the target device that need to be taken into account:

(1) The output torque of the motor is maximized during the design process.

(2) The target device must be cost-effective. The rare-earth PM material is quite expensive and accounts for the majority of the cost of the machine; therefore, a constraint on the volume of PM is imposed on the design problem.

The efficiency of the machine and the manufacturing tolerance are not considered as objectives at the topological design stage. These can be optimized through a shape optimization which serves as a post-processing tool of the topological design after the optimal topology is found (Campelo et al., 2008).

6. Numerical results
In this section we describe the results obtained using two different formulations for the topology optimization problem. In both cases, the algorithm was set up as follows: number of ants m = 50, pheromone evaporation rate ρ = 0.8, and quality measure function (9) given by Θ = Q/L_k, with Q = 100. Furthermore, we have considered α = 1, β = 1, and a maximum number of cycles nc_max = 150.

6.1 Case 1: maximization of the torque subject to PM volume constraint
The first experiment consists of the maximization of the torque, subject to a constraint on the volume of PM used, as described in the previous section. This problem can be expressed as:

\[ \begin{aligned} \text{Find:} \quad & x^{*}(\Omega) = \arg\max_{x} \; \text{Torque} \\ \text{subject to:} \quad & \text{a physical constraint on the volume of the PM material} \end{aligned} \tag{13} \]

The result obtained for this case is shown in Figure 3(a). A 9 × 18 discretized domain was considered, with a maximum PM constraint of ten blocks. The configuration

found did not violate the material constraints, presented a reasonably straightforward shape, and was able to provide a torque of 0.764 Nm. The torque values achieved throughout the evolution of the AS algorithm are presented in Figure 3(b).

[Figure 3 about here. Caption: (a) final topology obtained using the problem formulation of maximizing the torque, subject to a maximum PM volume constraint; (b) torque values (Nm) achieved throughout the evolution of the AS algorithm, over 150 cycles. Note: white cells contain air, the gray area represents iron, and the darkest area is filled with permanent magnet.]

6.2 Case 2: maximization of shape smoothness subject to torque and PM volume constraints
The second case study tries to obtain an efficient and feasible solution by maximizing a measure of the shape smoothness (Section 2.2), while adding a constraint on the minimum torque allowed. The problem is defined as:

\[ \begin{aligned} \text{Find:} \quad & x^{*}(\Omega) = \arg\max_{x} \; \nu(x) \\ \text{subject to:} \quad & \text{a minimum torque constraint and a maximum PM volume constraint} \end{aligned} \tag{14} \]

The result obtained by the algorithm in this second problem was also interesting, as illustrated in Figure 4. This result was generated with a 9 × 18 discretized domain, a minimum torque requirement of 0.45 Nm, and a maximum PM constraint of ten blocks. In the operation of IPM machines, torque ripple is generated by the interaction between the rotor magnetic field and the stator teeth structure; the optimization takes the rotation of the rotor into account to guarantee the feasibility of the design. The result is obtained for a rotor angle at ten degrees from the origin. As can be seen, the resulting shape obeys the maximum PM volume constraint, and the resulting torque of 0.563 Nm is also in accordance with the design requirements. A comparison of the periodic torques between the design from case 1 (maximization of torque) and the design from case 2 (maximization of smoothness) is shown in Figure 5. As we can see, design 1 has a larger initial torque at zero degrees, but design 2 presents

[Figure 4 about here. Caption: Final topology obtained using the problem formulation of maximizing the smoothness, subject to a minimum torque and a maximum PM volume constraint. Note: white cells contain air, the grey area represents iron, and the darkest area is filled with permanent magnet.]

[Figure 5 about here. Caption: Comparison of the periodic torques between the design from case 1 (maximization of torque) and the design from case 2 (maximization of smoothness); torque [Nm] versus rotor angle [degrees], 0-30.]

a smaller torque ripple. This results from the minimum torque constraint imposed on the second optimization problem. From the results presented in this section, it is possible to observe the ability of the AS approach to generate viable design alternatives for the IPM motor. While the shapes obtained are still reasonably coarse approximations of what a final design would look like, the purpose of topology optimization in this class of problems was fulfilled: to search for an initial, tentative shape for the device, without being constrained by a given topology. The natural next step in this design problem would be the parametrization of the solutions found (Campelo et al., 2008), in order to refine and improve the design by means of a regular parametric optimization algorithm.

7. Conclusions
This paper proposed an approach for the topological shape design of the rotor of an IPM machine. The design domain of the rotor is mapped onto a discretized model, in which each cell can be assigned a different material property, among air, steel or PM. Based on this directed-graph representation, the topology design problem is handled as a COP, and an AS algorithm is used to reach an efficient distribution of materials over this graph. Two mono-objective formulations for the topology optimization of the IPM machine are investigated in our experiments. The first one deals with the maximization of the torque, subject to a PM volume constraint. A more complex situation involves the maximization of the smoothness of the IPM shape, subject to torque and PM volume constraints. The approach employed was able to find feasible designs for the IPM machine, and useful rotor shapes and torque values were achieved in both cases considered. The results of the IPM motor design problem illustrate the potential of the AS method for complex topology optimization of electromagnetic devices. Furthermore, the model presented is adequate for the solution of configurations involving various magnetization directions of PMs, and can easily be extended to 3D topology optimization problems.

Note
1. The purpose of this parameter is to prevent a stagnating behavior, in which the algorithm stops searching for alternative solutions, i.e. all ants end up doing the same tour.

References
Batista, L.S., Campelo, F., Guimarães, F.G. and Ramírez, J.A. (2011), "Multi-domain topology optimization with ant colony systems", COMPEL, Vol. 30 No. 6, pp. 1792-1803.
Bianchi, N. and Canova, A. (2002), "FEM analysis and optimisation design of an IPM synchronous motor", IEE Power Electronics, Machines and Drives, Vol. 487 No. 1, pp. 49-54.
Blum, C. and Roli, A. (2003), "Metaheuristics in combinatorial optimization: overview and conceptual comparison", ACM Computing Surveys, Vol. 35 No. 3, pp. 268-308.
Campelo, F., Ota, S., Watanabe, K. and Igarashi, H. (2008), "Generating parametric design models using information from topology optimization", IEEE Trans. on Magnetics, Vol. 44 No. 6, pp. 982-985.
Campelo, F., Watanabe, K. and Igarashi, H. (2008), "Topology optimization with smoothness considerations", International Journal of Applied Electromagnetics and Mechanics, Vol. 28 Nos 1-2, pp. 187-192.
Campelo, F. (2009), "Evolutionary design of electromagnetic systems using topology and parameter optimization", PhD thesis, Hokkaido University, Hokkaido.
Deneubourg, J.-L., Aron, S., Goss, S. and Pasteels, J.M. (1990), "The self-organizing exploratory pattern of the Argentine ant", J. Insect Behav., Vol. 3 No. 2, pp. 159-168.
Dorigo, M., Birattari, M. and Stützle, T. (2006), "Ant colony optimization: artificial ants as a computational intelligence technique", IEEE Comput. Intell. Mag., Vol. 1 No. 4, pp. 28-39.
Dorigo, M. and Blum, C. (2005), "Ant colony optimization theory: a survey", Theoretical Computer Science, Vol. 344 Nos 2-3, pp. 243-278.
Dorigo, M. and Gambardella, L.M. (1997a), "Ant colonies for the travelling salesman problem", Biosystems, Vol. 43 No. 2, pp. 73-81.
Dorigo, M. and Gambardella, L.M. (1997b), "Ant colony system: a cooperative learning approach to the traveling salesman problem", IEEE Trans. on Evolutionary Computation, Vol. 1 No. 1, pp. 53-66.

Topological design of IPM machines 939

COMPEL 33,3


Dorigo, M., Maniezzo, V. and Colorni, A. (1991), "Positive feedback as a search strategy", Technical Report No. 91-016, Politecnico di Milano, Milano.
Dorigo, M., Maniezzo, V. and Colorni, A. (1996), "Ant system: optimization by a colony of cooperating agents", IEEE Trans. Systems, Man, Cybernet – Part B, Vol. 26 No. 1, pp. 29-41.
Dorigo, M. (1992), "Optimization, learning and natural algorithms", PhD thesis, Politecnico di Milano, Milano.
Gambardella, L.M. and Dorigo, M. (1996), "Solving symmetric and asymmetric TSPs by ant colonies", Proc. of IEEE International Conference on Evolutionary Computation, pp. 622-627.
Glover, F.W. and Kochenberger, G.A. (Eds) (2003), Handbook of Metaheuristics, International Series in Operations Research & Management Science, Kluwer Academic Publishers.
Kim, D.-H., Park, I.-H., Lee, J.-H. and Kim, C.-E. (2003), "Optimal shape design of iron core to reduce cogging torque of IPM motor", IEEE Transactions on Magnetics, Vol. 39 No. 3, pp. 1456-1459.
MagNet (2012), MagNet User's Manual, Infolytica, Montreal, QC, available at: www.infolytica.com
Ohnishi, T. and Takahashi, N. (2000), "Optimal design of efficient IPM motor using finite element method", IEEE Transactions on Magnetics, Vol. 36 No. 5, pp. 3537-3539.
Qinghua, L., Jabbar, M.J. and Khambadkone, A.M. (2002), "Design optimisation of wide-speed permanent magnet synchronous motors", IEE Power Elect., Machines and Drives, Vol. 487 No. 1, pp. 404-408.
Rahman, M.A. (2007), "IPM motor drives for hybrid electric vehicles", International Aegean Conference on Electrical Machines and Power Electronics (ACEMP), pp. 109-115.
Rahman, M.A., Masrur, M.A. and Uddin, M.N. (2012), "Impacts of interior permanent magnet machine technology for electric vehicles", IEEE Inter. Electric Vehicle Conference (IEVC), pp. 1-5.
Rudnicki, M. and Wiak, S. (Eds) (2003), Optimization and Inverse Problems in Electromagnetism, Springer.
Sim, D.-J., Cho, D.-H., Chun, J.-S., Jung, H.-K. and Chung, T.-K. (1997), "Efficiency optimization of interior permanent magnet synchronous motor using genetic algorithms", IEEE Transactions on Magnetics, Vol. 33 No. 2, pp. 1880-1883.
Stumberger, B., Stumberger, G., Hadziselimovic, M., Marcic, T., Virtic, P., Trlep, M. and Gorican, V. (2008), "Design and finite-element analysis of interior permanent magnet synchronous motor with flux barriers", IEEE Transactions on Magnetics, Vol. 44 No. 11, pp. 4389-4392.
Stützle, T. and Hoos, H.H. (2000), "Max-Min ant system", Future Generation Computer Systems, Vol. 16 No. 9, pp. 889-914.
Takahashi, N., Yamada, T. and Miyagi, D. (2010), "Examination of optimal design of IPM motor using ON/OFF method", IEEE Transactions on Magnetics, Vol. 46 No. 8, pp. 3149-3152.

Corresponding author
Professor Lucas S. Batista can be contacted at: [email protected]

To purchase reprints of this article please e-mail: [email protected] Or visit our web site for further details: www.emeraldinsight.com/reprints

The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm

Drive optimization of a pulsatile total artificial heart


André Pohlmann and Kay Hameyer, Institute of Electrical Machines, RWTH Aachen University, Aachen, Germany


Abstract
Purpose – Total artificial hearts (TAHs) are required for the treatment of cardiovascular diseases. In order to replace the native heart, a TAH must provide a sufficient perfusion of the human body, prevent blood damage and meet the implantation constraints. To date there is no TAH on the market which meets all constraints. The purpose of this paper is therefore to design a drive in such a way that the operated TAH meets all predefined constraints.
Design/methodology/approach – The drive is designed in terms of weight and electric losses. By setting up a cost function containing those constraints, the drive design can be included in an optimization process. When the global minimum of the cost function is reached, the optimum drive design is found. In this paper the optimization methods of manual parameter variation and differential evolution are applied.
Findings – At the end of the optimization process the drive's weight amounts to 460 g and its mean losses sum up to 10 W. This design meets all predefined constraints. Furthermore, it is proposed to start the optimization process with a parameter variation to reduce the number of optimization parameters for the time-consuming differential evolution algorithm.
Practical implications – This TAH has the potential to provide a therapy for all patients suffering from cardiovascular diseases, as it is independent of donor organs.
Originality/value – The optimization-based design process yields an optimum drive for a TAH in terms of weight and electrical losses. In this way a TAH is developed which meets all implantation constraints and provides sufficient perfusion of the human body at the same time.
Keywords Optimization, Differential evolution, Finite element method (FEM), Linear drives, Total artificial heart (TAH)
Paper type Research paper

1. Introduction
Heart transplants are a limited option for treating heart diseases. Thus, artificial hearts (AHs) are necessary. Currently, CardioWest is the only clinically approved total artificial heart (TAH) system worldwide (Abstracts from the 14th Congress of the International Society for Rotary Blood Pumps, 2006). In order to avoid its drivelines through the human abdominal wall, it is desirable to completely implant a TAH in the human thorax. This will improve the patient's quality of life and reduce the risk of infections. However, the limited space in the thorax restricts the weight and dimensions of an implantable AH and its drive. Nevertheless, the drive has to provide the required force for the pump to achieve a sufficient perfusion of the human body. The resulting electric losses in the drive system must be limited to prevent blood damage due to overheating. Two optimization methods for the drive are studied in this paper to find the best drive design addressing these problems. Figure 1 shows the cross section of the drive of a pulsatile TAH developed at the Institute of Electrical Machines at RWTH Aachen University (Leßmann et al., 2008). The drive is excited by an inner and an outer permanent-magnet ring made of neodymium iron boron (NdFeB) material. Compared to the remanent magnetic induction Br,in of the inner magnet ring, which is 1.44 T, the remanent magnetic

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 941-952 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-07-2013-0230


Figure 1. Cross section of AH drive (labelled regions: coils, Vacoflux pole shoes, permanent magnets)

induction Br,out of the outer magnet ring only averages 1.35 T. Pole shoes are attached above and below the magnets to concentrate the magnetic induction in the air gap. They are made of an iron vanadium cobalt alloy (referred to in this paper by its commercial name, Vacoflux), which has a magnetic saturation induction of 2.4 T (Vacuumschmelze). The mover consists of four coils. Each coil is hand-wound with a rectangular copper wire; in this way, a copper fill factor of 75 percent is achieved. In Figure 2, the magnetization of the magnets and the flux path are indicated by arrows. When the coils are supplied with a direct current, the direction of the coil movement can be determined by applying the Lorentz force Equation (1). The drive weighs 616 g. In simulation, the mean electrical losses occurring in the drive were calculated to be 8 W (Pohlmann et al., 2011) while providing the required force (Figure 3) for the blood pump to generate a sufficient perfusion of the body.

2. Medical constraints for the design of AHs
In order to derive technical constraints for designing AHs, the most important physiological constraints of the human heart and blood circuits summarized in

Figure 2. Direction of driving force (the force F reverses between moving up and moving down)

[Figure 3 plots the required force (N) against the displacement (mm, 0 to 20 mm), measured at 120 bpm and at 160 bpm.]

(Finocchiaro et al., 2008) are explained. The weight of a heart mH and its volume VH amount up to 400 g and 800 ml, respectively, while the pumping capacities of the left and right ventricle Vsum sum up to a maximum of 140 ml. When the pressure in the left ventricle pV exceeds the diastolic pressure pdia of the aorta, the left ventricle is evacuated. It is filled with blood again when the systolic pressure psys of the aorta exceeds the pressure of the left ventricle. This process is in principle the same for the right ventricle. In this way a pulsatile perfusion of the human body of up to six litres at a frequency f between 60 and 80 bpm (beats per minute) is achieved. Due to the larger height differences, the systolic/diastolic pressures of the systemic blood circuit (120/80 mmHg) exceed the pressures ppul of the pulmonary blood circuit (25/15 mmHg). For this reason the left ventricle is usually weakened first if the human heart is overloaded. In medicine the important causes of blood damage are hemolysis, thrombogenicity and denaturation. High shear forces, stagnation of the blood, non-biocompatible materials and coarse surfaces are the main factors yielding hemolysis and thrombogenicity. In the human body denaturation starts at a body temperature Tb of 40°C and is irreversible for temperatures above 42°C. Denaturation deforms the biomolecular structure of the proteins in the blood, causing their destruction.

3. Computation chain
Both of the optimization methods, which will be introduced in the next section, follow an optimization chain consisting of three steps, which has been deduced in Pohlmann et al. (2011). Based on the input parameters, the mesh required for the finite-element method (FEM) simulation of the magnetic induction distribution in the drive's air gap is generated. Finally, the weight and the losses of the modeled drive are analytically calculated.
Based on the required forces, the optimal current supply of the coils, depending on the air gap magnetic induction, is determined to achieve a good efficiency.

3.1 Parameter input
The optimization process is automated by establishing a parameterizable computer model for the mesh generation. As shown in Figure 4, the model is defined by five fixed parameters (a-e) and seven variable parameters. A parameter file is created which can automatically initialize the variables required for the generation of the computer models.

Figure 3. Required force output
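The parameter file of Section 3.1 can be produced by a small script. The sketch below only illustrates the idea: the field names mirror the seven variable parameters of Figure 4, but the file format and the `write_parameter_file` helper are assumptions, not the authors' actual tooling.

```python
from dataclasses import dataclass, asdict

@dataclass
class DriveParameters:
    """Seven variable parameters of the drive model (cf. Figure 4)."""
    delta_out: float             # outer offset (mm)
    delta_coil_thickness: float  # radial coil thickness (mm)
    delta_r_middle: float        # middle radius offset (mm)
    delta_r_in: float            # inner radius offset (mm)
    delta_magnet_height: float   # axial magnet height (mm)
    delta_vacoflux_height: float # axial pole-shoe height (mm)
    n_coils: int                 # number of mover coils

def write_parameter_file(params: DriveParameters, path: str) -> None:
    """Dump the variables so a mesh generator could initialize the model."""
    with open(path, "w") as fh:
        for name, value in asdict(params).items():
            fh.write(f"{name} = {value}\n")

# Illustrative values only:
params = DriveParameters(1.0, 2.5, 16.0, 8.0, 9.0, 3.2, 4)
write_parameter_file(params, "drive_model.par")
```

Each optimization candidate then corresponds to one such file, which keeps the mesh-generation step fully automatic.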

Figure 4. Parametrized motor model for optimization (five fixed parameters a-e; variable parameters: ΔOut, ΔCoil Thickness, ΔR Middle, ΔR In, ΔMagnet Height, ΔVacoflux Height, N coils)

3.2 Finite element simulation
The position-dependent driving force $\vec{F}_n(x)$ of the actuator can be calculated using the Lorentz force equation:

$$\vec{F}_n(x) = I_n(x) \left( \vec{l} \times \vec{B}_{r,n}(x) \right) \qquad (1)$$

Aside from the active conductor length l of the coils and the coil supply In(x), the radial magnetic induction distribution Br,n(x) in the air gap is required to determine the force output. Its direction depends on the coil supply, as shown in Figure 2. Because the magnetic leakage flux is significant and the iron vanadium cobalt alloy is magnetically saturated (Figure 5), non-linear FEM simulations are employed to determine the induction. They are performed using a 3-D static problem solver from iMoose, the in-house FE software package of the Institute of Electrical Machines at RWTH Aachen University (Institute of Electrical Machines, RWTH Aachen University).

3.3 Calculation of weight and losses
The weight and resulting losses have to be computed for the optimization process. While the total weight of the drive can be calculated by multiplying the volumes (extracted from CAD models) by their material densities and adding the resulting weights, a calculation chain is required to determine the resulting losses. The counter-effects of the field excited by the coils can be neglected because of the 4 mm air gap width between the inner and outer permanent-magnet rings. Moreover, the iron losses

Figure 5. Flux density distribution in TAH (induction scale from 0.10 T to 2.40 T; labelled regions: coils, pole shoes, permanent magnets)

in the pole shoes and permanent magnets are negligible for the operation frequency range between 1.33 and 2.66 Hz. Hence, the ohmic losses of each coil are dominant. The coils should be supplied depending on the average radial magnetic field distribution $\bar{B}_{r,n}(x)$ penetrating each coil n at the displacement x in the air gap to keep the losses as low as possible. Therefore, the current factors $k_{I,n}(x)$ are calculated by evaluating the magnetic induction distribution obtained from the previous FEM simulations for each coil and position x of the axial displacement:

$$k_{I,n}(x) = \frac{\bar{B}_{r,n}(x)}{\sum_{n=1}^{4} \bar{B}_{r,n}(x)} \qquad (2)$$

Before the current supply $I_{n,fw/bw}(x)$ for each coil can be calculated using:

$$I_{n,fw/bw}(x) = k_{I,n}(x) \cdot I_{sum,fw/bw}(x) \qquad (3)$$

the current value $I_{sum,fw/bw}(x)$ has to be determined. Therefore, the Lorentz force Equation (1) is rearranged to:

$$I_{sum,fw/bw}(x) = \frac{F_{required,fw/bw}(x)}{l \cdot \sum_{n=1}^{4} \bar{B}_{r,n}(x) \, k_{I,n}(x)} \qquad (4)$$

The force $F_{required,fw/bw}(x)$ required for the blood pump can be obtained from Figure 3, whose curves have been obtained by measurements described in Pohlmann et al. (2011). Considering that the forces for the pulmonary and arterial blood circuits differ, the annotations fw (pulmonary) and bw (arterial) are used to indicate the direction of the coil movements. Finally, the resulting ohmic losses $P_{sum,fw/bw}(x)$ are determined by accumulating the ohmic losses of each coil. Each coil loss is determined by multiplying the square of the coil's current $I_{n,fw/bw}$ by the winding resistance R:

$$P_{sum,fw/bw}(x) = \sum_{n=1}^{4} I_{n,fw/bw}^{2}(x) \cdot R \qquad (5)$$
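Equations (2)-(5) chain together naturally in code. The following sketch is a minimal numerical illustration with invented induction, force, length and resistance values; it is not the authors' implementation.

```python
import numpy as np

def coil_losses(B_bar, F_required, l, R):
    """Eqs (2)-(5): distribute the total current over the four coils
    in proportion to the average induction seen by each coil, then
    sum the resulting ohmic losses.
    B_bar      -- average radial induction per coil, shape (4,) [T]
    F_required -- required pump force at this displacement [N]
    l          -- active conductor length per coil [m]
    R          -- winding resistance per coil [ohm]
    """
    B_bar = np.asarray(B_bar, dtype=float)
    k_I = B_bar / B_bar.sum()                        # eq. (2), current factors
    I_sum = F_required / (l * np.sum(B_bar * k_I))   # eq. (4), total current
    I_n = k_I * I_sum                                # eq. (3), per-coil currents
    P_sum = np.sum(I_n ** 2 * R)                     # eq. (5), ohmic losses
    return I_n, P_sum

# Illustrative values only (not measured data):
I_n, P = coil_losses(B_bar=[0.6, 0.7, 0.7, 0.6], F_required=40.0, l=5.0, R=0.5)
```

By construction, multiplying the per-coil currents by their inductions and the conductor length recovers exactly the required force, which is a convenient sanity check for the distribution scheme.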


In order to compute the average losses during one heart beat cycle, the maximum axial displacement of the pusher plates of 18.5 mm is divided into segments with a length of 0.5 mm:

$$x_m = m \cdot 0.5\,\mathrm{mm}, \qquad m = 0, 1, \ldots, 37 \qquad (6)$$

Assuming that the pusher plates move sinusoidally, the delay time $t_m$ for each of these m segments is calculated as:

$$t_m = \frac{\arcsin\left(\frac{x_{m+1} - 9.25\,\mathrm{mm}}{9.25\,\mathrm{mm}}\right) - \arcsin\left(\frac{x_m - 9.25\,\mathrm{mm}}{9.25\,\mathrm{mm}}\right)}{2 \pi f} \qquad (7)$$

Finally, the position-dependent losses are weighted with the time factor $t_m$ to calculate the average losses for one heart beat cycle:

$$P_{sum} = f \cdot \sum_{m=0}^{36} \left[ P_{sum,fw}(x_m) \cdot t_m + P_{sum,bw}(x_m) \cdot t_{37-m} \right] \qquad (8)$$
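The segmenting and time-weighting of equations (6)-(8) can be sketched as follows. The loss arrays stand in for the values produced by equations (2)-(5); note that for a sinusoidal stroke the dwell times are symmetric, so reversing them reproduces the $t_{37-m}$ weighting.

```python
import numpy as np

def average_losses(P_fw, P_bw, f):
    """Eqs (6)-(8): average the position-dependent losses over one
    heart-beat cycle of a sinusoidally moving pusher plate.
    P_fw, P_bw -- losses per 0.5 mm segment (38 samples), forward/backward [W]
    f          -- beat frequency [Hz]
    """
    m = np.arange(38)
    x = m * 0.5                                    # eq. (6), positions [mm]
    # eq. (7): dwell time per segment for a sinusoidal 18.5 mm stroke
    s = np.clip((x - 9.25) / 9.25, -1.0, 1.0)
    t = (np.arcsin(s[1:]) - np.arcsin(s[:-1])) / (2 * np.pi * f)  # 37 values
    # eq. (8): weight forward/backward losses with their dwell times
    return f * np.sum(P_fw[:37] * t + P_bw[:37] * t[::-1])

# Sanity check: constant losses must average back to the same constant,
# since one forward plus one backward stroke covers the whole beat period.
avg = average_losses(np.full(38, 10.0), np.full(38, 10.0), f=2.0)
```

The dwell times telescope to half a beat period per stroke, which is why the constant-loss case returns the constant exactly.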

Based on this computation chain, the drive is optimized through a combination of analytical and numerical computations to achieve accurate results within a minimum computation time.

4. Optimization methods
The natural adult human heart weighs approx. 400 g. Thus, evidently, the weight of the proposed drive has to be reduced. This weight reduction can be achieved by optimizing the drive's geometry, for example by reducing its height or outer diameter. The Lorentz force Equation (1) indicates that a drop in magnetic induction Bn(x), caused by a reduced permanent-magnet volume or active wire length l, results in lower generated forces. If the required forces Fn(x) stay the same, the current supply of the coils In(x) has to be increased, thus increasing the resulting losses as well. However, Yamaguchi et al. (1993) state that the electrical losses in the drive should be limited to 20 W to keep the temperature rise in the body below 1°C. This statement has been confirmed by the temperature measurements during the Mock Loop tests of the TAH called ReinHeart developed at RWTH Aachen University (Pohlmann et al., 2011). Therefore, a weight reduction in the drive can be achieved by optimizing its geometry. For the optimization these requirements can be summarized in a cost function (cf) as follows:

$$cf = 10^{\frac{P_l - P_{lim}}{P_{lim}}} + 10^{\frac{m_{res,d}}{m_d}} \rightarrow \min \qquad (9)$$

This cost function consists of two parts considering the resulting losses Pl and weight md of the drive. The cost function value can be minimized by achieving electrical losses in the range of the loss limit Plim of the drive and a low resulting weight mres,d. Although the allowable losses are 20 W, the drive should be optimized in such a way that the required force is provided with resulting losses within the 10 W limit and minimal weight. A lower loss limit is required to allow for the peripheral components such as the battery, which should also be lightweight and small. This approach consequently reduces the drive dimensions as well.
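Cost function (9) can be transcribed directly. In the sketch below the 10 W loss target comes from the text, while using the 616 g prototype weight as the reference weight is an assumption made for illustration.

```python
def cost_function(P_l, m_res, P_lim=10.0, m_ref=616.0):
    """Eq. (9): penalize losses away from the limit P_lim and reward a
    low resulting weight m_res. Using the 616 g prototype weight as the
    reference m_ref is an assumption, not taken from the paper."""
    return 10.0 ** ((P_l - P_lim) / P_lim) + 10.0 ** (m_res / m_ref)

# Lower weight at equal losses must lower the cost:
assert cost_function(10.0, 460.0) < cost_function(10.0, 616.0)
```

Both terms are exponential, so a model that only slightly violates the loss limit is already penalized noticeably, which steers the search toward the 10 W boundary.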

4.1 Manual parameter variation
In the manual parameter variation, the influence of a limited number of objective variables on the resulting losses and weight of the drive is studied by varying each parameter with a fixed step width. For example, the outer radius is decreased in steps of 1 mm (Figure 6). Evaluating the plots of the resulting weight against the losses establishes a hierarchical order, which indicates the best geometrical parameters to be varied. In this paper, the influence of the inner and outer radii, as well as the thickness of the coils, is presented and discussed. Finally, the optimization algorithm has to determine the best parameter combination for the optimum geometry of the drive while maintaining the maximum losses within the allowable range. In Figure 6, the outer and inner radii are decreased and increased, respectively, in steps of 1 mm. When the outer radius is reduced, a minimal weight of 495 g and resulting losses of 10.7 W are obtained. Meanwhile, when the inner radius is changed, the minimal weight only amounts to 582 g and the losses accumulate to 9.7 W. In this case, the achievable weight reduction is minimal. Therefore, the outer radius should be reduced first before increasing the inner radius. In Figure 7, the effect of the coil thickness is investigated. The outer radius is reduced for coil thicknesses of 2 and 3 mm. The average coil radius is identical for both arrangements. However, a comparison between the coils shows that the thinner coil has an increased inner radius and a decreased outer radius. This approach increases the amount of air in the air gap; extending the permanent magnets and pole shoes in the radial direction reduces this amount again. Thus, the drive's weight is reduced, because the material density of the copper wire (8.92 g/cm³) is higher than those of the NdFeB magnets (7.4 g/cm³) and the Vacoflux (8.12 g/cm³) used for the pole shoes.
In addition, the axial height of the coils is approximately twice the combined height of the magnets and pole shoes. These conditions further decrease the weight. On the other hand, the coil's resistance is increased because of the reduced cross section of the coils, and therefore the drive's losses are increased. This effect is partially compensated by the reduction of the leakage flux, which concentrates the flux density in the air gap more effectively. These complex correlations yield an intersection between the two curves when the losses and weight were 10.6 W and 500 g,


Figure 6. Variation of inner and outer diameter


Figure 7. Variation of the coil thickness


respectively. When the outer radius is further decreased, the losses of the 2-mm-thick coil variant become even lower than those of the 3-mm-thick coil variant.

4.2 Differential evolution
DE is an evolutionary optimization algorithm (Price et al., 2005; Chakraborty, 2010; Hameyer and Belmans, 1999) which can be applied for solving optimization problems numerically. In this paper it is applied to the weight reduction of the discussed TAH drive system by optimizing its geometry. First, the optimization parameters and their constraints are set. Then, a population of random models is created based on these parameters. The quality of each model is determined using a predefined cost function:

$$cf = a \cdot 10^{\frac{|losses - 10|}{10}} + b \cdot 10^{\frac{weight}{616}} \qquad (10)$$

Parameters a and b are initialized with values between 0 and 1 such that their sum equals 1, to prioritize either the losses or the resulting weight during the optimization process. Theoretically, the best model therefore has losses of 10 W and weighs 0 g. Based on the best model of this population, a new generation is created. After several iterations, as the algorithm converges, the optimum geometry of the drive is obtained without exceeding the allowable losses (Hameyer and Belmans, 1999). A cost function trigger was established to accelerate this process: before the cost function is evaluated, the resulting losses of all models are determined, and values that exceed the limit are multiplied by a factor of ten, which significantly increases the cost function value. The results of the DE algorithm are presented in Figure 8. For all plots, the x-axis represents the number of iterations, whereas the y-axis denotes the parameter indicated in the title of each plot. The convergence of the DE algorithm is shown in plot b by the mean distance to the cost function of each generated model. The mean distance is initially larger than 10^5 and decreases quickly to a nearly constant value by the 28th iteration. Thereafter, only slight differences are observed in the subsequent best models. In this plot, the cost functions of the best models are not shown for the first 11 iterations, to emphasize the variations at the middle and end of the algorithm. Plots c and d illustrate the operation principle of the DE algorithm.
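The DE loop with the cost-function trigger can be reproduced generically. In the sketch below, `evaluate_drive` is an invented analytic surrogate standing in for the FEM computation chain, and the DE/rand/1/bin loop is a textbook implementation (Price et al., 2005); only cost function (10) and the tenfold penalty on over-limit losses follow the paper.

```python
import numpy as np

def evaluate_drive(x):
    """Hypothetical surrogate mapping geometry to (losses [W], weight [g]).
    NOT the paper's FEM model -- invented for illustration only."""
    r_out, coil_t, magnet_h = x
    weight = 0.35 * r_out**2 + 40.0 * coil_t + 12.0 * magnet_h
    losses = 4000.0 / (r_out * coil_t * magnet_h)
    return losses, weight

def cost(x, a=0.7, b=0.3, limit=10.0):
    losses, weight = evaluate_drive(x)
    cf = a * 10.0 ** (abs(losses - 10.0) / 10.0) \
       + b * 10.0 ** (weight / 616.0)             # eq. (10)
    return cf * 10.0 if losses > limit else cf    # factor-of-ten trigger

def de(bounds, n_pop=20, n_gen=60, F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin loop with greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(n_pop, len(bounds)))
    costs = np.array([cost(x) for x in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a_, b_, c_ = pop[rng.choice(n_pop, 3, replace=False)]
            mutant = np.clip(a_ + F * (b_ - c_), lo, hi)   # mutation
            cross = rng.random(len(bounds)) < CR           # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            if (tc := cost(trial)) < costs[i]:             # selection
                pop[i], costs[i] = trial, tc
    best = np.argmin(costs)
    return pop[best], costs[best]

bounds = [(30.0, 45.0), (2.0, 4.0), (6.0, 12.0)]  # r_out, coil_t, magnet_h (mm)
x_best, cf_best = de(bounds)
```

With this surrogate the loop settles on a design whose losses sit just below the 10 W limit, mirroring the behavior reported for the real model: pushing the losses lower would require a heavier drive, so both cost terms pull the optimum toward the loss boundary.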

Figure 8. Results obtained from the DE algorithm (per-subplot final values over the iterations: best cost 2.330534; mean distance of cost function 2.343362; weight 460.868118 g; losses 9.996000 W; Radio Small Vacoflux 16.332683 mm; Vacoflux Height 3.204369 mm; Coil Thickness 2.837467 mm; Magnet Height 9.174507 mm; Radio Outside 38.484090 mm)

In the applied cost function, the losses have a higher priority. Initially, the losses exceed the given limit. When the cost function trigger is activated, the models are penalized and yield a high cost function value. Thus, the weight is reduced by changing the drive's geometry, as indicated in plots e-i. As shown in plot b, the mean cost function decreases quickly during the first few iterations. After the 12th iteration, the losses fall below the 10 W limit. As the weight is further reduced, the losses rise again up to the maximum of 9.99 W in the final model. Starting from the 20th iteration, the losses remain constant while the weight is further reduced to its final value of 460 g.

5. Results
The final geometrical parameters and the resulting weights and losses for the two optimization methods and the prototype are listed in Table I.

Table I. Comparison of prototype and optimized geometry

Parameter             Prototype   Variation calculations   Differential evolution
Drive weight (g)      616         517                      460
Losses (W)            8           9.94                     9.99
Inner radius (mm)     7           8                        8
Outer radius (mm)     42.5        38.5                     38.45
Height maximum (mm)   16.5        16.5                     15.6

The resulting losses for


the parameter variation and DE were 9.94 and 9.99 W, respectively. These losses are close to the desired loss limit of 10 W. Compared to the losses of the prototype motor, they are increased by about 2 W. The higher loss limit allowed for weight reductions of 99 and 156 g for the parameter variation and DE, respectively. This difference is caused by the differences in the geometries obtained by the two methods. Although the inner (8 mm) and outer (38.5 and 38.45 mm) radii are nearly identical, the maximum height of the static drive part obtained by the DE algorithm is lower. This and the other differences are shown in Figure 9, which compares the geometrical parameters of the prototype (a) with those obtained by the parameter variation (b) and the DE algorithm (c). A comparison between the axial dimensions of the magnets and the pole shoes of the different models shows that model c has a higher magnet height and a lower pole shoe height. Its weight is reduced compared to that of model b because of the lower material density of the magnets and the reduced height of the static part. Model c also has smaller average radius and coil thickness values. Its electrical losses are higher because of the reduction of the active wire length l, whereas the balance between the magnetic inductions Bn(x) generated by the inner and outer permanent magnets is changed compared with that of model b. Therefore, a higher weight reduction is achieved using the DE algorithm, although the resulting losses are close to those obtained by the parameter variation. But as the computational effort for DE is higher than for a parameter variation, a parameter study prior to DE is recommended to find out which optimization parameters have a high or a low impact on the weight or losses of the general drive


Figure 9. Visualization of the optimization results

design. In this way the number of optimization parameters could be reduced, which finally yields a speed-up for the DE.

6. Conclusions
This paper has reported on the optimization of the geometry of a drive for a pulsatile artificial heart. Two optimization approaches, namely the parameter variation and the DE algorithm, were discussed. Both rely on a calculation chain combining FEM simulations, which determine the distribution of the magnetic induction, with analytical equations, which are used to calculate the resulting weight and losses of the drive. The new drive geometries achieved by the two optimization approaches both had resulting losses close to the 10 W limit. However, the weight obtained by the DE approach was lower than the weight resulting from the parameter variation. In the parameter variation, the geometrical parameters were varied with a fixed step width. This approach results in weight vs loss characteristics showing the sensitivity of each parameter. Its disadvantage is that only one parameter can be actively changed, although the investigated geometrical parameters are all interdependent. In the DE algorithm, the computer models were created based on randomly chosen parameters, and therefore their dependencies were considered. As the algorithm converged, the best possible results according to a predefined cost function were obtained. However, the effect of each parameter is not traceable, because the parameters were initialized with random values. Furthermore, DE requires more computer models than the parameter variation. Considering that the probability of non-convergence increases with the number of parameters, a combination of both optimization approaches is advisable. After the sensitivities of some geometrical parameters are determined, the number of parameters can be reduced for the next DE optimization step, as well as the required number of models and, therefore, the computation time.
DE proposed a coil thickness of 2.7 mm. Considering that the producible coil thickness depends on the available copper wire, an adjustment of the DE results is necessary to allow for manufacturing.

References
Abstracts from the 14th Congress of the International Society for Rotary Blood Pumps (2006), Artif. Organs, Vol. 30 No. 11, p. A27.
Chakraborty, U.K. (2010), Advances in Differential Evolution, Springer, Berlin and Heidelberg.
Finocchiaro, T., Butschen, T., Kwant, P., Steinseifer, U., Schmitz-Rode, T., Hameyer, K. and Leßmann, M. (2008), "New linear motor concepts for artificial hearts", IEEE Transactions on Magnetics, Vol. 44 No. 6, pp. 678-681.
Hameyer, K. and Belmans, R. (1999), Numerical Modelling and Design of Electrical Machines and Drives, Computational Mechanics Publications, WIT Press, Southampton.
Institute of Electrical Machines, RWTH Aachen University, available at: www.iem.rwth-aachen.de (accessed September 2012).
Leßmann, M., Finocchiaro, T., Steinseifer, U., Schmitz-Rode, T. and Hameyer, K. (2008), "Concepts and designs of life support systems", IET Science, Measurement & Technology, Vol. 2 No. 6, pp. 499-505.
Pohlmann, A., Leßmann, M., Finocchiaro, T., Schmitz-Rode, T. and Hameyer, K. (2011), "Numerical computation can save life: FEM simulations for the development of artificial hearts", IEEE Transactions on Magnetics, Vol. 47 No. 5, pp. 1166-1169.

Pulsatile total artificial heart

951

COMPEL 33,3

952

Pohlmann, A., Leßmann, M., Finocchiaro, T., Fritschi, A., Steinseifer, U., Schmitz-Rode, T. and Hameyer, K. (2011), “Drive optimisation of a pulsatile total artificial heart”, Archives of Electrical Engineering, Vol. 60 No. 2, pp. 169-178. Price, K.V., Storn, R.M. and Lampinen, J.A. (2005), Differential Evolution – A Practical Approach to Global Optimization, Springer, Berlin and Heidelberg. Vacuumschmelze. available at: www.vacuumschmelze.de (accessed September 2012). Yamaguchi, M., Yano, T., Karita, M., Yamamoto, Y., Yamada, S. and Yamada, H. (1993), “Performance test of a linear pulse motor-driven artificial heart”, IEEE Translation Journal on Magnetics in Japan, Vol. 8 No. 2, pp. 130-136. About the authors Andre´ Pohlmann received his Diploma Degree in Electrical Engineering from the RWTH Aachen University in October 2008. In December 2008 he started his doctoral studies at the Institute of Electrical machines at the RWTH Aachen University. His field of research is magnetic bearings and drives for artificial hearts. Andre´ Pohlmann is the corresponding author and can be contacted at: [email protected] Professor Kay Hameyer received his MSc Degree in Electrical Engineering from the University of Hannover and his PhD Degree from the Berlin University of Technology, Germany. After his university studies he worked with the Robert Bosch GmbH in Stuttgart, Germany as a Design Engineer for permanent magnet servo motors and vehicle board net components. Until 2004 Dr Hameyer was a full Professor for numerical field computations and electrical machines with the KU Leuven in Belgium. Since 2004, he is full Professor and the Director of the Institute of Electrical Machines (IEM) at the RWTH Aachen University in Germany, 2006 he was Vice Dean of the faculty and from 2007 to 2009 he was the Dean of the Faculty of Electrical Engineering and Information Technology of the RWTH Aachen University. 
His research interests are numerical field computation and optimization, the design and control of electrical machines, in particular permanent-magnet excited machines and induction machines, and design employing the methodology of virtual reality. For several years Dr Hameyer's work has been concerned with magnetic levitation for drive systems, magnetically excited audible noise in electrical machines and the characterization of ferromagnetic materials. Dr Hameyer is the author of more than 250 journal publications, more than 500 international conference publications and four books. Professor Hameyer is a member of the VDE, an IEEE Senior Member and a Fellow of the IET.

To purchase reprints of this article please e-mail: [email protected] Or visit our web site for further details: www.emeraldinsight.com/reprints

The current issue and full text archive of this journal is available at www.emeraldinsight.com/0332-1649.htm

Multiobjective approach developed for optimizing the dynamic behavior of incremental linear actuators


Imen Amdouni ESTI, Bardo, Tunisia

Lilia El Amraoui
Unité de Recherche Systèmes Mécatroniques et Signaux, Département Technologies de l'Information et des Communications, ENIT, Tunis, Tunisia

Frédéric Gillon
L2EP, Optimisation, Ecole Centrale de Lille, Lille, France

Mohamed Benrejeb
ENIT, Unité de Recherche LARA Automatique, Tunis, Tunisia, and

Pascal Brochet
Laboratoire d'Electrotechnique et d'Electronique de Puissance de Lille, Ecole Centrale de Lille, Lille, France

Abstract
Purpose – The purpose of this paper is to develop an optimal approach for optimizing the dynamic behavior of incremental linear actuators.
Design/methodology/approach – First, a parameterized design model is built. Second, a dynamic model is implemented; this model takes into account the thrust force computed from a finite element model. Finally, the multiobjective optimization approach is applied to the dynamic model to optimize control as well as design parameters.
Findings – The Pareto front resulting from the optimization approach (or from the parallel optimization approach) is better than the front obtained from the sole application of the MultiObjective Genetic Algorithm (MOGA) method (or of parallel MOGA with the same number of objective function evaluations). The sole use of MOGA can reach the region near an optimal Pareto front, but it consumes more computing time than the multiobjective optimization approach. At each flowchart stage, parallelization leads to a significant reduction of computing time, which is halved when using a two-core machine.
Originality/value – In order to solve the multiobjective problem, a hybrid algorithm based on MOGA is developed.
Keywords Control systems, Computer-aided design, Electrical machines, Multiobjective optimization, FE method, Parallel computing
Paper type Research paper

I. Introduction
The design of electrical systems is a complex task that requires experts from various fields of competence. In a competitive environment, where technological advance is a key factor, industry seeks to reduce study time and to make solutions reliable through rigorous methodologies providing systemic solutions. It is therefore necessary to develop multiobjective optimization methods suited to these

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 953-964. © Emerald Group Publishing Limited 0332-1649. DOI 10.1108/COMPEL-06-2013-0205


concerns (Tran, 2009; Alotto, 2011; Barba, 2005; Xue et al., 2010; Taboada et al., 2008). Multiobjective optimization methods can be divided into two categories: scalar and Pareto approaches. A scalar method, such as the goal attainment method, consists in transforming the multiobjective problem into a set of single-objective ones (Tran, 2009; Alotto, 2011; Barba, 2005). Pareto methods, such as the MultiObjective Genetic Algorithm (MOGA), keep the objectives separate as a vector throughout the optimization procedure; they typically use a concept of dominance to distinguish non-dominated solutions from dominated ones (Tran, 2009; Taboada et al., 2008).

This paper addresses the optimal dynamic modeling of an incremental linear actuator. First, a parameterized model is built for finite element (FE) design under the Opera-2D software environment. Second, the dynamic model of the incremental linear actuator is implemented under the MATLAB/Simulink environment; this model requires as input the thrust force developed by the actuator and computed from the FE model. Then, the design and control parameters of the linear actuator are optimized simultaneously by the successive use of hybrid MOGA (or parallel hybrid MOGA) and then MOGA (or parallel MOGA).

II. Linear actuator description
The electromagnetic structure on which the optimal design approach is tested is a linear tubular switched reluctance actuator. It presents four stator phases and a toothed plunger. Non-magnetic separations are set between the stator phases so that only one stator phase can be aligned with the plunger teeth when it is supplied (Figure 1). Each exciting coil is wound around the plunger and lodged between two stator teeth. Under the assumption that the machine is perfectly axi-symmetric and that the stator phases are not magnetically coupled, the actuator can be obtained by assembling the elementary modules shown in the axial plane in Figure 1.
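The dominance concept used by Pareto methods is easy to state in code. The sketch below (illustrative Python, not part of the paper's implementation) filters a set of objective vectors down to its non-dominated subset under the minimization convention:

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # keep only the objective vectors not dominated by any other point
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

front = non_dominated([(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)])
# (3.0, 4.0) is dominated by (2.0, 3.0); the other three form the front
```

A MOGA-style method applies this test repeatedly to rank a population into successive non-dominated fronts.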
The plunger teeth are uniformly distributed and have a width equal to that of a stator tooth. The supply of one phase therefore translates the plunger by one step equal to half a tooth width, provided the phases are separated by an air gap of half a tooth thickness.

III. Static and dynamic modeling of the actuator
The dynamic model of the incremental actuator is developed under the MATLAB/Simulink environment. This model takes into account the thrust force developed by

[Figure 1. Longitudinal half cross-section (A-A) of an axi-symmetric four-phase actuator, showing the non-magnetic separations, the windings and the stator phases.]

the studied actuator and computed from the static FE model when its first two phases are supplied (Figure 2).

1. Static modeling of the actuator
An electromagnetic FE model is developed for the studied actuator. This static model is based on an integral formulation of the partial differential equations describing the actuator's static behavior, to determine a specific solution in vector form. It consists in dividing the study domain into sub-domains, the finite elements, on which approximate solutions are computed from the exact solutions at the element vertices, called nodes. The vector potential is then calculated at each node (Reece and Preston, 2000). Figure 3 presents an FE discretization of one module of the studied actuator. The FE method can assign to each mesh element a property completely different from those of its neighbors; in addition, the main area of interest can be treated with a higher resolution than the rest of the body (Lynch and Paulsen, 1990). The Opera-2D software is used to solve the FE model. Figure 4 shows the parameterized actuator module used as a virtual prototype to compute the actuator performances. In order to compute the resulting force, the elementary force must be integrated over a cylinder placed within the air gap. Let R be the radius of this cylinder. The static force developed by the actuator is then obtained from the following equation:

F = \frac{2\pi R}{\mu_0} \int B_r B_z \, dz   (1)
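As a numerical illustration of Equation (1) as reconstructed here (the Maxwell-stress form), the integral can be evaluated by the trapezoidal rule over field samples taken on the air-gap cylinder. The field values, radius and sample spacing below are hypothetical, not taken from the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def axial_force(br, bz, z, radius):
    """Trapezoidal evaluation of F = (2*pi*R/mu0) * integral(Br*Bz dz)
    along a cylinder of radius R placed in the air gap."""
    total = 0.0
    for i in range(len(z) - 1):
        total += 0.5 * (br[i] * bz[i] + br[i + 1] * bz[i + 1]) * (z[i + 1] - z[i])
    return 2.0 * math.pi * radius / MU0 * total

# hypothetical, uniform field samples along a 10 mm air-gap cylinder
z = [k * 1e-3 for k in range(11)]   # axial positions (m)
br = [0.2] * len(z)                 # radial flux density (T)
bz = [0.5] * len(z)                 # axial flux density (T)
F = axial_force(br, bz, z, radius=0.01)   # resulting axial force (N)
```

In practice the B_r and B_z samples would come from the FE post-processor along the integration contour.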

f_{2min} = min Rtim(t_1, t_2, rr, zr, lb)
subject to:
0 s ≤ t_2 ≤ t_1 ≤ 0.4000 s
2 mm ≤ rr ≤ 3.5000 mm
0.0508 mm ≤ zr ≤ 1.5700 mm
2 mm ≤ lb ≤ 3.5000 mm    (4)

In the next section, this problem is solved using the optimal multiobjective approach developed under the O2P platform (Amdouni et al., 2011).

V. Multiobjective optimization approach
In this section, the multiobjective optimization approach is described, and then the control and design parameters of the studied actuator are optimized using this methodology.

1. Multiobjective optimization approach description
An optimal dynamic behavior strategy is developed for electromagnetic structures and applied, in this paper, to the dynamic behavior optimization of incremental linear actuators. The optimal multiobjective approach is described by the flowchart of Figure 6, which is composed of three steps: MOGA (1), MOGOA (2) and MOGA (3). First, the optimization is performed by the MOGA optimization method (MOGA (1)) (Deb, 2001) with a small number of iterations; there is no guarantee that MOGA will

[Figure 6. Flowchart of the successive optimization approach: define the optimization problem; run MOGA (1) until the first stopping criterion to obtain a Pareto front; initialize MOGOA (2) with this front and run it until the second stopping criterion; use the resulting front as the initial population of MOGA (3) and run it until the third stopping criterion; output the optimal Pareto front.]

find any Pareto-optimal solution in a finite number of solution evaluations for an arbitrary problem. Then, to make the overall procedure faster and more reliable, the MOGA method is combined with a multiobjective optimization method having local convergence properties, the MultiObjective GOal Attainment (MOGOA) method (MOGOA (2)) (Gembicki, 1974). In this direction,

the MOGA solutions are used as initial data for the MOGOA method (MOGOA (2)) to allow a better exploration of the local solution space. Then, in order to sort the new Pareto front solutions obtained from the hybrid solver and to preserve their diversity, the MOGA method (MOGA (3)) is restarted using these solutions as its initial population (Deb and Goel, 2001; Sindhya et al., 2008) (Figure 6). The MOGOA method (Gembicki, 1974) does not try to preserve solution diversity. It requires three inputs: a start point, a goal and a weight. The start point is the initial point for the algorithm; the goal is a vector of values that the objective function attempts to attain; the weight is a weighting vector controlling the relative under- or over-attainment of the objectives.

2. Dynamic behavior optimization of the studied actuator
First, the elaborated optimization strategy is applied to the optimization of the dynamic performances of the studied actuator. Second, in order to analyze the speed-up from parallelizing the optimization algorithms, this optimal approach is restarted using, successively, parallel MOGA, parallel MOGOA and then parallel MOGA (Figure 6). Finally, the optimization results are reported and compared. The stopping criteria of the first (MOGA (1)), second (MOGOA (2)) and third (MOGA (3)) steps of the flowchart (Figure 6) are reported in Table I: the algorithm stops if the oscillation of the dynamic response is ≤1 mm and the response time is ≤0.2 s. First, the optimization is carried out by the MOGA method (MOGA (1)), Figure 6, whose parameters are given in Table II. MOGA uses real encoding. The crossover is performed by the scattered function: it creates a random binary vector, selects the genes where the vector equals 1 from the first parent and the genes where the vector equals 0 from the second parent, and combines them to form the child (Deb, 2001).
MOGA is based on stochastic uniform selection and uniform mutation (Deb, 2001). The number of objective function evaluations, the average distance of the solutions on the Pareto front (Deb, 2001), the spread (Deb, 2001), which measures the changes between two fronts, and the computation time are given in Table III. The Pareto front obtained from the MOGA optimization method is reached after 551 evaluations of the objective function. Figure 7 plots the function values for some non-inferior solutions (MOGA (1)). In this case, the variation range of the Pareto front solutions is explored by running MOGA with a small number of generations, so there is no guarantee that the whole Pareto front is found. These Pareto front solutions are used as initial data for the MOGOA optimization method, which is faster and more efficient for a local search. The hybrid solver estimates the pseudo-weights for each Pareto front solution, taken equal to the absolute value of the goal, and runs MOGOA from each one; the Pareto front points found so far serve as the goals. The new Pareto front obtained from the hybrid solver is illustrated in Figure 7 (MOGOA (2)). The average distance of the solutions on the Pareto front (Deb, 2001) is improved by the use of the hybrid function (Table III, MOGOA (2)). The spread (Deb, 2001) is higher with the hybrid function, which indicates that the front has changed considerably when MOGA with the hybrid function is used (Table III): the results show an increase of the spread from 0.3236 to 0.3242. Some hybrid solver solutions are also repeated in this Pareto front.
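The three-stage strategy (global MOGA, local goal attainment started from each front member with pseudo-weights, final non-dominated re-sort) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: a seeded random search plays the role of MOGA (1), an accepted-improvement local search on Gembicki's scalarization max_j w_j (f_j(x) − g_j) plays the role of MOGOA (2), a non-dominated filter plays the role of MOGA (3), and the bi-objective test function is a textbook example, not the actuator model:

```python
import random

random.seed(1)

def objectives(x):
    # toy bi-objective problem (Schaffer): both objectives to be minimized
    return (x * x, (x - 2.0) ** 2)

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def non_dominated(pop):
    # keep the solutions whose objective vectors are not dominated
    return [p for p in pop
            if not any(dominates(objectives(q), objectives(p)) for q in pop if q != p)]

def goal_attainment(x0, goal, weight, step=0.05, iters=200):
    # accepted-improvement local search on max_j w_j*(f_j(x) - g_j),
    # standing in for the MOGOA step
    def scal(x):
        return max(w * (f - g) for w, f, g in zip(weight, objectives(x), goal))
    x = x0
    for _ in range(iters):
        cand = min(3.0, max(-1.0, x + random.uniform(-step, step)))
        if scal(cand) < scal(x):
            x = cand
    return x

# Stage 1: a coarse random global search stands in for MOGA (1)
pop = [random.uniform(-1.0, 3.0) for _ in range(40)]
front = non_dominated(pop)

# Stage 2: refine each front member (MOGOA (2) role); as in the paper, each
# member's own objectives act as the goal and the pseudo-weights are |goal|
refined = [goal_attainment(x, objectives(x), [abs(g) + 1e-9 for g in objectives(x)])
           for x in front]

# Stage 3: merge and re-sort the populations (MOGA (3) role)
final_front = non_dominated(front + refined)
```

The merge-and-refilter in stage 3 mirrors the paper's restart of MOGA on the hybrid solver's output to remove repeats and restore a sorted front.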


Then, in order to avoid this repetition and to sort the hybrid solver solutions, these solutions are introduced as the initial population of MOGA, which is restarted (MOGA (3)). The spread and the average distance are decreased by 37 and 26 percent, respectively, which shows that the Pareto front is improved (Table III). The optimal Pareto front is

Table I. Stopping criteria
Criteria                   MOGA (1)   MOGOA (2)   MOGA (3)
Max number of iterations   600        600         600
Dosc limit (mm)            1          1           1
Rtim limit (s)             0.2        0.2         0.2

Table II. MOGA factors
Factors                 Values/types
Generation              10
Population size         50
Crossover probability   0.8000
Mutation probability    0.2000
Crossover type          Scattered
Selection type          Stochastic uniform
Mutation type           Uniform

Table III. Optimization results
Results                 MOGA (1)      MOGOA (2)     MOGA (3)      MOGA (alone)
Number of evaluations   551           2529          501           3581
Average distance        0.0157        0.0159        0.0117        0.0199
Spread                  0.3236        0.3242        0.2041        0.2011
Time (s)                1.6131e+003   1.0338e+004   1.4613e+003   1.0484e+004
Total time (s)          1.3412e+004 (whole approach)              1.0484e+004
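The diversity indicators reported above can be illustrated with a small sketch. This is a simplified variant of Deb's (2001) metrics (the extreme-point terms of the spread measure are omitted, and the front is assumed bi-objective); the paper's exact definitions follow Deb (2001):

```python
import math

def front_metrics(front):
    """Average distance between consecutive front members, and a simplified
    spread measure (normalized mean deviation of consecutive distances)."""
    pts = sorted(front)  # order the bi-objective front along the first objective
    d = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    d_mean = sum(d) / len(d)
    spread = sum(abs(di - d_mean) for di in d) / (len(d) * d_mean)
    return d_mean, spread

d_mean, spread = front_metrics([(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)])
# an evenly spaced front gives spread = 0
```

A lower spread thus indicates a more evenly distributed front, matching the interpretation of the decreases reported after MOGA (3).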

[Figure 7. Pareto front results using the optimization approach: Rtim (s) versus Dosc (mm) for MOGA (1), MOGOA (2) and MOGA (3).]

achieved after 501 evaluations of the objective function (Figure 7, MOGA (3)). Compared with the initial Pareto front, some solutions are retained from the initial front and others are added thanks to the hybrid solver. The results also show that the MOGOA (2) method consumes more computing time than MOGA (1) or (3), and the total computing time of the optimization approach is high, reaching 1.3412e+004 s (Table III). Parallelization of the optimization approach is therefore used to reduce the simulation time (Figure 6). As a result, the total simulation time of the optimization strategy (Figure 6) is reduced by 51 percent, while the total number of objective function evaluations remains almost the same (3,581) (Table IV); using a two-core machine enables two simulations to run at each step of the optimization process. Table IV shows that the average distance and the spread increase with parallel MOGOA (which shows that the Pareto front has changed) and then decrease with parallel MOGA (3) at the last step of the parallel optimization approach (which means that the optimal Pareto front is achieved) (Figure 6). The optimal Pareto fronts resulting from each parallel stage of the optimal approach are presented in Figure 8.

Table IV. Optimization results using the parallelization
Results                 Parallel MOGA (1)   Parallel MOGOA (2)   Parallel MOGA (3)   Parallel MOGA (alone)
Number of evaluations   551                 2529                 501                 3581
Average distance        0.0144              0.0155               0.0134              0.0210
Spread                  0.2768              0.3122               0.2195              0.1770
Time (s)                937.7720            5.1337e+003          879.0921            6.0947e+003
Total time (s)          6.9506e+003 (whole approach)                                 6.0947e+003

[Figure 8. Pareto front results using the parallel optimization approach: Rtim (s) versus Dosc (mm) for parallel MOGA (1), parallel MOGOA (2) and parallel MOGA (3); solution A is marked on the front.]


Comparing the optimization results, the spread and the average distance resulting from the first (parallel MOGA (1)) and second (parallel MOGOA (2)) stages of the parallel optimization approach (Table IV) are lower than those obtained, respectively, from MOGA (1) and MOGOA (2) (Table III). However, these two parameters are higher at the last stage with parallel MOGA (parallel MOGA (3)) (Table IV). As a result, the optimal Pareto front obtained from the simple application of the optimization approach, Figure 7 (MOGA (3)), is better than the one obtained from the parallel optimization approach, Figure 8 (parallel MOGA (3)). The optimal dynamic response over two successive steps and the optimal parameters of solution A (the first solution in the Pareto front of Figure 8) are presented in Figure 9 and Table V, respectively. Comparing this result (Figure 9) to the initial one (Figure 5), the oscillation parameter Dosc decreases from 3.1 to 1.9 mm (Table V), demonstrating the efficiency of the proposed approach. The stabilization time of the second phase is also reduced by 47 percent, from 0.97 to 0.51 s (Table V). The sole application of the MOGA method (without parallelization, Table III, or with it, Table IV), with the same number of optimization approach (or parallel optimization

[Figure 9. Optimal dynamic response on two successive steps of the linear actuator: dynamic position (×10⁻³ m) versus time (s).]

Table V. Initial solution and solution A results
Parameters/responses   Initial solution   Solution A
rr (mm)                4                  2.3310
zr (mm)                0                  0.6240
lb (mm)                0                  2.8940
t1 (s)                 0.5000             0.2761
t2 (s)                 0.5000             0.1215
Dosc (mm)              3.1000             1.892
Rtim (s)               0.9700             0.5150

approach) objective function evaluations, decreases the spread (no significant change in the Pareto front) and increases the average distance. As a result, the Pareto front resulting from the optimization approach (Figure 7), or from the parallel optimization approach (Figure 8), is better than the front obtained from the sole application of the MOGA method, or of parallel MOGA with the same number of objective function evaluations (Figure 10).

Conclusions
An optimal approach is implemented to improve the dynamic performances of incremental linear actuators. First, static and dynamic models of the studied actuator are built and coupled. Second, the optimization problem is formulated. Then, two optimization processes are performed using the optimal approach: the first is based on hybrid MOGA followed by MOGA, and the second uses parallel hybrid MOGA followed by parallel MOGA. The hybrid solver relies mainly on MOGA (or parallel MOGA), while the MOGOA (or parallel MOGOA) method runs after it; this hybrid scheme is used to find a new Pareto front for the multiobjective problem. In both cases (with or without parallelization), MOGA is first run for a small number of generations to get a near-optimal front. The MOGA solutions are then used as initial data for the MOGOA method, which is faster and more accurate for local search. The existing populations are combined with the new individuals returned by the hybrid solver and a new Pareto front is obtained. The hybrid solver estimates the pseudo-weights for each solution on the Pareto front and runs MOGOA starting from each Pareto front solution. The use of the hybrid function results in a new Pareto front, indicated by higher values of the measured average distance and of the spread of the front. However, solution diversity can be lost, because MOGOA does not try to preserve it. In order to sort the new Pareto front solutions obtained from the hybrid solver, MOGA is restarted using them as its initial population; as a result, the optimal Pareto front is reached. The sole use of MOGA can reach the region near an optimal Pareto front, but it consumes more computing time than the multiobjective optimization approach.

[Figure 10. Pareto front results using MOGA: Rtim (s) versus Dosc (mm) for MOGA and parallel MOGA.]


At each flowchart stage, parallelization leads to a significant reduction of computing time, which is halved when using a two-core machine.

References
Alotto, P. (2011), "A hybrid multiobjective differential evolution method for electromagnetic device optimization", International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 30 No. 6, pp. 1815-1828.
Amdouni, I., Saadaoui, R., El Amraoui, L., Gillon, F., Benrejeb, M. and Brochet, P. (2011), "Kernel software platform for electrical devices optimization: application to linear actuator performance's optimization", International Journal of Applied Electromagnetics and Mechanics, Vol. 37 No. 7, pp. 147-157.
Barba, P.D. (2005), "Recent experiences of multiobjective optimisation in electromagnetics: a comparison of methods", International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 24 No. 3, pp. 921-930.
Deb, K. (2001), Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY.
Deb, K. and Goel, T. (2001), "A hybrid multi-objective evolutionary approach to engineering shape design", Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO-01), pp. 385-399.
Gembicki, F.W. (1974), "Vector optimization for control with performance and parameter sensitivity indices", PhD dissertation, Case Western Reserve University, Cleveland, OH.
Lynch, D.R. and Paulsen, K.D. (1990), "Time-domain integration of the Maxwell equations on finite elements", IEEE Transactions on Antennas and Propagation, Vol. 38 No. 12, pp. 1933-1942.
Minoux, M. (1983), Mathematical Programming: Theory and Algorithms, Dunod, Paris.
Reece, A.B.J. and Preston, T.W. (2000), Finite Element Method in Electrical Power Engineering, 1st ed., Oxford University Press, Oxford.
Sindhya, K., Deb, K. and Miettinen, K. (2008), "A local search based evolutionary multi-objective optimization technique for fast and accurate convergence", Proceedings of Parallel Problem Solving from Nature (PPSN 2008), Springer-Verlag, Berlin/Heidelberg, Vol. 5199, pp. 815-824.
Taboada, H.A., Espiritu, J.F. and Coit, D.W. (2008), "MOMS-GA: a multi-objective multi-state genetic algorithm for system reliability optimization design problems", IEEE Transactions on Reliability, Vol. 57 No. 1, pp. 182-191.
Tran, T.V. (2009), "Combinatorial problems and multi-level models for optimal design of electrical machines", PhD dissertation, Ecole Centrale de Lille, Lille.
Xue, X.D., Cheng, K.W.E., Ng, T.W. and Cheung, N.C. (2010), "Multi-objective optimization design of in-wheel switched reluctance motors in electric vehicles", IEEE Transactions on Industrial Electronics, Vol. 57 No. 9, pp. 2980-2987.

Corresponding author
Imen Amdouni can be contacted at: [email protected]


Radial output space mapping for electromechanical systems design
Maya Hage Hassan, Ghislain Remy, Guillaume Krebs and Claude Marchand
Laboratoire de Génie Electrique de Paris (LGEP), CNRS UMR 8507; SUPELEC; Université Pierre et Marie Curie P6 and Université Paris-Sud, Ile-de-France, France


Abstract
Purpose – The purpose of this paper is to set a relation, through adaptive multi-level optimization, between two physical models with different accuracies: a fast coarse model and a fine, time-consuming model. The use case is the optimization of a permanent magnet axial flux electrical machine.
Design/methodology/approach – The paper sets the relation between the two models through radial basis functions (RBF). The optimization is held on the coarse model, and the deduced solutions are used to evaluate the fine model. Thus, through an iterative process, an RBF residue between the models' responses is built to endorse an adaptive correction.
Findings – The paper shows how the use of a residue function permits optimization time to be reduced, the misalignment between the two models to be diminished in a structured strategy, and the optimum of the fine model to be found based on the optimization of the coarse one. The paper also compares the proposed methodology with the traditional approach, output space mapping (OSM), and shows that in the case of large misalignment between models the OSM fails.
Originality/value – This paper proposes an original methodology in electromechanical design based on building a surrogate model by means of RBF on top of an existing physical model.
Keywords Electromagnetic, Space mapping, Multi-level optimization, Surrogate modeling
Paper type Research paper

Introduction
Achieving an optimal design in engineering applications, in terms of design specifications, is often a compromise between final solution accuracy and fast computation/simulation time. The first constraint is due to the use of an expensive, accurate model, the fine model, while the second relies on the evaluation of a faster but less accurate model, denoted the coarse model. The Space Mapping (SM) technique (Bandler et al., 2004; Dennis, 2000), conceived by Bandler in 1994, allows engineers to exploit optimization on the fine model in order to reach a degree of exactness on the final design without being penalized by calculation time. SM has been widely used as an optimization technique for microwave devices (Bandler et al., 1994) and for electromagnetic systems, starting with Hong-Soon et al. (2001), who performed optimization on an interior permanent magnet (IPM) motor, and Tran et al. (2007), who applied varieties of SM techniques to the optimization of a safety isolating transformer. Since its conception, several approaches elaborating this technique have appeared, all attempting to improve the parameter extraction step, which is the key to establishing the mapping between the two model spaces. Other improvements, such as Aggressive Space Mapping, Trust Region Aggressive Space Mapping (Encica et al., 2009) and Implicit Space Mapping (Bandler et al., 2001), eliminate the simulation overhead required by parameter extraction on the fine model space, through the introduction of a quasi-Newton step in the fine space exploiting each fine model iterate as soon as it is available.

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 965-975. © Emerald Group Publishing Limited 0332-1649. DOI 10.1108/COMPEL-05-2013-0192


In 2001, a variant of the SM technique, the output space mapping (OSM), was introduced by Dennis (2000) as an auxiliary mapping to improve the performance of the input SM algorithm. It aims to reduce the misalignment between the fine and coarse models by mapping the response of the coarse model at a point x*_c to the response of the fine model at the same point. OSM has been used in the electromechanics domain without a prior introduction of input SM. For example, it was adopted in the optimization of an internal permanent magnet synchronous machine (Vivier et al., 2011) using two models built on finite element analyses with different accuracies, and was also combined in a three-level output mapping using models with three different levels of accuracy (Ben Ayed et al., 2012; Koziel and Bandler, 2010). Stand-alone output correction can be seen as a shifting of the coarse model responses at each iteration independently; therefore, for a large discrepancy between the response spaces, the correction strategy has to be reviewed for a refined correction. In this paper, an output correction strategy based on radial basis surrogate modeling is proposed to map the error between the two models. In the following paragraphs, we discuss further the OSM technique along with ROSM, and a comparison between both techniques is carried out through the optimization of an axial flux machine.

OSM approaches
OSM
We define the optimization problem to be solved on the fine model as finding the fine model optimizer x*_f that minimizes the distance between the m fine model outputs f_m and the corresponding m design specifications Y_m:

x^*_f = \arg\min_{x \in X} \| f_m(x) - Y_m \|_2^2   (1)

Through the introduction of the OSM technique, the optimization problem is reformulated on a surrogate model. We denote by surrogate model the corrected coarse model, which is trained through the optimization process to fit data derived from the evaluation of the fine model. The coarse model is corrected by means of an output mapping, denoted as the surface in Dennis (2000), which can be expressed as a local multiplicative correction coefficient established by minimizing, at each iteration, the residual between the responses of the two model spaces. At the ith iteration it is given by:

S_j^i = \theta_j^i \, C_j^i(x), \quad \theta_j^i = \frac{f_j^i(x)}{C_j^i(x)}, \quad 1 \le j \le m; \qquad x_s = \arg\min_{x \in X} \| S_m^i(x) - Y_m \|_2^2   (2)
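The iteration implied by Equation (2) can be sketched on a hypothetical one-dimensional fine/coarse pair; the crude grid search stands in for the surrogate optimizer, and the models below are illustrative only, not the paper's machine models:

```python
def fine(x):
    # expensive "fine" model (illustrative stand-in)
    return 1.2 * x * x + 0.3

def coarse(x):
    # cheap "coarse" model
    return x * x

def osm(target, iters=20):
    """Output space mapping with the multiplicative correction theta = f/c:
    optimize the corrected coarse model, then realign it on the fine response."""
    theta = 1.0
    x = None
    for _ in range(iters):
        # surrogate S(x) = theta * coarse(x); a grid search stands in for
        # the surrogate optimizer
        grid = [i / 1000.0 for i in range(1, 3000)]
        x = min(grid, key=lambda g: abs(theta * coarse(g) - target))
        theta = fine(x) / coarse(x)  # Equation-(2)-style correction at x
    return x

x_star = osm(target=2.0)  # fine(x_star) approaches the 2.0 specification
```

Each pass shifts the coarse response so that it agrees with the fine model at the current iterate, which is exactly the per-iteration "shifting" behavior the radial OSM below sets out to refine.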

Radial OSM
Local model correction can be seen as a shifting of the coarse model responses at each iteration independently (Echeverria and Hemker, 2005); the correction strategy has to be reviewed for a refined surrogate model accuracy in the case of a non-linear or large misalignment between the response spaces (the case of a poor coarse model generating very low-fidelity outputs with respect to the fine ones). Using the surface concept to approach surrogate Space Mapping based models implies the search for direct approximation methods that can be used to smooth the generated fine model data. Familiar approximation methods include polynomial modeling, response surfaces

(Simpson et al., 2001), radial basis functions (Mark, 1996) via neural nets, and kriging (Laslett, 1994; Davis and Ierapetritou, 2007). It is worth mentioning the clear distinction between the Space Mapping approach, which exploits an existing physical model capable of simulating the system over a wide range of parameter values (Koziel et al., 2009), and optimization using surrogate models, which establishes a local approximation of fine model responses from a set of fine model simulations. Thus, through the surrogate SM approach, we exploit efficient optimization using two modeling levels, sorted by increasing cost and veracity: the coarse and fine models. The aim is to obtain sufficient accuracy on the global design solution at the lowest possible cost, and to build a meta fine model of increasing exactness through the optimization process. The standard output SM using an additive term assumes that the surrogate model is built by correcting the coarse model responses with an additive term reflecting the residual error between both models, such that at the ith iteration the surrogate model at the jth response is given by:

S_j^i = C_j(x) + \delta_j^i, \quad \text{with } \delta_j^i = f_j^i - C_j^i   (3)

Through the RBF modeling method (Baxter, 1992), an adaptive correction is endorsed that depends on the design variables: a residual function Rd(x) is built through an iterative process, using the responses of both the fine and coarse models. The attractive feature of RBF networks is their non-parametric regression nature, so that their primary use is in estimating model outputs without any a priori assumption on the nature of the estimated function. The residual between the two models is built as a linear combination of Gaussian RBFs, Rd(x) = \sum_{k=1}^{N} w_k h_k(x), where h_k is the basis function at the kth learning center:

h_k(x) = \exp(-c \, r^2), \quad r \ge 0, \; c > 0   (4)

where c is kept fixed and equal to one and r = ||x - x_k||_2. The weight factors w_k are computed so that they satisfy:

\Phi \cdot w = d, \quad w = [w_1 \; \cdots \; w_N]^T, \quad d = [f^1 - c^1; \; f^2 - c^2; \; \ldots; \; f^N - c^N]    (5)

\Phi_{lh} = \exp(-c \, ||x_l - x_h||_2^2) \quad \text{for} \quad 0 \le l, h \le N    (6)

where \Phi is an N x N matrix. Thus the radial surrogate model at the ith iteration is given by:

S^i(x) = C(x) + \sum_{k=1}^{i} w_k h_k(x)    (7)
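As a concrete illustration of Equations (5)-(7), the residual fit can be sketched in a few lines (a minimal sketch, not the authors' implementation; the shape parameter is kept at c = 1 as in the text):

```python
import numpy as np

def rbf_weights(X, f, C, c=1.0):
    """Solve Phi w = d for the residual weights (Equations (5)-(6)).

    X : (N, n) learning centres (the previous iterates)
    f : (N,) fine-model responses at the centres
    C : (N,) coarse-model responses at the centres
    """
    d = f - C                                             # residuals d_k = f^k - c^k
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-c * r2)                                 # Phi_lh = exp(-c ||x_l - x_h||^2)
    return np.linalg.solve(Phi, d)

def surrogate(x, X, w, coarse, c=1.0):
    """Radial surrogate S(x) = C(x) + sum_k w_k exp(-c ||x - x_k||^2) (Equation (7))."""
    r2 = ((X - x) ** 2).sum(axis=1)
    return coarse(x) + w @ np.exp(-c * r2)
```

By construction the corrected surrogate interpolates the fine model exactly at every learning centre, which is the property the iterative algorithm relies on.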


The optimal designs issued from the surrogate model optimization act as the centers of the RBF, and the weight factors are computed iteratively. The algorithm considered here has the following steps:

(1) Compute the iteration on the coarse model: x_c^i = arg min_{x \in X} ||C_m(x) - Y_m||_2^2.
(2) Evaluate the fine model at x_c^i, and stop if the stopping criterion is satisfied.
(3) If i = 1, set d^i = f^i(x) - C^i(x) as the first additive correction factor. If i > 1, compute the radial mapping from the sets of fine model outputs [f^1(x), ..., f^i(x)] and coarse outputs [C^1(x), ..., C^i(x)], using Equation (5).
(4) Compute the new surrogate model using Equation (7), set i = i + 1, and go to step 1.
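The loop can be sketched as follows; this is a schematic, hypothetical illustration (not the authors' code) in which the surrogate optimization of step 1 is reduced to a one-dimensional grid search and the goal is a single target response:

```python
import numpy as np

def rosm(coarse, fine, target, x_grid, tol=0.02, max_iter=20):
    """Schematic radial output space mapping loop (steps 1-4), 1-D case."""
    centres, resid, weights = [], [], []

    def surrogate(x):                        # S^i(x) = C(x) + sum_k w_k h_k(x)
        s = coarse(x)
        for xk, wk in zip(centres, weights):
            s += wk * np.exp(-(x - xk) ** 2)
        return s

    for _ in range(max_iter):
        # step 1: optimise the current surrogate towards the target response
        xi = min(x_grid, key=lambda x: (surrogate(x) - target) ** 2)
        fi = fine(xi)                        # step 2: a single fine evaluation
        if abs(fi - target) < tol:
            break
        # steps 3-4: add xi as a centre and refit the radial residual (Eq. (5))
        centres.append(xi)
        resid.append(fi - coarse(xi))
        Phi = np.exp(-np.subtract.outer(centres, centres) ** 2)
        weights = list(np.linalg.solve(Phi, np.array(resid)))
    return xi
```

With a roughly constant coarse-fine shift, the loop typically meets the tolerance within a handful of fine evaluations, which is the cost advantage the method aims for.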

Optimization of an axial synchronous machine
In order to compare the discussed variants of OSM, we consider a constrained mono-objective optimization of a permanent-magnet 12/4 axial-flux synchronous machine. Our aim is to find the optimal physical characteristics of the machine that minimize its total losses while respecting five non-linear constraints on the machine's torques (T), electromotive forces (Emf) and current density (J). Total losses (ETotal) are minimized with respect to a given mission profile: the torque-speed couples issued from the Artemis road and urban cycles (Figure 1) are chosen to fit the torque-speed characteristic of the machine (Figure 2).

[Figure 1. Artemis road and urban cycles: (a) speed (rpm) and (b) torque (Nm) against the number of characteristic points.]

[Figure 2. Torque-speed characteristic of the machine and the corresponding operating points, with the Emf and torque constraints at basic speed and at high speed.]

Optimization problem
The machine's design specifications are summarized in Table I; a 2D equivalent model of the half machine is presented in Figure 3.

Fine and coarse models
The fine and coarse models are both built on a reluctance network (Figure 4). As for the network topology, the number of reluctances describing the teeth, yokes and magnets is kept constant; the air gap is described by vertical and horizontal reluctances that depend on a discretization step dx, such that:

r_h = \frac{dx}{\mu_0 \, a \, d}, \qquad r_v = \frac{a/2}{\mu_0 \, dx \, d}    (8)

where a is the air-gap length, d the machine's depth and \mu_0 the permeability of vacuum. The magnetic materials are assumed to have a linear behavior. For each displacement step, a network of reluctances is established, and a motor map in terms of flux linkages and magnetic flux density is obtained by examining all possible combinations of phase angle and current. From this map, the average torque of each pair of phase angle and current is computed using the derivatives of the flux linkages and

Table I. Model design specifications

Design domain: 90 ≤ x1 ≤ 150 (mm), 70 ≤ x2 ≤ 110 (mm), 5 ≤ x3 ≤ 10 (mm), 10 ≤ x4 ≤ 200 (mm), 20 ≤ x5 ≤ 100 (mm), 3 ≤ x6 ≤ 50 (turns)
Objective (total energies): ETotal = EIron + Eresistive + EInverter + EMagnet
Non-linear inequality constraints: Emfb ≤ 255 (V), Emfhs ≤ 260 (V), J < 9 (A/mm^2)
Non-linear equality constraints: Tb = 100 (Nm), Ths = 33.33 (Nm)
Other model data: machine length = 352 (mm), outer radius = 150 (mm), inner radius = 51 (mm), number of stator teeth = 6, number of rotor magnets = 8, air gap = 0.5 (mm), Br = 1.19 (T)

[Figure 3. 2D equivalent model of the axial flux machine, showing the rotor pole, magnet, air gap, stator tooth and coils. Design vector x = [x1, x2, x3, x4, x5, x6]: x1 pole width, x2 tooth width, x3 magnet width, x4 tooth height, x5 rotor height, x6 number of turns.]

[Figure 4. The machine's reluctance network: the rotor and stator parts are linked through a moving and a fixed air-gap layer built from horizontal reluctances r_h and vertical reluctances r_v, with periodic boundary conditions on both sides.]

corresponding currents. The coarse model is derived from the fine model by making simplifications regarding:

(1) the rotoric step dx; and
(2) the losses, which are calculated through a barycentric method.

Thus, the torque-speed space is divided into five regions (Figure 2); each region is represented by its barycenter and the parameters required to calculate the equivalent losses. Five parameters are to be calculated per barycenter, as indicated in Table II. The resistive losses are calculated at each barycenter as follows:

(E_J)_i = N_i \, R \, I_i^2 \, \frac{\langle T^2 \rangle_i}{\langle T \rangle_i^2}    (9)

The iron energy losses per unit volume, calculated with the Bertotti formula for an unspecified periodic variation of the magnetic flux density (Bertotti, 1988), are given by the common formula:

E_{iron} = \left( k_h \, f \, \Delta B^2 + \frac{k_f}{T} \int_0^T \left( \frac{dB}{dt} \right)^2 dt \right) \Delta t    (10)

where k_f and k_h are empiric coefficients characterizing the iron, which can be calculated from the manufacturer's data (Hoang, 1995), f = 1/T is the fundamental frequency, B(t) the flux density in the considered volume, \Delta t the time consumed per cycle and

Table II. Barycenter parameters

Parameter per barycenter i                       Expression
Mean speed and quadratic mean speed              ⟨Ω⟩_i, ⟨Ω^2⟩_i^{1/2}
Mean torque and quadratic mean torque            ⟨T⟩_i, ⟨T^2⟩_i^{1/2}
Number of points associated with the barycenter  N_i

p the number of poles. The flux density variations being considered sinusoidal, the iron loss expression is reformulated and expressed at each barycenter as follows:

(E_{iron})_i = \left( k_h \, \frac{p}{2\pi} \, \langle \Omega \rangle_i \, B_{max}^2 + k_f \, \frac{p^2}{(2\pi)^2} \, \langle \Omega^2 \rangle_i \, B_{max}^2 \right) N_i    (11)

Under similar assumptions on the flux density, the eddy-current losses per unit of magnet volume to be determined at each barycenter are expressed by (Polinder and Hoeijmakers, 1999):

(E_{magn})_i = \frac{e_m^2}{12 \, \rho_m} \, \langle \Omega^2 \rangle_i \, B_{max}^2 \, N_i    (12)

where e_m is the magnet segment width and \rho_m the magnet resistivity.
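The barycentric loss evaluation can be sketched as follows (a minimal sketch based on the formulas (9), (11) and (12) as reconstructed above; the exact constant factors should be checked against the original typeset equations, and the symbols e_m and rho_m follow Polinder and Hoeijmakers (1999)):

```python
import numpy as np

def barycentre_losses(T, Omega, R, I, Bmax, p, kh, kf, em, rho_m):
    """Loss terms of one barycentre from the N_i operating points it represents.

    T, Omega : arrays of torque (Nm) and speed (rad/s) of the points
    """
    N = len(T)
    mean_T, mean_T2 = np.mean(T), np.mean(T ** 2)          # <T>_i, <T^2>_i
    mean_O, mean_O2 = np.mean(Omega), np.mean(Omega ** 2)  # <Omega>_i, <Omega^2>_i
    E_J = N * R * I ** 2 * mean_T2 / mean_T ** 2                        # Eq. (9)
    E_iron = (kh * p / (2 * np.pi) * mean_O * Bmax ** 2
              + kf * (p / (2 * np.pi)) ** 2 * mean_O2 * Bmax ** 2) * N  # Eq. (11)
    E_mag = em ** 2 / (12 * rho_m) * mean_O2 * Bmax ** 2 * N            # Eq. (12)
    return E_J, E_iron, E_mag
```

For a barycentre whose points all share the same torque, ⟨T^2⟩_i/⟨T⟩_i^2 = 1 and the resistive term reduces to N_i R I_i^2, which is a quick sanity check on the implementation.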

Through this accuracy modification of the coarse model outputs, a non-linear coarse-fine model misalignment is established, especially on one specific output, the basic torque Tb. The determination of this output relies on the average torque map described above; therefore any alteration of the rotoric step dx affects the estimation of the basic torque at the operating point determined by the torque-speed couple (100 Nm, 314.16 rad/s), as well as the electromotive force at basic speed. The specifications of the coarse and fine models are presented in Table III. To ease the comparison, both models are evaluated at X = [105, 80, 6.25, 57.5, 40, 14]; the results of these evaluations and the relative errors are shown in Table IV.

Table III. Coarse and fine model specifications

Characteristic         Coarse    Fine
dx                     10^-2     2 x 10^-3
No. operating points   5         818
Computation time (s)   3.6       84.8

Table IV. Coarse and fine model comparison

Characteristic   Coarse    Fine        Relative error (%)
Eresistive (J)   62,484    65,270      4.3
EIron (J)        280,990   293,570     4.3
EMagnet (J)      34,912    37,507      6.9
EInverter (J)    573,800   631,380     9.1
ETotal (J)       952,186   1,027,727   7.35
Tb (Nm)          97.1      93.1        4.1
Ths (Nm)         33.33     33.33       0
Emfb (V)         198.53    214.1       7.5
Emfhs (V)        294.61    336.9       12.55

Optimization results
In general, the search for an optimal design that minimizes an unknown objective should not rely on a criterion set up a priori. However, in order to compare the behavior of the two algorithms, the optimization problem is reformulated as a goal-attainment problem, in which a design specification is imposed on the objective function. For both algorithms, Matlab's sequential quadratic programming (SQP) routine is used to solve the optimization on the surrogate model, with the same starting point (the upper bounds of the design domain spanned by the optimization variables, Table I) for all specified goals. To examine the nature of the convergence closely, the convergence history is reported for the goal of total losses equal to 1,300 kJ. The radial OSM history (Figures 5 and 6) shows a smooth, monotonous convergence towards the preset model specifications, as well as the convergence values of the different output functions. Table V reflects the rigorous modeling of the surrogate-assisted coarse model throughout the multi-level optimization phases.

[Figure 5. ROSM convergence history, relative error (%) against the number of iterations: (a) total losses; (b) basic torque.]

[Figure 6. ROSM convergence history, relative error (%) against the number of iterations: (a) basic Emf; (b) high-speed Emf.]

Table V. Radial output space mapping results

ETotal (kJ)  x1 (mm)  x2 (mm)  x3 (mm)  x4 (mm)  x5 (mm)  x6 (turns)  Tb (Nm)  Ths (Nm)  Emfb (V)  Emfhs (V)  J (A/mm^2)  No. iter
1,000        150      94.5     9        33.665   50.53    10.77       100      33.33     123.4     256        8.99        7
1,100        150      98.7     9.3      40.8     62.03    9.69        100      33.33     74        249.9      8.2         6
1,200        150      104.4    8.9      69.6     56.138   10.6        100      33.33     124.8     258        6.8         5
1,300        150      106.7    8.8      94.7     49.35    11.14       100      33.33     142.47    260        7.5         5
1,400        150      109.49   8.4      106.9    51.6     11.13       100      33.33     150.59    254.7      8.5         4

In this case of large discrepancy, the alignment of the two models by standard OSM may prove unsuitable for this type of problem: the coupling between the basic torque and the electromotive force at basic speed, together with the noisy relation between the coarse and fine models, acts as the primary cause of the noisy, non-monotonous convergence shown in Figures 7 and 8. Additionally, the inability of this method to converge rigorously is shown in Table VI.

Conclusions
A new variant of output mapping is proposed in the electromagnetic field in order to increase the final robustness of an optimal design obtained with an approximated model, through the progressive construction of a radial basis surrogate model on the


[Figure 7. OSM convergence history, relative error (%) against the number of iterations: (a) total losses; (b) basic torque.]

[Figure 8. OSM convergence history, relative error (%) against the number of iterations: (a) basic Emf; (b) high-speed Emf.]

Table VI. Output space mapping results

ETotal (kJ)  x1 (mm)  x2 (mm)  x3 (mm)  x4 (mm)  x5 (mm)  x6 (turns)  Tb (Nm)  Ths (Nm)  Emfb (V)  Emfhs (V)  J (A/mm^2)  No. iter
1,000        150      94.5     9        35.665   43.8     11.4        100      33.33     129.4     259        9           7
1,100        150      99       9.26     56       38.2     11.8        98.6     33.33     83.3      254        7.29        6
1,200        150      104.5    8.9      64.27    51.9     10.19       93.7     33.33     63.38     241        7.13        6
1,300        150      102.7    8.79     98.38    43.23    11          92.69    33.33     60.2      245.9      8.02        6
1,400        150      109.56   8.4      107.15   51.54    11.15       100      33.33     150.59    255        8.5         4


bulk of the coarse model. This additive technique benefits from a simple implementation and from an efficient convergence, compared with the commonly used OSM, in the case of a large or non-linear misalignment between the considered fine and coarse models. These methods can also be beneficial in a standard optimization process, if one maps the output functions that are essentially affected by the model's accuracy (for example, magnetic fields, forces and torques) while setting as objectives quantities that are not influenced, such as the total mass or the manufacturing cost.

References
Bandler, J.W., Biernacki, R.M., Shao Hua, C., Grobelny, P.A. and Hemmers, R.H. (1994), "Space mapping technique for electromagnetic optimization", IEEE Transactions on Microwave Theory and Techniques, Vol. 42 No. 12, pp. 2536-2544.
Bandler, J.W., Cheng, Q.S., Dakroury, S.A., Mohamed, A.S., Bakr, M.H., Madsen, K. and Sondergaard, J. (2004), "Space mapping: the state of the art", IEEE Transactions on Microwave Theory and Techniques, Vol. 52 No. 1, pp. 337-361.
Bandler, J.W., Qingsha, C., Gebre-Mariam, D.H., Madsen, K., Pedersen, F. and Sondergaard, J. (2001), "EM-based surrogate modeling and design exploiting implicit, frequency and output space mappings", Microwave Symposium Digest, IEEE MTT-S International, Vol. 2, pp. 1003-1006.
Baxter, B. (1992), The Interpolation Theory of Radial Basis Functions, Cambridge University, Cambridge.
Ben Ayed, R., Gong, J., Brisset, S., Gillon, F. and Brochet, P. (2012), "Three-level output space mapping strategy for electromagnetic design optimization", IEEE Transactions on Magnetics, Vol. 48 No. 2, pp. 671-674.
Bertotti, G. (1988), "General properties of power losses in soft ferromagnetic materials", IEEE Transactions on Magnetics, Vol. 24 No. 1, pp. 621-630.
Davis, E. and Ierapetritou, M. (2007), "A kriging method for the solution of nonlinear programs with black-box functions", AIChE Journal, Vol. 53 No. 8, pp. 2001-2012.
Dennis, J.E.
(2000), "A summary of the Danish Technical University workshop", Department of Computational and Applied Mathematics, Rice University, Houston, TX, November.
Echeverria, D. and Hemker, P.W. (2005), "Space mapping and defect correction", Computational Methods in Applied Mathematics, Vol. 5 No. 2, pp. 107-136.
Encica, L., Paulides, J. and Lomonova, E. (2009), "Space-mapping optimization in electromechanics: an overview of algorithms and applications", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 28 No. 5, pp. 1216-1226.
Hoang, E. (1995), "Étude, modélisation et mesure des pertes magnétiques dans les moteurs à réluctance variable à double saillance", PhD dissertation, ENS de Cachan, France.
Hong-Soon, C., Dong-Hun, K., Il-Han, P. and Song-Yop, H. (2001), "A new design technique of magnetic systems using space mapping algorithm", IEEE Transactions on Magnetics, Vol. 37 No. 5, pp. 3627-3630.
Koziel, S. and Bandler, J.W. (2010), "Coarse models for efficient space mapping optimization of microwave structures", IET Microwaves, Antennas & Propagation, Vol. 4 No. 4, pp. 453-465.
Koziel, S., Bandler, J.W. and Madsen, K. (2009), "Space mapping with adaptive response correction for microwave design optimization", IEEE Transactions on Microwave Theory and Techniques, Vol. 57 No. 2, pp. 478-486.
Laslett, G.M. (1994), "Kriging and splines: an empirical comparison of their predictive performance in some applications", Journal of the American Statistical Association, Vol. 89 No. 426, pp. 391-400.

Mark, J.O. (1996), "Introduction to radial basis function networks", technical report, Center of Cognitive Science, University of Edinburgh, Edinburgh.
Polinder, H. and Hoeijmakers, M.J. (1999), "Eddy-current losses in the segmented surface-mounted magnets of a PM machine", IEE Proceedings - Electric Power Applications, Vol. 146 No. 3, pp. 261-266.
Simpson, T., Poplinski, J., Koch, P.N. and Allen, J. (2001), "Metamodels for computer-based engineering design: survey and recommendations", Engineering with Computers, Vol. 17 No. 2, pp. 129-150.
Tran, T.V., Brisset, S., Echeverria, D., Lahaye, D. and Brochet, P. (2007), Space-Mapping Techniques Applied to Optimization of a Safety Isolating Transformer, ISEF, Prague.
Vivier, S., Lemoine, D. and Friedrich, G. (2011), "Fast optimization of a linear actuator by space mapping using a unique finite-element model", IEEE Transactions on Industry Applications, Vol. 47 No. 5, pp. 2059-2065.

About the authors
Maya Hage Hassan was born in Beirut, Lebanon, in 1986. She received her BS degree in mechanical engineering from the Lebanese University, Faculty of Engineering, Beirut, Lebanon, in 2010 and her MS degree in mechanical engineering from the Ecole Centrale de Nantes, Nantes, France, in 2010. She is currently working toward the PhD degree in electrical engineering at the Laboratoire de Génie Electrique de Paris, Paris, France. Her current research interests include the optimization and design of electric machines and machine drives. Maya Hage Hassan is the corresponding author and can be contacted at: [email protected]
Associate Professor Ghislain Remy was born in Epinal, France, in 1977. He received the teaching degree "Agrégation" from the Ecole Normale Supérieure de Cachan, France, in 2001, and a PhD degree from the Ecole Nationale Supérieure d'Arts et Métiers (ENSAM) of Lille, France, in 2007.
Since 2008, he has been an Associate Professor at the Institut Universitaire de Technologie of Cachan (Université Paris-Sud) and with the Laboratoire de Génie Electrique de Paris (LGEP)/Sud de Paris Energie Electrique, CNRS UMR8507, Ecole Supérieure d'Electricité, Universités Paris VI and XI, Gif-sur-Yvette, France. His current research interests include optimization for system design and the real-time control of electromechanical systems with multi-domain and multi-level approaches.
Dr Guillaume Krebs was born in Croix, France, in 1978. He received the Engineer degree in 2003 and the PhD degree in electrical engineering from the Université de Lille 1 (USTL) in 2007. He has been an Assistant Professor at the Université Paris-Sud since September 2008. His research areas (at the Laboratoire de Génie Electrique de Paris, LGEP) are the modelling of electrical machines and non-destructive testing by the finite element method.
Professor Claude Marchand graduated from the Ecole Normale Supérieure de Cachan and received the PhD degree from the Université Paris VI in 1991. Since 1988 he has been with the Laboratoire de Génie Electrique de Paris (LGEP). From 1994 to 2000, he was an Assistant Professor at the Institut Universitaire de Technologie of Cachan (Université Paris-Sud). Since 2000 he has been a Professor at the Université Paris-Sud. He is the head of the LGEP Research Department "Modelling and Control of Electromagnetic Systems". His research interests are in eddy-current non-destructive testing and in the design and control of electrical actuators.




Minimum energy control of descriptor positive discrete-time linear systems


Tadeusz Kaczorek
Faculty of Electrical Engineering, Bialystok University of Technology, Bialystok, Poland

Abstract
Purpose - The purpose of this paper is to formulate and solve the minimum energy control problem of descriptor positive discrete-time linear systems.
Design/methodology/approach - A procedure for computation of the optimal input sequences and the minimal value of the performance index is proposed.
Findings - Necessary and sufficient conditions for the positivity and reachability of descriptor positive discrete-time linear systems, and sufficient conditions for the existence of a solution to the minimum energy control problem, are given.
Originality/value - A method for solving the minimum energy control problem of descriptor positive discrete-time linear systems is proposed.
Keywords Systems theory, Optimal control, Control theory, Linear systems, State space
Paper type Research paper

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 976-988. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/COMPEL-04-2013-0111

1. Introduction
A dynamical system is called positive if its trajectory starting from any nonnegative initial state remains forever in the positive orthant for all nonnegative inputs. An overview of the state of the art in positive systems theory is given in the monographs (Farina and Rinaldi, 2000; Kaczorek, 2002). A variety of models exhibiting positive behaviour can be found in engineering, economics, social sciences, biology, medicine, etc. Descriptor (singular) linear systems with regular pencils have been considered in many papers and books (Rami and Napp, 2012; Brull, 2009; Bru et al., 2003; Campbell, 1980; Canto et al., 2008; Dai, 1989; Kaczorek, 1992, 2011b; Virnik, 2008a, b). The existence of solutions to the standard and positive descriptor linear systems, some of their structural properties and their stability have been investigated in Brull (2009), Campbell (1980), Canto et al. (2008), Dai (1989), Kaczorek (1992) and Virnik (2008b). The minimum energy control problem for standard linear systems has been formulated and solved by Klamka (1976, 1983, 1991), and for 2D linear systems with variable coefficients in Kaczorek and Klamka (1986). The controllability and minimum energy control problem of fractional discrete-time linear systems has been investigated by Klamka (2010). The minimum energy control of fractional positive continuous-time linear systems has been addressed in Kaczorek (2013). In this paper, necessary and sufficient conditions for the positivity and reachability will be established, and the minimum energy control problem of descriptor positive discrete-time linear systems will be formulated and solved. (This work was supported by the National Science Centre in Poland under work S/WE/1/11.) The paper is organized as follows. In Section 2 necessary and sufficient conditions for the positivity of the descriptor systems are given. The transformation of the

descriptor systems to equivalent standard systems by the use of the shuffle algorithm is considered in Section 3. In Section 4 the reachability of positive descriptor linear systems is discussed. The minimum energy control problem of descriptor positive systems is formulated and solved in Section 5. Concluding remarks are given in Section 6. The following notation will be used: R, the set of real numbers; R^{n x m}, the set of n x m real matrices; R_+^{n x m}, the set of n x m matrices with nonnegative entries, with R_+^n = R_+^{n x 1}; M_n, the set of n x n Metzler matrices (real matrices with nonnegative off-diagonal entries); I_n, the n x n identity matrix.

2. Descriptor discrete-time linear systems
Consider the descriptor discrete-time linear system:

E x_{i+1} = A x_i + B u_i, \quad i \in Z_+ = \{0, 1, \ldots\}    (2.1)

where x_i \in R^n and u_i \in R^m are the state and input vectors and E, A \in R^{n x n}, B \in R^{n x m}. It is assumed that det E = 0 and the pencil Ez - A is regular, i.e. det[Ez - A] \ne 0 for some z \in C (the field of complex numbers).

Definition 2.1: The system (2.1) is called (internally) positive if x_i \in R_+^n for every consistent nonnegative initial condition x_0 \in R_+^n and every input u_i \in R_+^m, i \in Z_+. The set of consistent nonnegative initial conditions x_0 \in R_+^n depends not only on the matrices E, A, B but also on the admissible inputs u_i \in R_+^m (Rami and Napp, 2012; Bru et al., 2003; Canto et al., 2008; Kaczorek, 2002; Virnik, 2008b).

In the next section it will be shown that the system (2.1) can be reduced to the equivalent form:

x_{i+1} = \bar{A} x_i + \bar{B}_0 u_i + \bar{B}_1 u_{i+1} + \cdots + \bar{B}_{q-1} u_{i+q-1}    (2.2)

where \bar{A} \in R^{n x n} and \bar{B}_k \in R^{n x m}, k = 0, 1, \ldots, q-1, will be defined in Section 3 and q is the index of the pair (E, A).

Theorem 2.1: The descriptor discrete-time linear system (2.2) is positive if and only if:

\bar{A} \in R_+^{n x n}, \quad \bar{B}_k \in R_+^{n x m}, \quad k = 0, 1, \ldots, q-1    (2.3)

Proof: Sufficiency. The solution of (2.2) has the form:

x_i = \bar{A}^i x_0 + \sum_{k=0}^{i-1} \bar{A}^{i-k-1} \left( \bar{B}_0 u_k + \bar{B}_1 u_{k+1} + \cdots + \bar{B}_{q-1} u_{k+q-1} \right)    (2.4)

From (2.4) and (2.2) it follows that if (2.3) holds, then x_i \in R_+^n for every x_0 \in R_+^n and all u_i \in R_+^m, i \in Z_+.
Necessity. Let u_i = 0 for i \in Z_+. Then from (2.2) for i = 0 we have x_1 = \bar{A} x_0 \in R_+^n. This implies \bar{A} \in R_+^{n x n} since x_0 \in R_+^n is arbitrary. In a similar way we may show that \bar{B}_k \in R_+^{n x m} for k = 0, 1, \ldots, q-1. ∎


3. Transformation of the state equations by the use of the shuffle algorithm
The following elementary row operations will be used (Kaczorek, 1992, 2011b):

(1) Multiplication of the ith row by a real number c. This operation will be denoted by L[i x c].
(2) Addition to the ith row of the jth row multiplied by a real number c. This operation will be denoted by L[i + j x c].
(3) Interchange of the ith and jth rows. This operation will be denoted by L[i, j].
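On a numerical array, these three operations can be sketched as follows (rows numbered from 1 as in the text; a minimal illustration, not tied to any particular library):

```python
import numpy as np

def L_scale(M, i, c):
    """L[i x c]: multiply the i-th row by the real number c."""
    M = M.astype(float).copy()
    M[i - 1] *= c
    return M

def L_add(M, i, j, c):
    """L[i + j x c]: add the j-th row multiplied by c to the i-th row."""
    M = M.astype(float).copy()
    M[i - 1] += c * M[j - 1]
    return M

def L_swap(M, i, j):
    """L[i, j]: interchange the i-th and j-th rows."""
    M = M.astype(float).copy()
    M[[i - 1, j - 1]] = M[[j - 1, i - 1]]
    return M
```

Each operation is invertible, so applying them to the array [E | A | B] preserves the solution set of (2.1), which is what the shuffle algorithm relies on.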

Using the shuffle algorithm (Kaczorek, 1992) we shall transform the state equation (2.1), with det E = 0 and regular pencil Ez - A, to the equivalent form (2.2). Performing elementary row operations on the array:

[ E | A | B ]    (3.1)

or equivalently on (2.1), we get:

[ E_1  A_1  B_1 ; 0  A_2  B_2 ]    (3.2)

and:

E_1 x_{i+1} = A_1 x_i + B_1 u_i    (3.3a)
0 = A_2 x_i + B_2 u_i    (3.3b)

where E_1 has full row rank. Substituting i by i + 1 in (3.3b) we obtain:

A_2 x_{i+1} = -B_2 u_{i+1}    (3.4)

The equations (3.3a) and (3.4) can be written in the form:

[ E_1 ; A_2 ] x_{i+1} = [ A_1 ; 0 ] x_i + [ B_1 ; 0 ] u_i + [ 0 ; -B_2 ] u_{i+1}    (3.5)

The array:

[ E_1  A_1  B_1  0 ; A_2  0  0  -B_2 ]    (3.6)

can be obtained from (3.2) by performing a shuffle. If det [ E_1 ; A_2 ] \ne 0 then, solving the equation (3.5), we obtain:

x_{i+1} = [ E_1 ; A_2 ]^{-1} \left( [ A_1 ; 0 ] x_i + [ B_1 ; 0 ] u_i + [ 0 ; -B_2 ] u_{i+1} \right)    (3.7)

If the matrix is singular, then performing elementary row operations on (3.6) we obtain:

[ E_2  A_3  B_3  C_1 ; 0  A_4  B_4  C_2 ]    (3.8)

and:

E_2 x_{i+1} = A_3 x_i + B_3 u_i + C_1 u_{i+1}    (3.9a)
0 = A_4 x_i + B_4 u_i + C_2 u_{i+1}    (3.9b)

where E_2 has full row rank and rank E_2 ≥ rank E_1. Substituting i by i + 1 in (3.9b) we obtain:

A_4 x_{i+1} = -B_4 u_{i+1} - C_2 u_{i+2}    (3.10)

The equations (3.9a) and (3.10) can be written as:

[ E_2 ; A_4 ] x_{i+1} = [ A_3 ; 0 ] x_i + [ B_3 ; 0 ] u_i + [ C_1 ; -B_4 ] u_{i+1} + [ 0 ; -C_2 ] u_{i+2}    (3.11)

The array:

[ E_2  A_3  B_3  C_1  0 ; A_4  0  0  -B_4  -C_2 ]    (3.12)

can be obtained from (3.8) by performing a shuffle. If det [ E_2 ; A_4 ] \ne 0, we can find x_{i+1} from (3.11). If the matrix is singular, we repeat the procedure for (3.12). If the pencil is regular, then after q - 1 steps we obtain a nonsingular matrix [ E_{q-1} ; A_{q+1} ] and:

x_{i+1} = [ E_{q-1} ; A_{q+1} ]^{-1} \left( [ A_q ; 0 ] x_i + [ B_q ; 0 ] u_i + [ C_q ; 0 ] u_{i+1} + \cdots + [ 0 ; H_q ] u_{i+q-1} \right)    (3.13)

Therefore, it has been shown that if det E = 0 and the pencil Ez - A is regular, then the equation (2.1) can be reduced to the equivalent form (3.13) (or (2.2)) by the use of the shuffle algorithm.

Example 3.1. Reduce the descriptor system (2.1) with the matrices:

E = [ 1 0 0 ; 0 1 0 ; 0 0 0 ], \quad A = [ 1 1 0 ; 0 0 1 ; -1 0 1 ], \quad B = [ 1 ; 0 ; -1 ]    (3.14)

to the equivalent form (2.2) and check the positivity of the system.

In this case the array (3.1) already has the desired form (3.2):

[ E | A | B ] = [ E_1  A_1  B_1 ; 0  A_2  B_2 ] = [ 1 0 0 | 1 1 0 | 1 ; 0 1 0 | 0 0 1 | 0 ; 0 0 0 | -1 0 1 | -1 ]    (3.15a)

where:

E_1 = [ 1 0 0 ; 0 1 0 ], \quad A_1 = [ 1 1 0 ; 0 0 1 ], \quad A_2 = [ -1 0 1 ], \quad B_1 = [ 1 ; 0 ], \quad B_2 = [ -1 ]    (3.15b)

Performing the shuffle on (3.15a) we obtain:

[ E_1  A_1  B_1  0 ; A_2  0  0  -B_2 ] = [ 1 0 0 | 1 1 0 | 1 | 0 ; 0 1 0 | 0 0 1 | 0 | 0 ; -1 0 1 | 0 0 0 | 0 | 1 ]    (3.16)

and the nonsingular matrix:

[ E_1 ; A_2 ] = [ 1 0 0 ; 0 1 0 ; -1 0 1 ]    (3.17)

In this case the equation (3.7) has the form:

x_{i+1} = [ E_1 ; A_2 ]^{-1} \left( [ A_1 ; 0 ] x_i + [ B_1 ; 0 ] u_i + [ 0 ; -B_2 ] u_{i+1} \right) = \bar{A} x_i + \bar{B}_0 u_i + \bar{B}_1 u_{i+1}    (3.18a)

where:

\bar{A} = [ E_1 ; A_2 ]^{-1} [ A_1 ; 0 ] = [ 1 1 0 ; 0 0 1 ; 1 1 0 ], \quad \bar{B}_0 = [ E_1 ; A_2 ]^{-1} [ B_1 ; 0 ] = [ 1 ; 0 ; 1 ], \quad \bar{B}_1 = [ E_1 ; A_2 ]^{-1} [ 0 ; -B_2 ] = [ 0 ; 0 ; 1 ]    (3.18b)
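The shuffle step of Example 3.1 can be checked numerically; the sketch below uses the matrices of (3.14)-(3.15) with the sign conventions as reconstructed above:

```python
import numpy as np

E = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 0]])
A = np.array([[1., 1, 0], [0, 0, 1], [-1, 0, 1]])
B = np.array([[1.], [0], [-1]])

# split off the full-row-rank part of E (rows 1-2) from the algebraic row (row 3)
E1, A1, B1 = E[:2], A[:2], B[:2]
A2, B2 = A[2:], B[2:]

# one shuffle: [E1; A2] x_{i+1} = [A1; 0] x_i + [B1; 0] u_i + [0; -B2] u_{i+1}
Minv = np.linalg.inv(np.vstack([E1, A2]))
Abar = Minv @ np.vstack([A1, np.zeros_like(A2)])
B0bar = Minv @ np.vstack([B1, np.zeros_like(B2)])
B1bar = Minv @ np.vstack([np.zeros_like(B1), -B2])

print(Abar)                           # [[1. 1. 0.] [0. 0. 1.] [1. 1. 0.]]
print(B0bar.ravel(), B1bar.ravel())   # [1. 0. 1.] [0. 0. 1.]
```

All three matrices come out nonnegative, which is exactly the positivity condition (2.3) of Theorem 2.1.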

By Theorem 2.1 the descriptor system (2.1) with (3.14) is positive, since the matrices (3.18b) have nonnegative entries.

4. Reachability of the positive systems
The reachability of continuous-time positive linear systems has been investigated in Valcher (2009). Consider the positive discrete-time linear system (2.2).

Definition 4.1: The positive system (2.2) is called reachable in n steps if for any given x_f \in R_+^n there exists an input sequence u_k \in R_+^m for k = 0, 1, \ldots, h-1, h = n + q, that steers the state of the system from x_0 = 0 to x_f \in R_+^n, i.e. x_n = x_f.

Theorem 4.1. The positive system (2.2) is reachable in n steps if and only if the reachability matrix:

R_{hm} = [ \bar{B}_q \;\; \bar{B}_{q-1} + \bar{A}\bar{B}_q \;\; \cdots \;\; \bar{A}^{n-1}\bar{B}_1 + \bar{A}^{n-2}\bar{B}_0 \;\; \bar{A}^{n-1}\bar{B}_0 ] \in R_+^{n x hm}    (4.1)

contains the monomial matrix:

R_n = [ \hat{R}_1 \;\; \hat{R}_2 \;\; \cdots \;\; \hat{R}_n ] \in R_+^{n x n}    (4.2)

where \hat{R}_k, k = 1, 2, \ldots, n, are some linearly independent monomial columns of (4.1).

Proof: The solution of the equation (2.2) for i = n and x_0 = 0 has the form:

x_f = x_n = \sum_{k=0}^{n-1} \bar{A}^{n-k-1} \left( \bar{B}_0 u_k + \bar{B}_1 u_{k+1} + \cdots + \bar{B}_q u_{k+q} \right) = R_{hm} [ u_{h-1} ; u_{h-2} ; \ldots ; u_0 ]    (4.3)

where R_{hm} is defined by (4.1). Setting u_k = 0, k \in [0, 1, \ldots, h-1], except those which correspond to the linearly independent monomial columns of (4.2), we obtain x_f = \hat{R} \hat{u} and:

\hat{u} = \hat{R}^{-1} x_f \in R_+^n    (4.4)

since \hat{R}^{-1} \in R_+^{n x n}. For a different choice of linearly independent monomial columns \hat{R}_k, k = 1, 2, \ldots, n, we obtain a different \hat{u}. Therefore, if the positive system is reachable, then there exist many different input sequences u_k \in R_+^m for k = 0, 1, \ldots, n+q-1 which steer the state of the system from x_0 = 0 to x_f \in R_+^n. ∎

Example 4.1. (Continuation of Example 3.1). Check the reachability of the positive system (3.18). In this case n = 3, q = 1 and the matrix (4.1) has the form:

R_4 = [ \bar{B}_1 \;\; \bar{B}_0 + \bar{A}\bar{B}_1 \;\; \bar{A}\bar{B}_0 + \bar{A}^2\bar{B}_1 \;\; \bar{A}^2\bar{B}_0 ] = [ 0 1 2 2 ; 0 1 1 1 ; 1 1 2 2 ]    (4.5)

The matrix (4.5) contains only one monomial column, \bar{B}_1. Therefore, by Theorem 4.1, the positive system (3.18), and hence the descriptor system (2.1) with (3.14), is not reachable.

Example 4.2. Consider the descriptor system (2.1) with the matrices:

E = [ 1 0 0 ; 0 1 0 ; 0 0 0 ], \quad A = [ 0 1 0 ; 1 0 1 ; 0 0 1 ], \quad B = [ 0 ; 1 ; -1 ]    (4.6)

Using the shuffle algorithm we may reduce the descriptor system with (4.6) to the equivalent form (2.2):

x_{i+1} = \bar{A} x_i + \bar{B}_0 u_i + \bar{B}_1 u_{i+1}    (4.7a)

where:

\bar{A} = [ 0 1 0 ; 1 0 1 ; 0 0 0 ], \quad \bar{B}_0 = [ 0 ; 1 ; 0 ], \quad \bar{B}_1 = [ 0 ; 0 ; 1 ]    (4.7b)

By Theorem 2.1 the equivalent system is positive, since the matrices (4.7b) have nonnegative entries; hence the descriptor system is also positive. To check the reachability of the positive system (4.7) we compute the reachability matrix:

R_4 = [ \bar{B}_1 \;\; \bar{B}_0 + \bar{A}\bar{B}_1 \;\; \bar{A}\bar{B}_0 + \bar{A}^2\bar{B}_1 \;\; \bar{A}^2\bar{B}_0 ] = [ 0 0 2 0 ; 0 2 0 1 ; 1 0 0 0 ]    (4.8)
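The reachability matrix (4.8) and the monomial-column test of Theorem 4.1 can be reproduced numerically (a sketch; the final state below is an arbitrary illustrative choice):

```python
import numpy as np

Abar = np.array([[0., 1, 0], [1, 0, 1], [0, 0, 0]])
B0 = np.array([0., 1, 0])
B1 = np.array([0., 0, 1])

# columns of (4.8) multiply u3, u2, u1, u0 in the solution formula (4.3)
R4 = np.column_stack([B1,
                      B0 + Abar @ B1,
                      Abar @ B0 + Abar @ Abar @ B1,
                      Abar @ Abar @ B0])

# a monomial column has exactly one nonzero entry
monomial = [k for k in range(4) if np.count_nonzero(R4[:, k]) == 1]
Rhat = R4[:, monomial[:3]]                 # the first three form the matrix (4.9)
xf = np.array([2., 4, 6])
u_hat = np.linalg.solve(Rhat, xf)          # (4.4): nonnegative whenever xf >= 0
print(R4)
print(u_hat)                               # [6. 2. 1.]
```

Because Rhat is monomial, its inverse is again nonnegative, so the solved inputs stay in the positive orthant for any nonnegative final state.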

The matrix (4.8) has three linearly independent monomial columns. Choosing the first three columns of (4.8) as the linearly independent ones, we obtain the matrix (4.2) of the form:

\hat{R} = [ \hat{R}_1 \;\; \hat{R}_2 \;\; \hat{R}_3 ] = [ 0 0 2 ; 0 2 0 ; 1 0 0 ]    (4.9)

and from (4.4):

\hat{u} = \hat{R}^{-1} x_f = [ 0 0 2 ; 0 2 0 ; 1 0 0 ]^{-1} x_f = [ 0 0 1 ; 0 0.5 0 ; 0.5 0 0 ] \, x_f \in R_+^3    (4.10)

where the entries of \hat{u} are the inputs associated with the selected columns and the remaining inputs are set to zero. If we choose as the linearly independent monomial columns of the matrix (4.8) its first, fourth and third columns, then we obtain:

\hat{R}' = [ 0 0 2 ; 0 1 0 ; 1 0 0 ]    (4.11)

and:

\hat{u}' = (\hat{R}')^{-1} x_f = [ 0 0 2 ; 0 1 0 ; 1 0 0 ]^{-1} x_f = [ 0 0 1 ; 0 1 0 ; 0.5 0 0 ] \, x_f \in R_+^3    (4.12)

for any given final state vector x_f \in R_+^3.

5. Minimum energy control
Consider the descriptor positive system (2.1) reduced to the form (2.2). In Section 4 it was shown that if the positive system is reachable, then there exist many input sequences that steer the state of the system from x_0 = 0 to the given final state x_f \in R_+^n. Among these input sequences we look for a sequence u_k \in R_+^m, k = 0, 1, \ldots, n+q-1, that minimizes the performance index:

I(u) = \sum_{i=0}^{h-1} u_i^T Q u_i    (5.1)

where Q \in R_+^{m x m} is a symmetric positive definite matrix such that:

Q^{-1} \in R_+^{m x m}    (5.2)

and h is the number of steps in which the state of the system is transferred from x0 ¼ 0 to the given final state xfARnþ . The minimum energy control problem for the descriptor positive discrete-time linear systems (2.1) can be stated as follows. Given the matrices E, A, B of the descriptor positive system (2.1), the number of of the performance index (5.1) steps h, the final state xfARnþ and the matrix QARmm þ satisfying the condition (5.2), find a sequence of inputs ukARmþ for k ¼ 0,1, y ,h–1 that steers the state of the system from x0 ¼ 0 to xfARnþ and minimizes the performance index (5.1). To solve the problem we define the matrix: T nn Wn ¼ Rhm Q1 hm Rhm 2 Rþ

ð5:3Þ

where $R_{hm} \in \mathbb{R}^{n \times hm}_+$ is defined by (4.1) and:

$$Q_{hm}^{-1} = \mathrm{blockdiag}[\,Q^{-1}, \ldots, Q^{-1}\,] \in \mathbb{R}^{hm \times hm}_+ \qquad (5.4)$$

The matrix $W_n$ is non-singular if the positive system is reachable in $h$ steps. For a given $x_f \in \mathbb{R}^n_+$ we may define the input sequence:

$$\hat{u} = \begin{bmatrix} \hat{u}_{h-1} \\ \hat{u}_{h-2} \\ \vdots \\ \hat{u}_0 \end{bmatrix} = Q_{hm}^{-1} R_{hm}^T W_n^{-1} x_f \qquad (5.5)$$

where $Q_{hm}^{-1}$, $W_n$ and $R_{hm}$ are defined by (5.4), (5.3) and (4.1), respectively. Note that $\hat{u} \in \mathbb{R}^{hm}_+$ if:

$$W_n^{-1} \in \mathbb{R}^{n \times n}_+ \qquad (5.6)$$

and this holds if the condition (5.2) is met.

Theorem 5.1. Let the descriptor positive system (2.1) be reachable in $h$ steps and let the conditions (5.2) and (5.6) be satisfied. Moreover, let:

$$\bar{u} = \begin{bmatrix} \bar{u}_{h-1} \\ \bar{u}_{h-2} \\ \vdots \\ \bar{u}_0 \end{bmatrix} \in \mathbb{R}^{hm}_+ \qquad (5.7)$$

COMPEL 33,3

be an input sequence that steers the state of the system from $x_0 = 0$ to $x_f \in \mathbb{R}^n_+$. Then the input sequence (5.5) also steers the state of the system from $x_0 = 0$ to $x_f \in \mathbb{R}^n_+$ and minimizes the performance index (5.1), i.e.:

$$I(\hat{u}) \le I(\bar{u}) \qquad (5.8)$$

The minimal value of the performance index (5.1) is given by:

$$I(\hat{u}) = x_f^T W_n^{-1} x_f \qquad (5.9)$$

Proof. If the conditions (5.2) and (5.6) are met and the system is reachable in $h$ steps, then the input sequence (5.5) is well defined and $\hat{u} \in \mathbb{R}^{hm}_+$. We shall show that the input sequence (5.5) steers the state of the system from $x_0 = 0$ to $x_f \in \mathbb{R}^n_+$. Using (4.3) and (5.5) we obtain:

$$x_h = R_{hm}\hat{u} = R_{hm} Q_{hm}^{-1} R_{hm}^T W_n^{-1} x_f = x_f \qquad (5.10)$$

since by (5.3) $R_{hm} Q_{hm}^{-1} R_{hm}^T = W_n$. Hence $x_f = R_{hm}\hat{u} = R_{hm}\bar{u}$, or:

$$R_{hm}[\hat{u} - \bar{u}] = 0 \qquad (5.11)$$

The transposition of (5.11) yields:

$$[\hat{u} - \bar{u}]^T R_{hm}^T = 0 \qquad (5.12)$$

Postmultiplying the equality (5.12) by $W_n^{-1} x_f$ we obtain:

$$[\hat{u} - \bar{u}]^T R_{hm}^T W_n^{-1} x_f = 0 \qquad (5.13)$$

From (5.5) we have $Q_{hm}\hat{u} = R_{hm}^T W_n^{-1} x_f$. Substitution of this equality into (5.13) yields:

$$[\hat{u} - \bar{u}]^T Q_{hm} \hat{u} = 0 \qquad (5.14)$$

where $Q_{hm} = \mathrm{blockdiag}[\,Q, \ldots, Q\,] \in \mathbb{R}^{hm \times hm}_+$. From (5.14) it follows that:

$$\bar{u}^T Q_{hm} \bar{u} = \hat{u}^T Q_{hm} \hat{u} + [\bar{u} - \hat{u}]^T Q_{hm} [\bar{u} - \hat{u}] \qquad (5.15)$$

since by (5.14) $\bar{u}^T Q_{hm} \hat{u} = \hat{u}^T Q_{hm} \hat{u} = \hat{u}^T Q_{hm} \bar{u}$. From (5.15) it follows that (5.8) holds, since $[\bar{u} - \hat{u}]^T Q_{hm} [\bar{u} - \hat{u}] \ge 0$. To find the minimal value of the performance index (5.1) we substitute (5.5) into (5.1) and obtain:

$$I(\hat{u}) = \sum_{i=0}^{h-1} \hat{u}_i^T Q \hat{u}_i = \hat{u}^T Q_{hm} \hat{u} = [Q_{hm}^{-1} R_{hm}^T W_n^{-1} x_f]^T Q_{hm} [Q_{hm}^{-1} R_{hm}^T W_n^{-1} x_f] = x_f^T W_n^{-1} R_{hm} Q_{hm}^{-1} R_{hm}^T W_n^{-1} x_f = x_f^T W_n^{-1} x_f \qquad (5.16)$$

since by (5.3) $W_n^{-1} R_{hm} Q_{hm}^{-1} R_{hm}^T = I_n$. ∎

From the above considerations we have the following procedure for computation of the optimal input sequence and the minimal value of the performance index (5.1).

Procedure 5.1:
• step 1: knowing $E$, $A$, $B$ compute $\bar{A}$ and $B_i$ for $i = 0, 1, \ldots, q-1$;
• step 2: using (4.1) compute $R_{hm}$;
• step 3: for the given $Q$, using (5.4) compute $Q_{hm}^{-1}$;
• step 4: using (5.3) compute $W_n$ and $W_n^{-1}$;
• step 5: for the given $x_f \in \mathbb{R}^n_+$, using (5.5) compute $\hat{u}$; and
• step 6: using (5.9) compute the minimal value of the performance index $I(\hat{u})$.

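Steps 3-6 of Procedure 5.1 can be sketched in a few lines. A minimal illustration, assuming the matrices $R_{hm}$ and $Q$ from steps 1-2 are already available (step 1 depends on the particular system matrices and is not reproduced here):

```python
import numpy as np

def minimum_energy_control(Rhm, Q, xf):
    """Steps 3-6 of Procedure 5.1 (Rhm from step 2 is assumed given):
    returns u_hat per (5.5) and the minimal index value per (5.9)."""
    h = Rhm.shape[1] // Q.shape[0]                   # number of steps
    Qhm_inv = np.kron(np.eye(h), np.linalg.inv(Q))   # blockdiag[Q^-1,...,Q^-1], (5.4)
    Wn = Rhm @ Qhm_inv @ Rhm.T                       # (5.3)
    Wn_inv = np.linalg.inv(Wn)
    u_hat = Qhm_inv @ Rhm.T @ Wn_inv @ xf            # (5.5)
    return u_hat, float(xf @ Wn_inv @ xf)            # (5.9)

# Sanity check on a trivially reachable case, Rhm = I (m = 1, h = n = 3):
Rhm = np.eye(3)
u_hat, I_min = minimum_energy_control(Rhm, np.eye(1), np.array([1.0, 2.0, 3.0]))
assert np.allclose(Rhm @ u_hat, [1.0, 2.0, 3.0])   # the state xf is reached
```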
Example 5.1 (continuation of Example 4.2). For the descriptor system (2.1) with (4.6), compute the optimal input sequence that steers the system from $x_0 = [\,0\;\;0\;\;0\,]^T$ to $x_f = [\,1\;\;1\;\;1\,]^T$ and minimizes the performance index (5.1) for:

$$Q_4 = \mathrm{diag}[\,2,\; 2,\; 2,\; 2\,] \qquad (5.17)$$

Using Procedure 5.1 and the results of Example 4.2 we obtain the following:
• step 1: the desired matrices $\bar{A}$ and $B_i$ for $i = 0, 1, \ldots, q-1$ are given by (4.7b);
• step 2: the matrix $R_4$ is given by (4.8);
• step 3: using (5.4) and (5.17) we obtain:

$$Q_4^{-1} = \mathrm{diag}[\,0.5,\; 0.5,\; 0.5,\; 0.5\,] \qquad (5.18)$$

• step 4: using (5.3), (5.4) and (5.17) we obtain:

$$W_3 = R_4 Q_4^{-1} R_4^T = \begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 & 2 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2.5 & 0 \\ 0 & 0 & 0.5 \end{bmatrix} \qquad (5.19a)$$

and:

$$W_3^{-1} = \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.4 & 0 \\ 0 & 0 & 2 \end{bmatrix} \qquad (5.19b)$$

• step 5: using (5.5), (5.17) and (5.19b) we obtain:

$$\hat{u} = \begin{bmatrix} \hat{u}_3 \\ \hat{u}_2 \\ \hat{u}_1 \\ \hat{u}_0 \end{bmatrix} = Q_4^{-1} R_4^T W_3^{-1} x_f = \begin{bmatrix} 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.4 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0.4 \\ 0.5 \\ 0.2 \end{bmatrix} \qquad (5.20)$$

• step 6: using (5.9) and (5.19b) we obtain:

$$I(\hat{u}) = x_f^T W_3^{-1} x_f = [\,1 \;\; 1 \;\; 1\,] \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.4 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = 2.9 \qquad (5.21)$$

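The computations of Example 5.1 follow directly from (5.3), (5.5) and (5.9); a minimal numerical check of (5.19)-(5.21):

```python
import numpy as np

R4 = np.array([[0.0, 0.0, 2.0, 0.0],
               [0.0, 2.0, 0.0, 1.0],
               [1.0, 0.0, 0.0, 0.0]])     # (4.8)
Q4_inv = np.diag([0.5, 0.5, 0.5, 0.5])    # (5.18)
xf = np.array([1.0, 1.0, 1.0])

W3 = R4 @ Q4_inv @ R4.T                   # (5.19a): diag(2, 2.5, 0.5)
W3_inv = np.linalg.inv(W3)                # (5.19b): diag(0.5, 0.4, 2)
u_hat = Q4_inv @ R4.T @ W3_inv @ xf       # (5.20): [1, 0.4, 0.5, 0.2]
I_min = float(xf @ W3_inv @ xf)           # (5.21): 2.9

assert np.allclose(W3, np.diag([2.0, 2.5, 0.5]))
assert np.allclose(u_hat, [1.0, 0.4, 0.5, 0.2])
assert abs(I_min - 2.9) < 1e-9
```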
6. Concluding remarks
Necessary and sufficient conditions for the positivity and reachability of descriptor positive discrete-time linear systems have been established (Theorems 2.1 and 4.1). The transformation of the descriptor system to an equivalent standard system by the use of the shuffle algorithm has been addressed. The minimum energy control problem for descriptor positive systems has been formulated and solved (Theorem 5.1). A procedure for the computation of optimal input sequences and the minimal value of the performance index has been proposed (Procedure 5.1). The procedure has been demonstrated on a numerical example. An open problem is the extension of these considerations to fractional positive descriptor continuous-time and discrete-time linear systems.

References
Bru, R., Coll, C., Romero-Vivo, S. and Sánchez, E. (2003), “Some problems about structural properties of positive descriptor systems”, Lecture Notes in Control and Information Science, Vol. 294, Springer, pp. 233-240.
Brüll, T. (2009), “Explicit solutions of regular linear discrete-time descriptor systems with constant coefficients”, Electronic Journal of Linear Algebra, Vol. 18, pp. 317-338.
Campbell, S.L. (1980), Singular Systems of Differential Equations, Research Notes in Mathematics, Vol. 40, Pitman, Boston, MA.
Cantó, B., Coll, C. and Sánchez, E. (2008), “Positive solutions of a discrete-time descriptor system”, International Journal of Systems Science, Vol. 39 No. 1, pp. 81-88.
Dai, L. (1989), Singular Control Systems, Lecture Notes in Control and Information Science, Vol. 118, Springer.
Farina, L. and Rinaldi, S. (2000), Positive Linear Systems: Theory and Applications, J. Wiley, New York, NY.
Kaczorek, T. (1992), Linear Control Systems, Research Studies Press and J. Wiley, New York, NY.
Kaczorek, T. (2002), Positive 1D and 2D Systems, Springer-Verlag, London.
Kaczorek, T. (2011b), “Checking of the positivity of descriptor linear systems by the use of the shuffle algorithm”, Archives of Control Sciences, Vol. 21 No. 3, pp. 287-298.
Kaczorek, T. (2013), “Minimum energy control of fractional positive continuous-time linear systems”, Proceedings of MMAR 2013 – 18th International Conference on Methods and Models in Automation and Robotics, Międzyzdroje, 26-29 August.
Kaczorek, T. and Klamka, J. (1986), “Minimum energy control of 2D linear systems with variable coefficients”, International Journal of Control, Vol. 44 No. 3, pp. 645-650.
Klamka, J. (1976), “Relative controllability and minimum energy control of linear systems with distributed delays in control”, IEEE Transactions on Automatic Control, Vol. 21 No. 4, pp. 594-595.
Klamka, J. (1983), “Minimum energy control of 2D systems in Hilbert spaces”, System Sciences, Vol. 9 Nos 1-2, pp. 33-42.
Klamka, J. (1991), Controllability of Dynamical Systems, Kluwer Academic Publishers, Dordrecht.
Klamka, J. (2010), “Controllability and minimum energy control problem of fractional discrete-time systems”, in Baleanu, D., Guvenc, Z.B. and Tenreiro Machado, J.A. (Eds), New Trends in Nanotechnology and Fractional Calculus Applications, Springer-Verlag, New York, NY, pp. 503-509.
Rami, M.A. and Napp, D. (2012), “Characterization and stability of autonomous positive descriptor systems”, IEEE Transactions on Automatic Control, Vol. 57 No. 10, pp. 2668-2673.
Valcher, M.E. (2009), “Reachability properties of continuous-time positive systems”, IEEE Transactions on Automatic Control, Vol. 54 No. 7, pp. 1586-1590.
Virnik, E. (2008a), “Stability analysis of positive descriptor systems”, Linear Algebra and its Applications, Vol. 429 No. 10, pp. 2640-2659.
Virnik, E. (2008b), Analysis of Positive Descriptor Systems, Topics in System and Control Theory, Springer-Verlag.

Further reading
Kaczorek, T. (2011a), “Positive linear systems consisting of n subsystems with different fractional orders”, IEEE Transactions on Circuits and Systems, Vol. 58 No. 6, pp. 1203-1210.
Kaczorek, T. (2011c), Selected Problems of Fractional Systems Theory, Springer-Verlag, Berlin.

About the author
Professor Tadeusz Kaczorek (born 1932, Poland) received his MSc, PhD and DSc degrees from the Faculty of Electrical Engineering at the Warsaw University of Technology in 1956, 1962 and 1964, respectively. He began his professional career in 1954 at the Warsaw University of Technology and worked there until 2003; since his retirement he has worked at the Faculty of Electrical Engineering of the Bialystok University of Technology. From 1968 to 1969 he was the Dean of the Faculty of Electrical Engineering and from 1970 to 1973 he was the Prorector of the Warsaw University of Technology. Since 1971 he has been a Professor and since 1974 a full Professor at the Warsaw University of Technology. In 1986 he was elected a corresponding member and in 1996 a full member of the Polish Academy of Sciences. From 1988 to 1991 he was the Director of the Research Centre of the Polish Academy of Sciences in Rome. In June 1999 he was elected a full member of the Academy of Engineering in Poland. In May 2004 he was elected an honorary member of the Hungarian Academy of Sciences. He was awarded a degree honoris causa by the University of Zielona Gora (2002), Lublin University of Technology (2004), West Pomeranian University of Technology (2004), Warsaw University of Technology (2004), Bialystok University of Technology (2008),



Lodz University of Technology (2009), Opole University of Technology (2009), Poznan University of Technology (2011) and Rzeszow University of Technology (2012). His research interests cover systems theory and automatic control theory, especially singular multidimensional systems, positive multidimensional systems, fractional systems and singular positive 1D and 2D systems. He initiated research in the fields of singular 2D systems, positive 2D linear systems and positive fractional 1D and 2D systems. He has published 24 books (seven in English) and over 1,000 scientific papers. Professor Tadeusz Kaczorek can be contacted at: [email protected]



Current spectrum estimation using Prony’s estimator and coherent resampling
Michał Lewandowski and Janusz Walczak


Faculty of Electrical Engineering, Silesian University of Technology, Gliwice, Poland

Abstract
Purpose – A highly accurate method of current spectrum estimation of a nonlinear load is presented in this paper. Using the method makes it possible to evaluate the current injection frequency domain model of a nonlinear load from previously recorded time domain voltage and current waveforms. The paper aims to discuss these issues.
Design/methodology/approach – The method incorporates the idea of coherent resampling (resampling synchronously with the base frequency of the signal) followed by the discrete Fourier transform (DFT) to obtain the frequency spectrum. When the DFT is applied to a synchronously resampled signal, the spectrum is free of negative DFT effects (spectrum leakage, for example). However, to resample the signal correctly it is necessary to know its base frequency with high accuracy. To estimate the base frequency, the first-order Prony frequency estimator was used.
Findings – It has been shown that the presented method may lead to superior results in comparison with window interpolated Fourier transform and time-domain quasi-synchronous sampling algorithms.
Research limitations/implications – The method was designed for steady-state analysis in the frequency domain. The voltage and current waveforms across the load terminals should be recorded simultaneously to allow correct voltage/current phase shift estimation.
Practical implications – The proposed method can be used in cases where a frequency domain model of a nonlinear load is desired and the voltage and current waveforms recorded across the load terminals are available. The method leads to correct results even when the voltage/current sampling frequency has not been synchronized with the base frequency of the signal. It can be used for off-line frequency model estimation as well as in real-time DSP systems to restore coherent sampling of the analysed signals.
Originality/value – The method proposed in the paper allows a nonlinear load frequency domain model to be estimated from current and voltage waveforms with higher accuracy than other competitive methods, while its simplicity and computational efficiency are retained.
Keywords Coherent resampling, Frequency domain model estimation, Prony’s frequency estimator, Spectrum estimation
Paper type Research paper

1. Introduction
Modern harmonic analysis of power supply networks is often based on frequency domain methods (Arrillaga and Watson, 2003). One of the most popular methods consists in evaluating the network state for each harmonic separately and is widely used for the steady-state analysis of electrical networks (Grady, 2006). Nonlinear load models used in this method are usually based on higher harmonics current injection. In this kind of model, the load currents for each phase can be expressed as follows (considering a three-phase system):

$$i_a(t) = \sqrt{2}\,\mathrm{Re}\!\left\{ \frac{U_{a1}}{Z_{a1}}\, e^{j\omega_1 t} + \sum_{h=2}^{N} I_{ah}\, e^{jh\omega_1 t} \right\}, \qquad a = 1, 2, 3 \qquad (1)$$

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 989-997. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/COMPEL-01-2013-0013


where $a$ is the phase number; $\omega_1$ the angular frequency of the first harmonic; $i_a(t)$ the load current of phase $a$; $N$ the number of the highest considered harmonic; $U_{a1}$ the complex value of the first voltage harmonic across the load terminals for phase $a$; $Z_{a1}$ the complex impedance of the load of phase $a$ for the first harmonic (evaluated from the values of the first voltage and current harmonics); and $I_{ah}$ the complex value of the $h$-th current harmonic of phase $a$.

In the following paragraphs the model defined by Equation (1) is considered only for harmonics of integer order (steady-state analysis in the frequency domain). This assumption is widely used in many currently available software packages dedicated to the simulation of harmonic propagation in supply networks (Grady, 2006). Equation (1) can be represented by the equivalent three-phase circuit diagrams (Figure 1). As can be noticed in Figure 1, for the first harmonic the model is represented by the three impedances Z1, Z2 and Z3. These impedances can be calculated when the values of the first voltage and current harmonics are known. For higher harmonics (h = 2, …, N) the model is represented by the equivalent current sources for each harmonic and phase (I1h, I2h and I3h). In other words, for higher harmonics it is necessary to know the current frequency spectrum of each phase. In most practical cases, only the time domain voltage and current waveforms measured across the load terminals are available (Lewandowski et al., 2011), which creates the need for accurate current frequency spectrum estimation from its time domain representation.

There are many methods of stationary and nonstationary signal frequency spectrum estimation (Antoniou, 2005; Grabowski and Walczak, 2003; Leonowicz, 2004). The most popular ones are usually based on the discrete Fourier transform (DFT). The accuracy of spectrum estimation using the DFT is strongly connected with the signal parameters (sampling frequency, base frequency, number of recorded periods, etc.). Only under very strict conditions does the DFT lead to a spectrum free of negative effects such as spectrum leakage (Agrež, 2005). As discussed in Lewandowski et al. (2011), these negative effects in most cases cannot be neglected and they might have a significant influence on the load model parameters. For example, in the work by Lewandowski et al. (2011) it was shown that frequency swing can cause a significant difference in the total harmonic distortion (THD) value calculated from the estimated signal spectrum. In Figure 2, the relative THD error occurring when the base frequency of the signal differs from the assumed 50 Hz is shown. Results were obtained for a standard square wave signal (the first 40 harmonics were considered). As can be noticed in Figure 2, the sampling frequency is synchronized with the base frequency of the signal only at 50 Hz, which leads to a THD error equal to 0 per cent. When the frequency moves away from 50 Hz, the THD error rapidly increases to almost 30 per cent. It follows that DFT may lead to correct results when the
The accuracy of spectrum estimation using DFT is strongly connected with the signal parameters (sampling frequency, base frequency, number of recorded periods, etc.). Only under very strict conditions DFT leads to a spectrum deprived of negative effects such as the spectrum leakage (Agrezˇ, 2005). As it was discussed in (Lewandowski et al., 2011), these negative effects in most cases cannot be neglected and they might have a significant influence on the load model parameters. For example, in the work by Lewandowski et al. (2011) it was shown that the frequency swing can cause a significant difference in the evaluation of total harmonic distortion (THD) value calculated from the estimated signal spectrum. In Figure 2, a relative THD error, occurring when the base frequency of the signal is different then the assumed 50 Hz, is shown. Results were obtained for a standard square wave signal (first 40 harmonics were considered). As can be noticed in Figure 2, the sampling frequency is synchronized with the base frequency of the signal only for 50 Hz, which leads to THD error equal 0 per cent. When the frequency moves away from the value of 50 Hz, the THD error rapidly increases to almost 30 per cent. It follows that DFT may lead to correct results when the h=1 i1(t)

Figure 1. Three-phase nonlinear load frequency domain model based on higher harmonics current injection

signal parameters are chosen carefully. In particular, the following set of rules leads to correct DFT results:

(1) The sampling frequency is synchronized with the signal base frequency (coherent sampling). In other words, the sampling frequency is an integer multiple of the base signal frequency (first harmonic frequency), with respect to the sampling theorem.

(2) The observation time is equal to an integer number of signal periods.

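Rule (1) can be illustrated with a short simulation. A minimal sketch, assuming a pure 50 Hz sine and the 6.4 kHz sampling rate of Figure 2 (the window lengths below are chosen for illustration and are not taken from the original experiment):

```python
import numpy as np

fs = 6400.0   # sampling frequency [Hz], as in Figure 2
f0 = 50.0     # signal base frequency [Hz]

def spectral_peak_ratio(num_samples):
    """Fraction of the total DFT magnitude concentrated in the strongest bin."""
    n = np.arange(num_samples)
    x = np.sin(2 * np.pi * f0 * n / fs)
    X = np.abs(np.fft.rfft(x))
    return X.max() / X.sum()

coherent = spectral_peak_ratio(128)    # 128 samples = exactly one 50 Hz period
incoherent = spectral_peak_ratio(120)  # 120 samples = 0.9375 of a period

assert coherent > 0.999   # all energy ends up in a single DFT bin
assert incoherent < 0.9   # energy leaks into many neighbouring bins
```

When the observation window holds a whole number of periods, the fundamental falls exactly on a DFT bin; otherwise its energy smears across the spectrum, which is exactly the leakage mechanism discussed above.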

Unfortunately, in most cases these conditions are not met during signal recording, which creates the need to reduce the DFT-related negative effects. One widely used technique is time domain windowing applied to power system signals (Xue and Yang, 2003). Other examples are the leakage minimization method discussed in Agrež (2005) or the spectrum leakage elimination presented in Salor (2009). These methods focus on reducing the transform errors caused by improper (incoherent) signal sampling. On the other hand, there are methods which focus on restoring the proper signal parameters to avoid the DFT errors instead of reducing them. One of the most interesting is signal resampling with a frequency synchronised with the base frequency (first harmonic) of the signal. The method is called coherent resampling (Zhou et al., 2010) and its accuracy is limited by two main factors: the accuracy of the base frequency estimation and the accuracy of the interpolation during the resampling process. In the paper by Zhou et al. (2010), Newton’s interpolation was proposed both for resampling and as a part of the frequency estimation algorithm. However, there are other frequency estimators which can be used in power system analysis. In the paper by Łobos and Rezmer (1997), Prony’s first-order estimator was used for successful real-time tracking of the signal base frequency and the presented results confirmed its high effectiveness. In the following paragraphs, a hybrid frequency spectrum estimation method based on coherent resampling and Prony’s estimator is proposed.

2. Spectrum estimation method
All further considerations are limited to a fully balanced system. In this case, the model can be reduced to its one-phase equivalent shown in Figure 3, which is a common practice (Grady, 2006). As can be noticed in Figure 3, the one-phase model is much simpler and all the phase indices (the a symbol in Equation (1)) can be dropped. It is also necessary to mention that

Figure 2. Relative THD estimation error for different values of the signal base frequency (sampling frequency was constant and equal to 6.4 kHz)


the parameters of the model are only $U_1$, $Z_1$ and $I_h$. The $U_h$ values are calculated for each harmonic as a result of the harmonic flow analysis of the system. The block diagram presenting the proposed method of current frequency spectrum estimation of a one-phase nonlinear load is shown in Figure 4. The discrete current signal $i[n]$ is processed in a semi-parallel mode. In the first signal path, the input current $i[n]$ is filtered by a band-pass filter to extract the base frequency component $i_f[n]$. It has been assumed that the frequency of this component is in the range of 48-52 Hz. Exemplary signals before ($i[n]$) and after ($i_f[n]$) filtering are shown in Figure 5. Next, using the filtered signal $i_f[n]$, the base frequency is estimated with Prony’s estimator. The first-order Prony frequency estimator is defined as follows (Łobos and Rezmer, 1997):

$$f_1 = \frac{1}{2\pi T_p} \arccos\!\left[ \frac{\displaystyle\sum_{n=1}^{N-1} \big(i_f[n-1] + i_f[n+1]\big)^2}{\displaystyle 2 \sum_{n=1}^{N-1} i_f[n]\,\big(i_f[n-1] + i_f[n+1]\big)} \right] \qquad (2)$$

where $T_p$ is the sampling period; $N$ the number of considered samples; and $f_1$ the estimated base frequency.

Figure 3. One-phase equivalent nonlinear load model for balanced system

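Equation (2) can be prototyped directly. A minimal sketch on a noise-free sinusoid (the band-pass filtering stage of Figure 4 is omitted here, so the estimator sees a clean signal):

```python
import math

def prony_frequency(x, Tp):
    """First-order Prony frequency estimate of a sampled sinusoid, Equation (2)."""
    num = sum((x[n - 1] + x[n + 1]) ** 2 for n in range(1, len(x) - 1))
    den = 2 * sum(x[n] * (x[n - 1] + x[n + 1]) for n in range(1, len(x) - 1))
    return math.acos(num / den) / (2 * math.pi * Tp)

fs = 4000.0        # sampling frequency [Hz]
f_true = 50.1      # base frequency to recover [Hz]
x = [math.sin(2 * math.pi * f_true * n / fs) for n in range(400)]
f_est = prony_frequency(x, 1.0 / fs)
assert abs(f_est - f_true) < 1e-6   # exact up to rounding for a clean sinusoid
```

For a pure sinusoid $x[n-1]+x[n+1] = 2\cos(\omega T_p)\,x[n]$, so the ratio inside the arccosine collapses to $\cos(\omega T_p)$, which is why the estimate is exact in the noise-free case.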
Figure 4. Spectrum estimation method block diagram: the input signal $i[n]$ feeds a band-pass filter (48-52 Hz) followed by Prony’s frequency estimator, which yields $f_1$; in parallel, $i[n]$ is interpolated and coherently resampled ($i_r[n]$), cut to whole periods ($i_c[n-kM, n]$) and passed to the DFT, which yields $I_c[f]$

Figure 5. Signals before and after band-pass filtering: the input $i[n]$ and the filtered signal $i_f[n]$ (first harmonic)

The estimated frequency $f_1$ is used in the second signal path for the coherent resampling (the new sampling frequency is an integer multiple of $f_1$). At this stage the signal is interpolated using one of the known interpolation methods (Hoffman and Frankel, 2001; Faires and Burden, 2002). The presented method has been tested using Newton’s divided-difference formula, which can be defined as follows (Faires and Burden, 2002):

$$F(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + \cdots + f[x_0, x_1, \ldots, x_n](x - x_0)\cdots(x - x_{n-1}) \qquad (3)$$

where $F(x)$ is the interpolated function value at the point $x$ and $f[x_i, \ldots, x_{i+k}]$ is defined as follows:

$$f[x_i, \ldots, x_{i+k}] = \frac{f[x_{i+1}, \ldots, x_{i+k}] - f[x_i, \ldots, x_{i+k-1}]}{x_{i+k} - x_i} \qquad (4)$$

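The divided-difference scheme of Equations (3) and (4) can be sketched as follows (a generic implementation for illustration, not the authors’ code):

```python
def newton_interpolate(xs, ys, x):
    """Evaluate the Newton interpolating polynomial through (xs, ys) at x,
    using divided differences as in Equations (3) and (4)."""
    coef = list(ys)
    # After the loop, coef[k] holds the divided difference f[x_0, ..., x_k].
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    # Horner-like evaluation of the nested form of (3).
    result = coef[-1]
    for i in range(len(xs) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# A cubic is reproduced exactly from four of its samples.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [v ** 3 - 2 * v for v in xs]
assert abs(newton_interpolate(xs, ys, 1.5) - 0.375) < 1e-12
```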
The interpolation process makes it possible to calculate the sample values for the new sampling frequency, which are located between the input signal $i[n]$ samples. Afterwards, the resampled signal $i_r[n]$ is cut to a length of $k$ signal periods with $M$ samples per period (signal $i_c[n]$ in Figure 6). At the end of processing, the resampled and cut signal is used to estimate the discrete current frequency spectrum $I_c[f]$ using the DFT (an exemplary amplitude spectrum is shown in Figure 6). As can be noticed when comparing Figures 5 and 6, the resampled signal $i_r[n]$ and the input signal $i[n]$ look very similar, but the spectra estimated by the DFT might be significantly different.

3. Spectrum estimation example
To assess the efficiency and accuracy of the proposed method, a comparison with the results published in the paper by Zhou et al. (2010) has been carried out. The conditions of the experiment were set to be as similar as possible to those described in Zhou et al. (2010). The test signal was synthesized using the following equation (Zhou et al., 2010):

$$i[n] = \sum_{h=1}^{9} |A_h| \sin\!\left( \frac{2\pi f_h}{f_s}\, n + \varphi_h \right) \qquad (5)$$

where $|A_h|$, $f_h$ and $\varphi_h$ are the magnitude, frequency and phase of the $h$-th harmonic and $f_s$ is the sampling frequency. The base frequency $f_1$ was equal to 50.1 Hz and the sampling frequency $f_s$ was equal to 4 kHz. Interpolation was performed using the fourth-order Newton algorithm. The signal length was limited to 128 samples and the DFT size to 80

Figure 6. Signals after resampling and cutting, and the estimated current amplitude spectrum


samples (one whole period of the test signal). The band-pass filter was designed using frequency domain windowing with the Kaiser window. Similarly to the paper by Zhou et al. (2010), white Gaussian noise was added to the test signal in order to keep the signal-to-noise ratio at the level of 40 dB. Moreover, the simulation results were averaged over 50 independent simulations. The estimated current spectra, compared with the reference values, are shown in Figure 7. The results for the window interpolated Fourier transform (WIFTA) and time-domain quasi-synchronous sampling (TDQS) methods have been taken from Zhou et al. (2010). As can be noticed in Figure 7, the estimated amplitude spectrum for the first two harmonics is evidently closer to the exact values in comparison with the WIFTA method and comparable with the results obtained using the TDQS method. In the case of the phase spectrum, the results are clearly better for the proposed method than for both the TDQS and WIFTA methods. The difference between the proposed method and the WIFTA algorithm is also clearly visible for the estimated frequency values (Figure 8). In Figure 8 it can be noticed that the Prony estimator used in the proposed method performs well in the presence of noise, giving noticeably better results than the WIFTA algorithm and results comparable to the TDQS method. To emphasize the difference in performance between the considered spectrum estimation methods, the relative estimation error of each method with respect to the reference value is presented in Table I. It can be noticed in Table I that the proposed method leads to significantly better results than the WIFTA and TDQS methods for frequency and phase estimation. In the case of amplitude estimation, for some harmonics TDQS and even WIFTA perform better, but the proposed method has a much lower maximum estimation error.

Figure 7. Comparison of estimated current spectra (amplitude and phase) with the reference values

Figure 8. Comparison of estimated frequencies with the reference values

Table I. Spectrum estimation error comparison: WIFTA, TDQS and proposed method

Harmonic number                      1        2        3        4        5        7        9
Frequency estimation error (%)
  WIFTA                           1.54   351.29     2.86    20.47     1.19     0.12     0.25
  TDQS                           0.104    0.105    0.104    0.105    0.105    0.105    0.104
  Proposed                       0.002    0.002    0.002    0.002    0.002    0.002    0.002
Magnitude estimation error (%)
  WIFTA                           22.0    23345    0.000    30.00     4.00     5.00     20.0
  TDQS                            0.10     15.0    0.000     10.0    0.000    0.000    0.000
  Proposed                       0.037    0.024    0.040    0.885    0.248    0.572     2.79
Phase estimation error (%)
  WIFTA                           14.4      104     36.3      227     13.8     5.88     13.4
  TDQS                            1.00     3.00     1.80     11.9    0.632     5.63     4.66
  Proposed                       0.014    0.193   0.0596    0.710    0.097    0.446     2.11

Source: WIFTA and TDQS values from Zhou et al. (2010)

4. Discussion of the results
As has been shown in the previous paragraph, the proposed method leads to generally better results in comparison with the other two competitive methods. However, it is important to notice that the spectrum evaluation accuracy is strongly connected with the frequency estimation error and the interpolation accuracy. For example, it is worth mentioning that the Prony estimator performed so well in the presence of noise despite the fact that this kind of simple estimator normally has rather poor noise performance (Dupuis et al., 2004; Osborne and Smyth, 1991). In fact, further noise analysis has revealed that the frequency estimation performs quite well over a wide range of noise levels, which can be observed in Figure 9. As can be noticed in Figure 9, the average frequency estimation error is smaller than 1 per cent even for an SNR equal to 0 dB. Such good performance of the Prony estimator in the proposed method results directly from the band-pass filter located at the input of the Prony estimator (Figure 4). This filter removes most of the noise from the signal, which can be noticed in Figure 5, where the signal $i_f[n]$ contains only the base sinusoidal component. The proposed method is therefore structurally resistant to noise and other interferences present in the signal when they are outside the pass-band of the applied filter. Although the proposed method has proven to have better performance in comparison with similar methods, there is still room for improvement. For example, optimization algorithms can be used to design an optimal band-pass filter. Alternatively, different

Figure 9. Frequency estimation error vs SNR (results averaged for 1,000 independent tests)


interpolation methods other than Newton’s formula can be used. Additionally, it is worth remembering that the method has been designed for periodic signals (steady-state analysis) and that the load current and voltage waveforms across the load terminals should be recorded simultaneously to allow correct voltage/current phase shift estimation. Otherwise, the phase estimation leads to incorrect results.


5. Conclusions
The proposed method of current frequency spectrum estimation of a nonlinear load has revealed its high accuracy in comparison with the WIFTA and TDQS methods. The main advantages of the method are the high accuracy of spectrum estimation (even in the presence of noise), its relative simplicity (in comparison with the other two presented methods) and semi-parallel signal processing (some signal operations can be performed simultaneously). Using the proposed method, it is possible to estimate the frequency spectrum of a nonlinear load from data which were recorded without coherent sampling; the authors’ experience confirms that this is a typical real-life scenario. This feature makes it possible to evaluate a single-phase spectrum model from previously recorded and stored waveforms without repeating the measurements. The method can also be used in a real-time DSP-based measurement system where the coherent sampling condition is not met. In this case, coherent sampling can be restored with the aid of the proposed method by changing only the software of the system (without any changes in the hardware). In fact, the method has already been implemented on a TMS320F2808 processor board, and the results of the first laboratory tests were very promising. When considering the limitations of the method, it is important to realize that it is valid only under the assumptions that the system is fully balanced and that the single-phase equivalent model of a three-phase network is used. Otherwise, the white Gaussian noise values and the voltage and current values might be different for each phase. Additionally, the method is valid only for the steady-state operation of the system. Generalization of the method to unbalanced systems is planned in the near future, as well as extensive tests in the industrial environment.

References
Agrež, D. (2005), “Improving phase estimation with leakage minimization”, IEEE Transactions on Instrumentation and Measurement, Vol. 54 No. 4, pp. 1347-1353.
Antoniou, A. (2005), Digital Signal Processing – Signals, Systems and Filters, McGraw-Hill, New York, NY.
Arrillaga, J. and Watson, N.R. (2003), Power System Harmonics, 2nd ed., Wiley, Chichester.
Dupuis, P., Sels, T., Driesen, J. and Belmans, R. (2004), “Exponential parameters measurement using a modified Prony method”, Instrumentation and Measurement Technology Conference – IMTC 2004, Como, pp. 1590-1594.
Faires, D. and Burden, R.L. (2002), Numerical Methods, 3rd ed., Cengage Learning, Stamford, CT.
Grabowski, D. and Walczak, J. (2003), “Generalized spectrum analysis by means of neural network”, in Rutkowski, L. and Kacprzyk, J. (Eds), Neural Networks and Soft Computing, Advances in Soft Computing, Physica-Verlag, Berlin, Heidelberg and New York, NY, pp. 704-709.
Grady, M. (2006), Understanding Power System Harmonics, Department of Electrical & Computer Engineering, University of Texas, Austin, TX, available at: http://users.ece.utexas.edu/~grady/
Hoffman, J.D. and Frankel, S. (2001), Numerical Methods for Engineers and Scientists, 2nd ed., Marcel Dekker, New York, NY.

Leonowicz, Z. (2004), “Analysis of non-stationary signals in power systems”, The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 23 No. 2, pp. 381-391. Lewandowski, M., Maci˛a(ek, M. and Pasko, M. (2011), “Integration of harmonic flow analysis software with database of nonlinear loads”, Proceedings of the AMTEE’2011 Conference, Klatovy, pp. VIII-3. Łobos, T. and Rezmer, J. (1997), “Real-time determination of power system frequency”, IEEE Transactions on Instrumentation and Measurement, Vol. 46 No. 4, pp. 877-881. Osborne, M.R. and Smyth, G.K. (1991), “A modified Prony algorithm for fitting functions defined by difference equations”, SIAM J. on Sci. and Statist. Comp., Vol. 12 No. 2, pp. 362-382. Salor, O. (2009), “Spectral leakage elimination of the Fourier transform of signals with fundamental frequency deviation”, IEEE Proceedings of the 17th Signal Processing and Communication Applications Conference, pp. 852-855. Xue, H. and Yang, R. (2003), “Optimal interpolating windowed discrete Fourier transform algorithms for harmonic analysis in power systems”, IEEE Proc.-Gener. Transm. Distrib, Vol. 150 No. 5, pp. 583-587. Zhou, F., Huang, Z.Y., Zhao, C.Y. and Chen, D.Y. (2010), “Time domain quasi-synchronous sampling algorithm for harmonic analysis”, IEEE Proceeding 14th Conference on Harmonics and Quality of Power, pp. 1-5. About the authors Dr Michaz Lewandowski (1977) graduated the BSc and MSc degrees from the Faculty of Electrical Engineering in 2002 and 2003, respectively. In 2003, he took an internship in Belgium as part of the EU Leonardo da Vinci training programme, after which he started to work at the Faculty of Electrical Engineering of the Silesian University of Technology. He received the PhD degree in November 2009. He is an author or co-author of 29 conference and journal papers and chapters in monographs. 
His area of interest include circuit theory and applications, digital signal processing and computer modelling and analysis. Since 2004 he is a member of Polish Society of Theoretical and Applied Electrical Engineering. Dr Michaz Lewandowski is corresponding author and can be contacted at: [email protected] Dr Janusz Walczak (1950) is associated with the Silesian University of Technology in Gliwice since 1981. He graduated the BSc in 1976 from the Faculty of Control Engineering and the MSc in 1981 from the Faculty of Electrical Engineering. He received his PhD and DSc degrees in 1986 and 1993, respectively. He became an Associate Professor in 1997 and a Full Professor in 2003. He is an author or co-author of over 350 scientific papers, four monographs and three books. His area of interest include synthesis and analysis of electrical systems, power theory of systems with non-sinusoidal waveforms, non-stationery signal processing and artificial neural networks. He set up and supervises a group of researchers active is these fields. He is a member of many organizations and committees, e.g., Circuits and Systems as well as Signal Processing sections of IEEE.


Prony’s estimator and coherent resampling 997



Optimal shape design of flux barriers in IPM synchronous motors using the phase field method

Jae Seok Choi
Samsung Electronics Co. Ltd, Suwon, Korea

Takayuki Yamada, Kazuhiro Izui and Shinji Nishiwaki
Department of Mechanical Engineering and Science, Kyoto University, Kyoto, Japan, and

Heeseung Lim and Jeonghoon Yoo
School of Mechanical Engineering, Yonsei University, Seoul, Korea

Abstract
Purpose – The purpose of this paper is to present an optimization method for flux barrier designs in interior permanent magnet (IPM) synchronous motors that aims to produce an advantageous sinusoidal flux density distribution in the air-gap.
Design/methodology/approach – The optimization is based on the phase field method using an Allen-Cahn equation. This approach is a numerical technique for tracking diffuse interfaces, like the level set method based on the Hamilton-Jacobi equation.
Findings – The optimization results of IPM motor designs are highly dependent on the initial flux barrier shapes. The authors solve the optimization problem using two different initial shapes, and the optimized models show considerable reductions in torque pulsation and in the higher harmonics of the back-electromotive force.
Originality/value – This paper presents an optimization method based on the phase field for the design of rotor flux barriers, and proposes a novel interpolation scheme for the magnetic reluctivity.
Keywords Finite element analysis, Flux barrier, IPM synchronous motor, Phase field
Paper type Research paper

1. Introduction
Permanent magnet (PM) motors were introduced in the nineteenth century, before the induction motor was invented, but they were not widely used due to the low energy product of the PM materials then available, such as AlNiCo and ferrites. The production of large PM motors has been routine since the 1970s, following advances in magnetic technology such as the introduction of powerful rare earth magnets (Heikkilä, 2002). Currently, interior permanent magnet (IPM) motors employing rare earth magnets are universally used in applications ranging from home appliances to electric vehicles (Coey, 2002). Since the PMs are located in the rotor laminations, IPM motors can utilize both reluctance and PM torque, and take advantage of favorable torque-to-volume ratios and easy speed control (Miller, 1989).

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 998-1016. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/COMPEL-01-2013-0026

This work is supported by a JSPS Grant-in-Aid for Scientific Research (B) (No. 22360041), by a Grant-in-Aid for Challenging Exploratory Research (No. 23656069), and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20120005701). The authors sincerely appreciate this support.

However, torque pulsation is often a problem in IPM motors; it prevents smooth operation and causes unwelcome noise and structural vibration. The pulsating torque consists of cogging torque and torque ripple. The cogging torque results from the unevenly distributed air-gap permeance associated with the stator slots and the PMs of the rotor. The torque ripple is produced by the interaction between the stator current and the rotor magnetic field, or by mechanical imbalances. The pulsating torque can be reduced through careful design of the machine geometry, the winding layout and the drive current. In order to reduce torque pulsation, PM synchronous motors should have a sinusoidal air-gap field distribution, which is affected by the rotor magnets as well as by the winding distribution. A sinusoidal air-gap flux density produced by magnets can be obtained by appropriately shaping the outlines of the PMs or the rotor poles (Heikkilä, 2002; Evans, 2010; Li et al., 2003; Hsieh and Hsu, 2005). Hong and Yoo optimized the PM shape in surface-mounted PM synchronous motors using the level set method, in order to produce a sinusoidal air-gap field (Hong and Yoo, 2011). Choi et al. presented a method for obtaining optimal rotor pole shapes in IPM motors using the phase field method (Choi et al., 2012). Alternatively, a sinusoidal air-gap field can be achieved by appropriately shaping the flux barriers. Bianchi et al. proposed choosing pairs of flux barriers of different shapes to reduce higher torque harmonics in synchronous reluctance machines or PM-assisted machines (Bianchi et al., 2009). Fang et al. introduced new barrier designs in IPM synchronous motors with the aim of reducing torque pulsation, and optimized their geometries using the response surface method (Fang et al., 2006, 2010). The present work gives priority to the design of the flux barriers within IPM synchronous motor rotors for a sinusoidal flux density distribution in the air-gap.
The optimization is based on the phase field method using an Allen-Cahn equation. This approach is a numerical technique for tracking diffuse interfaces, similar to the level set method, so it is regarded as a shape optimization method. Unlike level set methods that use the Hamilton-Jacobi equation, a phase field approach does not require a computationally troublesome re-initialization process (Takezawa et al., 2010; Choi et al., 2011). In this work, a design obtained using topology optimization is also presented for comparison with the results obtained by the phase field method. The evolution of the density field (the design variable) and the system analysis are based on finite element analysis (FEA). The augmented Lagrangian method is employed to handle the volume constraint. For practical applications, structural and thermal aspects, including the winding layout, can be considered in the optimization problem; for instance, yield criteria related to the centrifugal force on the rotor, or the temperature rise of the magnets caused by eddy current loss, can be added as design constraints. However, our work focusses on the flux barrier design for the target flux density in the air-gap, without considering structural and thermal problems. The remainder of this paper is organized as follows. In Section 2, we introduce a topology optimization method that uses a reaction-diffusion equation, a variation of which provides the basis for the phase field-based method using the Allen-Cahn equation. In Section 3, the optimization problem is formulated for the flux barrier design and the design sensitivities are derived based on the adjoint variable method. In Section 4, numerical optimization results are presented, and the motor performances of the optimized models, such as cogging torque, back-electromotive force (EMF) and torque ripple, are evaluated. Moreover, the performances of the optimized models


projected to the 0/1 space are additionally investigated. Finally, our conclusions are presented in Section 5.

2. Optimization algorithm
2.1 Design approach using a reaction-diffusion equation
Before describing our approach that uses the phase field method, we introduce an optimization method based on a reaction-diffusion equation, which is known as the Allen-Cahn equation when used in phase transition problems. The reaction-diffusion equation expresses two processes, namely, a "diffusion" process describing how substances spread in space, reducing their concentration, and a "reaction" process that describes how substances react with each other locally. The reaction term can be modified to accommodate specific problems such as population dynamics (Abraham, 1998), combustion (Peters, 2000), and chemical reactions (Ross et al., 1988). Choi et al. proposed a topology optimization method using a reaction-diffusion equation in which the shape derivative (Fréchet derivative) of the objective function is simply employed in the reaction term, as in the following equation (Choi et al., 2011):

$$\frac{\partial \phi(x,t)}{\partial t} = \epsilon \nabla^2 \phi(x,t) - \frac{\partial \bar{F}(\phi, A_z)}{\partial \phi} \quad \text{in } x \in \Omega,\ 0 < t \le T \tag{1}$$

$$\frac{\partial \phi}{\partial n} = 0 \quad \text{on } \partial\Omega \tag{2}$$

where $\Omega \subset \mathbb{R}^D$ ($D = 2$) is a bounded domain with boundary $\partial\Omega$, $\phi$ is a density field determining the material properties, and $A_z$ is the z-directional magnetic vector potential. The Neumann boundary condition $\partial\phi/\partial n = 0$ is imposed on all boundaries, where $n$ is the outer unit vector normal to $\partial\Omega$, and $T$ represents the time necessary for convergence. The right-hand side of (1) is composed of a diffusion term and a reaction term, $\partial\bar{F}/\partial\phi$, which is the gradient of the augmented Lagrangian $\bar{F}$. The diffusion coefficient $\epsilon$ is a very small parameter related to the interfacial energy. As shown in Figure 1, the material phases in the design space $\Omega$ depend on the density field, which has values close to 1 in the solid region (ferromagnetic material) and values close to 0 in the void (air). Between the two material phases there exists a diffuse interfacial layer, the thickness of which depends on the diffusion coefficient $\epsilon$. As the diffusion coefficient is reduced, it becomes possible to obtain optimal configurations with increasingly sharp interfaces, since this parameter controls the mobility of the diffuse interface. In our work, the parameter $\epsilon$ is defined as:

$$\epsilon = \epsilon_0 \int_\Omega dx \tag{3}$$

The order of $\epsilon_0$ is determined empirically and is set to $2 \times 10^{-5}$ in this work. Once $\epsilon_0$ and $\epsilon$ are selected, we consider only the parameter $\epsilon_0$, since it is not sensitive to the domain size. When the same order of $\epsilon_0$ is used, we can obtain optimized configurations with an interfacial layer thickness of a similar scale regardless of the domain size (Choi et al., 2011, 2012).

[Figure 1. Partition of material phases. Notes: (a) description of the interface Γ using the density field φ; (b) cross-section of the diffuse interfacial layer]

Optimization problems for magnetic systems may be written as:

$$\begin{aligned}
\underset{\phi}{\text{minimize}} \quad & F(\phi, A_z(\phi)) \\
\text{subject to} \quad & G(\phi) = \int_\Omega \phi\, dx - V_{req} \int_\Omega dx \le 0, \\
& 0 \le \phi \le 1
\end{aligned} \tag{4}$$

where $F(\phi, A_z)$, $G(\phi)$ and $V_{req}$, respectively, represent the objective function, the inequality constraint on volume, and the required volume fraction. To deal with the constraint, we define the augmented Lagrangian $\bar{F}$ as:

$$\bar{F}(\phi, A_z, \lambda) = \xi F(\phi, A_z) + \lambda C(\phi) + \frac{r}{2}\, C(\phi)^2 \tag{5}$$

where:

$$C(\phi) = \max\!\left(G(\phi),\, -\frac{\lambda}{r}\right) \tag{6}$$

Here, $\lambda$ is the Lagrange multiplier and $r$ is a penalty parameter. $\xi$ is a parameter that normalizes the design sensitivity, defined as:

$$\xi = \frac{\int_\Omega dx}{\left\| \partial F / \partial \phi \right\|_{L^2(\Omega)}} \tag{7}$$

The gradient of the Lagrangian with respect to the density field is employed as the reaction term in (1), and is written as:

$$\frac{\partial \bar{F}}{\partial \phi} = \xi \frac{\partial F(\phi, A_z)}{\partial \phi} + \big(\lambda + r\, C(\phi)\big) \frac{\partial C(\phi)}{\partial \phi} \tag{8}$$
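As a toy illustration of how the constraint handling in (5)-(8) behaves, the following sketch applies the same max-form augmented Lagrangian update to a one-dimensional stand-in problem; the quadratic objective and linear constraint are assumptions for illustration, not the magnetic objective of this paper.

```python
def augmented_lagrangian_demo(n_outer=50):
    # Toy stand-in: minimize F(x) = (x - 2)^2 subject to G(x) = x - 1 <= 0,
    # handled with the augmented Lagrangian of equations (5)-(6).
    lam, r, gamma = 0.0, 1.0, 1.03
    x = 0.0
    for _ in range(n_outer):
        # inner minimization of (x-2)^2 + lam*(x-1) + (r/2)*(x-1)^2,
        # solved in closed form (the constraint branch of (6) stays active here)
        x = (4.0 - lam + r) / (2.0 + r)
        C = max(x - 1.0, -lam / r)   # equation (6)
        lam += r * C                 # multiplier update, as in (16)
        r *= gamma                   # penalty growth, as in (16)
    return x, lam

x_opt, lam_opt = augmented_lagrangian_demo()
# the constrained minimizer is x = 1, with Lagrange multiplier lambda = 2
```

As the penalty grows geometrically, the iterates converge to the constrained minimizer while the multiplier estimate converges to its exact value, which is the behavior the update (16) relies on.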

For the shape optimization using the phase field model, the gradient of the augmented Lagrangian is simply modified as (Takezawa et al., 2010; Choi et al., 2011):

$$\frac{\partial \bar{F}}{\partial \phi} = \alpha\, g'(\phi) + p'(\phi) \left[ \xi \frac{\partial F(\phi, A_z)}{\partial \phi} + \big(\lambda + r\, C(\phi)\big) \frac{\partial C(\phi)}{\partial \phi} \right] \tag{9}$$

where:

$$g(\phi) = \phi^2 (1-\phi)^2, \qquad p(\phi) = \phi^3 \big(6\phi^2 - 15\phi + 10\big) \tag{10}$$

Here, the functions $g(\phi)$ and $p(\phi)$ are, respectively, a smooth Dirac delta function and a smooth Heaviside function in the range $0 \le \phi \le 1$, as shown in Figure 2. Usually, the function $g(\phi)$ is described as a double-well potential. In (9), $\alpha$ is a parameter that influences the thickness of the diffuse interfacial layer, and must be positive; in this study, $\alpha$ is set to 1. $p'(\phi)$ is itself a smooth Dirac delta function because $p'(\phi) = 30\, g(\phi)$. Therefore, the design sensitivity outside the diffuse interfacial layer becomes zero, which provides this approach with its interface-tracking property. The first term on the right-hand side of (9), $\alpha g'(\phi)$, plays a functional role similar to that of re-initialization in level set methods: it keeps the diffuse interfacial layer tense, preventing folds, and maintains the layer at a constant thickness.
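The functions in (10) and the identity $p'(\phi) = 30\,g(\phi)$ are easy to verify numerically; a minimal sketch:

```python
def g(phi):
    # double-well potential of equation (10)
    return phi**2 * (1.0 - phi)**2

def p(phi):
    # smooth Heaviside function of equation (10): 6*phi^5 - 15*phi^4 + 10*phi^3
    return phi**3 * (6.0 * phi**2 - 15.0 * phi + 10.0)

def dp(phi):
    # analytic derivative of p; note dp(phi) = 30*g(phi), so the modified
    # sensitivity (9) vanishes outside the diffuse interfacial layer
    return 30.0 * phi**4 - 60.0 * phi**3 + 30.0 * phi**2
```

Since $p$ interpolates smoothly from 0 to 1 and its derivative is proportional to the double-well $g$, the bracketed sensitivity in (9) is only "felt" inside the interfacial layer.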

[Figure 2. Functions g(φ) and p(φ). Notes: (a) g(φ); (b) p(φ)]

2.2 Numerical implementation
The density field $\phi$ is evolved forward in time based on the FEA, in which triangular three-node elements are employed. From (1) and (2), the weak form is written as:

$$[K_1]\left\{\frac{\partial \phi}{\partial t}\right\} + [K_2]\{\phi\} = \{b\} \tag{11}$$

where:

$$[K_1] = \bigcup_{e=1}^{N_e} \int_{\Omega_e} N^T N \, dx \tag{12}$$

$$[K_2] = \bigcup_{e=1}^{N_e} \int_{\Omega_e} \epsilon\, \nabla N^T \nabla N \, dx \tag{13}$$

$$\{b\} = -\bigcup_{e=1}^{N_e} \int_{\Omega_e} \frac{\partial \bar{F}}{\partial \phi}\, N \, dx \tag{14}$$

Here, $N_e$ is the total number of elements in the design domain $\Omega$, $\bigcup_{e=1}^{N_e}$ is the union set of elements (the finite element assembly), and $N$ is the shape function. For the time discretization, we employ the implicit scheme, which is accurate to order 1 in time and order 2 in space (Allaire, 2007). From (11), the following equation is obtained:

$$\left(\frac{1}{\Delta t}[K_1] + [K_2]\right)\{\phi\}_{t+\Delta t} = \frac{1}{\Delta t}[K_1]\{\phi\}_t + \{b\} \tag{15}$$

where $\Delta t$ is the time step, and the density field $\phi_{t+\Delta t}$ at time $t+\Delta t$ is computed from (15). The Lagrange multiplier and the penalty parameter are updated during the optimization according to:

$$\lambda_{t+\Delta t} = \lambda_t + r_t\, C(\phi), \qquad r_{t+\Delta t} = \gamma\, r_t \tag{16}$$

where the parameter $\gamma$ controls the rate of increase of the penalty parameter $r$ and is set to 1.03. Increasing the rate of change of $r$ speeds up convergence of the optimization algorithm, but higher values of $\gamma$ can lead to convergence instability. The convergence criterion for the density field is defined as:

$$\frac{\left| F^T - F^{T-\Delta t} \right|}{F^{T-\Delta t}} \le 10^{-5} \tag{17}$$

where $F^T$ is the converged objective value at time $T$.
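The semi-implicit update (15) can be sketched in one dimension; the finite-difference Laplacian and lumped mass matrix below are stand-ins for the triangular-element matrices (12)-(14), and the reaction term keeps only the $\alpha g'(\phi)$ part of (9), so this is a minimal Allen-Cahn evolution rather than the paper's full sensitivity-driven update.

```python
import numpy as np

# 1D sketch of the semi-implicit update (15): a finite-difference Laplacian
# with Neumann ends stands in for the FE matrices [K1], [K2] of (12)-(13).
n = 101
h = 1.0 / (n - 1)
eps, dt, alpha, steps = 2e-5, 1e-2, 1.0, 1000

lap = np.zeros((n, n))
for i in range(n):
    lap[i, i] = -2.0
    if i > 0:
        lap[i, i - 1] = 1.0
    if i < n - 1:
        lap[i, i + 1] = 1.0
lap[0, 1] = 2.0          # homogeneous Neumann boundary (ghost-node form)
lap[-1, -2] = 2.0
lap /= h * h

K1 = np.eye(n)           # lumped mass matrix stand-in for [K1]
K2 = -eps * lap          # diffusion matrix stand-in for [K2]
A = K1 / dt + K2         # left-hand operator of equation (15)

phi = np.linspace(0.0, 1.0, n)                        # initial density ramp
dg = lambda q: 2.0 * q * (1.0 - q) * (1.0 - 2.0 * q)  # g'(phi) from (10)
for _ in range(steps):
    b = -alpha * dg(phi)                              # reaction term {b}
    phi = np.linalg.solve(A, K1 @ phi / dt + b)       # update (15)
# phi is driven toward the wells at 0 and 1, sharpening the interface
```

Treating the diffusion implicitly keeps the step stable even though $\epsilon/h^2$ may be large, while the explicit reaction term only constrains $\Delta t$ mildly, which is the motivation for the scheme in (15).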


3. Flux barrier design
3.1 Interpolation of the magnetic reluctivity
In order to determine the material property of a ferromagnetic material within the design domain, we need an appropriate interpolation scheme for either the magnetic permeability or the reluctivity. It has been reported that permeability-based interpolation distorts the design sensitivity, especially for problems that have a low volume constraint (Choi and Yoo, 2008). Therefore, we adopt a reluctivity-based scheme, as follows:

$$\nu(\phi, B^2) = \big(\nu_0 - \bar{\nu}(B^2)\big)(1-\phi)^p + \bar{\nu}(B^2) \tag{18}$$

where $\nu_0$, $\bar{\nu}$, $B$ and $p$ are the reluctivity of air, the reluctivity of the ferromagnetic material used, the flux density, and the penalization parameter, respectively. Penalization ($p > 1$) has been widely used to reduce intermediate densities (grayscale areas) in structural topology optimization, but it has not yet been used in reluctivity-based topology optimization for the magnetic field, since this formulation is inherently able to produce clear optimized configurations free from grayscale without any penalization (Choi and Yoo, 2008; Yoo et al., 2008). Figure 3 shows the relationship between the density field $\phi$ and the relative reluctivity $\nu_r$ ($= \nu/\nu_0$), in which the relative reluctivity of the employed material is assumed to have a constant value ($\bar{\nu} = \nu_0/1000$). The linearly interpolated reluctivity ($p = 1$) represents the absence of penalization. As penalization is increased, the suppression in high-density areas becomes stronger, as shown in Figure 3, broadening the extent of regions with reluctivity values approaching that of the solid phase (the ferromagnetic material). The penalization based on (18) provides stronger design sensitivities at the diffuse interface than the linear interpolation. Therefore, we employ the penalized interpolation scheme for the reluctivity and set the value of $p$ to 3.
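A sketch of the interpolation (18), normalized by $\nu_0$ and assuming the constant material reluctivity $\bar{\nu} = \nu_0/1000$ used for Figure 3:

```python
def relative_reluctivity(phi, p=3, nu_bar=1.0 / 1000.0):
    # penalized interpolation of equation (18), normalized by nu0 (air),
    # with the constant material value nu_bar = nu0/1000 of Figure 3 assumed
    return (1.0 - nu_bar) * (1.0 - phi) ** p + nu_bar
```

At $\phi = 0$ this returns the reluctivity of air and at $\phi = 1$ that of the material; for intermediate densities, raising $p$ pulls the interpolated value toward the solid-phase reluctivity, which is the penalization effect shown in Figure 3.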
3.2 Formulation of the optimization problem
Figure 4(a) shows a 1/8th segment of the 3-phase, 8-pole, 48-slot IPM motor. The outer diameters of the rotor and the stator are 80 and 120 mm, respectively. The air-gap size

[Figure 3. Interpolation of the magnetic reluctivity: relative reluctivity νr versus density φ for p = 1, 2, 3]

[Figure 4. Design model of the IPM motor. Notes: (a) design domain and boundary conditions; (b) flux lines of the basic model; (c) air-gap field distribution]


and the stack length are 0.6 and 65 mm, respectively. The remanent flux density of the PM used is 1.1 T. The iron core of the rotor and the stator is assumed to be soft iron, the B-H curve of which is shown in Figure 5. At boundaries $\Gamma_1$ and $\Gamma_2$, a Dirichlet boundary condition is applied, and an anti-periodic boundary condition is imposed at $\Gamma_3$ and $\Gamma_4$. The design space and the entire analysis domain are, respectively, discretized into 22,208 and 34,008 three-node elements. For the design of the flux barriers in the rotor, minimizing stator slot effects is desirable; in our work, it is therefore assumed that the entire stator is a monolithic iron core during the optimization process, as shown in Figure 4(b). Figure 4(b) illustrates the flux lines of the basic model, which includes triangular flux barriers at both ends of the magnet. The flux density in the air-gap region is displayed in Figure 4(c). The air-gap field is nearly flat over the distance from the magnet center (d-axis) to the magnet ends, and the field shape is far from the targeted sinusoidal distribution. In order to minimize the difference between the flux density $B_{air}$ and the target field $B_{target}$ in the air-gap region $\Omega_{air}$, the optimization problem is formulated as:

$$\begin{aligned}
\underset{\phi}{\text{minimize}} \quad & F(\phi, A_z) = \int_{\Omega_{air}} (B_{air} - B_{target})^2 \, d\Omega \\
\text{subject to} \quad & G(\phi) = \int_{\Omega_D} \phi \, dx - V_{req} \int_{\Omega_D} dx = 0, \\
& 0 \le \phi \le 1, \\
& a_\phi(A_z, \tilde{A}_z) = l_\phi(\tilde{A}_z) \quad \text{for } A_z \in V,\ \forall \tilde{A}_z \in V
\end{aligned} \tag{19}$$

where $\Omega_D$ and $\tilde{A}_z$ denote the design domain and the test function, respectively. $B_{target}$ is obtained from:

$$B_{target} = \sqrt{2}\, B_{rms} \sin \theta_E, \qquad B_{rms} = \sqrt{\frac{\int_{\Omega_{air}} B_{air}^2 \, d\Omega}{\int_{\Omega_{air}} d\Omega}} \tag{20}$$
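The target field of (20) can be sketched from sampled air-gap values; the flat 0.7 T profile below is an illustrative stand-in for the FEA field, and uniform sampling over one pole pitch is assumed.

```python
import numpy as np

def target_field(B_air, theta_E):
    # target distribution of equation (20): a sinusoid whose RMS matches
    # the RMS of the sampled air-gap field (uniform sampling assumed)
    B_rms = np.sqrt(np.mean(B_air ** 2))
    return np.sqrt(2.0) * B_rms * np.sin(theta_E)

theta = np.linspace(0.0, np.pi, 200)   # one pole pitch in electrical angle
B_air = 0.7 * np.ones_like(theta)      # flat stand-in for the FEA field
B_t = target_field(B_air, theta)       # peak approx sqrt(2)*0.7 T
```

Because the sinusoid is scaled to the current RMS of $B_{air}$, the target preserves the field's energy content at each iteration, which is why $B_{target}$ changes slightly as the optimization progresses.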

[Figure 5. B-H curve for soft iron]

where $\theta_E$ is the angle in electrical measure. The target field is computed based on the root mean square (RMS) value of the air-gap field at each iteration, so it changes slightly during the optimization iterations. The bilinear form and the load linear form are, respectively, defined as (Bianchi, 2005):

$$a_\phi(A_z, \tilde{A}_z) = \int_\Omega \nu(\phi) \left( \frac{\partial A_z}{\partial x} \frac{\partial \tilde{A}_z}{\partial x} + \frac{\partial A_z}{\partial y} \frac{\partial \tilde{A}_z}{\partial y} \right) d\Omega \tag{21}$$

and:

$$l_\phi(\tilde{A}_z) = \int_\Omega \left[ J_z \tilde{A}_z + \nu \left( B_{r,x} \frac{\partial \tilde{A}_z}{\partial y} - B_{r,y} \frac{\partial \tilde{A}_z}{\partial x} \right) \right] d\Omega \tag{22}$$

where $J_z$ is the z-directional current density, and $B_{r,x}$ and $B_{r,y}$ are, respectively, the x- and y-directional remanent flux density. The admissible linear space $V$ is defined as:

$$V = \left\{ A_z \in H^1(\Omega) \;\middle|\; A_z|_{\Gamma_1} = A_z|_{\Gamma_2} = 0,\; A_z|_{\Gamma_3} = -A_z|_{\Gamma_4} \right\} \tag{23}$$

3.3 Sensitivity analysis
It is assumed that the objective function $F(\phi)$ is continuous with respect to the design variable $\phi$. The variation of $F(\phi)$ in the direction $\delta\phi$ is defined as (Choi and Kim, 2005):

$$F'\delta\phi = \left.\frac{d}{d\tau} F(\phi + \tau\,\delta\phi)\right|_{\tau=0} = \frac{\partial F}{\partial \phi}\,\delta\phi \tag{24}$$

where $\tau$ is the perturbation size of the design variable. The adjoint equation is given by:

$$a_\phi(w, \tilde{w}) = \int_{\Omega_{air}} 2\,(B_{air} - B_{target})\,\frac{\partial B_{air}}{\partial A_z}\,\tilde{w}\, dx, \quad \forall \tilde{w} \in V \tag{25}$$

Here, $w$ is the adjoint variable. The variations of the bilinear form and the load linear form are, respectively, written as:

$$a'_{\delta\phi}(A_z, \tilde{A}_z) = \int_{\Omega_D} \nu'(\phi, B^2) \left( \frac{\partial A_z}{\partial x} \frac{\partial \tilde{A}_z}{\partial x} + \frac{\partial A_z}{\partial y} \frac{\partial \tilde{A}_z}{\partial y} \right) \delta\phi \, d\Omega \tag{26}$$

$$l'_{\delta\phi}(\tilde{A}_z) = 0 \tag{27}$$

Using the adjoint variable computed from (25) and the above variation results, the variation of the objective function yields:

$$F'\delta\phi = -a'_{\delta\phi}(A_z, w) = -\int_{\Omega_D} \nu'(\phi, B^2) \left( \frac{\partial A_z}{\partial x} \frac{\partial w}{\partial x} + \frac{\partial A_z}{\partial y} \frac{\partial w}{\partial y} \right) \delta\phi \, d\Omega \tag{28}$$
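The adjoint logic of (24)-(28) can be checked on a tiny discrete analogue; the 2×2 matrices below are arbitrary stand-ins for the FE system, with `dK` playing the role of the reluctivity-derivative term in (26), and the adjoint sensitivity is compared against a central finite difference.

```python
import numpy as np

# Tiny discrete analogue of (24)-(28): solve K(phi) u = f, measure
# F = (u[0] - u_star)^2, and verify the adjoint sensitivity numerically.
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
dK = np.array([[1.0, 0.0], [0.0, 2.0]])    # dK/dphi (stand-in for nu')
f = np.array([1.0, 2.0])
u_star = 0.3

def F_of(phi):
    u = np.linalg.solve(K0 + phi * dK, f)
    return (u[0] - u_star) ** 2

phi = 0.4
K = K0 + phi * dK
u = np.linalg.solve(K, f)
dFdu = np.array([2.0 * (u[0] - u_star), 0.0])
w = np.linalg.solve(K.T, dFdu)             # adjoint solve, cf. (25)
dF_adjoint = -w @ (dK @ u)                 # cf. (26) and (28)

h = 1e-6
dF_fd = (F_of(phi + h) - F_of(phi - h)) / (2.0 * h)
```

One adjoint solve yields the sensitivity with respect to every design variable at once, which is why the adjoint variable method is preferred over finite differencing each element density.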


4. Numerical examples
Two-dimensional analysis of the IPM motor and the density field evolution are performed using commercial software, namely COMSOL Multiphysics and Matlab.

4.1 Optimization results
Usually, shape optimization using the phase field or the level set method starts from several holes evenly distributed over the design space. However, the optimization of the IPM motor rotor is highly dependent on the initial configuration, because the reluctivity in the design domain is determined from the B-H curve shown in Figure 5, and the PM material is strong enough to magnetically saturate the rotor pole. As a result, our optimization problem has many local optima, depending on the initial shapes, and appropriate positions for the initial holes must be predicted. Figure 4(c) showed that the flux density of the design model at both PM ends is higher than the target curve; this is why we initially place a few holes near both ends of the magnet, as shown in Figure 6. The initial configuration 1 shown in Figure 6(a) has two half-circular barriers in addition to the two triangular ones. The initial configuration 2 in Figure 6(b) has two additional circular barriers. Table I, Figures 7 and 8, respectively, show the optimized configurations, the air-gap field distributions, and the flux line plots, where the optimized model 1 and the optimized model 2 are the results of the phase field-based design starting from the initial configuration 1 and the initial configuration 2, respectively. The field distribution obtained by the topology optimization based on (8) is nearly the same as the target field, as shown in Figure 7(a). However, the design domain is full of intermediate densities, so this result is not practically useful; this is why a phase field-based approach is advantageous in the present optimization problem. Compared with the basic model, the optimized model 1 shows two additional flux barriers.
The new flux barriers expand from the initial holes toward the rotor center, and their ends are very sharp, appearing somewhat like a bull's horn. Figure 7(b) shows that the air-gap field distribution is closer to the target field than that of the basic model.

[Figure 6. Initial configurations for the phase field-based optimization. Notes: (a) initial configuration 1; (b) initial configuration 2]

Table I. Optimized configurations and objective values

Optimized configuration            Objective value
Model using topology optimization  1.386 × 10⁻¹⁰
Optimized model 1                  1.636 × 10⁻⁹
Optimized model 2                  2.767 × 10⁻¹⁰

[Figure 7. Air-gap flux density distribution of optimized models versus the target field. Notes: (a) optimized model using topology optimization; (b) optimized model 1; (c) optimized model 2]

[Figure 8. Flux line plots of optimized models. Notes: (a) optimized model using topology optimization; (b) optimized model 1; (c) optimized model 2]

However, the field is somewhat flat in the vicinity of the rotor center. The initial configuration 2 yields a total of six flux barriers, the shapes of which differ from those of the optimized model 1; its field distribution is closer to the sinusoidal curve than that of model 1, as the objective values in Table I show.

4.2 Motor performances
The analysis of the motor system during the optimization process is static, so a single computation at each iteration is sufficient. To calculate motor performances such as torque pulsation or back-EMF, however, analyses at various rotor positions are required. In this study, we use the "moving mesh" application mode of COMSOL Multiphysics to analyze the system at various rotor positions without remeshing. In this subsection, the performances of the design model obtained by topology optimization are not measured, since they are not available. Cogging torque is produced by the interaction between the PMs and the stator slots in the absence of current sources. Based on the principle of virtual work, the cogging torque $T_{cog}$ is defined as:

$$T_{cog} = -\frac{dW_{mag}}{d\theta_M} \tag{29}$$

where $W_{mag}$ is the magnetic energy of the analysis system and $\theta_M$ is the mechanical angle.

[Figure 9. Motor performances. Notes: (a) cogging torques; (b) back-EMFs; (c) back-EMF harmonic components; (d) torque ripples]

Figure 9(a) shows the cogging torque profiles at rotor positions ranging from 0 to 180°E. The torque values (peak-to-peak amplitude) of the optimized models are considerably reduced, by 35.5-83.8 percent, compared with the basic model. In particular, the cogging torque of the optimized model 1 is reduced by more than half compared with that of model 2, although the objective value of model 2 is smaller, i.e. more preferable, than that of model 1. The back-EMF is the voltage produced in a winding by the magnet motion in the rotor and is defined as:

$$E = -\frac{d\Lambda}{dt} = -\omega_E \frac{d\Lambda}{d\theta_E} \tag{30}$$

Here, $\Lambda$ and $\omega_E$ are the flux linkage and the electric angular velocity, respectively. A rotation rate of 500 rpm is assumed when computing the back-EMF. As shown in Figure 9(b, c), while the first-order harmonics of the optimized models are preserved, the higher harmonics are remarkably reduced. As a result, the total harmonic distortion (THD) is considerably suppressed, as shown in Table II, and the THD of the back-EMF for model 1 is slightly lower than that of model 2.
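The virtual-work formula (29) amounts to differentiating sampled magnetic energy with respect to rotor position; a central-difference sketch on a synthetic energy profile (the cosine shape and the 1 mJ amplitude are assumptions for illustration, not FEA data):

```python
import numpy as np

def cogging_torque(W_mag, theta_M):
    # virtual-work torque of equation (29): T = -dW_mag/dtheta_M,
    # evaluated by central differences on uniformly sampled energies
    return -np.gradient(W_mag, theta_M)

# synthetic energy over one cogging period; an 8-pole/48-slot combination
# repeats every 2*pi/24 mechanical radians (illustrative values only)
theta = np.linspace(0.0, 2.0 * np.pi / 24.0, 200)
W = 1e-3 * np.cos(24.0 * theta)
T = cogging_torque(W, theta)       # peak torque approx 24 * 1e-3 = 0.024 Nm
```

The same finite-difference idea underlies evaluating (30): sampling the flux linkage at successive rotor positions and differentiating with respect to the electrical angle.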

Table II. Performance comparison

                                     Basic model   Opt. model 1       Opt. model 2
Cogging torque (peak to peak) (Nm)   0.413         0.0669 (83.8% ↓)   0.266 (35.5% ↓)
THD of back-EMF                      0.191         0.0296 (84.5% ↓)   0.0342 (82.2% ↓)
Average torque (Nm)                  16.3          15.9 (2.45% ↓)     15.5 (4.87% ↓)
Torque ripple (peak to peak) (Nm)    2.34          1.56 (33.3% ↓)     1.83 (21.8% ↓)
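The THD values in Table II follow from the harmonic amplitudes of Figure 9(c); the sketch below uses the standard definition (RMS of the higher harmonics divided by the fundamental amplitude), which the paper does not state explicitly, and illustrative amplitudes rather than the paper's measured data.

```python
import math

def thd(harmonic_amplitudes):
    # total harmonic distortion: RMS of harmonics 2..N relative to the
    # fundamental amplitude (standard definition, assumed here)
    fundamental, *higher = harmonic_amplitudes
    return math.sqrt(sum(a * a for a in higher)) / fundamental

# illustrative back-EMF harmonic amplitudes in volts (NOT the paper's data)
example = thd([16.0, 0.0, 3.0, 0.0, 0.4])
```

Suppressing the higher harmonics while preserving the fundamental, as the optimized models do, directly drives this ratio down.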


Figure 9(d) shows the FEA results for the torque ripple, where the number of winding turns per phase is 72 and the input current is 3 A (rms). As shown in Table II, the average torque of the optimized models is slightly reduced, but their ripples (peak to peak) are decreased by 21.8-33.3 percent compared with the basic model.

4.3 0/1 projection model
Phase field models commonly have a diffuse interfacial layer, as shown in Figure 1: the domain is separated by a continuous variation of the material properties within a narrow diffuse interface region. As shown on the left side of Figure 10(a), a diffuse interfacial layer exists between the solid phase and the void phase, where the interface thickness is $x_3 - x_1$. This smooth interface enables the phase transition during the optimization process, but it makes it difficult to determine the border of the solid region. In practice, a 0/1 design with a sharp interface is more useful, as shown on the right side of Figure 10(a). In our work, the original density field is projected to the 0/1 space based on the following function (Wang et al., 2011):

$$\tilde{\phi} = \frac{\tanh(\beta\eta) + \tanh\!\big(\beta(\phi - \eta)\big)}{\tanh(\beta\eta) + \tanh\!\big(\beta(1 - \eta)\big)} \tag{31}$$
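A direct sketch of the projection (31); the sample densities below are illustrative:

```python
import math

def project(phi, beta=512.0, eta=0.5):
    # smoothed 0/1 projection of equation (31)
    return ((math.tanh(beta * eta) + math.tanh(beta * (phi - eta)))
            / (math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))))
```

With β = 512 the projection is numerically indistinguishable from a hard 0/1 threshold at η, while smaller β values give the gradual transitions shown in Figure 10(b).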

Here, $\eta$ is a threshold: all density values $\phi$ above $\eta$ are projected toward 1. The parameter $\beta$ controls the smoothness of the projection function, as shown in Figure 10(b); as $\beta \to \infty$, a discrete design with a sharp interface is obtained. Here, $\beta = 512$ and $\eta = 0.5$ are used, and the resulting projected density field $\tilde{\phi}$ is shown on the right side of Figure 10(c), where the outline of the elements adjacent to the interface is clearly visible. Figure 11 shows the motor performances of the 0/1 projection models: cogging torque, back-EMF and torque ripple. The change of interface leads to a large difference in the cogging torque, as shown in Figure 11(a); in particular, the torque value of the optimized model 1 is almost doubled. However, there are only small changes in the back-EMF and the torque ripple, as shown in Figure 11(b, c): the cogging torque is more sensitive to the geometries of the rotor and the stator than the back-EMF and the torque ripple are. The difference between the original optimized models and the 0/1 projection models can be reduced by decreasing the diffusivity of the reaction-diffusion equation: as the diffusivity decreases, the diffuse interfacial layer approaches a sharp interface, although a very fine mesh is then required to resolve it.

5. Conclusions
The shapes of flux barriers in IPM synchronous motors were optimized to produce nearly sinusoidal field distributions in the air-gap between the rotor and the stator. The density field was evolved using the phase field method based on the Allen-Cahn equation, which enabled the shape optimization method to track the diffuse interface. Although the results were highly dependent on the position and the number of holes in the initial configuration, the obtained flux barrier shapes were much clearer than the configurations obtained by topology optimization.
The optimized models obtained with the phase field method showed considerable reductions in cogging torque, torque ripple and the higher harmonics of back-EMF. Additionally, we investigated the performances of the 0/1 projection models.

Figure 10. Projection of the density field. Notes: (a) smooth interface and sharp interface; (b) projection using the hyperbolic tangent function (η = 0.5; β = 4, 8, 64, 512); (c) distribution of the original density (left side) and the projected density (right side)

Figure 11. Motor performances of 0/1 projection models. Notes: (a) cogging torques; (b) back-EMFs; (c) torque ripples (optimized models 1 and 2, with and without 0/1 projection, vs electric angle 0-180 °E)

References
Abraham, E.R. (1998), "The generation of plankton patchiness by turbulent stirring", Nature, Vol. 391, pp. 577-580.
Allaire, G. (2007), Numerical Analysis and Optimization: An Introduction to Mathematical Modelling and Numerical Simulation, Oxford University Press, New York, NY.
Bianchi, N. (2005), Electrical Machine Analysis Using Finite Elements, CRC Press, New York, NY.
Bianchi, N., Bolognani, S., Bon, D. and Pré, M.D. (2009), "Rotor flux-barrier design for torque ripple reduction in synchronous reluctance and PM-assisted synchronous reluctance motors", IEEE Transactions on Industry Applications, Vol. 45 No. 3, pp. 921-928.
Coey, J.M.D. (2002), "Permanent magnet applications", Journal of Magnetism and Magnetic Materials, Vol. 248 No. 3, pp. 441-456.
Choi, J.S. and Yoo, J. (2008), "Structural optimization of ferromagnetic materials based on the magnetic reluctivity for magnetic field problems", Computer Methods in Applied Mechanics and Engineering, Vol. 197 Nos 49-50, pp. 4193-4206.
Choi, J.S., Izui, K., Nishiwaki, S., Kawamoto, A. and Nomura, T. (2012), "Rotor pole design of IPM motors for a sinusoidal air-gap flux density distribution", Structural and Multidisciplinary Optimization, Vol. 46 No. 3, pp. 445-455.
Choi, J.S., Yamada, T., Izui, K., Nishiwaki, S. and Yoo, J. (2011), "Topology optimization using a reaction-diffusion equation", Computer Methods in Applied Mechanics and Engineering, Vol. 200 Nos 29-32, pp. 2407-2420.
Choi, K.K. and Kim, N.H. (2005), Structural Sensitivity Analysis and Optimization 1: Linear Systems, Springer, Berlin.
Evans, S.A. (2010), "Salient pole shoe shapes of interior permanent magnet synchronous machines", International Conference on Electrical Machines, September 6-8.
Fang, L., Kim, S., Kwon, S. and Hong, J. (2010), "Novel double-barrier rotor designs in interior-PM motor for reducing torque pulsation", IEEE Transactions on Magnetics, Vol. 46 No. 6, pp. 2183-2186.
Fang, L., Kwon, S., Zhang, P. and Hong, J. (2006), "Torque ripple reduction of multi-layer interior permanent magnet synchronous motor by using response surface methodology", International Conference on Electrical Machines, September 2-5, Chania.
Heikkilä, T. (2002), "Permanent magnet synchronous motor for industrial inverter applications - analysis and design", dissertation, Lappeenranta University of Technology, Lappeenranta.
Hong, H. and Yoo, J. (2011), "Shape design of the surface mounted permanent magnet in a synchronous machine", IEEE Transactions on Magnetics, Vol. 47 No. 8, pp. 2109-2117.
Hsieh, M.-F. and Hsu, Y.-S. (2005), "An investigation on influence of magnet arc shaping upon back electromotive force waveforms for design of permanent-magnet brushless motors", IEEE Transactions on Magnetics, Vol. 41 No. 10, pp. 3949-3951.
Li, Y., Zou, J. and Lu, Y. (2003), "Optimum design of magnet shape in permanent-magnet synchronous motors", IEEE Transactions on Magnetics, Vol. 39 No. 6, pp. 3523-3526.
Miller, T.J.E. (1989), Brushless Permanent-Magnet and Reluctance Motor Drives, Clarendon Press, Oxford.
Peters, N. (2000), Turbulent Combustion, Cambridge University Press, Cambridge.
Ross, J., Müller, S.C. and Vidal, C. (1988), "Chemical waves", Science, Vol. 240 No. 4851, pp. 460-465.
Takezawa, A., Nishiwaki, S. and Kitamura, M. (2010), "Shape and topology optimization based on the phase field method and sensitivity analysis", Journal of Computational Physics, Vol. 229 No. 7, pp. 2697-2718.


Wang, F., Lazarov, B.S. and Sigmund, O. (2011), "On projection methods, convergence and robust formulations in topology optimization", Structural and Multidisciplinary Optimization, Vol. 43 No. 6, pp. 767-784.
Yoo, J., Yang, S. and Choi, J.S. (2008), "Optimal design of an electromagnetic coupler to maximize force to a specific direction", IEEE Transactions on Magnetics, Vol. 44 No. 7, pp. 1737-1742.

Corresponding author
Dr Jae Seok Choi can be contacted at: [email protected]


Conformal antennas arrays radiation synthesis using immunity tactic

Sidi Ahmed Djennas, Belkacem Benadda, Lotfi Merad and Fethi Tarik Bendimerad
Department of Electrical and Electronics Engineering, Abou-Bekr Belkaïd University, Tlemcen, Algeria

Abstract
Purpose – The purpose of this paper is to introduce to the scientific community a new optimization technique and its application to the radiation synthesis case.
Design/methodology/approach – The immunity tactic is a new, powerful optimization tool inspired by the immune system. It was used successfully to achieve conformal antenna radiation synthesis in an acceptable processing time.
Findings – Radiation synthesis of conformal antenna arrays based on the immunity tactic generates very good results compared with other optimization methods. The comparison is very satisfactory as regards accuracy and processing time.
Research limitations/implications – Convergence and accuracy may be further improved by using other variants of the technique or by combining it with others.
Originality/value – The paper exposes in detail a new optimization technique based on the immune system and its behavior. The results, for the special case of conformal antenna array radiation synthesis, are very satisfactory and very encouraging. The impact of the new technique on the optimization field should be positive.
Keywords Optimization, Signal processing, Antenna
Paper type Research paper

I. Introduction
In the world of antennas, radiation synthesis is crucial for applications that require a specific pattern: radiation nulling for noise rejection, a sharp beam for accurate localization, multiple beams for multi-detection, etc. (Stutzman and Thiele, 1981). Many works have been carried out to solve the problem (Haupt, 1997), and many methods have been applied, with more or less success, with the perspective of improving accuracy, processing time, stability, etc. (Vaskelainen, 1997; Khzmalyan and Kondrat'yev, 1996; Rodríguez et al., 1999; Comisso and Vescovo, 2007). Radiation synthesis consists of determining the optimal electric weights to be applied to the patches in order to create the radiation pattern closest to the desired one. This is achieved by an optimization process, since it is impossible to respect exactly all constraints imposed by the desired, or destination, pattern (Bucci et al., 1995; Botha and McNamara, 1993; Jiao et al., 1993; Djennas et al., 2009). Among optimization processes there are deterministic and stochastic methods. Deterministic optimization methods are rigid and subject to the local optima problem; stochastic methods are very flexible and less vulnerable to local optima. For these reasons, we adopt stochasticity for our approach and build our new method on that fact. The new technique, conceived and baptized the immunity tactic, is presented in this survey. We classify it among stochastic processes, although it is governed by deterministic rules. We are greatly inspired for this technique by the

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering Vol. 33 No. 3, 2014 pp. 1017-1037 r Emerald Group Publishing Limited 0332-1649 DOI 10.1108/COMPEL-06-2013-0193

immune activity, which has a tendency to eliminate strange bodies and to preserve the self-organism. The analogy with the synthesis is as follows:
. elimination of all weak solutions; and
. preservation of the best solution.

Some biological immune operators will be borrowed and adapted to our technique, which remains governed by deterministic rules to avoid any destabilization. The efficiency and robustness of our original technique will be highlighted through some simulation cases. For conformal arrays, the modeling of radiation according to some constraints is a subject of great importance (Damiano et al., 1996; Girard et al., 2000; Comisso and Vescovo, 2011). In this perspective, and as an innovation, we propose in this paper a novel technique inspired by the biological immune activity. The technique simulates the self, which is part of the body, and the foreign, which is not. The rejected solutions are destroyed by combined actions at each defensive; at the end, only the optimal solution is conserved. The idea consists of considering the space of solutions as a body organism, where the unaccepted solutions are bacteria or viruses that must be eliminated once identified by the defensive system. The solution that escapes destruction will be the best. This paper deals exclusively with the new technique and its integration into conformal antenna array radiation synthesis (Djennas et al., 2009). We first give generalities on the immune system and its constituents; afterwards, we establish the analogy between the immunity tactic and the optimization of the feed parameters; finally, we enumerate the structural organization of the synthesis statement.

II. Biological immunity: the constituents and the functioning
As an introduction, we offer a primer on the basic functions of the immune system. This system is a fluid network designed to protect the body from agents of disease and to heal wounds delivered by injury or invasion (Dreher, 1995; Alcami and Koszinowski, 2000; Carayannopoulos and Yokoyama, 2004; Tortorella et al., 2000).
In order to do its job properly, the immune system must be exquisitely sensitive in detecting the surface features of other cells and substances. It must distinguish the fingerprints of intruders from those of the body's own cells. But the defense network is called upon to be as aggressive as it is sensitive: its task is to identify and then eliminate foreign agents, be they bacteria, viruses, fungi, toxic chemicals or cancer cells, with precision and dispatch (Bai and Odin, 2003). The immune system consists of different classes of soldier cells, which carry out specialized functions: cells that prompt, cells that alert, cells that facilitate, cells that activate, cells that surround, cells that kill, even cells that clean up. Many immune cells also synthesize and secrete special molecules that act as messengers, regulators or helpers in the process of defending against invaders (Bousso et al., 2000; Stewart-Jones et al., 2003).

II.1 Antigens
Antigens are the fingerprints of immunity. They are identifying molecules that reside on the surface of cells and, like fingerprints, they are unique to the cells that bear them. All of the body's cells have antigens that signal self, a message which dictates that they are part of the body and therefore not to be attacked.

Microorganisms, viruses, or any agent that invades the body also carry identifying antigens on their surfaces, which signal foreign to the immune system, readying it for an immediate attack.

II.2 Antibodies
Antibodies are the body's complement to antigens. Think of each antigen as a unique lock, and the protein molecules called antibodies as custom-made keys for every variation of lock. Antibodies are the marvels of immunity that can fit into the keyhole of any one of millions of different antigens. Each one has a unique molecular configuration, and the body can produce antibodies that latch perfectly into every conceivable antigen.

III. Immunity in optimization context
To exploit immunity in optimization problems, we establish the connection in four stages.

III.1 Fragmentation and distribution
In this first stage, we fragment, or quantify into several values, all parameters subject to the optimization process. In other words, we fragment every parameter value and then distribute the result arbitrarily over a matrix structure, with coding and addressing (or indexing) for every fragment. Figure 1 illustrates the principle. According to Figure 1, the numerical value of a parameter n is fragmented into positive and negative quantities (Qx), such that their sum gives the value of parameter n. We attribute to every quantity an arbitrary address (Ax) and an arbitrary code (Cx) on the antigens matrix Ξ. Ax is the analog of the locus, or site, of the antigen-antibody duel; the code serves as an antigenic determinant for the identification of antigens by antibodies (Djennas et al., 2009). The antigens matrix Ξ is a matrix in which every element carries not only a parameter quantity (or fragment) but also an integer code, the whole being localized by indices (i, j) on the matrix. The antigens matrix plays the role of the entire antigenic repertory presented by foreign bodies (bacteria, viruses, fungi, etc.).
The code is an integer, randomly taken in a well-determined interval. It represents a key and is therefore analogous to the antigenic receptor; its identification allows the destruction of the attributed quantity.

III.2 Identification and destruction
When the antigens matrix is loaded, another matrix is applied to it. This new matrix, qualified as the antibodies matrix (Φ), contains codes only; in the immunity context, these codes are receptors. If the site (i, j) and the code correspond on the two matrices, then the element is destroyed. This concept is illustrated in Figure 2, where the quantities corresponding to antigens C1, C2, C8 are destroyed because of:
(1) the site correspondence, i.e. i and j are the same on the two matrices; and
(2) the code correspondence, i.e. the same codes on the two matrices.
Although the codes C4 and C6 are present on both the antigens matrix Ξ and the antibodies matrix Φ, the corresponding quantities are preserved because the site correspondence is breached.
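The identification-and-destruction rule can be illustrated with a toy NumPy sketch (the 2×2 matrices and the code values below are hypothetical, chosen only to mirror the site-and-code matching described above):

```python
import numpy as np

# Hypothetical antigens matrix Xi: a code C_x and a quantity Q_x at each site (i, j)
antigen_codes = np.array([[1, 2],
                          [3, 4]])
antigen_qty = np.array([[10.0, 20.0],
                        [30.0, 40.0]])

# Hypothetical antibodies matrix Phi (receptors only). Codes 1 and 4 match at
# their sites; code 3 is present in Phi but at the wrong site, so it is spared.
antibody_codes = np.array([[1, 3],
                           [9, 4]])

# Destruction requires BOTH the site (i, j) and the code to correspond.
destroyed = antigen_codes == antibody_codes
antigen_qty[destroyed] = 0.0
```

After this step only the fragments at sites (0, 0) and (1, 1) are destroyed; the fragment carrying code 3 survives despite its code appearing in Φ, exactly the "breach of site correspondence" case.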

Figure 1. Fragmentation, addressing, and coding for parameters on the antigens matrix Ξ

The site correspondence for codes C3, C4, C6, C9, C10 exists, but at the same time the code correspondence is absent; for this reason, the quantities are preserved. The quantities identified by C5 and C7 have neither site correspondence nor code correspondence; as a result, they are conserved. The implementation of this complex mechanism ensures path diversification and consequently allows the process to escape from local optima (Djennas et al., 2009).


Figure 2. Antigens-antibodies correspondence and destruction stage

III.3 Proliferation
The following stage is the proliferation of antigens by cellular division. The preserved quantities proliferate and give rise to the same quantities, with the same codes but not the same sites. Cellular proliferation counterbalances antigen destruction; this operator saves the quantities from extinction. Figure 3 depicts the cellular division mechanism. Only the preserved quantities can be reproduced, randomly, in number and sites. In Figure 3:
. the quantity corresponding to C3 was reproduced one time on one arbitrary site;

. the quantity corresponding to C6 was reproduced two times on two arbitrary sites;
. the quantity corresponding to C9 was reproduced two times on one arbitrary double-site;
. the quantity corresponding to C10 was reproduced three times on three arbitrary sites;
. the quantity corresponding to C4 was reproduced one time on the same site as its generative (mother cell); and
. the quantities corresponding to C5 and C7 were not reproduced.

Figure 3. Antigens proliferation principle

For the proliferation operator, different combinations in numbers and sites can exist in the course of every cycle, or more precisely every defensive (Djennas et al., 2009). The number of cell divisions is randomly taken from the range [1:n], where n is the maximum number of cell divisions allowed in the course of a defensive. The (i, j) site on the matrix is also generated randomly during every defensive.

III.4 Summation and reconstitution
The final stage is the summation of the preserved quantities on the antigens matrix and then their affectation for the reconstitution of the new parameters. This mechanism is shown in Figure 4. The summation obeys the vicinity principle. That is to say, with respect to Figure 4, for parameter 1 we randomly choose a start code, let us say (C5), and then we add the close neighbor (C10); after that, we add the closest neighbor to C10 (C9), next the closest to C9 (C10), and so on for the other parameters. The number of summed codes for any parameter is randomly generated during the defensive, but with two obligations. The first is that the number of summed codes must not exceed the average number of fragments multiplied by the maximum number of cell divisions (n) in every defensive. The second is that we make sure that all codes on the antigens matrix will be summed. So, as an illustrative example in Figure 4, parameter 1 is reconstituted by a summation over four codes (C5+C6+C9+C9), while parameter n is reconstituted from two (C10+C3). For consistency, although the sum is applied to the codes, we mean the sum of the attributed quantities. The sums of quantities constitute the new parameters for the next defensive. The process can then evolve toward a second defensive after constitution of the new parameters. It comes to an end after a predefined number of defensives or when a pre-estimated precision is reached. Proliferation combined with summation ensures the diversity and dynamicity needed to circumvent the local optima problem.

Figure 4. Quantities summation and parameters reconstitution principle

IV. Creation of the two antagonistic matrices
As we have seen, the antibodies matrix (Φ) is created to destroy certain elements of the antigens matrix (Ξ). The creation of this matrix is governed by both the stochastic and the deterministic.

IV.1 Deterministic parameters
The matrices Φ and Ξ are square ones. The number of columns and of rows varies in the course of defensives. This number is constrained by the processing time, the number of parameters and the number of fragments.

The number of elements on the antibodies matrix, i.e. the really occupied entries, obeys the formula:

P = O[ ((k^(λ·dr/dM) − 1) / (k − 1)) · PM ]   (1)

O is the round operator; P the number of really occupied entries; PM the maximum number of really occupied entries; k the absorption factor, fixed at two in our survey (it absorbs linearly the number of occupied entries, so that it does not overflow); dr the distance between the optimized function and the destination function; dM the maximal distance between the optimized function and the destination function; λ the regulation coefficient, varying from 0 to 1 (it absorbs exponentially the number of occupied entries, so that it does not overflow).

The other deterministic entity for the two matrices Φ and Ξ is the constitution of the codes. These codes are taken from an interval whose range varies as follows:

Γ = ΓM (dr/dM)^η   (2)

Γ is the interval range for the codes; ΓM the maximal tolerated range for the codes interval; η the adaptive coefficient, varying from 0 to 1. This coefficient controls the interval range and therefore avoids overtaking the maximal and minimal values of the codes interval. Then we can write:

ΓC = z + Γ̄   (3)

z is a random integer; Γ̄ the integer interval taken from Γ with the round operator. Although Γ is in principle an interval with an infinity of values, we take integers only and affect these to the codes. So, we select integer codes from ΓC in a random manner.

Example. If ΓM = [2:7], dr/dM = 0.7, η = 2 and z = 10, then Γ = [0.98:3.43], Γ̄ = [1, 2, 3] and ΓC = [11, 12, 13].

The fragmentation of each parameter is different and requires its division into a number of fractions. This number, as well as the number of parameters, contributes strongly to the size of the matrices. Therefore, the minimum number of elements (all entries) for the two matrices is given by:

Pm = Σ_{i=1}^{n} Yi   (4)

Yi is the number of fragments for parameter i; n the number of optimized parameters.

If we add the proliferation operation and subtract that of destruction, without forgetting that two co-genetic cells can occupy the same site, then it is necessary to write:

Pr = (n log2(n) / 2) Pm   (5)

Pr is the number of elements for the matrices; n here the maximum number of cell divisions in the course of a defensive. The fragmentation of the parameters is done by dividing the ith parameter by a random integer (Yi), generated in the course of the resolution process. On the other hand, it is necessary to discriminate between P and Pr; by caution we must write: Pm < P < PM < Pr.
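Under one plausible reading of Equations (1)-(3), these sizing and coding rules can be sketched as follows (the exact form of Equation (1) is partly garbled in the source, so `occupied_entries` is an assumption; the worked example reproduces the ΓC = [11, 12, 13] result quoted in the text, taking dr/dM = 0.7):

```python
import math

def occupied_entries(PM, dr, dM, k=2.0, lam=1.0):
    """Assumed reading of Eq. (1): P = O[((k**(lam*dr/dM) - 1)/(k - 1)) * PM]."""
    return round((k ** (lam * dr / dM) - 1.0) / (k - 1.0) * PM)

def code_interval(GM, dr_over_dM, eta, z):
    """Eq. (2): shrink the tolerated range GM by (dr/dM)**eta.
    Eq. (3): keep the integers of the shrunk range, shifted by the random integer z."""
    lo = GM[0] * dr_over_dM ** eta
    hi = GM[1] * dr_over_dM ** eta
    return [z + g for g in range(math.ceil(lo), math.floor(hi) + 1)]

# Worked example from the text: GM = [2:7], dr/dM = 0.7, eta = 2, z = 10
codes = code_interval((2, 7), 0.7, 2, 10)   # Gamma = [0.98:3.43] -> [11, 12, 13]
```

Note how both formulas shrink toward empty as dr → 0: a nearly converged solution gets few occupied entries and a narrow code interval.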


IV.2 Stochastic distribution
In the immunity tactic, the stochastic operation relates to the distribution of the elements. It is achieved by distributing the elements randomly over the sites (i, j) of the two matrices. This random distribution simulates the contingency of the antigen-antibody connection: the contact between antigens and antibodies is purely a matter of chance. This hazardous character of the contact stimulates solution diversity and avoids any stagnation in the resolution process.

V. Basic notions
The field radiated by the Ns elements of a conformal array is given by (Botha and McNamara, 1993; Bucci et al., 1995; Djennas et al., 2009):

f(θ, φ) = Σ_{i=1}^{Ns} ai e^{j(2π/λ)(r⃗i·u⃗)} Ei(θ, φ)   (6)

where ai is the ith element feed (the parameter to optimize); λ the wavelength; r⃗i the position vector in the Cartesian system; u⃗ = u⃗o the propagation vector defined by the angles θ and φ; Ei(θ, φ) the elementary contribution in the propagation direction; Ns the number of elements. These parameters are shown in Figure 5.

Figure 5. Geometric parameters in the global Cartesian coordinates system (position vector r⃗i, propagation vector u⃗o defined by θ and φ)
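Equation (6) is straightforward to evaluate numerically. A minimal sketch (isotropic elementary patterns Ei = 1 and a simple linear element layout are our assumptions, used only for illustration):

```python
import numpy as np

def radiated_field(a, r, theta, phi, lam, E=None):
    """Eq. (6): f(theta, phi) = sum_i a_i * exp(j*2*pi/lam * r_i . u) * E_i.

    a : complex feeds, shape (Ns,)
    r : element positions in metres, shape (Ns, 3)
    E : elementary contributions, shape (Ns,); isotropic (1.0) if None
    """
    u = np.array([np.sin(theta) * np.cos(phi),   # propagation unit vector u_o
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    if E is None:
        E = np.ones(len(a))
    return np.sum(a * np.exp(1j * 2 * np.pi / lam * (r @ u)) * E)

# Broadside example: 8 elements along y, half-wavelength spacing at 5 GHz
lam = 3e8 / 5e9
r = np.zeros((8, 3)); r[:, 1] = np.arange(8) * lam / 2
f0 = radiated_field(np.ones(8), r, theta=0.0, phi=0.0, lam=lam)
```

At θ = 0 every element contributes in phase, so |f| equals the number of elements, the expected broadside array factor.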

Besides, the conformal aspect of the array complicates the problem: the elementary contribution Ei(θ, φ) differs from one element to another (Botha and McNamara, 1993; Bucci et al., 1995; Djennas et al., 2009). In other words, we must calculate the element contribution for every direction in the global Cartesian system by use of the transition matrix (TM). This matrix ensures the transition between two coordinate systems, i.e. from the element's local Cartesian system to the array's global Cartesian system.

TM. The local Cartesian coordinate system associated with the element relates to the global Cartesian coordinate system associated with the array by means of the angles θi and φi, which define the normal vector of the element. The coordinate system transition is carried out by two rotations: first around the z-axis by the angle φi, then around the y′-axis (defined by the previous rotation) by the angle θi. These two rotations generate the TM, whose expression follows from the classical operations below.

Rotation around the z-axis by the angle φi:

TM1 = [  cos φi   sin φi   0
        −sin φi   cos φi   0
         0        0        1 ]

Rotation around the y′-axis by the angle θi:

TM2 = [  cos θi   0   −sin θi
         0        1    0
         sin θi   0    cos θi ]

And:

TM = TM2 × TM1   (7)

TM = [  cos θi cos φi   cos θi sin φi   −sin θi
       −sin φi          cos φi           0
        sin θi cos φi   sin θi sin φi    cos θi ]   (8)

Then, the propagation vector in the local coordinate system is expressed as:

u⃗′ = TM · u⃗   (9)
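The two rotations and Equations (7)-(9) can be checked numerically (a short sketch; the angle values are arbitrary):

```python
import numpy as np

def transition_matrix(theta_i, phi_i):
    """TM = TM2 @ TM1, Eqs. (7)-(8): global -> local element coordinates."""
    c, s = np.cos(phi_i), np.sin(phi_i)
    TM1 = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])   # about z
    ct, st = np.cos(theta_i), np.sin(theta_i)
    TM2 = np.array([[ct, 0.0, -st], [0.0, 1.0, 0.0], [st, 0.0, ct]])  # about y'
    return TM2 @ TM1

# Eq. (9): a global propagation vector expressed in the local system
u = np.array([0.0, 0.0, 1.0])
TM = transition_matrix(np.pi / 6, np.pi / 4)
u_local = TM @ u
```

Since TM is a product of rotations it is orthogonal (TM·TMᵀ = I), and mapping the global z-direction gives (−sin θi, 0, cos θi), matching the third column of Equation (8).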

VI. Synthesis statement
To resolve the synthesis problem, we proceed by structuring it; this structure is subject to a sub-problems organization.

VI.1 Configuration data matrix (Λ matrix)
If we examine the radiation expression, we can clearly see that it can be put in the following compact form:

f = a Λ^T   (10)

Λ(k, l) = e^{j(2π/λ)(r⃗l·u⃗k)} El(θk, φk)   (11)

l is the element/Λ column index, varying from 1 to Ns; k the index of the discrete point on the sampled observation sphere/Λ row index, varying from 1 to Ne, with:

Ne = Nθ Nφ   (12)

Nθ = π/δθ + 1;  Nφ = 2π/δφ + 1   (13)


Ns is the number of elements/rows; Ne the number of observation sphere samples/columns; δθ the sampling step in elevation; δφ the sampling step in azimuth. The modular construction of the Λ matrix is illustrated in Figure 6.

VI.2 Destination pattern characterization
When we optimize, we want the obtained function to be as close as possible to the desired result. For the synthesis problem, the latter can be defined by a values function or by an intervals function. We have opted for an intervals function because it gives more degrees of freedom. The destination pattern (also called the desired function, or mask) can be imposed on the entire space, on part of the space, or in certain planes. It possesses two bounds, an upper one and a lower one, which together must trap the radiation pattern. The lower bound Ml and the upper bound Mu define the following parameters:
(1)

the main-lobe level, set at 0 dB;
(2) the maximum ripple value in decibels: MRV;
(3) the maximum side-lobe level in decibels: MSL;
(4) the major beam-width in degrees: MXW;
(5) the minor beam-width in degrees: MNW;
(6) the main-lobe scan angle in degrees: θc;
(7) the principal plane in degrees: φc; and
(8) the coverage region in degrees: CR.

The parameters are shown graphically in Figure 7 (revolution view) and Figure 8 (projected view). The mask is specified by two bounds, and therefore we can write:

Ml(θ, φ) ≤ f(θ, φ) ≤ Mu(θ, φ)   (14)

Mu(θ, φ) is the upper bound; Ml(θ, φ) the lower bound.

Figure 6. Synoptic chart for Λ matrix construction (the observation sphere sampling yields the samples file, the conformal array configuration yields the array specifications file, and both feed the Λ matrix)
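The construction of the Λ matrix from Equations (11)-(13) can be sketched as follows (isotropic elements El = 1 and a linear test array are our simplifications):

```python
import numpy as np

def build_lambda(r, d_theta, d_phi, lam):
    """Lambda(k, l) = exp(j*2*pi/lam * r_l . u_k), Eqs. (11)-(13).

    r : element positions in metres, shape (Ns, 3)
    d_theta, d_phi : sampling steps in radians (elevation, azimuth)
    """
    n_theta = round(np.pi / d_theta) + 1        # N_theta, Eq. (13)
    n_phi = round(2 * np.pi / d_phi) + 1        # N_phi,   Eq. (13)
    thetas = np.linspace(0.0, np.pi, n_theta)
    phis = np.linspace(0.0, 2 * np.pi, n_phi)
    T, P = np.meshgrid(thetas, phis, indexing="ij")
    u = np.stack([np.sin(T) * np.cos(P),        # (Ne, 3) propagation vectors
                  np.sin(T) * np.sin(P),
                  np.cos(T)], axis=-1).reshape(-1, 3)
    return np.exp(1j * 2 * np.pi / lam * (u @ r.T))   # shape (Ne, Ns)

lam = 0.06                                       # 5 GHz
r = np.zeros((8, 3)); r[:, 1] = np.arange(8) * lam / 2
L = build_lambda(r, np.deg2rad(5), np.deg2rad(5), lam)
# The pattern for a feed vector a is then simply f = L @ a, the compact Eq. (10).
```

With 5° steps, Nθ = 37 and Nφ = 73, giving Ne = 2,701 samples; each entry of Λ is a pure phase term, so the whole synthesis reduces to choosing the feed vector a.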

Figure 7. Destination pattern in revolution view (main lobe at 0 dB, ripple MRV, side-lobe level MSL, beam widths MNW and MXW, scan angle θc, principal plane φc, coverage region CR)

Figure 8. φc-projection for Figure 7: the bounds Ml and Mu delimit the permitted area for the calculated pattern (normalized decibel scale vs observation angle in elevation)

VI.3 Error criterion definition
The error criterion, in its general form, defines the distance separating the optimized function from the desired function. In the synthesis problem, it represents the distance between the calculated pattern and the mask. In our survey, we define the criterion as follows (Djennas et al., 2009):

dr = [ ( Σ_{j=1}^{Nφ} Σ_{i=1}^{Nθ} a(i, j)·(B(i, j))^{b(i, j)} ) / ( Σ_{j=1}^{Nφ} Σ_{i=1}^{Nθ} a(i, j) ) ]^{1/bM}   (15)

With:

B(i, j) = f(i, j) − Mu(i, j)   if f(i, j) > Mu(i, j)
B(i, j) = 0                    if Ml(i, j) < f(i, j) < Mu(i, j)
B(i, j) = Ml(i, j) − f(i, j)   if f(i, j) < Ml(i, j)
bM = Max[b(i, j)]   (16)

a(i, j) are the linear weight coefficients; b(i, j) the power weight coefficients; bM the maximal value of the b coefficients. Using the immunity tactic as the optimization technique allows us to minimize this error criterion. The optimal solution is the one that gives the minimum distance, i.e. minimum dr. Ideally, the solution would give a nil dr, something rarely reached in optimization problems. The error criterion is the only link between the synthesis and the immunity tactic. Its minimization allows the extraction of the optimal excitation parameters ai; these parameters, once injected into the radiation pattern expression, generate the optimal function, which is as close as possible to the mask. The resolution concept adapted for conformal array synthesis, founded on the immunity tactic, is summarized in the organization chart of Figure 9.
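The criterion of Equations (15)-(16) is easy to implement. A sketch with unit weights a(i, j) = b(i, j) = 1 assumed and a hypothetical 3×3 mask:

```python
import numpy as np

def error_criterion(f, Ml, Mu, a=None, b=None):
    """Eqs. (15)-(16): weighted distance between a pattern f and the mask [Ml, Mu].

    All arrays share the same (N_theta, N_phi) shape; unit weights by default.
    """
    a = np.ones_like(f) if a is None else a
    b = np.ones_like(f) if b is None else b
    B = np.where(f > Mu, f - Mu, np.where(f < Ml, Ml - f, 0.0))   # Eq. (16)
    bM = b.max()
    return (np.sum(a * B ** b) / np.sum(a)) ** (1.0 / bM)         # Eq. (15)

Ml = np.full((3, 3), -40.0)
Mu = np.full((3, 3), -20.0)
inside = np.full((3, 3), -30.0)   # pattern entirely inside the mask -> dr = 0
outside = inside.copy()
outside[0, 0] = -10.0             # one sample 10 dB above the upper bound
```

A pattern trapped between the bounds gives B = 0 everywhere and hence dr = 0; any excursion above Mu or below Ml contributes its overshoot, weighted by a(i, j) and raised to b(i, j).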


Figure 9. Organization chart for the conformal antennas arrays synthesis founded on the immunity tactic:
Start (g = 0): contingent generation of the excitation parameters.
Fragmentation and distribution of the parameters; coding and indexation of the cells carrying fragments; generation of the antigens matrix (Ξ) and of the antibodies matrix (Φ).
Application of Ξ and Φ; detection and destruction of the corresponding fragments; proliferation of the fragments preserved from destruction; fragments summation and reconstitution of the new parameters.
Pattern calculation and distance calculation (using the Λ matrix and the destination pattern).
If g > gmax or dr < drmin, extraction of the optimal parameters and of the optimal radiation pattern (End); otherwise g = g + 1 and the loop repeats.
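The organization chart admits the following loose, toy reading (a deliberate simplification on our part: the matrix sizing of Equations (1)-(5), the coding scheme and the vicinity summation are omitted; only the fragment/destroy/proliferate/re-sum cycle with preservation of the best solution is kept):

```python
import random

def immunity_tactic(objective, n_params, bounds, defensives=2000, frags=4, seed=7):
    """Toy sketch of one defensive cycle: fragment each parameter, destroy one
    fragment, proliferate another, re-sum, and keep only improving solutions."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = [rng.uniform(lo, hi) for _ in range(n_params)]
    best_d = objective(best)
    for _ in range(defensives):
        cand = []
        for p in best:
            cuts = [rng.uniform(-1.0, 1.0) for _ in range(frags)]
            total = sum(cuts)
            if abs(total) < 1e-9:          # degenerate split: keep parameter
                cand.append(p)
                continue
            q = [p * c / total for c in cuts]   # fragments summing to p
            q.pop(rng.randrange(len(q)))        # destruction of one fragment
            q.append(q[rng.randrange(len(q))])  # proliferation of another
            cand.append(sum(q))                 # summation / reconstitution
        d = objective(cand)
        if d < best_d:                          # preserve only the best solution
            best, best_d = cand, d
    return best, best_d

# Toy objective: distance to a hypothetical target excitation vector
target = [1.0, -2.0, 0.5]
objective = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
sol, err = immunity_tactic(objective, 3, (-5.0, 5.0))
```

The destruction/proliferation pair acts as a structured random perturbation, while the accept-if-better rule plays the role of the defensive system preserving the self.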

COMPEL 33,3

1030

VII. Results and discussions VII.1 Dihedron array As we can see on Figure 10, the first studied array is the dihedron, with 1201 vertex angle. The distance between neighboring elements is of 0.5 l, which corresponds to 5 GHz of frequency. If we refer to z-axis, then the successive distances of elements: 1 and 8, 2 and 7, 3 and 6, 4 and 5, are: 9, 6.5, 4, 1.3 cm. Destination pattern characterization The maximum ripple value: 4 dB The maximum side-lobe level: 25 dB The major beam-width: 701 The minor beam-width: 351 The coverage region: 3601 The optimization is carried out on all space in order to respect the destination pattern not only in one plane but in 3D. On this subject, in all 3D figures, we introduce the abscissa a and the ordinate b, with: a ¼ y cos (j) b ¼ y sin (j) y: elevation angle, j: azimuth angle. The obtained results through simulations based on the immunity tactic, are illustrated on Figures 11-15. Figure 16 shows time performance comparison between the new technique and two other nature-inspired algorithms, namely genetic algorithm and simulated annealing. The comparison is carried out on the basis of 20 test simulations. The decrease of the relative error that separates the calculated pattern from the destination pattern is then remarkable in course of defensives. The confrontation with masks reveals an adequate approach and testifies of the efficiency of our novel optimization technique. However, and by examination of Figure 11, we can deduce that the annulment of the relative error is not possible, thing, which remains obvious for the majority of optimization problems. On Figure 16 it is clear that the immunity tactic assume convergence within a very acceptable time. The mean processing time is of 136 sec for Immunity Tactic, 151 sec for Genetic Algorithm, and 170 sec for Simulated Annealing. VII.2 Tetrahedron array Now the second array proposed for synthesis is a tetrahedral one, with 301 vertex angle. 
As shown in Figure 17, this array possesses 16 elements at a frequency of 5 GHz. The elements are distributed over the structure in a configuration of four elements per tier, over four tiers. The distance between neighboring elements is d = 3 cm.

Figure 10. Dihedron array

Radiation synthesis using immunity tactic

Figure 11. Relative error evolution in the course of defensives

In the base-vertex direction, the elements of the successive tiers are distant from the z-axis by 4.3, 3.5, 2.7 and 1.9 cm, respectively.

Destination pattern characterization:
The maximum ripple value: 3 dB
The maximum side-lobe level: 20 dB
The major beam-width: 80°
The minor beam-width: 40°

The synthesis based on the immunity tactic has generated several results, displayed in:

Figure 18 for the evolution of the relative error as a function of the defensives number.

Figure 12. Revolution view of the synthesized radiation (θc = 0°, φc = 90°)
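The (α, β) plotting coordinates defined above, α = θ cos(φ) and β = θ sin(φ), can be computed with a small helper. This is an illustrative sketch, not the authors' code:

```python
import math

def direction_to_alpha_beta(theta_deg, phi_deg):
    """Map elevation theta and azimuth phi (in degrees) to the plotting
    coordinates alpha = theta*cos(phi), beta = theta*sin(phi)."""
    phi = math.radians(phi_deg)
    return theta_deg * math.cos(phi), theta_deg * math.sin(phi)

# A direction at theta = 90 deg in the phi = 0 plane maps to (90, 0):
print(direction_to_alpha_beta(90, 0))
```

The radiation surfaces of Figures 12 and 19 are drawn over a grid of such (α, β) points.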

Figure 13. Projected view of Figure 12, principal radiation plane: φc = 90°

Figure 14. Normalized amplitudes of the optimum weights (θc = 0°, φc = 90°)

Figure 19 for the revolution view of the synthesized radiation.
Figure 20 for the projected view of the principal radiation plane.
Figure 21 for the normalized amplitudes of the optimum weights.
Figure 22 for the phases of the optimum weights.
Figure 23 for the processing time performance.

The respect of the constraints imposed by the destination patterns appears clearly, in an average sense, in all figures. We can therefore regard our new approach as justified. Concerning time performance, we notice that the new technique is slightly faster than the two others: the average processing times are, respectively, 176, 203 and 229 s. We conclude that our calculation program, founded on the novel technique, is efficient and well structured. In addition, the decrease of the relative error is conspicuous and proves the convergent nature and the stability of the immunity tactic. The new technique is based on the balance between two opposite operators, namely proliferation and destruction; the alternation of these operators drives the search toward the solution and ensures convergence.

Figure 15. Phases of the optimum weights (θc = 0°, φc = 90°)

Figure 16. Processing time performance comparison, dihedron array case

Figure 17. Tetrahedral array

Figure 18. Relative error evolution in the course of defensives (θc = 90°, φc = 60°)

Figure 19. Revolution view of the synthesized radiation (θc = 90°, φc = 60°)

Figure 20. Projected view for Figure 18, principal radiation plane: φc = 60°

VIII. Conclusion
The perennial attacks led by the immune system against foreign organisms constituted the foundation of our new and original conception for conformal antenna array synthesis. In this context, and in contrast to biological immunity, the conception foresees the elimination of the inadequate solutions and the preservation of the best. Our original technique, the immunity tactic, is thereafter exploited in the radiation synthesis of conformal antenna arrays. Like other nature-inspired optimization methods, such as the genetic algorithm and simulated annealing, the new technique is very efficient and gives results at least as good as the others in an acceptable processing time. Our findings are very encouraging and stimulate us to pursue this work, with the aim of extracting all the advantages of the technique for optimization in general and for synthesis in particular. In view of its properties, the immunity tactic, its variants and its combinations will without doubt open several future prospects for optimization and synthesis problems, with effectiveness, correctness and accuracy.
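The proliferation/destruction balance described above can be sketched as a clonal-selection-style loop. This is a loose illustrative reconstruction, not the authors' algorithm: the population size, clone count, mutation scale and the toy error function are all assumptions.

```python
import random

def immunity_tactic(fitness, dim, pop=20, clones=5, defensives=200, seed=1):
    """Illustrative proliferation/destruction loop: good antibodies
    (candidate weight vectors) are cloned and mutated (proliferation),
    while the worst are replaced by random newcomers (destruction)."""
    rng = random.Random(seed)
    ab = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(defensives):
        ab.sort(key=fitness)                      # low fitness = small error
        best = ab[:pop // 2]
        offspring = []
        for sol in best:                          # proliferation: clone + mutate
            for _ in range(clones):
                offspring.append([w + rng.gauss(0, 0.1) for w in sol])
        offspring.sort(key=fitness)
        ab = best + offspring[:pop // 4]          # keep the best clones
        while len(ab) < pop:                      # destruction: refill randomly
            ab.append([rng.uniform(-1, 1) for _ in range(dim)])
    return min(ab, key=fitness)

# Toy error surrogate: squared distance to a target weight vector
target = [0.3, -0.5, 0.8]
err = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
sol = immunity_tactic(err, dim=3)
print(err(sol))  # small residual error
```

In the paper's setting, the fitness would be the relative error between the calculated pattern and the destination pattern masks.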

Figure 21. Normalized amplitudes of the optimum weights (θc = 90°, φc = 60°)

Figure 22. Phases of the optimum weights (θc = 90°, φc = 60°)

Figure 23. Processing time performance comparison, tetrahedron array case

Corresponding author
Sidi Ahmed Djennas can be contacted at: [email protected]


Multi-level design of an isolation transformer using collaborative optimization


Alexandru C. Berbecea and Frédéric Gillon, L2EP, Optimisation, Ecole Centrale de Lille, Lille, France, and

Pascal Brochet, Université de Technologie de Belfort-Montbéliard, Belfort, France

Abstract
Purpose – The purpose of this paper is to present an application of a multidisciplinary multi-level design optimization methodology for the optimal design of a complex device from the field of electrical engineering through discipline-based decomposition. The considered benchmark is a single-phase low-voltage safety isolation transformer.
Design/methodology/approach – The multidisciplinary optimization of a safety isolation transformer is addressed within this paper. The bi-level collaborative optimization (CO) strategy is employed to coordinate the optimization of the different disciplinary analytical models of the transformer (no-load and full-load electromagnetic models and thermal model). The results represent the joint decision of the three distinct disciplinary optimizers involved in the design process, under the coordination of the CO's master optimizer. In order to validate the proposed approach, the results are compared to those obtained using a classical single-level optimization method, sequential quadratic programming, carried out using a multidisciplinary feasible formulation for handling the evaluation of the coupling model of the transformer.
Findings – Results show a good convergence of the CO process with the analytical modeling of the transformer, with a reduced number of coordination iterations. However, a relatively important number of disciplinary model evaluations were required by the local optimizers.
Originality/value – The CO multi-level methodology represents a new approach in the field of electrical engineering. The advantage of this approach is that it integrates decisions from different teams of specialists within the optimal design process of complex systems, and all exchanges are managed within a unique coordination process.
Keywords Design process, Collaborative optimization, Complex system, Non-linear optimization, Transformer benchmark Paper type Research paper

Nomenclature

Optimization nomenclature
AAO: all-at-once
CO: collaborative optimization
FPI: fixed-point iteration
IDF: individual discipline feasible
MDA: multidisciplinary analysis
MDF: multidisciplinary feasible
MDO: multidisciplinary optimization
SA: system analyzer
SLO: single-level optimization
SO: system optimizer
SQP: sequential quadratic programming
ci: coupling variables vector, outputs of the ith discipline
gi: vector of discipline-specific inequality constraints
x: design variables vector
yi: vector of interdisciplinary coupling variables, relative to the ith discipline problem
zi, zi*: ith discipline design vector and its optimal configuration, respectively
Ji, Ji*: ith discipline current and optimal objective function values, respectively

Transformer nomenclature
EMM: electromagnetic model
EMML: full-load electromagnetic model
EMM0: no-load electromagnetic model
THM: thermal model
a, b, c, d: four dimensions of the iron part of the transformer
η: transformer efficiency
ff1, ff2: filling factors of the primary and secondary windings, respectively
I1, I10: primary and magnetizing currents, respectively
Lco, Lir: Joule and iron losses, respectively
mass: total mass of the transformer
n1: number of primary turns
S1, S2: primary and secondary windings' sections, respectively
Tco, Tir: copper and iron part temperatures, respectively
V2, ΔV2: secondary voltage and secondary voltage drop, respectively

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 1038-1050. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/COMPEL-05-2013-0170

I. Introduction
The optimal design of complex systems, such as electromagnetic devices, requires accounting for the device's different underlying physical phenomena. Multidisciplinary optimization methods are common practice for such problems in the aircraft industry (Allison et al., 2006). This paper presents the multidisciplinary optimal design of a single-phase low-voltage safety isolation transformer, introduced as a benchmark problem in Tran et al. (2007a). The transformer is represented through an analytical modeling (Tran et al., 2007b), benefiting from a fast evaluation with fair accuracy. The analytical model of the transformer is in fact a coupling between three disciplinary models: two electromagnetic models (full-load (EMML) and no-load (EMM0)) and a thermal model (THM). A recent optimization study (Ben-Ayed and Brisset, 2012) addressed the optimal design of the isolation transformer through single-level multidisciplinary optimization strategies (multidisciplinary feasible (MDF), individual discipline feasible (IDF) and all-at-once (AAO)). While single-level methods (MDF, IDF and AAO) have been widely studied and have fully proven their capabilities (Dépincé et al., 2007), multi-level multidisciplinary optimization (MDO) strategies are still in their infancy, and a great potential is expected from them. A survey of the different single- and multi-level MDO strategies can be found in Balesdent et al. (2012). The optimal design of transformers has been addressed in different studies using classical optimization approaches. In Subramanian and Bhuvaneswari (2006), an evolutionary algorithm has been employed for the optimal design of a three-phase power transformer, considering the global manufacturing cost of the transformer as the objective function. A mixed-integer non-linear programming problem has been defined in Amoiralis et al.
(2009a) for the optimal design of a distribution transformer and solved using a branch-and-bound method combined with an evolutionary algorithm, based on electromagnetic and thermal numerical models. Four different objective functions: total owning cost, material cost, total mass and total losses have been employed for the optimal design study of a single-phase distribution transformer in Olivares-Galvan et al. (2011). An optimal design study, seeking to minimize the active part cost of a distribution transformer, has been addressed in Salkoski and Chorbev (2012), based on a differential evolution algorithm and an analytical modeling. In Sergeant et al. (2005), the optimal geometry design of a transformer driven active magnetic shield for an induction heating application is addressed using a genetic algorithm with time harmonic and finite element models. Two recent surveys of the existing design and optimization studies of power transformers (Amoiralis et al.,



2009b; Khatri and Rahi, 2012) identified deterministic approaches, genetic algorithms and neural network methods as the main optimization techniques used for the optimal design of transformers. Within this study, one of the most representative multi-level MDO strategies, collaborative optimization (CO) (Braun et al., 1996), is considered for addressing the transformer optimal design (Berbecea et al., 2012). The CO strategy is a bi-level optimization strategy for complex systems which provides a high level of disciplinary autonomy and coordinates the different disciplines, while managing the satisfaction of interdisciplinary compatibility. The increased level of disciplinary autonomy allows each engineer or team of engineers to intervene within its field of expertise. Moreover, different complex analysis tools can be integrated within the optimal design process in order to perform a multidisciplinary optimization, and the CO strategy allows the use of a different well-adapted optimizer for each disciplinary problem. A number of similarities exist between CO and another multi-level MDO approach, Analytical Target Cascading (Kreuawan et al., 2009; Moussouni et al., 2009); a parallel between the two multi-level optimization strategies has been made in Roth and Kroo (2008). The CO strategy also presents some well-known issues. One of these is its non-robustness, due to instability at convergence (Balesdent et al., 2012). An improved version of CO, named enhanced collaborative optimization, which eliminates some of the numerical difficulties faced by CO, has been proposed in Roth and Kroo (2008). With the classical CO formulation, the disciplinary optimizations are nested within the CO coordination loop, thus generating a large number of disciplinary model evaluations. To reduce this number, a non-nested version of CO has been proposed in Wu et al. (2012) and tested over several classical analytical test problems; applied to a satellite MDO problem, the non-nested CO strategy showed an important reduction in computational time. The CO of an electromechanical actuator, involving decisions from both the electrical and mechanical domains, has been addressed in Ammar et al. (2005). The different disciplinary analyzers are often represented by complex numerical codes, which require an important amount of time for their evaluation; the direct integration of such complex codes within the CO process leads to a prohibitive overall optimization time. To overcome this drawback, the integration of metamodels within the different disciplinary optimizations has been proposed in the literature (Zadeh et al., 2009). The work presented in this paper starts with a brief description of the isolation transformer optimization benchmark and the analytical coupling model employed. Then, the decomposition of the coupling model of the transformer and the bi-level optimization problem formulation according to the CO strategy are presented. The optimal results of the CO run and the convergence of the different disciplinary optimizers involved in the multidisciplinary design process are then presented. The results are validated using a classical single-level optimization process (SLO) with a MDF formulation for handling the coupling model of the transformer, and the evolution of the number of model evaluations for the two approaches is discussed. Finally, some concluding remarks are given along with some future development lines.

II. Transformer optimization benchmark
Safety isolation transformer model
The device to be optimally dimensioned is a safety isolation transformer, represented through analytical modeling (Tran et al., 2007a, b). The model has been taken from an existing benchmark problem (Tran et al., 2007a); thus, the modeling of the device is not developed here, since the main purpose of this paper is to focus on the multidisciplinary optimization approach. The representation of the multidisciplinary coupling model of the transformer is presented in Figure 1. The isolation transformer is represented within this study by three distinct disciplinary models: a full-load electromagnetic model (EMML), a no-load electromagnetic model (EMM0) and a thermal model (THM). The three models are coupled through different disciplinary variables, as can be seen in Figure 1. To compute the secondary voltage drop ΔV2 and the magnetizing current I10, the EMM0 requires the values of the secondary voltage V2 and primary current I1, which are supplied by the EMML. A first loop thus exists between the EMML and EMM0, which is solved using a fixed-point iteration (FPI) method. The THM computes the winding and magnetic core temperatures Tco and Tir, respectively, based on the Joule and iron losses Lco and Lir supplied by the EMML. A second loop thus exists between the EMML and the THM, which is likewise solved using the FPI method. The design variables vector x, containing the seven design variables of the transformer, is a common input to all three disciplinary models; these design variables are related to the dimensions of the safety isolation transformer. The evaluation of the transformer's coupling model for one configuration (one design vector x) requires between 5 and 10 iterations of the FPI method to converge. The simulation is nevertheless fast, taking about 1 ms.
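The coupling loops described above can be sketched as a single FPI routine. The disciplinary models below are toy stand-ins (the benchmark's analytical expressions are given in Tran et al., 2007b); only the coupling structure is meaningful:

```python
def fpi_coupled_evaluation(x, tol=1e-9, max_iter=50):
    """Fixed-point iteration over the EMML/EMM0/THM coupling: guesses for
    the coupling variables are refined until successive values agree."""
    # Toy disciplinary models (placeholders, NOT the benchmark equations)
    def emml(x, i10, tco):                     # full-load EM model
        v2 = 24.0 - 0.5 * i10                  # secondary voltage
        i1 = 2.0 + 0.1 * tco / 100.0           # primary current
        lco, lir = 5.0 + 0.02 * tco, 3.0       # Joule and iron losses
        return v2, i1, lco, lir
    def emm0(x, v2, i1):                       # no-load EM model
        return 0.05 * v2 / 24.0, 0.08 * i1     # dv2, i10
    def thm(x, lco, lir):                      # thermal model
        return 20.0 + 8.0 * lco, 20.0 + 6.0 * lir   # tco, tir

    i10, tco = 0.0, 20.0                       # initial guesses for both loops
    for _ in range(max_iter):
        v2, i1, lco, lir = emml(x, i10, tco)
        dv2, i10_new = emm0(x, v2, i1)
        tco_new, tir = thm(x, lco, lir)
        if abs(i10_new - i10) < tol and abs(tco_new - tco) < tol:
            return dict(V2=v2, I1=i1, dV2=dv2, I10=i10_new, Tco=tco_new, Tir=tir)
        i10, tco = i10_new, tco_new
    raise RuntimeError("FPI did not converge")

out = fpi_coupled_evaluation(x=None)
print(out["Tco"], out["I10"])
```

With contractive couplings like these, the loop settles in roughly a dozen iterations, consistent with the 5-10 iterations reported for the benchmark.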


Optimization problem formulation for the transformer benchmark
The optimization benchmark problem of the low-voltage single-phase safety isolation transformer has been described in Ben-Ayed et al. (2011). It involves seven design variables: four variables defining the geometry of the transformer's iron core (a, b, c, d), two variables for the copper wire sections of the primary and secondary windings, S1 and S2, respectively, and one variable determining the number of primary turns n1. Seven geometric, electromagnetic and thermal constraints are expressed in order to ensure the feasibility of the design. The goal of the optimization problem is to find the lightest transformer configuration in terms of total mass of the device. The transformer optimization problem is expressed in (1):

Minimize over x:   f(x) = mass(x)

subject to:
   Tco - 120 ≤ 0        Tir - 100 ≤ 0
   I10/I1 - 0.1 ≤ 0     ΔV2/V2 - 0.1 ≤ 0
   ff1 - 0.5 ≤ 0        ff2 - 0.5 ≤ 0
   0.8 - η ≤ 0                                            (1)

with x = [a, b, c, d, n1, S1, S2] and
   a ∈ [3, 30] mm     b ∈ [14, 95] mm     c ∈ [6, 40] mm     d ∈ [10, 80] mm
   S1 ∈ [0.15, 19] mm²   S2 ∈ [0.15, 19] mm²   n1 ∈ [200, 1200]

Figure 1. Multidisciplinary coupling model representation of the transformer


III. Single-level optimization of the transformer using the MDF formulation
The single-level optimization of the safety isolation transformer employs the multidisciplinary feasible (MDF) formulation for the evaluation of the coupling model of the transformer. The structure of the SLO of the transformer using the MDF formulation is presented graphically in Figure 2. The multidisciplinary analysis (MDA) of the transformer is performed by the FPI method; thus, the consistency between the different disciplines is ensured by the system analyzer (SA). A system optimizer (SO) is in charge of optimizing the coupled model of the transformer; the sequential quadratic programming (SQP) algorithm is used here as the optimizer for finding the optimal solution of the benchmark problem. The sequencing of the SLO is summarized in (2):

MDF = SO[MDA] = SO[SA[EMM0 → EMML → THM]] = SQP[FPI[EMM0 → EMML → THM]]   (2)
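The nesting in (2) can be illustrated with SciPy's SLSQP (an SQP implementation) wrapped around a fixed-point MDA. The two-variable model below is a toy stand-in for the transformer's coupled model, not the benchmark itself:

```python
from scipy.optimize import minimize

def mda(x):
    """Toy multidisciplinary analysis: a fixed-point iteration on one
    coupling variable (stand-in for the EMM0 -> EMML -> THM loops)."""
    y = 0.0
    for _ in range(100):
        y_new = 0.2 * x[0] + 0.1 * y      # coupling variable fed back on itself
        if abs(y_new - y) < 1e-12:
            return y_new
        y = y_new
    return y

mass = lambda x: x[0] ** 2 + x[1] ** 2    # toy 'mass' objective
# SciPy uses g(x) >= 0, i.e. the opposite sign convention of (1)
cons = [{"type": "ineq", "fun": lambda x: mda(x) + x[1] - 1.0}]

res = minimize(mass, x0=[2.0, 2.0], method="SLSQP",
               bounds=[(0.0, 10.0), (0.0, 10.0)], constraints=cons)
print(res.x, res.fun)
```

Every objective or constraint evaluation requested by SLSQP triggers a full MDA, which is exactly the MDF property: each point seen by the optimizer is multidisciplinarily consistent.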

Using the MDF formulation, at each evaluation of the coupled model required by the optimization algorithm, the design is consistent provided the FPI within the MDA has converged. The transformer's coupling model evaluation and the optimization process are thus completely decoupled. The results of this single-level optimization are presented later and compared with the results obtained using the bi-level optimization strategy.

Figure 2. Single-level optimization of the safety isolation transformer using the MDF formulation

IV. CO of the transformer
CO (Braun et al., 1996) is a bi-level MDO strategy; its structure for the transformer problem is presented in Figure 3. Under the CO paradigm, each disciplinary model of the transformer is associated with an optimizer, and two distinct levels are identified within the CO structure: a coordination level and a disciplinary level. The coordination level optimizes the global objective while ensuring interdisciplinary compatibility; the optimization problems at the discipline level seek to attain interdisciplinary compatibility while respecting the discipline-specific constraints. All disciplinary optimizers are linked together through a coordination optimizer, and at each iteration of the coordination-level optimization, an optimization of all disciplinary models takes place. Thus, the disciplinary-level optimizations are nested within the coordination-level loop. Design consistency, from the point of view of all disciplines involved in the optimization process, is attained only at the end of the CO process, provided the multi-level optimization process has converged. In Figure 3, y represents the vector of interdisciplinary coupling variables, z the design variables of the problem, Ji* the optimal objective function value of the ith disciplinary optimization problem, ci the coupling variables output by the ith discipline, and gi the vector of discipline-specific inequality constraints. Compared to the SLO using the MDF formulation, where the optimization and the model evaluation are completely decoupled, here the consistency of the design is provided by the CO process itself. The three models and the three disciplinary optimizers are independent; hence, the computations can be executed in parallel, or on different computers running different solvers. The management is done by an extra optimizer, the coordination optimizer.


Multi-level optimization problem formulation
The sequencing of the bi-level optimization process is summarized in (3):

CO = SO[SO[EMM0] ∥ SO[EMML] ∥ SO[THM]]   (3)

Each of the optimizations is carried out using the SQP algorithm, and the exchanges are performed between the optimization processes. The optimization problem of the coordination level is expressed in (4):

O_coord:  Minimize over (y, z):   f(y, z) = mass(y, z)
with   z = [a, b, c, d, n1, S1, S2]
       y = [y1, y2, y3] = [[ΔV2, I10], [V2, I1, Lco, Lir], [Tco]]
subject to   J* = [J1*(z*, c1), J2*(z*, c2), J3*(z*, c3)] ≤ ε (componentwise)   (4)

where ε represents the predefined tolerance for the satisfaction of the consistency constraints, set here to 10^-3; z represents the vector of design variables of the transformer; z* represents the vector of optimal design variables from the discipline level; y represents the vector of disciplinary coupling variables; ci represents the vector of coupling variables from the ith discipline optimization; and Ji* represents the optimal value of the objective function of the ith discipline optimization.

Figure 3. Collaborative optimization of the safety isolation transformer


The formulation of the no-load EM discipline optimization problem is expressed in (5):

O_EMM0:  Minimize over z1:   J1(z*, c1) = ||z1 - z1*||₂² + ||y1 - c1||₂²
with   z1 = [a, b, c, d, n1, S1, S2] (the discipline's local copy of the design variables)
       c1 = [ΔV2, I10]
subject to   g1,1(z1) = ΔV2/V2 - 0.1 ≤ 0
             g1,2(z1) = I10/I1 - 0.1 ≤ 0        (5)

The goal of the no-load EM discipline optimization is to match the values of the design variables z1 and the coupling variables y1 imposed by the system-level coordination loop, whose formulation is given in (4), while respecting the constraints specific to the no-load EM discipline on the secondary voltage drop ΔV2 and the magnetizing current I10. The formulation of the full-load EM discipline optimization problem is expressed in (6):

O_EMML:  Minimize over z2:   J2(z*, c2) = ||z2 - z2*||₂² + ||y2 - c2||₂²
with   z2 = [a, b, c, d, n1, S1, S2]
       c2 = [V2, I1, Lco, Lir]
subject to   g2,1(z2) = 0.8 - η ≤ 0
             g2,2(z2) = ff1 - 0.5 ≤ 0
             g2,3(z2) = ff2 - 0.5 ≤ 0        (6)

The full-load EM discipline optimization seeks to match at best the values received from the coordination level for the design variables and the coupling variables, while respecting its specific constraint on the minimum value of the transformer efficiency. The full-load EM optimizer is also assigned the two geometric constraints of the transformer optimization problem, relative to the filling factors ff1 and ff2 of the primary and secondary windings, respectively. The formulation of the TH discipline optimization problem is expressed in (7):

O_THM:  Minimize over z3:   J3(z*, c3) = ||z3 - z3*||₂² + ||y3 - c3||₂²
with   z3 = [a, b, c, d, n1, S1, S2]
       c3 = [Tco]
subject to   g3,1(z3) = Tco - 120 ≤ 0
             g3,2(z3) = Tir - 100 ≤ 0        (7)

The thermal discipline (TH) optimizer is in charge of respecting the constraints on the maximum winding and magnetic core temperatures, Tco and Tir, respectively. At the beginning of the bi-level optimization process using the CO formulation, a feasible design is supplied as the initial start point; this requirement is due to the coordination optimizer used (SQP).
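Each disciplinary subproblem above minimizes the same kind of quadratic discrepancy between its local copies and the coordination targets. A minimal sketch of this objective, with hypothetical numbers rather than benchmark values:

```python
def discipline_objective(z_i, z_target, y_target, c_i):
    """J_i = ||z_i - z*||^2 + ||y_i - c_i||^2: squared distance between the
    discipline's local copies and the targets set by the coordination level."""
    dz = sum((a - b) ** 2 for a, b in zip(z_i, z_target))
    dy = sum((a - b) ** 2 for a, b in zip(y_target, c_i))
    return dz + dy

# Hypothetical coordination targets vs. one discipline's feasible answer
z_target = [5.0, 20.0, 10.0]            # shared design-variable targets
y_1 = [0.8, 0.05]                       # coupling targets, e.g. [dV2, I10]
J1 = discipline_objective([5.1, 20.0, 9.8], z_target, y_1, c_i=[0.82, 0.05])
print(J1)  # close to 0: near-consistent design
```

A value of Ji* below the tolerance ε signals that the ith discipline can reproduce the coordination-level targets, which is exactly the consistency constraint in (4).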

The seven constraints of the initial optimization problem (1) are thus dispatched among the three disciplinary optimization problems. Using the CO process, the unique optimization process of the whole isolation transformer is decomposed into four linked optimization processes.

V. Results
Transformer optimal results using the CO strategy
The convergence of the CO process with the analytical models of the transformer was achieved with only seven coordination-level iterations, yielding an optimal mass value of 2.36 kg, an improvement of about 18 percent over the initial (and feasible) solution. Figure 4(a) shows the evolution of the objective function (i.e. the total mass of the transformer) during the coordination optimization process. This process stops when no further improvement in the objective function can be obtained within the predefined tolerance of 10^-3 on the interdisciplinary consistency constraints. The CO convergence required a total of 128 coordination-level function evaluations, where each evaluation corresponds to a complete optimization of all three disciplines; thus, 3 x 128 = 384 disciplinary optimizations were performed with the three models. The evolution of the disciplinary objective function values for the three discipline-level optimization problems is shown in Figure 4(b). The CO process starts with very small values for the disciplinary objective functions (corresponding to the initial consistent and feasible design), and these values increase along the CO iterations until reaching the predefined consistency tolerance of 10^-3. The underlying disciplinary optimization processes were assigned a tolerance of 10^-6 for both the objective and constraint functions. The CO process stops when the change in the objective function value (i.e. mass of the transformer) is less than the pre-imposed tolerance and the interdisciplinary consistency constraints are satisfied within their tolerance.

Isolation transformer using CO 1045

Transformer optimal results using the SLO with SQP and the MDF formulation
To validate the results obtained using the CO formulation, a SLO using SQP and the MDF formulation has been launched.

[Figure 4. CO sub-systems objective functions evolution for the isolation transformer: (a) system-level objective function (mass, kg) and (b) discipline-level objective functions J1*, J2*, J3*, both against the CO system-level iteration (1-7).]

The optimization process has been supplied with

a known feasible initial starting point, the same point supplied for CO. A predefined tolerance of $10^{-6}$ has been imposed on the SQP algorithm for both the objective and constraint functions. The gradient-based SQP algorithm required a total of 1,139 evaluations of the coupling model of the transformer to converge to the global optimum of the benchmark problem, with 5-10 disciplinary model evaluations required per call by the FPI method. The SLO process starts with a feasible initial design. During the optimization process, designs presenting a lower mass value but violating the constraints are also evaluated; however, the SQP algorithm converges toward an optimal design that respects the constraints. The convergence of the SLO process with the MDF formulation is attained after 14 iterations of the SQP algorithm. The evolution of the objective function (i.e. the mass of the transformer) is represented graphically in Figure 5. The numerical values of the optimal design obtained are presented in Table I.

Single-level and bi-level optimization results comparison
The results from the bi-level CO process and the SLO using SQP and the MDF formulation are compared in Table I. An optimal mass value of 2.36 kg was obtained by the CO process. This value is slightly higher than the global optimum obtained by the SLO strategy (2.31 kg), the difference being related to the predefined tolerance for the satisfaction of the interdisciplinary consistency constraints. Moreover, the CO process represents a global optimization approach, whereas SQP is a local optimizer and is thus able to determine the optimum design with increased accuracy.

[Figure 5. Evolution of the objective function (mass, kg) during the SLO process with MDF formulation, over 14 SLO iterations.]

Table I. SLO and CO optimal results comparison for the transformer benchmark

Opt.   a     b     c     d     n1   S1     S2     Mass  Tcopper  Tiron  ΔV2/V2  I10/I1  η
proc.  (mm)  (mm)  (mm)  (mm)  (–)  (mm²)  (mm²)  (kg)  (°C)     (°C)   (–)     (–)     (–)
SLO    12.9  50.1  16.6  43.2  640  0.32   2.91   2.31  108.8    99.9   0.07    0.1     0.89
CO     16.5  53.3  18.1  31.0  728  0.31   2.51   2.36  104.9    95.9   0.08    0.08    0.89

The total number of model evaluations required by the two optimization strategies considered, SLO and CO, is given in Table II. Within the SLO process employing the MDF formulation, the evaluations of the three disciplinary models, EMM0, EMML and THM, cannot be dissociated, due to the couplings existing between them. The disciplinary convergence loops within the MDF formulation, represented graphically in Figure 2, are managed by the FPI method employed. Considering a mean of ten model evaluations (EMM0 + EMML + THM) per FPI run, a total of 3 × 10 × 122 = 3,660 discipline model evaluations is required by the SLO process. Compared to the 7,996 + 7,893 + 3,652 = 19,541 discipline evaluations required by the CO process, the SLO process requires about five times less computational effort to solve the transformer optimal design problem. The evolution of the number of model evaluations for both SLO and CO along the main optimization process iterations is presented graphically in Figure 6, from which it can be remarked that the SLO strategy requires a larger number of iterations, but far fewer model evaluations.
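The bookkeeping above can be checked in a few lines; the factor of ten FPI evaluations per coupled-model call is the mean assumed in the text:

```python
# SLO with MDF: 122 SQP evaluations of the coupled model, each requiring on
# average 10 fixed-point iterations, each over the 3 disciplinary models.
slo_evals = 3 * 10 * 122

# CO: disciplinary model evaluations accumulated over the nested optimizations.
co_evals = 7996 + 7893 + 3652

print(slo_evals, co_evals, round(co_evals / slo_evals, 1))  # 3660, 19541, about 5x
```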

VI. Conclusions
The multi-level design of an electromagnetic device, a safety isolation transformer, is addressed in this paper. The CO multi-level optimization strategy has been employed to solve the optimal design problem, based on the analytical model of the transformer. The optimization process converges in only seven iterations of the coordination process of the multi-level optimization. These results are validated using

Table II. SLO and CO process total number of model evaluations for the transformer benchmark

Opt. process   Model                      No. of evaluations
SLO            MDF (EMM0 + EMML + THM)    122
CO             EMM0                       7,996
CO             EMML                       7,893
CO             THM                        3,652

[Figure 6. Evolution of the number of model evaluations (logarithmic scale, 10^0 to 10^4) for SLO and CO along the main optimization process iterations, with curves for CO-EMM0, CO-EMML, CO-THM and SLO-MDF (EMM0 + EMML + THM).]

a classical single-level optimization approach with a MDF formulation. However, the total number of disciplinary model evaluations required by CO is relatively large compared to the SLO approach. This is mainly due to the nested character of the CO formulation employed. The CO strategy could also be employed with high-accuracy simulation models such as finite element models. However, the relatively large total number of transformer disciplinary model evaluations required by the classical CO process impedes the direct integration of 3-D FE models into the optimization process. One way to overcome this drawback is to integrate an output space mapping technique within the CO structure for solving the different disciplinary optimization problems of the multi-level structure; this idea will be addressed in a future work.

References
Allison, J., Roth, B., Kokkolaras, M., Kroo, I. and Papalambros, P.Y. (2006), "Aircraft family design using decomposition-based methods", paper presented at the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Portsmouth, VA, September 6-8, available at: http://deepblue.lib.umich.edu/bitstream/handle/2027.42/76504/AIAA2006-6950-514.pdf (accessed October 10, 2013).
Ammar, I., Gerbaud, L., Marin, P.R. and Wurtz, F. (2005), "Co-sizing of an electromechanical device by using optimisation process", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 24 No. 3, pp. 997-1012.
Amoiralis, E.I., Georgilakis, P.S., Tsili, M.A. and Kladas, A.G. (2009a), "Global transformer optimization method using evolutionary design and numerical field computation", IEEE Transactions on Magnetics, Vol. 45 No. 3, pp. 1720-1723.
Amoiralis, E.I., Tsili, M.A. and Kladas, A.G. (2009b), "Transformer design and optimization: a literature survey", IEEE Transactions on Power Delivery, Vol. 24 No. 4, pp. 1999-2024.
Balesdent, M., Bérend, N., Dépincé, P. and Chriette, A. (2012), "A survey of multidisciplinary design optimization methods in launch vehicle design", Structural and Multidisciplinary Optimization, Vol. 45 No. 5, pp. 619-642.
Ben-Ayed, R. and Brisset, S. (2012), "Multidisciplinary optimization formulations benefits on space mapping techniques", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 31 No. 3, pp. 945-957.
Ben-Ayed, R., Berbecea, A.C., Brisset, S., Gillon, F. and Brochet, P. (2011), "Comparison between efficient global optimization and output space mapping technique", International Journal of Electromagnetics and Mechanics, Vol. 37 Nos 2-3, pp. 109-120.
Berbecea, A.C., Gillon, F. and Brochet, P. (2012), "Multi-level design of an isolation transformer using collaborative optimization", 12th Optimization and Inverse Problems in Electromagnetism, OIPE 2012 Proceedings of the International Workshop, Ghent, September 19-21.
Braun, R., Gage, P., Kroo, I. and Sobieski, I. (1996), "Implementation and performance issues in collaborative optimization", Working Paper No. AIAA 2004-4047, Langley Research Center, NASA, Hampton, VA.
Dépincé, P., Guédas, B. and Picard, J. (2007), "Multidisciplinary and multiobjective optimization: comparison of several methods", paper presented at the 7th Structural and Multidisciplinary Optimization World Congress, Seoul, May 21-25, available at: http://hal.archives-ouvertes.fr/docs/00/44/96/05/PDF/Depince_Guedas_Picard_2007.pdf (accessed October 10, 2013).
Khatri, A. and Rahi, O.P. (2012), "Optimal design of transformer: a compressive bibliographical survey", International Journal of Scientific Engineering and Technology, Vol. 1 No. 2, pp. 159-167.

Kreuawan, S., Moussouni, F., Gillon, F., Brisset, S., Brochet, P. and Nicod, L. (2009), "Optimal design of a traction system using target cascading", International Journal of Applied Electromagnetics and Mechanics, Vol. 30 Nos 3-4, pp. 163-178.
Moussouni, F., Kreuawan, S., Brisset, S., Gillon, F., Brochet, P. and Nicod, L. (2009), "Multi-level design optimization using target cascading", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 28 No. 5, pp. 1162-1178.
Olivares-Galvan, J.C., Georgilakis, P.S. and Escarela-Perez, R. (2011), "Optimal design of single-phase shell-type distribution transformers based on a multiple design method validated by measurements", Electrical Engineering, Vol. 93 No. 4, pp. 237-246.
Roth, B. and Kroo, I. (2008), "Enhanced collaborative optimization: a decomposition-based method for multidisciplinary design", in ASME 2008 Proceedings of the Conference in Brooklyn, New York, NY, 2008, pp. 927-936.
Salkoski, R. and Chorbev, I. (2012), "Design optimization of distribution transformers based on differential evolution algorithms", in 4th ICT Innovations 2012 Web Proceedings of the International Conference in Ohrid, Macedonia, 2012, ISSN 1857-7288, pp. 35-44.
Sergeant, P., Dupré, L. and Melkebeek, J. (2005), "Optimizing a transformer driven active magnetic shield in induction heating", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 24 No. 4, pp. 1241-1257.
Subramanian, S. and Bhuvaneswari, R. (2006), "Improved fast evolutionary program for optimum design of power transformer", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 25 No. 4, pp. 995-1006.
Tran, T.V., Brisset, S. and Brochet, P. (2007a), "A benchmark for multi-objective, multi-level and combinatorial optimizations of a safety isolating transformer", in 16th Computation of Electromagnetic Fields, COMPUMAG 2007 Proceedings of the International Conference in Aachen, pp. 167-168, available at: http://l2ep.univ-lille1.fr/come/benchmarktransformer.htm (accessed October 10, 2013).
Tran, T.V., Brisset, S. and Brochet, P. (2007b), "Combinatorial and multi-level optimization of a safety isolating transformer", International Journal of Applied Electromagnetics and Mechanics, Vol. 26 Nos 3-4, pp. 201-208.
Wu, B., Huang, H. and Wu, W. (2012), "A nonnested collaborative optimization method for multidisciplinary design problems", in 16th IEEE Computer Supported Cooperative Work in Design, CSCWD 2012 Proceedings of the International Conference, pp. 148-152.
Zadeh, P.M., Toropov, V.V. and Wood, A.S. (2009), "Metamodel-based collaborative optimization framework", Structural and Multidisciplinary Optimization, Vol. 38 No. 2, pp. 103-115.

About the authors
Dr Alexandru C. Berbecea was born in Drobeta, Romania in 1983. He received the BEng degree in electrical engineering from both INPG, France and UPT, Romania in 2006. He obtained a PhD in electrical engineering from Ecole Centrale de Lille, France, in 2012, on the optimal design of electromagnetic devices. His main research interests include metamodel-based, multidisciplinary and multi-level optimization. He is currently working as a protection engineer for the Romanian Power Grid Company. Dr Alexandru C. Berbecea is the corresponding author and can be contacted at: [email protected]
Dr Frédéric Gillon was born in France in 1967. He obtained an engineer diploma in 1992 and a PhD in electrical engineering from the University of Lille in 1997. He has been an Assistant Professor at Ecole Centrale de Lille since 1999. His area of research is the optimal design of

electric systems. The main applications are linear motors, axial and radial synchronous motors and railway propulsion systems.
Dr Pascal Brochet received a PhD in Applied Mathematics in 1983 at USTL, then worked for seven years for an automotive equipment company as a Research Engineer in the field of computer-aided design of electrical machines. He joined Ecole Centrale de Lille in 1990, where he became a full Professor and Head of a research team on the design of electrical machines. In 2011 he joined UTBM, where he is the Director of the university. His main interests include modeling, numerical simulation, design and optimization of electrical machines.



Model-free discrete control for robot manipulators using a fuzzy estimator Mohammad Mehdi Fateh, Siamak Azargoshasb and Saeed Khorashadizadeh

Model-free discrete control

1051

Department of Electrical and Robotic Engineering, Shahrood University of Technology, Shahrood, Iran
Abstract
Purpose – The discrete control of robot manipulators with an uncertain model is the purpose of this paper.
Design/methodology/approach – The proposed control design is model-free, employing an adaptive fuzzy estimator in the controller to estimate the uncertainty as an unknown function. An adaptive mechanism is proposed in order to overcome uncertainties. Parameters of the fuzzy estimator are adapted to minimize the estimation error using a gradient descent algorithm.
Findings – The proposed model-free discrete control is robust against all uncertainties associated with the model of the robotic system, including the robot manipulator and actuators, and against external disturbances. Stability analysis verifies the proposed control approach. Simulation results show its efficiency in tracking control.
Originality/value – A novel model-free discrete control approach for electrically driven robot manipulators is proposed. An adaptive fuzzy estimator is used in the controller to overcome uncertainties. The parameters of the estimator are regulated by a gradient descent algorithm. Most gradient descent algorithms use a known cost function based on the tracking error for adaptation, whereas the proposed gradient descent algorithm uses a cost function based on the uncertainty estimation error. The uncertainty estimation error is then calculated from the joint position error and its derivative using the closed-loop system.
Keywords Modelling, Robotics, Uncertainty estimation, Adaptive fuzzy logic control, Function approximation
Paper type Research paper

1. Introduction
In the past decades, digital computers have been used intensively in control systems. The current tendency toward digital control stems from the availability of cheap digital computers and the advantages of using digital signals instead of continuous signals. Digital control systems are superior to continuous-time control systems from several points of view: for instance, digital systems are more flexible to changes, more immune to environmental noise and computationally less demanding (Ogata, 1987). Therefore, attention to the design and analysis of discrete control is required in order to employ digital computers as controllers. The on-line discrete-time control of industrial robots has already been used (Zagorianos et al., 1995). Research activities aim to improve control performance and guarantee stability in the presence of problems such as uncertainties, nonlinearities, short sampling periods and discretization issues. Discrete control of robot manipulators has attracted a great deal of research in various forms of control algorithms. The discrete repetitive linear controls, namely Q-filter, convolution, learning and basis function, were presented and compared in Kempf et al. (1993). Among them, the Q-filter algorithm as an internal control shows the fastest

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, Vol. 33 No. 3, 2014, pp. 1051-1067. © Emerald Group Publishing Limited, 0332-1649. DOI 10.1108/COMPEL-05-2013-0185

execution speed, the lowest computational complexity, and ease of design and implementation. However, tracking errors cannot converge to zero due to the nonlinearity of the robotic system. Some discrete control approaches were developed for robotic manipulators in order to overcome uncertainty and nonlinearity, such as discrete sliding-mode control (Corradini et al., 2012). A discrete decentralized time-varying nonlinear control of robot manipulators was presented using a discretized model (Tumeh, 1990). Some discrete controls, such as repetitive control (Tsai et al., 1988), model reference adaptive control (Tsai and Tomizuka, 1989), time-optimal minimum-norm control (Fateh et al., 2013), linear quadratic repetitive control (Fateh and Baluchzadeh, 2012) and optimal control (Fateh and Baluchzadeh, 2013), have used a model of the robot manipulator and compensate the model's uncertainty. A discrete learning controller for vision-guided robot trajectory imitation was proposed with no prior knowledge of the camera-robot model (Jiang et al., 2007). A stable indirect adaptive control was developed based on the discrete-time T-S fuzzy model (Ruiyun and Mietek, 2008).
Model-based control is the basis of analysis and design for advanced control approaches. The most important problem with model-based control approaches may be the mismatch between the actual model and the nominal model. Some discrete models, such as that of Neuman and Tourassis (1985), are too complex, computationally extensive and impractical for real-time control. On the other hand, simplified discrete models employed in digital control (Mareels et al., 1992) produce errors. In order to overcome model uncertainties, a discrete sliding-mode control was presented for robot manipulators (Shoja Majidabad and Shandiz, 2012). Alternatively, this paper presents a model-free control by estimating the uncertainty.
Fuzzy systems have been widely employed for function approximation based on their universal approximation property (Wang, 1997). Adaptive fuzzy approaches have shown acceptable performance in approximating and compensating uncertainty because of their high capability for adaptation (Hwang and Kim, 2006; Sun et al., 1999). An adaptive controller for nonlinear multi-input/multi-output time-delay systems was designed based on fuzzy approximation (Chen et al., 2013). An indirect adaptive fuzzy control scheme was developed for a class of nonlinear discrete-time systems (Wuxi, 2008). To achieve tracking control of a class of unknown nonlinear dynamical systems, an adaptive discrete-time fuzzy logic controller was developed (Ge et al., 2004). Alternatively, neural network control approaches were developed based on the function approximation ability of neural networks. An adaptive neural network control was introduced for a class of multi-input/multi-output nonlinear systems with unknown bounded disturbances in the discrete-time domain (Jagannathan et al., 2000). Using the abilities of fuzzy control and neural networks, a stable discrete-time adaptive tracking controller for a robotic manipulator was presented (Sun et al., 2007).
Among the algorithms used for function approximation by fuzzy systems, the gradient descent algorithm is very simple, powerful and popular. To implement this algorithm, however, the unknown function should be known at some points (Wang, 1997). To overcome uncertainty, a novel model-free discrete control approach for electrically driven robot manipulators is presented in this paper. To the best of our knowledge, this approach has not been applied to this problem before. In addition, the gradient descent algorithm presented in this paper differs from other gradient descent algorithms. The gradient descent algorithms used in control have employed a cost function based on the tracking error, whereas the proposed algorithm uses a cost

function based on the uncertainty estimation error. The tracking error is the difference between the measured output and the computed output, whereas the uncertainty estimation error is the difference between the uncertainty and its estimate. The proposed robust control is superior to conventional robust control for the following reasons. One of the most challenging problems in designing a robust controller is the uncertainty bound parameter, which must be known in advance or estimated (Qu and Dawson, 1996). The value of this parameter is crucial: overestimating it results in input saturation and higher-frequency chattering in the switching control laws, while underestimating it increases the tracking error (Fateh, 2010). The robust control method presented in this paper eliminates the need for the uncertainty bound parameter.
This paper is organized as follows: Section 2 introduces the modelling of the electrically driven robot manipulator. Section 3 presents the proposed discrete control law. Section 4 describes the discrete adaptive fuzzy method used to estimate and compensate uncertainties. Section 5 deals with stability analysis and performance evaluation. Section 6 presents simulation results and, finally, Section 7 concludes the paper.

2. Modelling
Consider an electrical robot driven by geared permanent magnet dc motors. The dynamics of the robot manipulator is expressed as (Spong et al., 2006):

$$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau \quad (1)$$

where $q \in \mathbb{R}^n$ is the vector of joint positions, $D(q) \in \mathbb{R}^{n \times n}$ the inertia matrix of the manipulator, $C(q,\dot{q})\dot{q} \in \mathbb{R}^n$ the vector of centrifugal and Coriolis torques, $G(q) \in \mathbb{R}^n$ the vector of gravitational torques and $\tau \in \mathbb{R}^n$ the vector of joint torques. We assume that the mechanical system is perfectly rigid. The electric motors provide the joint torques $\tau$ by:

$$J\ddot{\theta}_m + B\dot{\theta}_m + r\tau = \tau_m \quad (2)$$

where $\tau_m \in \mathbb{R}^n$ is the torque vector of the motors, $\theta_m \in \mathbb{R}^n$ is the position vector of the motors and $J, B, r \in \mathbb{R}^{n \times n}$ are the diagonal matrices of motor inertia, damping and reduction gear ratio, respectively. The vector of joint velocities $\dot{q}$ is obtained from the vector of motor velocities $\dot{\theta}_m$ through the gears as:

$$r\dot{\theta}_m = \dot{q} \quad (3)$$

Note that vectors and matrices are represented in bold for clarity. In order to obtain the motor voltages as the inputs of the system, consider the electrical equation of geared permanent magnet dc motors in matrix form:

$$RI_a + L\dot{I}_a + K_b r^{-1}\dot{q} = v \quad (4)$$

where $v \in \mathbb{R}^n$ is the vector of motor voltages and $I_a \in \mathbb{R}^n$ the vector of motor currents. $R, L, K_b \in \mathbb{R}^{n \times n}$ are the diagonal matrices of armature resistance, inductance and back-emf constant of the motors, respectively. The motor torque

vector $\tau_m$, the input of the dynamic Equation (2), is produced by the motor current vector:

$$K_m I_a = \tau_m \quad (5)$$

where $K_m$ is the diagonal matrix of the torque constants. A model for the electrically driven robot in state space is obtained from (1)-(5) as:

$$\dot{x} = f(x) + bv \quad (6)$$

where:

$$f(x) = \begin{bmatrix} x_2 \\ \left(Jr^{-1} + rD(x_1)\right)^{-1}\left(-(Br^{-1} + rC(x_1,x_2))x_2 - rG(x_1) + K_m x_3\right) \\ -L^{-1}\left(K_b r^{-1} x_2 + R x_3\right) \end{bmatrix}, \quad b = \begin{bmatrix} 0 \\ 0 \\ L^{-1} \end{bmatrix}, \quad x = \begin{bmatrix} q \\ \dot{q} \\ I_a \end{bmatrix} \quad (7)$$
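For a single rigid joint, (6)-(7) reduce to scalar equations. A minimal numerical sketch follows, with assumed motor and link parameters (the paper's simulation values are not reproduced here):

```python
import numpy as np

# Assumed scalar parameters for one geared PM dc motor driving a pendulum-like link.
J, B, r = 2e-4, 1e-3, 0.02        # motor inertia, damping, gear ratio
R, L, Kb, Km = 1.5, 2e-3, 0.05, 0.05
m, l, g = 1.0, 0.3, 9.81          # link mass, length, gravity

def f(x):
    """Right-hand side f(x) of (6)-(7) for n = 1: x = [q, dq, Ia]."""
    q, dq, Ia = x
    D = m * l ** 2                 # inertia "matrix" D(q), scalar here
    C = 0.0                        # no Coriolis/centrifugal term for a single joint
    G = m * g * l * np.sin(q)      # gravity torque
    ddq = (-(B / r + r * C) * dq - r * G + Km * Ia) / (J / r + r * D)
    dIa = -(Kb * dq / r + R * Ia) / L
    return np.array([dq, ddq, dIa])

def step(x, v, dt=1e-4):
    """One forward-Euler step of dx/dt = f(x) + b*v with b = [0, 0, 1/L]."""
    return x + dt * (f(x) + np.array([0.0, 0.0, v / L]))

# Apply a constant 1 V input for 0.1 s and watch the coupled state evolve.
x = np.array([0.1, 0.0, 0.0])
for _ in range(1000):
    x = step(x, v=1.0)
print(x)
```

Even in this scalar case the electrical and mechanical states are tightly coupled through the back-emf and torque constants, which is the coupling the paper's decentralized design works around.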

The state-space Equation (6) shows a highly coupled, nonlinear, large multivariable system. The complexity of the model has opened a serious challenge in the literature on robot modelling and control. Although the system dynamics (6) is complex, a novel, simple decentralized control is proposed.

3. Proposed discrete control law
The motor voltages vector $v$ is the input and the joint positions vector $q$ is the output of the robotic system (6), which is a multi-input/multi-output system. In order to design a decentralized controller for each joint of the robot, we decompose system (6) into single-input/single-output systems in which the motor voltage $v$ is the input and the joint position $q$ is the output. As a result, each joint of the robotic system is controlled by its own controller. For this purpose, we use Equation (4) to express the voltage equation for the $i$th motor as:

$$RI_a + L\dot{I}_a + K_b r^{-1}\dot{q} + \varphi = v \quad (8)$$

The variable $\varphi$ denotes the external disturbance. Equation (8) is preferred for control purposes over the complex system (6): it includes both the input $v$ and the output $\dot{q}$. One can obtain from (8) a linear discrete system using a sampling period $T$, a small positive constant. Substituting $t = kT$ for $k = 1, 2, \ldots$ provides a discrete-time model of the form:

$$RI_{a,k} + L\dot{I}_{a,k} + K_b r^{-1}\dot{q}_k + \varphi_k = v_k \quad (9)$$

where $v_k = v(kT)$, $I_{a,k} = I_a(kT)$, $\dot{q}_k = \dot{q}(kT)$ and $\varphi_k = \varphi(kT)$. The system dynamics can be rewritten as:

$$\dot{q}_k + F_k = v_k \quad (10)$$

Then one can express $F_k$ by subtracting (10) from (9):

$$F_k = RI_{a,k} + L\dot{I}_{a,k} + K_b r^{-1}\dot{q}_k - \dot{q}_k + \varphi_k \quad (11)$$

In fact, $F_k$ is referred to as the uncertainty. It includes the external disturbance $\varphi_k$ and the motor's model uncertainty, $RI_{a,k} + L\dot{I}_{a,k} + K_b r^{-1}\dot{q}_k$. Using (10), a control law is proposed as:

$$v_k = v_{max}\,\mathrm{sat}(u_k/v_{max}) \quad (12)$$

where $\mathrm{sat}(\cdot)$ denotes the saturation function defined by:

$$\mathrm{sat}(u_k/v_{max}) = \begin{cases} 1 & u_k \geq v_{max} \\ u_k/v_{max} & |u_k| < v_{max} \\ -1 & u_k \leq -v_{max} \end{cases} \quad (13)$$

where $v_{max}$ is the maximum permitted voltage of the motor and $u_k$ is described as:

$$u_k = \dot{q}_{d,k} + k_p(q_{d,k} - q_k) + \hat{F}_k \quad (14)$$

where $\hat{F}_k$ is the estimate of $F_k$ obtained by the adaptive fuzzy system presented in the next section, $q_{d,k}$ is the desired joint position, and $k_p$ is a control design parameter. We call the proposed control (12) Nonlinear Discrete Time Adaptive Fuzzy Control (NDTAFC).

4. Discrete adaptive fuzzy estimation of the uncertainty
Applying control law (12) to system (10) yields the closed-loop system:

$$\dot{q}_k + F_k = v_{max}\,\mathrm{sat}(u_k/v_{max}) \quad (15)$$

In the case of $u_k > v_{max}$, according to (13) we have:

$$\dot{q}_k + F_k = v_{max} \quad (16)$$

Therefore, the estimator $\hat{F}_k$ is not effective in the closed-loop system. One can suggest:

$$\hat{F}_k = v_{max} - \dot{q}_k \quad (17)$$

As a result:

$$\hat{F}_k = F_k \quad (18)$$

This means that the estimation error becomes zero. In the case of $u_k < -v_{max}$, according to (13) we have:

$$\dot{q}_k + F_k = -v_{max} \quad (19)$$

One can suggest:

$$\hat{F}_k = -v_{max} - \dot{q}_k \quad (20)$$

As a result:

$$\hat{F}_k = F_k \quad (21)$$

This means that the estimation error becomes zero.

In the case of $|u_k| \leq v_{max}$, according to (13), (14) and (15) we have:

$$\dot{q}_k + F_k = \dot{q}_{d,k} + k_p(q_{d,k} - q_k) + \hat{F}_k \quad (22)$$

Therefore, the closed-loop system can be written as:

$$\dot{e}_k + k_p e_k = F_k - \hat{F}_k \quad (23)$$

where $e_k$ is the tracking error expressed by:

$$e_k = q_{d,k} - q_k \quad (24)$$
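The control law (12)-(14) above can be sketched in a few lines of scalar Python. The estimate $\hat{F}_k$ is a placeholder argument here, to be supplied by the adaptive fuzzy estimator of this section; the gains are illustrative values, not the paper's.

```python
def sat(x):
    """Saturation function (13), applied to u/vmax."""
    return max(-1.0, min(1.0, x))

def control(qd_k, dqd_k, q_k, F_hat_k, kp=20.0, vmax=40.0):
    """Control law (12)-(14): v_k = vmax * sat(u_k / vmax)."""
    u_k = dqd_k + kp * (qd_k - q_k) + F_hat_k
    return vmax * sat(u_k / vmax)

# Inside the linear band |u| < vmax the voltage equals u; outside it clips at +/- vmax.
print(control(qd_k=0.5, dqd_k=0.0, q_k=0.25, F_hat_k=1.0))   # u = 6.0, inside the band
print(control(qd_k=2.0, dqd_k=0.0, q_k=0.0, F_hat_k=10.0))   # u = 50.0, clipped to 40.0
```

Note how the saturation keeps the commanded voltage within the motor's permitted range, which is exactly the property Assumption 3 of the stability analysis relies on.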

Suppose that $\hat{F}_k$ is the output of an adaptive fuzzy system with inputs $e_k$ and $\dot{e}_k$. If three fuzzy sets are assigned to each fuzzy input, the whole control space is covered by nine fuzzy rules. The linguistic fuzzy rules are proposed in the Mamdani form:

$$FR^l:\ \text{if } e_k \text{ is } A_l \text{ and } \dot{e}_k \text{ is } B_l \text{ then } \hat{F}_k \text{ is } C_l \quad (25)$$

where $FR^l$ denotes the $l$th fuzzy rule for $l = 1, \ldots, 9$. In the $l$th rule, $A_l$, $B_l$ and $C_l$ are fuzzy membership functions belonging to the fuzzy variables $e_k$, $\dot{e}_k$ and $\hat{F}_k$, respectively. Three Gaussian membership functions, named Positive (P), Zero (Z) and Negative (N), are defined for the input $e_k$ in the operating range of the manipulator, as shown in Figure 1; the same membership functions are assigned to $\dot{e}_k$. Nine symmetric Gaussian membership functions, named Positive Very High (PVH), Positive High (PH), Positive Medium (PM), Zero (Z), Negative Medium (NM), Negative High (NH) and Negative Very High (NVH), are defined for $\hat{F}_k$ in the form:

$$\mu_l(\hat{F}_k) = \exp\left(-\left(\frac{\hat{F}_k - \theta_k^l}{\sigma}\right)^2\right) \quad \text{for } l = 1, \ldots, 9 \quad (26)$$

[Figure 1. Membership functions of the input $e_k$: three Gaussian fuzzy sets N, Z and P over $e_k \in [-1, 1]$ rad.]

where $\sigma$ and $\theta_k^l$ are the design parameters; $\sigma$ is constant whereas $\theta_k^l$ is adjusted by an adaptive law. The fuzzy rules should be defined such that the tracking control system goes to the equilibrium point. One may use an expert's knowledge, trial and error, or an optimization algorithm to design the fuzzy controller. The obtained fuzzy rules are given in Table I. In this paper, $\hat{F}_k$ is adapted using a gradient descent algorithm to minimize the tracking error. Using the product inference engine, singleton fuzzifier, center-average defuzzifier and Gaussian membership functions, the fuzzy system (Wang, 1997) is of the form:

$$\hat{F}_k(e_k, \dot{e}_k) = \frac{\sum_{l=1}^{9}\theta_k^l z_k^l}{\sum_{l=1}^{9} z_k^l} \quad (27)$$

where:

$$z_k^l = \mu_{A_l}(e_k)\,\mu_{B_l}(\dot{e}_k) \quad (28)$$

where $\mu_{A_l}(e_k) \in [0,1]$ and $\mu_{B_l}(\dot{e}_k) \in [0,1]$ are the membership functions of the fuzzy sets $A_l$ and $B_l$, respectively, and $\theta_k^l$ is the center of the fuzzy set $C_l$. The objective is to design a fuzzy system $\hat{F}_k(e_k, \dot{e}_k)$ so that the estimation error:

$$E_k = \frac{1}{2}\left(F_k - \hat{F}_k\right)^2 \quad (29)$$

is minimized. The parameter adjusted online is $\theta_k^l$. The adaptation law in the gradient descent algorithm is given by (Wang, 1997):

$$\theta_{k+1}^l = \theta_k^l - \alpha\,\frac{\partial E_k}{\partial \theta_k^l} \quad (30)$$

where $\alpha$ is a positive constant that determines the speed of convergence and $\partial E_k/\partial \theta_k^l$ is calculated as:

$$\frac{\partial E_k}{\partial \theta_k^l} = -\frac{z_k^l\left(F_k - \hat{F}_k\right)}{\sum_{l=1}^{9} z_k^l} \quad (31)$$
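A minimal sketch of the estimator (27)-(28): Gaussian input memberships, product inference for the firing strengths, and the center-average output. The input-set centers and width are assumed values, since Figure 1 is only qualitative.

```python
import numpy as np

# Assumed Gaussian input sets N, Z, P with centers -1, 0, 1 and a common width of 0.5.
CENTERS = np.array([-1.0, 0.0, 1.0])
WIDTH = 0.5

def memberships(x):
    """mu_N(x), mu_Z(x), mu_P(x) for one input."""
    return np.exp(-((x - CENTERS) / WIDTH) ** 2)

def f_hat(e, de, theta):
    """Center-average fuzzy system (27)-(28): theta is the 3x3 grid of
    rule-consequent centers theta^l, rows indexed by the e-set, columns by the de-set."""
    z = np.outer(memberships(e), memberships(de))   # z^l = mu_A(e) * mu_B(de), eq. (28)
    return float(np.sum(theta * z) / np.sum(z))     # eq. (27)

theta = np.zeros((3, 3))
print(f_hat(0.2, -0.1, theta))   # all centers zero -> the estimate is 0.0
```

Because (27) is a convex combination of the centers, the output always lies between the smallest and largest $\theta_k^l$, which keeps the estimate bounded for bounded parameters.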

Note that $F_k$ is an unknown function, so it is unavailable and cannot be used in the adaptation law. To solve this problem, this paper proposes a novel technique: substituting (23) into (31) gives:

$$\frac{\partial E_k}{\partial \theta_k^l} = -\frac{z_k^l\left(\dot{e}_k + k_p e_k\right)}{\sum_{l=1}^{9} z_k^l} \quad (32)$$

Table I. Fuzzy rules

            $\dot{e}_k$ = N   $\dot{e}_k$ = Z   $\dot{e}_k$ = P
$e_k$ = P        Z               PM              PVH
$e_k$ = Z        NM              Z               PM
$e_k$ = N        NVH             NM              Z

The proposed model (10) is purposeful. The technique used to calculate $\partial E_k/\partial \theta_k^l$ in (32) implies that the gradient descent algorithm does not require input-output data for the unknown function; instead, it uses the tracking error and its derivative. Substituting (32) into (30) yields the adaptive rule:

$$\theta_{k+1}^l = \theta_k^l + \alpha\,\frac{z_k^l\left(\dot{e}_k + k_p e_k\right)}{\sum_{l=1}^{9} z_k^l} \quad (33)$$
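The update (33) can be sketched on top of the estimator of (27)-(28). In the real closed loop, the signal $\dot{e}_k + k_p e_k$ stands in for the unmeasurable $F_k - \hat{F}_k$ per (23); the synthetic check below computes that error directly, with membership details and gains assumed as before.

```python
import numpy as np

CENTERS = np.array([-1.0, 0.0, 1.0])   # assumed input-set centers (Figure 1 is qualitative)
WIDTH = 0.5

def z_grid(e, de):
    """Rule firing strengths z^l = mu_A(e) * mu_B(de), eq. (28)."""
    mu = lambda x: np.exp(-((x - CENTERS) / WIDTH) ** 2)
    return np.outer(mu(e), mu(de))

def f_hat(e, de, theta):
    """Center-average output (27)."""
    z = z_grid(e, de)
    return float(np.sum(theta * z) / np.sum(z))

def adapt(theta, e, de, err, alpha=0.5):
    """Update (33); err is the signal (de + kp*e), which by (23) equals F - F_hat."""
    z = z_grid(e, de)
    return theta + alpha * z * err / np.sum(z)

# Synthetic check: with a constant true uncertainty F, repeatedly applying (33) at a fixed
# operating point drives F_hat toward F. Here err is computed directly, since by (23) it is
# exactly what de + kp*e would measure in the closed loop.
F_true = 2.0
theta = np.zeros((3, 3))
e, de = 0.05, 0.0
for _ in range(200):
    err = F_true - f_hat(e, de, theta)
    theta = adapt(theta, e, de, err)
estimate = f_hat(e, de, theta)
print(round(estimate, 4))   # converges toward F_true
```

Only the rules that actually fire (large $z_k^l$) receive a significant correction, so the adaptation is local in the $(e_k, \dot{e}_k)$ space.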

Most gradient descent algorithms use a known cost function based on the tracking error for adaptation, whereas the proposed algorithm uses a cost function based on the uncertainty estimation error. In gradient descent algorithms, the tracking error is the difference between the measured output and the computed output. In contrast, this paper defines a cost function based on the uncertainty estimation error, $E_k = 0.5(F_k - \hat{F}_k)^2$ in (29), where $F_k$ is the lumped uncertainty and $\hat{F}_k$ its estimate. As a result of minimizing this cost function, the uncertainty estimation error becomes negligible. However, the lumped uncertainty is unknown and cannot be measured, whereas the output can be measured in other gradient descent algorithms. To solve this problem, this paper expresses the uncertainty estimation error $F_k - \hat{F}_k$ as $\dot{e}_k + k_p e_k$ in (23) using the closed-loop system, in which both the joint position tracking error $e_k$ and its derivative $\dot{e}_k$ can be measured.

5. Stability analysis
Theorem: Discrete control law (12) with the adaptive fuzzy uncertainty estimator (27), applied to the robotic system (6), guarantees the stability of the control system in the sense that all system states are bounded.
Proof: In order to analyze stability, the following assumptions are made:
Assumption 1. The desired trajectory $q_d$ must be smooth in the sense that $q_d$ and its derivatives up to a necessary order are available and all uniformly bounded (Qu and Dawson, 1996).
As a necessary condition for designing a robust control, the external disturbance must be bounded. Thus, the following assumption is made:
Assumption 2. The external disturbance $\varphi$ is bounded as $|\varphi(t)| \leq \varphi_{max}$.
Control law (12) makes the following assumption:
Assumption 3. The motor voltage is bounded as $|v| \leq v_{max}$.
Assumption 4. The motor is sufficiently strong to drive the robot to track the desired joint velocity under the maximum permitted voltages.
It is also assumed that:

Assumption 5. The robotic manipulator is rigid.

The assumption of perfect rigidity does not hold when using a gear box; flexibility is inherent in reduction gears. In the case of the small joint flexibility found in industrial robots, the joint flexibility can be treated as unmodeled dynamics within the lumped uncertainty used in this paper.

The closed-loop system (15) implies that the voltage of each motor, v_k, is bounded, since v_k = v_max sat(u_k/v_max) and −v_max ≤ v_max sat(u_k/v_max) ≤ v_max. According to a proof given by Fateh (2012), in an electrically driven robot with bounded motor voltage, the motor velocity q̇, the motor current I_a and its derivative İ_a are bounded. The boundedness of q is proven as follows. Considering the control law (13), the boundedness of q must be examined in the three regions u_k > v_max, |u_k| ≤ v_max and u_k < −v_max.

In the case of |u_k| ≤ v_max, the closed-loop system is given by (23), which can be represented as:

ė_k + k_p e_k = w    (34)

where w = F_k − F̂_k. The linear first-order differential equation (34) with k_p > 0 is a stable system by the Routh-Hurwitz criterion, so the output e_k is bounded if the input w is bounded. The gradient descent algorithm reduces the error expressed by E_k = (1/2)(F_k − F̂_k)² in (29); therefore, F_k − F̂_k is bounded. With w = F_k − F̂_k, the boundedness of e_k is proven. From (24), we have q_k = q_{d,k} − e_k. By Assumption 1, q_{d,k} is bounded; thus, the boundedness of q_k is proven.

In the case of u_k > v_max, substituting (14) into u_k > v_max yields:

q̇_{d,k} + k_p(q_{d,k} − q_k) + F̂_k > v_max    (35)

Substituting (17) into (35) gives:

q̇_k + k_p q_k < q̇_{d,k} + k_p q_{d,k}    (36)

Since q̇_{d,k} + k_p q_{d,k} ≤ |q̇_{d,k} + k_p q_{d,k}|, one can conclude:

q̇_k + k_p q_k < |q̇_{d,k} + k_p q_{d,k}|    (37)
In the case of u_k < −v_max, substituting (14) into u_k < −v_max yields:

q̇_{d,k} + k_p(q_{d,k} − q_k) + F̂_k < −v_max    (38)

Substituting (20) into (38) gives:

q̇_{d,k} + k_p q_{d,k} < q̇_k + k_p q_k    (39)

Since −|q̇_{d,k} + k_p q_{d,k}| ≤ q̇_{d,k} + k_p q_{d,k}, one can conclude:

−|q̇_{d,k} + k_p q_{d,k}| < q̇_k + k_p q_k    (40)

From (37) and (40), we have:

|q̇_k + k_p q_k| < |q̇_{d,k} + k_p q_{d,k}|    (41)
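Inequality (41), like equation (34), states that q_k obeys a stable first-order equation driven by a bounded input. A quick numerical check of this bounded-input/bounded-output property, using an Euler discretisation with illustrative values (k_p = 20 and a unit-amplitude input are assumptions of this demo, not values from the paper):

```python
import math

kp, T = 20.0, 1e-3          # assumed gain and sample time for the demo
q, peak = 0.0, 0.0
for k in range(20000):      # 20 s of simulated time
    w = math.sin(3 * k * T)             # bounded input, |w| <= 1
    q += T * (-kp * q + w)              # Euler step of dq/dt + kp*q = w
    peak = max(peak, abs(q))
```

The state never exceeds roughly the steady-state bound 1/k_p = 0.05, as the stability argument predicts.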

Let us define:

q̇_k + k_p q_k = w    (42)

where w is bounded as:

|w| < |q̇_{d,k} + k_p q_{d,k}|    (43)

and |q̇_{d,k} + k_p q_{d,k}| is a bounded value by Assumption 1. System (42) is a linear stable system subject to a bounded input; thus, its output q_k is bounded. For every motor, the joint position q, the velocity q̇ and the motor current I_a are bounded. Therefore, the boundedness of the states q, q̇ and I_a is proven. ∎

6. Simulation results
The NDTAFC in (12) is simulated using an articulated robot driven by permanent magnet dc motors, with details given by Fateh and Khorashadizadeh (2012). The maximum voltage of each motor is set to v_max = 40 V. The same desired trajectory is given to all three joints, as shown in Figure 2; it starts from zero and reaches 2 rad in 10 s. The desired position for every joint is given by:

q_{d,k} = 1 − cos(πkT/10) for 0 ≤ kT < 10    (45)
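The trajectory (45) can be checked numerically. The sketch below verifies that it starts at 0, ends at 2 rad at t = 10 s, and has a bounded derivative, as required by Assumption 1:

```python
import math

def q_d(t):
    # Desired joint position (45): starts at 0, reaches 2 rad at t = 10 s
    return 1.0 - math.cos(math.pi * t / 10.0)

def q_d_dot(t):
    # Derivative, bounded by pi/10 (about 0.314 rad/s), so the trajectory is smooth
    return (math.pi / 10.0) * math.sin(math.pi * t / 10.0)

endpoints = (q_d(0.0), q_d(10.0))   # (0.0, 2.0)
max_rate = max(abs(q_d_dot(0.01 * i)) for i in range(1001))
```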
The control tracking performance depends on how fast the reference trajectory is; however, the presented trajectory is typical of the position-control tasks performed by industrial robots. In addition, the desired trajectory is smooth, so its derivatives up to the required order are available, as stated in Assumption 1.

[Figure 2. The desired trajectory: q_d (rad) rises smoothly from 0 to 2 rad over 10 s.]

Simulation 1. The adaptation rule (33) is set with θ_k^l(0) = 0, α = 15 and k_p = 20. The external disturbance is set to zero. The tracking performance of the NDTAFC is shown in Figures 3 and 4; the system outputs together with the desired trajectories are shown in Figure 4.

[Figure 3. Performance of the NDTAFC: tracking errors e_k1, e_k2, e_k3 (rad, of order 10⁻⁵) versus time.]

[Figure 4. Tracking performance in the NDTAFC: joint positions q1, q2, q3 together with q_d versus time.]

The notations e_k1, e_k2 and e_k3 are the tracking errors for
the joint 1, joint 2 and joint 3, respectively. The maximum error of 7.56 × 10⁻⁵ rad occurs in joint 2, which is negligible. The adaptation of the parameters is shown in Figure 5. It can be seen that the parameter vector does not converge to a constant value. Under certain necessary and sufficient conditions, the parameter vector converges to its correct value in adaptive control (Boyd and Sastry, 1986): if the reference signal contains enough frequencies, the parameter vector converges to its correct value.
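This lack of parameter convergence can be seen directly from the estimator structure: at a single operating point, many different parameter vectors produce the same estimator output, so nothing forces the parameters to unique values without a sufficiently rich reference. A small sketch (the membership centres and width are hypothetical):

```python
import math

centers = [-2.0 + 0.5 * i for i in range(9)]              # hypothetical rule centres
z = [math.exp(-((0.2 - c) / 0.5) ** 2) for c in centers]  # memberships at one point

def f_hat(theta):
    # Fuzzy estimate: weighted average of the consequent parameters
    return sum(t * zl for t, zl in zip(theta, z)) / sum(z)

theta_a = [1.0] * 9
theta_b = theta_a[:]          # perturb along a direction orthogonal to z:
theta_b[0] += z[1]            # sum of (perturbation * z) is exactly zero,
theta_b[1] -= z[0]            # so the estimator output is unchanged
diff = abs(f_hat(theta_a) - f_hat(theta_b))
```

Both parameter vectors yield the same estimate at this operating point, so the gradient update has no reason to prefer one over the other.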

[Figure 5. Adaptation of parameters p1–p9 versus time.]
In this study, however, the desired trajectory shown in Figure 2 does not contain enough frequencies. The lumped uncertainty described by (11) is shown in Figure 6, and it is well estimated, as shown in Figure 7. The maximum estimation error is 2.67 × 10⁻³ V at the start; afterwards, the estimation error stays below 9 × 10⁻⁴ V. The motors behave well under the permitted voltages, as shown in Figure 8.

[Figure 6. Uncertainty without sudden changes: lumped uncertainty (V) for links 1–3 versus time.]

[Figure 7. Performance of uncertainty estimation expressed by F_k − F̂_k (V) for links 1–3 versus time.]

[Figure 8. Control efforts in the NDTAFC: motor voltages V1, V2, V3 versus time.]

Simulation 2. The adaptation rule (33) is set with θ_k^l(0) = 0, α = 7 and k_p = 7. The external disturbance is inserted at the input of each motor as a periodic pulse function
with a period of 1 s, an amplitude of 2 V, a time delay of 0.7 s and a pulse width of 30 percent of the period. The effects of the disturbances on the tracking errors are shown in Figure 9. The motor voltages remain well under the maximum permitted value of 40 V, as shown in Figure 10.

Simulation 3. The adaptation rule (33) is set with θ_k^l(0) = 0, α = 7 and k_p = 7. The effect of initial error is studied by giving each joint an initial error of 0.1 rad. The inserted disturbance is removed in order to focus on the effect of the initial error.

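A disturbance of the kind described above (period 1 s, amplitude 2 V, delay 0.7 s, width 30 percent of the period) can be generated as follows; this is a sketch, and the function and parameter names are illustrative:

```python
def pulse_disturbance(t, period=1.0, amplitude=2.0, delay=0.7, width=0.3):
    # Periodic pulse: 2 V amplitude, 1 s period, delayed by 0.7 s,
    # active for 30 percent of each period
    phase = (t - delay) % period
    return amplitude if phase < width * period else 0.0

on = pulse_disturbance(0.75)    # inside a pulse
off = pulse_disturbance(0.5)    # between pulses
```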
[Figure 9. Performance of the NDTAFC in disturbance rejection: tracking errors e_k1, e_k2, e_k3 (rad) versus time.]

[Figure 10. Control efforts of the NDTAFC for disturbance rejection: motor voltages V1, V2, V3 versus time.]
The control system starts from a point far from the equilibrium; nevertheless, it converges to the equilibrium point. The tracking errors decrease from 0.1 rad to about 2.73 × 10⁻⁴ rad, as shown in Figure 11, and the responses reduce the tracking errors smoothly. This simulation demonstrates the robustness of the system, in terms of both stability and tracking performance, in driving the tracking system to the equilibrium. The control efforts show jumps at the start in order to overcome the nonzero initial errors, but they remain well under the maximum permitted voltages, as shown in Figure 12.

[Figure 11. Tracking performance under a large initial error in the NDTAFC: tracking errors e_k1, e_k2, e_k3 (rad) versus time.]

[Figure 12. Control effort under a large initial error in the NDTAFC: motor voltages V1, V2, V3 versus time.]
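The behaviour reported in these simulations can be reproduced in miniature. The sketch below closes the loop for a single simplified joint, using a first-order voltage-driven model q̇ = v − F rather than the paper's full robot dynamics, a saturated control law of the form v = sat(q̇_d + k_p e + F̂), and a one-parameter uncertainty estimate updated by the gradient-descent rule; all numerical values are illustrative:

```python
import math

kp, alpha, T, v_max = 20.0, 0.02, 1e-3, 40.0   # illustrative settings
q, theta = 0.0, 0.0                            # joint state, estimator parameter
peak_v, e = 0.0, 0.0
for k in range(10000):                         # 10 s run
    t = k * T
    qd = 1.0 - math.cos(math.pi * t / 10.0)    # desired trajectory (45)
    qd_dot = (math.pi / 10.0) * math.sin(math.pi * t / 10.0)
    e = qd - q                                 # tracking error
    u = qd_dot + kp * e + theta                # unsaturated control
    v = max(-v_max, min(v_max, u))             # voltage saturation
    F = 5.0 + 2.0 * math.sin(t)                # assumed lumped uncertainty
    q_dot = v - F                              # simplified joint model
    e_dot = qd_dot - q_dot
    theta += alpha * (e_dot + kp * e)          # gradient estimate: e_dot + kp*e = F - theta
    q += T * q_dot                             # Euler integration
    peak_v = max(peak_v, abs(v))
```

As in the reported results, the estimate tracks the uncertainty, the tracking error settles to a small value, and the voltage stays within its limit.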
7. Conclusion
This paper has developed a model-free discrete-time control of robot manipulators using the voltage control strategy. The adaptive fuzzy system estimates the lumped uncertainty very well. The proposed gradient descent algorithm minimizes the estimation error without using any measurement of the lumped uncertainty; instead, it uses the tracking error and its derivative, which are obtained from the control system. The gradient descent algorithm performs adaptively to reduce the tracking error in the presence of uncertainties. The proposed NDTAFC shows very good performance: the tracking error and its derivative are ultimately bounded to small values. The stability of the control system has been proven, and simulation results have shown the effectiveness of the method. The control approach is robust, with very good tracking performance.

References
Boyd, S. and Sastry, S. (1986), "Necessary and sufficient conditions for parameter convergence in adaptive control", Automatica, Vol. 22 No. 6, pp. 629-639.
Chen, B., Liu, X., Liu, K. and Lin, C. (2013), "Adaptive control for nonlinear MIMO time-delay systems based on fuzzy approximation", Information Sciences, Vol. 222 No. 10, pp. 576-592.
Corradini, M.L., Fossi, V., Giantomassi, A., Ippoliti, G., Longhi, S. and Orlando, G. (2012), "Discrete time sliding mode control of robotic manipulators: development and experimental validation", Control Engineering Practice, Vol. 20 No. 8, pp. 816-822.
Fateh, M.M. (2010), "Proper uncertainty bound parameter to robust control of electrical manipulators using nominal model", Nonlinear Dynamics, Vol. 61 No. 4, pp. 655-666.
Fateh, M.M. (2012), "Robust control of flexible-joint robots using voltage control strategy", Nonlinear Dynamics, Vol. 67 No. 2, pp. 1525-1537.
Fateh, M.M. and Baluchzadeh, M. (2012), "Modeling and robust discrete LQ repetitive control of electrically driven robots", International Journal of Automation and Computing (accepted 7 December 2012).
Fateh, M.M. and Baluchzadeh, M. (2013), "Discrete optimal control for robot manipulators", COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering.
Fateh, M.M. and Khorashadizadeh, S. (2012), "Robust control of electrically driven robots by adaptive fuzzy estimation of uncertainty", Nonlinear Dynamics, Vol. 69 No. 3, pp. 1465-1477.
Fateh, M.M., Ahsani Tehrani, H. and Karbassi, S.M. (2013), "Repetitive control of electrically driven robot manipulators", International Journal of Systems Science, Vol. 44 No. 4, pp. 775-785.
Ge, S.S., Zhang, J. and Lee, T.H. (2004), "Adaptive neural network control for a class of MIMO nonlinear systems with disturbances in discrete-time", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34 No. 4, pp. 1630-1645.
Hwang, J.P. and Kim, E. (2006), "Robust tracking control of an electrically driven robot: adaptive fuzzy logic approach", IEEE Transactions on Fuzzy Systems, Vol. 14 No. 2, pp. 232-247.
Jagannathan, S., Vandegrift, M.W. and Lewis, F.L. (2000), "Adaptive fuzzy logic control of discrete-time dynamical systems", Automatica, Vol. 36, pp. 229-241.
Jiang, P., Bamforth, L.C.A., Feng, Z., Baruch, J.E.F. and Chen, Y.Q. (2007), "Indirect iterative learning control for a discrete visual servo without a camera-robot model", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 37 No. 4, pp. 863-876.
Kempf, C., Messner, W., Tomizuka, M. and Horowitz, R. (1993), "Comparison of four discrete-time repetitive control algorithms", IEEE Control Systems Magazine, Vol. 13 No. 6, pp. 48-54.
Mareels, I.M.Y., Penfold, H.B. and Evans, R.J. (1992), "Controlling nonlinear time-varying systems via Euler approximations", Automatica, Vol. 28 No. 4, pp. 681-696.
Neuman, C.P. and Tourassis, V.D. (1985), "Discrete dynamic robot models", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 15 No. 2, pp. 193-204.
Ogata, K. (1987), Discrete-Time Control Systems, Prentice-Hall, Englewood Cliffs, NJ.

Qu, Z. and Dawson, D.M. (1996), Robust Tracking Control of Robot Manipulators, IEEE Press, New York, NY.
Ruiyun, Q. and Mietek, A.B. (2008), "Stable indirect adaptive control based on discrete-time T-S fuzzy model", Fuzzy Sets and Systems, Vol. 159 No. 8, pp. 900-925.
Shoja Majidabad, S. and Shandiz, H.T. (2012), "Discrete-time based sliding-mode control of robot manipulators", International Journal of Intelligent Computing and Cybernetics, Vol. 5 No. 3, pp. 340-358.
Spong, M.W., Hutchinson, S. and Vidyasagar, M. (2006), Robot Modelling and Control, Wiley, Hoboken, NJ.
Sun, F., Li, L., Li, H.-X. and Liu, H. (2007), "Neuro-fuzzy dynamic-inversion-based adaptive control for robotic manipulators: discrete time case", IEEE Transactions on Industrial Electronics, Vol. 54 No. 3, pp. 1342-1351.
Sun, F.C., Sun, Z.Q. and Feng, G. (1999), "An adaptive fuzzy controller based on sliding mode for robot manipulators", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29 No. 5, pp. 661-667.
Tsai, M.C. and Tomizuka, M. (1989), "Model reference adaptive control and repetitive control for robot manipulators", IEEE International Conference on Robotics and Automation, Vol. 3, pp. 1650-1655.
Tsai, M.C., Anwar, G. and Tomizuka, M. (1988), "Discrete time repetitive control for robot manipulators", IEEE International Conference on Robotics and Automation, Vol. 3, pp. 1341-1346.
Tumeh, Z.S. (1990), "Discrete decentralized time-varying nonlinear control of robot manipulators", Proceedings of the 29th IEEE Conference on Decision and Control, Honolulu, HI, Vol. 3, pp. 1978-1979.
Wang, L.X. (1997), A Course in Fuzzy Systems and Control, Prentice-Hall, Upper Saddle River, NJ.
Wuxi, S. (2008), "Indirect adaptive fuzzy control for a class of nonlinear discrete-time systems", Journal of Systems Engineering and Electronics, Vol. 19 No. 6, pp. 1203-1207.
Zagorianos, A., Tzafestas, S.G. and Stavrakakis, G.S. (1995), "Online discrete-time control of industrial robots", Robotics and Autonomous Systems, Vol. 14, pp. 289-299.

About the authors
Dr Mohammad Mehdi Fateh received his BSc Degree from the Isfahan University of Technology in 1988 and his MSc Degree in Electrical Engineering from Tarbiat Modares University, Iran, in 1991. He received his PhD Degree in robotic engineering from the University of Southampton, UK, in 2001. He is a full Professor with the Department of Electrical and Robotic Engineering at the Shahrood University of Technology, Iran. His research interests include robust nonlinear control, fuzzy control, robotics and intelligent systems, mechatronics and automation. Dr Mohammad Mehdi Fateh is the corresponding author and can be contacted at: [email protected]
Siamak Azargoshasb received his BSc Degree in Electrical Engineering from the Islamic Azad University of Najafabad, Iran, in 2008 and his MSc Degree from the Shahrood University of Technology, Iran, in 2010. He is now a PhD student in control engineering at the Shahrood University of Technology. His research interests include robotics and control.
Saeed Khorashadizadeh received his BSc Degree in Electrical Engineering from the Ferdowsi University of Mashhad, Iran, in 2009 and his MSc Degree from Shahrood University, Iran, in 2011. He is now a PhD student in control engineering at the Shahrood University of Technology. His research interests include robotics and control.
